A Review of Localization and Tracking Algorithms in Wireless Sensor Networks
--- paper_title: Localization algorithms of Wireless Sensor Networks: a survey paper_content: In Wireless Sensor Networks (WSNs), localization is one of the most important technologies since it plays a critical role in many applications, e.g., target tracking. If the users cannot obtain the accurate location information, the related applications cannot be accomplished. The main idea in most localization methods is that some deployed nodes (landmarks) with known coordinates (e.g., GPS-equipped nodes) transmit beacons with their coordinates in order to help other nodes localize themselves. In general, the main localization algorithms are classified into two categories: range-based and range-free. In this paper, we reclassify the localization algorithms with a new perspective based on the mobility state of landmarks and unknown nodes, and present a detailed analysis of the representative localization algorithms. Moreover, we compare the existing localization algorithms and analyze the future research directions for the localization algorithms in WSNs. --- paper_title: Cognitive radio, software defined radio, and adaptiv wireless systems paper_content: Preface. Chapter 1: Introducing Adaptive, Aware, and Cognitive Radios Bruce Fette. Chapter 2: Cognitive Networks Ryan W. Thomas, Daniel H. Friend, Luiz A. DaSilva, Allen B. MacKenzie. Chapter 3: Cognitive Radio Architecture Joseph Mitola III. Chapter 4: Software Defined Radio Architectures for Cognitive radios H. Arslan, H. celebi. Chapter 5: Value-Creation and Migration in Adaptive and Cognitive Radio Systems Keith E. Nolan, Francis J. Mullany, Eamonn Ambrose, Linda E. Doyle. Chapter 6: Codes and Games for Dynamic Spectrum Access Yiping Xing, Harikeshwar Kushwaha, K.P. Subbalakshmi, R. Chandramouli. Chapter 7: Efficiency and Coexistence Strategies for Cognitive Radio Sai Shankar N. Chapter 8: Enabling Cognitive Radio Through Sensing, Awareness, and Measurements H. Arslan, S. yarkan. Chapter 9: Spectrum Sensing for Cognitive Radio Applications H. Arslan, T. Yucek. Chapter 10: Location Information Management Systems for Cognitive Wireless Networks H. Arslan, H. Celebi. Chapter 11: OFDM for Cognitive Radio: Merits and Challenges H. Arslan, H. A. Mahmoud, T.Yucek. Chapter 12: UWB Cognitive Radio H. Arslan, M.E. Sahin. Chapter 13: Applications of Cognitive radio H. Arslan, S. Ahmed. Chapter 14: Cross-layer Adaptation and Optimization for Cognitive Radio H. Arslan, S. Yarkan. Index. --- paper_title: An Efficient Compartmental Model for Real-Time Node Tracking Over Cognitive Wireless Sensor Networks paper_content: In this paper, an efficient compartmental model for real-time node tracking over cognitive wireless sensor networks is proposed. The compartmental model is developed in a multi-sensor fusion framework with cognitive bandwidth utilization. The multi-sensor data attenuation model using radio, acoustic, and visible light signal is first derived using a sum of exponentials model. A compartmental model that selectively combines the multi-sensor data is then developed. The selection of individual sensor data is based on the criterion of bandwidth utilization. The parameters of the compartmental model are computed using the modified Prony estimator, which results in high tracking accuracies. Additional advantages of the proposed method include lower computational complexity and asymptotic distribution of the estimator. 
Cramer-Rao bound and elliptical error probability analysis are also discussed to highlight the advantages of the compartmental model. Experimental results for real-time node tracking in indoor environment indicate a significant improvement in tracking performance when compared to state-of-the-art methods in literature. --- paper_title: Localization of Wireless Sensor Networks in the Wild: Pursuit of Ranging Quality paper_content: Localization is a fundamental issue of wireless sensor networks that has been extensively studied in the literature. Our real-world experience from GreenOrbs, a sensor network system deployed in a forest, shows that localization in the wild remains very challenging due to various interfering factors. In this paper, we propose CDL, a Combined and Differentiated Localization approach for localization that exploits the strength of range-free approaches and range-based approaches using received signal strength indicator (RSSI). A critical observation is that ranging quality greatly impacts the overall localization accuracy. To achieve a better ranging quality, our method CDL incorporates virtual-hop localization, local filtration, and ranging-quality aware calibration. We have implemented and evaluated CDL by extensive real-world experiments in GreenOrbs and large-scale simulations. Our experimental and simulation results demonstrate that CDL outperforms current state-of-art localization approaches with a more accurate and consistent performance. For example, the average location error using CDL in GreenOrbs system is 2.9 m, while the previous best method SISR has an average error of 4.6 m. --- paper_title: Localization system for wireless networks paper_content: Positioning in wireless environments is crucial for the continuity of rich and mobile multimedia applications. A good position accuracy is particularly difficult to obtain in indoor or mixed indoor-outdoor scenarios. An efficient positioning system must accurately localize any mobile terminal/user in these demanding environments, being at the same time low-cost and easy to deploy.
This paper proposes a real time tracking system for Wi-Fi networks based on trilateration calculated from RSSI values that are returned by the different wireless network interfaces. A fundamental characteristic of the system is the fact that it operates in passive mode, which means that there is no relationship between the positioning system and the terminal/user whose position is being calculated. The characteristics of the developed application are presented, together with the most important considerations that were taken into account during its development phase. The accuracy of the proposed system is evaluated by applying it to different scenarios and the results obtained prove that the application is able to achieve a good precision level, in spite of being a low cost solution that is very easy to deploy in practice. --- paper_title: Indoor Positioning System Using Visible Light and Accelerometer paper_content: Indoor positioning system is a critical part in location-based services. Highly precise positioning systems can support different mobile applications in future wireless systems. Positioning systems using existing wireless networks have low deployment costs, but the position error can be up to several meters. While there are positioning systems proposed in the literature that have low position error, they require extra hardware and are therefore costly to deploy. In this paper, we propose an indoor positioning system based on visible light communications (VLC). In contrast to existing works on VLC for positioning, our system estimates the location of the receiver in three dimensions even without: 1) the knowledge of the height of the receiver from ground; and 2) requiring the alignment of the receiver’s normal with the LED’s normal. Our system has low installation cost as it uses existing lighting sources as transmitters. Light sensor and accelerometer, which can be found in most smartphones, are used at the receiver’s side. They are used to measure the received light intensity and the orientation of the smartphone. A low-complexity algorithm is then used to find out the receiver’s position. Our system does not require the knowledge of the LED transmitters’ physical parameters. Experimental results show that our system achieves average position errors of less than 0.25 m. --- paper_title: Distributed Angle Estimation for Localization in Wireless Sensor Networks paper_content: In this paper, we design a new distributed angle estimation method for localization in wireless sensor networks (WSNs) under multipath propagation environment. We employ a two-antenna anchor that can emit two linear chirp waves simultaneously, and propose to estimate the angle of departure (AOD) of the emitted waves at each receiving node via frequency measurement of the local received signal strength indication (RSSI) signal. An improved estimation method is further proposed where multiple parallel arrays are adopted to provide the space diversity. The proposed methods rely only on radio transceivers and do not require frequency synchronization or precise time synchronization between the transceivers. More importantly, the angle is estimated at each sensor in a completely distributed manner. The performance analysis is derived and simulations are presented to corroborate the proposed studies. 
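Several of the RSSI-based entries in this list rely on the same two-step recipe: convert an RSSI reading into a distance estimate with a path-loss model, then intersect the resulting circles around the anchors. Below is a minimal sketch of that generic pipeline, assuming a log-distance path-loss model with hypothetical parameters (reference power at 1 m and exponent n) and a linearized least-squares trilateration; it illustrates the technique, not the passive Wi-Fi system or the 802.15.4 comparison described in the cited papers.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.7):
    """Invert a log-distance path-loss model: RSSI = P0 - 10*n*log10(d)."""
    return 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, distances):
    """Linearized least-squares 2-D position fix from >= 3 anchors.

    Subtracting the first circle equation from the others yields a linear
    system A @ [x, y] = b, solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    rssi = [-63.0, -71.0, -70.0, -76.0]            # hypothetical readings
    dists = [rssi_to_distance(r) for r in rssi]
    print("estimated position:", trilaterate(anchors, dists))
```

In practice the path-loss parameters are calibrated per deployment, which is precisely the ranging-quality issue several of the surveyed papers address.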
--- paper_title: Constrained Least Squares Algorithm for TOA-Based Mobile Location under NLOS Environments paper_content: This paper presents a mobile station (MS) location method using constrained least-squares (CLS) estimation in non-line-of-sight (NLOS) conditions. Three or more time-of-arrival (TOA) measurements of a signal traveling between a MS and base stations (BSs) are necessary for its localization. However, when some of the measurements are from NLOS paths, the location errors can be very large. We propose a method that mitigates possible large TOA error measurements caused by NLOS. This method does not depend on a particular distribution of the NLOS error. Simulation results show that the location accuracy is significantly improved over traditional algorithms, even under highly NLOS conditions. In the geometrical approach, the geometric relationship between the mobile device and its reference is exploited to establish the Euclidean distance between them and to identify the physical location of the device. In this paper, we propose a novel least-squares (LS) approach combined with this geometrical relationship. It first adjusts the NLOS-corrupted range measurements to approach their LOS values, and then minimizes a constrained least-squares function incorporating the known relation between the intermediate variable and the position coordinate, based on the technique of Lagrange multipliers. This algorithm does not require the distinction between NLOS and LOS BSs (6), nor knowledge of the statistics of the measurement noise and NLOS errors. Our approach also has the advantage of requiring no modifications to the subscriber equipment. The location estimation can be performed either at the MS, if it has the functionality, or at special location units in the network. The remainder of this paper is organized as follows. The proposed algorithm is outlined in Section II and the simulation results and performance analysis are discussed in Section III. Finally, conclusions are drawn in Section IV. --- paper_title: RF Localization and tracking of mobile nodes in Wireless Sensors Networks: Architectures, Algorithms and Experiments paper_content: In this paper we address the problem of localizing, tracking and navigating mobile nodes associated to operators acting in a fixed wireless sensor network (WSN) using only RF information. We propose two alternative and somehow complementary strategies: the first one is based on an empirical map of the Radio Signal Strength (RSS) distribution generated by the WSN and on the stochastic model of the behavior of the mobile nodes, while the second one is based on a maximum likelihood estimator and a radio channel model for the RSS. We compare the two approaches and highlight pros and cons for each of them. Finally, after implementing them into two real-time tracking systems, we analyze their performance on an experimental testbed in an industrial indoor environment. --- paper_title: A new hybrid algorithm on TDOA localization in wireless sensor network paper_content: A hybrid algorithm for TDOA localization is proposed in this paper. It combines the advantages of the genetic algorithm and the quasi-Newton algorithm, exploiting the genetic algorithm's population-based global search and the quasi-Newton method's strong local search.
At the same time it effectively overcomes the problem of high sensitivity to initial point of quasi-Newton method and shortcoming of genetic algorithm which reduces the searching efficiency in later period. The experimental results show that if the parameters are assumed reasonably the hybrid algorithm has extremely stability, higher localization rate and localization precision than genetic algorithm and quasi-Newton algorithm. --- paper_title: Experimental analysis of RSSI-based indoor localization with IEEE 802.15.4 paper_content: This paper presents a comparison between some of the most used ranging localization methods based on the Received Signal Strength Indicator (RSSI) in low-power IEEE 802.15.4 wireless sensor networks. In particular, the Trilateration, the Min-Max and the Maximum-Likelihood algorithms have been compared using only a limited number of reference nodes. In order to perform an exhaustive comparison we carried out tests in an indoor environment: dozens of RSSI values for every estimation have been gathered and cleaned from outliers values. Our results show that it is possible to some extent to obtain positioning information from nodes equipped with IEEE 802.15.4 radio modules, given the position and the number of reference nodes. --- paper_title: Joint Node Localization and Time-Varying Clock Synchronization in Wireless Sensor Networks paper_content: The problems of node localization and clock synchronization in wireless sensor networks are naturally tied from a statistical signal processing perspective. In this work, we consider the joint estimation of an unknown node's location and clock parameters by incorporating the effect of imperfections in node oscillators, which render a time varying nature to the clock parameters. In order to alleviate the computational complexity associated with the optimal maximum a-posteriori estimator, a simpler approach based on the Expectation-Maximization (EM) algorithm is proposed which iteratively estimates the clock parameters using a Kalman smoother in the E-step, and the location of the unknown node in the M-step. The convergence and the mean square error (MSE) performance of the proposed algorithm are evaluated using simulation studies which demonstrate the high fidelity of the proposed joint estimation approach. --- paper_title: Angle-of-arrival localization based on antenna arrays for wireless sensor networks q paper_content: Among the large number of contributions concerning the localization techniques for wireless sensor networks (WSNs), there is still no simple, energy and cost efficient solution suitable in outdoor scenarios. In this paper, a technique based on antenna arrays and angle-of-arrival (AoA) measurements is carefully discussed. While the AoA algorithms are rarely considered for WSNs due to the large dimensions of directional antennas, some system configurations are investigated that can be easily incorporated in pocket-size wireless devices. A heuristic weighting function that enables decreasing the location errors is introduced. Also, the detailed performance analysis of the presented system is provided. The localization accuracy is validated through realistic Monte-Carlo simulations that take into account the specificity of propagation conditions in WSNs as well as the radio noise effects. Finally, trade-offs between the accuracy, localization time and the number of anchors in a network are addressed. 
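The angle-of-arrival entry above localizes a node by intersecting bearing lines drawn from anchors toward the target. The sketch below is a minimal bearings-only triangulation, assuming each anchor reports a noisy azimuth measured from the x-axis; it shows the generic least-squares formulation only, not the weighted heuristic or the array processing proposed in the cited paper.

```python
import numpy as np

def triangulate_aoa(anchors, bearings_rad):
    """Least-squares fix from azimuth (AoA) measurements at known anchors.

    Each bearing theta_i defines the line
        sin(theta_i)*(x - x_i) - cos(theta_i)*(y - y_i) = 0
    through anchor i toward the target; stacking these gives A @ p = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    t = np.asarray(bearings_rad, dtype=float)
    A = np.column_stack((np.sin(t), -np.cos(t)))
    b = np.sin(t) * anchors[:, 0] - np.cos(t) * anchors[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    true_pos = np.array([4.0, 7.0])
    anchors = np.array([[0.0, 0.0], [12.0, 0.0], [0.0, 12.0]])
    rng = np.random.default_rng(0)
    bearings = np.arctan2(true_pos[1] - anchors[:, 1],
                          true_pos[0] - anchors[:, 0])
    bearings += rng.normal(0.0, np.radians(1.0), size=3)   # 1 degree noise
    print("estimated position:", triangulate_aoa(anchors, bearings))
```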
--- paper_title: Network-based wireless location: challenges faced in developing techniques for accurate wireless location information paper_content: Wireless location refers to the geographic coordinates of a mobile subscriber in cellular or wireless local area network (WLAN) environments. Wireless location finding has emerged as an essential public safety feature of cellular systems in response to an order issued by the Federal Communications Commission (FCC) in 1996. The FCC mandate aims to solve a serious public safety problem caused by the fact that, at present, a large proportion of all 911 calls originate from mobile phones, the location of which cannot be determined with the existing technology. However, many difficulties intrinsic to the wireless environment make meeting the FCC objective challenging. These challenges include channel fading, low signal-to-noise ratios (SNRs), multiuser interference, and multipath conditions. In addition to emergency services, there are many other applications for wireless location technology, including monitoring and tracking for security reasons, location sensitive billing, fraud protection, asset tracking, fleet management, intelligent transportation systems, mobile yellow pages, and even cellular system design and management. This article provides an overview of wireless location challenges and techniques with a special focus on network-based technologies and applications. --- paper_title: Adaptive AOA/TOA Localization Using Fuzzy Particle Filter for Mobile WSNs paper_content: Location-awareness is crucial and becoming increasingly important to many applications in wireless sensor networks. This paper presents a network-based positioning system and outlines recent work in which we have developed an efficient principled approach to localize a mobile sensor using time of arrival (TOA) and angle of arrival (AOA) information employing multiple seeds in the line-of-sight scenario. By receiving the periodic broadcasts from the seeds, the mobile target sensors can obtain adequate observations and localize themselves automatically. Based on the distance measurements and the initial position estimate, adaptive fuzzy control scheme is applied to solve the localization adjustment problem. The simulations show that the proposed approach provides adaptive flexibility and robust improvement in position estimation. --- paper_title: Localization of WSN node based on Time of Arrival using Ultra wide band spectrum paper_content: In this paper, the proposed methodology calculates the distance between two nodes in Wireless Sensor Network for localization purpose. The methodology is a variant of Time of Arrival (TOA) methodology. The simulations are done in Matlab using Ultra wide band spectrum and Gaussian monocycle pulses. The measured distances are compared with set distances and mean square errors are calculated. Finally, the set distance and measured distances are compared for various transmission frequencies within UWB spectrum. --- paper_title: 3-D mobile node localization using constrained volume optimization over ad-hoc sensor networks paper_content: This paper proposes a three dimensional mobile node localization and obstacle avoidance mechanism in ad-hoc sensor networks (AHSN). The localization task is performed through constrained volume optimization. Obstacle avoidance is achieved through weighted distribution method. Constrained volume optimization is performed by minimizing the squared location error and imposing the distance and boundary conditions. 
Obstacle avoidance is achieved by choosing the angular direction which minimizes the cost function. The solution to the constrained optimization problems ensures that the method is robust to change in environmental conditions and NLOS issues. Additionally, the algorithm is scalable and follows a distributed approach to localization. The performance of this method is assessed by deploying nodes in both indoor and outdoor environments. Improved localization accuracy is noted when compared to conventional methods in terms of statistical location estimates and Cramer-Rao lower bound localization error analysis. --- paper_title: Energy efficient optimal node-source localization using mobile beacon in ad-hoc sensor networks paper_content: In this paper, a single mobile beacon based method to localize nodes using principle of maximum power reception is proposed. Optimal positioning of the mobile beacon for minimum energy consumption is also discussed. In contrast to existing methods, the node localization is done with prior location of only three nodes. There is no need of synchronization, as there is only one mobile anchor and each node communicates only with the anchor node. Also, this method is not constrained by a fixed sensor geometry. The localization is done in a distributed fashion, at each sensor node. Experiments on node-source localization are conducted by deploying sensors in an ad-hoc manner in both outdoor and indoor environments. Localization results obtained herein indicate a reasonable performance improvement when compared to conventional methods. --- paper_title: Hybrid TOA/AOA-Based Mobile Localization with and without Tracking in CDMA Cellular Networks paper_content: This paper proposes a hybrid TOA/AOA (Time of Arrival/Angle of Arrival)-based localization algorithm for Code Division Multiple Access (CDMA) networks. The algorithm extends the Taylor Series Least Square (TS-LS) method originally developed for TOA-based systems to incorporate AOA measurements. In addition, tracking algorithms utilizing velocity and acceleration measurements are investigated. Simulation results illustrate that the proposed TOA/AOA TS-LS can provide better performance than conventional schemes in localization accuracy and in reduced likelihood of encountering non-convergence problem compared with TOA TS-LS. Tracking algorithms using the Extended and Unscented Kalman Filter (EKF and UKF) can track the objects relatively well, further decreasing the positioning error. UKF is found to provide closer tracking of the trajectory than EKF, for it truly captures the statistical mean and variance of the noises. --- paper_title: A simple and efficient estimator for hyperbolic location paper_content: An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. 
It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > --- paper_title: NLOS mitigation in TOA-based localization using semidefinite programming paper_content: In this work, time-of-arrival (TOA)-based wireless sensor localization in non-line-of-sight (NLOS) environments is investigated. In such environments, the accuracy of localization techniques is significantly degraded. While previous work often assumes some knowledge of the NLOS environment, we assume that the estimator knows neither which connections are NLOS nor the distribution of the NLOS errors. It is shown that the maximum likelihood estimator using only LOS connections provides a lower bound on the estimation accuracy. Furthermore, a novel NLOS mitigation technique based on semidefinite programming (SDP) is proposed. The proposed SDP technique estimates the source location jointly with the NLOS biases. The performance of the proposed estimator is compared with the aforementioned lower bound and with previous algorithms through computer simulations. Simulation results show that the proposed SDP estimator outperforms the other algorithms substantially, especially in severe NLOS environments. --- paper_title: Indoor node localization using geometric dilution of precision in ad-hoc sensor networks paper_content: In this paper, a new method for sensor node localization using geometric dilution of precision (GDOP) is described. The proposed methods are not constrained by fixed geometry. They can be used for robust node localization under both LOS and NLOS conditions in an ad-hoc sensor network. These methods are robust since they utilize measurements obtained from both LOS and NLOS conditions. Extensive simulations and real indoor deployments are used to evaluate the performance of the proposed node localization methods based on GDOP. The localization accuracy of these algorithms is reasonably better when compared to similar methods in literature. --- paper_title: Novel Robust Direction-of-Arrival-Based Source Localization Algorithm for Wideband Signals paper_content: Source localization for wideband signals using acoustic sensor networks has drawn much research interest recently. The maximum-likelihood is the predominant objective for a wide variety of source localization approaches, and we have previously proposed an expectation-maximization (EM) algorithm to solve the source localization problem. In this paper, we tackle the source localization problem based on the realistic assumption that the sources are corrupted by spatially-non-white noise. We explore the respective limitations of our recently proposed algorithm, namely EM source localization algorithm, and design a new direction-of-arrival (DOA) estimation based (DEB) source localization algorithm. We also derive the Cramer-Rao lower bound (CRLB) analysis and the computational complexity study for the aforementioned source localization schemes. Through Monte Carlo simulations and our derived CRLB analysis, it is demonstrated that our proposed DEB algorithm significantly outperforms the previous EM method in terms of both source localization accuracy and computational complexity. --- paper_title: Accuracy of RSS-Based Centroid Localization Algorithms in an Indoor Environment paper_content: In this paper, we analyze the accuracy of indoor localization measurement based on a wireless sensor network. 
The position estimation procedure is based on the received-signal-strength measurements collected in a real indoor environment. Two different classes of low-computational-effort algorithms based on the centroid concept are considered, i.e., the weighted centroid localization method and the relative-span exponential weighted localization method. In particular, different sources of measurement uncertainty are analyzed by means of theoretical simulations and experimental results. --- paper_title: A Survey on TOA Based Wireless Localization and NLOS Mitigation Techniques paper_content: Localization of a wireless device using the time-of-arrivals (TOAs) from different base stations has been studied extensively in the literature. Numerous localization algorithms with different accuracies, computational complexities, a-priori knowledge requirements, and different levels of robustness against non-line-of-sight (NLOS) bias effects also have been reported. However, to our best knowledge, a detailed unified survey of different localization and NLOS mitigation algorithms is not available in the literature. This paper aims to give a comprehensive review of these different TOA-based localization algorithms and their technical challenges, and to point out possible future research directions. Firstly, fundamental lower bounds and some practical estimators that achieve close to these bounds are summarized for line-of-sight (LOS) scenarios. Then, after giving the fundamental lower bounds for NLOS systems, different NLOS mitigation techniques are classified and summarized. Simulation results are also provided in order to compare the performance of various techniques. Finally, a table that summarizes the key characteristics of the investigated techniques is provided to conclude the paper. --- paper_title: Performance comparison of localization techniques for sequential WSN discovery paper_content: In this paper, the performance of different localization algorithms are compared in the context of the sequential Wireless Sensor Network (WSN) discovery problem. Here, all sensor nodes are at unknown locations except for a very small number of so called anchor nodes at known locations. The locations of nodes are sequentially estimated such that when the location of a given node is found, it may be used to localize others. The underlying performance of such an approach is largely dependent upon the localization technique employed. In this paper, several well-known localization techniques are presented using a united notation. These methods are time of arrival (TOA), time difference of arrival (TDOA), received signal strength (RSS), direction of arrival (DOA) and large aperture array (LAA) localization. The performance of a sequential network discovery process is then compared when using each of these localization algorithms. These algorithms are implemented in the Java-DSP software package as part of a localization toolbox. (5 pages) --- paper_title: Localization of acoustic beacons using iterative null beamforming over ad-hoc wireless sensor networks paper_content: In this paper an iterative method to localize and separate multiple audio beacons using the principles of null beam forming is proposed. In contrast to standard methods, the source separation is done optimally by putting a null on all the other sources while obtaining an estimate of a particular source. Also, this method is not constrained by fixed sensor geometry as is the case with general beamforming methods. 
The wireless sensor nodes can therefore be deployed in any random geometry as required. Experiments are performed to estimate the location and also the power spectral density of the separated sources. The experimental results indicate that the method can be used in ad-hoc, flexible and low-cost wireless sensor network deployment. --- paper_title: Overview of Radiolocation in CDMA Cellular Systems paper_content: Applications for the location of subscribers of wireless services continue to expand. Consequently, location techniques for wireless technologies are being investigated. With code-division multiple access (CDMA) being deployed by a variety of cellular and PCS providers, developing an approach for location in CDMA networks is imperative. This article discusses the applications of location technology, the methods available for its implementation in CDMA networks, and the problems that are encountered when using CDMA networks for positioning. --- paper_title: Robust and Low Complexity Source Localization in Wireless Sensor Networks Using Time Difference of Arrival Measurement paper_content: Wireless source localization has found a number of applications in wireless sensor networks. In this work, we investigate robust and low complexity solutions to the problem of source localization based on the time-difference of arrivals (TDOA) measurement model. By adopting a min-max approximation to the maximum likelihood source location estimation, we develop two low complexity algorithms that can be reliably and rapidly solved through semi-definite relaxation. Our approach hinges on the use of a reference sensor node which can be optimized according to the Cramer-Rao lower bound or selected heuristically. Our low complexity estimate can be used either as the final location estimation output or as the initial point for other traditional search algorithms.
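The TDOA entries in this list (the hybrid genetic/quasi-Newton search, the hyperbolic estimator, and the semidefinite-relaxation approach above) all start from the same measurement model: range differences relative to a reference sensor define hyperbolae whose intersection is the source. The sketch below shows only the standard linearized least-squares step, assuming sensor 0 is the reference and the range differences are already in metres; the cited papers refine this initial estimate further.

```python
import numpy as np

def tdoa_linear_fix(sensors, range_diffs):
    """One-shot linearized TDOA fix in 2-D.

    sensors:     (N, 2) array, sensor 0 is the reference.
    range_diffs: (N-1,) array, d_i = ||p - s_i|| - ||p - s_0|| in metres.

    Treating r0 = ||p - s_0|| as an extra unknown turns the hyperbolic
    equations into 2*(s_i - s_0) @ p + 2*d_i*r0 = K_i - K_0 - d_i^2,
    where K_i = ||s_i||^2.
    """
    s = np.asarray(sensors, dtype=float)
    d = np.asarray(range_diffs, dtype=float)
    K = np.sum(s ** 2, axis=1)
    A = np.column_stack((2.0 * (s[1:] - s[0]), 2.0 * d))
    b = K[1:] - K[0] - d ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]                      # drop the auxiliary r0 estimate

if __name__ == "__main__":
    sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
    source = np.array([6.0, 11.0])
    ranges = np.linalg.norm(sensors - source, axis=1)
    d = ranges[1:] - ranges[0]          # noise-free range differences
    print("estimated source:", tdoa_linear_fix(sensors, d))
```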
--- paper_title: Wireless Sensor Network Localization Techniques paper_content: Wireless sensor network localization is an important area that attracted significant research interest. This interest is expected to grow further with the proliferation of wireless sensor network applications. This paper provides an overview of the measurement techniques in sensor network localization and the one-hop localization algorithms based on these measurements. A detailed investigation on multi-hop connectivity-based and distance-based localization algorithms is presented. A list of open research problems in the area of distance-based sensor network localization is provided with discussion on possible approaches to them. --- paper_title: Localization of irregular Wireless Sensor Networks based on multidimensional scaling paper_content: In many applications of Wireless Sensor Networks (WSN), it is crucial to know the location of sensor nodes. Although several methods have been proposed, most of them have poor performance in irregularly shaped networks. MDS-MAP is one of the localization methods based on multidimensional scaling (MDS) technique. It uses the connectivity information to derive the location of the nodes in the network. In presence of additional data such as estimated distances between adjacent neighbors, it can also enhance the localization precision. Since MDS-MAP uses the length of the shortest path as Euclidean distance between the nodes, it is sensitive to the shape of the network. In this paper we present MDS-MAP(I), a modified algorithm based on MDS-MAP which improves the localization task in irregular networks. The simulation results show that the algorithm is more reliable in various topologies and achieves a significant performance improvement upon existing method. --- paper_title: Improved MDS-based localization paper_content: It is often useful to know the geographic positions of nodes in a communications network, but adding GPS receivers or other sophisticated sensors to every node can be expensive. MDS-MAP is a recent localization method based on multidimensional scaling (MDS). It uses connectivity information - who is within communications range of whom - to derive the locations of the nodes in the network, and can take advantage of additional data, such as estimated distances between neighbors or known positions for certain anchor nodes, if they are available. However, MDS-MAP is an inherently centralized algorithm and is therefore of limited utility in many applications. In this paper, we present a new variant of the MDS-MAP method, which we call MDS-MAP(P) standing for MDS-MAP using patches of relative maps, that can be executed in a distributed fashion. Using extensive simulations, we show that the new algorithm not only preserves the good performance of the original method on relatively uniform layouts, but also performs much better than the original on irregularly-shaped networks. The main idea is to build a local map at each node of the immediate vicinity and then merge these maps together to form a global map. This approach works much better for topologies in which the shortest path distance between two nodes does not correspond well to their Euclidean distance. We also discuss an optional refinement step that improves solution quality even further at the expense of additional computation. --- paper_title: MDS and Trilateration Based Localization in Wireless Sensor Network paper_content: Localization of sensor nodes is crucial in Wireless Sensor Network because of applications like surveillance, tracking, navigation etc. Various optimization techniques for localization have been proposed in literature by different researchers. In this paper, we propose a two-phase hybrid approach for localization using Multidimensional Scaling and trilateration, namely, MDS with refinement using trilateration.
Trilateration refines the estimated locations obtained by the MDS algorithm and hence acts as a post optimizer which improves the accuracy of the estimated positions of sensor nodes. Through extensive simulations, we have shown that the proposed algorithm is more robust to noise than previous approaches and provides higher accuracy for estimating the positions of sensor nodes. --- paper_title: ALESSA: MDS-based localization algorithm for Wireless Sensor Networks paper_content: Self-localization in Wireless Sensor Networks (WSN) should be precise and reliable. Alternative Least-Square Scaling Algorithm (ALESSA) is a recently proposed centralized Multidimensional Scaling (MDS)-based localization algorithm, which uses an iterative approach to solve for the coordinates of discrete points. While ALESSA converges most of the time, like most iterative algorithms, it can be trapped in local minima causing large errors in the location estimates. In this paper, we propose the reseeding of the initial random estimates to improve the convergence of the algorithm. Performance of the proposed algorithm is evaluated under different network topologies with limited connectivity. We also analyzed the effects of low Signal to Noise Ratio (SNR) and the population of the nodes deployed in the network to the algorithm's localization precision. Simulation results show that at 26 dB SNR reseeding always results in convergence with the estimation errors within 5% of the reference communication range. Analysis and test runs also verified that our algorithm provides accurate and consistent localization estimates under range-based localization with limited network connectivity, even with a low SNR. The algorithm also performs well with a limited number of nodes. --- paper_title: Geolocation Techniques: Principles and Applications paper_content: Basics of Distributed and Cooperative Radio and Non-Radio Based Geolocation provides a detailed overview of geolocation technologies. The book covers the basic principles of geolocation, including ranging techniques to localization technologies, fingerprinting and localization in wireless sensor networks. This book also examines the latest algorithms and techniques such as Kalman Filtering, Gauss-Newton Filtering and Particle Filtering. --- paper_title: Monte Carlo localization for mobile wireless sensor networks paper_content: Localization is crucial to many applications in wireless sensor networks.
In this article, we propose a range-free anchor-based localization algorithm for mobile wireless sensor networks that builds upon the Monte Carlo localization algorithm. We concentrate on improving the localization accuracy and efficiency by making better use of the information a sensor node gathers and by drawing the necessary location samples faster. To do so, we constrain the area from which samples are drawn by building a box that covers the region where anchors' radio ranges overlap. This box is the region of the deployment area where the sensor node is localized. Simulation results show that localization accuracy is improved by a minimum of 4% and by a maximum of 73% (average 30%), for varying node speeds when considering nodes with knowledge of at least three anchors. The coverage is also strongly affected by speed and its improvement ranges from 3% to 55% (average 22%). Finally, the processing time is reduced by 93% for a similar localization accuracy. --- paper_title: DV Based Positioning in Ad Hoc Networks paper_content: Many ad hoc network protocols and applications assume the knowledge of geographic location of nodes. The absolute position of each networked node is an assumed fact by most sensor networks which can then present the sensed information on a geographical map. Finding position without the aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use due to power, form factor or line of sight conditions. Position would also enable routing in sufficiently isotropic large networks, without the use of large routing tables. We are proposing APS --- a localized, distributed, hop by hop positioning algorithm, that works as an extension of both distance vector routing and GPS positioning in order to provide approximate position for all nodes in a network where only a limited fraction of nodes have self positioning capability. --- paper_title: GPS-less Low Cost Outdoor Localization For Very Small Devices paper_content: Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like environmental monitoring of water and soil, requires that these nodes be very small, lightweight, untethered, and unobtrusive. The problem of localization, that is, determining where a given node is physically located in a network, is a challenging one, and yet extremely crucial for many of these applications. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the reliance on GPS of all nodes in these networks. We review localization techniques and evaluate the effectiveness of a very simple connectivity metric method for localization in outdoor environments that makes use of the inherent RF communications capabilities of these devices. A fixed number of reference points in the network with overlapping regions of coverage transmit periodic beacon signals. Nodes use a simple connectivity metric, which is more robust to environmental vagaries, to infer proximity to a given subset of these reference points. Nodes localize themselves to the centroid of their proximate reference points. The accuracy of localization is then dependent on the separation distance between two-adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90 percent of our data points is within one-third of the separation distance. 
However, future work is needed to extend the technique to more cluttered environments. --- paper_title: An improved DV-Hop localization algorithm for wireless sensor networks paper_content: Aiming at the positioning problem of wireless sensor network node location, an improved DV-hop positioning algorithm is proposed in this paper, together with its basic principle and realization issues. The proposed method can improve location accuracy without increasing hardware cost for sensor node. Simulation results show that it has good positioning accuracy and coverage. The influences of anchor nodes on the DV-hop algorithm are also explored in the paper. --- paper_title: Maximum likelihood multiple-source localization using acoustic energy measurements with wireless sensor networks paper_content: A maximum likelihood (ML) acoustic source location estimation method is presented for the application in a wireless ad hoc sensor network. This method uses acoustic signal energy measurements taken at individual sensors of an ad hoc wireless sensor network to estimate the locations of multiple acoustic sources. Compared to the existing acoustic energy based source localization methods, this proposed ML method delivers more accurate results and offers the enhanced capability of multiple source localization. A multiresolution search algorithm and an expectation-maximization (EM) like iterative algorithm are proposed to expedite the computation of source locations. The Cramer-Rao Bound (CRB) of the ML source location estimate has been derived. The CRB is used to analyze the impacts of sensor placement to the accuracy of location estimates for single target scenario. Extensive simulations have been conducted. It is observed that the proposed ML method consistently outperforms existing acoustic energy based source localization methods. An example applying this method to track military vehicles using real world experiment data also demonstrates the performance advantage of this proposed method over a previously proposed acoustic energy source localization method. --- paper_title: DuRT: Dual RSSI Trend Based Localization for Wireless Sensor Networks paper_content: Localization is a key issue in wireless sensor networks. The geographical location of sensors is important information that is required in sensor network operations such as target detection, monitoring, and rescue. These methods are classified into two categories, namely range-based and range-free. Range-based localizations achieve high location accuracy by using specific hardware or using absolute received signal strength indicator (RSSI) values, whereas range-free approaches obtain location estimates with lower accuracy. Because of the hardware and energy constraints in sensor networks, RSSI offers a convenient method to find the position of sensor nodes. However, in the presence of channel noise, fading, and attenuation, it is not possible to estimate the actual location. In this paper, we propose an RSSI-based localization scheme that considers the trend of RSSI values obtained from beacons to estimate the position of sensor nodes. Through applying polynomial modeling on the relationship between received RSSI and distance, we are able to locate the maximum RSSI point on the anchor trajectory. Using two such trajectories, the sensor position can be determined by calculating the intersection point of perpendiculars passing through the maximum RSSI point on each trajectory.
In addition, we devised schemes to improve the localization method to perform under a variety of cases such as single trajectory, unavailability of RSSI trends, and so. The advantage of our scheme is that it does not rely on absolute RSSI values and hence, can be applied in dynamic environments. In simulations, we demonstrate that the proposed localization scheme achieves higher location accuracy compared with existing localization approaches. --- paper_title: Localization for mobile sensor networks paper_content: Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions. --- paper_title: Organizing a Global Coordinate System from Local Information on an Ad Hoc Sensor Network paper_content: We demonstrate that it is possible to achieve accurate localization and tracking of a target in a randomly placed wireless sensor network composed of inexpensive components of limited accuracy. The crucial enabler for this is a reasonably accurate local coordinate system aligned with the global coordinates. We present an algorithm for creating such a coordinate system without the use of global control, globally accessible beacon signals, or accurate estimates of inter-sensor distances. The coordinate system is robust and automatically adapts to the failure or addition of sensors. Extensive theoretical analysis and simulation results are presented. Two key theoretical results are: there is a critical minimum average neighborhood size of 15 for good accuracy and there is a fundamental limit on the resolution of any coordinate system determined strictly from local communication. Our simulation results show that we can achieve position accuracy to within 20% of the radio range even when there is variation of up to 10% in the signal strength of the radios. The algorithm improves with finer quantizations of inter-sensor distance estimates: with 6 levels of quantization position errors better than 10% are achieved. Finally we show how the algorithm gracefully generalizes to target tracking tasks. --- paper_title: Range-free localization schemes for large scale sensor networks paper_content: Wireless Sensor Networks have been proposed for a multitude of location-dependent applications. For such systems, the cost and limitations of the hardware on sensing nodes prevent the use of range-based localization schemes that depend on absolute point-to-point distance estimates. Because coarse accuracy is sufficient for most sensor network applications, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. 
In this paper, we present APIT, a novel localization algorithm that is range-free. We show that our APIT scheme performs best when an irregular radio pattern and random node placement are considered, and low communication overhead is desired. We compare our work, via extensive simulation, with three state-of-the-art range-free localization schemes to identify the preferable system configurations of each. In addition, we study the effect of location error on routing and tracking performance. We show that routing performance and tracking accuracy are not significantly affected by localization error when the error is less than 0.4 times the communication radio radius. --- paper_title: A new weighted centroid localization algorithm in wireless sensor networks paper_content: Nodes in a sensor network are often randomly distributed. To assign measurements to locations, each node has to determine its own position. Algorithms for positioning in wireless sensor networks are classified into two groups: approximate and exact. In this paper, we propose a range-based approximate positioning approach that is essentially a combination of WCL and EBTB. We then compare it with two other approximate positioning approaches, WCL (with time complexity of O(n)) and EBTB (with time complexity of O(n*n)), and an exact positioning approach, QR Factorization (with time complexity of O(n*n*n)). Finally, it is shown that EWCL (with time complexity of O(n*n)) is the best of the four localization algorithms when the noise is high, and its accuracy is close to that of QR when the noise is medium.
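The weighted-centroid entry above follows the classic WCL idea: a node estimates its position as a weighted average of the beacon positions it can hear, with weights that decay with estimated distance. Below is a minimal sketch, assuming inverse-distance weights derived from RSSI through the same hypothetical path-loss inversion used earlier in this list; the EBTB/EWCL variants of the cited paper differ mainly in how the weights are formed.

```python
import numpy as np

def weighted_centroid(beacons, rssi_dbm, p0_dbm=-40.0, n=2.7, g=1.0):
    """Weighted centroid localization (WCL).

    Weights are 1/d^g, where d is the RSSI-derived distance estimate;
    g = 1 reproduces plain inverse-distance weighting.
    """
    beacons = np.asarray(beacons, dtype=float)
    rssi = np.asarray(rssi_dbm, dtype=float)
    d = 10.0 ** ((p0_dbm - rssi) / (10.0 * n))      # path-loss inversion
    w = 1.0 / np.maximum(d, 1e-6) ** g
    return (w[:, None] * beacons).sum(axis=0) / w.sum()

if __name__ == "__main__":
    beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    rssi = [-55.0, -67.0, -66.0, -73.0]             # hypothetical readings
    print("estimated position:", weighted_centroid(beacons, rssi))
```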
--- paper_title: An adaptive anchor navigation algorithm for localization in MANET paper_content: Automatic anchor navigation for sensor node localization in mobile networks is a challenging task. Previous work on anchor navigation has mostly concentrated on static node localization. This paper proposes an adaptive anchor navigation algorithm to localize sensor nodes in a mobile ad-hoc network (MANET). The novelty of the proposed algorithm lies in the fact that an anchor chooses an optimal position at every instant. This adaptive position selection of the anchor ensures low energy consumption. On the other hand, an optimal path mechanism proposed herein constrains the anchor to traverse in a region densely populated with nodes. This ensures that the path traversed by the anchors is minimal. In order to localize the nodes, the ratio of grid benefit to distance (GBD) within the probable region of the anchor is first computed. Subsequently, an optimality criterion is used to decide the next anchor location. Localization is then performed using geometric methods. Experimental results on node localization in a mobile network scenario indicate a reasonable performance improvement when compared to conventional methods. --- paper_title: Optimal anchor guiding algorithms for maximal node localization in mobile sensor networks paper_content: Localization of mobile nodes is a challenging problem, especially when both anchors and nodes are mobile. In this paper, algorithms for optimal anchor guiding, which localize the maximum number of nodes, are proposed. The algorithms are based on the principle of jointly maximizing a grid benefit criterion and the number of nodes localized. The advantage of these algorithms is that both the anchors and the nodes can be deployed randomly and can traverse the region with varying speeds. Additionally, the optimal path for the anchor is decided in such a way that the maximum number of nodes is localized. The proposed algorithms are extensively analyzed for their performance by conducting experiments on both NI and Crossbow set-ups. The results obtained from localization error analysis indicate that the algorithms discussed in this work perform reasonably better than similar conventional algorithms available in the literature. Additional analysis performed on energy consumption indicates that the proposed algorithms are energy efficient.
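The Monte Carlo localization entries earlier in this list, and the mobile-anchor schemes above, share a simple sample-and-filter loop: draw candidate positions from the box where the radio ranges of the heard anchors overlap, discard candidates that violate the range constraints, and average the survivors. The sketch below shows a single box-constrained sampling step, assuming an ideal unit-disc radio model with range r; the cited algorithms additionally propagate the samples over time as the node moves.

```python
import numpy as np

def mcl_box_step(anchors, comm_range, n_samples=2000, rng=None):
    """One sampling step of box-constrained Monte Carlo localization.

    anchors: (K, 2) positions of one-hop anchors heard by the node.
    A candidate is kept only if it lies within comm_range of every
    heard anchor (ideal unit-disc connectivity model).
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(anchors, dtype=float)
    lo = a.max(axis=0) - comm_range      # intersection of per-anchor boxes
    hi = a.min(axis=0) + comm_range
    cand = rng.uniform(lo, hi, size=(n_samples, 2))
    ok = (np.linalg.norm(cand[:, None, :] - a[None, :, :], axis=2)
          <= comm_range).all(axis=1)
    kept = cand[ok]
    return kept.mean(axis=0) if len(kept) else (lo + hi) / 2.0

if __name__ == "__main__":
    anchors = [(2.0, 3.0), (8.0, 4.0), (5.0, 9.0)]
    print("estimated position:", mcl_box_step(anchors, comm_range=6.0))
```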
--- paper_title: Maximum likelihood multiple-source localization using acoustic energy measurements with wireless sensor networks paper_content: A maximum likelihood (ML) acoustic source location estimation method is presented for application in a wireless ad hoc sensor network. This method uses acoustic signal energy measurements taken at individual sensors of an ad hoc wireless sensor network to estimate the locations of multiple acoustic sources. Compared to the existing acoustic energy based source localization methods, this proposed ML method delivers more accurate results and offers the enhanced capability of multiple source localization. A multiresolution search algorithm and an expectation-maximization (EM) like iterative algorithm are proposed to expedite the computation of source locations. The Cramér-Rao Bound (CRB) of the ML source location estimate has been derived. The CRB is used to analyze the impact of sensor placement on the accuracy of location estimates for the single-target scenario. Extensive simulations have been conducted. It is observed that the proposed ML method consistently outperforms existing acoustic energy based source localization methods. An example applying this method to track military vehicles using real-world experiment data also demonstrates the performance advantage of this proposed method over a previously proposed acoustic energy source localization method. --- paper_title: Multi-sensor data fusion methods for indoor localization under collinear ambiguity paper_content: Sensor node localization in mobile ad-hoc sensor networks is a challenging problem. Often, the anchor nodes tend to line up in a linear fashion in a mobile sensor network when nodes are deployed in an ad-hoc manner. This paper discusses novel node localization methods under the conditions of collinear ambiguity of the anchors. Additionally, the work presented herein also describes a methodology to fuse data available from multiple sensors for improved localization performance under conditions of collinear ambiguity. In this context, data is first acquired from multiple sensors sensing different modalities. The data acquired from each sensor is used to compute attenuation models for each sensor. Subsequently, a combined multi-sensor attenuation model is developed. The fusion methodology uses a joint error optimization approach on the multi-sensor data. The distance between each sensor node and anchor is itself computed using the differential power principle. These distances are used in the localization of sensor nodes under the condition of collinear ambiguity of anchors. Localization error analysis is also carried out in indoor conditions and compared with the Cramér-Rao lower bound. Experimental results on node localization using simulations and real field deployments indicate reasonable improvements in terms of localization accuracy when compared to methods like MLAR and MGLR. --- paper_title: Sensor node tracking using semi-supervised Hidden Markov Models paper_content: In this paper, a novel method for mobile sensor node tracking using semi-supervised Hidden Markov Models (HMM) is discussed. A new methodology to develop a combined attenuation model from data gathered from multiple sensors is also described. Observations emitted from the nodes are sparsely measured over the network area with beacons placed on the boundaries. HMMs are trained using observations measured at each grid point.
The distances between a node passing through a specific grid point and beacons are estimated using likelihood maximization. The local location co-ordinates of the node positions are then computed by solving a constrained volume optimization problem. Quaternion rotation is finally used to obtain the global coordinates of the node location. Several standard manoeuvres of mobile nodes are first simulated. Similar manoeuvres are also recorded from real field deployments. Experimental results for node localization and tracking are obtained from these experiments. Results indicate an improvement in localization accuracy when compared to conventional localization methods. --- paper_title: Semi-supervised Laplacian regularized least squares algorithm for localization in wireless sensor networks paper_content: In this paper, we propose a new approach for localization in wireless sensor networks based on a semi-supervised Laplacian regularized least squares algorithm. We consider two kinds of localization data: signal strength and pair-wise distance between nodes. When nodes are close within their physical location space, their localization data vectors should be similar. We first propose a solution using the alignment criterion to learn an appropriate kernel function in terms of the similarities between anchors, and the kernel function is used to measure the similarity between pair-wise sensor nodes in the networks. We then propose a semi-supervised learning algorithm based upon manifold regularization to obtain the locations of the non-anchors. We evaluate our algorithm under various network topologies, transmission ranges and signal noise levels, and analyze its performance. We also compare our approach with several existing approaches, and demonstrate the high efficiency of our proposed algorithm in terms of location estimation error. --- paper_title: RF Localization and tracking of mobile nodes in Wireless Sensors Networks: Architectures, Algorithms and Experiments paper_content: In this paper we address the problem of localizing, tracking and navigating mobile nodes associated with operators acting in a fixed wireless sensor network (WSN) using only RF information. We propose two alternative and somehow complementary strategies: the first one is based on an empirical map of the Radio Signal Strength (RSS) distribution generated by the WSN and on the stochastic model of the behavior of the mobile nodes, while the second one is based on a maximum likelihood estimator and a radio channel model for the RSS. We compare the two approaches and highlight pros and cons for each of them. Finally, after implementing them into two real-time tracking systems, we analyze their performance on an experimental testbed in an industrial indoor environment. --- paper_title: Non-Line-of-Sight Identification and Mitigation Using Received Signal Strength paper_content: Indoor wireless systems often operate under non-line-of-sight (NLOS) conditions that can cause ranging errors for location-based applications. As such, these applications could benefit greatly from NLOS identification and mitigation techniques. These techniques have been primarily investigated for ultra-wide band (UWB) systems, but little attention has been paid to WiFi systems, which are far more prevalent in practice. In this study, we address the NLOS identification and mitigation problems using multiple received signal strength (RSS) measurements from WiFi signals.
Key to our approach is exploiting several statistical features of the RSS time series, which are shown to be particularly effective. We develop and compare two algorithms based on machine learning and a third based on hypothesis testing to separate LOS/NLOS measurements. Extensive experiments in various indoor environments show that our techniques can distinguish between LOS/NLOS conditions with an accuracy of around 95%. Furthermore, the presented techniques improve distance estimation accuracy by 60% as compared to state-of-the-art NLOS mitigation techniques. Finally, improvements in distance estimation accuracy of 50% are achieved even without environment-specific training data, demonstrating the practicality of our approach for real-world implementations. --- paper_title: A tutorial on hidden Markov models and selected applications in speech recognition paper_content: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. --- paper_title: A RSS-EKF localization method using HMM-based LOS/NLOS channel identification paper_content: Knowing the channel sight condition is important as it has a great impact on localization performance. In this paper, an RSS-based localization algorithm, which jointly takes into consideration the effect of channel sight conditions, is investigated. In our approach, the channel sight conditions experienced by a moving target with respect to all sensors are modeled as a hidden Markov model (HMM), with the quantized measured RSSs as its observations. The parameters of the HMM are obtained by off-line training, assuming that LOS/NLOS conditions can be identified during the training phase. With the HMM matrices, a forward-only algorithm can be utilized for real-time sight condition identification. The target is localized by an extended Kalman filter (EKF) that suitably incorporates the identified sight conditions. Simulation results show that our proposed localization strategy can provide good identification of channel sight conditions and hence results in better localization estimates. --- paper_title: Convex position estimation in wireless sensor networks paper_content: A method for estimating unknown node positions in a sensor network based exclusively on connectivity-induced constraints is described. Known peer-to-peer communication in the network is modeled as a set of geometric constraints on the node positions. The global solution of a feasibility problem for these constraints yields estimates for the unknown positions of the nodes in the network.
Providing that the constraints are tight enough, simulation illustrates that this estimate becomes close to the actual node positions. Additionally, a method for placing rectangular bounds around the possible positions for all unknown nodes in the network is given. The area of the bounding rectangles decreases as additional or tighter constraints are included in the problem. Specific models are suggested and simulated for isotropic and directional communication, representative of broadcast-based and optical transmission respectively, though the methods presented are not limited to these simple cases. --- paper_title: Walkie-Markie: Indoor Pathway Mapping Made Easy paper_content: We present Walkie-Markie - an indoor pathway mapping system that can automatically reconstruct internal pathway maps of buildings without any a-priori knowledge about the building, such as the floor plan or access point locations. Central to Walkie-Markie is a novel exploitation of the WiFi infrastructure to define landmarks (WiFi-Marks) to fuse crowdsourced user trajectories obtained from inertial sensors on users' mobile phones. WiFi-Marks are special pathway locations at which the trend of the received WiFi signal strength changes from increasing to decreasing when moving along the pathway. By embedding these WiFi-Marks in a 2D plane using a newly devised algorithm and connecting them with calibrated user trajectories, Walkie-Markie is able to infer pathway maps with high accuracy. Our experiments demonstrate that Walkie-Markie is able to reconstruct a high-quality pathway map for a real office-building floor after only 5-6 rounds of walks, with accuracy gradually improving as more user data becomes available. The maximum discrepancy between the inferred pathway map and the real one is within 3m and 2.8m for the anchor nodes and path segments, respectively. --- paper_title: Anchor-Based Localization via Interval Analysis for Mobile Ad-Hoc Sensor Networks paper_content: Location awareness is a fundamental requirement for many applications of sensor networks. This paper proposes an original technique for self-localization in mobile ad-hoc networks. This method is adapted to the limited computational and memory resources of mobile nodes. The localization problem is solved in an interval analysis framework. The propagation of the estimation errors is based on an interval formulation of a state space model, where observations consist of anchor-based connectivities. The problem is then formulated as a constraint satisfaction problem where a simple Waltz algorithm is applied in order to contract the solution. This technique yields a guaranteed and robust online estimation of the mobile node positions. Observation errors as well as anchor node imperfections are taken into consideration in a simple and computational-consistent way. Multihop anchor-based and backpropagated localizations are also made possible in our method. Simulation results on mobile node trajectories corroborate the efficiency of the proposed technique and show that it outperforms the particle filtering methods. --- paper_title: Localization from mere connectivity paper_content: It is often useful to know the geographic positions of nodes in a communications network, but adding GPS receivers or other sophisticated sensors to every node can be expensive. We present an algorithm that uses connectivity information who is within communications range of whom to derive the locations of the nodes in the network. 
The method can take advantage of additional information, such as estimated distances between neighbors or known positions for certain anchor nodes, if it is available. The algorithm is based on multidimensional scaling, a data analysis technique that takes O(n3) time for a network of n nodes. Through simulation studies, we demonstrate that the algorithm is more robust to measurement error than previous proposals, especially when nodes are positioned relatively uniformly throughout the plane. Furthermore, it can achieve comparable results using many fewer anchor nodes than previous methods, and even yields relative coordinates when no anchor nodes are available. --- paper_title: Locating in fingerprint space: wireless indoor localization with little human intervention paper_content: Indoor localization is of great importance for a range of pervasive applications, attracting many research efforts in the past decades. Most radio-based solutions require a process of site survey, in which radio signatures of an interested area are annotated with their real recorded locations. Site survey involves intensive costs on manpower and time, limiting the applicable buildings of wireless localization worldwide. In this study, we investigate novel sensors integrated in modern mobile phones and leverage user motions to construct the radio map of a floor plan, which is previously obtained only by site survey. On this basis, we design LiFS, an indoor localization system based on off-the-shelf WiFi infrastructure and mobile phones. LiFS is deployed in an office building covering over 1600m2, and its deployment is easy and rapid since little human intervention is needed. In LiFS, the calibration of fingerprints is crowdsourced and automatic. Experiment results show that LiFS achieves comparable location accuracy to previous approaches even without site survey. --- paper_title: Anchor-Free Distributed Localization in Sensor Networks paper_content: Many sensor network applications require that each node’s sensor stream be annotated with its physical location in some common coordinate system. Manual measurement and configuration methods for obtaining location don’t scale and are error-prone, and equipping sensors with GPS is often expensive and does not work in indoor and urban deployments. Sensor networks can therefore benefit from a self-configuring method where nodes cooperate with each other, estimate local distances to their neighbors, and converge to a consistent coordinate assignment. This paper describes a fully decentralized algorithm called AFL (Anchor-Free Localization) where nodes start from a random initial coordinate assignment and converge to a consistent solution using only local node interactions. The key idea in AFL is fold-freedom, where nodes first configure into a topology that resembles a scaled and unfolded version of the true configuration, and then run a force-based relaxation procedure. We show using extensive simulations under a variety of network sizes, node densities, and distance estimation errors that our algorithm is superior to previously proposed methods that incrementally compute the coordinates of nodes in the network, in terms of its ability to compute correct coordinates under a wider variety of conditions and its robustness to measurement errors. 
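The MDS-based approach above (localization from mere connectivity, i.e. MDS-MAP) builds on classical multidimensional scaling applied to a matrix of estimated pairwise distances. As a point of reference, the following is a minimal sketch of the classical MDS step only (double centering plus eigendecomposition), under the assumption that a full pairwise distance matrix is already available; it omits the shortest-path distance approximation and the refinement and patch-merging stages described in the papers.

import numpy as np

def classical_mds(dist_matrix, dim=2):
    """Recover relative node coordinates from an (n, n) matrix of pairwise distances."""
    d = np.asarray(dist_matrix, dtype=float)
    n = d.shape[0]
    # Double centering: B = -1/2 * J * D^2 * J with J = I - (1/n) * 1 1^T.
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    # Keep the 'dim' largest eigenpairs; coordinates are V * sqrt(lambda).
    eigval, eigvec = np.linalg.eigh(b)
    order = np.argsort(eigval)[::-1][:dim]
    coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
    return coords  # relative map, defined up to rotation, translation and reflection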
--- paper_title: GPS-less Low Cost Outdoor Localization For Very Small Devices paper_content: Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like environmental monitoring of water and soil, requires that these nodes be very small, lightweight, untethered, and unobtrusive. The problem of localization, that is, determining where a given node is physically located in a network, is a challenging one, and yet extremely crucial for many of these applications. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the reliance on GPS of all nodes in these networks. We review localization techniques and evaluate the effectiveness of a very simple connectivity metric method for localization in outdoor environments that makes use of the inherent RF communications capabilities of these devices. A fixed number of reference points in the network with overlapping regions of coverage transmit periodic beacon signals. Nodes use a simple connectivity metric, which is more robust to environmental vagaries, to infer proximity to a given subset of these reference points. Nodes localize themselves to the centroid of their proximate reference points. The accuracy of localization is then dependent on the separation distance between two adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90 percent of our data points is within one-third of the separation distance. However, future work is needed to extend the technique to more cluttered environments. --- paper_title: Gaussian Process Regression for Fingerprinting based Localization paper_content: In this paper, Gaussian process regression (GPR) for fingerprinting based localization is presented. In contrast to general regression techniques, the GPR not only infers the posterior received signal strength (RSS) mean but also the variance at each fingerprint location. The GPR also takes into account the variance of the input, i.e., noisy RSS data. The hyper-parameters of GPR are estimated using a trust-region-reflective algorithm. The Cramér-Rao bound is analysed to highlight the performance of the parameter estimator. The posterior mean and variance of the RSS data are utilized in fingerprinting based localization. Principal component analysis is employed to choose the k strongest Wi-Fi access points (APs). The performance of the proposed algorithm is validated using real field deployments. Accuracy improvements of 10% and 30% are observed in two sites compared to the Horus fingerprinting approach. --- paper_title: WiFi-SLAM Using Gaussian Process Latent Variable Models paper_content: WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data.
We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization. --- paper_title: Improved MDS-based localization paper_content: It is often useful to know the geographic positions of nodes in a communications network, but adding GPS receivers or other sophisticated sensors to every node can be expensive. MDS-MAP is a recent localization method based on multidimensional scaling (MDS). It uses connectivity information - who is within communications range of whom - to derive the locations of the nodes in the network, and can take advantage of additional data, such as estimated distances between neighbors or known positions for certain anchor nodes, if they are available. However, MDS-MAP is an inherently centralized algorithm and is therefore of limited utility in many applications. In this paper, we present a new variant of the MDS-MAP method, which we call MDS-MAP(P) standing for MDS-MAP using patches of relative maps, that can be executed in a distributed fashion. Using extensive simulations, we show that the new algorithm not only preserves the good performance of the original method on relatively uniform layouts, but also performs much better than the original on irregularly-shaped networks. The main idea is to build a local map at each node of the immediate vicinity and then merge these maps together to form a global map. This approach works much better for topologies in which the shortest path distance between two nodes does not correspond well to their Euclidean distance. We also discuss an optional refinement step that improves solution quality even further at the expense of additional computation. --- paper_title: No need to war-drive: Unsupervised indoor localization paper_content: We propose UnLoc, an unsupervised indoor localization scheme that bypasses the need for war-driving. Our key observation is that certain locations in an indoor environment present identifiable signatures on one or more sensing dimensions. An elevator, for instance, imposes a distinct pattern on a smartphone’s accelerometer; a corridor-corner may overhear a unique set of WiFi access points; a specific spot may experience an unusual magnetic fluctuation. We hypothesize that these kind of signatures naturally exist in the environment, and can be envisioned as internal landmarks of a building. Mobile devices that “sense” these landmarks can recalibrate their locations, while dead-reckoning schemes can track them between landmarks. Results from 3 different indoor settings, including a shopping mall, demonstrate median location errors of 1.69m. War-driving is not necessary, neither are floorplans ‐ the system simultaneously computes the locations of users and landmarks, in a manner that they converge reasonably quickly. We believe this is an unconventional approach to indoor localization, holding promise for real-world deployment. --- paper_title: Simulated Annealing based Wireless Sensor Network Localization with Flip Ambiguity Mitigation paper_content: Accurate self-localization capability is highly desirable in wireless sensor networks. A major problem in wireless sensor network localization is the flip ambiguity, which introduces large errors in the location estimates. 
In this paper, we propose a two-phase simulated annealing based localization (SAL) algorithm to address the issue. Simulated annealing (SA) is a technique for combinatorial optimization problems and it is robust against being trapped in local minima. In the first phase of our algorithm, simulated annealing is used to obtain an accurate estimate of location. Then a second phase of optimization is performed only on those nodes that are likely to suffer from the flip ambiguity problem. Based on the neighborhood information of nodes, the nodes likely to have been affected by flip ambiguity are identified and moved to the correct position. The proposed scheme is tested using simulation on a sensor network of 200 nodes whose distance measurements are corrupted by Gaussian noise. Simulation results show that the proposed scheme gives accurate and consistent location estimates of the nodes and mitigates errors due to flip ambiguities. --- paper_title: Fusion of Radio and Camera Sensor Data for Accurate Indoor Positioning paper_content: Indoor positioning systems have received a lot of attention recently due to their importance for many location-based services, e.g. indoor navigation and smart buildings. Lightweight solutions based on WiFi and inertial sensing have gained popularity, but are not fit for demanding applications, such as expert museum guides and industrial settings, which typically require sub-meter location information. In this paper, we propose a novel positioning system, RAVEL (Radio And Vision Enhanced Localization), which fuses anonymous visual detections captured by widely available camera infrastructure, with radio readings (e.g. WiFi radio data). Although visual trackers can provide excellent positioning accuracy, they are plagued by issues such as occlusions and people entering/exiting the scene, preventing their use as a robust tracking solution. By incorporating radio measurements, visually ambiguous or missing data can be resolved through multi-hypothesis tracking. We evaluate our system in a complex museum environment with dim lighting and multiple people moving around in a space cluttered with exhibit stands. Our experiments show that although the WiFi measurements are not by themselves sufficiently accurate, when they are fused with camera data, they become a catalyst for pulling together ambiguous, fragmented, and anonymous visual tracklets into accurate and continuous paths, yielding typical errors below 1 meter. --- paper_title: Improving Simultaneous Localization and Mapping for pedestrian navigation and automatic mapping of buildings by using online human-based feature labeling paper_content: In this paper we present an extension to odometry based SLAM for pedestrians that incorporates human-reported measurements of recognizable features, or “places” in an environment. The method which we have called “PlaceSLAM” builds on the Simultaneous Localization and Mapping (SLAM) principle in that a spatial representation of such places can be built up during the localization process. We see an important application to be in mapping of new areas by volunteering pedestrians themselves, in particular to improve the accuracy of “FootSLAM” which is based on human step estimation (odometry). We present a description of various flavors of PlaceSLAM and derive a Bayesian formulation and particle filtering implementation for the most general variant. In particular we distinguish between two important cases which depend on whether the pedestrian is required to report a place's identifier or not.
Our results based on experimental data show that our approach can significantly improve the accuracy and stability of FootSLAM and this with very little additional complexity. After mapping has been performed, users of such improved FootSLAM maps need not report places themselves. --- paper_title: The horus wlan location determination system paper_content: This report presents a general analysis for the performance of WLAN location determination systems. In particular, we present an analytical method for calculating the average distance error and probability of error of WLAN location determination systems. These expressions are obtained with no assumptions regarding the distribution of signal strength or the probability of the user being at a specific location, which is usually taken to be a uniform distribution over all the possible locations in current WLAN location determination systems. We use these expressions to find the optimal strategy to estimate the user location and to prove formally that probabilistic techniques give more accuracy than deterministic techniques, which has been taken for granted without proof for a long time. The analytical results are validated through simulation experiments. We also study the effect of the assumption that the user position follows a uniform distribution over the set of possible locations on the accuracy of WLAN location determination systems. The results show that knowing the probability distribution of the user position can reduce the number of access points required to obtain a given accuracy. However, with a high density of access points, the performance of a WLAN location determination system is consistent under different probability distributions for the user position. --- paper_title: Place Lab: Device Positioning Using Radio Beacons in the Wild paper_content: Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons' positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-30 meter median accuracy with nearly 100% coverage measured by availability in people's daily lives. --- paper_title: Crowdsourced indoor localization for diverse devices through radiomap fusion paper_content: Crowdsourcing is an emerging field that allows to tackle difficult problems by soliciting contributions from common people, rather than trained professionals. In the post-pc era, where smartphones dominate the personal computing market offering both constant mobility and large amounts of spatiotemporal sensory data, crowdsourcing is becoming increasingly popular. In this context, crowdsourcing stands as the only viable solution for collecting the large amount of location-related network data required to support location-based services, e.g., the signal strength radiomap of a fingerprinting localization system inside a multi-floor building. However, this benefit does not come for free, because crowdsourcing also poses new challenges in radiomap creation. 
We focus on the problem of device diversity that occurs frequently as the contributors usually carry heterogeneous mobile devices that report network measurements very differently. We demonstrate with simulations and experimental results that the traditional signal strength values from the surrounding network infrastructure are not suitable for crowdsourcing the radiomap. Moreover, we present an alternative approach, based on signal strength differences, that is far more robust to device variations and maintains the localization accuracy regardless of the number of contributing devices. --- paper_title: Hybrid maximum depth-kNN method for real time node tracking using multi-sensor data paper_content: In this paper, a hybrid maximum depth - k Nearest Neighbour (hybrid MD-kNN) method for real-time sensor node tracking and localization is proposed. The method combines two individual location hypothesis functions obtained from generalized maximum depth and generalized kNN methods. The individual location hypothesis functions are themselves obtained from multiple sensors measuring visible light, humidity, temperature, acoustics, and link quality. The hybrid MD-kNN method therefore combines the lower computational cost of the maximum depth method and the outlier rejection ability of the kNN method to realize a robust real-time tracking method. Additionally, this method does not require the assumption of an underlying distribution under non-line-of-sight (NLOS) conditions. An additional novelty of this method is the utilization of multivariate data obtained from multiple sensors, which has hitherto not been used. The affine invariance property of the hybrid MD-kNN method is proved and its robustness is illustrated in the context of node localization. Experimental results on the Intel Berkeley research data set indicate reasonable improvements over conventional methods available in the literature. --- paper_title: Indoor location fingerprinting with heterogeneous clients paper_content: Heterogeneous wireless clients measure signal strength differently. This is a fundamental problem for indoor location fingerprinting, and it has a high impact on the positioning accuracy. Mapping-based solutions have been presented that require manual and error-prone calibration for each new client. This article presents hyperbolic location fingerprinting, which records fingerprints as signal strength ratios between pairs of base stations instead of absolute signal strength values. This article also presents an automatic mapping-based method that avoids calibration by learning from online measurements. The evaluation shows that the solutions can address the signal strength heterogeneity problem without requiring extra manual calibration. --- paper_title: RADAR: an in-building RF-based user location and tracking system paper_content: The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy.
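The fingerprinting schemes above (RADAR, hyperbolic fingerprinting, and the difference-based crowdsourced radiomap) all reduce to matching an observed RSS vector against a survey database. The Python sketch below shows a basic k-nearest-neighbour match in signal space; the optional use of pairwise RSS differences instead of absolute values, which mitigates device heterogeneity, is an illustration of the general idea rather than any specific paper's implementation, and all names are assumptions.

import numpy as np

def knn_fingerprint_locate(radiomap_rss, radiomap_xy, observed_rss, k=3, use_differences=False):
    """radiomap_rss: (n_points, n_aps) surveyed RSS values (dBm);
    radiomap_xy: (n_points, 2) surveyed coordinates; observed_rss: (n_aps,) online measurement."""
    db = np.asarray(radiomap_rss, dtype=float)
    xy = np.asarray(radiomap_xy, dtype=float)
    obs = np.asarray(observed_rss, dtype=float)
    if use_differences:
        # Pairwise AP differences cancel a constant per-device RSS offset,
        # which is the intuition behind hyperbolic / difference-based fingerprints.
        db = (db[:, :, None] - db[:, None, :]).reshape(len(db), -1)
        obs = (obs[:, None] - obs[None, :]).reshape(-1)
    # Find the k nearest fingerprints in signal space and average them in physical space.
    nearest = np.argsort(np.linalg.norm(db - obs, axis=1))[:k]
    return xy[nearest].mean(axis=0)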
--- paper_title: Localization algorithms of Wireless Sensor Networks: a survey paper_content: In Wireless Sensor Networks (WSNs), localization is one of the most important technologies since it plays a critical role in many applications, e.g., target tracking. If the users cannot obtain the accurate location information, the related applications cannot be accomplished. The main idea in most localization methods is that some deployed nodes (landmarks) with known coordinates (e.g., GPS-equipped nodes) transmit beacons with their coordinates in order to help other nodes localize themselves. In general, the main localization algorithms are classified into two categories: range-based and range-free. In this paper, we reclassify the localization algorithms with a new perspective based on the mobility state of landmarks and unknown nodes, and present a detailed analysis of the representative localization algorithms. Moreover, we compare the existing localization algorithms and analyze the future research directions for the localization algorithms in WSNs. --- paper_title: A Particle-Filtering Approach for Vehicular Tracking Adaptive to Occlusions paper_content: In this paper, we propose a new particle-filtering approach for handling partial and total occlusions in vehicular tracking situations. Our proposed method, which is named adaptive particle filter (APF), uses two different operation modes. When the tracked vehicle is not occluded, the APF uses a normal probability density function (pdf) to generate the new set of particles. Otherwise, when the tracked vehicle is under occlusion, the APF generates the new set of particles using a Normal-Rayleigh pdf. Our approach was designed to detect when a total occlusion starts and ends and to resume vehicle tracking after disocclusions. We have tested our APF approach in a number of traffic surveillance video sequences with encouraging results. Our proposed approach tends to be more accurate than comparable methods in the literature, and at the same time, it tends to be more robust to target occlusions. --- paper_title: Survey of Target Tracking Protocols Using Wireless Sensor Network paper_content: Target tracking is one of the non trivial applications of wireless sensor network which is set up in the areas of field surveillance, habitat monitoring, indoor buildings, and intruder tracking. Various approaches have been investigated for tracking the targets, considering diverse metrics like scalability, overheads, energy consumption and target tracking accuracy. This paper for the first time contributes a survey of target tracking protocols for sensor networks and presents their classification in a precise manner. The five main categories explored in this paper are, hierarchical, tree-based, prediction- based, mobicast message-based tracking and hybrid methods. To be more precise, the survey promotes overview of recent research literature along with their performance comparison and evaluation based on simulation with real data. Certainly this task is challenging and not straight forward due to differences in estimations, parameters and performance metrics, therefore the paper concludes with open research challenges. --- paper_title: Dynamic clustering for acoustic target tracking in wireless sensor networks paper_content: We devise and evaluate a fully decentralized, light-weight, dynamic clustering algorithm for target tracking. 
Instead of assuming the same role for all the sensors, we envision a hierarchical sensor network that is composed of 1) a static backbone of sparsely placed high-capability sensors which assume the role of a cluster head (CH) upon being triggered by certain signal events and 2) moderately to densely populated low-end sensors whose function is to provide sensor information to CHs upon request. A cluster is formed and a CH becomes active when the acoustic signal strength detected by the CH exceeds a predetermined threshold. The active CH then broadcasts an information solicitation packet, asking sensors in its vicinity to join the cluster and provide their sensing information. We address and devise solution approaches (with the use of Voronoi diagrams) to realize dynamic clustering: (I1) how CHs operate with one another to ensure that only one CH (preferably the CH that is closest to the target) is active with high probability, (I2) when the active CH solicits sensor information, instead of having all the sensors in its vicinity reply, only a sufficient number of sensors respond with nonredundant, essential information to determine the target location, and (I3) both the packets that sensors send to their CHs and packets that CHs report to subscribers do not incur significant collision. Through both probabilistic analysis and ns-2 simulation, we show that, with the use of Voronoi diagrams, the CH that is usually closest to the target is (implicitly) selected as the leader, and that the proposed dynamic clustering algorithm effectively eliminates contention among sensors and renders more accurate estimates of target locations as a result of better quality data collected and less collision incurred. --- paper_title: A survey: localization and tracking mobile targets through wireless sensors network paper_content: Wireless sensor network applications have been deployed widely. Sensor networks involve sensor nodes which are very small in size. They are low in cost and have limited battery life. Sensor nodes are capable of solving a variety of collaborative problems, such as monitoring and surveillance. One of the critical tasks in wireless sensor networks is localizing and tracking a sensor or mobile node. In this paper we will discuss the various location system techniques and categorize these techniques based on the communication between nodes into centralized and decentralized localization techniques. The tracking techniques are categorized into four main types. Each type will be compared and discussed in detail. We will suggest ways of implementing the techniques and finally carry out an evaluation. --- paper_title: Greedy node localization in mobile sensor networks using Doppler frequency shift paper_content: The principle of Doppler frequency shift can be utilized for node localization when mobile nodes are introduced into a sensor network. In this paper, a greedy method for mobile node localization using the principle of Doppler frequency shift is presented. A localization framework which accounts for multiple nodes and multiple reception paths is first developed. Subsequently, a greedy approach to anchor path guidance for maximal node localization is proposed. The method is advantageous both in terms of energy consumption and the number of nodes localized. Experiments on mobile node localization are conducted to illustrate the effectiveness of the proposed method.
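The greedy Doppler-based localization above rests on the basic Doppler relation between the observed frequency shift and the radial velocity of a mobile node relative to an anchor. The following snippet is only a sketch of that underlying relation, not the paper's full multi-node, multi-path framework; the carrier frequency, propagation speed and function name are illustrative assumptions.

def radial_speed_from_doppler(f_observed, f_carrier, wave_speed=343.0):
    """Non-relativistic Doppler approximation: f_obs ~= f_c * (1 + v_r / c), so the
    radial speed of the transmitter towards the receiver is v_r = c * (f_obs - f_c) / f_c.
    wave_speed defaults to the speed of sound in air (m/s); use ~3e8 for an RF carrier."""
    return wave_speed * (f_observed - f_carrier) / f_carrier

# Example: a 40 kHz acoustic beacon observed at 40.05 kHz implies the node is
# approaching the anchor at roughly 0.43 m/s.
print(radial_speed_from_doppler(40050.0, 40000.0))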
--- paper_title: Distributed particle filter with GMM approximation for multiple targets localization and tracking in wireless sensor network paper_content: Two novel distributed particle filters with Gaussian mixture approximation are proposed to localize and track multiple moving targets in a wireless sensor network. The distributed particle filters run on a set of uncorrelated sensor cliques that are dynamically organized based on moving target trajectories. These two algorithms differ in how the distributed computation is performed. In the first algorithm, partial results are updated at each sensor clique sequentially based on partial results forwarded from a neighboring clique and local observations. In the second algorithm, all individual cliques compute partial estimates based only on local observations in parallel, and forward their estimates to a fusion center to obtain the final output. In order to conserve bandwidth and power, the local sufficient statistics (belief) are approximated by a low dimensional Gaussian mixture model (GMM) before being propagated among sensor cliques. We further prove that the posterior distribution estimated by the distributed particle filter converges almost surely to the posterior distribution estimated from a centralized Bayesian formulation. Moreover, a data-adaptive application layer communication protocol is proposed to facilitate sensor self-organization and collaboration. Simulation results show that the proposed DPF with GMM approximation algorithms provide robust localization and tracking performance at a much reduced communication overhead. --- paper_title: Classification of Object Tracking Techniques in Wireless Sensor Networks paper_content: Object tracking is one of the killer applications for wireless sensor networks (WSN) in which the network of wireless sensors is assigned the task of tracking a particular object. The network employs object tracking techniques to continuously report the position of the object in terms of Cartesian coordinates to a sink node or to a central base station. A family tree of object tracking techniques has been prepared. In this paper we have summarized the object tracking techniques available so far in wireless sensor networks. --- paper_title: Monte Carlo localization for mobile wireless sensor networks paper_content: Localization is crucial to many applications in wireless sensor networks. In this article, we propose a range-free anchor-based localization algorithm for mobile wireless sensor networks that builds upon the Monte Carlo localization algorithm. We concentrate on improving the localization accuracy and efficiency by making better use of the information a sensor node gathers and by drawing the necessary location samples faster. To do so, we constrain the area from which samples are drawn by building a box that covers the region where anchors' radio ranges overlap. This box is the region of the deployment area where the sensor node is localized. Simulation results show that localization accuracy is improved by a minimum of 4% and by a maximum of 73% (average 30%), for varying node speeds when considering nodes with knowledge of at least three anchors. The coverage is also strongly affected by speed and its improvement ranges from 3% to 55% (average 22%). Finally, the processing time is reduced by 93% for a similar localization accuracy. --- paper_title: Distributed tracking in wireless ad hoc sensor networks paper_content: Target tracking is an important application for wireless ad hoc sensor networks.
Because of the energy and communication constraints imposed by the size of the sensors, the processing has to be distributed over the sensor nodes. This paper discusses issues associated with distributed multiple target tracking for ad hoc sensor networks and examines the applicability of tracking algorithms developed for traditional networks of large sensors. When data association is not an issue, the standard predict/update structure in single-target tracking can be used to assign individual tracks to the sensor nodes based on their locations. Track ownership will have to be carefully migrated, using for example information driven sensor tasking, to minimize the need for communication when targets move. When data association is needed in tracking multiple interacting targets, clusters of tracks should be assigned to groups of collaborating nodes. Some recent examples of this type of distributed processing are given. --- paper_title: A Hybrid Cluster-Based Target Tracking Protocol for Wireless Sensor Networks paper_content: Target tracking is a typical and important application of wireless sensor networks (WSNs). In consideration of network scalability and energy efficiency for target tracking in large-scale WSNs, organizing the WSNs into clusters has been employed as an effective solution. However, tracking a moving target in cluster-based WSNs suffers from a boundary problem when the target moves across or along the boundaries of clusters, as the static cluster membership prevents sensors in different clusters from sharing information. In this paper, we propose a novel mobility management protocol, called hybrid cluster-based target tracking (HCTT), which integrates on-demand dynamic clustering into a cluster-based WSN for target tracking. By constructing on-demand dynamic clusters at boundary regions, nodes from different static clusters that detect the target can temporarily share information, and the tracking task can be handed over smoothly from one static cluster to another. As the target moves, static clusters and on-demand dynamic clusters alternately manage the target tracking task. Simulation results show that the proposed protocol performs better in tracking the moving target when compared with other typical target tracking protocols. --- paper_title: A survey of secure target tracking algorithms for wireless sensor networks paper_content: Tracking a target as it moves in a monitored area has become an increasingly important application for wireless sensor networks (WSNs). Target tracking algorithms continuously report the position of the target in terms of its coordinates to a sink node or a central base station. Due to the rapid development of WSNs, there is no standardized classification of target tracking algorithms. Some of those classifications consider scalability, energy consumption and accuracy; other classifications consider network topology, position of the target, mobility of the target/object, etc. In this paper, we have considered target tracking algorithms of WSNs from the security point of view. We have compared and discussed the problem of security in the most important target tracking algorithms for WSNs. To the best of our knowledge, this is the first study that analyses the target tracking algorithms in terms of security.
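Several of the tracking approaches referenced above (sequential Monte Carlo localization and the distributed particle filters) share the same bootstrap particle-filter core: predict particles with a motion model, weight them by the likelihood of the current measurements, and resample. The sketch below shows that core for a single target observed through noisy range measurements to known anchors; the random-walk motion model and noise parameters are illustrative assumptions, not values from any of the cited papers.

import numpy as np

def particle_filter_step(particles, weights, anchors, ranges, motion_std=0.5, range_std=1.0):
    """One bootstrap filter step. particles: (N, 2) position hypotheses; weights: (N,);
    anchors: (m, 2) known anchor positions; ranges: (m,) measured distances."""
    particles = np.asarray(particles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the measured ranges given each particle.
    predicted = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    log_lik = -0.5 * np.sum(((predicted - ranges) / range_std) ** 2, axis=1)
    weights = weights * np.exp(log_lik - log_lik.max())
    weights = weights / weights.sum()
    # Resample to avoid degeneracy, then return the posterior mean as the estimate.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
    return particles, weights, particles.mean(axis=0)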
--- paper_title: A protocol for tracking mobile targets using sensor networks paper_content: With recent advances in device fabrication technology, economical deployment of large-scale sensor networks capable of pervasive monitoring and control of physical systems has become possible. Scalability, low overhead and distributed functionality are some of the key requirements for any protocol designed for such large-scale sensor networks. In this paper, we present a protocol, Distributed Predictive Tracking, for one of the most likely applications for sensor networks: tracking moving targets. The protocol uses a clustering based approach for scalability and a prediction based tracking mechanism to provide a distributed and energy efficient solution. The protocol is robust against node or prediction failures which may result in temporary loss of the target and recovers from such scenarios quickly and with very little additional energy use. Using simulations we show that the proposed architecture is able to accurately track targets with random movement patterns over a wide range of target speeds. --- paper_title: Spatiotemporal multicast in sensor networks paper_content: Sensor networks often involve the monitoring of mobile phenomena. We believe this task can be facilitated by a spatiotemporal multicast protocol which we call "mobicast". Mobicast is a novel spatiotemporal multicast protocol that distributes a message to nodes in a delivery zone that evolves over time in some predictable manner. A key advantage of mobicast lies in its ability to provide reliable and just-in-time message delivery to mobile delivery zones on top of a random network topology. Mobicast can in theory achieve good spatiotemporal delivery guarantees by limiting communication to a mobile forwarding zone whose size is determined by the global worst-case value associated with a compactness metric defined over the geometry of the network (under a reasonable set of assumptions). In this work, we first studied the compactness properties of sensor networks with uniform distribution. The results of this study motivate three approaches for improving the efficiency of spatiotemporal multicast in such networks. First, spatiotemporal multicast protocols can exploit the fundamental tradeoff between delivery guarantees and communication overhead in spatiotemporal multicast. Our results suggest that in such networks, a mobicast protocol can achieve relatively high savings in message forwarding overhead by slightly relaxing the delivery guarantee, e.g., by optimistically choosing a forwarding zone that is smaller than the one needed for a 100% delivery guarantee. Second, spatiotemporal multicast may exploit local compactness values for higher efficiency for networks with a non-uniform spatial distribution of compactness. Third, for random uniformly distributed sensor network deployment, one may choose a deployment density to best support spatiotemporal communication. We also explored all these directions via simulation and results are presented in this paper. --- paper_title: Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization paper_content: Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem.
Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m. --- paper_title: An Adaptive Particle Filter for Indoor Robot Localization paper_content: This paper develops an adaptive particle filter for indoor mobile robot localization, in which two different resampling operations are implemented to adjust the number of particles for fast and reliable computation. Since the weight updating is usually much more computationally intensive than the prediction, the first resampling-procedure so-called partial resampling is adopted before the prediction step, which duplicates the large weighted particles while reserves the rest obtaining better estimation accuracy and robustness. The second resampling, adopted before the updating step, decreases the number of particles through particle merging to save updating computation. In addition to speeding up the filter, sample degeneracy and sample impoverishment are counteracted. Simulations on a typical 1D model and for mobile robot localization are presented to demonstrate the validity of our approach. --- paper_title: Performance analysis based on least squares and extended Kalman filter for localization of static target in wireless sensor networks paper_content: Abstract Wireless sensor network localization is an essential problem that has attracted increasing attention due to wide demands such as in-door navigation, autonomous vehicle, intrusion detection, and so on. With the a priori knowledge of the positions of sensor nodes and their measurements to targets in the wireless sensor networks (WSNs), i.e. posterior knowledge, such as distance and angle measurements, it is possible to estimate the position of targets through different algorithms. In this contribution, two commonly-used approaches based on least-squares and Kalman filter are described and analyzed for localization of one static target in the WSNs with distance, angle, or both distance and angle measurements, respectively. Noting that the measurements of these sensors are generally noisy of certain degree, it is crucial and interesting to analyze how the accuracy of localization is affected by the sensor errors and the sensor network, which may help to provide guideline on choosing the specification of sensors and designing the sensor network. In addition, the problem of optimal sensor placement is also addressed to minimize the localization error. 
To this end, theoretical analyses have been made for the different methods based on three typical types of measurement noise: bounded noise, uniformly distributed noise, and Gaussian white noise. Simulation results illustrate the performance comparison of these different methods, the consistency between the theoretical analysis and the simulations, and the optimal sensor geometry, which may be meaningful and provide guidance in practice. --- paper_title: Dynamic clustering for object tracking in wireless sensor networks paper_content: Object tracking is an important feature of the ubiquitous society and also a killer application of wireless sensor networks. Nowadays, there is much research on object tracking in wireless sensor networks in practice; however, most approaches cannot effectively deal with the trade-off between missing rate and energy efficiency. In this paper, we propose a dynamic clustering mechanism for object tracking in wireless sensor networks. By forming clusters dynamically according to the target's moving route, the proposed method can not only decrease the missing rate but also decrease the energy consumption by reducing the number of nodes that participate in tracking and minimizing the communication cost, thus enhancing the lifetime of the whole sensor network. Simulation results show that our proposed method achieves lower energy consumption and a lower missing rate. --- paper_title: Tracking and Predicting Moving Targets in Hierarchical Sensor Networks paper_content: Target tracking is an important application of newly developed wireless sensor networks (WSN). Much work has been done on this topic using a flat network architecture. We propose a scheme, namely hierarchical prediction strategy (HPS), for target prediction in hierarchical sensor networks. The network is divided into clusters, which are composed of one cluster-head and many normal nodes, by Voronoi division. For an existing target, cluster-heads only selectively activate nearby sensor nodes to perform tracking. Moreover, a recursive least squares technique is used to predict the target trajectory and help activate next-round sensor nodes. Extended simulations show the properties of the proposed network architecture and the efficiency of the prediction scheme. --- paper_title: Voronoi-Based Sensor Network Engineering for Target Tracking Using Wireless Sensor Networks paper_content: Recent advances in integrated electronic devices have motivated the use of wireless sensor networks in many applications including target surveillance and tracking. A number of sensor nodes are scattered within a sensitive region to detect the presence of intruders and forward subsequent events to the analysis center(s). Obviously, the sensor deployment should guarantee an optimal event detection rate. This paper proposes a tracking framework based on Voronoi tessellations. Two mobility models are proposed to control the coverage degree according to target presence. The objective is to set a non-uniform coverage within the monitored zone to allow the target to be detected by multiple sensor nodes. Moreover, we introduce an algorithm to discover redundant nodes (which do not provide additional information about target position). This algorithm is shown to be effective in reducing the energy consumption using an activity scheduling approach. --- paper_title: Localization for mobile sensor networks paper_content: Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node.
Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions. --- paper_title: A Survey on Target Tracking Techniques in Wireless Sensor Networks paper_content: Target Tracking as it moves through a sensor network has become an increasingly important application in Wireless Sensor Networks. This paper examines some of the target tracking techniques in use today. An analysis of each technique is presented along with it advantages, problems and possible improvements. There are seven main categories explored in this paper. The survey promotes overview of recent research literature along with their performance comparison and evaluation. --- paper_title: Distributed tracking and classification of targets with sensor networks paper_content: Localization and tracking of target is an important application of the wireless sensor network. In this paper, we propose to apply a classification algorithm to sensor network nodes aimed at target tracking. Localization and tracking is based on a distributed method that enables us to simplify the signal processing and makes a more robust system. Also we address to group sensors, each group or cluster is led by one of them (leader node/sensor). This sensor is responsible for processing all the information about the target and estimate its position. Simulation results show that this classification algorithm reduces the estimate error in tracking targets. --- paper_title: Prediction-based strategies for energy saving in object tracking sensor networks paper_content: In order to fully realize the potential of sensor networks, energy awareness should be incorporated into every stage of the network design and operation. In this paper, we address the energy management issue in a sensor network killer application - object tracking sensor networks (OTSNs). Based on the fact that the movements of the tracked objects are sometimes predictable, we propose a prediction-based energy saving scheme, called PES, to reduce the energy consumption for object tracking under acceptable conditions. We compare PES against the basic schemes we proposed in the paper to explore the conditions under which PES is most desired. We also test the effect of some parameters related to the system workload, object moving behavior and sensing operations on PES through extensive simulation. Our results show that PES can save significant energy under various conditions. --- paper_title: Dynamic clustering for acoustic target tracking in wireless sensor networks paper_content: We devise and evaluate a fully decentralized, light-weight, dynamic clustering algorithm for target tracking. 
Instead of assuming the same role for all the sensors, we envision a hierarchical sensor network that is composed of 1) a static backbone of sparsely placed high-capability sensors which assume the role of a cluster head (CH) when triggered by certain signal events and 2) moderately to densely populated low-end sensors whose function is to provide sensor information to CHs upon request. A cluster is formed and a CH becomes active when the acoustic signal strength detected by the CH exceeds a predetermined threshold. The active CH then broadcasts an information solicitation packet, asking sensors in its vicinity to join the cluster and provide their sensing information. We address and devise solution approaches (with the use of the Voronoi diagram) to realize dynamic clustering: (I1) how CHs operate with one another to ensure that only one CH (preferably the CH that is closest to the target) is active with high probability, (I2) when the active CH solicits sensor information, instead of having all the sensors in its vicinity reply, only a sufficient number of sensors respond with nonredundant, essential information to determine the target location, and (I3) both the packets that sensors send to their CHs and packets that CHs report to subscribers do not incur significant collision. Through both probabilistic analysis and ns-2 simulation, we show that, with the use of the Voronoi diagram, the CH that is usually closest to the target is (implicitly) selected as the leader and that the proposed dynamic clustering algorithm effectively eliminates contention among sensors and renders more accurate estimates of target locations as a result of better quality data collected and less collision incurred. --- paper_title: Performance evaluation of selective and adaptive heads clustering algorithms over wireless sensor networks paper_content: Target tracking in wireless sensor networks can be considered as a milestone of a wide range of applications to permanently report, through network sensors, the positions of a mobile target to the base station during its move across a certain path. While tracking a mobile target, many open challenges arise and need to be addressed, mainly including energy efficiency and tracking accuracy. In this paper, we propose three algorithms for tracking a mobile target in wireless sensor networks utilizing a cluster-based architecture, namely adaptive head, static head, and selective static head. Our goal is to achieve a promising tracking accuracy and energy efficiency by choosing the candidate sensor nodes nearby the target to participate in the tracking process while preserving the others in sleep state. Through Matlab simulation, we investigate the performance of the proposed algorithms in terms of energy consumption, tracking error, sensor density, as well as target speed. The results show that the adaptive head is the most efficient algorithm in terms of energy consumption, while the static and selective static head algorithms are preferred as far as the tracking error is concerned, especially when the target moves rapidly. Furthermore, the effectiveness of our proposed algorithms is verified through comparing their results with those obtained from previous algorithms. --- paper_title: Prediction-based cluster management for target tracking in wireless sensor networks paper_content: The key impediments to a successful wireless sensor network (WSN) application are the energy and the longevity constraints of sensor nodes.
Therefore, two signal processing oriented cluster management strategies, the proactive and the reactive cluster management, are proposed to efficiently deal with these constraints. The former strategy is designed for heterogeneous WSNs, where sensors are organized in a static clustering architecture. A non-myopic cluster activation rule is realized to reduce the number of hand-off operations between clusters, while maintaining desired estimation accuracy. The proactive strategy minimizes the hardware expenditure and the total energy consumption. On the other hand, the main concern of the reactive strategy is to maximize the network longevity of homogeneous WSNs. A Dijkstra-like algorithm is proposed to dynamically form active cluster based on the relation between the predictive target distribution and the candidate sensors, considering both the energy efficiency and the data relevance. By evenly distributing the energy expenditure over the whole network, the objective of maximizing the network longevity is achieved. The simulations evaluate and compare the two proposed strategies in terms of tracking accuracy, energy consumption and execution time. Copyright © 2010 John Wiley & Sons, Ltd. --- paper_title: CODA: A Continuous Object Detection and Tracking Algorithm for Wireless Ad Hoc Sensor Networks paper_content: Wireless sensor networks make possible many new applications in a wide range of application domains. One of the primary applications of such networks is the detection and tracking of continuously moving objects, such as wild fires, biochemical materials, and so forth. This study supports such applications by developing a continuous object detection and tracking algorithm, designated as CODA, based on a hybrid static/dynamic clustering technique. The CODA mechanism enables each sensor node to detect and track the moving boundaries of objects in the sensing field. The numerical results obtained using a Qualnet simulator confirm the effectiveness and robustness of the proposed approach. --- paper_title: Target localization based on energy considerations in distributed sensor networks paper_content: Wireless distributed sensor networks (DSNs) are important for a number of strategic applications such as coordinated target detection, surveillance, and localization. Energy is a critical resource in wireless sensor networks and system lifetime needs to be prolonged through the use of energy-conscious sensing strategies during system operation. We propose an energy-aware target detection and localization strategy for cluster-based wireless sensor networks. The proposed method is based on an a posteriori algorithm with a two-step communication protocol between the cluster head and the sensors within the cluster. Based on a limited amount of data received from the sensor nodes, the cluster head executes a localization procedure to determine the subset of sensors that must be queried for detailed target information. This approach reduces both energy consumption and communication bandwidth requirements, and prolongs the lifetime of the wireless sensor network. Simulation results show that a large amount of energy is saved during target localization using this strategy. --- paper_title: A dynamic tracking mechanism for mobile target in wireless sensor networks paper_content: How to save energy for target tracking in WSNs is an important issue. The basic idea of energy-saving method is using sleep mechanism. 
However, if too many nodes enter the sleep state, targets may easily be lost by the sleeping nodes, and recovering these lost targets consumes considerable energy. Thus, the accuracy of target tracing must be considered while saving energy. This proposal offers a scheme for tracing a moving target. It uses a clustering scheme and algorithm that apply different tracking strengths and energy expenditures according to the probability of losing the target. This probability is estimated from the target's acceleration and determines how many nodes are woken up. Lowering the target-loss rate yields both energy savings and improved performance. --- paper_title: Information-driven dynamic sensor collaboration paper_content: This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications. --- paper_title: RARE: An Energy-Efficient Target Tracking Protocol for Wireless Sensor Networks paper_content: Energy efficiency for target tracking in wireless sensor networks is very important and can be improved by reducing the number of nodes involved in communications. We propose two algorithms, RARE-area and RARE-node, to reduce the number of nodes participating in tracking and so increase energy efficiency. The RARE-area algorithm ensures that only nodes that receive a given quality of data participate in tracking, and the RARE-node algorithm ensures that any nodes with redundant information do not participate in tracking. Simulation studies show significant energy savings are obtained with implementation of either the RARE-area algorithm alone or both RARE-area and RARE-node algorithms together. --- paper_title: Distributed event localization and tracking with wireless sensors paper_content: In this paper we present the distributed event localization and tracking algorithm DELTA that solely depends on light measurements. Based on this information and the positions of the sensors, DELTA is able to track a moving person equipped with a flashlight by dynamically building groups and electing well located nodes as group leaders. Moreover, DELTA supports object localization. The gathered data is sent to a monitoring entity in a fixed network which can apply pattern recognition techniques to determine the legitimacy of the moving person. DELTA enables object tracking with minimal constraints on both sensor hardware and the moving object. We show the feasibility of the algorithm running on the limited hardware of an existing sensor platform. --- paper_title: Distributed tracking and classification of targets with sensor networks paper_content: Localization and tracking of a target is an important application of the wireless sensor network. In this paper, we propose to apply a classification algorithm to sensor network nodes aimed at target tracking. Localization and tracking is based on a distributed method that enables us to simplify the signal processing and makes a more robust system. Also, we address how to group sensors; each group or cluster is led by one of them (leader node/sensor).
This sensor is responsible for processing all the information about the target and estimate its position. Simulation results show that this classification algorithm reduces the estimate error in tracking targets. --- paper_title: Survey of Target Tracking Protocols Using Wireless Sensor Network paper_content: Target tracking is one of the non trivial applications of wireless sensor network which is set up in the areas of field surveillance, habitat monitoring, indoor buildings, and intruder tracking. Various approaches have been investigated for tracking the targets, considering diverse metrics like scalability, overheads, energy consumption and target tracking accuracy. This paper for the first time contributes a survey of target tracking protocols for sensor networks and presents their classification in a precise manner. The five main categories explored in this paper are, hierarchical, tree-based, prediction- based, mobicast message-based tracking and hybrid methods. To be more precise, the survey promotes overview of recent research literature along with their performance comparison and evaluation based on simulation with real data. Certainly this task is challenging and not straight forward due to differences in estimations, parameters and performance metrics, therefore the paper concludes with open research challenges. --- paper_title: Distributed tracking in wireless ad hoc sensor networks paper_content: Abstract : Target tracking is an important application for wireless ad hoc sensor networks. Because of the energy and communication constraints imposed by the size of the sensors, the processing has to be distributed over the sensor nodes. This paper discusses issues associated with distributed multiple target tracking for ad hoc sensor networks and examines the applicability of tracking algorithms developed for traditional networks of large sensors. when data association is not an issue, the standard pre- predict/update structure in single target tracking can be used to assign individual tracks to the sensor nodes based on their locations. Track ownership will have to be carefully migrated, using for example information driven sensor tasking, to minimize the need for communication when targets move. when data association is needed in tracking multiple interacting targets, clusters of tracks should be assigned to groups of collaborating nodes. Some recent examples of this type of distributed processing are given. Keywords: Wireless ad hoc sensor networks, multiple target tracking, distributed tracking --- paper_title: Dual prediction-based reporting for object tracking sensor networks paper_content: As one of the wireless sensor network killer applications, object tracking sensor networks (OTSNs) disclose many opportunities for energy-aware system design and implementations. We investigate prediction-based approaches for performing energy efficient reporting in OTSNs. We propose a dual prediction-based reporting mechanism (called DPR), in which both sensor nodes and the base station predict the future movements of the mobile objects. Transmissions of sensor readings are avoided as long as the predictions are consistent with the real object movements. DPR achieves energy efficiency by intelligently trading off multihop/long-range transmissions of sensor readings between sensor nodes and the base station with one-hop/short-range communications of object movement history among neighbor sensor nodes. 
We explore the impact of several system parameters and moving behavior of tracked objects on DPR performance, and also study two major components of DPR: prediction models and location models through simulations. Our experimental results show that DPR is able to achieve considerable energy savings under various conditions and outperforms existing reporting mechanisms. --- paper_title: Energy-aware location error handling for object tracking applications in wireless sensor networks paper_content: Developing an efficient object tracking system has been an interesting challenge in wireless sensor network communities. Due to the severe resource constraints of sensor hardware, the accuracy of the tracking system could be compromised by the processing power or energy consumption. A sophisticated tracking algorithm is therefore not applicable to sensor applications, and any tracking system should explicitly consider the energy issue. In this paper, we present energy-aware location error handling techniques, namely error avoidance and error correction, to prevent and handle errors efficiently. Real situations such as an unexpected change in the mobile event's direction, failure of event detection, or transmission failure of an error message are considered in the design of the proposed mechanisms. The prototype system is built with real sensor hardware, and the functionality is validated in real experiments. The experimental evaluation, together with simulation analysis, shows that the proposed mechanism saves energy while achieving good tracking accuracy. --- paper_title: Optimal tracking interval for predictive tracking in wireless sensor network paper_content: An important application of wireless sensor networks is tracking moving objects. Prediction-based techniques have been proposed to reduce the power consumption in wireless sensor networks by limiting the sensor active time. This paper proposes a quantitative method to optimize the power efficiency by analyzing the effect of prediction on the energy consumption in such networks. To our best knowledge, our efforts are the first attempt made to calculate the optimal tracking interval for a given predictive tracking algorithm. Based on this method, the lifetime and power efficiency of a sensor network can be effectively improved. --- paper_title: A protocol for tracking mobile targets using sensor networks paper_content: With recent advances in device fabrication technology, economical deployment of large scale sensor networks, capable of pervasive monitoring and control of physical systems have become possible. Scalability, low overhead anti distributed functionality are some of the key requirements for any protocol designed for such large scale sensor networks. In this paper, we present a protocol, Distributed Predictive Tracking, for one of the most likely applications for sensor networks: tracking moving targets. The protocol uses a clustering based approach for scalability and a prediction based tracking mechanism to provide a distributed and energy efficient solution. The protocol is robust against node or prediction failures which may result in temporary loss of the target and recovers from such scenarios quickly and with very little additional energy use. Using simulations we show that the proposed architecture is able to accurately track targets with random movement patterns with accuracy over a wide range of target speeds. 
--- paper_title: On localized prediction for power efficient object tracking in sensor networks paper_content: Energy is one of the most critical constraints for sensor network applications. In this paper, we exploit the localized prediction paradigm for power-efficient object tracking sensor network. Localized prediction consists of a localized network architecture and a prediction mechanism called dual prediction, which achieve power savings by allowing most of the sensor nodes stay in sleep mode and by reducing the amount of long-range transmissions. Performance evaluation, based on mathematical analysis, shows that localized prediction can significantly reduce the power consumption in object tracking sensor networks. --- paper_title: Prediction-based strategies for energy saving in object tracking sensor networks paper_content: In order to fully realize the potential of sensor networks, energy awareness should be incorporated into every stage of the network design and operation. In this paper, we address the energy management issue in a sensor network killer application - object tracking sensor networks (OTSNs). Based on the fact that the movements of the tracked objects are sometimes predictable, we propose a prediction-based energy saving scheme, called PES, to reduce the energy consumption for object tracking under acceptable conditions. We compare PES against the basic schemes we proposed in the paper to explore the conditions under which PES is most desired. We also test the effect of some parameters related to the system workload, object moving behavior and sensing operations on PES through extensive simulation. Our results show that PES can save significant energy under various conditions. --- paper_title: DCTC: dynamic convoy tree-based collaboration for target tracking in sensor networks paper_content: Most existing work on sensor networks concentrates on finding efficient ways to forward data from the information source to the data centers, and not much work has been done on collecting local data and generating the data report. This paper studies this issue by proposing techniques to detect and track a mobile target. We introduce the concept of dynamic convoy tree-based collaboration, and formalize it as a multiple objective optimization problem which needs to find a convoy tree sequence with high tree coverage and low energy consumption. We propose an optimal solution which achieves 100% coverage and minimizes the energy consumption under certain ideal situations. Considering the real constraints of a sensor network, we propose several practical implementations: the conservative scheme and the prediction-based scheme for tree expansion and pruning; the sequential and the localized reconfiguration schemes for tree reconfiguration. Extensive experiments are conducted to compare the practical implementations and the optimal solution. The results show that the prediction-based scheme outperforms the conservative scheme and it can achieve similar coverage and energy consumption to the optimal solution. The experiments also show that the localized reconfiguration scheme outperforms the sequential reconfiguration scheme when the node density is high, and the trend is reversed when the node density is low. --- paper_title: Mobile object tracking in wireless sensor networks paper_content: Wireless sensor network is an emerging technology that enables remote monitoring objects and environment. This paper proposes a protocol to track a mobile object in a sensor network dynamically. 
Previous research has mostly focused on how to track an object accurately; it does not consider queries from mobile sources, nor does it report the tracking information to the user. This work concentrates on how a mobile user can query target tracks and obtain the target position effectively. The mobile user can obtain the tracked object's position without a broadcast query, and moves toward the target once its position is known. The wireless sensor network assists the user in detecting the target and keeps the target's movement information. Sensor nodes establish a face structure to track the designated target and record its tracks, and the source follows these tracks to approach the target. To chase the object quickly and maintain an accurate tracking route, the sensors cooperate to dynamically shorten the route between target and source, so a source can quickly approach a target along a shortened route. Finally, we compare the proposed scheme with three flooding-based query methods. Simulation results show that the proposed protocol has better performance than the flooding-based query methods. --- paper_title: Efficient location tracking using sensor networks paper_content: We apply sensor networks to the problem of tracking moving objects. We describe a publish-and-subscribe tracking method, called scalable tracking using networked sensors (STUN), that scales well to large numbers of sensors and moving objects by using hierarchy. We also describe a method, called drain-and-balance (DAB), for building efficient tracking hierarchies, computed from expected characteristics of the objects' movement patterns. DAB is shown to perform well by running it on 1D and 2D sensor network topologies, and comparing it to schemes which do not utilize movement information. --- paper_title: Efficient in-network moving object tracking in wireless sensor networks paper_content: The rapid progress of wireless communication and embedded microsensing MEMS technologies has made wireless sensor networks possible. In light of storage in sensors, a sensor network can be considered as a distributed database, in which one can conduct in-network data processing. An important issue of wireless sensor networks is object tracking, which typically involves two basic operations: update and query. This issue has been intensively studied in other areas, such as cellular networks. However, the in-network processing characteristic of sensor networks has posed new challenges to this issue. In this paper, we develop several tree structures for in-network object tracking which take the physical topology of the sensor network into consideration. The optimization process has two stages. The first stage tries to reduce the location update cost based on a deviation-avoidance principle and a highest-weight-first principle. The second stage further adjusts the tree obtained in the first stage to reduce the query cost. The way we model this problem allows us to analytically formulate the cost of object tracking given the update and query rates of objects. Extensive simulations are conducted, which show a significant improvement over existing solutions. --- paper_title: Voronoi-Based Sensor Network Engineering for Target Tracking Using Wireless Sensor Networks paper_content: Recent advances in integrated electronic devices motivated the use of wireless sensor networks in many applications including target surveillance and tracking.
A number of sensor nodes are scattered within a sensitive region to detect the presence of intruders and forward subsequent events to the analysis center(s). Obviously, the sensor deployment should guarantee an optimal event detection rate. This paper proposes a tracking framework based on Voronoi tessellations. Two mobility models are proposed to control the coverage degree according to target presence. The objective is to set a non-uniform coverage within the monitored zone to allow detecting the target by multiple sensor nodes. Moreover, we introduce an algorithm to discover redundant nodes (which do not provide additional information about target position). This algorithm is shown to be effective in reducing the energy consumption using an activity scheduling approach. --- paper_title: Evaluations of target tracking in wireless sensor networks paper_content: Target tracking is one of the most important applications of wireless sensor networks. Optimized computation and energy dissipation are critical requirements to maximize the lifetime of the sensor network. There exists a demand for self-organizing and routing capabilities in the sensor network. Existing methods attempting to achieve these requirements, such as the LEACH-based algorithms, however, suffer either redundancy in data and sensor node deployment, or complex computation incurred in the sensor nodes. Those drawbacks result in energy use inefficiency and/or complex computation overhead. OCO, or Optimized Communication and Organization, is an algorithm that ensures maximum accuracy of target tracking, efficient energy dissipation, and low computation overhead on the sensor nodes. Simulation evaluations of OCO are compared with other two methods under various scenarios. --- paper_title: Structures for in-network moving object tracking in wireless sensor networks paper_content: One important application of wireless sensor networks is the tracking of moving objects. The recent progress has made it possible for tiny sensors to have more computing power and storage space. Therefore, a sensor network can be considered as a distributed database, on which one can conduct in-network data processing. This paper considers in-network moving object tracking in a sensor network. This typically consists of two operations: location update and query. We propose a message-pruning tree structure that is an extension of the earlier work (H.T. Kung and D. Vlah, March 2003), which assumes the existence of a logical structure to connect sensors in the network. We formulate this problem as an optimization problem. The formulation allows us to take into account the physical structure of the sensor network, thus leading to more efficient solutions than in the previous paper of H.T. Kung and D. Vlah (March 2003) in terms of communication costs. We evaluate updating and querying costs through simulations. --- paper_title: Distributed sensor activation algorithm for target tracking with binary sensor networks paper_content: Target tracking with wireless sensor networks (WSNs) has been a hot research topic recently. Many works have been done to improve the algorithms for localization and prediction of a moving target with smart sensors. However, the results are frequently difficult to implement because of hardware limitations. In this paper, we propose a practical distributed sensor activation algorithm (DSA2) that enables reliable tracking with the simplest binary-detection sensors. 
In this algorithm, all sensors in the field are activated with a probability to detect targets or sleep to save energy, the schedule of which depends on their neighbor sensors' behaviors. Extensive simulations are also shown to demonstrate the effectiveness of the proposed algorithm. Great improvement in terms of energy-quality tradeoff and excellent robustness of the algorithm are also emphasized in the simulations. --- paper_title: Adaptive Sensor Activation Algorithm for Target Tracking in Wireless Sensor Networks paper_content: Target tracking is an important application of wireless sensor networks where energy conservation plays an important role. In this paper, we propose an energy-efficient sensor activation protocol based on predicted region technique, called predicted region sensor activation algorithm (PRSA). The proposed algorithm predicts the moving region of target in the next time interval instead of predicting the accurate position, by analyzing current location and velocity of the target. We take these nodes within the predicted region as waiting-activation nodes and establish activation strategy. The fewest essential number of sensor nodes within the predicted region will be activated to monitor the target. Thus, the number of nodes that was involved in tracking the target will be decreased to save energy and prolong the network’s operational lifetime. The simulation results demonstrate the effectiveness of the proposed algorithm. --- paper_title: Quality Tradeoffs in Object Tracking with Duty-Cycled Sensor Networks paper_content: Extending the lifetime of wireless sensor networks requires energy-conserving operations such as duty-cycling. However, such operations may impact the effectiveness of high fidelity real-time sensing tasks, such as object tracking, which require high accuracy and short response times. In this paper, we quantify the influence of different duty-cycle schemes on the efficiency of bearings-only object tracking. Specifically, we use the Maximum Likelihood localization technique to analyze the accuracy limits of object location estimates under different response latencies considering variable network density and duty-cycle parameters. Moreover, we study the tradeoffs between accuracy and response latency under various scenarios and motion patterns of the object. We have also investigated the effects of different duty-cycled schedules on the tracking accuracy using acoustic sensor data collected at Aberdeen Proving Ground, Maryland, by the U.S. Army Research Laboratory (ARL). --- paper_title: Space-time Coordinated Distributed Sensing Algorithms for Resource Efficient Narrowband Target Localization and Tracking paper_content: Distributed sensing has been used for enhancing signal to noise ratios for space-time localization and tracking of remote objects using phased array antennas, sonar, and radio signals. The use of these technologies in identifying mobile targets in a field, emitting acoustic signals, using a network of low-cost narrow band acoustic micro-sensing devices randomly dispersed over the region of interest, presents unique challenges. The effects of wind, turbulence, and temperature gradients and other environmental effects can decrease the signal to noise ratio by introducing random errors that cannot be removed through calibration. This paper presents methods for dynamic distributed signal processing to detect, identify, and track targets in noisy environments with limited resources. 
Specifically, it evaluates the noise tolerance of adaptive beamforming and compares it to other distributed sensing approaches. Many source localization and direction-of-arrival (DOA) estimation methods based on beamforming using acoustic sensor array have been proposed. We use the approximate maximum likelihood parameter estimation method to perform DOA estimation of the source in the frequency domain. Generally, sensing radii are large and data from the nodes are transmitted over the network to a centralized location where beamforming is done. These methods therefore depict low tolerance to environmental noise. Knowledge based localized distributed processing methods have also been developed for distributed in-situ localization and target tracking in these environments. These methods, due to their reliance only on local sensing, are not significantly affected by spatial perturbations and are robust in tracking targets in low SNR environments. Specifically, Dynamic Space-time Clustering (DSTC)-based localization and tracking algorithm has demonstrated orders of magnitude improvement in noise tolerance with nominal impact on performance. We also propose hybrid algorithms for energy efficient robust performance in very noisy environments. This paper compares the performance of hybrid algorithms with sparse beamforming nodes supported by randomly dispersed DSTC nodes to that of beamforming and DSTC algorithms. Hybrid algorithms achieve relative high accuracy in noisy environments with low energy consumption. Sensor data from a field test in the Marine base at 29 Palms, CA, were analyzed for validating the results in this paper. The results were compared to “ground truth” data obtained from GPS receivers on the vehicles. --- paper_title: Spatiotemporal multicast in sensor networks paper_content: Sensor networks often involve the monitoring of mobile phenomena. We believe this task can be facilitated by a spatiotemporal multicast protocol which we call "mobicast". Mobicast is a novel spatiotemporal multicast protocol that distributes a message to nodes in a delivery zone that evolves over time in some predictable manner. A key advantage of mobicast lies in its ability to provide reliable and just-in-time message delivery to mobile delivery zones on top of a random network topology. Mobicast can in theory achieve good spatiotemporal delivery guarantees by limiting communication to a mobile forwarding zone whose size is determined by the global worst-case value associated with a compactness metric defined over the geometry of the network (under a reasonable set of assumptions). In this work, we first studied the compactness properties of sensor networks with uniform distribution. The results of this study motivate three approaches for improving the efficiency of spatiotemporal multicast in such networks. First, spatiotemporal multicast protocols can exploit the fundamental tradeoff between delivery guarantees and communication overhead in spatiotemporal multicast. Our results suggest that in such networks, a mobicast protocol can achieve relatively high savings in message forwarding overhead by slightly relaxing the delivery guarantee, e.g., by optimistically choosing a forwarding zone that is smaller than the one needed for a 100% delivery guarantee. Second, spatiotemporal multicast may exploit local compactness values for higher efficiency for networks with non uniform spatial distribution of compactness. 
Third, for random uniformly distributed sensor network deployment, one may choose a deployment density to best support spatiotemporal communication. We also explored all these directions via simulation and results are presented in this paper. --- paper_title: VE-mobicast: A variant-egg-based mobicast routing protocol for sensornets paper_content: In this paper, we present a new "spatiotemporal multicast" protocol for supporting applications which require spatiotemporal coordination in sensornets. To simultaneously consider the factors of moving speed and direction, this work mainly investigates a new mobicast routing protocol, called variant-egg-based mobicast (VE-mobicast), by utilizing the variant-egg shape of the forwarding zone to achieve a high predicted accuracy. The contributions of our protocol are summarized as follows: (1) it builds a new shape of a forwarding zone, called the variant-egg, to adaptively and efficiently determine the location and shape of the forwarding zone to maintain the same number of wake-up sensor nodes; (2) it is a fully distributed algorithm which reduces the communication overhead of determining the forwarding zone and the mobicast message forwarding overhead; (3) it can improve the predicted accuracy of the forwarding zone by considering the factors of moving speed and direction. Finally, the simulation results illustrate the performance achievements, compared to existing mobicast routing protocols. --- paper_title: HVE-mobicast: a hierarchical-variant-egg-based mobicast routing protocol for wireless sensornets paper_content: In this paper, we propose a new mobicast routing protocol, called the HVE-mobicast (hierarchical-variant-egg-based mobicast) routing protocol, in wireless sensor networks (WSNs). Existing protocols for a spatiotemporal variant of the multicast protocol called a "mobicast" were designed to support a forwarding zone that moves at a constant velocity, $\stackrel{\rightarrow}{v}$ , through sensornets. The spatiotemporal characteristic of a mobicast is to forward a mobicast message to all sensor nodes that are present at time t in some geographic zone (called the forwarding zone) Z, where both the location and shape of the forwarding zone are a function of time over some interval (t start ,t end ). Mobicast routing protocol aims to provide reliable and just-in-time message delivery for a mobile sink node. To consider the mobile entity with the different moving speed, a new mobicast routing protocol is investigated in this work by utilizing the cluster-based approach. The message delivery of nodes in the forwarding zone of the HVE-mobicast routing protocol is transmitted by two phases; cluster-to-cluster and cluster-to-node phases. In the cluster-to-cluster phase, the cluster-head and relay nodes are distributively notified to wake them up. In the cluster-to-node phase, all member nodes are then notified to wake up by cluster-head nodes according to the estimated arrival time of the delivery zone. The key contribution of the HVE-mobicast routing protocol is that it is more power efficient than existing mobicast routing protocols, especially by considering different moving speeds and directions. Finally, simulation results illustrate performance enhancements in message overhead, power consumption, needlessly woken-up nodes, and successful woken-up ratio, compared to existing mobicast routing protocols. 
--- paper_title: Reliable mobicast via face-aware routing paper_content: This paper presents a novel protocol for a spatiotemporal variant of multicast called mobicast, designed to support message delivery in ad hoc sensor networks. The spatiotemporal character of mobicast relates to the obligation to deliver a message to all the nodes that will be present at time t in some geographic zone Z, where both the location and shape of the delivery zone are a function of time over some interval (t_start, t_end). The protocol, called face-aware routing (FAR), exploits ideas adapted from existing applications of face routing to achieve reliable mobicast delivery. The key features of the protocol are a routing strategy, which uses information confined solely to a node's immediate spatial neighborhood, and a forwarding schedule, which employs only local topological information. Statistical results show that, in uniformly distributed random disk graphs, the spatial neighborhood size is usually less than 20. This suggests that FAR is likely to exhibit a low average memory cost. An estimation formula for the average size of the spatial neighborhood in a random network is another analytical result reported in this paper. This paper also presents a novel and low-cost distributed algorithm for spatial neighborhood discovery ---
Title: A Review of Localization and Tracking Algorithms in Wireless Sensor Networks Section 1: INTRODUCTION Description 1: This section introduces the challenges, importance, and applications of localization and tracking in wireless sensor networks. Section 2: BROAD CLASSIFICATION OF LOCALIZATION METHODS Description 2: This section provides a broad classification of localization methods including range-free and range-based methods, and discusses their differences and applications. Section 3: Range-based Localization Methods Description 3: This subsection details the various range-based localization methods such as Received Signal Strength (RSS), Angle-of-Arrival (AOA), Time-of-Arrival (TOA), and Time-Difference-of-Arrival (TDOA). Section 4: Time-of-Arrival (TOA)-based Localization Description 4: This subsection explores TOA-based localization methods and their formulation including challenges related to synchronization. Section 5: Angle-of-Arrival (AOA)-based Localization Description 5: This subsection discusses AOA-based localization methods and their utilization of angle information with the associated costs and hardware requirements. Section 6: Received Signal Strength (RSS) based Localization Description 6: This subsection examines RSS-based localization methods, including model dependencies and error considerations. Section 7: Range-free Localization Methods Description 7: This section explains various range-free localization techniques and their subdivision into local techniques and hop-counting methods. Section 8: OVERVIEW OF IMPLEMENTATION METHODS Description 8: This section provides an overview of different implementation methods for localization such as machine learning-based methods, centralized and distributed methods, fingerprinting, and multi-sensor data fusion techniques. Section 9: BROAD CLASSIfiCATION OF TRACKING METHODS Description 9: This section broadly classifies tracking methods into cluster-based, tree-based, activation-based, and mobicast-based tracking methods. Section 10: Cluster-based Tracking Methods Description 10: This subsection delves into cluster-based tracking methods, exploring static, dynamic, and spatio-temporal approaches. Section 11: Prediction-based Tracking Methods Description 11: This subsection discusses energy-efficient prediction-based tracking approaches, detailing their architecture and limitations. Section 12: Tree-based Tracking Methods Description 12: This subsection covers tree-based tracking methods, focusing on hierarchical organization and convoy trees. Section 13: Activation-based Tracking Methods Description 13: This subsection explains different activation-based tracking algorithms and their energy consumption implications. Section 14: Mobicast-based Tracking Methods Description 14: This subsection explores spatio-temporal methods for tracking using mobicast and the relevance of geographic zone messaging. Section 15: EXPERIMENTAL SETUP FOR LOCALIZATION AND TRACKING Description 15: This section reviews various experimental kits and setups available for localization and tracking applications, highlighting their features and applications. Section 16: National Instruments Wireless Sensor Nodes Description 16: This subsection discusses the features and applications of National Instruments Wireless Sensor Nodes. Section 17: Crossbow Motes Description 17: This subsection details the Crossbow motes setup and their application in research. 
Section 18: SensWiz Networks Kit Description 18: This subsection covers the SensWiz Networks Kit and its application. Section 19: Hand-Held Devices Description 19: This subsection discusses the use of hand-held devices like smartphones and tablets for localization and tracking purposes. Section 20: CONCLUSION Description 20: This concluding section summarizes the key points and findings of the survey, discussing the trade-offs and applications of different localization and tracking methods.
Physically Informed Signal Processing Methods for Piano Sound Synthesis: A Research Overview
7
--- paper_title: Commuted Piano Synthesis paper_content: The \commuted piano synthesis" algorithm is described, based on a simpli ed acoustic model of the piano. The model includes multiple coupled strings, a nonlinear hammer, and an arbitrarily large soundboard and enclosure. Simpli cations are employed which greatly reduce computational complexity. Most of the simpli cations are made possible by the commutativity of linear, time-invariant systems. Special care is given to the felt-covered hammer which is highly nonlinear and therefore does not commute with other components. In its present form, a complete, two-key piano can be synthesized in real time on a single 25MHz Motorola DSP56001 signal processing chip. --- paper_title: Structured audio and effects processing in the MPEG-4 multimedia standard paper_content: While previous generations of the MPEG multimedia standard have focused primarily on coding and transmission of content digitally sampled from the real world,MPEG-4 contains extensive support for structured, syntheticand synthetic/natural hybrid coding methods. An overviewis presented of the "Structured Audio" and "AudioBIFS"components of MPEG-4, which enable the description ofsynthetic soundtracks, musical scores, and effects algorithmsand the compositing, manipulation, and synchronization ofreal and synthetic audio sources. A discussion of the separation of functionality between the systems layer and the audiotoolset of MPEG-4 is presented, and prospects for efficientDSP-based implementations are discussed. --- paper_title: Structured Audio: Creation, Transmission, and Rendering of Parametric Sound Representations paper_content: Structured audio representations are semantic and symbolic descriptions that are useful for ultralow-bit-rate transmission, flexible synthesis, and perceptually based manipulation and retrieval of sound. We present an overview of techniques for transmitting and synthesizing sound represented in structured format, and for creating structured representations from audio waveforms. We discuss applications for structured audio in virtual environments, music synthesis, gaming, content-based retrieval, interactive broadcast, and other multimedia contexts. --- paper_title: A Tutorial on Digital Sound Synthesis Techniques paper_content: Progress in electronics and computer technology has led to an ever-increasing utilization of digital techniques for musical sound production. Some of these are the digital equivalents of techniques employed in analog synthesizers and in other fields of electrical engineering. Other techniques have been specifically developed for digital music devices and are peculiar to these. This paper introduces the fundamentals of the main digital synthesis techniques. Mathematical developments have been restricted in the exposition and can be found in the papers listed in the references. To simplify the discussion, whenever possible, the techniques are presented with reference to continuous signals. Sound synthesis is a procedure used to produce a sound without the help of acoustic instruments. In digital synthesis, a sound is represented by a sequence of numbers (samples). Hence, a digital synthesis technique consists of a computing procedure or mathematical formula, which computes each sample value. Normally, the synthesis formula depends on some values, that is, parameters. Frequency and amplitude are examples of such parameters. Parameters can be constant or slowly time variant during the sound. 
Time-variant parameters are also called control functions. Synthesis techniques can be classified as (1) generation techniques (Fig. la), which directly produce the signal from given data, and (2) transformation techniques (Fig. Ib), which can be divided into two stages, the generation of one or more simple signals and their modification. Often, more or less elaborate combinations of these techniques are employed. Fixed-Waveform Synthesis --- paper_title: Understanding Musical Sound with Forward Models and Physical Models paper_content: This research report describes an approach to parameter estimation for physical models of sound-generating systems using distal teachers and foward models (Jordan & Rumelhart, 1992; Jordan, 1990). The general problem is to find an inverse model of a sound-generating system that transforms sounds to action parameters; these parameters constitute a model-base description of the sound. We first show that a two-layer feedforward model is capable of performing inverse mappings for a simple physical model of a violin string. We refer to this learning strategy as direct inverse modeling; it requires an explicit teacher and it is only suitable for convex regions of the parameter space. A model of two strings was implemented that had non-convex regions in its parameter space. We show how the direct modeling strategy failed at the task of learning the inverse model in this case and that forward models can be used, in conjunction with distal teachers, to bias the learning of an inverse model so that non-convex regions are mapped to single-point solutions in the parameter space. Our results show that forward models are appropriate for learning to map sounds to parametric representations --- paper_title: Synchronizing Computer Graphics Animation and Audio paper_content: Our prototype authoring tool synchronizes computer graphics animation, narration and background music. It lets users visualize the contents of a background music track and narration along a single time line and displays an animation's scenario sharing the same time line. A graphical user interface supports interactive editing of both background music and narration without any loss in quality. --- paper_title: Principles of Digital Waveguide Models of Musical Instruments paper_content: Basic principles of digital waveguide modeling of musical instruments are presented in a tutorial introduction intended for graduate students in electrical engineering with a solid background in signal processing and acoustics. The vibrating string is taken as the principal illustrative example, but the formulation is unified with that for acoustic tubes. Modeling lossy stiff strings using delay lines and relatively low-order digital filters is described. Various choices of wave variables are discussed, including velocity waves, force waves, and root-power waves. Signal scattering at an impedance discontinuity is derived for an arbitrary number of waveguides intersecting at a junction. Various computational forms are discussed, including the Kelly-Lochbaum, one-multiply, and normalized scattering junctions. A relatively new three-multiply normalized scattering junction is derived using a two-multiply transformer to normalize a one-multiply scattering junction. Conditions for strict passivity of the model are discussed. Use of commutativity of linear, time-invariant elements to greatly reduce computational cost is described. 
Applications are summarized, and models of the clarinet and bowed-string are described in some detail. The reed-bore and bow-string interactions are modeled as nonlinear scattering junctions attached to the bore/string acoustic waveguide. --- paper_title: Hysteretic model of the grand piano hammer felt paper_content: The experimental relationships of dynamic force versus grand piano hammer felt deformation show the significant influence of hysteresis characteristics. To explain this phenomenon, a new mathematical model of the hammer felt is proposed. In this model the hammer felt is considered as a nonlinear history‐dependent (hysteretic) material with an exponential kernel function. The numerical simulation of interaction of the hammer with a fixed target was used to identify the nonlinear and hysteresis parameters of the felt, and good agreement with experiments was achieved. Also, this model is used here for the analysis of interaction of the hammer with a real grand piano string. --- paper_title: Numerical simulations of piano strings. II. Comparisons with measurements and systematic exploration of some hammer‐string parameters paper_content: A physical model of the piano string, using finite difference methods, has recently been developed. [Chaigne and Askenfelt, J. Acoust. Soc. Am. 95, 1112–1118 (1994)]. The model is based on the fundamental equations of a damped, stiff string interacting with a nonlinear hammer, from which a numerical finite difference scheme is derived. In the present study, the performance of the model is evaluated by systematic comparisons between measured and simulated piano tones. After a verification of the accuracy of the method, the model is used as a tool for systematically exploring the influence of string stiffness, relative striking position, and hammer‐string mass ratio on string waveforms and spectra. --- paper_title: Piano string excitation. VI: Nonlinear modeling paper_content: Accurate modeling of the piano string–hammer interaction requires that the nonlinearity of the force‐displacement relation for the hammer be recognized and included. Two models that accomplish this with numerical integration are described, and sample results are presented. The first model is valid for a perfectly flexible string, where nondispersive traveling waves can be conveniently followed with a digital delay line. The second model uses standing waves instead to describe the vibrations of a stiff string, with the stiffness itself justifying truncation of what would otherwise be a poorly converging normal‐mode summation. Predictions with these models give significantly better agreement with data than did previous calculations with completely linear models. --- paper_title: Numerical simulations of piano strings. I. A physical model for a struck string using finite difference methods paper_content: The first attempt to generate musical sounds by solving the equations of vibrating strings by means of finite difference methods (FDM) was made by Hiller and Ruiz [J. Audio Eng. Soc. 19, 462–472 (1971)]. It is shown here how this numerical approach and the underlying physical model can be improved in order to simulate the motion of the piano string with a high degree of realism. Starting from the fundamental equations of a damped, stiff string interacting with a nonlinear hammer, a numerical finite difference scheme is derived, from which the time histories of string displacement and velocity for each point of the string are computed in the time domain. 
The interacting force between hammer and string, as well as the force acting on the bridge, are given by the same scheme. The performance of the model is illustrated by a few examples of simulated string waveforms. A brief discussion of the aspects of numerical stability and dispersion with reference to the proper choice of sampling parameters is also incl... --- paper_title: Traveling Wave Implementation of a Lossless Mode-coupling Filter and the Wave Digital Hammer paper_content: Monochloro-substituted saturated compounds may be prepared by condensing a saturated compound such as cycloalkyl hydrocarbon with a chloromonoolefin possessing not more than 4 carbon atoms and having the chlorine atom attached to one of the doubly-bonded carbon atoms in the presence of a free radical-generating catalyst and a promoter comprising a hydrogen chloride compound. --- paper_title: Elimination of delay-free loops in discrete-time models of nonlinear acoustic systems paper_content: Nonlinear acoustic systems are often described by means of nonlinear maps acting as instantaneous constraints on the solutions of a system of linear differential equations. This description leads to discrete-time models exhibiting noncomputable loops. We present a solution to this computability problem by means of geometrical transformation of the nonlinearities and algebraic transformation of the time-dependent equations. The proposed solution leads to stable and accurate simulations even at relatively low sampling rates. --- paper_title: Numerical simulations of piano strings. II. Comparisons with measurements and systematic exploration of some hammer‐string parameters paper_content: A physical model of the piano string, using finite difference methods, has recently been developed. [Chaigne and Askenfelt, J. Acoust. Soc. Am. 95, 1112–1118 (1994)]. The model is based on the fundamental equations of a damped, stiff string interacting with a nonlinear hammer, from which a numerical finite difference scheme is derived. In the present study, the performance of the model is evaluated by systematic comparisons between measured and simulated piano tones. After a verification of the accuracy of the method, the model is used as a tool for systematically exploring the influence of string stiffness, relative striking position, and hammer‐string mass ratio on string waveforms and spectra. --- paper_title: Numerical simulations of piano strings. I. A physical model for a struck string using finite difference methods paper_content: The first attempt to generate musical sounds by solving the equations of vibrating strings by means of finite difference methods (FDM) was made by Hiller and Ruiz [J. Audio Eng. Soc. 19, 462–472 (1971)]. It is shown here how this numerical approach and the underlying physical model can be improved in order to simulate the motion of the piano string with a high degree of realism. Starting from the fundamental equations of a damped, stiff string interacting with a nonlinear hammer, a numerical finite difference scheme is derived, from which the time histories of string displacement and velocity for each point of the string are computed in the time domain. The interacting force between hammer and string, as well as the force acting on the bridge, are given by the same scheme. The performance of the model is illustrated by a few examples of simulated string waveforms. A brief discussion of the aspects of numerical stability and dispersion with reference to the proper choice of sampling parameters is also incl... 
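The finite-difference references above outline an explicit time-stepping scheme for a damped, stiff string driven by a nonlinear hammer. A minimal Python sketch of the same idea is given below; it is not the cited scheme (stiffness, frequency-dependent losses, and the hysteretic felt hammer are all omitted), and every numeric value is an assumed placeholder, but it shows how such an explicit update marches the string displacement forward in time from a simple initial "strike":

```python
import numpy as np

# Illustrative explicit finite-difference scheme for a damped, flexible string.
# Stiffness and the nonlinear felt hammer of the cited model are omitted here.
fs = 44100.0                      # sampling rate [Hz]
L, c, sigma = 0.62, 200.0, 1.5    # length [m], wave speed [m/s], damping [1/s] (assumed)
dt = 1.0 / fs
dx = c * dt                       # grid spacing so the Courant number c*dt/dx = 1
N = int(round(L / dx))            # number of spatial intervals
lam2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, N + 1)
y_prev = np.zeros(N + 1)          # displacement at step n-1
# Instead of a hysteretic hammer, start from a smooth initial "strike" velocity.
x0, width, v0 = 0.12, 0.02, 1.0
y_now = y_prev + dt * v0 * np.exp(-((x - x0) / width) ** 2)

out = np.zeros(int(0.5 * fs))     # half a second of output read near one termination
read_idx = N - 3
a = 1.0 + sigma * dt
b = 1.0 - sigma * dt
for n in range(out.size):
    y_next = np.zeros_like(y_now)
    y_next[1:-1] = (2.0 * y_now[1:-1] - b * y_prev[1:-1]
                    + lam2 * (y_now[2:] - 2.0 * y_now[1:-1] + y_now[:-2])) / a
    y_next[0] = y_next[-1] = 0.0  # fixed (simply supported) terminations
    out[n] = y_next[read_idx]
    y_prev, y_now = y_now, y_next
```

With the Courant number fixed at its stability limit, the loop stays stable and the signal read near the termination decays at a rate set by the assumed damping coefficient sigma.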
--- paper_title: Commuted Piano Synthesis paper_content: The "commuted piano synthesis" algorithm is described, based on a simplified acoustic model of the piano. The model includes multiple coupled strings, a nonlinear hammer, and an arbitrarily large soundboard and enclosure. Simplifications are employed which greatly reduce computational complexity. Most of the simplifications are made possible by the commutativity of linear, time-invariant systems. Special care is given to the felt-covered hammer which is highly nonlinear and therefore does not commute with other components. In its present form, a complete, two-key piano can be synthesized in real time on a single 25MHz Motorola DSP56001 signal processing chip. --- paper_title: Physical Modeling of Plucked String Instruments with Application to Real-Time Sound Synthesis paper_content: An efficient approach for real-time synthesis of plucked string instruments using physical modeling and DSP techniques is presented. Results of model-based resynthesis are illustrated to demonstrate that high-quality synthetic sounds of several string instruments can be generated using the proposed modeling principles. Real-time implementation using a signal processor is described, and several aspects of controlling physical models of plucked string instruments are studied. --- paper_title: Principles of Digital Waveguide Models of Musical Instruments paper_content: Basic principles of digital waveguide modeling of musical instruments are presented in a tutorial introduction intended for graduate students in electrical engineering with a solid background in signal processing and acoustics. The vibrating string is taken as the principal illustrative example, but the formulation is unified with that for acoustic tubes. Modeling lossy stiff strings using delay lines and relatively low-order digital filters is described. Various choices of wave variables are discussed, including velocity waves, force waves, and root-power waves. Signal scattering at an impedance discontinuity is derived for an arbitrary number of waveguides intersecting at a junction. Various computational forms are discussed, including the Kelly-Lochbaum, one-multiply, and normalized scattering junctions. A relatively new three-multiply normalized scattering junction is derived using a two-multiply transformer to normalize a one-multiply scattering junction. Conditions for strict passivity of the model are discussed. Use of commutativity of linear, time-invariant elements to greatly reduce computational cost is described. Applications are summarized, and models of the clarinet and bowed-string are described in some detail. The reed-bore and bow-string interactions are modeled as nonlinear scattering junctions attached to the bore/string acoustic waveguide. --- paper_title: Development and Calibration of a Guitar Synthesizer paper_content: A specific efficient implementation for the synthesis of acoustic guitar tones is described which is based on recent digital waveguide synthesis techniques. Enhanced analysis methods are used for calibrating the synthesis model based on digitized tones. The lowest resonances of the guitar body are implemented with separate parametric second-order resonators that run at a lower sampling rate than the string model. --- paper_title: Perceptual study of decay parameters in plucked string synthesis paper_content: A listening experiment was conducted to study the audibility of variation of decay parameters in plucked string synthesis.
A digital commuted-waveguide-synthesis model was used to generate the test sounds. The decay of each tone was parameterized with an overall and a frequency-dependent decay parameter. Two different fundamental frequencies, tone durations, and types of excitation signals were used totalling in eight test sets for both parameters. The results indicate that variations between 25% and 40% in the time constant of decay are inaudible. This suggests that large deviations in decay parameters can be allowed from a perceptual viewpoint. The results are applied in model-based audio processing. --- paper_title: Robust loss filter design for digital waveguide synthesis of string tones paper_content: A robust loss filter design method is presented for digital waveguide string models, which can be used with high filter orders. The method aims at minimizing the decay time error in partials of the synthetic tone. This is achieved by a new weighting function based on the first-order Taylor series approximation of the decay time errors. Smoothing of decay time data and requiring the design to be minimum-phase are also proposed to facilitate the stability of the design. The new method is applicable to analysis-based sound synthesis of piano and guitar tones, for example. --- paper_title: Physical Modeling of Plucked String Instruments with Application to Real-Time Sound Synthesis paper_content: An efficient approach for real-time synthesis of plucked string instruments using physical modeling and DSP techniques is presented. Results of model-based resynthesis are illustrated to demonstrate that high-quality synthetic sounds of several string instruments can be generated using the proposed modeling principles. Real-time implementation using a signal processor is described, and several aspects of controlling physical models of plucked string instruments are studied. --- paper_title: Synchronizing Computer Graphics Animation and Audio paper_content: Our prototype authoring tool synchronizes computer graphics animation, narration and background music. It lets users visualize the contents of a background music track and narration along a single time line and displays an animation's scenario sharing the same time line. A graphical user interface supports interactive editing of both background music and narration without any loss in quality. --- paper_title: Simple and robust method for the design of allpass filters using least-squares phase error criterion paper_content: We consider a simple scheme for the design of allpass filters for approximation (or equalization) of a given phase function using a least-squares error criterion. Assuming that the desired phase response is prescribed at a discrete set of frequency points, we formulate a general least-squares equation-error solution with a possible weight function. Based on the general formulation and detailed analysis of the introduced error, we construct a new algorithm for phase approximation. In addition to iterative weighting of the equation error, the nominal value of the desired group delay is also adjusted iteratively to minimize the total phase error measure in equalizer applications. This new feature essentially eliminates the difficult choice of the nominal group delay which is known to have a profound effect on the stability of the designed allpass filter. The proposed method can be used for highpass and bandpass equalization as well, where the total phase error can be further reduced by introducing an adjustable-phase offset in the optimization. 
The performance of the algorithm is analyzed in detail with examples. First we examine the approximation of a given phase function. Then we study the equalization of the nonlinear phase of various lowpass filters. Also, a bandpass example is included. Finally we demonstrate the use of the algorithm for the design of approximately linear-phase recursive filters as a parallel connection of a delay line and an allpass filter. --- paper_title: Quality of Piano Tones paper_content: A synthesizer was constructed to produce simultaneously 100 pure tones with means for controlling the intensity and frequency of each one of them. The piano tones were analyzed by conventional apparatus and methods and the analysis set into the synthesizer. The analysis was considered correct only when a jury of eight listeners could not tell which were real and which were synthetic tones. Various kinds of synthetic tones were presented to the jury for comparison with real tones. A number of these were judged to have better quality than the real tones. According to these tests synthesized piano like tones were produced when the attack time was less than 0.01 sec. The decay can be as long as 20 sec for the lower notes and be less than 1 sec for the very high ones. The best quality is produced when the partials decrease in level at the rate of 2 db per 100 cps increase in the frequency of the partial. The partials below middle C must be inharmonic in frequency to be piano like. --- paper_title: Audibility of the timbral effects of inharmonicity in stringed instrument tones paper_content: Listening tests were conducted to find the audibility of inharmonicity in musical sounds produced by stringed instruments, such as the piano or the guitar. The audibility threshold of inharmonicity was measured at five fundamental frequencies. Results show that the detection of inharmonicity is strongly dependent on the fundamental frequency f0. A simple model is presented for estimating the threshold as a function of f0. The need to implement inharmonicity in digital sound synthesis is discussed. --- paper_title: Bandwidth of perceived inharmonicity for physical modeling of dispersive strings paper_content: The influence of accurate reproduction of inharmonicity on the perceived quality of piano tones is investigated. Acoustic piano tones were resynthesized by changing the bandwidth of correct positioning of partials. Cutoff frequencies of inharmonicity for different pitches have been established by psychoacoustic experimentation. The results are applicable to the design of dispersive resonators in sound synthesis by physical modeling. --- paper_title: Circulant and elliptic feedback delay networks for artificial reverberation paper_content: The feedback delay network (FDN) has been proposed for digital reverberation. The digital waveguide network (DWN) is also proposed with similar advantages. This paper notes that the commonly used FDN with an N×N orthogonal feedback matrix is isomorphic to a normalized digital waveguide network consisting of one scattering junction joining N reflectively terminated branches. Generalizations of FDNs and DWNs are discussed. The general case of a lossless FDN feedback matrix is shown to be any matrix having unit-modulus eigenvalues and linearly independent eigenvectors. A special class of FDNs using circulant matrices is proposed. These structures can be efficiently implemented and allow control of the time and frequency behavior.
Applications of circulant feedback delay networks in audio signal processing are discussed. --- paper_title: Resynthesis of Coupled Piano String Vibrations Based on Physical Modeling paper_content: This paper presents a technique to resynthesize the sound generated by the vibrations of two piano strings tuned to a very close pitch and coupled at the bridge level. Such a mechanical system produces doublets of components, thus generating beats and double decays on the amplitudes of the partials. We design a digital waveguide model by coupling two elementary digital waveguides. This model is able to reproduce perceptually relevant sounds. The parameters of the model are estimated from the analysis of real signals collected directly on the strings by laser velocimetry. Sound transformations can be achieved by modifying relevant parameters in order to simulate various physical situations. --- paper_title: Pedal resonance effect simulation device for digital pianos paper_content: The present invention relates to a pedal resonance effect simulation device for digital pianos consisting of a resonance effect circuit for the simulation of the resonance effects in the strings of a traditional mechanical piano, coupled with a reference model which varies the resonance contributions of the various strings, which are equivalent to those of a mechanical piano, by using delay lines with a method which considers the position of the resonance pedal pressed by the performer upon reproduction. --- paper_title: Coupled piano strings paper_content: The admittance of the piano bridge has a crucial effect on piano tone by coupling together the strings belonging to one note into a single dynamical system. In this paper, we first develop theoretical expressions that show how the rate of energy transmission to the bridge as a function of time (including the phenomena of beats and “aftersound”) depends on bridge admittance, hammer irregularities, and the exact state in which the piano is tuned. We then present experimental data showing the effects of mutual string coupling on beats and aftersound, as well as the great importance of the two polarizations of the string motion. The function of the una corda pedal in controlling the aftersound is explained, and the stylistic possibilities of a split damper are pointed out. The way in which an excellent tuner can use fine tuning of the unisons to make the aftersound more uniform is discussed. --- paper_title: Accurate and efficient modeling of beating and two-stage decay for string instrument synthesis paper_content: In this paper a novel modeling method is presented for beating and two-stage decay. Here, one digital waveguide is used for each note and some resonators are run in parallel to simulate the beating and two-stage decay of those partials, where these phenomena are most prominent. The resonator bank is implemented by using the multi-rate approach, resulting in a decrease of computational cost by a factor of 10. By taking advantage of the characteristics of the resonators, relatively simple upsampling and downsampling filters are used. Two different filtering approaches are presented and compared with respect to computational complexity. Examples are shown with the application to piano sound synthesis.
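The beating and two-stage decay discussed in the coupled-string and resonator-bank references above can be illustrated with two slightly detuned, differently damped partials. The sketch below is only a toy signal model with assumed values, not the multi-rate resonator-bank implementation of the cited paper:

```python
import numpy as np

# Toy illustration of beating and two-stage decay from two detuned, differently
# damped modes (all values are assumed for illustration only).
fs = 44100
t = np.arange(int(2.0 * fs)) / fs
f1, f2 = 220.0, 220.7          # two string modes ~0.7 Hz apart -> ~0.7 Hz beating
tau1, tau2 = 0.25, 1.2         # fast and slow decay time constants [s]
partial = (0.7 * np.exp(-t / tau1) * np.sin(2 * np.pi * f1 * t)
           + 0.3 * np.exp(-t / tau2) * np.sin(2 * np.pi * f2 * t))
# The envelope first decays quickly (tau1 dominates), then more slowly (tau2),
# while the frequency difference produces audible amplitude beating.
```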
--- paper_title: Circulant and elliptic feedback delay networks for artificial reverberation paper_content: The feedback delay network (FDN) has been proposed for digital reverberation. The digital waveguide network (DWN) is also proposed with similar advantages. This paper notes that the commonly used FDN with an N×N orthogonal feedback matrix is isomorphic to a normalized digital waveguide network consisting of one scattering junction joining N reflectively terminated branches. Generalizations of FDNs and DWNs are discussed. The general case of a lossless FDN feedback matrix is shown to be any matrix having unit-modulus eigenvalues and linearly independent eigenvectors. A special class of FDNs using circulant matrices is proposed. These structures can be efficiently implemented and allow control of the time and frequency behavior. Applications of circulant feedback delay networks in audio signal processing are discussed. ---
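The feedback delay network described in the reference above can be sketched compactly. The fragment below uses the N = 4 Householder matrix I - (2/N)*ones, which is both orthogonal (hence lossless) and circulant; the delay lengths, loop gain, and input signal are assumptions chosen purely for illustration:

```python
import numpy as np

# Minimal feedback delay network (FDN) sketch: four delay lines feeding back
# through a lossless matrix that is both orthogonal and circulant.
fs = 44100
delays = np.array([1031, 1327, 1523, 1871])       # assumed, mutually unrelated lengths
N = len(delays)
A = np.eye(N) - (2.0 / N) * np.ones((N, N))       # orthogonal + circulant feedback matrix
g = 0.97                                          # global loop gain (< 1 gives decay)

buffers = [np.zeros(d) for d in delays]
ptr = np.zeros(N, dtype=int)

x = np.zeros(fs)                                  # one-second impulse input
x[0] = 1.0
y = np.zeros_like(x)

for n in range(len(x)):
    taps = np.array([buffers[i][ptr[i]] for i in range(N)])  # delay-line outputs
    y[n] = taps.sum()
    feedback = g * (A @ taps)                     # mix the outputs through the matrix
    for i in range(N):
        buffers[i][ptr[i]] = x[n] + feedback[i]   # write back into each circular buffer
        ptr[i] = (ptr[i] + 1) % delays[i]
```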
Title: Physically Informed Signal Processing Methods for Piano Sound Synthesis: A Research Overview Section 1: INTRODUCTION Description 1: Write an introduction to the paper, outlining the primary motivation, context, and objectives of physical modeling in piano sound synthesis. Section 2: ACOUSTICS AND MODEL STRUCTURE Description 2: Discuss the general aspects of piano acoustics and the structure of the physical model used for piano sound synthesis. Section 3: THE HAMMER Description 3: Delve into the physical and numerical modeling of the piano hammer, discussing hammer-string interaction and various implementation approaches. Section 4: THE STRING Description 4: Describe different string modeling techniques with a focus on digital waveguide methods, including loss filter design, dispersion simulation, and string coupling. Section 5: RADIATION MODELING Description 5: Address the modeling of the soundboard and the radiation properties, including different implementation strategies and computational considerations. Section 6: CONCLUSIONS Description 6: Summarize the main stages of development in physical modeling for the piano, highlighting key computational aspects and discussing the importance of perceptual and stability considerations. Section 7: ACKNOWLEDGMENTS Description 7: Provide acknowledgments for any contributing individuals or organizations and mention funding sources where applicable.
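Several of the digital waveguide references collected for this outline reduce, in their simplest form, to a single delay-line loop terminated by a low-order loss filter. The Karplus-Strong-style sketch below is such a stripped-down special case (fractional-delay tuning, dispersion, and dual-polarization coupling are ignored, and all parameter values are assumptions):

```python
import numpy as np

# Minimal Karplus-Strong-style plucked-string loop: one delay line plus a
# one-pole averaging loss filter, a simple special case of a waveguide string.
fs = 44100
f0 = 196.0                       # target fundamental [Hz] (assumed)
loop_len = int(round(fs / f0))   # delay-line length, ignoring fractional-delay tuning
rho = 0.996                      # loop gain, controls the overall decay time
a = 0.5                          # averaging coefficient of the loss filter

delay = np.random.uniform(-1.0, 1.0, loop_len)   # "pluck": fill the loop with noise
out = np.zeros(2 * fs)
prev = 0.0
for n in range(out.size):
    y = delay[n % loop_len]                      # read the circulating sample
    out[n] = y
    filtered = rho * (a * y + (1.0 - a) * prev)  # lossy two-point average
    prev = y
    delay[n % loop_len] = filtered               # write it back into the loop
```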
A Review of Machine Vision-Based Structural Health Monitoring: Methodologies and Applications
11
--- paper_title: Detection and classification of surface defects of gun barrels using computer vision and machine learning paper_content: Abstract This work proposes a machine vision based approach for the detection and classification of the surface defects such as normal wear, corrosive pitting, rust and erosion that are usually present in used gun barrels. Surface images containing the defective regions of several used gun barrels were captured in a non-destructive manner using a Charge-Coupled Device (CCD) camera attached with a miniature microscopic probe. Among the captured images, normal wear appeared as bright and the rest of the three defects appeared as dark. Therefore, the classification has been carried out in two stages. Various segmentation methods were tested and the extended maxima transform gave the best result. The defective area was calculated in metric units. Multiple textural features based on histogram and gray level co-occurrence matrix were extracted from the segmented images and ranked automatically using the sequential forward feature selection method in order to select the best minimal features for the classification purpose. Many classifiers based on Bayes, k-Nearest Neighbor, Artificial Neural Network and Support Vector Machine (SVM) were tested and the results demonstrated the efficacy of SVM for this application. All these steps were carried out at six different scales of image sizes and the best scale was selected for the entire analysis based on the segmentation and classification accuracy. The introduction of this Gaussian scale spacing concept could reduce the computation without compromising on the accuracy. Overall, the methodology forms a novel framework for surface defect detection and classification that has a potential to automate the inspection process. --- paper_title: Image recognition technology in rotating machinery fault diagnosis based on artificial immune paper_content: By using image recognition technology, this paper presents a new fault diagnosis method for rotating machinery with artificial immune algorithm. This method focuses on the vibration state parameter image. The main contribution of this paper is as follows: firstly, 3-D spectrum is created with raw vibrating signals. Secondly, feature information in the state parameter image of rotating machinery is extracted by using Wavelet Packet transformation. Finally, artificial immune algorithm is adopted to diagnose rotating machinery fault. On the modeling of 600MW turbine experimental bench, the rotor's normal state, faults of unbalance, misalignment and bearing pedestal looseness are examined. It's demonstrated from the diagnosis example of rotating machinery that the proposed method can improve the accuracy rate and the robustness of the diagnosis system effectively. --- paper_title: Achievements and Challenges in Machine Vision-Based Inspection of Large Concrete Structures paper_content: Large concrete structures need to be inspected in order to assess their current physical and functional state, to predict future conditions, to support investment planning and decision making, and to allocate limited maintenance and rehabilitation resources. Current procedures in condition and safety assessment of large concrete structures are performed manually leading to subjective and unreliable results, costly and time-consuming data collection, and safety issues.
To address these limitations, automated machine vision-based inspection procedures have increasingly been proposed by the research community. This paper presents current achievements and open challenges in vision-based inspection of large concrete structures. First, the general concept of Building Information Modeling is introduced. Then, vision-based 3D reconstruction and as-built spatial modeling of concrete civil infrastructure are presented. Following that, the focus is set on structural member recognition as well as on concrete damage detection and assessment exemplified for concrete columns. Although some challenges are still under investigation, it can be concluded that vision-based inspection methods have significantly improved over the last 10 years, and now, as-built spatial modeling as well as damage detection and assessment of large concrete structures have the potential to be fully automated. --- paper_title: A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures paper_content: In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Therefore, alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The calculation of rotation angle is obtained by using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with the commercial rotation angle measurement, the results of the system showed very good agreement with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on the five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system. --- paper_title: Monitoring global earthquake-induced demands using vision-based sensors paper_content: A vision-based approach is evaluated for its applicability as a new sensing technology for measuring earthquake-induced motions. The approach evaluated in this paper is advantageous since it requires very limited physical attachment to the structure of interest, is high-speed, high-resolution, and does not introduce additional mass or otherwise modify the properties of the structure. A demonstration experiment is described in which four digital high-speed, high-resolution, charge-coupled-device cameras outfitted with red light-emitting diodes are used to track 21 reflective (nearly) massless spherical elements discretely mounted on a scale five-story steel frame structure. The structure is mounted on a large bi-axial shake table and subjected to different earthquake motions. A total of eleven conventional (wired) transducers [linear variable displacement transducers and accelerometers] are also discretely mounted on the structure, providing a unique comparison with the vision-based approach. Results from these experiments show that the nonintrusive vision-based approach is extremely promising in terms of its ability to capture inter-story drift, floor level velocities, and accelerations, provided proper post-processing of the dynamic data occurs.
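Several systems cited in this record derive displacement or rotation by locating a target pattern in successive video frames and converting pixel motion to physical units with a pre-measured scale factor. The sketch below shows one common way to do this with normalized cross-correlation template matching in OpenCV; the file names, the scale factor, and the choice of matching score are assumptions and do not reproduce any particular cited system:

```python
import cv2
import numpy as np

# Hedged sketch of target tracking by normalized cross-correlation template matching.
template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)  # assumed file
mm_per_pixel = 0.42            # scale factor from a calibration target (assumed)

cap = cv2.VideoCapture("bridge_video.avi")                          # assumed file
positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)        # best-matching top-left corner (pixels)
    positions.append(max_loc)
cap.release()

positions = np.array(positions, dtype=float)
# Pixel motion relative to the first frame, converted to physical displacement.
displacement_mm = (positions - positions[0]) * mm_per_pixel
```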
--- paper_title: An advanced vision-based system for real-time displacement measurement of high-rise buildings paper_content: This paper introduces an advanced vision-based system for dynamic real-time displacement measurement of high-rise buildings using a partitioning approach. The partitioning method is based on the successive estimation of relative displacements and rotational angles at several floors using a multiple vision-based displacement measurement system. In this study, two significant improvements were made to realize the partitioning method: (1) time synchronization, (2) real-time dynamic measurement. Displacement data and time synchronization information are wirelessly transferred via a network using the TCP/IP protocol. The time synchronization process is periodically conducted by the master system to guarantee the system time at the master and slave systems are synchronized. The slave system is capable of dynamic real-time measurement and it is possible to economically expand measurement points at slave levels using commercial devices. To verify the accuracy and feasibility of the synchronized multi-point vision-based system and partitioning approach, many laboratory tests were carried out on a three-story steel frame model. Furthermore, several tests were conducted on a five-story steel frame tower equipped with a hybrid mass damper to experimentally confirm the effectiveness of the proposed system. --- paper_title: Horizontal Roadway Curvature Computation Algorithm Using Vision Technology paper_content: Horizontal roadway curvature data is essential for roadway safety analysis. However, collecting such data is time-consuming, costly, and dangerous using traditional, manual surveying methods. It is especially difficult to perform such manual measurement when roadways have high traffic volumes. Thus, it would be valuable for transportation agencies if roadway curvature data could be computed from photographic images taken using low-cost digital cameras. This is the first article that develops an algorithm using emerging vision technology to acquire horizontal roadway curvature data from roadway images to perform roadway safety assessment. The proposed algorithm consists of 4 steps: 1) curve edges image processing, 2) mapping edge positions from an image domain to the real-world domain, 3) calibrating camera parameters, and 4) calculating the curve radius and center from curve points. The proposed algorithm was tested on roadways having various levels of curves and using different image sources to demonstrate its capability. The ground truth curvatures for 2 cases were also collected to evaluate the error of the proposed algorithm. The test results are very promising, and the computed curvatures are especially accurate for curves of small radii (less than 66 m/200 ft) with less than 1.0% relative errors with respect to the ground truth data. The proposed algorithm can be used as an alternative method that complements the traditional measurement methods used by state DOTs to collect roadway curvature data. --- paper_title: A preliminary study on the response of steel structures using surveillance camera image with vision-based method during the Great East Japan Earthquake paper_content: Abstract Steel structures had experienced an intense and long transient response caused by the great earthquake motion during the Great East Japan Earthquake on 11th March 2011. 
Non-structural damages such as fallen ceilings and broken glass were the main damage characteristics of the steel structures, although structural damages were rare. Some publicly available earthquake records and videos were adopted in this study in order to investigate the seismic behavior of the steel structures. An application of the vision-based method has been carried out on a surveillance camera video provided by the Japan Broadcasting Corporation (Nippon Housou Kyoukai) to obtain the displacement response and natural period of high-rise steel buildings in Tokyo. --- paper_title: Safety Monitoring of Railway Tunnel Construction Using FBG Sensing Technology paper_content: In comparison with above-ground structures, the investigation of underground space structures still faces great challenges because of the extremely complicated constitutive relationships of the soils or rocks. Implementation of structural health monitoring (SHM) systems on the underground structures such as tunnels commencing from the construction stage may be of help in understanding their operational behaviors and long-term trends. This paper explores the application of the fiber Bragg grating (FBG) sensing technology for safety monitoring during railway tunnel construction. An FBG-based temperature monitoring system is first developed for real-time temperature measurement of the frozen soils during freezing construction of a metro-tunnel cross-passage. Through in-situ deployment of FBG-based liquid-level sensors, the subgrade settlement of a segment of a high-speed rail line is then monitored in an automatic manner during construction of an undercrossing tunnel. The field results indicate that the FBG ... --- paper_title: Statistical analysis of stress spectra for fatigue life assessment of steel bridges with structural health monitoring data paper_content: Abstract This paper aims at developing a monitoring-based method for fatigue life assessment of steel bridges with use of long-term monitoring data of dynamic strain. A standard daily stress spectrum is derived by statistically analyzing the stress spectra accounting for highway traffic, railway traffic, and typhoon effects. The optimal number of daily strain data for derivation of the standard daily stress spectrum is determined by examining the predominant factors which affect the prediction of fatigue life. With the continuously measured dynamic strain responses from the instrumented Tsing Ma Bridge carrying both highway and railway traffic, the proposed method is exemplified to evaluate the fatigue life of fatigue-critical welded details on the bridge. --- paper_title: Multistep Explicit Stereo Camera Calibration Approach to Improve Euclidean Accuracy of Large-Scale 3D Reconstruction paper_content: Abstract The spatial accuracy of point clouds generated from a mobile stereo camera set is very sensitive to the intrinsic and extrinsic camera parameters (i.e., camera calibration) used in stereo image-based three-dimensional (3D) reconstruction methods. The existing camera calibration algorithms induce a significant amount of error owing to poor estimation accuracy in camera parameters when they are used for large-scale scenes such as mapping civil infrastructure. This leads to higher uncertainties in the location of 3D points, and may result in the failure of the whole reconstruction process. This paper proposes a novel procedure to address this problem.
It hypothesizes that a set of multiple calibration parameters created by videotaping a moving calibration pattern along a specific path can increase overall calibration accuracy and ultimately enhance the Euclidean accuracy of the generated point cloud. This is achieved by using conventional camera calibration algorithms to perform separate estimations ... --- paper_title: A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure paper_content: Visual inspection of civil infrastructure is essential for condition assessment. We focus on concrete bridges, tunnels, underground pipes, and asphalt pavements. Accordingly, we review the latest computer vision based defect detection methods. Using computer vision, most relevant types of defects can be automatically detected. Automatic defect properties retrieval and assessment has not been achieved yet. To ensure the safety and the serviceability of civil infrastructure it is essential to visually inspect and assess its physical and functional condition. This review paper presents the current state of practice of assessing the visual condition of vertical and horizontal civil infrastructure; in particular of reinforced concrete bridges, precast concrete tunnels, underground concrete pipes, and asphalt pavements. Since the rate of creation and deployment of computer vision methods for civil engineering applications has been exponentially increasing, the main part of the paper presents a comprehensive synthesis of the state of the art in computer vision based defect detection and condition assessment related to concrete and asphalt civil infrastructure. Finally, the current achievements and limitations of existing methods as well as open research challenges are outlined to assist both the civil engineering and the computer science research community in setting an agenda for future research. --- paper_title: Local positioning systems versus structural monitoring: a review paper_content: Structural monitoring and structural health monitoring could take advantage of different devices to record the static or dynamic response of a structure. A positioning system provides displacement information on the location of moving objects, which is assumed to be the basic support to calibrate any structural mechanics model. The global positioning system could provide satisfactory accuracy in absolute displacement measurements. But the requirements of an open area position for the antennas and a roofed room for its data storage and power supply limit its flexibility and its applications. Several efforts have been made to extend its field of application. The alternative is the local positioning system. Non-contact sensors can be easily installed on existing infrastructure in different locations without changing their properties: several technological approaches have been exploited: laser-based, radar-based, vision-based, etc. In this paper, a number of existing options, together with their performances, are reviewed. --- paper_title: Multi-image stitching and scene reconstruction for evaluating defect evolution in structures paper_content: It is well-recognized that civil infrastructure monitoring approaches that rely on visual approaches will continue to be an important methodology for condition assessment of such systems.
Current inspection standards for structures such as bridges require an inspector to travel to a target structure site and visually assess the structure's condition. A less time-consuming and inexpensive alternative to current visual monitoring methods is to use a system that could inspect structures remotely and also more frequently. This article presents and evaluates the underlying technical elements for the development of an integrated inspection software tool that is based on the use of inexpensive digital cameras. For this purpose, digital cameras are appropriately mounted on a structure (e.g., a bridge) and can zoom or rotate in three directions (similar to traffic cameras). They are remotely controlled by an inspector, which allows the visual assessment of the structure's condition by looking at images captured by... --- paper_title: Yarn features extraction using image processing and computer vision: a study with cotton and polyester yarns paper_content: Abstract The aim of this paper is the development of a new technological solution, for the automatic characterization of the yarn mass parameters (linear mass, diameter, and hairiness) based on image processing and computer vision techniques. A preliminary study for the detection and distinction between loop and protruding fibers is also presented. A custom-made application developed in LabVIEW from National Instruments with the IMAQ Vision Toolkit was used to acquire, analyze and process the yarn images. Some experimental results using cotton and polyester yarns were performed and compared with a commercial solution for validation. The presented approach allows a correct yarn parameterization improving products’ quality in the textile industry. --- paper_title: Experimental Study of Dynamic Crack Growth in Unidirectional Graphite/Epoxy Composites using Digital Image Correlation Method and High-speed Photography paper_content: In this work, fracture behavior of multilayered unidirectional graphite/epoxy composite (T800/3900-2) materials is investigated. Rectangular coupons with a single-edged notch are studied under geometrically symmetric loading configurations and impact loading conditions. The notch orientation parallel to or at an angle to the fiber orientation is considered to produce mode-I or mixed-mode (mode-I and -II) fracture. Feasibility of studying stress-wave induced crack initiation and rapid crack growth in fiber-reinforced composites using the digital image correlation method and high-speed photography is demonstrated. Analysis of photographed random speckles on specimen surface provides information pertaining to crack growth history as well as surface deformations in the crack-tip vicinity. Measured deformation fields are used to estimate mixed-mode fracture parameters and examine the effect of fiber orientation (β) on crack initiation and growth behaviors. The samples show differences in fracture responses dep... --- paper_title: Close-range photogrammetry applications in bridge measurement: Literature review paper_content: Abstract Close-range photogrammetry has found many diverse applications in the fields of industry, biomechanics, chemistry, biology, archaeology, architecture, automotive, and aerospace, as well as accident reconstruction. Although close-range photogrammetry has not been as popular in bridge engineering as in other fields, the investigations that have been conducted demonstrate the potential of this technique. 
The availability of inexpensive, off-the-shelf digital cameras and soft-copy, photogrammetry software systems has made close-range photogrammetry much more feasible and affordable for bridge engineering applications. To increase awareness of the use of this powerful non-contact, non-destructive technique in the bridge engineering field, this paper presents a literature review on the basic development of close-range photogrammetry and briefly describes previous work related to bridge deformation and geometry measurement; structural test monitoring; and historic documentation. The major aspects of photogrammetry bridge measurement are covered starting from the late 1970s and include a description of measurement types, cameras, targets, network control, and software. It is shown that early applications featured the use of metric cameras (specially designed for photogrammetry purposes), diffuse targets (non-retroreflective), stereoscopic photogrammetry network layout, and analog analytical tools, which transformed over time to the use of non-metric cameras, retro-reflective targets, highly convergent network layout, and digital computerized analytical tools. --- paper_title: Monitoring-Based Fatigue Reliability Assessment of Steel Bridges: Analytical Model and Application paper_content: A fatigue reliability model which integrates the probability distribution of hot spot stress range with a continuous probabilistic formulation of Miner's damage cumulative rule is developed for fatigue life and reliability evaluation of steel bridges with long-term monitoring data. By considering both the nominal stress obtained by measurements and the corresponding stress concentration factor (SCF) as random variables, a probabilistic model of the hot spot stress is formulated with the use of the S-N curve and the Miner's rule, which is then used to evaluate the fatigue life and failure probability with the aid of structural reliability theory. The proposed method is illustrated using the long-term strain monitoring data from the instrumented Tsing Ma Bridge. A standard daily stress spectrum accounting for highway traffic, railway traffic, and typhoon effects is derived by use of the monitoring data. Then global and local finite element models (FEMs) of the bridge are developed for numerically calculating the SCFs at fatigue-susceptible locations, while the stochastic characteristics of SCF for a typical welded T-joint are obtained by full-scale model experiments of a railway beam section of the bridge. A multimodal probability density function (PDF) of the stress range is derived from the monitoring data using the finite mixed Weibull distributions in conjunction with a hybrid parameter estimation algorithm. The failure probability and reliability index versus fatigue life are achieved from the obtained joint PDF of the hot spot stress in terms of the nominal stress and SCF random variables. --- paper_title: Modeling of Stress Spectrum Using Long-Term Monitoring Data and Finite Mixture Distributions paper_content: This study focuses on how to exploit long-term monitoring data of structural strain for analytical modeling of multimodal rainflow-counted stress spectra by use of the method of finite mixture distributions in conjunction with a hybrid mixture parameter estimation algorithm. The long-term strain data acquired from an instrumented bridge carrying both highway and railway traffic is used to verify the procedure. 
A wavelet-based filtering technique is first applied to eliminate the temperature effect inherent in the measured strain data. The stress spectrum is obtained by extracting the stress range and mean stress from the stress time histories with the aid of a rainflow counting algorithm. By synthesizing the features captured from daily stress spectra, a representative sample of stress spectrum accounting for multiple loading effects is derived. Then, the modeling of the multimodal stress range is performed by use of finite mixed normal, lognormal, and Weibull distributions, with the best mixed distributi... --- paper_title: Vision Metrology And Three Dimensional Visualization In Structural Testing And Monitoring paper_content: This paper describes initial work carried out to advance the Erse of vision metrology and three dimensional visualization techniques in structural testing and monitoring. A description is given of a methodology for semi-automatically generating dynamic three dimensional computer graphics models, which is employed to provide a three dimensional spatial framework for conventional engineering instrumentation. The use of techniques developed for two structural measurement applications is explained and opportunities for current and future research are reported. --- paper_title: Dynamic displacement measurement of large-scale structures based on the Lucas–Kanade template tracking algorithm paper_content: Abstract The development of optics and computer technologies enables the application of the vision-based technique that uses digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, vision-based technique allows for remote measurement, has a non-intrusive characteristic, and does not necessitate mass introduction. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images in computer, the Lucas–Kanade template tracking algorithm in the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and improve the efficiency further. The modified algorithm can rapidly accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structures in advance. The accuracy and the efficiency of the system in the remote measurement of dynamic displacement are demonstrated in the experiments on motion platform and sound barrier on suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signal and accomplish the vibration measurement of large-scale structures. --- paper_title: Systematic errors in digital image correlation due to undermatched subset shape functions paper_content: Digital image correlation techniques are commonly used to measure specimen displacements by finding correspondences between an image of the specimen in an undeformed or reference configuration and a second image under load. To establish correspondences between the two images, numerical techniques are used to locate an initially square image subset in a reference image within an image taken under load. During this process, shape functions of varying order can be applied to the initially square subset. 
Zero order shape functions permit the subset to translate rigidly, while first-order shape functions represent an affine transform of the subset that permits a combination of translation, rotation, shear and normal strains. --- paper_title: Accurate and robust ROI localization in a camshift tracking application paper_content: Camshift has been well accepted as one of the most popular methods for object tracking. However, it fails to address complex situations, such as similar color interference, object occlusion, and illumination changes. In this paper, we enhance Camshift to enable it to handle the above-mentioned problems. A two-dimensional (2D) histogram of the hue and luminance is used for the color features of the target. To reduce the influence from irrelevant background pixels, a Flood-fill operation is implemented. The obtained 2D target model can precisely describe the target as well as achromatic points. A similarity score is evaluated to prevent similar color interference. When a target's colors are not distinguishable from the background colors, motion information will contribute to the tracking task. Finally, an average rate change is adopted to maintain progressive but not abrupt changes in the window size. The proposed algorithm has been extensively tested. The results are satisfactory while maintaining the processing in real time. --- paper_title: A vision-based system for dynamic displacement measurement of long-span bridges: algorithm and verification paper_content: Dynamic displacement of structures is an important index for in-service structural condition and behavior assessment, but accurate measurement of structural displacement for large-scale civil structures such as long-span bridges still remains as a challenging task. In this paper, a vision-based dynamic displacement measurement system with the use of digital image processing technology is developed, which is featured by its distinctive characteristics in non-contact, long-distance, and high-precision structural displacement measurement. The hardware of this system is mainly composed of a high-resolution industrial CCD (charge-coupled-device) digital camera and an extended-range zoom lens. Through continuously tracing and identifying a target on the structure, the structural displacement is derived through cross-correlation analysis between the predefined pattern and the captured digital images with the aid of a pattern matching algorithm. To validate the developed system, MTS tests of sinusoidal motions under different vibration frequencies and amplitudes and shaking table tests with different excitations (the El-Centro earthquake wave and a sinusoidal motion) are carried out. Additionally, in-situ verification experiments are performed to measure the mid-span vertical displacement of the suspension Tsing Ma Bridge in the operational condition and the cable-stayed Stonecutters Bridge during loading tests. The obtained results show that the developed system exhibits an excellent capability in real-time measurement of structural displacement and can serve as a good complement to the traditional sensors. --- paper_title: Vision-based structural displacement measurement: System performance evaluation and influence factor analysis paper_content: Abstract In the past decade, the emerging machine vision-based measurement technology has gained great concerns among civil engineers due to its overwhelming merits of non-contact, long-distance, and high-resolution. 
A critical issue regarding the measurement performance and accuracy of the vision-based system is how to identify and eliminate the systematic and unsystematic error sources. In this paper, a vision-based structural displacement measurement system integrated with a digital image processing approach is developed. The performance of the developed vision-based system is evaluated by comparing the results simultaneously obtained by the vision-based system and those measured by the magnetostrictive displacement sensor (MDS). A series of experiments are conducted on a shaking table to examine the influence factors which will affect the accuracy and stability of the vision-based system. It is demonstrated that illumination and vapor have a critical effect on the measurement results of the vision-based system. --- paper_title: Multi-point displacement monitoring of bridges using a vision-based approach paper_content: To overcome the drawbacks of the traditional contact-type sensor for structural displacement measurement, the vision-based technology with the aid of the digital image processing algorithm has received increasing attention from the community of structural health monitoring (SHM). The advanced vision-based system has been widely used to measure the structural displacement of civil engineering structures due to its overwhelming merits of non-contact, long-distance, and high-resolution. However, few currently available vision-based systems are capable of realizing the synchronous structural displacement measurement for multiple points on the investigated structure. In this paper, the method for vision-based multi-point structural displacement measurement is presented. A series of moving loading experiments on a scale arch bridge model are carried out to validate the accuracy and reliability of the vision-based system for multi-point structural displacement measurement. The structural displacements of five points on the bridge deck are measured by the vision-based system and compared with those obtained by the linear variable differential transformer (LVDT). The comparative study demonstrates that the vision-based system is deemed to be an effective and reliable means for multi-point structural displacement measurement. --- paper_title: Image-based structural dynamic displacement measurement using different multi-object tracking algorithms paper_content: With the help of advanced image acquisition and processing technology, the vision-based measurement methods have been broadly applied to implement the structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms are developed to overcome the problems in conventional structural dynamic displacement measurement. This paper presents three kinds of image processing algorithms for structural dynamic displacement measurement, i.e., the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three image processing algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are simultaneously measured by the vision-based system and the magnetostrictive displacement sensor (MDS) during the laboratory shaking table tests of a three-story steel frame model.
The comparative analysis results indicate that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement by use of the three different image processing algorithms. The field application experiments are also carried out on an arch bridge for the measurement of displacement influence lines during the loading tests to validate the effectiveness of the vision-based system. --- paper_title: Calibration methodology of a vision system for measuring the displacements of long-deck suspension bridges paper_content: SUMMARY ::: ::: Structural health monitoring is an emergent powerful diagnostic tool that can be used to identify and to prevent possible failures of the various components that comprise an infrastructure. In a suspension bridge, the measurement of the vertical and transversal displacements plays an important role for its safety evaluation. Considering the restrictions usually found on these structures, an enhanced solution comprises a non-contact vision-based measurement system with dynamic response, accuracy and amplitude range well suited to the physical phenomenon to measure. ::: ::: ::: ::: A methodology to perform the vision system calibration is described, which can be carried out in situ, while the deck is moving, requiring little effort and a minimum set of information. The key idea is to use the factorization method to get an initial estimation of the object shape and camera's parameters, and to incorporate the knowledge about the distances between the calibration targets in a non-linear optimization process to achieve a metric shape reconstruction and optimize the camera's parameters estimation. ::: ::: ::: ::: Results obtained by numerical simulation experiments are reported, showing the negligible influence of noisy calibration data in the calibration performance and also the robustness of the calibration process on the camera's parameters estimation. Tests carried out to assess the calibration and tracking of the bridge deck motion showed that, even in an environmental severely affected by noise, it is possible to measure the vertical and transversal displacements of the bridge deck with excellent levels of accuracy. Copyright © 2011 John Wiley & Sons, Ltd. --- paper_title: Dynamic displacement measurement of large-scale structures based on the Lucas–Kanade template tracking algorithm paper_content: Abstract The development of optics and computer technologies enables the application of the vision-based technique that uses digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, vision-based technique allows for remote measurement, has a non-intrusive characteristic, and does not necessitate mass introduction. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images in computer, the Lucas–Kanade template tracking algorithm in the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and improve the efficiency further. The modified algorithm can rapidly accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structures in advance. 
The accuracy and the efficiency of the system in the remote measurement of dynamic displacement are demonstrated in the experiments on motion platform and sound barrier on suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signal and accomplish the vibration measurement of large-scale structures. --- paper_title: Vision-Based Displacement Sensor for Monitoring Dynamic Response Using Robust Object Search Algorithm paper_content: This study develops a vision-based displacement measurement system for remote monitoring of vibration of large-size structures such as bridges and buildings. The system consists of one or multiple video cameras and a notebook computer. With a telescopic lens, the camera placed at a stationary point away from a structure captures images of an object on the structure. The structural displacement is computed in real time through processing the captured images. A robust object search algorithm developed in this study enables accurate measurement of the displacement by tracking existing features on the structure without requiring a conventional target panel to be installed on the structure. The efficacy of the system in remote measurement of dynamic displacements was demonstrated through a shaking table test and a field experiment on a long-span bridge. --- paper_title: Vertical Displacement Measurements for Bridges Using Optical Fiber Sensors and CCD Cameras — A Preliminary Study paper_content: Bridge managers all over the world are always looking for simple ways to measure bridge vertical displacements for structural health monitoring. However, traditional methods to obtain such data are either tedious or expensive. There is a need to develop a simple, inexpensive, and yet practical method to measure bridge vertical displacements. This paper proposes two methods using either optical fiber (FBG) sensors or a charge-coupled-device (CCD) camera, respectively, for vertical displacement measurements of bridges. The FBG sensor method is based on the measured horizontal strains together with the identified curvature functions obtained by a self-developed FBG Tilt sensor. CCD cameras use a large number of pixels to form an image. The CCD camera method utilizes image processing techniques for pixel identification and subsequent edge detection. A preliminary study to validate the proposed methods in laboratory was presented. The tests include applying the methods to determine the vertical displacements s... --- paper_title: Vibration Monitoring of Multiple Bridge Points by Means of a Unique Vision-Based Measuring System paper_content: Bridge static and dynamic vibration monitoring is a key activity for both safety and maintenance purposes. The development of vision-based systems allows to use this type of devices for remote estimation of a bridge vibration, simplifying the measuring system installation. The uncertainty of this type of measurements is strongly related to the experimental conditions (mainly the pixel-to-millimeters conversion, the target texture, the camera characteristics and the image processing technique). In this paper two different types of cameras are used to monitor the response of a bridge to a train pass-by. The acquired images are analyzed using three different image processing techniques (Pattern Matching, Edge Detection and Digital Image Correlation) and the results are compared with a reference measurement, obtained by a laser interferometer providing single point measurements. 
Tests with different zoom levels are shown and the corresponding uncertainty values are estimated. As the zoom level decreases it is possible not only to measure the displacement of one point of the bridge, but also to grab images from a wide structure portion in order to recover displacements of a large number of points in the field of view. The extreme final solution would be having wide area measurements with no targets, to make measurements really easy, with clear advantages, but also with some drawbacks in terms of uncertainty to be fully comprehended. --- paper_title: Remote sensing of building structural displacements using a microwave interferometer with imaging capability paper_content: Phase interference of microwave images has been experimented for remote submillimeter-accuracy detection of structural displacements of a real-scale building, subject to tensional stress. The images are obtained by a synthetic-aperture interferometric radar, making use of continuous-wave step-frequency waveform. Phase information of the synthesized microwave images is exploited for detecting displacements of the illuminated structure. --- paper_title: Seismic structural displacement measurement using a line-scan camera: camera-pattern calibration and experimental validation paper_content: This article examines high-speed, line-scan cameras as a robust and high-speed displacement sensor for a range of seismic monitoring applications. They have the additional benefit of requiring no invasive mechanisms or added processing to provide a high-resolution output measure, and do not interfere architecturally. Following the method proposed by Lim et al. for measuring foundation pile movements, multiple displacements and motions of any structure can be determined in real time at rates over 1 kHz using only one high-speed, line-scan camera and a special pattern. This resolution is more than sufficient for structural monitoring and control problems. Moreover, a novel edge tracking algorithm is proposed that enables high-resolution measurement of large motions using relatively low-resolution, line-scan cameras. Further, as the accuracy of the measurement results depends directly on camera-pattern calibration and satisfying the assumptions made by Lim et al., an easy-to-implement calibration procedure is developed that ensures the accuracy of the measurement results. Finally, versatility of the total measurement procedure is examined through both harmonic and random vibration experiments with a suite of different input motions applied to a computer-controlled cart. Comparing the input and the measured motions confirms that vision-based structural displacement measurement utilising a high-speed, line-scan camera offers a robust, high-resolution and low-cost means of measuring structural vibration displacements. --- paper_title: 3D displacement measurement model for health monitoring of structures using a motion capture system paper_content: Abstract Unlike 1D or 2D displacement measurement sensors, a motion capture system (MCS) can determine the movement of markers in any direction precisely. In addition, an MCS can overcome the limitations of the sampling frequency in 3D measurements by terrestrial laser scanning (TLS) and global positioning system (GPS). This paper presents a method to measure three dimensional (3D) structural displacements using a motion capture system (MCS) with a high accuracy and sampling rate. 
The MCS measures 2D coordinates of a number of markers with multiple cameras; these measurements are then used to calculate the 3D coordinates of markers. Therefore, unlike previous 1D or 2D displacement measurement sensors, the MCS can determine precisely the movement of markers in any direction. In addition, since the MCS cameras can monitor several markers, measurement points are increased by the addition of more markers. The effectiveness of the proposed model was tested by comparing the displacements measured in a free vibration experiment of a 3-story structure with a height of 2.1 m using both the MCS and laser displacement sensors. --- paper_title: Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system paper_content: Abstract This article describes the development of a non-contact dynamic displacement measurement system for railway bridges based on video technology. The system, consisting of a high speed video camera, an optical lens, lighting lamps and a precision target, can perform measurements with acquisition frame rates ranging from 64 fps to 500 fps, and be articulated with other measurement systems, which promotes its integration in structural health monitoring (SHM) systems. Preliminary tests of the system have shown that the measurements’ precision can be affected by: (i) movements of the camera stand and, therefore, rigid supports should be used and the camera should be protected from air flows; (ii) the distortion of the field of view, caused by the flow of heat waves generated by IR incandescent lighting and, therefore, the operating time of the lamps should be limited. The system was used to measure the displacement of a railway bridge’s deck, induced by the passage of trains at speeds between 120 km/h and 180 km/h, yielding a very good agreement between the results of displacement measurement obtained with the video system and with a LVDT. The achieved precision was below 0.1 mm for distances from the camera to the target up to 15 m, and in the order of 0.25 mm for a distance of 25 m. The application of an image processing technique at subpixel level resulted in real precisions generally inferior to the theoretical precisions. --- paper_title: Speckle pattern quality assessment for digital image correlation paper_content: Abstract To perform digital image correlation (DIC), each image is divided into groups of pixels known as subsets or interrogation cells. Larger interrogation cells allow greater strain precision but reduce the spatial resolution of the data field. As such the spatial resolution and measurement precision of DIC are limited by the resolution of the image. In the paper the relationship between the size and density of speckles within a pattern is presented, identifying that the physical properties of a pattern have a large influence on the measurement precision which can be obtained. These physical properties are often overlooked by pattern assessment criteria which focus on the global image information content. To address this, a robust morphological methodology using edge detection is devised to evaluate the physical properties of different speckle patterns with image resolutions from 23 to 705 pixels/mm. Trends predicted from the pattern property analysis are assessed against simulated deformations identifying how small changes to the application method can result in large changes in measurement precision. 
An example of the methodology is included to demonstrate that the pattern properties derived from the analysis can be used to indicate pattern quality and hence minimise DIC measurement errors. Experiments are described that were conducted to validate the findings of morphological assessment and the error analysis. --- paper_title: Systematic errors in digital image correlation caused by intensity interpolation paper_content: Recently, digital image correlation as a tool for surface defor- mation measurements has found widespread use and acceptance in the field of experimental mechanics. The method is known to reconstruct displacements with subpixel accuracy that depends on various factors such as image quality, noise, and the correlation algorithm chosen. How- ever, the systematic errors of the method have not been studied in detail. We address the systematic errors of the iterative spatial domain cross- correlation algorithm caused by gray-value interpolation. We investigate the position-dependent bias in a numerical study and show that it can lead to apparent strains of the order of 40% of the actual strain level. Furthermore, we present methods to reduce this bias to acceptable lev- els. © 2000 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(00)00911-9) --- paper_title: Recognition of camera-captured low-quality characters using motion blur information paper_content: Camera-based character recognition has gained attention with the growing use of camera-equipped portable devices. One of the most challenging problems in recognizing characters with hand-held cameras is that captured images undergo motion blur due to the vibration of the hand. Since it is difficult to remove the motion blur from small characters via image restoration, we propose a recognition method without de-blurring. The proposed method includes a generative learning method in the training step to simulate blurred images by controlling blur parameters. The method consists of two steps. The first step recognizes the blurred characters based on the subspace method, and the second one reclassifies structurally similar characters using blur parameters estimated from the camera motion. We have experimentally proved that the effective use of motion blur improves the recognition accuracy of camera-captured characters. --- paper_title: Study of systematic errors in strain fields obtained via DIC using heterogeneous deformation generated by plastic FEA paper_content: In this article, systematic errors that arise in the derivation of strain fields based on displacement fields obtained by digital image correlation (DIC) are analysed. Attention is paid to errors that arise from different implementations of the DIC technique. In particular, we investigate the influence of the shape function, the interpolation order and the subset size on the derived strains. In addition, we focus on errors that can be directly attributed to the derivation of the strain fields, e.g. the strain-window size and the strain-window interpolation order. The errors are estimated using numerically deformed images that were obtained by imposing finite element (FE) displacement fields on an undeformed image yielding plastic deformation of the specimen. This FE procedure simulates realistic experimental heterogeneous deformations at various load steps. It is shown that the errors on the strain fields can be substantially reduced if conscious choices in the abovementioned implementations are made. 
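As a generic illustration of the subset matching that the DIC entries above analyse, the sketch below performs an integer-pixel search with the zero-normalized cross-correlation (ZNCC) criterion followed by a parabolic sub-pixel refinement on a synthetic speckle image. It is only a minimal sketch under stated assumptions (pure translation, arbitrary subset half-width and search range), not the correlation algorithm of any cited paper.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref, cur, center, half=10, search=5):
    """Find the integer displacement of a (2*half+1)^2 reference subset in the
    current image, then refine the correlation peak to sub-pixel accuracy with
    a 1-D parabolic fit in each direction."""
    cy, cx = center
    tpl = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    scores = np.full((2 * search + 1, 2 * search + 1), -1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[cy + dy - half:cy + dy + half + 1,
                      cx + dx - half:cx + dx + half + 1]
            scores[dy + search, dx + search] = zncc(tpl, win)
    iy, ix = np.unravel_index(np.argmax(scores), scores.shape)

    def parabolic(s, i):
        # three-point parabolic interpolation around the discrete peak
        if 0 < i < len(s) - 1:
            denom = s[i - 1] - 2 * s[i] + s[i + 1]
            return i + 0.5 * (s[i - 1] - s[i + 1]) / denom if denom != 0 else float(i)
        return float(i)

    sub_y = parabolic(scores[:, ix], iy) - search
    sub_x = parabolic(scores[iy, :], ix) - search
    return sub_x, sub_y  # displacement (u, v) in pixels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((200, 200))                     # synthetic speckle image
    cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # shift rows by +3, columns by -2
    print(match_subset(ref, cur, center=(100, 100)))  # ~(-2.0, 3.0)
```

The subset size and search range in this toy example are the kind of parameters whose influence on random and systematic errors the cited studies quantify.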
--- paper_title: The errors in digital image correlation due to overmatched shape functions paper_content: In subset-based digital image correlation (DIC), a proper shape function must be chosen to approximate the underlying displacement field of the target subsets to ensure an accurate subset matching. Shape functions with varying orders of Taylor's expansions (e.g. zero-, first- and second-order) have been proposed. However, since the actual deformation occurring in the deformed subsets cannot be known a priori in most practical measurements, problems of mismatch (undermatched or overmatched) inevitably arise, which lead to additional errors in the measurement displacements. Although systematic errors due to undermatched shape functions have been thoroughly studied, the displacement errors associated with overmatched shape functions are still not sufficiently clear to us. In this paper, the systematic and random errors caused by the use of overmatched shape functions are first examined using numerical translation tests with precisely controlled subpixel motions. The results reveal that overmatched shape functions will not introduce additional systematic error but can induce increased random error, while the latter is negligibly small when a larger subset is used. Thus, it can be made explicit that second-order shape functions, capable of depicting more complex local deformation, can be used for practical DIC applications as a default, because this effectively eliminates the possible systematic error associated with the use of first-order shape functions, while the possibly increased random errors are small enough if using a relatively larger subset. Two sets of images with inhomogeneous deformation are also used to verify the accuracy of DIC measurements using second-order shape functions. --- paper_title: Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise paper_content: Low light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of spatially varying blur kernel as well as a denoised and deblurred image from a single noisy and object motion blurred image. A proposed method takes the advantage of the sparse representation of double discrete wavelet transform-a generative model of image blur that simplifies the wavelet analysis of a blurred image-and the Bayesian perspective of modeling the prior distribution of the latent sharp wavelet coefficient and the likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderate noise and severely blurred images using simulated and real camera data. --- paper_title: Dynamics-based motion de-blurring for a PZT-driven, compliant camera orientation mechanism paper_content: This paper proposes a method for removing motion blur from images captured by a fast-moving robot eye. Existing image techniques focused on recovering blurry images due to camera shake with long exposure time. In addition, previous studies relied solely on properties of the images or used external sensors to estimate a blur kernel, or point spread function PSF. This paper focuses on estimating a latent image from the blur images taken by the robotic camera orientation system. A PZT-driven, compliant camera orientation system was employed to demonstrate the effectiveness of this approach. 
Discrete switching commands were given to the robotic system to create a rapid point-to-point motion while suppressing the vibration with a faster response. The blurry images were obtained when the robotic system created a rapid point-to-point motion, like human saccadic motion. This paper proposes a method for estimating the PSF in knowledge of system dynamics and input commands, resulting in a faster estimation. The proposed method was investigated under various motion conditions using the single-degree-of-freedom camera orientation system to verify the effectiveness and was compared with other approaches quantitatively and qualitatively. The experiment results show that overall the performance metric of the proposed method was 27.77% better than conventional methods. The computation time of the proposed method was 50 times faster than that of conventional methods. --- paper_title: Assessment of measuring errors in DIC using deformation fields generated by plastic FEA paper_content: In this article, systematic errors that arise from different implementations of digital image correlation (DIC) techniques are analyzed. In particular, we investigate the influence of the adopted correlation function, the interpolation order, the shape function and the subset size on the derived displacements. These errors are estimated using numerically deformed images that were obtained by imposing finite element (FE) displacement fields on an undeformed image yielding plastic deformation of the specimen. This FE procedure simulates realistic experimental heterogeneous deformations at various load steps. It is shown that DIC is able to reproduce these displacements up to a satisfactory level if conscious choices in the above-mentioned implementations are made. --- paper_title: Lens distortion correction for digital image correlation by measuring rigid body displacement paper_content: A method of lens distortion correction is proposed in order to improve the measurement accuracy of digital image correlation for two-dimensional displacement measurement. The amounts of lens distortion are evaluated from displacement distributions obtained in a rigid body in-plane translation or rotation test. After detecting the lens distortion, its coefficient is determined using the method of least squares. Then, the corrected displacement distributions are obtained. The effectiveness of the proposed method is demonstrated by applying the correction method to an in-plane translation test and tension tests. The experimental results show that the proposed distortion correction method eliminates the effect of lens distortion from measured displacements. --- paper_title: Systematic errors in digital image correlation due to undermatched subset shape functions paper_content: Digital image correlation techniques are commonly used to measure specimen displacements by finding correspondences between an image of the specimen in an undeformed or reference configuration and a second image under load. To establish correspondences between the two images, numerical techniques are used to locate an initially square image subset in a reference image within an image taken under load. During this process, shape functions of varying order can be applied to the initially square subset. Zero order shape functions permit the subset to translate rigidly, while first-order shape functions represent an affine transform of the subset that permits a combination of translation, rotation, shear and normal strains. 
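For reference, the zero- and first-order subset shape functions discussed in the entry above are usually written as follows (standard subset-based DIC notation, not formulas taken from the cited paper), with u, v the subset-centre displacements and u_x, u_y, v_x, v_y the displacement gradients:

```latex
% zero-order (rigid translation of the subset)
x' = x + u, \qquad y' = y + v
% first-order (affine: translation, rotation, shear and normal strain)
x' = x + u + u_x \,\Delta x + u_y \,\Delta y, \qquad
y' = y + v + v_x \,\Delta x + v_y \,\Delta y
```

Here (Δx, Δy) are the local coordinates of a pixel within the subset; an undermatched choice (for example, the zero-order form under rotation or strain) biases the recovered subset-centre displacement, which is the systematic error the entry quantifies.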
--- paper_title: Use of rigid-body motion for the investigation and estimation of the measurement errors related to digital image correlation technique paper_content: The aim of this work is to investigate the sources of errors related to digital image correlation (DIC) technique applied to strain measurements. The knowledge of such information is important before the measured kinematic fields can be exploited. After recalling the principle of DIC, some sources of errors related to this technique are listed. Both numerical and experimental tests, based on rigid-body motion, are proposed. These tests are simple and easy-to-implement. They permit to quickly assess the errors related to lighting, the optical lens (distortion), the CCD sensor, the out-of-plane displacement, the speckle pattern, the grid pitch, the size of the subset and the correlation algorithm. The errors sources that cannot be uncoupled were estimated by amplifying their contribution to the global error. The obtained results permit to address a classification of the error related to the used equipment. The paper ends by some suggestions proposed in order to minimize the errors. --- paper_title: Experimental Analysis of the Errors due to Polynomial Interpolation in Digital Image Correlation paper_content: Digital image correlation attempts to estimate displacement fields by digitally correlating two images acquired before and after motion. To do so, pixel intensity has to be interpolated at non-integer locations. The ideal interpolator is the sinc, but as it requires infinite support, it is not normally used and is replaced by polynomials. Polynomial interpolation produces visually appealing results but introduces positional errors in the signal, thus causing the digital image correlation algorithms to converge to incorrect results. In this work, an experimental campaign is described, that aims to characterise the errors introduced by interpolation, focusing in particular on the systematic error and the standard deviation of displacements. --- paper_title: The systematic error in digital image correlation induced by self-heating of a digital camera paper_content: The systematic strain measurement error in digital image correlation (DIC) induced by self-heating of digital CCD and CMOS cameras was extensively studied, and an experimental and data analysis procedure has been proposed and two parameters have been suggested to examine and evaluate this. Six digital cameras of four different types were tested to define the strain errors, and it was found that each camera needed between 1 and 2 h to reach a stable heat balance, with a measured temperature increase of around 10 ?C. During the temperature increase, the virtual image expansion will cause a 70?230 ?? strain error in the DIC measurement, which is large enough to be noticed in most DIC experiments and hence should be eliminated. --- paper_title: Study of optimal subset size in digital image correlation of speckle pattern images paper_content: This paper investigates the effect of subset size, associated with image pattern quality and subset displacement functions, on the accuracy of deformation measurements by digital image correlation(DIC). A concept of subset entropy is introduced in this work to quantify the subset image pattern quality for DIC analysis and its effectiveness was demonstrated by experimental studies. 
By employing white-light images with almost uniform subset entropy and first-order displacement functions, the effect of subset size on DIC analysis was investigated for the deformation cases of translation, uniform deformation, and simulated quadratic deformation, respectively. The results show that the chosen subset size must be large enough for precise displacement measurements when subset displacement functions match underlying actual deformation. On the other hand, optimal subset size in DIC for nonhomogeneous deformation measurements appears as a result of a tradeoff between the influence of random errors and systematic errors. --- paper_title: Motion-blur evaluation: A comparison of approaches paper_content: In this paper, the results obtained from two independent evaluations of motion-blur effects with respect to the agreement between the two different approaches used, imaging and non-imaging, are analyzed. The measurements have been carried out in different laboratories by different operators without the prior intention of a subsequent analysis as presented here. The resulting data is analyzed to quantify the repeatability of each instrument and, in a second step, the comparability of results from the two approaches is investigated. The imaging approach used in these experiments is based on a stationary high-speed camera with temporal oversampling and numerical image-data processing to obtain the intensity distribution on the retina of an observer under the condition of smooth pursuit eye tracking. Results from that approach are compared to results obtained from the evaluation of step responses acquired with optical transient recorders by frame-period convolution. Measurements are carried out with a first LCD monitor with test patterns of both contrast polarities, with three velocities of translation, and four levels of gray. A second object of measurement is used for investigation of the effect of operator intervention in the process of evaluation of the imaging approach, especially on the determination of the reference levels that are needed for evaluation of the normalized blurred edge (NBE). Possible sources of uncertainties are identified for all approaches and instruments. Based on the analysis of that data, the practicability of step-response-based evaluations of the “blurred edge width/time” compared to the results obtained using the high-speed imaging approach, as long as there is no motion-dependent image processing, are confirmed. --- paper_title: Restoration of TDI camera images with motion distortion and blur paper_content: Platform movement during exposure of an imaging system severely degrades image quality. In the case of Time delay and integration (TDI) camera, abnormal movements cause not only image blur but also distortion, because the image Point Spread Function (PSF) is space-variant. In this paper, we present a motion degradation model of TDI image, and provide a method to restore such degraded image. While a TDI camera is imaging, it outputs images row by row (or line by line) along the scanning axis, and our method processes in the same track. We first calculate the space-invariant PSF of each row using the movement information of the TDI camera. Then, we substitute pixels of the row and the ones of their neighbor rows together with the PSF into the standard Richardson–Lucy algorithm. By deconvolving we get the restored pixels of the row. The same operations are executed for all rows of the degraded TDI image.
Finally, a restored image can be reconstructed from those restored rows. Both simulated and experimental results prove the effectiveness of our method. --- paper_title: Adaptive subset offset for systematic error reduction in incremental digital image correlation paper_content: Abstract Digital image correlation (DIC) relies on a high correlation between the intensities in the reference image and the target image. When decorrelation occurs due to large deformation or viewpoint change, incremental DIC is utilized to update the reference image and use the correspondences in this renewed image as the reference points in subsequent DIC computation. As each updated reference point is derived from previous correlation, its location is generally of sub-pixel accuracy. Conventional subset which is centered at the point results in subset points at non-integer positions. Therefore, the acquisition of the intensities of the subset demands interpolation which is proved to introduce additional systematic error. We hereby present adaptive subset offset to slightly translate the subset so that all the subset points fall on integer positions. By this means, interpolation in the updated reference image is totally avoided regardless of the non-integer locations of the reference points. The translation is determined according to the decimal of the reference point location, and the maximum are half a pixel in each direction. Such small translation has no negative effect on the compatibility of the widely used shape functions, correlation functions and the optimization algorithms. The results of the simulation and the real-world experiments show that adaptive subset offset produces lower measurement error than the conventional method in incremental DIC when applied in both 2D-DIC and 3D-DIC. --- paper_title: Recovering ball motion from a single motion-blurred image paper_content: Motion blur often affects the ball image in photographs and video frames in many sports such as tennis, table tennis, squash and golf. In this work, we operate on a single calibrated image depicting a moving ball over a known background, and show that motion-blurred ball images, usually unwelcome in computer vision, bear more information than a sharp image. We provide techniques for extracting such information ranging from low-level image processing to 3D reconstruction, and present a number of experiments and possible applications, such as ball localization with speed and direction measurement from a single image, and ball trajectory reconstruction from a single long-exposure photograph. --- paper_title: Long Deck Suspension Bridge Monitoring: The Vision System Calibration Problem paper_content: : Structural Health Monitoring is an emergent powerful diagnostic tool that can be used to identify and to prevent possible failures of the various components that comprise an infrastructure. In the particular case of a suspension bridge, the measurement of the vertical and transversal displacements plays an important role for its safety evaluation. Taking into account the restrictions usually found on these structures, an enhanced solution comprises a non-contact vision-based measurement system with dynamic response, accuracy and amplitude range well suited to the physical phenomenon to measure. The paper describes a methodology to perform the vision system calibration that can be carried out in-situ, while the deck is moving, requiring little effort and a minimum set of information. 
Specifically, only a set of active targets fixed on the deck and the knowledge about the distances between them is required. Results related to the performance evaluation, obtained by numerical simulation and by real experiments with a reduced structure model, are presented and they show that, even in an environment severely affected by disturbance noise, it is possible to measure the vertical and transversal displacements with a standard accuracy better than 10 mm. --- paper_title: Study of image characteristics on digital image correlation error assessment paper_content: In this paper, errors related to digital image correlation (DIC) technique applied to measurements of displacements are estimated. This work is based on the generation of synthetic images representative of real speckle patterns. With these images, various parameters are treated in order to determine their impact on the measurement error. These parameters are related to the type of deformation imposed on the speckle, the speckle itself (encoding of the image and image saturation) or the software (subset size). --- paper_title: Error estimation in measuring strain fields with DIC on planar sheet metal specimens with a non-perpendicular camera alignment paper_content: Abstract The determination of strain fields based on displacement components obtained via 2D-DIC is subject to several errors that originate from various sources. In this contribution, we study the impact of a non-perpendicular camera alignment to a planar sheet metal specimen's surface. The errors are estimated in a numerical experiment. To this purpose, deformed images – that were obtained by imposing finite element (FE) displacement fields on an undeformed image – are numerically rotated for various Euler angles. It is shown that a 3D-DIC stereo configuration induces a substantial compensation for the introduced image-plane displacement gradients. However, higher strain accuracy and precision are obtained – up to the level of a perfect perpendicular alignment – in a proposed “rectified” 2D-DIC setup. This compensating technique gains benefit from both 2D-DIC (single camera view, basic amount of correlation runs, no cross-camera matching nor triangulation) and 3D-DIC (oblique angle compensation). Our conclusions are validated in a real experiment on SS304. --- paper_title: Measurement of sinusoidal vibration from motion blurred images paper_content: Previous vision-based methods usually measure vibration from large sequence of unblurred images recorded by high-speed video or stroboscopic photography. In this paper, we propose a novel method for sinusoidal vibration measurement based on motion blurred images. We represent the motion blur information in images by the relationship between the geometric moments of the motion blurred images and the motion, and estimate the vibration parameters from this motion blur cue. We need only one motion blurred image and an unblurred image or two successive frames of blurred images to calculate the parameters of low-frequency vibration as well as the amplitude and direction of high-frequency vibration, while unblurred-image-based techniques rely on much more images to obtain the same results and existing motion-blurred-image-based approaches only estimate the amplitude of high-frequency vibration. Experimental results with both simulated and real vibrations of low and high frequencies are employed to demonstrate the effectiveness of the proposed scheme. 
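The moment-based idea in the preceding entry can be illustrated, in a much simplified form, by a point vibrating sinusoidally and imaged over many whole periods: the second central moment of the resulting blur intensity profile equals A²/2, so the amplitude A follows directly from the profile. The sketch below only demonstrates this generic relation on simulated data; it is not the estimator proposed in the cited paper, and the sample counts and bin widths are arbitrary assumptions.

```python
import numpy as np

def blur_profile(amplitude, n_samples=20000, n_bins=256, half_width=1.5):
    """Intensity profile of a point vibrating as x(t) = A*sin(wt), integrated
    over many whole periods (i.e. uniformly distributed phase)."""
    phase = np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, n_samples)
    x = amplitude * np.sin(phase)
    edges = np.linspace(-half_width * amplitude, half_width * amplitude, n_bins + 1)
    hist, _ = np.histogram(x, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist / hist.sum()

def amplitude_from_moments(centres, profile):
    """For x = A*sin(theta) with uniform theta, E[x^2] = A^2 / 2, so the
    amplitude is sqrt(2 * second central moment) of the blur profile."""
    mean = np.sum(centres * profile)
    var = np.sum(((centres - mean) ** 2) * profile)
    return np.sqrt(2.0 * var)

if __name__ == "__main__":
    c, p = blur_profile(amplitude=3.0)
    print(amplitude_from_moments(c, p))  # ~3.0
```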
--- paper_title: Quality assessment of speckle patterns for digital image correlation paper_content: Digital image correlation (DIC) is an optical-numerical full-field displacement measuring technique, which is nowadays widely used in the domain of experimental mechanics. The technique is based on a comparison between pictures taken during loading of an object. For an optimal use of the method, the object of interest has to be covered with painted speckles. In the present paper, a comparison is made between three different speckle patterns originated by the same reference speckle pattern. A method is presented for the determination of the speckle size distribution of the speckle patterns, using image morphology. The images of the speckle patterns are numerically deformed based on a finite element simulation. Subsequently, the displacements are measured with DIC-software and compared to the imposed ones. It is shown that the size of the speckles combined with the size of the used pixel subset clearly influences the accuracy of the measured displacements. --- paper_title: Dynamic testing of a laboratory model via vision-based sensing paper_content: In the class of not-contact sensors, the techniques of vision-based displacement estimation enable one to gather dense global measurements of static deformation as well as of dynamic response. They are becoming more and more available thanks to the ongoing technology developments. In this work, a vision system, which takes advantage of fast-developing digital image processing and computer vision technologies and provides high sample rate, is implemented to monitor the 2D plane vibrations of a reduced scale frame mounted on a shaking table as available in a laboratory. The physical meanings of the camera parameters, the trade-off between the system resolution and the field-of-view, and the upper limitation of marker density are discussed. The scale factor approach, which is widely used to convert the image coordinates measured by a vision system in the unit of pixels into space coordinates, causes a poor repeatability of the experiment, an unstable experiment precision, and therefore a global poor flexibility. To overcome these problems, two calibrations approaches are introduced: registration and direct linear transformation. Based on the constructed vision-based displacement measurement system, several experiments are carried out to monitor the motion of a scale-reduced model on which dense markers are glued. The experiment results show that the proposed system can capture and successfully measure the motion of the laboratory model within the required frequency band. --- paper_title: Use of Digital Image Processing in the Monitoring of Deformations in Building Structures. paper_content: Digital image processing was applied to develop a simple, robust and economical measuring system for detecting deformations in building structures. Selected points of the structure are captured by electronic cameras at regular intervals and their positions are measured in the digital image. The target points, in this case light-emitting diodes (LEDs), as well as the cameras used for measuring operate in the infrared range, which means that the quality of the images is not affected by the lighting conditions. Tests in the laboratory showed that using this technique it is possible to track targets over distances typical for building structures within a matter of millimeters. 
The practical applicability of the method was verified in a pilot project, for which the gym of the Staffelsee High School in Murnau in the district of Garmisch-Partenkirchen was chosen. The wide-span timber roof structure of the building was equipped with a system based on digital image processing gauging the movement of three points on each of the four main beams. Reference measurements can be taken with a built-in laser gauge. To be able to observe the correlation between weather conditions and deformation, the system is complemented by a weather station with snow-cushions on the roof of the building. Furthermore, the monitoring system is connected to multiple alarming devices. --- paper_title: Real-Time Displacement Measurement of a Flexible Bridge Using Digital Image Processing Techniques paper_content: In this study, real-time displacement measurement of bridges was carried out by means of digital image processing techniques. This is innovative, highly cost-effective and easy to implement, and yet maintains the advantages of dynamic measurement and high resolution. First, the measurement point is marked with a target panel of known geometry. A commercial digital video camera with a telescopic lens is installed on a fixed point away from the bridge (e.g., on the coast) or on a pier (abutment), which can be regarded as a fixed point. Then, the video camera takes a motion picture of the target. Meanwhile, the motion of the target is calculated using image processing techniques, which require a texture recognition algorithm, projection of the captured image, and calculation of the actual displacement using target geometry and the number of pixels moved. Field tests were carried out for the verification of the present method. The test results gave sufficient dynamic resolution in amplitude as well as the frequency. Use of this technology for a large suspension bridge is discussed considering the characteristics of such bridges having low natural frequencies within 3 Hz and the maximum displacement of several centimeters. --- paper_title: A simple image-based strain measurement method for measuring the strain fields in an RC-wall experiment paper_content: A simple image-based method for measuring plane strain fields on the surface of specimens in earthquake engineering experiments was developed. This method integrated camera calibration, stereo triangulation, image metric rectification and image template matching techniques to develop a method that was cost-effective, easy to apply and provided a satisfactory level of measurement accuracy. A zero-strain test conducted using this method showed that the measurement accuracy achieved was 0.04 pixels. That is, the relative displacement accuracy achieved was 0.005 mm and the strain accuracy was 0.001. This level of accuracy was achieved using eight-mega-pixel digital cameras to measure a 17 cm × 28 cm measurement region. Cracks that were 0.012 mm wide were identified in the concrete by examining the displacement fields calculated through the application of this image-based method in an RC-wall experiment. Copyright © 2011 John Wiley & Sons, Ltd. 
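Several of the target-panel systems above (for example, the flexible-bridge measurement entry) convert tracked pixel motion into engineering units with a scale factor derived from the known target geometry. The sketch below shows that conversion only; the numbers are made up, and a real system would additionally correct for camera tilt, perspective projection and lens distortion.

```python
def scale_factor(target_size_mm, target_size_px):
    """Pixel-to-millimetre scale factor from a target of known physical size
    that spans a measured number of pixels in the image."""
    return target_size_mm / target_size_px

def displacement_mm(du_px, dv_px, target_size_mm, target_size_px):
    """Convert the tracked pixel motion of a target to physical displacement,
    assuming the motion lies in the target plane and the optical axis is
    approximately perpendicular to that plane."""
    s = scale_factor(target_size_mm, target_size_px)
    return du_px * s, dv_px * s

if __name__ == "__main__":
    # e.g. a 100 mm target edge imaged across 250 px gives 0.4 mm/px, so a
    # tracked motion of (2.5 px, -1.0 px) corresponds to (1.0 mm, -0.4 mm)
    print(displacement_mm(2.5, -1.0, target_size_mm=100.0, target_size_px=250.0))
```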
--- paper_title: A vision-based approach for the direct measurement of displacements in vibrating systems paper_content: This paper reports the results of an analytical and experimental study to develop, calibrate, implement and evaluate the feasibility of a novel vision-based approach for obtaining direct measurements of the absolute displacement time history at selectable locations of dispersed civil infrastructure systems such as long-span bridges. The measurements were obtained using a highly accurate camera in conjunction with a laser tracking reference. Calibration of the vision system was conducted in the lab to establish performance envelopes and data processing algorithms to extract the needed information from the captured vision scene. Subsequently, the monitoring apparatus was installed in the vicinity of the Vincent Thomas Bridge in the metropolitan Los Angeles region. This allowed the deployment of the instrumentation system under realistic conditions so as to determine field implementation issues that need to be addressed. It is shown that the proposed approach has the potential of leading to an economical and robust system for obtaining direct, simultaneous, measurements at several locations of the displacement time histories of realistic infrastructure systems undergoing complex three-dimensional deformations. --- paper_title: Monitoring of a civil structure’s state based on noncontact measurements paper_content: In this article, the comparison of two noncontact measurement methods dedicated to civil engineering structures’ state examination is presented. The vision-based method computes the displacement field of the analyzed structure by means of the digital image correlation coefficient. The system consists of one or more high-resolution digital cameras mounted on a head or on portable tripods. The developed methodology and created software application embedded in an MS-Windows operating system are presented. The second system measures the deflection of the structures by means of a radar interferometer. In both cases, it is possible to measure many points on the structure simultaneously. This article presents a comparison of the displacement field measurement performed on a field setup, as well as the span of a steel bridge designed for tram traffic. Both systems are described, with special attention given to their application in measurements of civil engineering structures. This article demonstrates a prelimina... --- paper_title: A Synchronized Multipoint Vision-Based System for Displacement Measurement of Civil Infrastructures paper_content: This study presents an advanced multipoint vision-based system for dynamic displacement measurement of civil infrastructures. The proposed system consists of commercial camcorders, frame grabbers, low-cost PCs, and a wireless LAN access point. The images of target panels attached to a structure are captured by camcorders and streamed into the PC via frame grabbers. Then the displacements of targets are calculated using image processing techniques with premeasured calibration parameters. This system can simultaneously support two camcorders at the subsystem level for dynamic real-time displacement measurement. The data of each subsystem including system time are wirelessly transferred from the subsystem PCs to master PC and vice versa. Furthermore, synchronization process is implemented to ensure the time synchronization between the master PC and subsystem PCs. 
Several shaking table tests were conducted to verify the effectiveness of the proposed system, and the results showed very good agreement with those from a conventional sensor with an error of less than 2%. --- paper_title: Analysis of the structural behavior of a membrane using digital image processing paper_content: Abstract This article presents a methodology for the experimental analysis of thin membranes using digital image processing techniques. The methodology is particularly suitable for structures that cannot be monitored using conventional systems, particularly those systems that involve contact with the structure. This methodology consists of a computer vision system that integrates the digital image acquisition and processing techniques on-line using special programming routines. Because the membrane analyzed is very thin and displays large displacements/strains, the use of any conventional sensor based on contact with the structure would not be possible. The methodology permits the measurement of large displacements at several points of the membrane simultaneously, thus enabling the acquisition of the global displacement field. The accuracy of the acquired displacement field is a function of the number of cameras and measured points. The second step is to estimate the strains and stresses on the membrane from the measured displacements. The basic idea of the methodology is to generate global two-dimensional functions that describe the strains and stresses at any point of the membrane. Two constitutive models are considered in the analysis: the Hookean and the neo-Hookean models. Favorable comparisons between the experimental and numerical results attest to the accuracy of the proposed experimental procedure, which can be used for both artificial and natural membranes. --- paper_title: A vision-based system for remote sensing of bridge displacement paper_content: This study proposed the vision-based system which remotely measures dynamic displacement of bridges in real-time using digital image processing techniques. This system has a number of innovative features including a high resolution in dynamic measurement, remote sensing, cost-effectiveness, real-time measurement and visualization, ease of installation and operation and no electro-magnetic interference. The digital video camera combined with a telescopic device takes a motion picture of the target installed on a measurement location. Meanwhile, the displacement of the target is calculated using an image processing technique, which requires a target recognition algorithm, projection of the captured image, and calculation of the actual displacement using target geometry and number of pixels moved. For the purpose of verification, a laboratory test using shaking table test and field application on a bridge with open-box girders were carried out. The test results gave sufficient dynamic resolution in frequency as well as the amplitude. --- paper_title: Structural dynamic displacement vision system using digital image processing paper_content: Abstract This study introduces dynamic displacement vision system (DDVS), which is applicable for imaging unapproachable structures using a hand-held digital video camcorder and is more economical than the existing contact and contactless measurement methods of dynamic displacement and deformation. This proposed DDVS method is applied to the Region of Interest (ROI) resizing and coefficient updating at each time step to improve the accuracy of the measurement from the digital image. 
Thus, after evaluating the algorithms conducted in this study by the static and dynamic verification, the measurement's usability by calculating the dynamic displacement of the masonry specimen, and the two-story steel frame specimen is evaluated under uniaxial seismic loading. The algorithm of the proposed method in this study, despite the relatively low resolution during frozen, slow, and seismic motions, has precision and usability that can replace the existing displacement transducer. Moreover, the method can be effectively applied to even fast behavior for multi-measurement positions like the seismic simulation test using large scale specimen. DDVS, using the consecutive images of the structures with an economic, hand-held digital video camcorder is a more economical displacement sensing concept than the existing contact and contactless measurement methods. --- paper_title: NONCONTACT PHOTOGRAMMETRIC MEASUREMENT OF VERTICAL BRIDGE DEFLECTION paper_content: This paper reports on the results from a study of vertical deflection measurement of bridges using digital close-range terrestrial photogrammetry (DCRTP). The study consisted of a laboratory and two field exercises. In the laboratory exercise, photogrammetric measurements of a 11.6 m (38 ft) steel beam loaded at midspan were made and compared with dial gauge readings and elastic beam theory. In the first field exercise, the initial camber and dead load deflection of 31.1 m (102 ft) prestressed concrete bridge girders were measured photogrammetrically and compared with level rod and total station readings. A comparison of the photogrammetric measurements with the dead load deflection diagram is also made. In the second field exercise, the vertical deflection of a 14.9 m (49 ft) noncomposite steel girder bridge loaded with two dump trucks was measured. Photogrammetric results are compared with deflections estimated using elastic finite-element analysis, level rod readings, and curvature-based deflection measurements. The paper is concluded with a discussion of work in progress to further improve the accuracy of DCRTP in the field. --- paper_title: A Vision-Based Sensor for Noncontact Structural Displacement Measurement paper_content: Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. 
Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement. --- paper_title: BRIDGE DEFLECTION MEASUREMENT USING DIGITAL IMAGE CORRELATION paper_content: Digital image correlation technique is used for measuring vertical deflections of bridge girders during a bridge load testing. A bridge is loaded by a heavy cargo truck on the bridge road. Then, the deflection distribution is measured by digital image correlation. The applicability of digital image correlation to bridge deflection measurement is investigated by comparing the results obtained by digital image correlation with those obtained by displacement transducers. The effect of random pattern on an object surface is also investigated by measuring with and without random pattern. The results show that the deflection distributions of the bridge obtained by digital image correlation agree well with those obtained by the displacement transducers when the random pattern is attached on the bridge surface. In addition, it is found that the deflections can be measured even if the artificial random pattern is not applied to the surface of the bridge girder. It is emphasized that noncontact displacement measurement is possible by simple and easy procedure with digital image correlation for the structural evaluation of infrastructures. --- paper_title: Vision-based displacement measurement method for high-rise building structures using partitioning approach paper_content: The horizontal displacement of a high-rise building structure is usually considered as one of the major indicators to assess the structural safety. It is, however, difficult to directly measure the displacement of such a flexible structure due to the inaccessibility to a reference point usually needed for conventional displacement sensors and the huge size of the structure. In order to resolve this issue, a novel vision-based displacement measurement technique is proposed by employing a partitioning approach (i.e., successive estimation of relative displacements and rotational angles throughout a large flexible structure). A series of the experimental tests using two webcams installed on a flexible steel column structure have been conducted to validate the feasibility of the proposed method. The test results showed that the difference between the displacement measured from the proposed method and the exact value is less than 0.5%. Therefore, the proposed method could be considered as one promising candidate for measuring the displacement of high-rise building structures. --- paper_title: Vision-based algorithms for damage detection and localization in structural health monitoring paper_content: Deflection curve can be used to detect and localize damage in civil engineering structures. In this paper, a vision-based method applied for in-plane displacement field measurement of cantilever beams is presented. The deflection curve of the analyzed structure is computed by means of the digital image correlation. Damage is introduced into the structure. Resulting deflection curves are used as an input to the novel damage detection algorithms: line segments method and voting method. The algorithms are then compared with the second derivative method and assessed through the probability of detection procedure.
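The second-derivative method that the damage-detection entry above uses as its baseline is commonly approximated from a sampled deflection curve with central differences. The following sketch is a generic illustration of that baseline on a synthetic deflection shape with an arbitrary local anomaly; it is not the line-segments or voting algorithms proposed in the cited paper.

```python
import numpy as np

def curvature_index(x, w):
    """Approximate the second derivative (curvature) of a deflection curve w(x)
    sampled at equally spaced points using central differences; localized peaks
    in its magnitude are then inspected as candidate damage locations."""
    h = x[1] - x[0]
    kappa = np.zeros_like(w)
    kappa[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
    return kappa

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)   # normalised beam axis
    w = -np.sin(np.pi * x)           # smooth baseline deflection shape
    w[30] += 0.002                   # small local anomaly at x = 0.3
    idx = np.argmax(np.abs(curvature_index(x, w)[1:-1])) + 1
    print(x[idx])                    # ~0.3, i.e. at the introduced anomaly
```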
--- paper_title: Investigation of the dynamic characteristic of bridge structures using a computer vision method paper_content: A new method for the investigation of the dynamic characteristic of bridges has been developed. It is based on the photogrammetric principle; however, the viewing system is equipped with an additional reference system, which decreases the sensitivity to vibrations and an analysis system which enables image analysis. The method is used for monitoring and real-time measurement of the displacement of chosen points at bridge structures. It has applications particularly in the case of the measurement of hard-to-access places on bridges. The instrumentation, methodology and engineering examples of its application are presented. --- paper_title: An optical approach to structural displacement measurement and its application paper_content: A number of optical devices are commercially available now for measurement. Some of them, such as laser devices and still and video cameras with high resolution, may be used effectively and efficiently for measuring structural displacement. This paper presents an approach to this type of application using a charge-coupled-device camera. It can acquire digitized images for low cost, to be used to identify structural displacement via digital signal processing. It is shown that this approach’s resolution for point measurement is comparable with traditional sensors such as dial gages. Furthermore, it offers a new capability of displacement measurement for a large number of points on a structure, and it can provide spatially intensive displacement data. This kind of data may be used for structural damage detection and health monitoring, as suggested and demonstrated herein. --- paper_title: Flexible Videogrammetric Technique for Three-Dimensional Structural Vibration Measurement paper_content: A videogrammetric technique is proposed for measuring three-dimensional structural vibration response in the laboratory. The technique is based on the principles of close-range digital photogrammetry and computer vision. Two commercial-grade digital video cameras are used for image acquisition. To calibrate these cameras and to overcome potential lens distortion problems, an innovative two-step calibration process including individual and stereo calibration is proposed. These calibrations are efficiently done using a planar pattern arbitrarily shown at a few different orientations. This special characteristic makes it possible to perform an on-site calibration that provides flexibility in terms of using different camera settings to suit various application conditions. To validate the proposed technique, three tests, including sinusoidal motion of a point, wind tunnel test of a cross-section bridge model, and a three-story building model under earthquake excitation, are performed. Results indicate that the proposed videogrammetric technique can provide fairly accurate displacement measurement for all three tests. The proposed technique is shown to be a good complement to the traditional sensors for measuring two- or three-dimensional vibration response in the low-frequency range. --- paper_title: Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures paper_content: For structural health monitoring (SHM) of civil infrastructures, displacement is a good descriptor of the structural behavior under all the potential disturbances. 
However, it is not easy to measure the displacement of civil infrastructures, since conventional sensors need a reference point, and inaccessibility of the reference point is sometimes caused by the geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load carrying capacities of a steel-plate girder bridge obtained from the conventional sensor and the present system. Further, to simultaneously measure multiple points, a synchronized vision-based system is developed using a master/slave system with wireless data communication. For the purpose of verification, the displacement measured by the synchronized vision-based system was compared with the data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test. --- paper_title: Monocular Computer Vision Method for the Experimental Study of Three-Dimensional Rocking Motion paper_content: Abstract The rocking problem is applicable to a wide variety of structural and nonstructural elements. The current applications range from bridge pier and shallow footing design to hospital and data center equipment, even art preservation. Despite the increasing number of theoretical and simulation studies of rocking motion, few experimental studies exist. Of those that have been published, most are focused on a reduced version of the problem introducing modifications to the physical problem with the purpose of eliminating either sliding, uplift, or the three-dimensional (3D) response of the body. However, all of these phenomena may affect the response of an unrestrained rocking body. The intent of this work is to present a computer vision method that allows for the experimental measurement of the rigid body translation and rotation time histories in three dimensions. Experimental results obtained with this method will be presented to demonstrate that it obtains greater than 97% accuracy when compared agai... --- paper_title: A paired visual servoing system for 6-DOF displacement measurement of structures paper_content: In the previous study, a paired structured light system which incorporates lasers, cameras, and screens was proposed, and experimental tests validated the potential of a displacement measurement system for large structures. However, the estimation of relative translational and rotational displacements between two sides was based on an assumption that there is zero initial displacement and that three laser beams are always on the screens. In this paper, a calibration method is proposed to offset the initial displacement using the first captured image. The calibration matrix derived from the initial offset is used for subsequent displacement estimation. A newly designed 2-DOF manipulator for each side is visually controlled to prevent the laser beams from leaving the screen. As the manipulator actively controls the laser beams to target the center of the screen, it contains all three laser points within the bounds of the screen. To verify the feasibility of the proposed system, various simulations and experimental tests were performed.
The results show that the proposed visually servoed paired structured light system solves the main problem with the former system and that it can be utilized to enlarge the estimation range of the displacement. --- paper_title: A four-camera videogrammetric system for 3-D motion measurement of deformable object paper_content: Abstract A four-camera videogrammetric system with large field-of-view is proposed for 3-D motion measurement of deformable object. Four high-speed commercial-grade cameras are used for image acquisition. Based on close-range photogrammetry, an accurate calibration method is proposed and verified for calibrating the four cameras simultaneously, where a cross target as calibration patterns with feature points pasted on its two-sides is used. The key issues of the videogrammetric processes including feature point recognition and matching, 3-D coordinate and displacement reconstruction, and motion parameters calculation are discussed in detail. Camera calibration experiment indicates that the proposed calibration method, with a re-projection error less than 0.05 pixels, has a considerable accuracy. Accuracy evaluation experiments prove that the accuracy of the proposed system is up to 0.5 mm on length dynamic measurement within 5000 mm×5000 mm field-of-view. Motion measurement experiment on an automobile tire is conducted to validate performance of our system. The experimental results show that the proposed four-camera videogrammetric system is available and reliable for position, trajectory, displacement and speed measurement of deformable moving object. --- paper_title: A new low-cost displacements monitoring system based on Kinect sensor paper_content: We present a new system for structural health monitoring. In particular, the reason we have developed the system is related to structural monitoring during (potentially) destructive tests. Structural response is usually sensed by sensors such as accelerometers, LVDTs and all the instruments which need to be placed in contact with the structure in order to work properly. This can cause problems during (potentially) destructive tests, like the ones performed on shaking tables, an important class of experiments aimed at monitoring the behavior of certain structures under the action of a seismic force. Thus, in the present paper, we focus our attention on designing a new displacements monitoring system, which does not need to be in direct contact with the structure in order to work. The goal is to use a really promising technology in the field of 3D space monitoring to build a system suitable to replace classical sensors such as LVDTs and accelerometers, usually adopted in displacements monitoring. This technology is based on computer vision and low-cost sensors; this allows us to protect monitoring systems from damage due to contact with the structure under testing. Results show that our system is able to estimate the position of the selected points on a test structure with good accuracy, with respect to a traditional high-precision sensor. --- paper_title: A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests paper_content: Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. 
A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. --- paper_title: Using Specific Displacements to Analyze Motion without Calibration paper_content: Considering the field of un-calibrated image sequences and self-calibration, this paper analyzes the use of specific displacements (such as fixed axis rotation, pure translations,...) or specific sets of camera parameters. This makes it possible to induce affine or metric constraints, which can lead to self-calibration and 3D reconstruction. A unified formalism for the models already developed in the literature, together with some novel models, is presented here. A hierarchy of special situations is described, in order to tailor the most appropriate camera model either to the actual robotic device supporting the camera or to the fact that only a reduced set of data is available. This visual motion perception module leads to the estimation of a minimal 3D parameterization of the retinal displacement for a monocular visual system without calibration, and leads to self-calibration and 3D dynamic analysis. The implementation of these equations is analyzed and tested experimentally. --- paper_title: A stereoscopic digital speckle photography system for 3-D displacement field measurements paper_content: Stereoscopic digital speckle photography offers a technique to measure object shapes and 3-D displacement fields in experimental mechanics. The system measures the displacement of a random white light speckle pattern, which somehow is present on the object surface, using digital correlation. This paper describes a general physical model for stereo imaging systems. A camera calibration algorithm, which takes the distortion in the lenses into account, is also presented and evaluated by real experiments. Standard deviations of small deformations as low as 1% of the pixel size for in-plane deformations and 6% of the pixel size for the out-of-plane component are reported. Using the calibration algorithm described, the main source of errors is random errors originating from the correlation algorithm. --- paper_title: Three-Dimensional Structural Translation and Rotation Measurement Using Monocular Videogrammetry paper_content: Measuring displacement for large-scale structures has always been an important yet challenging task. In most applications, it is not feasible to provide a stationary platform at the location where its displacements need to be measured.
Recently, image-based technique for three-dimensional (3D) displacement measurement has been developed and proven to be applicable to civil engineering structures. Most of these developments, however, use two or more cameras and require sophisticated calibration using a total station. In this paper, we present a single-camera approach that can simultaneously measure both 3D translation and rotation of a planar target attached on a structure. The intrinsic parameters of the camera are first obtained using a planar calibration board arbitrarily positioned around the target location. The obtained intrinsic parameters establish the relationship between the 3D camera coordinates and the two-dimensional image coordinates. These parameters can then be used to extract the rotation and translation of the planar target using recorded image sequence. The proposed technique is illustrated using two laboratory tests and one field test. Results show that the proposed monocular videogrammetric technique is a simple and effective alternative method to measure 3D translation and rotation for civil engineering structures. It should be noted that the proposed technique cannot measure translation along the direction perpendicular to the image plane. Hence, proper caution should be taken when placing target and camera. --- paper_title: Nontarget Stereo Vision Technique for Spatiotemporal Response Measurement of Line-Like Structures paper_content: With continuous advancement in optical, electronics, and computer technology, commercial digital cameras now are equipped with a high pixel resolution at a reasonable cost. Image sequences recorded by these cameras contain both spatial and temporal informa- tion of the target object; hence they can be used to extract the object's dynamic responses and characteristics. This paper presents a nontarget stereo vision technique to measure response of a line-like structure simultaneously in both spatial and temporal domain. The technique uses two digital cameras to acquire image sequences of a line-like structure. It adopts a simple nondimensional length matching approach and the epipolar geometry to establish point correspondences within an image sequence and between two image sequences, respectively. After reconstructing a spatio-temporal displacement response from the two image sequences, wavelet transform is then used to extract the modal characteristics of the structure. The technique is illustrated using two free vibration tests: a steel cantilever beam and a bridge stay cable. Results show that the technique can measure the spatiotemporal responses, natural frequencies, and mode shapes of the two structures quite accurately. This image-based technique is a low-cost and simple-to-use approach that not only can complement current response measurement sensors but also offer special advantages which are hard to obtain using these traditional sensors. --- paper_title: 3D displacement measurement model for health monitoring of structures using a motion capture system paper_content: Abstract Unlike 1D or 2D displacement measurement sensors, a motion capture system (MCS) can determine the movement of markers in any direction precisely. In addition, an MCS can overcome the limitations of the sampling frequency in 3D measurements by terrestrial laser scanning (TLS) and global positioning system (GPS). This paper presents a method to measure three dimensional (3D) structural displacements using a motion capture system (MCS) with a high accuracy and sampling rate. 
The MCS measures 2D coordinates of a number of markers with multiple cameras; these measurements are then used to calculate the 3D coordinates of markers. Therefore, unlike previous 1D or 2D displacement measurement sensors, the MCS can determine precisely the movement of markers in any direction. In addition, since the MCS cameras can monitor several markers, measurement points are increased by the addition of more markers. The effectiveness of the proposed model was tested by comparing the displacements measured in a free vibration experiment of a 3-story structure with a height of 2.1 m using both the MCS and laser displacement sensors. --- paper_title: Three-Dimensional Acceleration Measurement Using Videogrammetry Tracking Data paper_content: In order to evaluate the feasibility of multi-point, non-contact, acceleration measurement, a high-speed, precision videogrammetry system has been assembled from commercially-available components and software. Consisting of three synchronized 640 × 480 pixel monochrome progressive scan CCD cameras each operated at 200 frames per second, this system has the capability to provide surface-wide position-versus-time data that are filtered and twice-differentiated to yield the desired acceleration tracking at multiple points on a moving body. The oscillating motion of targets mounted on the shaft of a modal shaker were tracked, and the accelerations calculated using the videogrammetry data were compared directly to conventional accelerometer measurements taken concurrently. Although differentiation is an inherently noisy operation, the results indicate that simple mathematical filters based on the well-known Savitzky and Golay algorithms, implemented using spreadsheet software, remove a significant component of the noise, resulting in videogrammetry-based acceleration measurements that are comparable to those obtained using the accelerometers. --- paper_title: Experimental validation of non-contacting measurement method using LED-optical displacement sensors for vibration stress of small-bore piping paper_content: Abstract There have been many reports of fatigue failures of small-bore piping systems such as drain piping, vent piping and instrumentation piping in nuclear power plants that arise from vibration sources such as pumps. To prevent the failures, integrity evaluation of piping is conducted by measuring and analyzing vibration stress in the piping. But, a more efficient and economical measurement method is desirable to evaluate the vibration fatigue in small-bore piping. In this study, a non-contacting measurement method was proposed that is based on optical displacement sensors using light emission diodes (LEDs) to measure the vibration stress. The applicability of the method was discussed based on the vibration experiments using pipe elements and a mock-up piping system. From the experimental results, the proposed method was clarified to be sufficiently applicable and practically useful for the vibration measurement and stress evaluation in small-bore piping systems. --- paper_title: High resolution digital image correlation measurements of strain accumulation in fatigue crack growth paper_content: Abstract Microstructure plays a key role in fatigue crack initiation and growth. Consequently, measurements of strain at the microstructural level are crucial to understanding fatigue crack behavior. The few studies that provide such measurements have relatively limited resolution or areas of observation. 
This paper provides quantitative, full-field measurements of plastic strain near a growing fatigue crack in Hastelloy X, a nickel-based superalloy. Unprecedented spatial resolution for the area covered was obtained through a novel experimental technique based on digital image correlation (DIC). These high resolution strain measurements were linked to electron backscatter diffraction (EBSD) measurements of grain structure (both grain shape and orientation). Accumulated plastic strain fields associated with fatigue crack growth exhibited inhomogeneities at two length scales. At the macroscale, the plastic wake contained high strain regions in the form of asymmetric lobes associated with past crack tip plastic zones. At high magnification, high resolution DIC measurements revealed inhomogeneities at, and below, the grain scale. Effective strain not only varied from grain to grain, but also within individual grains. Furthermore, strain localizations were observed in slip bands within grains and on twin and grain boundaries. A better understanding of these multiscale heterogeneities could help explain variations in fatigue crack growth rate and crack path and could improve the understanding of fatigue crack closure and fracture in ductile metals. --- paper_title: Assessing steel strains on reinforced concrete members from surface cracking patterns paper_content: Abstract The measurement of steel strains in reinforced concrete structures is often critical to characterise the stresses along the member. In this scope, this manuscript describes the development of a method to assess steel strains inside concrete members using solely surface measurements. These measurements were obtained using photogrammetry and image processing. The technique was validated using two concrete ties monitored with strain gauges placed inside the steel bars. The experimental results showed that one of the most important parameters affecting the accuracy of the technique is the measurement of crack widths. In comparison, the concrete strain has little effect on the final results. The technique is particularly advantageous since it is non-contact and does not impact on the bond conditions. It also does not require accessing the reinforcements. As a main conclusion, this work showed the feasibility of estimating strains inside the structure using surface measurements. This technique will benefit in the near future from further improvements, namely regarding camera resolution. --- paper_title: New Parameters to Describe High-Temperature Deformation of Prestressing Steel Determined Using Digital Image Correlation paper_content: Abstract This paper describes the results from a series of high-temperature tension tests on prestressing steel under sustained load (creep tests). Both steady-state and transient heating regimes ar... --- paper_title: Computer Vision-Based Technique to Measure Displacement in Selected Soil Tests paper_content: The paper investigates the accuracy of normal case photography as a means of measuring linear deformations of soil specimens. The results of using this approach were compared with results from conventional procedures for two soil tests, namely unconfined compression and direct shear tests. Charge-coupled-device (CCD) video cameras were used to measure deformation or strain in soil specimens. A personal computer based digital vision system was used to obtain accurately measured linear displacement data.
Using remolded soil specimens, comparisons between displacement measurements using American Society for Testing and Materials (ASTM) conventional methods and normal case photography methods showed that use of the latter method is promising and could be used as a substitute for strain gages. Experimental investigation showed that differences between displacement measurements using conventional ASTM procedures and the computer vision technique were consistently within 0.04 ± 0.15 mm and 0.3 ± 0.23 mm for unconfined compression tests and direct shear tests, respectively. This was compatible with the image scale, where one pixel on the image domain was equivalent to about 0.4 mm in object space coordinates. Statistical correlations between strains by the two methods supported this result. Image scale and resolution were found to be the two major factors affecting the accuracy of the measurements. The results of this work can further the search for more fully automated soil testing measurements. --- paper_title: Full-field measurements of heterogeneous deformation patterns on polymeric foams using digital image correlation paper_content: The ability of a digital image correlation technique to capture the heterogeneous deformation fields appearing during compression of ultra-light open-cell foams is presented in this article. Quantitative characterization of these fields is of importance to understand the mechanical properties of the collapse process and the energy dissipation patterns in this type of material. The present algorithm is formulated in the context of multi-variable non-linear optimization where a merit function based on a local average of the deformation mapping is minimized implicitly. A parallel implementation utilizing message passing interface for distributed-memory architectures is also discussed. Estimates for the optimal size of the correlation window based on measurement accuracy and spatial resolution are provided. This technique is employed to reveal the evolution of the deformation texture on the surface of open-cell polyurethane foam samples of different relative densities. Histograms of the evolution of surface deformation are extracted, showing the transition from unimodal to bimodal and back to unimodal. These results support the interpretation that the collapse of light open-cell foams occurs as a phase transition phenomenon. --- paper_title: Application of three-dimensional digital image correlation to the core-drilling method paper_content: We present a non-destructive technique for the determination of in situ stresses in concrete structures, referred to as the core-drilling method. The method is similar to the American Society for Testing and Materials (ASTM) hole-drilling strain gage method, except that the core-drilling method is formulated in terms of measured displacements; measurements in the current work are performed with traditional photogrammetry, and the more novel (and more accurate) three-dimensional digital image correlation. In this paper we review the background elasticity theory and we discuss the results of verification experiments on steel plates. Calculated normal stresses are within 17% of applied values for photogrammetry, and 7% for three-dimensional digital image correlation. --- paper_title: Image-based stress and strain measurement of wood in the split-Hopkinson pressure bar paper_content: The properties of wood must be considered when designing mechanical pulping machinery. The composition of wood within the annual ring is important.
This paper proposes a novel image-based method to measure stress and planar strain distribution in soft, heterogeneous materials. The main advantage of this method in comparison to traditional methods that are based on strain gauges is that it captures local strain gradients and not only average strains. Wood samples were subjected to compression at strain rates of 1000–2500 s−1 in an encapsulated split-Hopkinson device. High-speed photography captured images at 50 000–100 000 Hz and different magnifications to achieve spatial resolutions of 2.9 to 9.7 µm pixels−1. The image-based analysis utilized an image correlation technique with a method that was developed for particle image velocimetry. The image analysis gave local strain distribution and average stress as a function of time. Two stress approximations, using the material properties of the split-Hopkinson bars and the displacement of the transmitter bar/sample interface, are presented. Strain gauges on the bars of the split-Hopkinson device give the reference average stress and strain. The most accurate image-based stress approximation differed from the strain gauge result by 5%. --- paper_title: Calibration and evaluation of optical systems for full-field strain measurement paper_content: The design and testing of a reference material for the calibration of optical systems for strain measurement is described, together with the design and testing of a standardized test material that allows the evaluation and assessment of fitness for purpose of the most sophisticated optical system for strain measurement. A classification system for the steps in the measurement process is also proposed and allows the development of a unified approach to diagnostic testing of components or sub-systems in an optical system for strain measurement based on any optical technique. The results described arise from a European study known as SPOTS whose objectives were to begin to fill the gap caused by a lack of standards. --- paper_title: Measurement of Local Deformations in Steel Monostrands Using Digital Image Correlation paper_content: The local deformation mechanisms in steel monostrands have a significant influence on their fatigue life and failure mode. However, the observation and quantification of deformations in monostrands experiencing axial and transverse deformations is challenging because of their complex geometry, difficulties with the placement of strain gauges in the vicinity of the anchorage, and, most importantly, the relatively small magnitude of deformation occurring in the monostrand. This paper focuses on the measurement of localized deformations in high-strength steel monostrands using the digital image correlation (DIC) technique. The presented technique enables the measurement of individual wire strains along the length of the monostrand and also provides quantitative information on the relative movement between individual wires, leading to a more in-depth understanding of the underlying fatigue mechanisms. To validate the proposed image-based measurement method, two different tests were performed, with the one correlation method showing good agreement. Data collected from the DIC technique creates a basis for the analysis of the fretting and localized bending behavior of the monostrand and provides relevant information on the internal state of displacement of the monostrand under bending load. 
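Several of the entries above, notably the split-Hopkinson wood test and the monostrand DIC study, reduce a measured displacement field to local strain components. As a hedged illustration only, the numpy sketch below differentiates a gridded displacement field with finite differences to obtain small-strain components; the grid spacing and the synthetic displacement field are assumptions, not data from the cited papers.

```python
# Minimal sketch: small-strain components from a gridded displacement field,
# as produced by DIC. u and v are synthetic stand-ins for measured data,
# sampled on a regular grid with spacing `step` (mm).
import numpy as np

step = 0.05                                  # assumed grid spacing in mm
y, x = np.mgrid[0:100, 0:100] * step
u = 1e-3 * x                                 # synthetic: uniform 1000 microstrain in x
v = -0.3e-3 * y                              # synthetic: Poisson-like contraction in y

dudy, dudx = np.gradient(u, step)            # derivatives along rows (y) and columns (x)
dvdy, dvdx = np.gradient(v, step)

eps_xx = dudx                                # normal strain in x
eps_yy = dvdy                                # normal strain in y
gamma_xy = dudy + dvdx                       # engineering shear strain

print("mean eps_xx [microstrain]:", 1e6 * eps_xx.mean())
print("mean eps_yy [microstrain]:", 1e6 * eps_yy.mean())
```

In practice the published methods apply smoothing or local polynomial fitting before differentiation, since numerical differentiation amplifies measurement noise; the sketch keeps only the core finite-difference step.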
--- paper_title: On the use of digital image correlation for slip measurement during coupon scale fretting fatigue experiments paper_content: Abstract Digital image correlation (DIC) is a full field three dimensional measurement technique that can quantify displacements and strains of a surface. In this paper, digital image correlation is used as a slip measurement technique during coupon scale fretting fatigue experiments. Slip measured with the novel DIC technique is compared to conventional slip measurement techniques such as clip gauges and the modified clip gauge measurements proposed by Wittkowsky et al. Slip measurements with the DIC system show lower slip values and higher tangential contact stiffnesses compared to (modified) clip gauge measurements. Slip measured with DIC is obtained closer to the contact compared to clip gauges, eliminating the influence of elastic deformations or fitting parameters. During the fretting fatigue experiments, two identical contacts are tested simultaneously. However, the slip of both contacts is not identical, with outliers of more than 10% difference in slip amplitude. --- paper_title: Application of digital photogrammetry techniques in identifying the mode shape ratios of stay cables with multiple camcorders paper_content: Abstract A simple digital photogrammetry technique using artificial targets is developed to conduct synchronized ambient vibration measurements at different locations of a stay cable with multiple camcorders for the identification of mode shape ratios. To compensate for the restrictions imposed by the adoption of budget equipment, elaborate adjustments in scale, synchronization, and baselines are performed. Based on the on-site measurements for three stay cables, it is verified that the proposed methodology can effectively measure the ambient vibration responses of stay cables and attain the same order of accuracy in the identification of mode shape ratios as that of high-resolution velocimeters. --- paper_title: Cost-effective vision-based system for monitoring dynamic response of civil engineering structures paper_content: This study develops a cost-effective vision-based displacement measurement system for real-time monitoring of dynamic responses of large-size civil engineering structures such as bridges and buildings. The system simply consists of a low-cost digital camcorder and a notebook computer equipped with digital image-processing software developed in this study. A target panel with predesigned marks is attached to a structure, whose movement is captured by the digital camcorder placed at a fixed point away from the measurement point. The captured images are streamed to the notebook computer and processed by the software to compute displacement in real time. The efficacy of the system in measuring dynamic responses was demonstrated through laboratory tests, seismic shaking table tests on a steel building frame, and a field experiment on a bridge. In order to simultaneously measure multiple points, this study further developed a time synchronization system, in which the TCP/IP protocol is employed for communications. The effectiveness of the time synchronization system was also experimentally verified. The vision-based system developed in this study is simple, cost-effective, easy to install, and capable of real-time measurement of dynamic responses, making the system ideal for monitoring civil engineering structures.
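A step shared by the target-panel systems described above is converting tracked pixel motion into physical displacement using a target of known dimensions. The sketch below shows only that scaling step; the mark spacing, detected centroids, and tracked positions are illustrative assumptions rather than values from the cited studies.

```python
# Minimal sketch: convert tracked pixel motion to displacement using a target
# of known mark spacing, in the spirit of the target-panel systems above.
# All numbers are illustrative, not taken from the cited papers.
import numpy as np

known_spacing_mm = 100.0                     # assumed distance between two target marks
mark_a_px = np.array([512.3, 240.1])         # detected mark centroids in the image (pixels)
mark_b_px = np.array([694.8, 241.0])

scale = known_spacing_mm / np.linalg.norm(mark_b_px - mark_a_px)   # mm per pixel

target_track_px = np.array([                 # tracked target position over a few frames
    [600.0, 300.0],
    [600.1, 302.4],
    [599.9, 304.9],
])
displacement_mm = (target_track_px - target_track_px[0]) * scale
print(displacement_mm)                       # per-frame (x, y) displacement in mm
```

This simple planar scaling assumes the target moves roughly parallel to the image plane; the surveyed systems that need out-of-plane or rotational components replace it with full projective calibration.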
--- paper_title: Nontarget Image-Based Technique for Small Cable Vibration Measurement paper_content: In this paper, a proof-of-concept image-based technique is proposed for measuring small cable vibration. The technique analyzes an image sequence of a vibrating cable segment captured by a camera. An optical flow method is used to calculate the variation of optical intensity of an arbitrarily selected region of interest (ROI) in the cable image sequence. The obtained optical flow vector provides the direction of vibration for the ROI on the cable segment, which then can be used to estimate the displacement of the ROI on the image plane. Furthermore, the actual displacement of the ROI can be extracted when some conditions are met. The proposed technique is validated both in the laboratory using a rigid pipe and in the field on a small pedestrian bridge cable. Results show that the technique is able to measure the pipe motion and the cable vibration accurately. The proposed technique requires only one commercial camera, and no prior camera calibration is needed. In addition, the use of an optical flow method eliminates the need to attach any target to the cable and makes the technique very easy to implement. Despite these advantages, the technique still needs further development before it can be applied to long-span bridge cables. --- paper_title: Extracting modal parameters of a cable on shaky motion pictures paper_content: Abstract A set of modal parameters of a cable is extracted from a motion picture captured by a digital camera operated with shaking hands. It is difficult to identify the center of the targets attached to the cable surface from the blurred motion image of the cable, because of the high-speed motion of the cable, the low sampling frequency of the camera, and the effect of shaking hands on the motion pictures. This paper proposes a multi-template matching algorithm to solve these difficulties. In addition, a sensitivity-based system identification algorithm is proposed for extracting the natural frequencies and the damping ratios from ambient cable vibration data. Three sets of vibration tests are performed to examine the validity of the proposed algorithms. The results show that the proposed approach of using these two algorithms is fairly feasible for extracting modal parameters from severely blurred motion pictures. --- paper_title: Structural damage detection using digital video imaging technique and wavelet transformation paper_content: Damage in structures may pose a risk of catastrophic failure. Identifying damages and their locations is termed damage detection. In this paper, the use of digital video imaging is proposed for detecting damage in structures. The theory of measuring structural vibration using high-resolution images is presented first, based on sub-pixel edge identification. Then a concept of a mode shape difference function is developed for structural damage detection. A laboratory test program was carried out to implement these concepts using a high-speed digital video camera. The images were analyzed to obtain displacement time series at sub-pixel resolution. Mode shapes were obtained from the time series to find the mode shape difference functions between the damaged and the reference states. They were subjected to wavelet transformation for determining the damage locations. Results show that the proposed approach is able to identify the introduced damage cases and their locations.
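The nontarget cable-vibration entry above is built on optical flow over an ROI followed by spectral analysis of the resulting motion signal. The sketch below shows one possible realization with pyramidal Lucas-Kanade flow in OpenCV and an FFT for the dominant frequency; the video file, frame rate fallback, and feature parameters are assumptions, and the cited method's intensity-variation analysis and displacement-recovery conditions are not reproduced here.

```python
# Minimal sketch of optical-flow-based vibration extraction from a video of a
# cable segment. File name, frame-rate fallback, and tracking parameters are
# illustrative assumptions, not the published algorithm.
import cv2
import numpy as np

cap = cv2.VideoCapture("cable_segment.avi")          # assumed input video
fps = cap.get(cv2.CAP_PROP_FPS) or 120.0             # fall back to an assumed frame rate
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=30, qualityLevel=0.01, minDistance=5)

history = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    history.append(np.mean(new_pts[good, 0, 1]))     # mean vertical position of tracked points
    prev_gray, pts = gray, new_pts[good].reshape(-1, 1, 2)

signal = np.asarray(history) - np.mean(history)      # zero-mean vibration signal (pixels)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
print("dominant frequency: %.2f Hz" % freqs[np.argmax(spectrum[1:]) + 1])
```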
--- paper_title: Measurement of rivulet movement on inclined cables during rain–wind induced vibration paper_content: Abstract The large amplitude vibration of stay cables has been observed in several cable-stayed bridges under the simultaneous occurrence of rain and wind, which is called rain–wind induced vibration (RWIV). During RWIV, the upper rivulet oscillating circumferentially on the inclined cable surface is widely considered to have an important role in this phenomenon. However, the small size of rivulets and high sensitivity to wind flow make the measurement of the rivulet movement challenging. This study proposes a digital image processing method to measure the rivulet movement in wind tunnel tests. RWIV of a cable model was excited during the test and a digital video camera was used to record the video clips of the rivulets, from which the time history of the rivulet movement along the entire cable is identified through image processing. The oscillation amplitude, equilibrium position, and dominant frequency of the upper rivulet are investigated. Results demonstrated that the proposed non-contact, non-intrusive measurement method is cost-effective and has good resolution in measuring the rivulet vibration. Finally the rivulet vibration characteristics were also studied when the cable was fixed. Comparison demonstrates the relation between the upper rivulet and cable vibration. --- paper_title: Damage Detection Using Optical Measurements and Wavelets paper_content: The paper presents an application of the wavelet transform for damage detection based on optical measurements. A number of important issues, that need to be considered when image sequences are used for vibration analysis, are discussed. These include: correspondence of image features from image to image, image calibration and spatial resolution. The principles of image edge detection are discussed and a comparison between the wavelet approach and the classical method is presented. A novel damage detection method based on optically measured modeshape data is proposed. The method is illustrated using a simple cantilever beam experiment. The major advantage of the method is the significantly increased number of discrete points used to describe modeshapes. This is in contrast to classical techniques where in practice a small number of measurement points are obtained from a limited number of sensors.
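The wavelet-based damage-detection entries above (and the mode-shape-difference approach listed earlier) localize damage from spatial signals extracted optically. The sketch below is a generic illustration of that idea with a Mexican-hat wavelet transform applied to a synthetic mode-shape difference; the beam geometry, damage location, and wavelet implementation are assumptions and do not reproduce the cited authors' algorithms.

```python
# Minimal sketch of wavelet-based localization from a mode-shape difference,
# using synthetic data (not the cited authors' methods or measurements).
import numpy as np

def ricker(length, a):
    """Unnormalized Mexican-hat (Ricker) wavelet on `length` points with width `a`."""
    t = np.arange(length) - (length - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

x = np.linspace(0.0, 1.0, 401)                       # normalized beam coordinate
mode_intact = np.sin(np.pi * x)                       # synthetic first mode shape
mode_damaged = mode_intact - 2e-3 * np.exp(-((x - 0.35) / 0.01) ** 2)  # local change near x = 0.35
diff = mode_damaged - mode_intact                     # mode-shape difference function

scales = np.arange(2, 40)
coeffs = np.array([np.convolve(diff, ricker(10 * a, a), mode="same") for a in scales])
energy = np.sum(coeffs ** 2, axis=0)                  # wavelet energy along the beam
print("estimated damage location x = %.3f" % x[np.argmax(energy)])
```

The energy of the wavelet coefficients peaks where the difference signal has a localized irregularity, which is why densely sampled optical mode shapes are attractive inputs for this family of methods.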
--- paper_title: Application of modal analysis supported by 3D vision-based measurements paper_content: In the paper, applications of 3D vision techniques to the modal analysis method are presented. The goal of the project was to develop a methodology for vibration amplitude measurements and a software tool for modal analysis based on visual data. For this purpose, dedicated procedures and algorithms based on vision technique methods were developed. 3D measurements of vibrations and structure geometry were obtained by applying and further developing passive 3D vision techniques. The amplitude of vibrations was calculated for selected points on the structure. The necessary vision data were received from the high-speed digital camera "X-Stream Vision" in the form of "avi" files that were used as the input data for the developed software tool. The amplitudes of vibration displacements obtained from vision-based measurements were treated as the input data for operational modal analysis algorithms. In this domain, the Balanced Realization algorithm has been used. --- paper_title: MULTI-POINT MEASUREMENT OF STRUCTURAL VIBRATION USING PATTERN RECOGNITION FROM CAMERA IMAGE paper_content: Modal testing requires measuring the vibration of many points, for which an accelerometer, a gap sensor and a laser vibrometer are generally used. Conventional modal testing requires mounting these sensors at all measurement points in order to acquire the signals. However, this can be disadvantageous because it requires considerable measurement time and effort when there are many measurement points. In this paper, we propose a method for modal testing using a camera image. A camera can measure the vibration of many points at the same time. However, this task requires that the measurement points be classified frame by frame. While it is possible to classify the measurement points one by one, this also requires much time. Therefore, we try to classify multiple points using pattern recognition. The feasibility of the proposed method is verified by a beam experiment. The experimental results demonstrate that we can obtain good results. --- paper_title: Experimental methodology for the dynamic analysis of slender structures based on digital image processing techniques paper_content: There are several problems in engineering where the structural response may be affected by conventional devices for displacement measurements. This work presents an alternative methodology for the experimental dynamic analysis of structures, which cannot be monitored by conventional sensors. This methodology is based on digital image acquisition and processing techniques. The importance of this methodology is that it enables one to perform non-contact displacement measurements, without introducing undesirable changes in the structure's behavior, as for example, in models with an extremely reduced scale.
The proposed methodology uses sticking markers of a negligible mass in relation to the structure, facilitating the implementation of the instrumentation of the experimental test. To evaluate the efficiency, precision and accuracy of this methodology, two different experimental tests are carried out and the obtained results are favorably compared with the theoretical results and with the ones provided by conventional sensors. This has shown to be a highly cost-effective and easy to implement experimental procedure for the analysis of very flexible structures, and yet maintains the advantage of dynamic measurements with high resolution. --- paper_title: Digital image processing for non-linear system identification paper_content: Abstract Emerging digital image processing techniques demonstrate their potential applications in engineering mechanics, particularly in the area of system identification involving non-linear characteristics of mechanical and structural systems. The objective of this study is to demonstrate the proof-of-concept that the techniques permit the identification non-intrusively and remotely. First, the efficacy of the digital image processing method is shown by identifying the friction behavior between two solids that are assumed to be governed by the Coulomb friction model. The inverse analysis is carried out to validate the proposed method of identifying model parameters. Studies further illustrate that the digital imaging procedure and inverse analysis algorithms developed for the friction problem can be extended for identification of non-linear mechanical and structural characteristics. One illustration utilizes the second example, where relative motion between a model structure and a shaking table is measured by digitally processing the analogue image provided by a video tape recorded during a shaking table test of a model structure base isolated by a hybrid isolation device consisting of friction and elastomeric components. Third, constitutive relationship for a non-linear elastomeric membrane is identified by digitally processing images of its deformed states under tension. The relationship is postulated to follow the Mooney–Rivlin function. Schemes developed are verified with iterative non-linear finite-element program that is valid in the finite deformation range. Emerging cost-effective hardware and software systems for high-performance data acquisition and processing are quite promising to the implementation of the techniques by removing most, if not all, of its existing limitations. --- paper_title: Remote nondestructive evaluation technique using infrared thermography for fatigue cracks in steel bridges paper_content: Long-standing infrastructure is subject to structural deterioration. In this respect, steel bridges suffer fatigue cracks, which necessitate immediate inspection, structural integrity evaluation or repair. However, the inaccessibility of such structures makes inspection time consuming and labour intensive. Therefore, there is an urgent need for developing high-performance nondestructive evaluation (NDE) methods to assist in effective maintenance of such structures. Recently, use of infrared cameras in nondestructive testing has been attracting increasing interest, as they provide highly efficient remote and wide area measurements. 
This paper first reviews the current situation of nondestructive inspection techniques used for fatigue crack detection in steel bridges, and then presents remote NDE techniques using infrared thermography developed by the author for fatigue crack detection and structural integrity assessments. Furthermore, results of applying fatigue crack evaluation to a steel bridge using the newly developed NDE techniques are presented. --- paper_title: Automatic concrete health monitoring: assessment and monitoring of concrete surfaces paper_content: To predict the degradation of concrete structures is extremely challenging. The typical approach combines periodic visual inspections with required non-destructive tests. However, this methodology only discretely evaluates few areas of the structure, being also time-consuming and subject to human error. Therefore, a new method designated ‘automatic concrete health monitoring’ is herein presented which aims at automatically characterising and monitoring the state of conservation of concrete surfaces by combining photogrammetry, image processing and multi-spectral analysis. The method was designed to (i) characterise crack pattern, displacement and strain fields; (ii) map damages and (iii) assess and define restoration tasks. --- paper_title: Image-Based Monitoring of Open Gears of Movable Bridges for Condition Assessment and Maintenance Decision Making paper_content: Abstract Movable bridges are unique structures due to the complex interaction between their structural, mechanical, and electrical systems with an intricate interrelation creating several challenges related to operation and maintenance. Continuous monitoring of the critical parts of these structures is essential to track and evaluate their performance for improving maintenance operations and reducing the associated costs. Open gears are one of the most critical components of movable bridges. Proper and regular maintenance of these gears is vitally important to ensure a safe, reliable, and cost-effective operation. In this study, a practical and low-cost monitoring approach is presented to track the lubrication level in an open gear of a movable bridge by using video cameras. Two unique indices are developed for monitoring of the open gear by investigating two different image processing methods in a comparative fashion. The first methodology is based on an edge detection algorithm that utilizes a Sobel grad... --- paper_title: Improvement of Crack-Detection Accuracy Using a Novel Crack Defragmentation Technique in Image-Based Road Assessment paper_content: Abstract A common problem of crack-extraction algorithms is that extracted crack image components are usually fragmented in their crack paths. A novel crack-defragmentation technique, MorphLink-C, is proposed to connect crack fragments for road pavement. It consists of two subprocesses, including fragment grouping using the dilation transform and fragment connection using the thinning transform. The proposed fragment connection technique is self-adaptive for different crack types, without involving time-consuming computations of crack orientation, length, and intensity. The proposed MorphLink-C is evaluated using realistic flexible pavement images collected by the Florida Department of Transportation (FDOT). Statistical hypothesis tests are conducted to analyze false positive and negative errors in crack/no-crack classification using an artificial neural network (ANN) classifier associated with feature subset selection methods.
The results show that MorphLink-C improves crack-detection accuracy and reduces... --- paper_title: Vision-Based Automated Crack Detection for Bridge Inspection paper_content: The visual inspection of bridges demands long inspection times and also makes it difficult to access all areas of the bridge. This paper presents a vision-based crack detection technique for the automatic inspection of bridges. The technique collects images from an aerial camera to identify the presence of damage to the structure. The images are captured without controlling the angles or positioning of the cameras, so there is no need for calibration. This allows the extraction of images of damage-sensitive areas from different angles to increase the detection of damage and decrease false-positive errors. Cracks can be detected from the images regardless of their size, even when they are barely visible. The technique can successfully detect cracks near bolts. --- paper_title: Integrated Vision-Based System for Automated Defect Detection in Sewer Closed Circuit Television Inspection Videos paper_content: Abstract This paper discusses the development of a general framework and software system to support automated analysis of sewer inspection closed-circuit television (CCTV) videos. The proposed system aims primarily to support the off-site review and quality control process of the videos and to enable efficient reevaluation of archived CCTV videos to extract historical sewer condition data. Automated analysis of sewer CCTV videos poses several challenges including the nonuniformity of camera motion and illumination conditions inside the sewer. The paper presents a novel algorithm for optical flow-based camera motion tracking to automatically identify, locate, and extract a limited set of video segments, called regions of interest (ROI), that likely include defects, thus reducing the time and computational requirements needed for video processing. The proposed algorithm attempts to recover the operator actions during the inspection session, which would enable determining the location and relative severity of... --- paper_title: Concrete Crack Assessment Using Digital Image Processing and 3D Scene Reconstruction paper_content: Abstract Traditional crack assessment methods for concrete structures are time-consuming and produce subjective results. The development of a means for automated assessment employing digital image processing offers high potential for practical implementation. However, two problems in two-dimensional (2D) image processing hinder direct application for crack assessment, as follows: (1) the image used for the digital image processing has to be taken perpendicular to the surface of the concrete structure, and (2) the working distance used in retrieving the imaging model has to be measured each time. To address these problems, this paper proposes a combination of 2D image processing and three-dimensional (3D) scene reconstruction to locate the 3D position of crack edges. In the proposed algorithm, first the precise crack information is obtained from the 2D images after noise elimination and crack detection using image processing techniques. Then, 3D reconstruction is conducted employing several crack images to ...
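The crack-assessment entries above share a common image-processing skeleton: extract dark, thin features, connect nearby fragments morphologically (the idea behind the dilation-based grouping in MorphLink-C), and keep only elongated components as crack candidates. The sketch below is a generic version of that pipeline in OpenCV, not the MorphLink-C or any other cited implementation; the input file, threshold settings, and elongation test are assumptions for illustration.

```python
# Minimal sketch of a generic crack map: threshold-based extraction followed by a
# morphological closing that bridges gaps between fragments, then an elongation
# filter on connected components. Parameters and file names are assumptions.
import cv2
import numpy as np

img = cv2.imread("pavement.png", cv2.IMREAD_GRAYSCALE)      # assumed input image
blur = cv2.GaussianBlur(img, (5, 5), 0)                      # suppress surface texture noise

# Cracks are darker than the surrounding surface: adaptive threshold, inverted.
raw = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY_INV, 35, 10)

# Closing (dilation followed by erosion) connects nearby crack fragments.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
closed = cv2.morphologyEx(raw, cv2.MORPH_CLOSE, kernel)

# Keep only sufficiently large, elongated connected components as crack candidates.
n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
crack_map = np.zeros_like(closed)
for i in range(1, n):
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    area = stats[i, cv2.CC_STAT_AREA]
    if area > 50 and max(w, h) > 4 * min(w, h):              # crude elongation test
        crack_map[labels == i] = 255
cv2.imwrite("crack_map.png", crack_map)
```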
--- paper_title: An efficient image-based damage detection for cable surface in cable-stayed bridges paper_content: Abstract Since cable members are the major structural components of cable bridges, they should be properly inspected for surface damage and inside defects such as corrosion and/or breakage of wires. This study introduces an efficient image-based damage detection system that can automatically identify damages to the cable surface through image processing techniques and pattern recognition. The damage detection algorithm combines image enhancement techniques with principal component analysis (PCA) algorithm. Images from three cameras attached to a cable climbing robot are wirelessly transmitted to a server computer located on a stationary cable support. To improve the overall quality of the images, this study utilizes an image enhancement method together with a noise removal technique. Next the input images are projected into PCA sub-space, the Mahalanobis square distance is used to determine the distances between the input images and sample patterns. The smallest distance is found to be a match for an input image. The proposed damage detection algorithm was verified through laboratory tests on three types of cables. Results of the tests showed that the proposed system could be used to detect damage to bridge cables. --- paper_title: Sensor Networks, Computer Imaging, and Unit Influence Lines for Structural Health Monitoring: Case Study for Bridge Load Rating paper_content: In this paper, a novel methodology for structural health monitoring of a bridge is presented with implementations for bridge load rating using sensor and video image data from operating traffic. With this methodology, video images are analyzed by means of computer vision techniques to detect and track vehicles crossing the bridge. Traditional sensor data are correlated with computer images to extract unit influence lines (UILs). Based on laboratory studies, UILs can be extracted for a critical section with different vehicles by means of synchronized video and sensor data. The synchronized computer vision and strain measurements can be obtained for bridge load rating under operational traffic. For this, the following are presented: a real life bridge is instrumented and monitored, and the real-life data are processed under a moving load. A detailed finite-element model (FEM) of the bridge is also developed and presented along with the experimental measurements to support the applicability of the approach for load rating using UILs extracted from operating traffic. The load rating of the bridges using operational traffic in real life was validated with the FEM results of the bridge and the simulation of the operational traffic on the bridge. This approach is further proven with different vehicles captured with video and measurements. The UILs are used for load rating by multiplying the UIL vector of the critical section with the load vector from the HL-93 design truck. The load rating based on the UIL is compared with the FEM results and indicates good agreement. With this method, it is possible to extract UILs of bridges under regular traffic and obtain load rating efficiently. --- paper_title: Combined Imaging Technologies for Concrete Bridge Deck Condition Assessment paper_content: Evaluating the condition of concrete bridge decks is an increasingly important challenge for transportation agencies and bridge inspection teams. 
Closing the bridge to traffic, safety, and time consuming data collection are some of the major issues during a visual or in-depth bridge inspection. To date, several nondestructive testing technologies have shown promise in detecting subsurface deteriorations. However, the main challenge is to develop a data acquisition and analysis system to obtain and integrate both surface and subsurface bridge health indicators at higher speeds. Recent developments in imaging technologies for bridge decks and higher-end cameras allow for faster image collection while driving over the bridge deck. This paper will focus on deploying nondestructive imaging technologies such as the three-dimensional (3D) optical bridge evaluation system (3DOBS) and thermal infrared (IR) imagery on a bridge deck to yield both surface and subsurface indicators of condition, respectively. Spall and delamination maps were generated from the optical and thermal IR images. Integration of the maps into ArcGIS, a professional geographic information system (GIS), allowed for a streamlined analysis that included integrating and combining the results of the complimentary technologies. Finally, ground truth information was gathered through coring several locations on a bridge deck to validate the results obtained by nondestructive evaluation. This study confirms the feasibility of combining the bridge inspection results in ArcGIS and provides additional evidence to suggest that thermal infrared imagery provides similar results to chain dragging for bridge inspection. --- paper_title: Strain and Displacement Controls by Fibre Bragg Grating and Digital Image Correlation paper_content: Test control is traditionally performed by a feedback signal from a displacement transducer or force gauge positioned inside the actuator of a test machine. For highly compliant test rigs, this is a problem since the response of the rig influences the results. It is therefore beneficial to control the test based on measurements performed directly on the test specimen. In this paper, fibre Bragg grating (FBG) and Digital Image Correlation (DIC) are used to control a test. The FBG sensors offer the possibility of measuring strains inside the specimen, while the DIC system measures strains and displacement on the surface of the specimen. In this paper, a three-point bending test is used to demonstrate the functionality of a control loop, where the FBG and DIC signals are used as control channels. The FBG strain control was capable of controlling the test within an error tolerance of 20 µm m−1. However, the measurement uncertainty offered by the FBG system allowed a tolerance of 8.3 µm m−1. The DIC displacement control proved capable of controlling the displacement within an accuracy of 0.01 mm. --- paper_title: Integration of computer imaging and sensor data for structural health monitoring of bridges paper_content: The condition of civil infrastructure systems (CIS) changes over their life cycle for different reasons such as damage, overloading, severe environmental inputs, and ageing due normal continued use. The structural performance often decreases as a result of the change in condition. Objective condition assessment and performance evaluation are challenging activities since they require some type of monitoring to track the response over a period of time. 
In this paper, the integrated use of video images and sensor data in the context of structural health monitoring is demonstrated as a promising technology for the safety of civil structures in general and bridges in particular. First, the challenges and possible solutions to using video images and computer vision techniques for structural health monitoring are presented. Then, the synchronized image and sensing data are analyzed to obtain the unit influence line (UIL) as an index for monitoring bridge behavior under identified loading conditions. Subsequently, the UCF 4-span bridge model is used to demonstrate the integration and implementation of imaging devices and traditional sensing technology with UIL for evaluating and tracking the bridge behavior. It is shown that video images and computer vision techniques can be used to detect, classify and track different vehicles with synchronized sensor measurements to establish an input–output relationship to determine the normalized response of the bridge. --- paper_title: Camera-based monitoring of the rigid-body displacement of a mandrel in superconducting cable production paper_content: We describe a machine vision measurement head that is used to monitor the mandrel position in production of superconducting cables. Two cameras are orthogonally aligned, viewing different sections of the cylindrical part of the mandrel. The use of telecentric lenses obviates the need for re-calibration after the replacement of the mandrel. All parameters of rigid-body motion are obtained in linear theory by using a multivariate least-squares fitting procedure on dynamically corresponded sets of target points that vary due to partial blocking by rotating wires. A rigorous analysis of measurement uncertainty is given. --- paper_title: Vision-based estimation of vertical dynamic loading induced by jumping and bobbing crowds on civil structures paper_content: People's motion on civil structures induces dynamic loading that may lead to excessive vibrations. The complete characterization of this force distribution over a wide area due to a large number of people is still an unsolved issue. This work presents a measuring technique for vertical load estimation in the case of jumping and bobbing crowds, based on the evaluation of the vertical inertia of the human body. Laboratory experiments verify the proposed model on a single volunteer through standard inertial sensors and then extend it by introducing the non-contact measuring technique. The method validation is carried out in a real environment: a stand of the G. Meazza stadium in Milan, dynamically characterized in terms of frequency response function. The load induced by groups of jumping people is estimated with the proposed method and the resulting structure accelerations are computed: the comparison between measured and estimated vibrations shows a very high correspondence in both time domain and main spectral components and, above all, the performance does not degrade as the number of volunteers increases. --- paper_title: Study on Three-Dimensional Displacement Measurement Method of EAST Cold Magnet Based on Computer Vision paper_content: The location of the superconducting tokamak magnets determines the position and shape of the plasma, so acquiring the magnet locations in real time is important for stable operation of the tokamak.
At present, magnet displacement is usually measured with displacement sensors, but the structural features of the tokamak introduce many factors that disturb the measurement results. This paper therefore proposes an improved monocular laser triangulation measurement method that effectively reduces these disturbances. Using specially developed image processing software, magnet operating data can be obtained quickly. Experimental results show that the method achieves high precision and meets the measurement requirements. --- paper_title: Application of computer vision and laser interferometer to the inspection of line scale paper_content: In this paper, machine vision, a laser interferometer and a coordinate measuring machine (CMM) are combined to develop a vision inspection system. The measurement capability of the developed system is investigated by measuring the distances between the lines on a standard line scale. The vision camera is used to replace the probe of the CMM and to take images of the lines of interest on a line scale at two different positions. Meanwhile, the displacement of the CCD camera is measured using the laser interferometer. Using subpixel edge localization and outlier-excluding least-squares regression, the distance between two lines of interest is computed under an image plane coordinate system. By adding the displacement of the CCD camera measured using the laser interferometer, the line space can be determined. Experiments have been performed repeatedly to measure the line space on the 1.00 and 300.0 mm line scales. Results indicate that the measured data show only small fluctuations and are close to those obtained by the NML (National Measurement Laboratory, Taiwan). --- paper_title: A new joint application of non-invasive remote sensing techniques for structural health monitoring paper_content: This paper aims at analysing the potential of a new technological approach for the dynamic monitoring of civil infrastructures. The proposed approach is based on the joint use of a high-frequency thermal camera and a microwave radar interferometer to measure the oscillations due to traffic excitations of the Sihlhochstrasse Bridge, Switzerland, which was selected as a test bed site in the ISTIMES project (EU—Seventh Framework Programme). The good quality of the results encourages the use of the proposed approach for the static and dynamic investigation of structures and infrastructures. Moreover, the remote sensing character of the two applied techniques makes them particularly suitable for studying structures located in areas affected by natural hazard phenomena, and also for monitoring cultural heritage buildings for which some conventional techniques are considered invasive. Obviously, their reliability needs further experiments and comparisons with standard contact sensors. ---
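The cable-surface damage detection entry near the top of this reference list pairs PCA projection with Mahalanobis-distance matching against sample patterns. A minimal sketch of that pipeline is given below; it assumes the images have already been enhanced and flattened into vectors, and every name in it is illustrative rather than taken from the paper.

```python
import numpy as np

def fit_pca(samples, n_components=8):
    """Fit a PCA subspace to flattened training images (rows = samples)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Right singular vectors of the centered data give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def mahalanobis_match(query, mean, components, class_projections):
    """Project a query image into the PCA subspace and return the label of the
    class whose projected sample cloud has the smallest squared Mahalanobis distance."""
    q = components @ (query - mean)
    best_label, best_dist = None, np.inf
    for label, proj in class_projections.items():      # proj: (n_i, k) projected samples, n_i > 1
        mu = proj.mean(axis=0)
        cov = np.cov(proj, rowvar=False) + 1e-6 * np.eye(proj.shape[1])
        d = (q - mu) @ np.linalg.inv(cov) @ (q - mu)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist
```

In practice the class projections would be built once from labelled training patches (project each class's samples with the same mean and components), and the returned distance can double as a rough confidence score.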
Title: A Review of Machine Vision-Based Structural Health Monitoring: Methodologies and Applications
Section 1: Introduction
Description 1: This section introduces the concept of structural health monitoring (SHM), and discusses the evolution and significance of machine vision-based SHM technologies.
Section 2: Machine Vision Methods
Description 2: This section covers the basic components and procedures involved in machine vision methods for structural monitoring.
Section 3: Image Processing Algorithms
Description 3: This section details various image processing algorithms employed in structural monitoring, explaining their functionalities and applications.
Section 4: Systematic Errors Assessment and Reduction
Description 4: This section addresses different kinds of errors associated with vision-based measurement systems, and describes methodologies for error assessment and reduction.
Section 5: Two-Dimensional (2D) Structural Displacement Monitoring
Description 5: This section focuses on techniques and systems for monitoring 2D structural displacements, including case studies and examples.
Section 6: Three-Dimensional (3D) Structural Displacement Monitoring
Description 6: This section explores methods for 3D structural displacement monitoring using multiple cameras and vision reconstruction techniques.
Section 7: Structural Strain and Stress Monitoring
Description 7: This section discusses vision-based methods to measure structural strain and stress, with examples from various materials and structural elements.
Section 8: Vibration Monitoring and Dynamic Characteristics Identification
Description 8: This section outlines the use of high-speed cameras for vibration monitoring and identification of dynamic characteristics, like natural frequency and modal damping ratio.
Section 9: Crack Inspection and Characterization
Description 9: This section highlights advanced image processing techniques for detecting and characterizing cracks and other surface features on structures.
Section 10: Integration Technology
Description 10: This section describes the integration of machine vision technology with other sensing methods to enhance structural health monitoring capabilities.
Section 11: Conclusions
Description 11: This section provides a summary of the state-of-the-art in machine vision-based SHM, outlining key achievements, limitations, and future prospects in the field.
A SURVEY OF MODEL-BASED SENSOR DATA ACQUISITION AND MANAGEMENT
19
--- paper_title: Efficient gathering of correlated data in sensor networks paper_content: In this article, we design techniques that exploit data correlations in sensor data to minimize communication costs (and hence, energy costs) incurred during data gathering in a sensor network. Our proposed approach is to select a small subset of sensor nodes that may be sufficient to reconstruct data for the entire sensor network. Then, during data gathering only the selected sensors need to be involved in communication. The selected set of sensors must also be connected, since they need to relay data to the data-gathering node. We define the problem of selecting such a set of sensors as the connected correlation-dominating set problem, and formulate it in terms of an appropriately defined correlation structure that captures general data correlations in a sensor network. We develop a set of energy-efficient distributed algorithms and competitive centralized heuristics to select a connected correlation-dominating set of small size. The designed distributed algorithms can be implemented in an asynchronous communication model, and can tolerate message losses. We also design an exponential (but nonexhaustive) centralized approximation algorithm that returns a solution within O(log n) of the optimal size. Based on the approximation algorithm, we design a class of centralized heuristics that are empirically shown to return near-optimal solutions. Simulation results over randomly generated sensor networks with both artificially and naturally generated data sets demonstrate the efficiency of the designed algorithms and the viability of our technique—even in dynamic conditions. --- paper_title: Exploiting correlated attributes in acquisitional query processing paper_content: Sensor networks and other distributed information systems (such as the Web) must frequently access data that has a high per-attribute acquisition cost, in terms of energy, latency, or computational resources. When executing queries that contain several predicates over such expensive attributes, we observe that it can be beneficial to use correlations to automatically introduce low-cost attributes whose observation will allow the query processor to better estimate the selectivity of these expensive predicates. In particular, we show how to build conditional plans that branch into one or more sub-plans, each with a different ordering for the expensive query predicates, based on the runtime observation of low-cost attributes. We frame the problem of constructing the optimal conditional plan for a given user query and set of candidate low-cost attributes as an optimization problem. We describe an exponential time algorithm for finding such optimal plans, and describe a polynomial-time heuristic for identifying conditional plans that perform well in practice. We also show how to compactly model conditional probability distributions needed to identify correlations and build these plans. We evaluate our algorithms against several real-world sensor-network data sets, showing several-times performance increases for a variety of queries versus traditional optimization techniques. --- paper_title: MauveDB: supporting model-based user views in database systems paper_content: Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications.
The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. --- paper_title: PRESTO: feedback-driven data management in sensor networks paper_content: This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes. --- paper_title: Energy conservation in wireless sensor networks: A survey paper_content: In the last years, wireless sensor networks (WSNs) have gained increasing attention from both the research community and actual users. As sensor nodes are generally battery-powered devices, the critical aspects to face concern how to reduce the energy consumption of nodes, so that the network lifetime can be extended to reasonable times. 
In this paper we first break down the energy consumption for the components of a typical sensor node, and discuss the main directions to energy conservation in WSNs. Then, we present a systematic and comprehensive taxonomy of the energy conservation schemes, which are subsequently discussed in depth. Special attention has been devoted to promising solutions which have not yet obtained wide attention in the literature, such as techniques for energy efficient data acquisition. Finally we conclude the paper with insights for research directions about energy conservation in WSNs. --- paper_title: PAQ: Time Series Forecasting for Approximate Query Answering in Sensor Networks paper_content: In this paper, we present a method for approximating the values of sensors in a wireless sensor network based on time series forecasting. More specifically, our approach relies on autoregressive models built at each sensor to predict local readings. Nodes transmit these local models to a sink node, which uses them to predict sensor values without directly communicating with sensors. When needed, nodes send information about outlier readings and model updates to the sink. We show that this approach can dramatically reduce the amount of communication required to monitor the readings of all sensors in a network, and demonstrate that our approach provides provably-correct, user-controllable error bounds on the predicted values of each sensor. --- paper_title: Model-Driven Data Acquisition in Sensor Networks paper_content: Declarative queries are proving to be an attractive paradigm for interacting with networks of wireless sensors. The metaphor that "the sensornet is a database" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques. --- paper_title: A Weighted Moving Average-based Approach for Cleaning Sensor Data paper_content: Nowadays, wireless sensor networks have been widely used in many monitoring applications. Due to the low quality of sensors and random effects of the environment, however, it is well known that the collected sensor data are noisy. Therefore, it is very critical to clean the sensor data before using them to answer queries or conduct data analysis.
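The PAQ entry above fits a local autoregressive model at each sensor and ships only the model, outlier readings, and occasional updates to the sink. A rough sketch of that split, assuming a least-squares AR fit and a user-chosen error bound (the order p and threshold eps are placeholders, not PAQ's actual parameters):

```python
import numpy as np

def fit_ar(history, p=3):
    """Least-squares fit of AR(p) coefficients to a 1-D reading history."""
    y = history[p:]
    X = np.column_stack([history[p - k - 1:len(history) - k - 1] for k in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                                        # coefficient k multiplies lag k+1

def predict_next(recent, coeffs):
    """One-step-ahead prediction from the p most recent readings."""
    p = len(coeffs)
    return float(np.dot(coeffs, recent[-p:][::-1]))

def sensor_step(reading, recent, coeffs, eps=0.5):
    """Sensor side: flag the reading only if the shared model could not have predicted it."""
    predicted = predict_next(recent, coeffs)
    needs_transmission = abs(reading - predicted) > eps  # outlier w.r.t. the model at the sink
    return needs_transmission, predicted

history = np.sin(np.arange(60) / 5.0) + 0.05 * np.random.default_rng(1).normal(size=60)
coeffs = fit_ar(history, p=3)
print(sensor_step(history[-1] + 2.0, history[:-1], coeffs))  # injected spike -> flagged
```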
Popular data cleaning approaches, such as the moving average, cannot meet the requirements of both energy efficiency and quick response time in many sensor-related applications. In this paper, we propose a hybrid sensor data cleaning approach with confidence. Specifically, we propose a smart weighted moving average (WMA) algorithm that collects confidence data from sensors and computes the weighted moving average. The rationale behind the WMA algorithm is to draw more samples for a particular value that is of great importance to the moving average, and provide higher confidence weight for this value, such that this important value can be quickly reflected in the moving average. Based on our extensive simulation results, we demonstrate that, compared to the simple moving average (SMA), our WMA approach can effectively clean data and offer quick response time. --- paper_title: A neuro-fuzzy approach for sensor network data cleaning paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. However, sensor data are subject to several sources of errors as the data captured from the physical world through these sensor devices tend to be incomplete, noisy, and unreliable, thus yielding imprecise or even incorrect and misleading answers which can be very significant if they result in immediate critical decisions or activation of actuators. Traditional data cleaning techniques cannot be applied in this context as they do not take into account the strong spatial and temporal correlations typically present in sensor data, so machine learning techniques can be of great help. In this paper, we propose a neuro-fuzzy regression approach to clean sensor network data: the well-known ANFIS model is employed for reducing the uncertainty associated with the data, thus obtaining a more accurate estimate of sensor readings. The obtained cleaning results show good ANFIS performance compared to other commonly used models such as kernel methods, and we demonstrate its effectiveness if the cleaning model has to be implemented at sensor level rather than at base-station level. --- paper_title: Declarative Support for Sensor Data Cleaning paper_content: Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. We demonstrate ESP's effectiveness and ease of use through three real-world scenarios. --- paper_title: MauveDB: supporting model-based user views in database systems paper_content: Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data.
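The weighted-moving-average entry above attaches a confidence to each reading and lets high-confidence values dominate the cleaned estimate. A toy version of such a cleaner, with the window size and weighting rule chosen only for illustration rather than taken from the paper:

```python
from collections import deque

class WeightedMovingAverage:
    """Toy cleaner: each reading arrives with a confidence in (0, 1], and the
    cleaned value is the confidence-weighted mean over a sliding window."""
    def __init__(self, window=8):
        self.buf = deque(maxlen=window)          # stores (value, confidence) pairs

    def update(self, value, confidence):
        self.buf.append((value, confidence))
        total_w = sum(w for _, w in self.buf)
        return sum(v * w for v, w in self.buf) / total_w

cleaner = WeightedMovingAverage(window=5)
for raw, conf in [(20.1, 0.9), (35.0, 0.2), (20.4, 0.8)]:   # the noisy spike carries a low weight
    print(round(cleaner.update(raw, conf), 2))
```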
Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. --- paper_title: ERACER: a database approach for statistical inference and data cleaning paper_content: Real-world databases often contain syntactic and semantic errors, in spite of integrity constraints and other safety measures incorporated into modern DBMSs. We present ERACER, an iterative statistical framework for inferring missing information and correcting such errors automatically. Our approach is based on belief propagation and relational dependency networks, and includes an efficient approximate inference algorithm that is easily implemented in standard DBMSs using SQL and user defined functions. The system performs the inference and cleansing tasks in an integrated manner, using shrinkage techniques to infer correct values accurately even in the presence of dirty data. We evaluate the proposed methods empirically on multiple synthetic and real-world data sets. The results show that our framework achieves accuracy comparable to a baseline statistical method using Bayesian networks with exact inference. However, our framework has wider applicability than the Bayesian network baseline, due to its ability to reason with complex, cyclic relational dependencies. --- paper_title: A Deferred Cleansing Method for RFID Data Analytics paper_content: Radio Frequency Identification is gaining broader adoption in many areas. One of the challenges in implementing an RFID-based system is dealing with anomalies in RFID reads. A small number of anomalies can translate into large errors in analytical results. Conventional "eager" approaches cleanse all data upfront and then apply queries on cleaned data. However, this approach is not feasible when several applications define anomalies and corrections on the same data set differently and not all anomalies can be defined beforehand. This necessitates anomaly handling at query time. We introduce a deferred approach for detecting and correcting RFID data anomalies. Each application specifies the detection and the correction of relevant anomalies using declarative sequence-based rules. 
An application query is then automatically rewritten based on the cleansing rules that the application has specified, to provide answers over cleaned data. We show that a naive approach to deferred cleansing that applies rules without leveraging query information can be prohibitive. We develop two novel rewrite methods, both of which reduce the amount of data to be cleaned, by exploiting predicates in application queries while guaranteeing correct answers. We leverage standardized SQL/OLAP functionality to implement rules specified in a declarative sequence-based language. This allows efficient evaluation of cleansing rules using existing query processing capabilities of a DBMS. Our experimental results show that deferred cleansing is affordable for typical analytic queries over RFID data. --- paper_title: MIST: Distributed Indexing and Querying in Sensor Networks using Statistical Models paper_content: The modeling of high level semantic events from low level sensor signals is important in order to understand distributed phenomena. For such content-modeling purposes, transformation of numeric data into symbols and the modeling of resulting symbolic sequences can be achieved using statistical models---Markov Chains (MCs) and Hidden Markov Models (HMMs). We consider the problem of distributed indexing and semantic querying over such sensor models. Specifically, we are interested in efficiently answering (i) range queries: return all sensors that have observed an unusual sequence of symbols with a high likelihood, (ii) top-1 queries: return the sensor that has the maximum probability of observing a given sequence, and (iii) 1-NN queries: return the sensor (model) which is most similar to a query model. All the above queries can be answered at the centralized base station, if each sensor transmits its model to the base station. However, this is communication-intensive. We present a much more efficient alternative---a distributed index structure, MIST (Model-based Index STructure), and accompanying algorithms for answering the above queries. MIST aggregates two or more constituent models into a single composite model, and constructs an in-network hierarchy over such composite models. We develop two kinds of composite models: the first kind captures the average behavior of the underlying models and the second kind captures the extreme behaviors of the underlying models. Using the index parameters maintained at the root of a subtree, we bound the probability of observation of a query sequence from a sensor in the subtree. We also bound the distance of a query model to a sensor model using these parameters. Extensive experimental evaluation on both real-world and synthetic data sets show that the MIST schemes scale well in terms of network size and number of model states. We also show its superior performance over the centralized schemes in terms of update, query, and total communication costs. --- paper_title: Querying continuous functions in a database system paper_content: Many scientific, financial, data mining and sensor network applications need to work with continuous, rather than discrete data e.g., temperature as a function of location, or stock prices or vehicle trajectories as a function of time. Querying raw or discrete data is unsatisfactory for these applications -- e.g., in a sensor network, it is necessary to interpolate sensor readings to predict values at locations where sensors are not deployed. 
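The MIST entry above scores symbol sequences against per-sensor Markov chains and answers range and top-1 queries from those scores. The core scoring step, the likelihood of a sequence under a first-order chain, can be sketched as follows (the two-symbol alphabet and the probabilities are invented for the example):

```python
import numpy as np

def sequence_log_likelihood(seq, start_probs, trans):
    """Log-probability of observing `seq` (a list of symbol indices) under a
    first-order Markov chain with the given start and transition probabilities."""
    logp = np.log(start_probs[seq[0]])
    for prev, cur in zip(seq[:-1], seq[1:]):
        logp += np.log(trans[prev, cur])
    return logp

# Two-symbol alphabet {0: 'low', 1: 'high'}; a range query would return every
# sensor whose chain assigns the query sequence a likelihood above a threshold.
start = np.array([0.7, 0.3])
trans = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
print(sequence_log_likelihood([0, 0, 1, 1], start, trans))
```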
In other situations, raw data can be inaccurate owing to measurement errors, and it is useful to fit continuous functions to raw data and query the functions, rather than raw data itself -- e.g., fitting a smooth curve to noisy sensor readings, or a smooth trajectory to GPS data containing gaps or outliers. Existing databases do not support storing or querying continuous functions, short of brute-force discretization of functions into a collection of tuples. We present FunctionDB, a novel database system that treats mathematical functions as first-class citizens that can be queried like traditional relations. The key contribution of FunctionDB is an efficient and accurate algebraic query processor - for the broad class of multi-variable polynomial functions, FunctionDB executes queries directly on the algebraic representation of functions without materializing them into discrete points, using symbolic operations: zero finding, variable substitution, and integration. Even when closed-form solutions are intractable, FunctionDB leverages symbolic approximation operations to improve performance. We evaluate FunctionDB on real data sets from a temperature sensor network, and on traffic traces from Boston roads. We show that operating in the functional domain has substantial advantages in terms of accuracy (15-30%) and up to order-of-magnitude (10x-100x) performance wins over existing approaches that represent models as discrete collections of points. --- paper_title: Probabilistic Inference over RFID Streams in Mobile Environments paper_content: Recent innovations in RFID technology are enabling large-scale cost-effective deployments in retail, healthcare, pharmaceuticals and supply chain management. The advent of mobile or handheld readers adds significant new challenges to RFID stream processing due to the inherent reader mobility, increased noise, and incomplete data. In this paper, we address the problem of translating noisy, incomplete raw streams from mobile RFID readers into clean, precise event streams with location information. Specifically we propose a probabilistic model to capture the mobility of the reader, object dynamics, and noisy readings. Our model can self-calibrate by automatically estimating key parameters from observed data. Based on this model, we employ a sampling-based technique called particle filtering to infer clean, precise information about object locations from raw streams from mobile RFID readers. Since inference based on standard particle filtering is neither scalable nor efficient in our settings, we propose three enhancements---particle factorization, spatial indexing, and belief compression---for scalable inference over large numbers of objects and high-volume streams. Our experiments show that our approach can offer 49% error reduction over a state-of-the-art data cleaning approach such as SMURF while also being scalable and efficient. --- paper_title: Cleaning and querying noisy sensors paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. Unfortunately, sensor data is subject to several sources of errors such as noise from external sources, hardware noise, inaccuracies and imprecision, and various environmental effects. Such errors may seriously impact the answer to any query posed to the sensors.
In particular, they may yield imprecise or even incorrect and misleading answers which can be very significant if they result in immediate critical decisions or activation of actuators. In this paper, we present a framework for cleaning and querying noisy sensors. Specifically, we present a Bayesian approach for reducing the uncertainty associated with the data, that arise due to random noise, in an online fashion. Our approach combines prior knowledge of the true sensor reading, the noise characteristics of this sensor, and the observed noisy reading in order to obtain a more accurate estimate of the reading. This cleaning step can be performed either at the sensor level or at the base-station. Based on our proposed uncertainty models and using a statistical approach, we introduce several algorithms for answering traditional database queries over uncertain sensor readings. Finally, we present a preliminary evaluation of our proposed approach using synthetic data and highlight some exciting research directions in this area. --- paper_title: Adaptive Cleaning for RFID Data Streams paper_content: To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a "smoothing filter", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications. --- paper_title: ORDEN: outlier region detection and exploration in sensor networks paper_content: Sensor networks play a central role in applications that monitor variables in geographic areas such as the traffic volume on roads or the temperature in the environment. A key feature users are often interested in when employing such systems is the detection of unusual phenomena, that is, anomalous values measured by the sensors. In this demonstration, we present a system, called ORDEN, that allows for the detection and (visual) exploration of outliers and anomalous events in sensor networks in real-time. In particular, the system constructs outlier regions from anomalous sensor measurements to provide for a comprehensive description of the spatial extent of phenomena of interest. With our system, users can interactively explore displayed outlier regions and investigate the heterogeneity within individual regions using different parameter and threshold settings. Using real-world sensor data streams from different application domains, we demonstrate the effectiveness and utility of our system. --- paper_title: Online Filtering, Smoothing and Probabilistic Modeling of Streaming data paper_content: In this paper, we address the problem of extending a relational database system to facilitate efficient real-time application of dynamic probabilistic models to streaming data. We use the recently proposed abstraction of model-based views for this purpose, by allowing users to declaratively specify the model to be applied, and by presenting the output of the models to the user as a probabilistic database view. 
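The 'Cleaning and querying noisy sensors' entry above combines a prior on the true reading, the sensor's noise characteristics, and the observed value. If both the prior and the noise are taken to be Gaussian (an assumption made here only for illustration, not necessarily the paper's exact model), that combination reduces to the standard conjugate update:

```python
def gaussian_posterior(prior_mean, prior_var, observation, noise_var):
    """Posterior over the true reading after one noisy measurement, assuming a
    Gaussian prior on the reading and Gaussian sensor noise."""
    k = prior_var / (prior_var + noise_var)          # gain: how much to trust the observation
    post_mean = prior_mean + k * (observation - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Prior belief: about 22 C; a noisy sensor reports 27 C with known noise variance 4.
print(gaussian_posterior(22.0, 1.0, 27.0, 4.0))      # estimate is pulled only partway toward 27
```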
We support declarative querying over such views using an extended version of SQL that allows for querying probabilistic data. Underneath we use particle filters, a class of sequential Monte Carlo algorithms, to represent the present and historical states of the model as sets of weighted samples (particles) that are kept up-to-date as new data arrives. We develop novel techniques to convert the queries on the model-based view directly into queries over particle tables, enabling highly efficient query processing. Finally, we present an experimental evaluation of our prototype implementation over several synthetic and real datasets, which demonstrates the feasibility of online modeling of streaming data using our system and establishes the advantages of tight integration between dynamic probabilistic models and databases. --- paper_title: Predictive Modeling-Based Data Collection in Wireless Sensor Networks paper_content: We address the problem of designing practical, energy-efficient protocols for data collection in wireless sensor networks using predictive modeling. Prior work has suggested several approaches to capture and exploit the rich spatio-temporal correlations prevalent in WSNs during data collection. Although shown to be effective in reducing the data collection cost, those approaches use simplistic correlation models and further, ignore many idiosyncrasies of WSNs, in particular the broadcast nature of communication. Our proposed approach is based on approximating the joint probability distribution over the sensors using undirected graphical models, ideally suited to exploit both the spatial correlations and the broadcast nature of communication. We present algorithms for optimally using such a model for data collection under different communication models, and for identifying an appropriate model to use for a given sensor network. Experiments over synthetic and real-world datasets show that our approach significantly reduces the data collection cost. --- paper_title: Efficiently Maintaining Distributed Model-Based Views on Real-Time Data Streams paper_content: Minimizing communication cost is a fundamental problem in large-scale federated sensor networks. Maintaining model-based views of data streams has been highlighted because it permits efficient data communication by transmitting parameter values of models, instead of original data streams. We propose a framework that employs the advantages of using model-based views for communication-efficient stream data processing over federated sensor networks, yet it significantly improves state-of-the-art approaches. The framework is generic and any time-parameterized model can be plugged in, while accuracy guarantees for query results are ensured throughout the large-scale networks. In addition, we boost the performance of the framework by the coded model update that enables efficient model update from one node to another. It predetermines parameter values for the model, updates only identifiers of the parameter values, and compresses the identifiers by utilizing bitmaps. Moreover, we propose a correlation model, named the coded inter-variable model, that merges the efficiency of the coded model update with that of correlation models. Empirical studies with real data demonstrate that our proposal achieves substantial amounts of communication reduction, outperforming state-of-the-art methods.
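The online filtering entry above keeps the state of a dynamic probabilistic model as a set of weighted particles that are re-weighted and resampled as new readings arrive. A bare-bones bootstrap particle filter for a one-dimensional random-walk state with Gaussian observation noise, with all model choices assumed for the example rather than taken from the paper:

```python
import numpy as np

def particle_filter_step(particles, weights, observation,
                         process_std=0.5, obs_std=1.0, rng=np.random.default_rng()):
    """One predict / update / resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk transition model.
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # Update: reweight particles by the Gaussian likelihood of the new observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = np.zeros(500)
weights = np.full(500, 1.0 / 500)
for z in [0.2, 0.7, 1.1, 1.6]:                        # a short stream of noisy observations
    particles, weights = particle_filter_step(particles, weights, z)
print(float(np.sum(particles * weights)))             # posterior mean estimate of the state
```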
--- paper_title: Efficient time series matching by wavelets paper_content: Time series stored as feature vectors can be indexed by multidimensional index trees like R-Trees for fast retrieval. Due to the dimensionality curse problem, transformations are applied to time series to reduce the number of dimensions of the feature vectors. Different transformations like the Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT), Karhunen-Loeve (KL) transform or Singular Value Decomposition (SVD) can be applied. While the use of the DFT and the K-L transform or SVD has been studied in the literature, to our knowledge, there is no in-depth study on the application of the DWT. In this paper we propose to use the Haar Wavelet Transform for time series indexing. The major contributions are: (1) we show that Euclidean distance is preserved in the Haar transformed domain and no false dismissal will occur, (2) we show that the Haar transform can outperform the DFT through experiments, (3) a new similarity model is suggested to accommodate vertical shift of time series, and (4) a two-phase method is proposed for efficient n-nearest neighbor queries in time series databases. --- paper_title: Streaming Pattern Discovery in Multiple Time-Series paper_content: In this paper, we introduce SPIRIT (Streaming Pattern dIscoveRy in multIple Time-series). Given n numerical data streams, all of whose values we observe at each time tick t, SPIRIT can incrementally find correlations and hidden variables, which summarise the key trends in the entire stream collection. It can do this quickly, with no buffering of stream values and without comparing pairs of streams. Moreover, it is any-time, single pass, and it dynamically detects changes. The discovered trends can also be used to immediately spot potential anomalies, to do efficient forecasting and, more generally, to dramatically simplify further data processing. Our experimental evaluation and case studies show that SPIRIT can incrementally capture correlations and discover trends, efficiently and effectively. --- paper_title: GAMPS: compressing multi sensor data by grouping and amplitude scaling paper_content: We consider the problem of collectively approximating a set of sensor signals using the least amount of space so that any individual signal can be efficiently reconstructed within a given maximum (L∞) error e. The problem arises naturally in applications that need to collect large amounts of data from multiple concurrent sources, such as sensors, servers and network routers, and archive them over a long period of time for offline data mining. We present GAMPS, a general framework that addresses this problem by combining several novel techniques. First, it dynamically groups multiple signals together so that signals within each group are correlated and can be maximally compressed jointly. Second, it appropriately scales the amplitudes of different signals within a group and compresses them within the maximum allowed reconstruction error bound. Our schemes are polynomial-time (α, β)-approximation schemes, meaning that the maximum (L∞) error is at most α·e and the space used is at most β times the optimal memory. Finally, GAMPS maintains an index so that various queries can be issued directly on compressed data. Our experiments on several real-world sensor datasets show that GAMPS significantly reduces space without compromising the quality of search and query.
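The Haar-wavelet indexing entry above rests on the fact that the orthonormal Haar transform preserves Euclidean distance, so pruning on a prefix of coefficients cannot cause false dismissals. A small check of that property (this toy transform requires a power-of-two length; the example signals are arbitrary):

```python
import numpy as np

def haar_transform(x):
    """Orthonormal Haar wavelet transform of a 1-D signal whose length is a power of two."""
    x = np.asarray(x, dtype=float).copy()
    coeffs, n = [], len(x)
    while n > 1:
        pairs = x[:n].reshape(-1, 2)
        averages = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        details = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        coeffs.append(details)          # detail coefficients at this resolution level
        x[:n // 2] = averages           # recurse on the (scaled) averages
        n //= 2
    coeffs.append(x[:1])                # overall scaled average
    return np.concatenate(coeffs[::-1])

a = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
b = np.array([2.0, 7.0, 1.0, 8.0, 2.0, 8.0, 1.0, 8.0])
# Distance in the time domain equals distance in the Haar domain (the transform is orthonormal).
print(np.linalg.norm(a - b), np.linalg.norm(haar_transform(a) - haar_transform(b)))
```

Keeping only the first few coefficients of each transformed series then underestimates the true distance, which is exactly what an index needs to guarantee no false dismissals.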
--- paper_title: StatStream: Statistical Monitoring of Thousands of Data Streams in Real Time paper_content: Consider the problem of monitoring tens of thousands of time series data streams in an online fashion and making decisions based on them. In addition to single stream statistics such as average and standard deviation, we also want to find high correlations among all pairs of streams. A stock market trader might use such a tool to spot arbitrage opportunities. This paper proposes efficient methods for solving this problem based on Discrete Fourier Transforms and a three level time interval hierarchy. Extensive experiments on synthetic data and real world financial trading data show that our algorithm beats the direct computation approach by several orders of magnitude. It also improves on previous Fourier Transform approaches by allowing the efficient computation of time-delayed correlation over any size sliding window and any time delay. Correlation also lends itself to an efficient grid-based data structure. The result is the first algorithm that we know of to compute correlations over thousands of data streams in real time. The algorithm is incremental, has fixed response time, and can monitor the pairwise correlations of 10,000 streams on a single PC. The algorithm is embarrassingly parallelizable. --- paper_title: Similarity search over time-series data using wavelets paper_content: Considers the use of wavelet transformations as a dimensionality reduction technique to permit efficient similarity searching over high-dimensional time-series data. While numerous transformations have been proposed and studied, the only wavelet that has been shown to be effective for this application is the Haar wavelet. In this work, we observe that a large class of wavelet transformations (not only orthonormal wavelets but also bi-orthonormal wavelets) can be used to support similarity searching. This class includes the most popular and most effective wavelets being used in image compression. We present a detailed performance study of the effects of using different wavelets on the performance of similarity searching for time-series data. We include several wavelets that outperform both the Haar wavelet and the best-known non-wavelet transformations for this application. To ensure our results are usable by an application engineer, we also show how to configure an indexing strategy for the best-performing transformations. Finally, we identify classes of data that can be indexed efficiently using these wavelet transformations. --- paper_title: Efficient gathering of correlated data in sensor networks paper_content: In this article, we design techniques that exploit data correlations in sensor data to minimize communication costs (and hence, energy costs) incurred during data gathering in a sensor network. Our proposed approach is to select a small subset of sensor nodes that may be sufficient to reconstruct data for the entire sensor network. Then, during data gathering only the selected sensors need to be involved in communication. The selected set of sensors must also be connected, since they need to relay data to the data-gathering node. We define the problem of selecting such a set of sensors as the connected correlation-dominating set problem, and formulate it in terms of an appropriately defined correlation structure that captures general data correlations in a sensor network. 
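The StatStream entry above approximates the correlation of two normalized streams from a handful of DFT coefficients. The identity behind it, Parseval's theorem plus the conjugate symmetry of real signals, can be checked with the sketch below; the choice of k and the test signals are arbitrary:

```python
import numpy as np

def approx_correlation(x, y, k=6):
    """Estimate corr(x, y) from the first k non-zero-frequency DFT coefficients of the
    z-normalized series (a good approximation when most energy sits in low frequencies)."""
    n = len(x)
    xn = (x - x.mean()) / (x.std() * np.sqrt(n))     # zero-mean, unit-norm
    yn = (y - y.mean()) / (y.std() * np.sqrt(n))
    X, Y = np.fft.fft(xn), np.fft.fft(yn)
    # Coefficient 0 vanishes after mean removal; conjugate symmetry doubles the remaining terms.
    return float(2.0 / n * np.real(np.sum(X[1:k + 1] * np.conj(Y[1:k + 1]))))

rng = np.random.default_rng(0)
t = np.arange(128)
x = np.sin(2 * np.pi * t / 32) + 0.1 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * t / 32 + 0.3) + 0.1 * rng.normal(size=t.size)
print(np.corrcoef(x, y)[0, 1], approx_correlation(x, y))   # the two values are close
```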
We develop a set of energy-efficient distributed algorithms and competitive centralized heuristics to select a connected correlation-dominating set of small size. The designed distributed algorithms can be implemented in an asynchronous communication model, and can tolerate message losses. We also design an exponential (but nonexhaustive) centralized approximation algorithm that returns a solution within O(log n) of the optimal size. Based on the approximation algorithm, we design a class of centralized heuristics that are empirically shown to return near-optimal solutions. Simulation results over randomly generated sensor networks with both artificially and naturally generated data sets demonstrate the efficiency of the designed algorithms and the viability of our technique—even in dynamic conditions. --- paper_title: Exploiting correlated attributes in acquisitional query processing paper_content: Sensor networks and other distributed information systems (such as the Web) must frequently access data that has a high per-attribute acquisition cost, in terms of energy, latency, or computational resources. When executing queries that contain several predicates over such expensive attributes, we observe that it can be beneficial to use correlations to automatically introduce low-cost attributes whose observation will allow the query processor to better estimate die selectivity of these expensive predicates. In particular, we show how to build conditional plans that branch into one or more sub-plans, each with a different ordering for the expensive query predicates, based on the runtime observation of low-cost attributes. We frame the problem of constructing the optimal conditional plan for a given user query and set of candidate low-cost attributes as an optimization problem. We describe an exponential time algorithm for finding such optimal plans, and describe a polynomial-time heuristic for identifying conditional plans that perform well in practice. We also show how to compactly model conditional probability distributions needed to identify correlations and build these plans. We evaluate our algorithms against several real-world sensor-network data sets, showing several-times performance increases for a variety of queries versus traditional optimization techniques. --- paper_title: MauveDB: supporting model-based user views in database systems paper_content: Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. 
Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. --- paper_title: PRESTO: feedback-driven data management in sensor networks paper_content: This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes. --- paper_title: Energy conservation in wireless sensor networks: A survey paper_content: In the last years, wireless sensor networks (WSNs) have gained increasing attention from both the research community and actual users. As sensor nodes are generally battery-powered devices, the critical aspects to face concern how to reduce the energy consumption of nodes, so that the network lifetime can be extended to reasonable times. In this paper we first break down the energy consumption for the components of a typical sensor node, and discuss the main directions to energy conservation in WSNs. Then, we present a systematic and comprehensive taxonomy of the energy conservation schemes, which are subsequently discussed in depth. Special attention has been devoted to promising solutions which have not yet obtained a wide attention in the literature, such as techniques for energy efficient data acquisition. Finally we conclude the paper with insights for research directions about energy conservation in WSNs. --- paper_title: PAQ: Time Series Forecasting for Approximate Query Answering in Sensor Networks paper_content: In this paper, we present a method for approximating the values of sensors in a wireless sensor network based on time series forecasting. 
More specifically, our approach relies on autoregressive models built at each sensor to predict local readings. Nodes transmit these local models to a sink node, which uses them to predict sensor values without directly communicating with sensors. When needed, nodes send information about outlier readings and model updates to the sink. We show that this approach can dramatically reduce the amount of communication required to monitor the readings of all sensors in a network, and demonstrate that our approach provides provably-correct, user-controllable error bounds on the predicted values of each sensor. --- paper_title: Model-Driven Data Acquisition in Sensor Networks paper_content: Declarative queries are proving to be an attractive paradigm for ineracting with networks of wireless sensors. The metaphor that "the sensornet is a database" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques. --- paper_title: TinyDB: an acquisitional query processing system for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. --- paper_title: The design of an acquisitional query processor for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. 
By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. --- paper_title: TiNA: a scheme for temporal coherency-aware in-network aggregation paper_content: This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks), and to improve quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation scheme with respect to power savings as well as the quality of data for aggregate queries. Preliminary results show that TiNA can reduce power consumption by up to 50% without any loss in the quality of data. --- paper_title: TAG: a Tiny AGgregation service for ad-hoc sensor networks paper_content: We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution. --- paper_title: Query processing in sensor networks paper_content: Hardware for sensor nodes that combine physical sensors, actuators, embedded processors, and communication components has advanced significantly over the last decade, and made the large-scale deployment of such sensors a reality. Applications range from monitoring applications such as inventory maintenance over health care to military applications. In this paper, we evaluate the design of a query layer for sensor networks. The query layer accepts queries in a declarative language that are then optimized to generate efficient query execution plans with in-network processing which can significantly reduce resource requirements. We examine the main architectural components of such a query layer, concentrating on in-network aggregation, interaction of in-network aggregation with the wireless routing protocol, and distributed query processing. Initial simulation experiments with the ns-2 network simulator show the tradeoffs of our system. --- paper_title: Exploiting correlated attributes in acquisitional query processing paper_content: Sensor networks and other distributed information systems (such as the Web) must frequently access data that has a high per-attribute acquisition cost, in terms of energy, latency, or computational resources.
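TAG, summarised in the entry above, pushes aggregation into the routing tree: every node merges its children's partial state with its own reading and forwards a single partial result to its parent. The sketch below shows that pattern for AVG, whose partial state is a (sum, count) pair; the tree shape and readings are made-up examples, and real TAG additionally handles epochs, lossy links, and a wider family of aggregates.

```python
# Sketch: TAG-style in-network aggregation of AVG over a routing tree.
# Each node forwards one partial state (sum, count) instead of raw readings.

def merge(a, b):
    """Merge two partial AVG states."""
    return (a[0] + b[0], a[1] + b[1])

def aggregate(node, children, readings):
    """Post-order traversal: combine own reading with children's partial states."""
    state = (readings[node], 1)
    for child in children.get(node, []):
        state = merge(state, aggregate(child, children, readings))
    return state                      # this pair is all the node sends upward

if __name__ == "__main__":
    # Hypothetical 6-node routing tree rooted at the base station "A".
    children = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
    readings = {"A": 21.0, "B": 22.5, "C": 20.0, "D": 23.0, "E": 21.5, "F": 19.5}
    total, count = aggregate("A", children, readings)
    print(f"network AVG = {total / count:.2f} from {count} sensors")
```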
When executing queries that contain several predicates over such expensive attributes, we observe that it can be beneficial to use correlations to automatically introduce low-cost attributes whose observation will allow the query processor to better estimate the selectivity of these expensive predicates. In particular, we show how to build conditional plans that branch into one or more sub-plans, each with a different ordering for the expensive query predicates, based on the runtime observation of low-cost attributes. We frame the problem of constructing the optimal conditional plan for a given user query and set of candidate low-cost attributes as an optimization problem. We describe an exponential time algorithm for finding such optimal plans, and describe a polynomial-time heuristic for identifying conditional plans that perform well in practice. We also show how to compactly model conditional probability distributions needed to identify correlations and build these plans. We evaluate our algorithms against several real-world sensor-network data sets, showing several-times performance increases for a variety of queries versus traditional optimization techniques. --- paper_title: Model-Driven Data Acquisition in Sensor Networks paper_content: Declarative queries are proving to be an attractive paradigm for interacting with networks of wireless sensors. The metaphor that "the sensornet is a database" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques. --- paper_title: PRESTO: feedback-driven data management in sensor networks paper_content: This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed.
In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes. --- paper_title: Time Series Analysis and Its Applications paper_content: Characteristics of time series.- Time series regression and exploratory data analysis.- ARIMA models.- Spectral analysis and filtering.- Additional time domain topics.- State-space models.- Statistical methods in the frequency domain. --- paper_title: Data-Driven Processing in Sensor Networks paper_content: Wireless sensor networks are poised to enable continuous data collection on unprecedented scales, in terms of area location and size, and frequency. This is a great boon to fields such as ecological modeling. We are collaborating with researchers to build sophisticated temporal and spatial models of forest growth, utilizing a variety of measurements. There exists a crucial challenge in supporting this activity: network nodes have limited battery life, and radio communication is the dominant energy consumer. The straightforward solution of instructing all nodes to report their measurements as they are taken to a base station will quickly consume the network’s energy. On the other hand, the solution of building models for node behavior and substituting these in place of the actual measurements is in conflict with the end goal of constructing models. To address this dilemma, we propose data-driven processing, the goal of which is to provide continuous data without continuous reporting, but with checks against the actual data. Our primary strategy for this is suppression, which uses in-network monitoring to limit the amount of communication to the base station. Suppression employs models for optimization of data collection, but not at the risk of correctness. We discuss techniques for designing data-driven collection, such as building suppression schemes and incorporating models into them. We then present and address some of the major challenges to making this approach practical, such as handling failure and avoiding the need to co-design the network application and communication layers. --- paper_title: Snapshot queries: towards data-centric sensor networks paper_content: In this paper we introduce the idea of snapshot queries for energy efficient data acquisition in sensor networks. Network nodes generate models of their surrounding environment that are used for electing, using a localized algorithm, a small set of representative nodes in the network. These representative nodes constitute a network snapshot and can be used to provide quick approximate answers to user queries while reducing substantially the energy consumption in the network. We present a detailed experimental study of our framework and algorithms, varying multiple parameters like the available memory of the sensor nodes, their transmission range, the network message loss etc.
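The snapshot-queries entry above elects a small set of representative nodes whose models can answer approximate queries on behalf of their neighbours. As a rough, centralised stand-in for that localized election (the paper itself uses per-node models and a distributed protocol), the sketch below greedily picks representatives so that every node's current reading is within eps of some representative in its neighbourhood; the graph, readings, and eps are hypothetical.

```python
# Sketch: greedy election of "representative" nodes so that every node is
# covered by a neighbour whose reading differs from its own by at most eps.
# A centralised simplification of the localized snapshot-election idea.

def elect_representatives(readings, neighbours, eps=0.5):
    uncovered = set(readings)
    reps = []
    while uncovered:
        def coverage(n):
            # Nodes (including n itself) that n could represent right now.
            return [m for m in ([n] + neighbours.get(n, []))
                    if m in uncovered and abs(readings[m] - readings[n]) <= eps]
        best = max(uncovered, key=lambda n: len(coverage(n)))
        uncovered.difference_update(coverage(best))
        reps.append(best)
    return reps

if __name__ == "__main__":
    readings = {"A": 20.1, "B": 20.3, "C": 20.2, "D": 25.0, "E": 24.8}
    neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
                  "D": ["C", "E"], "E": ["D"]}
    print("representatives:", elect_representatives(readings, neighbours))
```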
Depending on the configuration, snapshot queries provide a reduction of up to 90% in the number of nodes that need to participate in a user query. --- paper_title: A Weighted Moving Average-based Approach for Cleaning Sensor Data paper_content: Nowadays, wireless sensor networks have been widely used in many monitoring applications. Due to the low quality of sensors and random effects of the environments, however, it is well known that the collected sensor data are noisy. Therefore, it is very critical to clean the sensor data before using them to answer queries or conduct data analysis. Popular data cleaning approaches, such as the moving average, cannot meet the requirements of both energy efficiency and quick response time in many sensor related applications. In this paper, we propose a hybrid sensor data cleaning approach with confidence. Specifically, we propose a smart weighted moving average (WMA) algorithm that collects confidence data from sensors and computes the weighted moving average. The rationale behind the WMA algorithm is to draw more samples for a particular value that is of great importance to the moving average, and provide higher confidence weight for this value, such that this important value can be quickly reflected in the moving average. Based on our extensive simulation results, we demonstrate that, compared to the simple moving average (SMA), our WMA approach can effectively clean data and offer quick response time. --- paper_title: Declarative Support for Sensor Data Cleaning paper_content: Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. We demonstrate ESP's effectiveness and ease of use through three real-world scenarios. --- paper_title: A Pipelined Framework for Online Cleaning of Sensor Data Streams paper_content: Data captured from the physical world through sensor devices tends to be noisy and unreliable. The data cleaning process for such data is not easily handled by standard data warehouse-oriented techniques, which do not take into account the strong temporal and spatial components of sensor data. We present Extensible Sensor Stream Processing (ESP), a declarative query-based framework designed to clean the data streams produced by sensor devices. --- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks.
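The weighted-moving-average entry above cleans noisy readings with a sliding window in which each sample carries a confidence weight, so that important values show up in the average quickly. A minimal version of that idea is sketched below; the window length and the particular weights are illustrative choices rather than the paper's WMA algorithm.

```python
# Sketch: confidence-weighted moving average over a sliding window.
# Low-confidence samples (e.g. a suspected glitch) barely move the average.
from collections import deque

def weighted_moving_average(samples, window=5):
    """samples: iterable of (value, confidence) pairs with confidence > 0."""
    buf = deque(maxlen=window)
    cleaned = []
    for value, conf in samples:
        buf.append((value, conf))
        wsum = sum(c for _, c in buf)
        cleaned.append(sum(v * c for v, c in buf) / wsum)
    return cleaned

if __name__ == "__main__":
    # A noisy spike at t=3 is given low confidence, so it is largely smoothed out.
    samples = [(20.0, 1.0), (20.2, 1.0), (19.9, 1.0), (35.0, 0.1),
               (20.1, 1.0), (20.0, 1.0)]
    print([round(x, 2) for x in weighted_moving_average(samples, window=3)])
```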
Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. --- paper_title: ORDEN: outlier region detection and exploration in sensor networks paper_content: Sensor networks play a central role in applications that monitor variables in geographic areas such as the traffic volume on roads or the temperature in the environment. A key feature users are often interested in when employing such systems is the detection of unusual phenomena, that is, anomalous values measured by the sensors. In this demonstration, we present a system, called ORDEN, that allows for the detection and (visual) exploration of outliers and anomalous events in sensor networks in real-time. In particular, the system constructs outlier regions from anomalous sensor measurements to provide for a comprehensive description of the spatial extent of phenomena of interest. With our system, users can interactively explore displayed outlier regions and investigate the heterogeneity within individual regions using different parameter and threshold settings. Using real-world sensor data streams from different application domains, we demonstrate the effectiveness and utility of our system. --- paper_title: A Weighted Moving Average-based Approach for Cleaning Sensor Data paper_content: Nowadays, wireless sensor networks have been widely used in many monitoring applications. Due to the low quality of sensors and random effects of the environments, however, it is well known that the collected sensor data are noisy. Therefore, it is very critical to clean the sensor data before using them to answer queries or conduct data analysis. Popular data cleaning approaches, such as the moving average, cannot meet the requirements of both energy efficiency and quick response time in many sensor related applications. In this paper, we propose a hybrid sensor data cleaning approach with confidence. Specifically, we propose a smart weighted moving average (WMA) algorithm that collects confidence data from sensors and computes the weighted moving average. The rationale behind the WMA algorithm is to draw more samples for a particular value that is of great importance to the moving average, and provide higher confidence weight for this value, such that this important value can be quickly reflected in the moving average. Based on our extensive simulation results, we demonstrate that, compared to the simple moving average (SMA), our WMA approach can effectively clean data and offer quick response time. --- paper_title: Creating probabilistic databases from imprecise time-series data paper_content: Although efficient processing of probabilistic databases is a well-established field, a wide range of applications are still unable to benefit from these techniques due to the lack of means for creating probabilistic databases. In fact, it is a challenging problem to associate concrete probability values with given time-series data for forming a probabilistic database, since the probability distributions used for deriving such probability values vary over time. In this paper, we propose a novel approach to create tuple-level probabilistic databases from (imprecise) time-series data. 
To the best of our knowledge, this is the first work that introduces a generic solution for creating probabilistic databases from arbitrary time series, which can work in online as well as offline fashion. Our approach consists of two key components. First, the dynamic density metrics that infer time-dependent probability distributions for time series, based on various mathematical models. Our main metric, called the GARCH metric, can robustly capture such evolving probability distributions regardless of the presence of erroneous values in a given time series. Second, the Ω-View builder that creates probabilistic databases from the probability distributions inferred by the dynamic density metrics. For efficient processing, we introduce the σ-cache that reuses the information derived from probability values generated at previous times. Extensive experiments over real datasets demonstrate the effectiveness of our approach. --- paper_title: A neuro-fuzzy approach for sensor network data cleaning paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. However, sensor data are subject to several sources of errors as the data captured from the physical world through these sensor devices tend to be incomplete, noisy, and unreliable, thus yielding imprecise or even incorrect and misleading answers which can be very significative if they result in immediate critical decisions or activation of actuators. Traditional data cleaning techniques cannot be applied in this context as they do not take into account the strong spatial and temporal correlations typically present in sensor data, so machine learning techniques could greatly be of aid. In this paper, we propose a neuro-fuzzy regression approach to clean sensor network data: the well known ANFIS model is employed for reducing the uncertainty associated with the data thus obtaining a more accurate estimate of sensor readings. The obtained cleaning results show good ANFIS performance compared to other common used model such as kernel methods, and we demonstrate its effectiveness if the cleaning model has to be implemented at sensor level rather than at base-station level. --- paper_title: Incorporating quality aspects in sensor data streams paper_content: Sensors are increasingly embedded into physical products in order to capture data about their conditions and usage for decision making in business applications. However, a major issue for such applications is the limited quality of the captured data due to inherently restricted precision and performance of the sensors. Moreover, the data quality is further decreased by data processing to meet resource constraints in streaming environments and ultimately influences business decisions. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In my Ph.D. thesis, I address this problem by developing a system to provide business applications with accurate information on data quality. Furthermore, the system will be able to incorporate and guarantee user-defined data quality levels. In this paper, I will present the major results from my research so far. This includes a novel jumping-window-based approach for the efficient transfer of data quality information as well as a flexible metamodel for storage and propagation of data quality. 
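The entry above on creating probabilistic databases from imprecise time series infers a time-dependent distribution for each reading (its GARCH-based dynamic density metric) and materialises probabilistic tuples from it. The sketch below is a deliberately simplified stand-in for that pipeline: it fits a Gaussian to a rolling window of recent values and emits one (timestamp, mean, std) tuple per reading. The rolling window and the Gaussian assumption are this sketch's own simplifications, not the paper's method.

```python
# Sketch: derive per-timestamp probabilistic tuples (t, mean, std) from a
# time series using a rolling Gaussian fit, a toy stand-in for the
# "dynamic density" idea, which uses GARCH-style models instead.
from collections import deque
from statistics import mean, pstdev

def probabilistic_tuples(series, window=10):
    buf = deque(maxlen=window)
    tuples = []
    for t, x in enumerate(series):
        buf.append(x)
        tuples.append((t, mean(buf), pstdev(buf)))   # one uncertain "tuple" per reading
    return tuples

if __name__ == "__main__":
    series = [20.0, 20.3, 19.8, 20.1, 26.0, 20.2, 20.0]
    for t, mu, sigma in probabilistic_tuples(series, window=4):
        print(f"t={t}: N({mu:.2f}, {sigma:.2f}^2)")
```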
The comprehensive analysis of common data processing operators w.r.t. their impact on data quality allows a fruitful knowledge evaluation and thus diminishes incorrect business decisions. --- paper_title: Representing Data Quality in Sensor Data Streaming Environments paper_content: Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing to meet resource constraints in streaming environments introduces additional noise and decreases the data quality. In order to avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, the issue of how to efficiently provide applications with information about data quality is still an open research problem. In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. The comprehensive analysis of common data stream processing operators and their impact on data quality allows a fruitful data evaluation and diminishes incorrect business decisions. Further, we propose the data quality model control to adapt the data quality granularity to the data stream interestingness. --- paper_title: Probabilistic Inference over RFID Streams in Mobile Environments paper_content: Recent innovations in RFID technology are enabling large-scale cost-effective deployments in retail, healthcare, pharmaceuticals and supply chain management. The advent of mobile or handheld readers adds significant new challenges to RFID stream processing due to the inherent reader mobility, increased noise, and incomplete data. In this paper, we address the problem of translating noisy, incomplete raw streams from mobile RFID readers into clean, precise event streams with location information. Specifically we propose a probabilistic model to capture the mobility of the reader, object dynamics, and noisy readings. Our model can self-calibrate by automatically estimating key parameters from observed data. Based on this model, we employ a sampling-based technique called particle filtering to infer clean, precise information about object locations from raw streams from mobile RFID readers. Since inference based on standard particle filtering is neither scalable nor efficient in our settings, we propose three enhancements---particle factorization, spatial indexing, and belief compression---for scalable inference over large numbers of objects and high-volume streams. Our experiments show that our approach can offer 49\% error reduction over a state-of-the-art data cleaning approach such as SMURF while also being scalable and efficient. --- paper_title: Cleaning and querying noisy sensors paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. Unfortunately, sensor data is subject to several sources of errors such as noise from external sources, hardware noise, inaccuracies and imprecision, and various environmental effects. Such errors may seriously impact the answer to any query posed to the sensors. 
In particular, they may yield imprecise or even incorrect and misleading answers which can be very significant if they result in immediate critical decisions or activation of actuators. In this paper, we present a framework for cleaning and querying noisy sensors. Specifically, we present a Bayesian approach for reducing the uncertainty associated with the data, that arise due to random noise, in an online fashion. Our approach combines prior knowledge of the true sensor reading, the noise characteristics of this sensor, and the observed noisy reading in order to obtain a more accurate estimate of the reading. This cleaning step can be performed either at the sensor level or at the base-station. Based on our proposed uncertainty models and using a statistical approach, we introduce several algorithms for answering traditional database queries over uncertain sensor readings. Finally, we present a preliminary evaluation of our proposed approach using synthetic data and highlight some exciting research directions in this area. --- paper_title: Data cleaning using belief propagation paper_content: Effective data cleaning is critical in many applications where the quality of data is poor due to missing values or inaccurate values. Fortunately, a wide spectrum of applications exhibit strong dependencies between data samples, and such dependencies can be used very effectively for cleaning the data. For example, the readings of nearby sensors are generally correlated, and proteins interact with each other when performing crucial functions. We propose a data cleaning approach, based on modeling data dependencies with Markov networks. Belief propagation is used to efficiently compute the marginal or maximum posterior probabilities, so as to infer missing values or to correct errors. To illustrate the benefits and generality of the technique, we discuss its use in several applications and report on the data quality and improvements so obtained. --- paper_title: Anomaly detection: A survey paper_content: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. 
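The 'Cleaning and querying noisy sensors' entry above combines a prior on the true reading with the sensor's noise characteristics to obtain a cleaned estimate. For the special case of a Gaussian prior and Gaussian noise this Bayesian update has a closed form, shown below; restricting to Gaussians is an assumption of this sketch, and the paper's framework is more general.

```python
# Sketch: Bayesian cleaning of one noisy reading under Gaussian assumptions.
# Posterior precision is the sum of prior and observation precisions; the
# posterior mean is the precision-weighted average of prior mean and reading.

def clean_reading(prior_mean, prior_var, observed, noise_var):
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + observed / noise_var)
    return post_mean, post_var

if __name__ == "__main__":
    # Prior belief: around 21 C; sensor reports 24 C but its noise variance is high.
    est, var = clean_reading(prior_mean=21.0, prior_var=1.0,
                             observed=24.0, noise_var=4.0)
    print(f"cleaned estimate: {est:.2f} C (variance {var:.2f})")
```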
We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with. --- paper_title: Online Outlier Detection in Sensor Data Using Non-Parametric Models paper_content: Sensor networks have recently found many popular applications in a number of different settings. Sensors at different locations can generate streaming data, which can be analyzed in real-time to identify events of interest. In this paper, we propose a framework that computes in a distributed fashion an approximation of multi-dimensional data distributions in order to enable complex applications in resource-constrained sensor networks.We motivate our technique in the context of the problem of outlier detection. We demonstrate how our framework can be extended in order to identify either distance- or density-based outliers in a single pass over the data, and with limited memory requirements. Experiments with synthetic and real data show that our method is efficient and accurate, and compares favorably to other proposed techniques. We also demonstrate the applicability of our technique to other related problems in sensor networks. --- paper_title: Outlier-Aware Data Aggregation in Sensor Networks paper_content: In this paper we discuss a robust aggregation framework that can detect spurious measurements and refrain from incorporating them in the computed aggregate values. Our framework can consider different definitions of an outlier node, based on a specified minimum support. Our experimental evaluation demonstrates the benefits of our approach. --- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. --- paper_title: Outlier detection in sensor networks paper_content: Outlier detection has many important applications in sensor networks, e.g., abnormal event detection, animal behavior change, etc. It is a difficult problem since global information about data distributions must be known to identify outliers. In this paper, we use a histogram-based method for outlier detection to reduce communication cost. Rather than collecting all the data in one location for centralized processing, we propose collecting hints (in the form of a histogram) about the data distribution, and using the hints to filter out unnecessary data and identify potential outliers. We show that this method can be used for detecting outliers in terms of two different definitions. Our simulation results show that the histogram method can dramatically reduce the communication cost. 
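The histogram-based entry just above has each sensor ship a coarse histogram of its data instead of raw values, so that the sink can flag sparsely populated regions as potential outliers and pull only those readings for closer inspection. A toy single-node version of that filtering step is sketched below; the bucket width and minimum support are illustrative parameters.

```python
# Sketch: histogram "hints" for outlier detection. Values falling into buckets
# whose count is below min_support are flagged as potential outliers and would
# be the only values actually pulled from the sensor.
from collections import Counter

def build_histogram(values, bucket_width=1.0):
    return Counter(int(v // bucket_width) for v in values)

def potential_outliers(values, bucket_width=1.0, min_support=2):
    hist = build_histogram(values, bucket_width)     # this is what gets shipped
    rare = {b for b, c in hist.items() if c < min_support}
    return [v for v in values if int(v // bucket_width) in rare]

if __name__ == "__main__":
    values = [20.1, 20.4, 20.2, 20.3, 27.9, 20.0, 20.2, 13.5]
    print("suspects:", potential_outliers(values))
```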
--- paper_title: An overview of anomaly detection techniques: Existing solutions and latest technological trends paper_content: As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems-the cyberspace's equivalent to the burglar alarm-join ranks with firewalls as one of the fundamental technologies for network security. However, today's commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ''zero day'' attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area. --- paper_title: ORDEN: outlier region detection and exploration in sensor networks paper_content: Sensor networks play a central role in applications that monitor variables in geographic areas such as the traffic volume on roads or the temperature in the environment. A key feature users are often interested in when employing such systems is the detection of unusual phenomena, that is, anomalous values measured by the sensors. In this demonstration, we present a system, called ORDEN, that allows for the detection and (visual) exploration of outliers and anomalous events in sensor networks in real-time. In particular, the system constructs outlier regions from anomalous sensor measurements to provide for a comprehensive description of the spatial extent of phenomena of interest. With our system, users can interactively explore displayed outlier regions and investigate the heterogeneity within individual regions using different parameter and threshold settings. Using real-world sensor data streams from different application domains, we demonstrate the effectiveness and utility of our system. --- paper_title: Declarative Support for Sensor Data Cleaning paper_content: Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. 
We demonstrate ESP's effectiveness and ease of use through three real-world scenarios. --- paper_title: ERACER: a database approach for statistical inference and data cleaning paper_content: Real-world databases often contain syntactic and semantic errors, in spite of integrity constraints and other safety measures incorporated into modern DBMSs. We present ERACER, an iterative statistical framework for inferring missing information and correcting such errors automatically. Our approach is based on belief propagation and relational dependency networks, and includes an efficient approximate inference algorithm that is easily implemented in standard DBMSs using SQL and user defined functions. The system performs the inference and cleansing tasks in an integrated manner, using shrinkage techniques to infer correct values accurately even in the presence of dirty data. We evaluate the proposed methods empirically on multiple synthetic and real-world data sets. The results show that our framework achieves accuracy comparable to a baseline statistical method using Bayesian networks with exact inference. However, our framework has wider applicability than the Bayesian network baseline, due to its ability to reason with complex, cyclic relational dependencies. --- paper_title: A Deferred Cleansing Method for RFID Data Analytics paper_content: Radio Frequency Identification is gaining broader adoption in many areas. One of the challenges in implementing an RFID-based system is dealing with anomalies in RFID reads. A small number of anomalies can translate into large errors in analytical results. Conventional "eager" approaches cleanse all data upfront and then apply queries on cleaned data. However, this approach is not feasible when several applications define anomalies and corrections on the same data set differently and not all anomalies can be defined beforehand. This necessitates anomaly handling at query time. We introduce a deferred approach for detecting and correcting RFID data anomalies. Each application specifies the detection and the correction of relevant anomalies using declarative sequence-based rules. An application query is then automatically rewritten based on the cleansing rules that the application has specified, to provide answers over cleaned data. We show that a naive approach to deferred cleansing that applies rules without leveraging query information can be prohibitive. We develop two novel rewrite methods, both of which reduce the amount of data to be cleaned, by exploiting predicates in application queries while guaranteeing correct answers. We leverage standardized SQL/OLAP functionality to implement rules specified in a declarative sequence-based language. This allows efficient evaluation of cleansing rules using existing query processing capabilities of a DBMS. Our experimental results show that deferred cleansing is affordable for typical analytic queries over RFID data. --- paper_title: Adaptive Cleaning for RFID Data Streams paper_content: To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a "smoothing filter", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. 
Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications. --- paper_title: MauveDB: supporting model-based user views in database systems paper_content: Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. --- paper_title: Fundamentals of Database Systems paper_content: From the Publisher: ::: Fundamentals of Database Systems combines clear explanations of theory and design, broad coverage of models and real systems, and excellent examples with up-to-date introductions to modern database technologies. This edition is completely revised and updated, and reflects the latest trends in technological and application development. Professors Elmasri and Navathe focus on the relational model and include coverage of recent object-oriented developments. They also address advanced modeling and system enhancements in the areas of active databases, temporal and spatial databases, and multimedia information systems. This edition also surveys the latest application areas of data warehousing, data mining, web databases, digital libraries, GIS, and genome databases. 
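Returning to the 'Adaptive Cleaning for RFID Data Streams' entry above: SMURF sizes its smoothing window by treating the read process as a statistical sample. The sketch below keeps only the flavour of that idea with a deliberately simplified rule, choosing the smallest window w for which a tag read with per-epoch probability p would be missed in all w epochs with probability at most delta, i.e. (1 - p)^w <= delta. Both this rule and the crude read-rate estimate are assumptions of the sketch, not SMURF's binomial-sampling and π-estimator machinery.

```python
# Sketch: adaptive window sizing for RFID smoothing. A tag is declared present
# in an epoch if it was read at least once in the last w epochs, where w is
# chosen so a truly present tag is missed with probability <= delta.
import math

def window_size(read_rate, delta=0.05, max_w=25):
    p = min(max(read_rate, 1e-3), 1.0)
    w = math.ceil(math.log(delta) / math.log(1.0 - p)) if p < 1.0 else 1
    return max(1, min(w, max_w))

def smooth(reads, delta=0.05):
    """reads: list of 0/1 observations per epoch; returns smoothed presence."""
    present, window = [], []
    for r in reads:
        window.append(r)
        p = sum(window) / len(window)      # crude per-epoch read-rate estimate
        w = window_size(p, delta)
        window = window[-w:]               # keep only the last w epochs
        present.append(1 if any(window) else 0)
    return present

if __name__ == "__main__":
    reads = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # made-up epochs
    print("raw     :", reads)
    print("smoothed:", smooth(reads))
```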
New to the Third Edition ::: Reorganized material on data modeling to clearly separate entity relationship modeling, extended entity relationship modeling, and object-oriented modeling; Expanded coverage of the object-oriented and object/relational approach to data management, including ODMG and SQL3; Uses examples from real database systems including Oracle™ and Microsoft Access®; Includes discussion of decision support applications of data warehousing and data mining, as well as emerging technologies of web databases, multimedia, and mobile databases; Covers advanced modeling in the areas of active, temporal, and spatial databases; Provides coverage of issues of physical database tuning; Discusses current database application areas of GIS, genome, and digital libraries --- paper_title: Fundamentals of Database Systems paper_content: From the Publisher: ::: Fundamentals of Database Systems combines clear explanations of theory and design, broad coverage of models and real systems, and excellent examples with up-to-date introductions to modern database technologies. This edition is completely revised and updated, and reflects the latest trends in technological and application development. Professors Elmasri and Navathe focus on the relational model and include coverage of recent object-oriented developments. They also address advanced modeling and system enhancements in the areas of active databases, temporal and spatial databases, and multimedia information systems. This edition also surveys the latest application areas of data warehousing, data mining, web databases, digital libraries, GIS, and genome databases. New to the Third Edition ::: Reorganized material on data modeling to clearly separate entity relationship modeling, extended entity relationship modeling, and object-oriented modeling; Expanded coverage of the object-oriented and object/relational approach to data management, including ODMG and SQL3; Uses examples from real database systems including Oracle™ and Microsoft Access®; Includes discussion of decision support applications of data warehousing and data mining, as well as emerging technologies of web databases, multimedia, and mobile databases; Covers advanced modeling in the areas of active, temporal, and spatial databases; Provides coverage of issues of physical database tuning; Discusses current database application areas of GIS, genome, and digital libraries --- paper_title: Querying continuous functions in a database system paper_content: Many scientific, financial, data mining and sensor network applications need to work with continuous, rather than discrete data e.g., temperature as a function of location, or stock prices or vehicle trajectories as a function of time. Querying raw or discrete data is unsatisfactory for these applications -- e.g., in a sensor network, it is necessary to interpolate sensor readings to predict values at locations where sensors are not deployed. In other situations, raw data can be inaccurate owing to measurement errors, and it is useful to fit continuous functions to raw data and query the functions, rather than raw data itself -- e.g., fitting a smooth curve to noisy sensor readings, or a smooth trajectory to GPS data containing gaps or outliers. Existing databases do not support storing or querying continuous functions, short of brute-force discretization of functions into a collection of tuples.
We present FunctionDB, a novel database system that treats mathematical functions as first-class citizens that can be queried like traditional relations. The key contribution of FunctionDB is an efficient and accurate algebraic query processor - for the broad class of multi-variable polynomial functions, FunctionDB executes queries directly on the algebraic representation of functions without materializing them into discrete points, using symbolic operations: zero finding, variable substitution, and integration. Even when closed form solutions are intractable, FunctionDB leverages symbolic approximation operations to improve performance. We evaluate FunctionDB on real data sets from a temperature sensor network, and on traffic traces from Boston roads. We show that operating in the functional domain has substantial advantages in terms of accuracy (15-30%) and up to order of magnitude (10x-100x) performance wins over existing approaches that represent models as discrete collections of points. --- paper_title: Online Filtering, Smoothing and Probabilistic Modeling of Streaming data paper_content: In this paper, we address the problem of extending a relational database system to facilitate efficient real-time application of dynamic probabilistic models to streaming data. We use the recently proposed abstraction of model-based views for this purpose, by allowing users to declaratively specify the model to be applied, and by presenting the output of the models to the user as a probabilistic database view. We support declarative querying over such views using an extended version of SQL that allows for querying probabilistic data. Underneath we use particle filters, a class of sequential Monte Carlo algorithms, to represent the present and historical states of the model as sets of weighted samples (particles) that are kept up-to-date as new data arrives. We develop novel techniques to convert the queries on the model-based view directly into queries over particle tables, enabling highly efficient query processing. Finally, we present experimental evaluation of our prototype implementation over several synthetic and real datasets, that demonstrates the feasibility of online modeling of streaming data using our system and establishes the advantages of tight integration between dynamic probabilistic models and databases. --- paper_title: Evaluating Probabilistic Queries over Imprecise Data paper_content: Sensors are often employed to monitor continuously changing entities like locations of moving objects and temperature. The sensor readings are reported to a database system, and are subsequently used to answer queries. Due to continuous changes in these values and limited resources (e.g., network bandwidth and battery power), the database may not be able to keep track of the actual values of the entities. Queries that use these old values may produce incorrect answers. However, if the degree of uncertainty between the actual data value and the database value is limited, one can place more confidence in the answers to the queries. More generally, query answers can be augmented with probabilistic guarantees of the validity of the answers. In this paper, we study probabilistic query evaluation based on uncertain data. A classification of queries is made based upon the nature of the result set. For each class, we develop algorithms for computing probabilistic answers, and provide efficient indexing and numeric solutions. 
We address the important issue of measuring the quality of the answers to these queries, and provide algorithms for efficiently pulling data from relevant sensors or moving objects in order to improve the quality of the executing queries. Extensive experiments are performed to examine the effectiveness of several data update policies. --- paper_title: U-DBMS: A Database System for Managing Constantly-Evolving Data paper_content: In many systems, sensors are used to acquire information from external environments such as temperature, pressure and locations. Due to continuous changes in these values, and limited resources (e.g., network bandwidth and battery power), it is often infeasible for the database to store the exact values at all times. Queries that uses these old values can produce invalid results. In order to manage the uncertainty between the actual sensor value and the database value, we propose a system called U-DBMS. U-DBMS extends the database system with uncertainty management functionalities. In particular, each data value is represented as an interval and a probability distribution function, and it can be processed with probabilistic query operators to produce imprecise (but correct) answers. This demonstration presents a PostgreSQL-based system that handles uncertainty and probabilistic queries for constantly-evolving data. --- paper_title: MIST: Distributed Indexing and Querying in Sensor Networks using Statistical Models paper_content: The modeling of high level semantic events from low level sensor signals is important in order to understand distributed phenomena. For such content-modeling purposes, transformation of numeric data into symbols and the modeling of resulting symbolic sequences can be achieved using statistical models---Markov Chains (MCs) and Hidden Markov Models (HMMs). We consider the problem of distributed indexing and semantic querying over such sensor models. Specifically, we are interested in efficiently answering (i) range queries: return all sensors that have observed an unusual sequence of symbols with a high likelihood, (ii) top-1 queries: return the sensor that has the maximum probability of observing a given sequence, and (iii) 1-NN queries: return the sensor (model) which is most similar to a query model. All the above queries can be answered at the centralized base station, if each sensor transmits its model to the base station. However, this is communication-intensive. We present a much more efficient alternative---a distributed index structure, MIST (Model-based Index STructure), and accompanying algorithms for answering the above queries. MIST aggregates two or more constituent models into a single composite model, and constructs an in-network hierarchy over such composite models. We develop two kinds of composite models: the first kind captures the average behavior of the underlying models and the second kind captures the extreme behaviors of the underlying models. Using the index parameters maintained at the root of a subtree, we bound the probability of observation of a query sequence from a sensor in the subtree. We also bound the distance of a query model to a sensor model using these parameters. Extensive experimental evaluation on both real-world and synthetic data sets show that the MIST schemes scale well in terms of network size and number of model states. We also show its superior performance over the centralized schemes in terms of update, query, and total communication costs. 
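The probabilistic-query and U-DBMS entries above answer queries over readings that are only known through an interval and a probability distribution, attaching probabilistic guarantees to the results. The sketch below evaluates one such predicate, the probability that a reading exceeds a threshold, for readings modelled as Gaussians; the Gaussian model, the sensor table, and the 0.9 acceptance bound are illustrative assumptions.

```python
# Sketch: a probabilistic threshold query over uncertain readings. Each
# sensor's current value is modelled as N(mu, sigma^2); the query returns the
# sensors whose probability of exceeding the threshold is at least p_min.
import math

def prob_greater(mu, sigma, threshold):
    """P(X > threshold) for X ~ N(mu, sigma^2)."""
    if sigma == 0:
        return 1.0 if mu > threshold else 0.0
    z = (threshold - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

def probabilistic_query(sensors, threshold, p_min=0.9):
    hits = []
    for sid, (mu, sigma) in sensors.items():
        p = prob_greater(mu, sigma, threshold)
        if p >= p_min:
            hits.append((sid, p))
    return hits

if __name__ == "__main__":
    sensors = {"s1": (30.0, 1.0), "s2": (26.0, 3.0), "s3": (22.0, 0.5)}
    for sid, p in probabilistic_query(sensors, threshold=25.0):
        print(f"{sid}: P(reading > 25) = {p:.3f}")
```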
--- paper_title: TinyDB: an acquisitional query processing system for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. --- paper_title: High-performance complex event processing over streams paper_content: In this paper, we present the design, implementation, and evaluation of a system that executes complex event queries over real-time streams of RFID readings encoded as events. These complex event queries filter and correlate events to match specific patterns, and transform the relevant events into new composite events for the use of external monitoring applications. Stream-based execution of these queries enables time-critical actions to be taken in environments such as supply chain management, surveillance and facility management, healthcare, etc. We first propose a complex event language that significantly extends existing event languages to meet the needs of a range of RFID-enabled monitoring applications. We then describe a query plan-based approach to efficiently implementing this language. Our approach uses native operators to efficiently handle query-defined sequences, which are a key component of complex event processing, and pipeline such sequences to subsequent operators that are built by leveraging relational techniques. We also develop a large suite of optimization techniques to address challenges such as large sliding windows and intermediate result sizes. We demonstrate the effectiveness of our approach through a detailed performance analysis of our prototype implementation under a range of data and query workloads as well as through a comparison to a state-of-the-art stream processor. --- paper_title: Event queries on correlated probabilistic streams paper_content: A major problem in detecting events in streams of data is that the data can be imprecise (e.g. RFID data). However, current state-of-the-art event detection systems such as Cayuga [14], SASE [46] or SnoopIB [1], assume the data is precise. Noise in the data can be captured using techniques such as hidden Markov models. Inference on these models creates streams of probabilistic events which cannot be directly queried by existing systems. To address this challenge we propose Lahar, an event processing system for probabilistic event streams. By exploiting the probabilistic nature of the data, Lahar yields a much higher recall and precision than deterministic techniques operating over only the most probable tuples. By using a novel static analysis and novel algorithms, Lahar processes data orders of magnitude more efficiently than a naive approach based on sampling. In this paper, we present Lahar's static analysis and core algorithms.
We demonstrate the quality and performance of our approach through experiments with our prototype implementation and comparisons with alternate methods. --- paper_title: Probabilistic Inference over RFID Streams in Mobile Environments paper_content: Recent innovations in RFID technology are enabling large-scale cost-effective deployments in retail, healthcare, pharmaceuticals and supply chain management. The advent of mobile or handheld readers adds significant new challenges to RFID stream processing due to the inherent reader mobility, increased noise, and incomplete data. In this paper, we address the problem of translating noisy, incomplete raw streams from mobile RFID readers into clean, precise event streams with location information. Specifically we propose a probabilistic model to capture the mobility of the reader, object dynamics, and noisy readings. Our model can self-calibrate by automatically estimating key parameters from observed data. Based on this model, we employ a sampling-based technique called particle filtering to infer clean, precise information about object locations from raw streams from mobile RFID readers. Since inference based on standard particle filtering is neither scalable nor efficient in our settings, we propose three enhancements---particle factorization, spatial indexing, and belief compression---for scalable inference over large numbers of objects and high-volume streams. Our experiments show that our approach can offer 49\% error reduction over a state-of-the-art data cleaning approach such as SMURF while also being scalable and efficient. --- paper_title: An Enhanced Representation of Time Series Which Allows Fast and Accurate Classification, Clustering and Relevance Feedback paper_content: We introduce an extended representation of time series that allows fast, accurate classification and clustering in addition to the ability to explore time series data in a relevance feedback framework. The representation consists of piece-wise linear segments to represent shape and a weight vector that contains the relative importance of each individual linear segment. In the classification context, the weights are learned automatically as part of the training cycle. In the relevance feedback context, the weights are determined by an interactive and iterative process in which users rate various choices presented to them. Our representation allows a user to define a variety of similarity measures that can be tailored to specific domains. We demonstrate our approach on space telemetry, medical and synthetic data. --- paper_title: Efficiently Maintaining Distributed Model-Based Views on Real-Time Data Streams paper_content: Minimizing communication cost is a fundamental problem in large-scale federated sensor networks. Maintaining model-based views of data streams has been highlighted because it permits efficient data communication by transmitting parameter values of models, instead of original data streams. We propose a framework that employs the advantages of using model-based views for communication-efficient stream data processing over federated sensor networks, yet it significantly improves state-of-the-art approaches. The framework is generic and any time-parameterized models can be plugged, while accuracy guarantees for query results are ensured throughout the large-scale networks. In addition, we boost the performance of the framework by the coded model update that enables efficient model update from one node to another. 
It predetermines parameter values for the model, updates only identifiers of the parameter values, and compresses the identifiers by utilizing bitmaps. Moreover, we propose a correlation model, named coded inter-variable model, that merges the efficiency of the coded model update with that of correlation models. Empirical studies with real data demonstrate that our proposal achieves substantial amounts of communication reduction, outperforming state-of-the art methods. --- paper_title: Towards Online Multi-model Approximation of Time Series paper_content: The increasing use of sensor technology for various monitoring applications (e.g. air-pollution, traffic, climate-change, etc.) has led to an unprecedented volume of streaming data that has to be efficiently aggregated, stored and retrieved. Real-time model-based data approximation and filtering is a common solution for reducing the storage (and communication) overhead. However, the selection of the most efficient model depends on the characteristics of the data stream, namely rate, burstiness, data range, etc., which cannot be always known a priori for (mobile) sensors and they can even dynamically change. In this paper, we investigate the innovative concept of efficiently combining multiple approximation models in real-time. Our approach dynamically adapts to the properties of the data stream and approximates each data segment with the most suitable model. As experimentally proved, our multi-model approximation approach always produces fewer or equal data segments than those of the best individual model, and thus provably achieves higher data compression ratio than individual linear models. --- paper_title: Querying continuous functions in a database system paper_content: Many scientific, financial, data mining and sensor network applications need to work with continuous, rather than discrete data e.g., temperature as a function of location, or stock prices or vehicle trajectories as a function of time. Querying raw or discrete data is unsatisfactory for these applications -- e.g., in a sensor network, it is necessary to interpolate sensor readings to predict values at locations where sensors are not deployed. In other situations, raw data can be inaccurate owing to measurement errors, and it is useful to fit continuous functions to raw data and query the functions, rather than raw data itself -- e.g., fitting a smooth curve to noisy sensor readings, or a smooth trajectory to GPS data containing gaps or outliers. Existing databases do not support storing or querying continuous functions, short of brute-force discretization of functions into a collection of tuples. We present FunctionDB, a novel database system that treats mathematical functions as first-class citizens that can be queried like traditional relations. The key contribution of FunctionDB is an efficient and accurate algebraic query processor - for the broad class of multi-variable polynomial functions, FunctionDB executes queries directly on the algebraic representation of functions without materializing them into discrete points, using symbolic operations: zero finding, variable substitution, and integration. Even when closed form solutions are intractable, FunctionDB leverages symbolic approximation operations to improve performance. We evaluate FunctionDB on real data sets from a temperature sensor network, and on traffic traces from Boston roads. 
We show that operating in the functional domain has substantial advantages in terms of accuracy (15-30%) and up to order of magnitude (10x-100x) performance wins over existing approaches that represent models as discrete collections of points. --- paper_title: Online Piece-wise Linear Approximation of Numerical Streams with Precision Guarantees paper_content: Continuous "always-on" monitoring is beneficial for a number of applications, but potentially imposes a high load in terms of communication, storage and power consumption when a large number of variables need to be monitored. We introduce two new filtering techniques, swing filters and slide filters, that represent within a prescribed precision a time-varying numerical signal by a piecewise linear function, consisting of connected line segments for swing filters and (mostly) disconnected line segments for slide filters. We demonstrate the effectiveness of swing and slide filters in terms of their compression power by applying them to a real-life data set plus a variety of synthetic data sets. For nearly all combinations of signal behavior and precision requirements, the proposed techniques outperform the earlier approaches for online filtering in terms of data reduction. The slide filter, in particular, consistently dominates all other filters, with up to twofold improvement over the best of the previous techniques. --- paper_title: An online algorithm for segmenting time series paper_content: In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature. --- paper_title: Capturing sensor-generated time series with quality guarantees paper_content: We are interested in capturing time series generated by small wireless electronic sensors. Battery-operated sensors must avoid heavy use of their wireless radio which is a key cause of energy dissipation. When many sensors transmit, the resources of the recipient of the data are taxed; hence, limiting communication will benefit the recipient as well. We show how time series generated by sensors can be captured and stored in a database system (archive). Sensors compress time series instead of sending them in raw form. We propose an optimal online algorithm for constructing a piecewise constant approximation (PCA) of a time series which guarantees that the compressed representation satisfies an error bound on the L/sub /spl infin// distance. In addition to the capture task, we often want to estimate the values of a time series ahead of time, e.g., to answer real-time queries. To achieve this, sensors may fit predictive models on observed data, sending parameters of these models to the archive. 
We exploit the interplay between prediction and compression in a unified framework that avoids duplicating effort and leads to reduced communication. --- paper_title: An online algorithm for segmenting time series paper_content: In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature. --- paper_title: Online Piece-wise Linear Approximation of Numerical Streams with Precision Guarantees paper_content: Continuous "always-on" monitoring is beneficial for a number of applications, but potentially imposes a high load in terms of communication, storage and power consumption when a large number of variables need to be monitored. We introduce two new filtering techniques, swing filters and slide filters, that represent within a prescribed precision a time-varying numerical signal by a piecewise linear function, consisting of connected line segments for swing filters and (mostly) disconnected line segments for slide filters. We demonstrate the effectiveness of swing and slide filters in terms of their compression power by applying them to a real-life data set plus a variety of synthetic data sets. For nearly all combinations of signal behavior and precision requirements, the proposed techniques outperform the earlier approaches for online filtering in terms of data reduction. The slide filter, in particular, consistently dominates all other filters, with up to twofold improvement over the best of the previous techniques. --- paper_title: Adaptive filters for continuous queries over distributed data streams paper_content: We consider an environment where distributed data sources continuously stream updates to a centralized processor that monitors continuous queries over the distributed data. Significant communication overhead is incurred in the presence of rapid update streams, and we propose a new technique for reducing the overhead. Users register continuous queries with precision requirements at the central stream processor, which installs filters at remote data sources. The filters adapt to changing conditions to minimize stream rates while guaranteeing that all continuous queries still receive the updates necessary to provide answers of adequate precision at all times. Our approach enables applications to trade precision for communication overhead at a fine granularity by individually adjusting the precision constraints of continuous queries over streams in a multi-query workload. Through experiments performed on synthetic data simulations and a real network monitoring implementation, we demonstrate the effectiveness of our approach in achieving low communication overhead compared with alternate approaches. 
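The piecewise approximation ideas in the preceding entries, in particular the L-infinity-bounded piecewise constant approximation from the "Capturing sensor-generated time series with quality guarantees" abstract, can be made concrete with a short sketch. This is an illustrative reading of the idea, not the authors' exact algorithm: the function name pca_compress, the epsilon parameter, and the midpoint heuristic are assumptions chosen for clarity.

```python
# Sketch of an online piecewise-constant approximation with an L-infinity
# error bound (illustrative; not the paper's exact algorithm).

def pca_compress(stream, epsilon):
    """Compress (timestamp, value) pairs into constant segments.

    Every reading covered by an emitted segment (t_start, t_end, value)
    differs from `value` by at most `epsilon`.
    """
    segments = []
    seg_start = seg_end = lo = hi = None

    for t, v in stream:
        if seg_start is None:
            seg_start, seg_end, lo, hi = t, t, v, v
            continue
        new_lo, new_hi = min(lo, v), max(hi, v)
        if new_hi - new_lo <= 2 * epsilon:
            # The midpoint of [lo, hi] still approximates every point within epsilon.
            lo, hi, seg_end = new_lo, new_hi, t
        else:
            segments.append((seg_start, seg_end, (lo + hi) / 2.0))
            seg_start, seg_end, lo, hi = t, t, v, v

    if seg_start is not None:
        segments.append((seg_start, seg_end, (lo + hi) / 2.0))
    return segments


if __name__ == "__main__":
    readings = [(i, 20 + 0.05 * i + (0.4 if i % 7 == 0 else 0.0)) for i in range(50)]
    for seg in pca_compress(readings, epsilon=0.5):
        print(seg)
```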
--- paper_title: Online amnesic approximation of streaming time series paper_content: The past decade has seen a wealth of research on time series representations, because the manipulation, storage, and indexing of large volumes of raw time series data is impractical. The vast majority of research has concentrated on representations that are calculated in batch mode and represent each value with approximately equal fidelity. However, the increasing deployment of mobile devices and real time sensors has brought home the need for representations that can be incrementally updated, and can approximate the data with fidelity proportional to its age. The latter property allows us to answer queries about the recent past with greater precision, since in many domains recent information is more useful than older information. We call such representations amnesic. While there has been previous work on amnesic representations, the class of amnesic functions possible was dictated by the representation itself. We introduce a novel representation of time series that can represent arbitrary, user-specified amnesic functions. For example, a meteorologist may decide that data that is twice as old can tolerate twice as much error, and thus, specify a linear amnesic function. In contrast, an econometrist might opt for an exponential amnesic function. We propose online algorithms for our representation, and discuss their properties. Finally, we perform an extensive empirical evaluation on 40 datasets, and show that our approach can efficiently maintain a high quality amnesic approximation. --- paper_title: Energy conservation in wireless sensor networks: A survey paper_content: In the last years, wireless sensor networks (WSNs) have gained increasing attention from both the research community and actual users. As sensor nodes are generally battery-powered devices, the critical aspects to face concern how to reduce the energy consumption of nodes, so that the network lifetime can be extended to reasonable times. In this paper we first break down the energy consumption for the components of a typical sensor node, and discuss the main directions to energy conservation in WSNs. Then, we present a systematic and comprehensive taxonomy of the energy conservation schemes, which are subsequently discussed in depth. Special attention has been devoted to promising solutions which have not yet obtained a wide attention in the literature, such as techniques for energy efficient data acquisition. Finally we conclude the paper with insights for research directions about energy conservation in WSNs. --- paper_title: Locally adaptive dimensionality reduction for indexing large time series databases paper_content: Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this article, we introduce a new dimensionality reduction technique, which we call Adaptive Piecewise Constant Approximation (APCA). 
While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower-bounding, but very tight, Euclidean distance approximation, and show how they can support fast exact searching and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority. --- paper_title: Efficient retrieval of similar time sequences under time warping paper_content: Fast similarity searching in large time sequence databases has typically used Euclidean distance as a dissimilarity metric. However, for several applications, including matching of voice, audio and medical signals (e.g., electrocardiograms), one is required to permit local accelerations and decelerations in the rate of sequences, leading to a popular, field tested dissimilarity metric called the "time warping" distance. From the indexing viewpoint, this metric presents two major challenges: (a) it does not lead to any natural indexable "features", and (b) comparing two sequences requires time quadratic in the sequence length. To address each problem, we propose to use: (a) a modification of the so called "FastMap", to map sequences into points, with little compromise of "recall" (typically zero); and (b) a fast linear test, to help us discard quickly many of the false alarms that FastMap will typically introduce. Using both ideas in cascade, our proposed method achieved up to an order of magnitude speed-up over sequential scanning on both real and synthetic datasets. --- paper_title: Online Information Compression in Sensor Networks paper_content: In the emerging area of wireless sensor networks, one of the most typical challenges is to retrieve historical information from the sensor nodes. Due to the resource limitation of sensor nodes (processing, memory, bandwidth, and energy), the collected information of sensor nodes has to be compressed quickly and precisely for transmission. In this paper, we propose a new technique -- the ALVQ (Adoptive Learning Vector Quantization) algorithm to compress this historical information. The ALVQ algorithm constructs a codebook to capture the prominent features of the data and with these features all the other data can be piece-wise encoded for compression. In addition, with two-level regression of the codebook's update, ALVQ algorithm saves the data transfer bandwidth and improves the compression precision further. Finally, we consider the problem of transmitting data in a sensor network while maximizing the precision. We show how we apply our algorithm so that a set of sensors can dynamically share a wireless communication channel. --- paper_title: Compressing historical information in sensor networks paper_content: We are inevitably moving into a realm where small and inexpensive wireless devices would be seamlessly embedded in the physical world and form a wireless sensor network in order to perform complex monitoring and computational tasks. 
Such networks pose new challenges in data processing and dissemination because of the limited resources (processing, bandwidth, energy) that such devices possess. In this paper we propose a new technique for compressing multiple streams containing historical data from each sensor. Our method exploits correlation and redundancy among multiple measurements on the same sensor and achieves high degree of data reduction while managing to capture even the smallest details of the recorded measurements. The key to our technique is the base signal, a series of values extracted from the real measurements, used for encoding piece-wise linear correlations among the collected data values. We provide efficient algorithms for extracting the base signal features from the data and for encoding the measurements using these features. Our experiments demonstrate that our method by far outperforms standard approximation techniques like Wavelets. Histograms and the Discrete Cosine Transform, on a variety of error metrics and for real datasets from different domains. --- paper_title: GAMPS: compressing multi sensor data by grouping and amplitude scaling paper_content: We consider the problem of collectively approximating a set of sensor signals using the least amount of space so that any individual signal can be efficiently reconstructed within a given maximum (L∞) error e. The problem arises naturally in applications that need to collect large amounts of data from multiple concurrent sources, such as sensors, servers and network routers, and archive them over a long period of time for offline data mining. We present GAMPS, a general framework that addresses this problem by combining several novel techniques. First, it dynamically groups multiple signals together so that signals within each group are correlated and can be maximally compressed jointly. Second, it appropriately scales the amplitudes of different signals within a group and compresses them within the maximum allowed reconstruction error bound. Our schemes are polynomial time O(α, β approximation schemes, meaning that the maximum (L∞) error is at most α e and it uses at most β times the optimal memory. Finally, GAMPS maintains an index so that various queries can be issued directly on compressed data. Our experiments on several real-world sensor datasets show that GAMPS significantly reduces space without compromising the quality of search and query. --- paper_title: Towards Online Multi-model Approximation of Time Series paper_content: The increasing use of sensor technology for various monitoring applications (e.g. air-pollution, traffic, climate-change, etc.) has led to an unprecedented volume of streaming data that has to be efficiently aggregated, stored and retrieved. Real-time model-based data approximation and filtering is a common solution for reducing the storage (and communication) overhead. However, the selection of the most efficient model depends on the characteristics of the data stream, namely rate, burstiness, data range, etc., which cannot be always known a priori for (mobile) sensors and they can even dynamically change. In this paper, we investigate the innovative concept of efficiently combining multiple approximation models in real-time. Our approach dynamically adapts to the properties of the data stream and approximates each data segment with the most suitable model. 
As experimentally proved, our multi-model approximation approach always produces fewer or equal data segments than those of the best individual model, and thus provably achieves higher data compression ratio than individual linear models. --- paper_title: Capturing sensor-generated time series with quality guarantees paper_content: We are interested in capturing time series generated by small wireless electronic sensors. Battery-operated sensors must avoid heavy use of their wireless radio which is a key cause of energy dissipation. When many sensors transmit, the resources of the recipient of the data are taxed; hence, limiting communication will benefit the recipient as well. We show how time series generated by sensors can be captured and stored in a database system (archive). Sensors compress time series instead of sending them in raw form. We propose an optimal online algorithm for constructing a piecewise constant approximation (PCA) of a time series which guarantees that the compressed representation satisfies an error bound on the L/sub /spl infin// distance. In addition to the capture task, we often want to estimate the values of a time series ahead of time, e.g., to answer real-time queries. To achieve this, sensors may fit predictive models on observed data, sending parameters of these models to the archive. We exploit the interplay between prediction and compression in a unified framework that avoids duplicating effort and leads to reduced communication. --- paper_title: An evaluation of multi-resolution storage for sensor networks paper_content: Wireless sensor networks enable dense sensing of the environment, offering unprecedented opportunities for observing the physical world. Centralized data collection and analysis adversely impact sensor node lifetime. Previous sensor network research has, therefore, focused on in network aggregation and query processing, but has done so for applications where the features of interest are known a priori. When features are not known a priori, as is the case with many scientific applications in dense sensor arrays, efficient support for multi-resolution storage and iterative, drill-down queries is essential.Our system demonstrates the use of in-network wavelet-based summarization and progressive aging of summaries in support of long-term querying in storage and communication-constrained networks. We evaluate the performance of our linux implementation and show that it achieves: (a) low communication overhead for multi-resolution summarization, (b) highly efficient drill-down search over such summaries, and (c) efficient use of network storage capacity through load-balancing and progressive aging of summaries. --- paper_title: Efficient time series matching by wavelets paper_content: Time series stored as feature vectors can be indexed by multidimensional index trees like R-Trees for fast retrieval. Due to the dimensionality curse problem, transformations are applied to time series to reduce the number of dimensions of the feature vectors. Different transformations like Discrete Fourier Transform (DFT) Discrete Wavelet Transform (DWT), Karhunen-Loeve (KL) transform or Singular Value Decomposition (SVD) can be applied. While the use of DFT and K-L transform or SVD have been studied on the literature, to our knowledge, there is no in-depth study on the application of DWT. In this paper we propose to use Haar Wavelet Transform for time series indexing. 
The major contributions are: (1) we show that Euclidean distance is preserved in the Haar transformed domain and no false dismissal will occur, (2) we show that Haar transform can outperform DFT through experiments, (3) a new similarity model is suggested to accommodate vertical shift of time series, and (4) a two-phase method is proposed for efficient n-nearest neighbor query in time series databases. --- paper_title: Dimensions: why do we need a new data handling architecture for sensor networks? paper_content: An important class of networked systems is emerging that involve very large numbers of small, low-power, wireless devices. These systems offer the ability to sense the environment densely, offering unprecedented opportunities for many scientific disciplines to observe the physical world. In this paper, we argue that a data handling architecture for these devices should incorporate their extreme resource constraints - energy, storage and processing - and spatio-temporal interpretation of the physical world in the design, cost model, and metrics of evaluation. We describe DIMENSIONS, a system that provides a unified view of data handling in sensor networks, incorporating long-term storage, multi-resolution data access and spatio-temporal pattern mining. --- paper_title: Similarity search over time-series data using wavelets paper_content: Considers the use of wavelet transformations as a dimensionality reduction technique to permit efficient similarity searching over high-dimensional time-series data. While numerous transformations have been proposed and studied, the only wavelet that has been shown to be effective for this application is the Haar wavelet. In this work, we observe that a large class of wavelet transformations (not only orthonormal wavelets but also bi-orthonormal wavelets) can be used to support similarity searching. This class includes the most popular and most effective wavelets being used in image compression. We present a detailed performance study of the effects of using different wavelets on the performance of similarity searching for time-series data. We include several wavelets that outperform both the Haar wavelet and the best-known non-wavelet transformations for this application. To ensure our results are usable by an application engineer, we also show how to configure an indexing strategy for the best-performing transformations. Finally, we identify classes of data that can be indexed efficiently using these wavelet transformations. --- paper_title: Online Piece-wise Linear Approximation of Numerical Streams with Precision Guarantees paper_content: Continuous "always-on" monitoring is beneficial for a number of applications, but potentially imposes a high load in terms of communication, storage and power consumption when a large number of variables need to be monitored. We introduce two new filtering techniques, swing filters and slide filters, that represent within a prescribed precision a time-varying numerical signal by a piecewise linear function, consisting of connected line segments for swing filters and (mostly) disconnected line segments for slide filters. We demonstrate the effectiveness of swing and slide filters in terms of their compression power by applying them to a real-life data set plus a variety of synthetic data sets. For nearly all combinations of signal behavior and precision requirements, the proposed techniques outperform the earlier approaches for online filtering in terms of data reduction. 
The slide filter, in particular, consistently dominates all other filters, with up to twofold improvement over the best of the previous techniques. --- paper_title: StatStream: Statistical Monitoring of Thousands of Data Streams in Real Time paper_content: Consider the problem of monitoring tens of thousands of time series data streams in an online fashion and making decisions based on them. In addition to single stream statistics such as average and standard deviation, we also want to find high correlations among all pairs of streams. A stock market trader might use such a tool to spot arbitrage opportunities. This paper proposes efficient methods for solving this problem based on Discrete Fourier Transforms and a three level time interval hierarchy. Extensive experiments on synthetic data and real world financial trading data show that our algorithm beats the direct computation approach by several orders of magnitude. It also improves on previous Fourier Transform approaches by allowing the efficient computation of time-delayed correlation over any size sliding window and any time delay. Correlation also lends itself to an efficient grid-based data structure. The result is the first algorithm that we know of to compute correlations over thousands of data streams in real time. The algorithm is incremental, has fixed response time, and can monitor the pairwise correlations of 10,000 streams on a single PC. The algorithm is embarrassingly parallelizable. --- paper_title: Capturing sensor-generated time series with quality guarantees paper_content: We are interested in capturing time series generated by small wireless electronic sensors. Battery-operated sensors must avoid heavy use of their wireless radio which is a key cause of energy dissipation. When many sensors transmit, the resources of the recipient of the data are taxed; hence, limiting communication will benefit the recipient as well. We show how time series generated by sensors can be captured and stored in a database system (archive). Sensors compress time series instead of sending them in raw form. We propose an optimal online algorithm for constructing a piecewise constant approximation (PCA) of a time series which guarantees that the compressed representation satisfies an error bound on the L/sub /spl infin// distance. In addition to the capture task, we often want to estimate the values of a time series ahead of time, e.g., to answer real-time queries. To achieve this, sensors may fit predictive models on observed data, sending parameters of these models to the archive. We exploit the interplay between prediction and compression in a unified framework that avoids duplicating effort and leads to reduced communication. --- paper_title: A Weighted Moving Average-based Approach for Cleaning Sensor Data paper_content: Nowadays, wireless sensor networks have been widely used in many monitoring applications. Due to the low quality of sensors and random effects of the environments, however, it is well known that the collected sensor data are noisy. Therefore, it is very critical to clean the sensor data before using them to answer queries or conduct data analysis. Popular data cleaning approaches, such as the moving average, cannot meet the requirements of both energy efficiency and quick response time in many sensor related applications. In this paper, we propose a hybrid sensor data cleaning approach with confidence. 
Specifically, we propose a smart weighted moving average (WMA) algorithm that collects confidence data from sensors and computes the weighted moving average. The rationale behind the WMA algorithm is to draw more samples for a particular value that is of great importance to the moving average, and provide higher confidence weight for this value, such that this important value can be quickly reflected in the moving average. Based on our extensive simulation results, we demonstrate that, compared to the simple moving average (SMA), our WMA approach can effectively clean data and offer quick response time. --- paper_title: A neuro-fuzzy approach for sensor network data cleaning paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. However, sensor data are subject to several sources of errors as the data captured from the physical world through these sensor devices tend to be incomplete, noisy, and unreliable, thus yielding imprecise or even incorrect and misleading answers which can be very significative if they result in immediate critical decisions or activation of actuators. Traditional data cleaning techniques cannot be applied in this context as they do not take into account the strong spatial and temporal correlations typically present in sensor data, so machine learning techniques could greatly be of aid. In this paper, we propose a neuro-fuzzy regression approach to clean sensor network data: the well known ANFIS model is employed for reducing the uncertainty associated with the data thus obtaining a more accurate estimate of sensor readings. The obtained cleaning results show good ANFIS performance compared to other common used model such as kernel methods, and we demonstrate its effectiveness if the cleaning model has to be implemented at sensor level rather than at base-station level. --- paper_title: Declarative Support for Sensor Data Cleaning paper_content: Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. We demonstrate ESP's effectiveness and ease of use through three real-world scenarios. --- paper_title: MauveDB: supporting model-based user views in database systems paper_content: Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. 
Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. --- paper_title: PRESTO: feedback-driven data management in sensor networks paper_content: This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes. --- paper_title: TinyDB: an acquisitional query processing system for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. 
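The push-based, model-driven acquisition idea shared by the PRESTO entry above (and by Ken-style schemes) can be sketched as follows: sensor and proxy keep the same predictive model, and the sensor transmits a reading only when it deviates from the model's prediction by more than a threshold. The exponential-smoothing predictor, the delta threshold, and all names below are assumptions made for illustration, not the systems' actual models or protocols.

```python
# Sketch of push-based acquisition with a shared predictive model
# (illustrative; the smoothing predictor and names are assumptions).

class SharedPredictor:
    """Exponentially smoothed predictor replicated at the sensor and the proxy."""

    def __init__(self, alpha=0.3, initial=0.0):
        self.alpha = alpha
        self.estimate = initial

    def predict(self):
        return self.estimate

    def update(self, value):
        self.estimate = self.alpha * value + (1 - self.alpha) * self.estimate


def sensor_side(readings, delta):
    """Return only the (time, value) pairs the proxy could not have predicted."""
    model = SharedPredictor(initial=readings[0])
    transmitted = []
    for t, v in enumerate(readings):
        if abs(v - model.predict()) > delta:
            transmitted.append((t, v))  # push the surprising reading to the proxy
            model.update(v)             # both sides update on transmitted values only
    return transmitted


def proxy_side(transmitted, n, initial):
    """Reconstruct one value per time step; error is bounded by the sensor's delta."""
    model = SharedPredictor(initial=initial)
    received = dict(transmitted)
    series = []
    for t in range(n):
        if t in received:
            model.update(received[t])
            series.append(received[t])
        else:
            series.append(model.predict())  # within delta of the true reading
    return series


if __name__ == "__main__":
    readings = [20.0, 20.1, 20.2, 25.0, 25.1, 25.0, 20.3, 20.2]
    pushed = sensor_side(readings, delta=1.0)
    print("transmitted:", pushed)
    print("reconstructed:", proxy_side(pushed, len(readings), readings[0]))
```

Because both sides update the shared model only on transmitted values, the proxy's prediction for every untransmitted step stays within the chosen threshold of the true reading.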
--- paper_title: Efficient time series matching by wavelets paper_content: Time series stored as feature vectors can be indexed by multidimensional index trees like R-Trees for fast retrieval. Due to the dimensionality curse problem, transformations are applied to time series to reduce the number of dimensions of the feature vectors. Different transformations like Discrete Fourier Transform (DFT) Discrete Wavelet Transform (DWT), Karhunen-Loeve (KL) transform or Singular Value Decomposition (SVD) can be applied. While the use of DFT and K-L transform or SVD have been studied on the literature, to our knowledge, there is no in-depth study on the application of DWT. In this paper we propose to use Haar Wavelet Transform for time series indexing. The major contributions are: (1) we show that Euclidean distance is preserved in the Haar transformed domain and no false dismissal will occur, (2) we show that Haar transform can outperform DFT through experiments, (3) a new similarity model is suggested to accommodate vertical shift of time series, and (4) a two-phase method is proposed for efficient n-nearest neighbor query in time series databases. --- paper_title: The design of an acquisitional query processor for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. --- paper_title: Online amnesic approximation of streaming time series paper_content: The past decade has seen a wealth of research on time series representations, because the manipulation, storage, and indexing of large volumes of raw time series data is impractical. The vast majority of research has concentrated on representations that are calculated in batch mode and represent each value with approximately equal fidelity. However, the increasing deployment of mobile devices and real time sensors has brought home the need for representations that can be incrementally updated, and can approximate the data with fidelity proportional to its age. The latter property allows us to answer queries about the recent past with greater precision, since in many domains recent information is more useful than older information. We call such representations amnesic. While there has been previous work on amnesic representations, the class of amnesic functions possible was dictated by the representation itself. We introduce a novel representation of time series that can represent arbitrary, user-specified amnesic functions. For example, a meteorologist may decide that data that is twice as old can tolerate twice as much error, and thus, specify a linear amnesic function. In contrast, an econometrist might opt for an exponential amnesic function. We propose online algorithms for our representation, and discuss their properties. 
Finally, we perform an extensive empirical evaluation on 40 datasets, and show that our approach can efficiently maintain a high quality amnesic approximation. --- paper_title: ERACER: a database approach for statistical inference and data cleaning paper_content: Real-world databases often contain syntactic and semantic errors, in spite of integrity constraints and other safety measures incorporated into modern DBMSs. We present ERACER, an iterative statistical framework for inferring missing information and correcting such errors automatically. Our approach is based on belief propagation and relational dependency networks, and includes an efficient approximate inference algorithm that is easily implemented in standard DBMSs using SQL and user defined functions. The system performs the inference and cleansing tasks in an integrated manner, using shrinkage techniques to infer correct values accurately even in the presence of dirty data. We evaluate the proposed methods empirically on multiple synthetic and real-world data sets. The results show that our framework achieves accuracy comparable to a baseline statistical method using Bayesian networks with exact inference. However, our framework has wider applicability than the Bayesian network baseline, due to its ability to reason with complex, cyclic relational dependencies. --- paper_title: Towards Online Multi-model Approximation of Time Series paper_content: The increasing use of sensor technology for various monitoring applications (e.g. air-pollution, traffic, climate-change, etc.) has led to an unprecedented volume of streaming data that has to be efficiently aggregated, stored and retrieved. Real-time model-based data approximation and filtering is a common solution for reducing the storage (and communication) overhead. However, the selection of the most efficient model depends on the characteristics of the data stream, namely rate, burstiness, data range, etc., which cannot be always known a priori for (mobile) sensors and they can even dynamically change. In this paper, we investigate the innovative concept of efficiently combining multiple approximation models in real-time. Our approach dynamically adapts to the properties of the data stream and approximates each data segment with the most suitable model. As experimentally proved, our multi-model approximation approach always produces fewer or equal data segments than those of the best individual model, and thus provably achieves higher data compression ratio than individual linear models. --- paper_title: Capturing sensor-generated time series with quality guarantees paper_content: We are interested in capturing time series generated by small wireless electronic sensors. Battery-operated sensors must avoid heavy use of their wireless radio which is a key cause of energy dissipation. When many sensors transmit, the resources of the recipient of the data are taxed; hence, limiting communication will benefit the recipient as well. We show how time series generated by sensors can be captured and stored in a database system (archive). Sensors compress time series instead of sending them in raw form. We propose an optimal online algorithm for constructing a piecewise constant approximation (PCA) of a time series which guarantees that the compressed representation satisfies an error bound on the L/sub /spl infin// distance. In addition to the capture task, we often want to estimate the values of a time series ahead of time, e.g., to answer real-time queries. 
To achieve this, sensors may fit predictive models on observed data, sending parameters of these models to the archive. We exploit the interplay between prediction and compression in a unified framework that avoids duplicating effort and leads to reduced communication. --- paper_title: MIST: Distributed Indexing and Querying in Sensor Networks using Statistical Models paper_content: The modeling of high level semantic events from low level sensor signals is important in order to understand distributed phenomena. For such content-modeling purposes, transformation of numeric data into symbols and the modeling of resulting symbolic sequences can be achieved using statistical models---Markov Chains (MCs) and Hidden Markov Models (HMMs). We consider the problem of distributed indexing and semantic querying over such sensor models. Specifically, we are interested in efficiently answering (i) range queries: return all sensors that have observed an unusual sequence of symbols with a high likelihood, (ii) top-1 queries: return the sensor that has the maximum probability of observing a given sequence, and (iii) 1-NN queries: return the sensor (model) which is most similar to a query model. All the above queries can be answered at the centralized base station, if each sensor transmits its model to the base station. However, this is communication-intensive. We present a much more efficient alternative---a distributed index structure, MIST (Model-based Index STructure), and accompanying algorithms for answering the above queries. MIST aggregates two or more constituent models into a single composite model, and constructs an in-network hierarchy over such composite models. We develop two kinds of composite models: the first kind captures the average behavior of the underlying models and the second kind captures the extreme behaviors of the underlying models. Using the index parameters maintained at the root of a subtree, we bound the probability of observation of a query sequence from a sensor in the subtree. We also bound the distance of a query model to a sensor model using these parameters. Extensive experimental evaluation on both real-world and synthetic data sets show that the MIST schemes scale well in terms of network size and number of model states. We also show its superior performance over the centralized schemes in terms of update, query, and total communication costs. --- paper_title: An online algorithm for segmenting time series paper_content: In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature. 
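The bottom-up segmentation strategy discussed in the "online algorithm for segmenting time series" entry that ends just above can be sketched in a few lines: start from two-point segments and repeatedly merge the cheapest adjacent pair while the fitting error stays within a budget. The least-squares cost function and the max_error parameter below are illustrative choices, not the paper's exact formulation.

```python
# Sketch of bottom-up piecewise linear segmentation (illustrative; the
# least-squares cost and max_error budget are assumptions).

def fit_cost(points):
    """Sum of squared residuals of the best-fit line through `points`."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx if sxx else 0.0
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in points)


def bottom_up_segment(points, max_error):
    """Merge adjacent segments greedily while the merged fit stays cheap."""
    # Start from the finest possible segmentation: two points per segment.
    segments = [points[i:i + 2] for i in range(0, len(points) - 1, 2)]
    if len(points) % 2 and segments:
        segments[-1] = segments[-1] + points[-1:]
    while len(segments) > 1:
        costs = [fit_cost(segments[i] + segments[i + 1]) for i in range(len(segments) - 1)]
        i = min(range(len(costs)), key=costs.__getitem__)
        if costs[i] > max_error:
            break
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
    return segments


if __name__ == "__main__":
    series = [(t, 0.5 * t if t < 10 else 5.0 - 0.2 * (t - 10)) for t in range(20)]
    for seg in bottom_up_segment(series, max_error=0.05):
        print(seg[0], "...", seg[-1])
```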
--- paper_title: Querying continuous functions in a database system paper_content: Many scientific, financial, data mining and sensor network applications need to work with continuous, rather than discrete data e.g., temperature as a function of location, or stock prices or vehicle trajectories as a function of time. Querying raw or discrete data is unsatisfactory for these applications -- e.g., in a sensor network, it is necessary to interpolate sensor readings to predict values at locations where sensors are not deployed. In other situations, raw data can be inaccurate owing to measurement errors, and it is useful to fit continuous functions to raw data and query the functions, rather than raw data itself -- e.g., fitting a smooth curve to noisy sensor readings, or a smooth trajectory to GPS data containing gaps or outliers. Existing databases do not support storing or querying continuous functions, short of brute-force discretization of functions into a collection of tuples. We present FunctionDB, a novel database system that treats mathematical functions as first-class citizens that can be queried like traditional relations. The key contribution of FunctionDB is an efficient and accurate algebraic query processor - for the broad class of multi-variable polynomial functions, FunctionDB executes queries directly on the algebraic representation of functions without materializing them into discrete points, using symbolic operations: zero finding, variable substitution, and integration. Even when closed form solutions are intractable, FunctionDB leverages symbolic approximation operations to improve performance. We evaluate FunctionDB on real data sets from a temperature sensor network, and on traffic traces from Boston roads. We show that operating in the functional domain has substantial advantages in terms of accuracy (15-30%) and up to order of magnitude (10x-100x) performance wins over existing approaches that represent models as discrete collections of points. --- paper_title: Probabilistic Inference over RFID Streams in Mobile Environments paper_content: Recent innovations in RFID technology are enabling large-scale cost-effective deployments in retail, healthcare, pharmaceuticals and supply chain management. The advent of mobile or handheld readers adds significant new challenges to RFID stream processing due to the inherent reader mobility, increased noise, and incomplete data. In this paper, we address the problem of translating noisy, incomplete raw streams from mobile RFID readers into clean, precise event streams with location information. Specifically we propose a probabilistic model to capture the mobility of the reader, object dynamics, and noisy readings. Our model can self-calibrate by automatically estimating key parameters from observed data. Based on this model, we employ a sampling-based technique called particle filtering to infer clean, precise information about object locations from raw streams from mobile RFID readers. Since inference based on standard particle filtering is neither scalable nor efficient in our settings, we propose three enhancements---particle factorization, spatial indexing, and belief compression---for scalable inference over large numbers of objects and high-volume streams. Our experiments show that our approach can offer 49\% error reduction over a state-of-the-art data cleaning approach such as SMURF while also being scalable and efficient. 
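The RFID entry above relies on particle filtering; a minimal, generic bootstrap particle filter is sketched below to make the predict-weight-resample loop concrete. The one-dimensional random-walk motion model, the Gaussian noise model, and all parameter values are assumptions for illustration only, and the sketch is far simpler than the factorized, spatially indexed filters the paper proposes.

```python
# Minimal bootstrap particle filter (illustrative; the 1-D random-walk motion
# model, Gaussian noise, and parameter values are assumptions).

import math
import random


def particle_filter(readings, n_particles=500, motion_std=0.5, reading_std=1.0):
    """Return a location estimate after each noisy reading."""
    particles = [random.uniform(0.0, 10.0) for _ in range(n_particles)]
    estimates = []
    for z in readings:
        # 1. Predict: propagate every particle through the motion model.
        particles = [p + random.gauss(0.0, motion_std) for p in particles]
        # 2. Weight: likelihood of the observed reading given each particle.
        weights = [math.exp(-0.5 * ((z - p) / reading_std) ** 2) for p in particles]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # 3. Estimate: weighted mean of the particle set.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # 4. Resample: draw a new particle set proportional to the weights.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates


if __name__ == "__main__":
    truth = [0.1 * t for t in range(50)]                 # object drifting to the right
    noisy = [x + random.gauss(0.0, 1.0) for x in truth]  # noisy reader observations
    print(particle_filter(noisy)[-5:])
```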
--- paper_title: Cleaning and querying noisy sensors paper_content: Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control. Unfortunately, sensor data is subject to several sources of errors such as noise from external sources, hardware noise, inaccuracies and imprecision, and various environmental effects. Such errors may seriously impact the answer to any query posed to the sensors. In particular, they may yield imprecise or even incorrect and misleading answers which can be very significant if they result in immediate critical decisions or activation of actuators. In this paper, we present a framework for cleaning and querying noisy sensors. Specifically, we present a Bayesian approach for reducing the uncertainty associated with the data, that arise due to random noise, in an online fashion. Our approach combines prior knowledge of the true sensor reading, the noise characteristics of this sensor, and the observed noisy reading in order to obtain a more accurate estimate of the reading. This cleaning step can be performed either at the sensor level or at the base-station. Based on our proposed uncertainty models and using a statistical approach, we introduce several algorithms for answering traditional database queries over uncertain sensor readings. Finally, we present a preliminary evaluation of our proposed approach using synthetic data and highlight some exciting research directions in this area. --- paper_title: Similarity search over time-series data using wavelets paper_content: Considers the use of wavelet transformations as a dimensionality reduction technique to permit efficient similarity searching over high-dimensional time-series data. While numerous transformations have been proposed and studied, the only wavelet that has been shown to be effective for this application is the Haar wavelet. In this work, we observe that a large class of wavelet transformations (not only orthonormal wavelets but also bi-orthonormal wavelets) can be used to support similarity searching. This class includes the most popular and most effective wavelets being used in image compression. We present a detailed performance study of the effects of using different wavelets on the performance of similarity searching for time-series data. We include several wavelets that outperform both the Haar wavelet and the best-known non-wavelet transformations for this application. To ensure our results are usable by an application engineer, we also show how to configure an indexing strategy for the best-performing transformations. Finally, we identify classes of data that can be indexed efficiently using these wavelet transformations. --- paper_title: Online Filtering, Smoothing and Probabilistic Modeling of Streaming data paper_content: In this paper, we address the problem of extending a relational database system to facilitate efficient real-time application of dynamic probabilistic models to streaming data. We use the recently proposed abstraction of model-based views for this purpose, by allowing users to declaratively specify the model to be applied, and by presenting the output of the models to the user as a probabilistic database view. We support declarative querying over such views using an extended version of SQL that allows for querying probabilistic data. 
Underneath we use particle filters, a class of sequential Monte Carlo algorithms, to represent the present and historical states of the model as sets of weighted samples (particles) that are kept up-to-date as new data arrives. We develop novel techniques to convert the queries on the model-based view directly into queries over particle tables, enabling highly efficient query processing. Finally, we present experimental evaluation of our prototype implementation over several synthetic and real datasets, that demonstrates the feasibility of online modeling of streaming data using our system and establishes the advantages of tight integration between dynamic probabilistic models and databases. ---
Title: A SURVEY OF MODEL-BASED SENSOR DATA ACQUISITION AND MANAGEMENT Section 1: Introduction Description 1: Introduce the growth in sensor data and the significance of model-based techniques for data acquisition and management. Summarize the categories of tasks covered, such as data acquisition, cleaning, query processing, and compression. Section 2: Model-Based Sensor Data Acquisition Description 2: Discuss various techniques for model-based sensor data acquisition, focusing on energy consumption and communication costs. Cover pull-based and push-based approaches along with specific methods like TinyDB, BBQ, PRESTO, and Ken. Section 3: Preliminaries Description 3: Define the basic model of a sensor network and establish the notation utilized in the chapter. Explain the scenario of sensors used for environmental monitoring. Section 4: The Sensor Data Acquisition Query Description 4: Describe the process of creating and maintaining the sensor values table. Present an example of a sensor data acquisition query and its execution. Section 5: Pull-Based Data Acquisition Description 5: Elaborate on pull-based acquisition methods such as TinyDB (with ACQP and Semantic Routing Trees) and BBQ (using multi-dimensional Gaussian distributions). Section 6: Push-Based Data Acquisition Description 6: Explain push-based techniques like PRESTO and Ken, focusing on predictive models and the communication protocols between sensor nodes and base stations. Section 7: Model-Based Sensor Data Cleaning Description 7: Describe the methods for detecting and correcting erroneous sensor values using regression, probabilistic, and outlier detection models. Mention declarative data cleaning approaches. Section 8: Models for Sensor Data Cleaning Description 8: Review popular models used in data cleaning, such as regression models (polynomial and Chebyshev), probabilistic models (Kalman filter and Bayes' theorem), and outlier detection techniques. Section 9: Model-Based Query Processing Description 9: Discuss approaches like in-network query processing and centralized query processing using model-based views, symbolic query evaluation, and processing over uncertain data. Cover dynamic probabilistic models and the use of HMMs. Section 10: Processing Event Queries Description 10: Introduce event queries and their importance in continuously monitoring particular events in sensor data. Provide references to related works. Section 11: Model-Based Sensor Data Compression Description 11: Categorize and review important model-based approaches for compressing sensor data. Cover methods like piecewise approximation, compressing correlated data streams, multi-model data compression, and orthogonal transformations. Section 12: Overview of Sensor Data Compression System Description 12: Present the goal of sensor data compression systems and describe general approaches to segmentation and approximation of data streams. Section 13: Methods for Data Segmentation Description 13: Discuss categorization of piecewise linear approximation algorithms into sliding window, top-down, and bottom-up approaches. Section 14: Piecewise Approximation Description 14: Explain techniques such as piecewise linear approximation and piecewise constant approximation, including filters like swing and slide. Section 15: Compressing Correlated Data Streams Description 15: Review methods for exploiting spatial and temporal correlations among different data streams to achieve compression. 
Section 16: Multi-Model Data Compression Description 16: Discuss the effectiveness of using multiple models for approximating data streams and selection strategies for the best models. Section 17: Orthogonal Transformations Description 17: Introduce orthogonal transformation approaches such as DFT and discrete wavelet transform for dimensionality reduction and data compression. Section 18: Lossless vs. Lossy Compression Description 18: Compare lossless and lossy compression techniques, discussing their use cases and performance implications. Section 19: Summary Description 19: Provide a comprehensive overview of the various model-based techniques for sensor data acquisition, cleaning, query processing, and compression. Summarize key methods and their contributions to efficient data management.
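As a small, self-contained illustration of the piecewise approximation ideas listed in Sections 12 through 14 of the outline above, the sketch below compresses a numeric stream into constant segments under a user-chosen error bound. The threshold-based segmentation and the sample readings are assumptions made only for illustration; the swing and slide filters discussed in the literature are more elaborate.

def piecewise_constant(stream, epsilon):
    """Greedily group consecutive readings into constant segments whose values
    stay within +/- epsilon of the segment's representative value."""
    segments = []          # list of (start_index, end_index, value)
    start, lo, hi = 0, None, None
    for i, v in enumerate(stream):
        if lo is None:
            lo = hi = v
            start = i
            continue
        lo2, hi2 = min(lo, v), max(hi, v)
        if hi2 - lo2 <= 2 * epsilon:   # v still fits in the current segment
            lo, hi = lo2, hi2
        else:                          # close the segment, start a new one
            segments.append((start, i - 1, (lo + hi) / 2.0))
            start, lo, hi = i, v, v
    if lo is not None:
        segments.append((start, len(stream) - 1, (lo + hi) / 2.0))
    return segments

readings = [20.1, 20.2, 20.0, 20.3, 22.8, 22.9, 23.1, 19.5]
print(piecewise_constant(readings, epsilon=0.5))
# -> three segments, each approximating its readings within the error bound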
XML Query Processing and Query Languages: A Survey
8
--- paper_title: A Query Processing Approach for XML Database Systems paper_content: Besides the storage engine, the query processor of a database system is the most critical component when it comes to performance and scalability. Research on query processing for relational database systems developed an approach which we believe should also be adopted for the newly proposed XML database systems. It includes a syntactic and semantic analyzation phase, the mapping onto an internal query representation, algebraic and cost-based optimization, and finally the execution on a record-oriented interface. Each step hides its own challenges and will therefore be discussed throughout this paper. Our contribution can be understood as a roadmap that reveals a desirable set of functionalities for an XML query processor. --- paper_title: The Niagara Internet Query System paper_content: Many projections envision a future in which the Internet is populated with a vast number of Web-accessible XML files—a “World-Wide Database”. Recently, there has been a great deal of research into XML query languages to enable the execution of database-style queries over these XML files. However, merely being an XML query-processing engine does not render a system suitable for querying the Internet. A truly useful system must provide mechanisms to (a) find the XML files that are relevant to a given query, and (b) deal with remote data sources that either provide unpredictable data access and transfer rates, or are infinite streams, or both. The Niagara Internet Query System was designed from the bottom-up to provide these mechanisms. It finds relevant XML documents by using a novel collaboration between the Niagara XML-QL query processor and the Niagara “text-in-context” XML search engine. To handle infinite streams and data sources with unpredictable rates, it supports a “get partial” operation on blocking operators in order to produce partial query results, and inserts synchronization packets at critical points in the operator tree to guarantee the consistency of (partial) results. The Niagara Internet Query System is public domain software that can be found at http://www-db.cs.wisc.edu/niagara/. Category: Research. --- paper_title: Navigation- vs. index-based XML multi-query processing paper_content: XML path queries form the basis of complex filtering of XML data. Most current XML path query processing techniques can be divided in two groups. Navigation-based algorithms compute results by analyzing an input document one tag at a time. In contrast, index-based algorithms take advantage of precomputed numbering schemes over the input XML document. We introduce a new index-based technique, index-filter, to answer multiple XML path queries. Index-filter uses indexes built over the document tags to avoid processing large portions of the input document that are guaranteed not to be part of any match. We analyze index-filter and compare it against Y-filter, a state-of-the-art navigation-based technique. We show that both techniques have their advantages, and we discuss the scenarios under which each technique is superior to the other one. In particular, we show that while most XML path query processing techniques work off SAX events, in some cases it pays off to preprocess the input document, augmenting it with auxiliary information that can be used to evaluate the queries faster. We present experimental results over real and synthetic XML documents that validate our claims. 
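To illustrate the navigation-based side of the comparison above, here is a deliberately simplified matcher that evaluates one linear child-axis path (for example /book/title) over a stream of open/close element events, using a stack of currently open elements. It is not YFilter's shared NFA nor the index-filter algorithm from the cited paper; it is only a sketch of event-at-a-time navigation, and the event format is an assumption.

def match_linear_path(events, path):
    """events: iterable of ('start', tag) / ('end', tag) pairs in document order.
    path: list of tags for a rooted child-axis path, e.g. ['book', 'title'].
    Yields each occurrence of the full path."""
    stack = []                      # tags of currently open elements
    for kind, tag in events:
        if kind == 'start':
            stack.append(tag)
            if stack == path:       # the open-element stack spells out the path
                yield tuple(stack)
        else:
            stack.pop()

doc = [('start', 'book'), ('start', 'title'), ('end', 'title'),
       ('start', 'author'), ('end', 'author'), ('end', 'book')]
print(list(match_linear_path(doc, ['book', 'title'])))
# -> [('book', 'title')]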
--- paper_title: Efficient Recursive XML Query Processing Using Relational Database Systems paper_content: Recursive queries are quite important in the context of XML databases. In addition, several recent papers have investigated a relational approach to store XML data and there is growing evidence that schema-conscious approaches are a better option than schema-oblivious techniques as far as query performance is concerned. However, the issue of recursive XML queries for such approaches has not been dealt with satisfactorily. In this paper we argue that it is possible to design a schema-oblivious approach that outperforms schema-conscious approaches for certain types of recursive queries. To that end, we propose a novel schema-oblivious approach, called SUCXENT++ (Schema Unconcious XML Enabled System), that outperforms existing schema-oblivious approaches such as XParent by up to 15 times and schema-conscious approaches (Shared-Inlining) by up to eight times for recursive query execution. Our approach has up to two times smaller storage requirements compared to existing schema-oblivious approaches and 10% less than schema-conscious techniques. In addition SUCXENT++ performs marginally better than Shared-Inlining and is 5.7-47 times faster than XParent as far as insertion time is concerned. --- paper_title: MonetDB/XQuery: a fast XQuery processor powered by a relational engine paper_content: Relational XQuery systems try to re-use mature relational data management infrastructures to create fast and scalable XML database technology. This paper describes the main features, key contributions, and lessons learned while implementing such a system. Its architecture consists of (i) a range-based encoding of XML documents into relational tables, (ii) a compilation technique that translates XQuery into a basic relational algebra, (iii) a restricted (order) property-aware peephole relational query optimization strategy, and (iv) a mapping from XML update statements into relational updates. Thus, this system implements all essential XML database functionalities (rather than a single feature) such that we can learn from the full consequences of our architectural decisions. While implementing this system, we had to extend the state-of-the-art with a number of new technical contributions, such as loop-lifted staircase join and efficient relational query evaluation strategies for XQuery theta-joins with existential semantics. These contributions as well as the architectural lessons learned are also deemed valuable for other relational back-end engines. The performance and scalability of the resulting system is evaluated on the XMark benchmark up to data sizes of 11GB. The performance section also provides an extensive benchmark comparison of all major XMark results published previously, which confirm that the goal of purely relational XQuery processing, namely speed and scalability, was met. --- paper_title: The BEA/XQRL Streaming XQuery Processor paper_content: In this paper, we describe the design, implementation, and performance characteristics of a complete, industrial-strength XQuery engine, the BEA streaming XQuery processor. The engine was designed to provide very high performance for message processing applications, i.e., for transforming XML data streams, and it is a central component of the 8.1 release of BEA's WebLogic Integration (WLI) product. This XQuery engine is fully compliant with the August 2002 draft of the W3C XML Query Language specification. 
A goal of this paper is to describe how an efficient, fully compliant XQuery engine can be built from a few relatively simple components and well-understood technologies. --- paper_title: NaXDB - Realizing Pipelined XQuery Processing in a Native XML Database System paper_content: Supporting queries and modifications on XML documents is a challenging task, and several related approaches exist. When implementing query and modification languages efficiently, the actual persistent storage of the XML data is of particular importance. Generally speaking, the structure of XML data significantly differs from the well-known relational data-model. This paper presents the prototypical implementation of NaXDB, a native XML database management system (DBMS). A native approach means the XML data is stored as a hierarchical tree of linked objects. NaXDB is implemented as a MaxDB by MySQL kernel module and thus inherits several DBMS functionality. Furthermore, NaXDB uses objectoriented extensions of of MaxDB by MySQL to store the tree of linked objects. NaXDB implements a large subset of the query language specification XQuery 1.0 as well as XUpdate for modification. The design and architecture of the prototypical implementation is presented, and concepts for query processing within the database server are discussed. --- paper_title: Holistic twig joins: optimal XML pattern matching paper_content: XML employs a tree-structured data model, and, naturally, XML queries specify patterns of selection predicates on multiple elements related by a tree structure. Finding all occurrences of such a twig pattern in an XML database is a core operation for XML query processing. Prior work has typically decomposed the twig pattern into binary structural (parent-child and ancestor-descendant) relationships, and twig matching is achieved by: (i) using structural join algorithms to match the binary relationships against the XML database, and (ii) stitching together these basic matches. A limitation of this approach for matching twig patterns is that intermediate result sizes can get large, even when the input and output sizes are more manageable.In this paper, we propose a novel holistic twig join algorithm, TwigStack, for matching an XML query twig pattern. Our technique uses a chain of linked stacks to compactly represent partial results to root-to-leaf query paths, which are then composed to obtain matches for the twig pattern. When the twig pattern uses only ancestor-descendant relationships between elements, TwigStack is I/O and CPU optimal among all sequential algorithms that read the entire input: it is linear in the sum of sizes of the input lists and the final result list, but independent of the sizes of intermediate results. We then show how to use (a modification of) B-trees, along with TwigStack, to match query twig patterns in sub-linear time. Finally, we complement our analysis with experimental results on a range of real and synthetic data, and query twig patterns. --- paper_title: Structural joins: a primitive for efficient XML query pattern matching paper_content: XML queries typically specify patterns of selection predicates on multiple elements that have some specified tree structured relationships. The primitive tree structured relationships are parent-child and ancestor-descendant, and finding all occurrences of these relationships in an XML database is a core operation for XML query processing. We develop two families of structural join algorithms for this task: tree-merge and stack-tree. 
The tree-merge algorithms are a natural extension of traditional merge joins and the multi-predicate merge joins, while the stack-tree algorithms have no counterpart in traditional relational join processing. We present experimental results on a range of data and queries using the TIMBER native XML query engine built on top of SHORE. We show that while, in some cases, tree-merge algorithms can have performance comparable to stack-tree algorithms, in many cases they are considerably worse. This behavior is explained by analytical results that demonstrate that, on sorted inputs, the stack-tree algorithms have worst-case I/O and CPU complexities linear in the sum of the sizes of inputs and output, while the tree-merge algorithms do not have the same guarantee. --- paper_title: An Efficient XPath Query Processor for XML Streams paper_content: Streaming XPath evaluation algorithms must record a potentially exponential number of pattern matches when both predicates and descendant axes are present in queries, and the XML data is recursive. In this paper, we use a compact data structure to encode these pattern matches rather than storing them explicitly. We then propose a polynomial time streaming algorithm to evaluate XPath queries by probing the data structure in a lazy fashion. Extensive experiments show that our approach not only has a good theoretical complexity bound but is also efficient in practice. --- paper_title: Mixed Mode XML Query Processing paper_content: Querying XML documents typically involves both tree-based navigation and pattern matching similar to that used in structured information retrieval domains. In this paper, we show that for good performance, a native XML query processing system should support query plans that mix these two processing paradigms. We describe our prototype native XML system, and report on experiments demonstrating that even for simple queries, there are a number of options for how to combine tree-based navigation and structural joins based on information retrieval-style inverted lists, and that these options can have widely varying performance. We present ways of transparently using both techniques in a single system, and provide a cost model for identifying efficient combinations of the techniques. Our preliminary experimental results prove the viability of our approach. ---
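A simplified sketch of the region-encoding idea behind the structural join algorithms discussed above: every element is labelled with a (start, end) interval from a document-order numbering, so that ancestor-descendant tests reduce to interval containment. The nested-loop merge below is not the exact Stack-Tree or TwigStack procedure, and the labels are invented for illustration.

def ancestor_descendant_join(ancestors, descendants):
    """ancestors / descendants: lists of (start, end) region labels, each sorted
    by start. Returns all pairs (a, d) where a's interval strictly contains d's."""
    result = []
    for a_start, a_end in ancestors:
        for d_start, d_end in descendants:
            if d_start > a_end:
                break              # sorted order: no later descendant can match this ancestor
            if a_start < d_start and d_end < a_end:
                result.append(((a_start, a_end), (d_start, d_end)))
    return result

# section (1,10) contains figure (3,4) and (6,7); section (12,20) contains figure (14,15)
sections = [(1, 10), (12, 20)]
figures = [(3, 4), (6, 7), (14, 15)]
print(ancestor_descendant_join(sections, figures))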
Title: XML Query Processing and Query Languages: A Survey Section 1: INTRODUCTION Description 1: Introduce the functionalities of XML, its structure, and the importance of XML query languages and processors. Section 2: XML QUERY LANGUAGES Description 2: Review the evolution of XML query languages, with a focus on XQuery and other historical proposals, detailing their important characteristics and criteria. Section 3: APPROACHES FOR XML QUERY PROCESSING Description 3: Discuss various approaches for XML query processing, including both high-level abstractions and procedural evaluations. Section 4: Query Processing on Flat File Scheme Description 4: Analyze the flat file processing approach, explain different processing techniques such as yfilter, index-filter, and pathstack, and outline their performance considerations. Section 5: Processing on Relational Structure Description 5: Explain the relational structure approach for storing and querying XML data, detailing different relational storage schemes and their implementation. Section 6: Query Processing on Native Storage Scheme Description 6: Describe the native storage scheme approach, including labeling schemes for XML elements, stack-based algorithms, and holistic twig joins for efficient query processing. Section 7: TOWARD FUTURE XML DATABASE MANAGEMENT SYSTEMS Description 7: Discuss the future directions for XML database management systems, including trends, potential research areas, and application interoperability. Section 8: CONCLUSION Description 8: Summarize the current state of XML query processing, highlight the current trends and challenges, and suggest areas for future research.
A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks
15
--- paper_title: Energy Efficient Routing Protocols for Mobile Ad Hoc Networks paper_content: Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node and after certain amount of time the energy level may not be sufficient for data transmission resulting in link failure. In this paper, we have considered two routing protocols-Dynamic Source Routing (DSR) & Minimum Maximum Battery cost Routing (MMBCR) and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. Finally from the simulation results we have concluded that MMBCR gives more network lifetime by selecting route with maximum battery capacity thereby outperforming DSR. General Terms Energy efficiency, MANETS, Routing Protocols. --- paper_title: Ad Hoc Wireless Networks: Architectures and Protocols paper_content: Practical design and performance solutions for every ad hoc wireless networkAd Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: Medium access control, routing, multicasting, and transport protocols QoS provisioning, energy management, security, multihop pricing, and much more In-depth discussion of wireless sensor networks and ultra wideband technology More than 200 examples and end-of-chapter problemsAd Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. --- paper_title: Energy Efficient Routing Protocols for Mobile Ad Hoc Networks paper_content: Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node and after certain amount of time the energy level may not be sufficient for data transmission resulting in link failure. In this paper, we have considered two routing protocols-Dynamic Source Routing (DSR) & Minimum Maximum Battery cost Routing (MMBCR) and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. Finally from the simulation results we have concluded that MMBCR gives more network lifetime by selecting route with maximum battery capacity thereby outperforming DSR. General Terms Energy efficiency, MANETS, Routing Protocols. --- paper_title: Energy conserving routing in wireless ad-hoc networks paper_content: An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. 
Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power. --- paper_title: Ad Hoc Wireless Networks: Architectures and Protocols paper_content: Practical design and performance solutions for every ad hoc wireless networkAd Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: Medium access control, routing, multicasting, and transport protocols QoS provisioning, energy management, security, multihop pricing, and much more In-depth discussion of wireless sensor networks and ultra wideband technology More than 200 examples and end-of-chapter problemsAd Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. --- paper_title: Energy conserving routing in wireless ad-hoc networks paper_content: An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. 
When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power. --- paper_title: Power-aware localized routing in wireless networks paper_content: A cost aware metric for wireless networks based on remaining battery power at nodes was proposed for shortest-cost routing algorithms, assuming constant transmission power. Power-aware metrics, where transmission power depends on distance between nodes and corresponding shortest power algorithms were also proposed. We define a power-cost metric based on the combination of both node's lifetime and distance-based power metrics. We investigate some properties of power adjusted transmissions and show that, if additional nodes can be placed at desired locations between two nodes at distance d, the transmission power can be made linear in d as opposed to d^α dependence for α ≥ 2. This provides basis for power, cost, and power-cost localized routing algorithms where nodes make routing decisions solely on the basis of location of their neighbors and destination. The power-aware routing algorithm attempts to minimize the total power needed to route a message between a source and a destination. The cost-aware routing algorithm is aimed at extending the battery's worst-case lifetime at each node. The combined power-cost localized routing algorithm attempts to minimize the total power needed and to avoid nodes with a short battery's remaining lifetime. We prove that the proposed localized power, cost, and power-cost efficient routing algorithms are loop-free and show their efficiency by experiments. --- paper_title: Power-aware localized routing in wireless networks paper_content: A cost aware metric for wireless networks based on remaining battery power at nodes was proposed for shortest-cost routing algorithms, assuming constant transmission power. Power-aware metrics, where transmission power depends on distance between nodes and corresponding shortest power algorithms were also proposed. We define a power-cost metric based on the combination of both node's lifetime and distance-based power metrics. We investigate some properties of power adjusted transmissions and show that, if additional nodes can be placed at desired locations between two nodes at distance d, the transmission power can be made linear in d as opposed to d^α dependence for α ≥ 2. This provides basis for power, cost, and power-cost localized routing algorithms where nodes make routing decisions solely on the basis of location of their neighbors and destination. The power-aware routing algorithm attempts to minimize the total power needed to route a message between a source and a destination. The cost-aware routing algorithm is aimed at extending the battery's worst-case lifetime at each node. The combined power-cost localized routing algorithm attempts to minimize the total power needed and to avoid nodes with a short battery's remaining lifetime.
We prove that the proposed localized power, cost, and power-cost efficient routing algorithms are loop-free and show their efficiency by experiments. --- paper_title: Energy saving dynamic source routing for ad hoc wireless networks paper_content: In this paper, energy saving dynamic source routing (ESDSR) protocol is introduced to maximize the life-span of a mobile ad hoc network (MANET). Many theoretical studies show that energy consumption in MANET can be significantly reduced using energy-aware routing protocols compared to fixed-power minimum-hop routing protocols. Two approaches are broadly suggested for energy-aware routing protocols - transmission power control approach and load sharing approach. ESDSR integrates the advantages of those two approaches. In ESDSR, the routing decision is based on a load balancing approach. Once a routing decision is made, link by link transmit power adjustment per packet is done based on a transmit power control approach. We modified dynamic source routing (DSR) protocol to make it energy aware by a network simulator (network simulator-2 of University of California). The simulation results show that the proposed ESDSR can save energy up to 40% per packet and it can send 20 % more packets to destinations by spending the same battery power in compare to DSR. --- paper_title: Overhead reduction and energy management in DSR for MANETs paper_content: Dynamic Source Routing (DSR) Protocol is a commonly applied reactive protocol for mobile ad hoc networks (MANETs). When the network size is increased, overhead is increased due to the source routing nature of DSR. Moreover, energy consumption of the nodes increases, as the nodes act as intermediate nodes for many source destination pairs. In order to improve the scalability of DSR, in this paper, a modification is proposed, and is referred to as Modified DSR (MDSR). Secondly, in order to conserve the battery power, MDSR with energy management is proposed, in which the packets are transmitted with minimum required energy. Simulation results show that both MDSR and MDSR with Energy management generate less overhead and delay compared to standard DSR. It is observed that in MDSR with Energy management, irrespective of network size, the average energy consumption per node is 37.9% less compared to that of standard DSR. ---
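The following sketch makes two of the ideas above concrete: (a) because radio transmission power grows roughly like d^α with α ≥ 2, relaying through an intermediate node can cost less energy than one long hop, and (b) a route can be chosen either to minimise total transmission energy or to maximise the minimum residual battery along the path (the MMBCR-style criterion). The power-model constants, hop distances, and battery levels are illustrative assumptions, not parameters taken from the cited protocols.

ALPHA = 2.0   # path-loss exponent (>= 2), illustrative
C = 0.1       # fixed per-hop electronics cost, illustrative

def hop_energy(d):
    return d ** ALPHA + C

def route_energy(hops):
    """hops: list of hop distances along a candidate route."""
    return sum(hop_energy(d) for d in hops)

def min_residual_battery(route, battery):
    """MMBCR-style bottleneck metric: the weakest intermediate node on the route."""
    return min(battery[node] for node in route[1:-1]) if len(route) > 2 else float('inf')

# (a) relaying: one 10 m hop vs. two 5 m hops
print(hop_energy(10.0), 2 * hop_energy(5.0))   # 100.1 vs 50.2 -> relaying saves energy

# (b) route selection between source 'S' and sink 'D'
battery = {'A': 0.9, 'B': 0.2, 'C': 0.6}
candidates = {('S', 'B', 'D'): [3.0, 3.0],             # cheaper, but through a weak node
              ('S', 'A', 'C', 'D'): [3.0, 3.0, 3.0]}   # costlier, but battery-rich
best_energy = min(candidates, key=lambda r: route_energy(candidates[r]))
best_lifetime = max(candidates, key=lambda r: min_residual_battery(r, battery))
print(best_energy, best_lifetime)   # the two criteria pick different routes

The point of the second print is the trade-off the surveyed protocols wrestle with: minimum-energy routing and maximum-lifetime routing generally do not select the same path.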
Title: A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks Section 1: INTRODUCTION Description 1: This section provides an overview of Ad Hoc Networks, characterizes the nature of MANETs, discusses the challenges faced particularly regarding power constraint and the motivation for energy efficient routing protocols based on DSR. Section 2: ROUTING PROCESS IN AD HOC NETWORKS Description 2: This section explains the basic routing process in MANETs, covering the route discovery and route selection mechanisms, and highlights existing ad hoc routing protocols with specific characteristics needed. Section 3: Dynamic source routing protocol Description 3: This section provides a detailed explanation of the DSR protocol, including the route discovery and route maintenance processes, along with its benefits and limitations. Section 4: ENERGY EFFICIENT ROUTING PROTOCOLS Description 4: This section discusses different energy efficient routing protocols built on DSR, elaborates on various approaches such as transmission power control, load distribution, and sleep/power-down mechanisms to minimize energy consumption. Section 5: RELATED WORK Description 5: This section reviews various modifications and enhancements made on the traditional DSR to develop energy-efficient routing protocols over the past years. Section 6: Global Energy Aware Routing (GEAR) Protocol Description 6: This section describes the GEAR protocol and its approach to selecting routes based on residual battery power, along with associated problems and limitations. Section 7: Local Energy Aware Routing (LEAR) Protocol Description 7: This section outlines the LEAR protocol mechanism which considers nodes' willingness based on remaining battery power to participate in routing. Section 8: Energy Saving Dynamic Source Routing (ESDSR) Protocol Description 8: This section details ESDSR, focusing on the load balancing and transmission power control strategies designed to prolong network lifetime. Section 9: Energy Dependent DSR Routing (EEDSR) Protocol Description 9: This section explains EEDSR's method using residual battery power and drain rate to determine node participation in forwarding data packets. Section 10: Energy Efficient DSR Protocol (E2DSR) Description 10: This section discusses E2DSR's new control packet structures and route selection algorithms for energy efficiency. Section 11: Topology Control Based Power-Aware and Battery Life-aware DSR (TPBDSR) Protocol Description 11: This section describes TPBDSR's mechanism to adjust transmitting power based on network topology changes and its impact on node lifetime. Section 12: Minimum Energy Dynamic Source Routing Protocol (MEDSR) Description 12: This section covers MEDSR's dual mechanism approach, utilizing low and high power levels for route discovery to ensure network connectivity and energy savings. Section 13: Modified DSR Routing (MDSR) Protocol Description 13: This section explains MDSR's aim to reduce overhead by minimizing routing reply packets and implementing fixed header sizes. Section 14: Multi-path Energy Aware DSR Routing (MEADSR) protocol Description 14: This section elucidates MEADSR's integration of multipath and energy-aware techniques for efficient routing and balanced energy consumption among nodes. 
Section 15: Conclusion Description 15: This section synthesizes the discussion on energy consumption issues in MANETs, compares various energy-efficient protocols based on DSR, and emphasizes the need for integrating multiple techniques to enhance network performance.
Bridging ontologies and folksonomies to leverage knowledge sharing on the social Web: A brief survey
10
--- paper_title: Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval in Weblogs paper_content: While free-tagging classification is widely used in social software implementations and especially in weblogs, it raises various issues regarding information retrieval. In this paper, we describe an approach that mixes folksonomies and semantic web technologies in order to solve some of these problems, and to enrich information retrieval capabilities among blog posts. We first introduce the corporate context of the study and the issues we have faced that motivated our approach. Then, we argue how the use of domain ontologies combined with the SIOC vocabulary on the top of an existing folksonomy and weblogging platform offers a way to get rid of free-tagging classification flaws, and enhances information retrieval by suggesting related blog posts. Aside of the theoretical background, this paper also focuses on implementation. We present experimental results of this approach through the example of add-ons to a corporate blogging platform and the associated semantic web search engine, that extensively uses RDF and other semantic web technologies to find appropriate information and suggest related posts. --- paper_title: The complex dynamics of collaborative tagging paper_content: The debate within the Web community over the optimal means by which to organize information often pits formalized classifications against distributed collaborative tagging systems. A number of questions remain unanswered, however, regarding the nature of collaborative tagging systems including whether coherent categorization schemes can emerge from unsupervised tagging by users. This paper uses data from the social bookmarking site delicio. us to examine the dynamics of collaborative tagging systems. In particular, we examine whether the distribution of the frequency of use of tags for "popular" sites with a long history (many tags and many users) can be described by a power law distribution, often characteristic of what are considered complex systems. We produce a generative model of collaborative tagging in order to understand the basic dynamics behind tagging, including how a power law distribution of tags could arise. We empirically examine the tagging history of sites in order to determine how this distribution arises over time and to determine the patterns prior to a stable distribution. Lastly, by focusing on the high-frequency tags of a site where the distribution of tags is a stabilized power law, we show how tag co-occurrence networks for a sample domain of tags can be used to analyze the meaning of particular tags given their relationship to other tags. --- paper_title: Ontologies are us: a unified model of social networks and semantics paper_content: In our work we extend the traditional bipartite model of ontologies with the social dimension, leading to a tripartite model of actors, concepts and instances. We demonstrate the application of this representation by showing how community-based semantics emerges from this model through a process of graph transformation. We illustrate ontology emergence by two case studies, an analysis of a large scale folksonomy system and a novel method for the extraction of community-based ontologies from Web pages. --- paper_title: GroupMe! - Where Semantic Web meets Web 2.0 paper_content: Grouping is an attractive interaction metaphor for users to create reference collections of Web resources they are interested in. 
Each grouping activity has a certain semantics: things which were previously unrelated are now connected with others via the group. We present the GroupMe! application which allows users to group and arrange multimedia Web resources they are interested in. GroupMe! has an easy-touse interface for gathering and grouping of resources, and allows users to tag everything they like. The semantics of any user interaction is captured, transformed and stored as adequate RDF descriptions. As an example application of this automatically derived RDF content, we show the enhancement of search for tagged Web resources, which evaluates the grouping information to deduce additional contextual information about the resources. GroupMe! is available via http://www.groupme.org. --- paper_title: Bridging the Gap Between Folksonomies and the Semantic Web: An Experience Report paper_content: While folksonomies allow tagging of similar resources with a variety of tags, their content retrieval mechanisms are severely hampered by being agnostic to the relations that exist between these tags. To overcome this limitation, several methods have been proposed to find groups of implicitly inter-related tags. We believe that content retrieval can be further improved by making the relations between tags explicit. In this paper we propose the semantic enrichment of folksonomy tags with explicit relations by harvesting the Semantic Web, i.e., dynamically selecting and combining relevant bits of knowledge from online ontologies. Our experimental results show that, while semantic enrichment needs to be aware of the particular characteristics of folksonomies and the Semantic Web, it is beneficial for both. --- paper_title: Integrating Folksonomies with the Semantic Web paper_content: While tags in collaborative tagging systems serve primarily an indexing purpose, facilitating search and navigation of resources, the use of the same tags by more than one individual can yield a collective classification schema. We present an approach for making explicit the semantics behind the tag space in social tagging systems, so that this collaborative organization can emerge in the form of groups of concepts and partial ontologies. This is achieved by using a combination of shallow pre-processing strategies and statistical techniques together with knowledge provided by ontologies available on the semantic web. Preliminary results on the del.icio.us and Flickr tag sets show that the approach is very promising: it generates clusters with highly related tags corresponding to concepts in ontologies and meaningful relationships among subsets of these tags can be identified. --- paper_title: Using the Semantic Web as Background Knowledge for Ontology Mapping paper_content: While current approaches to ontology mapping produce good results by mainly relying on label and structure based similarity measures, there are several cases in which they fail to discover important mappings. In this paper we describe a novel approach to ontology mapping, which is able to avoid this limitation by using background knowledge. Existing approaches relying on background knowledge typically have one or both of two key limitations: 1) they rely on a manually selected reference ontology; 2) they suffer from the noise introduced by the use of semi-structured sources, such as text corpora. Our technique circumvents these limitations by exploiting the increasing amount of semantic resources available online. 
As a result, there is no need either for a manually selected reference ontology (the relevant ontologies are dynamically selected from an online ontology repository), or for transforming background knowledge in an ontological form. The promising results from experiments on two real life thesauri indicate both that our approach has a high precision and also that it can find mappings, which are typically missed by existing approaches. --- paper_title: Discovering Shared Conceptualizations in Folksonomies paper_content: Social bookmarking tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. Unlike ontologies, shared conceptualizations are not formalized, but rather implicit. We present a new data mining task, the mining of all frequent tri-concepts, together with an efficient algorithm, for discovering these implicit shared conceptualizations. Our approach extends the data mining task of discovering all closed itemsets to three-dimensional data structures to allow for mining folksonomies. We provide a formal definition of the problem, and present an efficient algorithm for its solution. Finally, we show the applicability of our approach on three large real-world examples. --- paper_title: Ontology of Folksonomy: A Mash-Up of Apples and Oranges paper_content: Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by the terms used in data that they might generate, share, or consume. Folksonomies are an emergent phenomenon of the social Web. They arise from data about how people associate terms with content that they generate, share, or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This is a false dichotomy; they are more like apples and oranges. In fact, as the Semantic Web matures and the social Web grows, there is increasing value in applying Semantic Web technologies to the data of the social Web. This article is an attempt to clarify the distinct roles for ontologies and folksonomies, and preview some new work that applies the two ideas together—an ontology of folk-sonomy. --- paper_title: Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval in Weblogs paper_content: While free-tagging classification is widely used in social software implementations and especially in weblogs, it raises various issues regarding information retrieval. In this paper, we describe an approach that mixes folksonomies and semantic web technologies in order to solve some of these problems, and to enrich information retrieval capabilities among blog posts. We first introduce the corporate context of the study and the issues we have faced that motivated our approach. Then, we argue how the use of domain ontologies combined with the SIOC vocabulary on the top of an existing folksonomy and weblogging platform offers a way to get rid of free-tagging classification flaws, and enhances information retrieval by suggesting related blog posts. Aside of the theoretical background, this paper also focuses on implementation. We present experimental results of this approach through the example of add-ons to a corporate blogging platform and the associated semantic web search engine, that extensively uses RDF and other semantic web technologies to find appropriate information and suggest related posts. 
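A minimal sketch of the tripartite (user, tag, resource) view of a folksonomy that the mining approaches above operate on, together with a naive indicator of "shared conceptualizations": tags that several users apply to several resources. The toy assignments are invented, and the simple counting below is only meant to convey the idea, not the efficient tri-concept mining algorithm of the cited work.

from collections import defaultdict

# A folksonomy as a set of (user, tag, resource) assignments (toy data).
assignments = [
    ('alice', 'semweb', 'http://example.org/p1'),
    ('alice', 'rdf',    'http://example.org/p1'),
    ('bob',   'semweb', 'http://example.org/p1'),
    ('bob',   'semweb', 'http://example.org/p2'),
    ('carol', 'rdf',    'http://example.org/p2'),
    ('carol', 'photos', 'http://example.org/img1'),
]

users_of_tag = defaultdict(set)
resources_of_tag = defaultdict(set)
for user, tag, resource in assignments:
    users_of_tag[tag].add(user)
    resources_of_tag[tag].add(resource)

# Tags applied by at least two users to at least two resources: a crude sign
# that a shared, community-level concept is emerging around them.
shared = [t for t in users_of_tag
          if len(users_of_tag[t]) >= 2 and len(resources_of_tag[t]) >= 2]
print(sorted(shared))   # -> ['rdf', 'semweb']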
--- paper_title: Extreme Tagging: Emergent Semantics through the Tagging of Tags paper_content: While the Semantic Web requires a large amount of structured knowledge (triples) to allow machine reasoning, the acquisition of this knowledge still represents an open issue. Indeed, expressing expert knowledge in a given formalism is a tedious process. Less structured annotations such as tagging have, however, proved immensely popular, whilst existing unstructured or semi-structured collaborative knowledge bases such as Wikipedia have proven to be useful and scalable. Both processes are often regulated through social mechanisms such as wiki-like operations, recommendations, ratings, and collaborative games. To promote collaborative tagging as a means to acquire unstructured as well as structured knowledge we introduce the notion of Extreme Tagging, which describes systems which allow the tagging of resources, as well as of tags themselves and their relations. We provide a formal description of extreme tagging followed by examples and highlight the necessity of regulatory processes which can be applied to it. We also present a prototype implementation. --- paper_title: Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval in Weblogs paper_content: While free-tagging classification is widely used in social software implementations and especially in weblogs, it raises various issues regarding information retrieval. In this paper, we describe an approach that mixes folksonomies and semantic web technologies in order to solve some of these problems, and to enrich information retrieval capabilities among blog posts. We first introduce the corporate context of the study and the issues we have faced that motivated our approach. Then, we argue how the use of domain ontologies combined with the SIOC vocabulary on the top of an existing folksonomy and weblogging platform offers a way to get rid of free-tagging classification flaws, and enhances information retrieval by suggesting related blog posts. Aside of the theoretical background, this paper also focuses on implementation. We present experimental results of this approach through the example of add-ons to a corporate blogging platform and the associated semantic web search engine, that extensively uses RDF and other semantic web technologies to find appropriate information and suggest related posts. --- paper_title: Ontology of Folksonomy: A Mash-Up of Apples and Oranges paper_content: Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by the terms used in data that they might generate, share, or consume. Folksonomies are an emergent phenomenon of the social Web. They arise from data about how people associate terms with content that they generate, share, or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This is a false dichotomy; they are more like apples and oranges. In fact, as the Semantic Web matures and the social Web grows, there is increasing value in applying Semantic Web technologies to the data of the social Web. This article is an attempt to clarify the distinct roles for ontologies and folksonomies, and preview some new work that applies the two ideas together—an ontology of folk-sonomy. 
--- paper_title: Meaning Of A Tag: A collaborative approach to bridge the gap between tagging and Linked Data paper_content: This paper introduces MOAT, a lightweight Semantic Web framework that provides a collaborative way to let Web 2.0 content producers give meanings to their tags in a machine- readable way. To achieve this goal, this approach relies on Linked Data principles, using URIs from existing resources to define these meanings. That way, users can create inter- linked RDF data and let their content enter the Semantic Web, while solving some limits of free-tagging at the same time. --- paper_title: Ontogame: Weaving the semantic web by online gaming paper_content: Most of the challenges faced when building the Semantic Web require a substantial amount of human labor and intelligence. Despite significant advancement in ontology learning and human language technology, the tasks of ontology construction, semantic annotation, and establishing alignments between multiple ontologies remain highly dependent on human intelligence. This means that individuals need to contribute time and sometimes other resources. Unfortunately, we observe a serious lack of user involvement in the aforementioned tasks, which may be due to the absence of motivations for people who contribute. As a novel solution, we (1) propose to masquerade the core tasks of weaving the Semantic Web behind online, multi-player game scenarios, in order to create proper incentives for human users to get involved. Doing so, we adopt the findings from the already famous "games with a purpose" by von Ahn, who has shown that presenting a useful task, which requires human intelligence, in the form of an online game can motivate a large amount of people to work heavily on this task, and this for free. Then, we (2) describe our generic OntoGame platform, and (3) several gaming scenarios for various tasks plus our respective prototypes. Based on the analysis of user data and interviews with players, we provide preliminary evidence that users (4) enjoy the games and are willing to dedicate their time to those games, (5) are able to produce high-quality conceptual choices. Eventually we show how users entertaining themselves by online games can unknowingly help weave and maintain the Semantic Web. --- paper_title: Extreme Tagging: Emergent Semantics through the Tagging of Tags paper_content: While the Semantic Web requires a large amount of structured knowledge (triples) to allow machine reasoning, the acquisition of this knowledge still represents an open issue. Indeed, expressing expert knowledge in a given formalism is a tedious process. Less structured annotations such as tagging have, however, proved immensely popular, whilst existing unstructured or semi-structured collaborative knowledge bases such as Wikipedia have proven to be useful and scalable. Both processes are often regulated through social mechanisms such as wiki-like operations, recommendations, ratings, and collaborative games. To promote collaborative tagging as a means to acquire unstructured as well as structured knowledge we introduce the notion of Extreme Tagging, which describes systems which allow the tagging of resources, as well as of tags themselves and their relations. We provide a formal description of extreme tagging followed by examples and highlight the necessity of regulatory processes which can be applied to it. We also present a prototype implementation. 
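To show what "giving a tag a machine-readable meaning" can look like in practice, the sketch below records one tagging event as RDF triples and attaches a DBpedia URI as the tag's intended meaning, assuming the rdflib library is available. The property and class names under the ex: namespace are placeholders invented for this example; they are not the actual MOAT, SCOT, or Tag Ontology terms, which the cited papers define precisely.

from rdflib import Graph, Namespace, URIRef, Literal, RDF

EX = Namespace('http://example.org/tagging#')   # placeholder vocabulary, not MOAT/SCOT
DBR = Namespace('http://dbpedia.org/resource/')

g = Graph()
tagging = URIRef('http://example.org/tagging/1')

# "alice tagged a bookmark with the label 'paris', meaning the city of Paris."
g.add((tagging, RDF.type, EX.Tagging))
g.add((tagging, EX.taggedBy, URIRef('http://example.org/users/alice')))
g.add((tagging, EX.taggedResource, URIRef('http://example.org/bookmarks/42')))
g.add((tagging, EX.tagLabel, Literal('paris')))
g.add((tagging, EX.tagMeaning, DBR.Paris))   # the Linked Data URI that disambiguates the label

print(g.serialize(format='turtle'))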
--- paper_title: Towards Semantically-Interlinked Online Communities paper_content: Online community sites have replaced the traditional means of keeping a community informed via libraries and publishing. At present, online communities are islands that are not interlinked. We describe different types of online communities and tools that are currently used to build and support such communities. Ontologies and Semantic Web technologies offer an upgrade path to providing more complex services. Fusing information and inferring links between the various applications and types of information provides relevant insights that make the available information on the Internet more valuable. We present the SIOC ontology which combines terms from vocabularies that already exist with new terms needed to describe the relationships between concepts in the realm of online community sites. --- paper_title: GroupMe! - Where Semantic Web meets Web 2.0 paper_content: Grouping is an attractive interaction metaphor for users to create reference collections of Web resources they are interested in. Each grouping activity has a certain semantics: things which were previously unrelated are now connected with others via the group. We present the GroupMe! application which allows users to group and arrange multimedia Web resources they are interested in. GroupMe! has an easy-touse interface for gathering and grouping of resources, and allows users to tag everything they like. The semantics of any user interaction is captured, transformed and stored as adequate RDF descriptions. As an example application of this automatically derived RDF content, we show the enhancement of search for tagged Web resources, which evaluates the grouping information to deduce additional contextual information about the resources. GroupMe! is available via http://www.groupme.org. --- paper_title: Tag Mediated Society with SCOT Ontology paper_content: In this paper we give an overview of the int.ere.st for a social tagging, bookmarking, and sharing service. It is based on the SCOT ontology. The SCOT ontology can represent the structure and semantics for social tagging data and provide methods for sharing and reusing them. We describe how it enables users to participate in a semantic social tagging from functional point of view and show how int.ere.st allows users to save, tag, and search SCOT ontologies. All kinds of user contributions in the system will be exposed as RDF vocabularies that connect them. We believe it is a good starting point to build Semantic Web based society using tagging data. --- paper_title: mle: Enhancing the Exploration of Mailing List Archives Through Making Semantics Explicit paper_content: Following and understanding discussions on mailing lists is a prevalent task for executives and policy makers in order to get an impression of one's company image. However, existing solutions providing a Web-based archive require substantial manual effort to search for or filter certain information. With mle we propose a new way to automatically process mailing list archives. The tool is realised based on two Semantic Web technologies: Firstly, SIOC is utilised as the primary vocabulary for describing posts, people, and topics; secondly the RDF metadata is deployed by means of embedding it in the Web page encoded in RDFa. 
--- paper_title: Integrating Folksonomies with the Semantic Web paper_content: While tags in collaborative tagging systems serve primarily an indexing purpose, facilitating search and navigation of resources, the use of the same tags by more than one individual can yield a collective classification schema. We present an approach for making explicit the semantics behind the tag space in social tagging systems, so that this collaborative organization can emerge in the form of groups of concepts and partial ontologies. This is achieved by using a combination of shallow pre-processing strategies and statistical techniques together with knowledge provided by ontologies available on the semantic web. Preliminary results on the del.icio.us and Flickr tag sets show that the approach is very promising: it generates clusters with highly related tags corresponding to concepts in ontologies and meaningful relationships among subsets of these tags can be identified. --- paper_title: Ontology of Folksonomy: A Mash-Up of Apples and Oranges paper_content: Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by the terms used in data that they might generate, share, or consume. Folksonomies are an emergent phenomenon of the social Web. They arise from data about how people associate terms with content that they generate, share, or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This is a false dichotomy; they are more like apples and oranges. In fact, as the Semantic Web matures and the social Web grows, there is increasing value in applying Semantic Web technologies to the data of the social Web. This article is an attempt to clarify the distinct roles for ontologies and folksonomies, and preview some new work that applies the two ideas together—an ontology of folk-sonomy. --- paper_title: The complex dynamics of collaborative tagging paper_content: The debate within the Web community over the optimal means by which to organize information often pits formalized classifications against distributed collaborative tagging systems. A number of questions remain unanswered, however, regarding the nature of collaborative tagging systems including whether coherent categorization schemes can emerge from unsupervised tagging by users. This paper uses data from the social bookmarking site delicio. us to examine the dynamics of collaborative tagging systems. In particular, we examine whether the distribution of the frequency of use of tags for "popular" sites with a long history (many tags and many users) can be described by a power law distribution, often characteristic of what are considered complex systems. We produce a generative model of collaborative tagging in order to understand the basic dynamics behind tagging, including how a power law distribution of tags could arise. We empirically examine the tagging history of sites in order to determine how this distribution arises over time and to determine the patterns prior to a stable distribution. Lastly, by focusing on the high-frequency tags of a site where the distribution of tags is a stabilized power law, we show how tag co-occurrence networks for a sample domain of tags can be used to analyze the meaning of particular tags given their relationship to other tags. 
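The power-law claim in the collaborative-tagging study above can be checked informally with a few lines of code: on a log-log scale a power-law tag-frequency distribution is roughly a straight line, and its exponent can be estimated with a least-squares fit. The counts below are synthetic stand-ins for real bookmarking data, chosen only to illustrate the computation.

import math

# Synthetic per-tag usage counts for one resource (rank-ordered, roughly power-law).
counts = [512, 250, 160, 120, 95, 80, 70, 61, 54, 50]

xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
ys = [math.log(c) for c in counts]

# Ordinary least-squares slope of log(count) vs. log(rank): the power-law exponent.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
print(round(slope, 2))   # close to -1 for a Zipf-like distribution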
--- paper_title: Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval in Weblogs paper_content: While free-tagging classification is widely used in social software implementations and especially in weblogs, it raises various issues regarding information retrieval. In this paper, we describe an approach that mixes folksonomies and semantic web technologies in order to solve some of these problems, and to enrich information retrieval capabilities among blog posts. We first introduce the corporate context of the study and the issues we have faced that motivated our approach. Then, we argue how the use of domain ontologies combined with the SIOC vocabulary on the top of an existing folksonomy and weblogging platform offers a way to get rid of free-tagging classification flaws, and enhances information retrieval by suggesting related blog posts. Aside of the theoretical background, this paper also focuses on implementation. We present experimental results of this approach through the example of add-ons to a corporate blogging platform and the associated semantic web search engine, that extensively uses RDF and other semantic web technologies to find appropriate information and suggest related posts. --- paper_title: Ontologies are us: a unified model of social networks and semantics paper_content: In our work we extend the traditional bipartite model of ontologies with the social dimension, leading to a tripartite model of actors, concepts and instances. We demonstrate the application of this representation by showing how community-based semantics emerges from this model through a process of graph transformation. We illustrate ontology emergence by two case studies, an analysis of a large scale folksonomy system and a novel method for the extraction of community-based ontologies from Web pages. --- paper_title: Towards Semantically-Interlinked Online Communities paper_content: Online community sites have replaced the traditional means of keeping a community informed via libraries and publishing. At present, online communities are islands that are not interlinked. We describe different types of online communities and tools that are currently used to build and support such communities. Ontologies and Semantic Web technologies offer an upgrade path to providing more complex services. Fusing information and inferring links between the various applications and types of information provides relevant insights that make the available information on the Internet more valuable. We present the SIOC ontology which combines terms from vocabularies that already exist with new terms needed to describe the relationships between concepts in the realm of online community sites. --- paper_title: Integrating Folksonomies with the Semantic Web paper_content: While tags in collaborative tagging systems serve primarily an indexing purpose, facilitating search and navigation of resources, the use of the same tags by more than one individual can yield a collective classification schema. We present an approach for making explicit the semantics behind the tag space in social tagging systems, so that this collaborative organization can emerge in the form of groups of concepts and partial ontologies. This is achieved by using a combination of shallow pre-processing strategies and statistical techniques together with knowledge provided by ontologies available on the semantic web. 
Preliminary results on the del.icio.us and Flickr tag sets show that the approach is very promising: it generates clusters with highly related tags corresponding to concepts in ontologies and meaningful relationships among subsets of these tags can be identified. --- paper_title: Using Ontologies to Strengthen Folksonomies and Enrich Information Retrieval in Weblogs paper_content: While free-tagging classification is widely used in social software implementations and especially in weblogs, it raises various issues regarding information retrieval. In this paper, we describe an approach that mixes folksonomies and semantic web technologies in order to solve some of these problems, and to enrich information retrieval capabilities among blog posts. We first introduce the corporate context of the study and the issues we have faced that motivated our approach. Then, we argue how the use of domain ontologies combined with the SIOC vocabulary on the top of an existing folksonomy and weblogging platform offers a way to get rid of free-tagging classification flaws, and enhances information retrieval by suggesting related blog posts. Aside of the theoretical background, this paper also focuses on implementation. We present experimental results of this approach through the example of add-ons to a corporate blogging platform and the associated semantic web search engine, that extensively uses RDF and other semantic web technologies to find appropriate information and suggest related posts. --- paper_title: Ontologies are us: a unified model of social networks and semantics paper_content: In our work we extend the traditional bipartite model of ontologies with the social dimension, leading to a tripartite model of actors, concepts and instances. We demonstrate the application of this representation by showing how community-based semantics emerges from this model through a process of graph transformation. We illustrate ontology emergence by two case studies, an analysis of a large scale folksonomy system and a novel method for the extraction of community-based ontologies from Web pages. --- paper_title: Extreme Tagging: Emergent Semantics through the Tagging of Tags paper_content: While the Semantic Web requires a large amount of structured knowledge (triples) to allow machine reasoning, the acquisition of this knowledge still represents an open issue. Indeed, expressing expert knowledge in a given formalism is a tedious process. Less structured annotations such as tagging have, however, proved immensely popular, whilst existing unstructured or semi-structured collaborative knowledge bases such as Wikipedia have proven to be useful and scalable. Both processes are often regulated through social mechanisms such as wiki-like operations, recommendations, ratings, and collaborative games. To promote collaborative tagging as a means to acquire unstructured as well as structured knowledge we introduce the notion of Extreme Tagging, which describes systems which allow the tagging of resources, as well as of tags themselves and their relations. We provide a formal description of extreme tagging followed by examples and highlight the necessity of regulatory processes which can be applied to it. We also present a prototype implementation. ---
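As a hedged illustration of the extreme tagging idea summarised above, where tags and even taggings can themselves be tagged, one possible in-memory representation is sketched below; the class and field names are invented for this sketch and are not taken from the cited prototype.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Resource:
        uri: str

    @dataclass
    class Tag:
        label: str

    @dataclass
    class Tagging:
        tagger: str
        tag: Tag
        # The target may be an ordinary resource, a tag, or another tagging,
        # which is what lets tags and tag relations be tagged in turn.
        target: Union[Resource, Tag, "Tagging"]

    page = Resource("http://example.org/article")
    t1 = Tagging("alice", Tag("jaguar"), page)        # tag a resource
    t2 = Tagging("bob", Tag("animal"), t1.tag)        # tag the tag itself
    t3 = Tagging("carol", Tag("species-sense"), t2)   # tag a previous tagging
    print(t1, t2, t3, sep="\n")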
Title: Bridging Ontologies and Folksonomies to Leverage Knowledge Sharing on the Social Web: A Brief Survey Section 1: Introduction Description 1: Introduce the paper's focus on leveraging knowledge sharing through bridging ontologies and folksonomies on the social web. Section 2: Structure in Folksonomies Description 2: Discuss the analysis of semantic information within folksonomies and methods to extract semantic relationships. Section 3: Clustering and Mapping with Ontologies Description 3: Explain methods for clustering tags and mapping them to ontological concepts. Section 4: Data Mining and Folksonomies Description 4: Describe the application of data mining techniques to extract meaningful information from folksonomies. Section 5: Enriching Folksonomies Description 5: Present works that support folksonomies with semantic web technologies to enrich social tagging platforms. Section 6: Guiding Tagging with Ontologies Description 6: Discuss solutions to integrate ontologies minimally intrusively with social tagging interfaces. Section 7: Building an Ontology of Folksonomy Description 7: Describe the construction and benefits of an ontology dedicated to formalizing the act of tagging. Section 8: Ontologies to Interlink Social Softwares Description 8: Present contributions aiming to interconnect online communities using semantic web formalisms. Section 9: Social Aspects Description 9: Address the social aspects and design models fitting the actual usage of knowledge sharing platforms. Section 10: Perspectives Description 10: Provide perspectives on the integration of folksonomies with semantic web technologies, including remaining challenges and future opportunities.
Coordinated Contour Following Control for Machining Operations - A Survey
5
--- paper_title: Passive control of bilateral teleoperated manipulators paper_content: The control of a bilateral teleoperated manipulator system is considered. The goals of the control are to (i) coordinate the motions of the two manipulators according to a predefined kinematic scaling, (ii) render the dynamics of a locked system, and its response to forces from the human operator and environment, to approximate that of predefined natural dynamics, and (iii) provide for possible scaling of power. In addition, for safety reasons, the closed-loop system needs to remain passive. For linear dynamically similar systems, the dynamics of the system can be block diagonalized into two decoupled mechanical systems: the shape system that deals with the coordination error, and the locked system that describes the average motion of the two manipulators. The passive velocity field control methodology is then applied to the shape system to regulate the coordination error at 0 and at the same time preserve the passivity of the overall system. ---
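As an illustrative aside (generic textbook notation, not equations quoted from the cited paper), the locked/shape decomposition mentioned in the abstract is often written as follows for two coordinated axes with positions x_1 and x_2, a kinematic scaling rho, assumed averaging weights alpha_i, external forces F_1 and F_2, and a constant c:

    \[
      x_E \;=\; x_1 - \rho\, x_2 \qquad \text{(shape system: coordination error)},
    \]
    \[
      x_L \;=\; \alpha_1 x_1 + \alpha_2 x_2 \qquad \text{(locked system: common motion)},
    \]
    \[
      \text{control goal: } x_E(t) \to 0, \qquad
      \text{passivity: } \int_0^{T} \big( F_1^{\top}\dot{x}_1 + F_2^{\top}\dot{x}_2 \big)\, dt \;\ge\; -c^2 \quad \forall\, T \ge 0 .
    \]

Driving the shape coordinate to zero enforces the coordination, while the locked coordinate is left to respond to the operator and environment forces.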
Title: Coordinated Contour Following Control for Machining Operations - A Survey Section 1: Introduction Description 1: This section introduces the fundamental concepts of contour following in machining operations and the objectives of the paper. Section 2: Non-coordination based approach Description 2: This section reviews traditional control approaches focusing on improving trajectory tracking for each individual machining axis without coordination. Section 3: Cross-coupled Control Approach Description 3: This section examines approaches that use cross-coupled or synchronizing control strategies to utilize contour error in the feedback structure. Section 4: Velocity Field Control Approach Description 4: This section reviews the velocity field control approach that abandons timed trajectory requirements and instead uses a time-invariant velocity field for contour following. Section 5: Conclusions Description 5: This section summarizes the various approaches to contour following control, their benefits, and limitations.
A State-of-the-Art Survey on Semantic Web Mining
6
--- paper_title: Finding association rules in semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach. --- paper_title: “Hello world”, web mining for e-learning paper_content: Abstract As the internet and mobile applications are getting an important role in our lives, usage of mobile services also took place in educational field since the internet is widespread, which is usually called by the terms “e-learning” or “distance learning”. A known issue on e-learning is all the content’s being online and less face-to-face communication than traditional learning; this brings the problem of chasing student’s success, and advising and managing student’s way of studying. Hence, a recent hot topic, data mining, can be applied on student’s data left on e-learning portals to guide the instructor and advisors to help students’ being more successful. Recent researches done on this topic showed that e-learning combined with data mining can decrease the gap between itself and traditional learning — referred as semantic web mining in general. --- paper_title: Using Ontologies in Semantic Data Mining with SEGS and g-SEGS paper_content: With the expanding of the SemanticWeb and the availability of numerous ontologies which provide domain background knowledge and semantic descriptors to the data, the amount of semantic data is rapidly growing. The data mining community is faced with a paradigm shift: instead of mining the abundance of empirical data supported by the background knowledge, the new challenge is to mine the abundance of knowledge encoded in domain ontologies, constrained by the heuristics computed from the empirical data collection. We address this challenge by an approach, named semantic data mining, where domain ontologies define the hypothesis search space, and the data is used as means of constraining and guiding the process of hypothesis search and evaluation. The use of prototype semantic data mining systems SEGS and g-SEGS is demonstrated in a simple semantic data mining scenario and in two reallife functional genomics scenarios of mining biological ontologies with the support of experimental microarray data. --- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. 
Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. However, the semantic web ontology has special characteristics to apply traditional decision tree algorithm which is the most popular in classification data mining. To overcome these problems, we proposed Semantic Decision Tree Algorithm which can perform on the semantic web ontology. This algorithm will help to mine the covered knowledge in the semantic web data. --- paper_title: “Hello world”, web mining for e-learning paper_content: Abstract As the internet and mobile applications are getting an important role in our lives, usage of mobile services also took place in educational field since the internet is widespread, which is usually called by the terms “e-learning” or “distance learning”. A known issue on e-learning is all the content’s being online and less face-to-face communication than traditional learning; this brings the problem of chasing student’s success, and advising and managing student’s way of studying. Hence, a recent hot topic, data mining, can be applied on student’s data left on e-learning portals to guide the instructor and advisors to help students’ being more successful. Recent researches done on this topic showed that e-learning combined with data mining can decrease the gap between itself and traditional learning — referred as semantic web mining in general. --- paper_title: Handbook of Semantic Web Technologies paper_content: After years of mostly theoretical research, Semantic Web Technologies are now reaching out into application areas like bioinformatics, eCommerce, eGovernment, or Social Webs. Applications like genomic ontologies, semantic web services, automated catalogue alignment, ontology matching, or blogs and social networks are constantly increasing, often driven or at least backed up by companies like Google, Amazon, YouTube, Facebook, LinkedIn and others. The need to leverage the potential of combining information in a meaningful way in order to be able to benefit from the Web will create further demand for and interest in Semantic Web research. This movement, based on the growing maturity of related research results, necessitates a reliable reference source from which beginners to the field can draw a first basic knowledge of the main underlying technologies as well as state-of-the-art application areas. This handbook, put together by three leading authorities in the field, and supported by an advisory board of highly reputed researchers, fulfils exactly this need. It is the first dedicated reference work in this field, collecting contributions about both the technical foundations of the Semantic Web as well as their main usage in other scientific fields like life sciences, engineering, business, or education. --- paper_title: Secure and Intelligent Decision Making in Semantic Web Mining paper_content: With the huge amount of information available online the World Wide Web is a fertile area for data mining research. The Web has become a major vehicle in performing research and education related activities for researches and students. Web mining is the use of data mining technologies to automatically interact and discover information from web documents, which can be in structured, unstructured or semistructured form. 
We present an enterprise framework regarding semantic web mining in distance learning, which can be used to not only improve the quality of web mining results but also enhances the functions and services and the interoperability of long distance educational information systems and standards in the educational field. For on line distance education system we propose an Ontology-based approach to share online data and retrieve all relevant data about students and their courses. Thus semantic web ontology help build better web mining analysis in educational institute and web mining in-turns helps contract basis more powerful ontology in distance learning. Since the majority of the online data considered as private data we need various mechanism for privacy preservation and control over the online presence data. We propose privacy protection in semantic web mining using role back access control. --- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. However, the semantic web ontology has special characteristics to apply traditional decision tree algorithm which is the most popular in classification data mining. To overcome these problems, we proposed Semantic Decision Tree Algorithm which can perform on the semantic web ontology. This algorithm will help to mine the covered knowledge in the semantic web data. --- paper_title: Handbook of Semantic Web Technologies paper_content: After years of mostly theoretical research, Semantic Web Technologies are now reaching out into application areas like bioinformatics, eCommerce, eGovernment, or Social Webs. Applications like genomic ontologies, semantic web services, automated catalogue alignment, ontology matching, or blogs and social networks are constantly increasing, often driven or at least backed up by companies like Google, Amazon, YouTube, Facebook, LinkedIn and others. The need to leverage the potential of combining information in a meaningful way in order to be able to benefit from the Web will create further demand for and interest in Semantic Web research. This movement, based on the growing maturity of related research results, necessitates a reliable reference source from which beginners to the field can draw a first basic knowledge of the main underlying technologies as well as state-of-the-art application areas. This handbook, put together by three leading authorities in the field, and supported by an advisory board of highly reputed researchers, fulfils exactly this need. It is the first dedicated reference work in this field, collecting contributions about both the technical foundations of the Semantic Web as well as their main usage in other scientific fields like life sciences, engineering, business, or education. 
--- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. However, the semantic web ontology has special characteristics to apply traditional decision tree algorithm which is the most popular in classification data mining. To overcome these problems, we proposed Semantic Decision Tree Algorithm which can perform on the semantic web ontology. This algorithm will help to mine the covered knowledge in the semantic web data. --- paper_title: Finding association rules in semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach. --- paper_title: Secure and Intelligent Decision Making in Semantic Web Mining paper_content: With the huge amount of information available online the World Wide Web is a fertile area for data mining research. The Web has become a major vehicle in performing research and education related activities for researches and students. Web mining is the use of data mining technologies to automatically interact and discover information from web documents, which can be in structured, unstructured or semistructured form. We present an enterprise framework regarding semantic web mining in distance learning, which can be used to not only improve the quality of web mining results but also enhances the functions and services and the interoperability of long distance educational information systems and standards in the educational field. For on line distance education system we propose an Ontology-based approach to share online data and retrieve all relevant data about students and their courses. Thus semantic web ontology help build better web mining analysis in educational institute and web mining in-turns helps contract basis more powerful ontology in distance learning. Since the majority of the online data considered as private data we need various mechanism for privacy preservation and control over the online presence data. We propose privacy protection in semantic web mining using role back access control. 
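As an illustrative aside (not code from the cited work), the pipeline described in "Finding association rules in semantic web data", in which query-pattern results are turned into transactions that feed a classical frequent-itemset step, can be sketched roughly as follows; the biomedical-looking items and the minimum-support threshold are invented for illustration only.

    from collections import Counter
    from itertools import combinations

    # Pretend these rows came back from a query pattern over the instance data
    # (one row per individual of interest); here they are simply hard-coded.
    rows = [
        {"diagnosis:flu", "symptom:fever", "symptom:cough"},
        {"diagnosis:flu", "symptom:fever"},
        {"diagnosis:cold", "symptom:cough"},
        {"diagnosis:flu", "symptom:fever", "symptom:headache"},
    ]

    min_support = 2  # absolute support threshold, chosen arbitrarily for the toy data

    # Frequent single items.
    item_counts = Counter(item for row in rows for item in row)
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}

    # Frequent pairs built only from frequent items (the usual Apriori pruning idea).
    pair_counts = Counter()
    for row in rows:
        for a, b in combinations(sorted(row & frequent_items), 2):
            pair_counts[(a, b)] += 1
    frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}

    # Simple rules of the form A -> B with their confidence.
    for (a, b), c in frequent_pairs.items():
        print(f"{a} -> {b}  support={c}  confidence={c / item_counts[a]:.2f}")
        print(f"{b} -> {a}  support={c}  confidence={c / item_counts[b]:.2f}")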
--- paper_title: Research on semantic Web mining paper_content: A semantic-based Web mining is mentioned by many people in order to improve Web service levels and address the existing Web services which is supported by the lack of semantic problem. Semantic-based Web data mining is a combination of the semantic Web and Web mining. Web mining results help to build the semantic Web. The knowledge of Semantic Web makes Web mining easier to achieve, but also can improve the effectiveness of Web mining. This paper firstly introduces the related knowledge of Semantic Web and Web mining, and then discusses the semantic-based Web mining, finally proposes to build a semantic-based Web mining model under the framework of the Agent. --- paper_title: Mining association rules from semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly increasing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/S and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive just the appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of a query pattern. Initial experiments performed on real world semantic data enjoy promising results and show the usefulness of the approach. --- paper_title: Handbook of Semantic Web Technologies paper_content: After years of mostly theoretical research, Semantic Web Technologies are now reaching out into application areas like bioinformatics, eCommerce, eGovernment, or Social Webs. Applications like genomic ontologies, semantic web services, automated catalogue alignment, ontology matching, or blogs and social networks are constantly increasing, often driven or at least backed up by companies like Google, Amazon, YouTube, Facebook, LinkedIn and others. The need to leverage the potential of combining information in a meaningful way in order to be able to benefit from the Web will create further demand for and interest in Semantic Web research. This movement, based on the growing maturity of related research results, necessitates a reliable reference source from which beginners to the field can draw a first basic knowledge of the main underlying technologies as well as state-of-the-art application areas. This handbook, put together by three leading authorities in the field, and supported by an advisory board of highly reputed researchers, fulfils exactly this need. It is the first dedicated reference work in this field, collecting contributions about both the technical foundations of the Semantic Web as well as their main usage in other scientific fields like life sciences, engineering, business, or education. --- paper_title: Secure and Intelligent Decision Making in Semantic Web Mining paper_content: With the huge amount of information available online the World Wide Web is a fertile area for data mining research. The Web has become a major vehicle in performing research and education related activities for researches and students. Web mining is the use of data mining technologies to automatically interact and discover information from web documents, which can be in structured, unstructured or semistructured form. 
We present an enterprise framework regarding semantic web mining in distance learning, which can be used to not only improve the quality of web mining results but also enhances the functions and services and the interoperability of long distance educational information systems and standards in the educational field. For on line distance education system we propose an Ontology-based approach to share online data and retrieve all relevant data about students and their courses. Thus semantic web ontology help build better web mining analysis in educational institute and web mining in-turns helps contract basis more powerful ontology in distance learning. Since the majority of the online data considered as private data we need various mechanism for privacy preservation and control over the online presence data. We propose privacy protection in semantic web mining using role back access control. --- paper_title: Finding association rules in semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach. --- paper_title: “Hello world”, web mining for e-learning paper_content: Abstract As the internet and mobile applications are getting an important role in our lives, usage of mobile services also took place in educational field since the internet is widespread, which is usually called by the terms “e-learning” or “distance learning”. A known issue on e-learning is all the content’s being online and less face-to-face communication than traditional learning; this brings the problem of chasing student’s success, and advising and managing student’s way of studying. Hence, a recent hot topic, data mining, can be applied on student’s data left on e-learning portals to guide the instructor and advisors to help students’ being more successful. Recent researches done on this topic showed that e-learning combined with data mining can decrease the gap between itself and traditional learning — referred as semantic web mining in general. --- paper_title: Research on semantic Web mining paper_content: A semantic-based Web mining is mentioned by many people in order to improve Web service levels and address the existing Web services which is supported by the lack of semantic problem. Semantic-based Web data mining is a combination of the semantic Web and Web mining. Web mining results help to build the semantic Web. The knowledge of Semantic Web makes Web mining easier to achieve, but also can improve the effectiveness of Web mining. This paper firstly introduces the related knowledge of Semantic Web and Web mining, and then discusses the semantic-based Web mining, finally proposes to build a semantic-based Web mining model under the framework of the Agent. 
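As an illustrative aside (hypothetical log records, not data from the cited e-learning studies), the kind of student-activity mining that the e-learning abstracts above suggest for guiding instructors can start from very simple per-student aggregates; the event names and the naive at-risk rule below are assumptions made purely for this sketch.

    from collections import defaultdict

    # Hypothetical portal log: (student_id, event_type) pairs.
    events = [
        ("s1", "login"), ("s1", "view_lecture"), ("s1", "submit_quiz"),
        ("s2", "login"),
        ("s3", "login"), ("s3", "view_lecture"), ("s3", "view_lecture"),
        ("s3", "submit_quiz"), ("s3", "post_forum"),
    ]

    # Aggregate simple per-student activity counts.
    features = defaultdict(lambda: defaultdict(int))
    for student, event in events:
        features[student][event] += 1

    # A deliberately naive heuristic: students with no quiz submissions and little
    # lecture viewing are flagged for the instructor's attention.
    for student, counts in features.items():
        at_risk = counts["submit_quiz"] == 0 and counts["view_lecture"] < 2
        print(student, dict(counts), "AT RISK" if at_risk else "ok")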
--- paper_title: Mining association rules from semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly increasing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/S and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive just the appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of a query pattern. Initial experiments performed on real world semantic data enjoy promising results and show the usefulness of the approach. --- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. However, the semantic web ontology has special characteristics to apply traditional decision tree algorithm which is the most popular in classification data mining. To overcome these problems, we proposed Semantic Decision Tree Algorithm which can perform on the semantic web ontology. This algorithm will help to mine the covered knowledge in the semantic web data. --- paper_title: Finding association rules in semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach. --- paper_title: “Hello world”, web mining for e-learning paper_content: Abstract As the internet and mobile applications are getting an important role in our lives, usage of mobile services also took place in educational field since the internet is widespread, which is usually called by the terms “e-learning” or “distance learning”. A known issue on e-learning is all the content’s being online and less face-to-face communication than traditional learning; this brings the problem of chasing student’s success, and advising and managing student’s way of studying. Hence, a recent hot topic, data mining, can be applied on student’s data left on e-learning portals to guide the instructor and advisors to help students’ being more successful. 
Recent researches done on this topic showed that e-learning combined with data mining can decrease the gap between itself and traditional learning — referred as semantic web mining in general. --- paper_title: Research on semantic Web mining paper_content: A semantic-based Web mining is mentioned by many people in order to improve Web service levels and address the existing Web services which is supported by the lack of semantic problem. Semantic-based Web data mining is a combination of the semantic Web and Web mining. Web mining results help to build the semantic Web. The knowledge of Semantic Web makes Web mining easier to achieve, but also can improve the effectiveness of Web mining. This paper firstly introduces the related knowledge of Semantic Web and Web mining, and then discusses the semantic-based Web mining, finally proposes to build a semantic-based Web mining model under the framework of the Agent. --- paper_title: Using data mining techniques for exploring learning object repositories paper_content: Purpose – This paper aims to show the results obtained from the data mining techniques application to learning objects (LO) metadata.Design/methodology/approach – A general review of the literature was carried out. The authors gathered and pre‐processed the data, and then analyzed the results of data mining techniques applied upon the LO metadata.Findings – It is possible to extract new knowledge based on learning objects stored in repositories. For example it is possible to identify distinctive features and group learning objects according to them. Semantic relationships can also be found among the attributes that describe learning objects.Research limitations/implications – In the first section, four test repositories are included for case study. In the second section, the analysis is focused on the most complete repository from the pedagogical point of view.Originality/value – Many publications report results of analysis on repositories mainly focused on the number, evolution and growth of the learning... --- paper_title: Mining association rules from semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly increasing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/S and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive just the appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of a query pattern. Initial experiments performed on real world semantic data enjoy promising results and show the usefulness of the approach. --- paper_title: Context and target configurations for mining RDF data paper_content: Association rule mining has been widely studied in the context of basket analysis and sale recommendations [1]. In fact, the concept can be applied to any domain with many items or events in which interesting relationships can be inferred from co-occurrence of those items or events in existing subsets (transactions). The increasing amount of Linked Open Data (LOD) in the World Wide Web raises new opportunities and challenges for the data mining community [5]. LOD is often represented in the Resource Description Framework (RDF) data model. 
In RDF, data is represented by a triple structure consisting of subject, predicate, and object (SPO). Each triple represents a statement/fact. We propose an approach that applies association rule mining at statement level by introducing the concept of mining configurations. --- paper_title: Extracting significant Website Key Objects: A Semantic Web mining approach paper_content: Web mining has been traditionally used in different application domains in order to enhance the content that Web users are accessing. Likewise, Website administrators are interested in finding new approaches to improve their Website content according to their users' preferences. Furthermore, the Semantic Web has been considered as an alternative to represent Web content in a way which can be used by intelligent techniques to provide the organization, meaning, and definition of Web content. In this work, we define the Website Key Object Extraction problem, whose solution is based on a Semantic Web mining approach to extract from a given Website core ontology, new relations between objects according to their Web user interests. This methodology was applied to a real Website, whose results showed that the automatic extraction of Key Objects is highly competitive against traditional surveys applied to Web users. --- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. However, the semantic web ontology has special characteristics to apply traditional decision tree algorithm which is the most popular in classification data mining. To overcome these problems, we proposed Semantic Decision Tree Algorithm which can perform on the semantic web ontology. This algorithm will help to mine the covered knowledge in the semantic web data. --- paper_title: Finding association rules in semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly growing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/(S) and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of query patterns. Initial experiments performed on semantic data of a biomedical application show the usefulness and efficiency of the approach. --- paper_title: Context and target configurations for mining RDF data paper_content: Association rule mining has been widely studied in the context of basket analysis and sale recommendations [1]. 
In fact, the concept can be applied to any domain with many items or events in which interesting relationships can be inferred from co-occurrence of those items or events in existing subsets (transactions). The increasing amount of Linked Open Data (LOD) in the World Wide Web raises new opportunities and challenges for the data mining community [5]. LOD is often represented in the Resource Description Framework (RDF) data model. In RDF, data is represented by a triple structure consisting of subject, predicate, and object (SPO). Each triple represents a statement/fact. We propose an approach that applies association rule mining at statement level by introducing the concept of mining configurations. --- paper_title: Secure and Intelligent Decision Making in Semantic Web Mining paper_content: With the huge amount of information available online the World Wide Web is a fertile area for data mining research. The Web has become a major vehicle in performing research and education related activities for researches and students. Web mining is the use of data mining technologies to automatically interact and discover information from web documents, which can be in structured, unstructured or semistructured form. We present an enterprise framework regarding semantic web mining in distance learning, which can be used to not only improve the quality of web mining results but also enhances the functions and services and the interoperability of long distance educational information systems and standards in the educational field. For on line distance education system we propose an Ontology-based approach to share online data and retrieve all relevant data about students and their courses. Thus semantic web ontology help build better web mining analysis in educational institute and web mining in-turns helps contract basis more powerful ontology in distance learning. Since the majority of the online data considered as private data we need various mechanism for privacy preservation and control over the online presence data. We propose privacy protection in semantic web mining using role back access control. --- paper_title: Using Ontologies in Semantic Data Mining with SEGS and g-SEGS paper_content: With the expanding of the SemanticWeb and the availability of numerous ontologies which provide domain background knowledge and semantic descriptors to the data, the amount of semantic data is rapidly growing. The data mining community is faced with a paradigm shift: instead of mining the abundance of empirical data supported by the background knowledge, the new challenge is to mine the abundance of knowledge encoded in domain ontologies, constrained by the heuristics computed from the empirical data collection. We address this challenge by an approach, named semantic data mining, where domain ontologies define the hypothesis search space, and the data is used as means of constraining and guiding the process of hypothesis search and evaluation. The use of prototype semantic data mining systems SEGS and g-SEGS is demonstrated in a simple semantic data mining scenario and in two reallife functional genomics scenarios of mining biological ontologies with the support of experimental microarray data. 
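As an illustrative aside (invented triples, not the cited implementation), the statement-level mining-configuration idea summarised above, in which RDF triples are grouped by a context element such as the subject and the remaining elements become transaction items, can be sketched as follows; the resulting transactions could then be fed to a frequent-itemset step like the one sketched earlier.

    from collections import defaultdict

    # Invented RDF-style triples: (subject, predicate, object).
    triples = [
        ("ex:alice", "rdf:type", "ex:Researcher"),
        ("ex:alice", "ex:worksAt", "ex:UniA"),
        ("ex:alice", "ex:topic", "ex:SemanticWeb"),
        ("ex:bob", "rdf:type", "ex:Researcher"),
        ("ex:bob", "ex:worksAt", "ex:UniB"),
        ("ex:bob", "ex:topic", "ex:SemanticWeb"),
    ]

    # One possible configuration: context = subject, items = (predicate, object) pairs,
    # so each subject yields one transaction describing that resource.
    transactions = defaultdict(set)
    for s, p, o in triples:
        transactions[s].add((p, o))

    for subject, items in transactions.items():
        print(subject, sorted(items))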
--- paper_title: “Hello world”, web mining for e-learning paper_content: Abstract As the internet and mobile applications are getting an important role in our lives, usage of mobile services also took place in educational field since the internet is widespread, which is usually called by the terms “e-learning” or “distance learning”. A known issue on e-learning is all the content’s being online and less face-to-face communication than traditional learning; this brings the problem of chasing student’s success, and advising and managing student’s way of studying. Hence, a recent hot topic, data mining, can be applied on student’s data left on e-learning portals to guide the instructor and advisors to help students’ being more successful. Recent researches done on this topic showed that e-learning combined with data mining can decrease the gap between itself and traditional learning — referred as semantic web mining in general. --- paper_title: Mining association rules from semantic web data paper_content: The amount of ontologies and semantic annotations available on the Web is constantly increasing. This new type of complex and heterogeneous graph-structured data raises new challenges for the data mining community. In this paper, we present a novel method for mining association rules from semantic instance data repositories expressed in RDF/S and OWL. We take advantage of the schema-level (i.e. Tbox) knowledge encoded in the ontology to derive just the appropriate transactions which will later feed traditional association rules algorithms. This process is guided by the analyst requirements, expressed in the form of a query pattern. Initial experiments performed on real world semantic data enjoy promising results and show the usefulness of the approach. --- paper_title: Extracting significant Website Key Objects: A Semantic Web mining approach paper_content: Web mining has been traditionally used in different application domains in order to enhance the content that Web users are accessing. Likewise, Website administrators are interested in finding new approaches to improve their Website content according to their users' preferences. Furthermore, the Semantic Web has been considered as an alternative to represent Web content in a way which can be used by intelligent techniques to provide the organization, meaning, and definition of Web content. In this work, we define the Website Key Object Extraction problem, whose solution is based on a Semantic Web mining approach to extract from a given Website core ontology, new relations between objects according to their Web user interests. This methodology was applied to a real Website, whose results showed that the automatic extraction of Key Objects is highly competitive against traditional surveys applied to Web users. --- paper_title: Development of semantic decision tree paper_content: The Semantic Web is an evolving development of the World Wide Web in which the meaning of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. RDF and OWL is published by W3C for the standard of Semantic Web languages which can define relationships and structures between Web resources. Since the start of Semantic Web, the semantic web data is incrementally increased by Portal, Search Engines and Linking Open Data project. Therefore, the necessity of mining useful knowledge from huge size ontology is highly expected. 
However, the semantic web ontology has special characteristics that make it difficult to apply the traditional decision tree algorithm, which is the most popular algorithm in classification data mining. To overcome these problems, we proposed the Semantic Decision Tree Algorithm, which can operate directly on the semantic web ontology. This algorithm will help to mine the knowledge hidden in the semantic web data. ---
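As an illustrative aside (this is not the cited Semantic Decision Tree algorithm), one simple workaround for applying a standard decision tree learner to ontology data is to encode each resource's ontology class memberships as boolean features; the tiny dataset, the labels, and the use of scikit-learn's DecisionTreeClassifier are assumptions made only for this sketch.

    from sklearn.tree import DecisionTreeClassifier

    # Invented resources described by membership in three ontology classes.
    feature_names = ["isPerson", "isOrganization", "hasPublication"]
    X = [
        [1, 0, 1],  # a person with publications
        [1, 0, 0],  # a person without publications
        [0, 1, 0],  # an organization
        [1, 0, 1],
    ]
    y = ["Researcher", "Student", "Company", "Researcher"]  # invented target labels

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X, y)
    print(clf.predict([[1, 0, 1]]))  # -> ['Researcher'] on this toy data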
Title: A State-of-the-Art Survey on Semantic Web Mining Section 1: Introduction Description 1: Introduce the concept of Semantic Web Mining by integrating Semantic Web and Data Mining, emphasizing the growing importance and potential applications of this field. Section 2: Semantic Web Description 2: Provide an overview of the Semantic Web, including its definition, reasons for development, representation techniques, and examples from the commercial domain. Section 3: Data Mining Description 3: Introduce the concept of data mining, describe popular data mining techniques, and provide a brief overview of Web Mining and its types. Section 4: Semantic Web Mining Description 4: Offer a detailed introduction to Semantic Web Mining, discuss challenges faced, propose solutions, and present case studies supporting its usefulness across various domains. Section 5: Conclusion Description 5: Summarize the survey findings, highlighting the synergy between Web Mining and the Semantic Web, and conclude with insights into the future potential of Semantic Web Mining. Section 6: Acknowledgements Description 6: Acknowledge the support and contributions of individuals and organizations that facilitated the research.
Social Engineering Attacks: A Survey
9
--- paper_title: Social engineering and digital technologies for the security of the social capital' development paper_content: Social capital is transformed due to the digitalized environment built with the ICT expansion. The evolution of the social capital' forming and development reflects the new forms and models of communication, the extended possibilities of choice and of presence into the Web as a unique space of exchange. This article presents the theoretical approach to the analysis of behavior' regulation within the framework of social technologies implemented to the digital space. Social engineering represents an important factor of the violation of information security' rules; the IT-specialists attribute the majority of the crime in the sector to the insiders of private business companies and to the imprudence of individuals. The social capital can play both positive and negative roles in assuring information security; the social engineering can be in contradiction with the tendencies of the social capital development, but the social technologies used are based on the social capital development. --- paper_title: Security Evaluation of Wireless Network Access Points paper_content: The paper focuses on the real-world usage of IEEE 802.11 wireless network encryption and Wi-Fi Protected Setup (WPS) function. A brief history on the development of encryption methods and WPS is given. Wireless scanning of 802.11 networks in a capital city has been performed, and the results of it have been analysed. To ascertain the knowledge about the security of wireless networks of the average user, an online survey has been conducted. To test the security of encryption methods and WPS function, practical attacks against private test wireless networks have been made. The authors conclude that the safest way to set up an 802.11 network with a pre-shared key is to use Wi-Fi Protected Access 2 (WPA2) encryption without support for the WPS function. Statistics in Riga show that networks are often configured otherwise and thus vulnerable to attacks. Survey results prove that respondents are not well informed regarding the security of wireless networks. --- paper_title: Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble? paper_content: Issues such as information security and overtrust in robots are gaining increasing relevance. This research aims at giving an insight into how trust toward robots could be exploited for the purpose of social engineering. Drawing on Mitnick's model, a well-known social engineering framework, an interactive scenario with the humanoid robot iCub was designed to emulate a social engineering attack. At first, iCub attempted to collect the kind of personal information usually gathered by social engineers by asking a series of private questions. Then, the robot tried to develop trust and rapport with participants by offering reliable clues during a treasure hunt game. At the end of the treasure hunt, the robot tried to exploit the gained trust in order to make participants gamble the money they won. The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformation to its suggestions and gambling.
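As an illustrative aside (hypothetical scan records, not data from the cited Riga study), the recommendation in the "Security Evaluation of Wireless Network Access Points" abstract above, namely to prefer WPA2 with a pre-shared key and to disable WPS, can be turned into a trivial audit check over scan metadata; the record format is assumed.

    # Hypothetical scan records: (ssid, encryption, wps_enabled).
    scan = [
        ("HomeNet", "WPA2", False),
        ("CoffeeShop", "WEP", False),
        ("Office", "WPA2", True),
        ("OpenSpot", "OPEN", False),
    ]

    def audit(encryption, wps_enabled):
        """Flag configurations that the surveyed study considers weaker choices."""
        issues = []
        if encryption != "WPA2":
            issues.append("not using WPA2")
        if wps_enabled:
            issues.append("WPS enabled")
        return issues or ["looks fine"]

    for ssid, enc, wps in scan:
        print(ssid, "->", ", ".join(audit(enc, wps)))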
--- paper_title: CANDY: A Social Engineering Attack to Leak Information from Infotainment System paper_content: The introduction of Information and Communications Technologies (ICT) systems into vehicles make them more prone to cyber-security attacks that may impact of vehicles capability and, consequently, on the safety of drivers, passengers. In this paper, we focus on how to exploit security vulnerabilities affecting user-to-vehicle and intra- vehicle communications to hack the infotainment system to retrieve information about both vehicle and driver. Indeed, we designed and developed CANDY, a set of malicious APP injecting in a genuine Android APP, acting as a Trojan-horse on the Android In-Vehicle infotainment system. It opens a back-door that allows an attacker to remotely access to the infotainment system. We use this back-door to hit the privacy of the driver by recording her voice and collect information circulating on the CAN bus about the vehicle. CANDY is distributed by using social engineering techniques. --- paper_title: Social engineering as an attack vector for ransomware paper_content: Constant ransomware attacks achieve to evade multiple security perimeters, because its strategy is based on the exploitation of users-the weakest link in the security chain-getting high levels of effectiveness. In this paper, an in-depth analysis of the anatomy of the ransomware is develop, and how they combine technology with manipulation, through social engineering, to meet their goals and compromise thousands of computers. Finally, simulations of the phases that it follows are carried out, leaving the necessary guidelines to evaluate the security of the information, and thus, generate policies that minimize the risks of loss of information. --- paper_title: Social Engineering Toolkit — A systematic approach to social engineering paper_content: Social engineering techniques, exploiting humans as information systems' security weakest link, are mostly the initial attack vectors within larger intrusions and information system compromises. In order to practically evaluate the risks of information leakage trough the target organizations' employees, when performing a penetration test, an ethical hacker must consider social engineering as a very important aspect of the performed test. Social Engineering Toolkit (SET) is an integrated set of tools designed specifically to perform advanced attacks against the human element, and is the most advanced, if not the only toolkit of such kind that is publicly available as open source software. Incorporating many social engineering attack vectors, it heavily depends on Metasploit, an integrated penetration testing framework. This paper gives a brief introduction to the Social Engineering Toolkit software architecture, and provides an overview of supported attack vectors. --- paper_title: Social engineering and digital technologies for the security of the social capital' development paper_content: Social capital is transformed due to the digitalized environment built with the ICT expansion. The evolution of the social capital' forming and development reflects the new forms and models of communication, the extended possibilities of choice and of presence into the Web as a unique space of exchange. This article presents the theoretical approach to the analysis of behavior' regulation within the framework of social technologies implemented to the digital space. 
Social engineering represents an important factor of the violation of information security' rules, the IT-specialists attribute the majority of the crime in the sector to the insiders of private business companies and to the imprudence of individuals. The social capital can play the both positive and negative roles to assure information security issues, the social engineering can be in the contradiction with the tendencies of the social capital development, but the social technologies used are based on the social capital development. --- paper_title: Security Analytics: Big Data Analytics for cybersecurity: A review of trends, techniques and tools paper_content: The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future. --- paper_title: Security Evaluation of Wireless Network Access Points paper_content: Abstract The paper focuses on the real-world usage of IEEE 802.11 wireless network encryption and Wi-Fi Protected Setup (WPS) function. A brief history on the development of encryption methods and WPS is given. Wireless scanning of 802.11 networks in a capital city has been performed, and the results of it have been analysed. To ascertain the knowledge about the security of wireless networks of the average user, an online survey has been conducted. To test the security of encryption methods and WPS function, practical attacks against private test wireless networks have been made. The authors conclude that the safest way to set up 802.11 network with a pre-shared key is to use Wi-Fi Protected Access 2 (WPA2) encryption without support for WPS function. Statistics in Riga shows that networks are often configured otherwise and thus vulnerable to attacks. Survey results prove that respondents are not well informed regarding the security of wireless networks. --- paper_title: On tackling social engineering web phishing attacks utilizing software defined networks (SDN) approach paper_content: Web phishing attacks are one of the challenging security threats. Phishing depends on humans’ behavior but not protocols and devices vulnerabilities. In this work, software defined networking (SDN) will be tailored to tackle phishing attacks. In SDN, network devices forward received packets to a central point ‘controller’ that makes decision on behalf of them. This approach allows more control and management over network devices and protocol. In this work, we propose a neural network based phishing prevention algorithm (PPA) that is implemented utilizing Ryu, an open source, SDN controller. 
The PPA algorithm has been tested in a home network that is constructed with HP2920-24G switch. Moreover, a phished version of Facebook, Yahoo and Hotmail login pages have been written and hosted on three different free hosting domains. PPA has detected all of the phished versions and allowed the access to real version of these services. --- paper_title: Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble? paper_content: Robots such as information security and overtrust in them are gaining increasing relevance. This research aims at giving an insight into how trust toward robots could be exploited for the purpose of social engineering. Drawing on Mitnick's model, a well-known social engineering framework, an interactive scenario with the humanoid robot iCub was designed to emulate a social engineering attack. At first, iCub attempted to collect the kind of personal information usually gathered by social engineers by asking a series of private questions. Then, the robot tried to develop trust and rapport with participants by offering reliable clues during a treasure hunt game. At the end of the treasure hunt, the robot tried to exploit the gained trust in order to make participants gamble the money they won. The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformation to its suggestions and gambling. --- paper_title: A Serious Game for Eliciting Social Engineering Security Requirements paper_content: Social engineering is the acquisition of informationabout computer systems by methods that deeply include non-technical means. While technical security of most critical systemsis high, the systems remain vulnerable to attacks from socialengineers. Social engineering is a technique that: (i) does notrequire any (advanced) technical tools, (ii) can be used by anyone,(iii) is cheap. Traditional security requirements elicitation approaches oftenfocus on vulnerabilities in network or software systems. Fewapproaches even consider the exploitation of humans via socialengineering and none of them elicits personal behaviours of indi-vidual employees. While the amount of social engineering attacksand the damage they cause rise every year, the security awarenessof these attacks and their consideration during requirementselicitation remains negligible. We propose to use a card game to elicit these requirements, which all employees of a company can play to understand thethreat and document security requirements. The game considersthe individual context of a company and presents underlyingprinciples of human behaviour that social engineers exploit, aswell as concrete attack patterns. We evaluated our approachwith several groups of researchers, IT administrators, andprofessionals from industry. --- paper_title: Social Engineering and Insider Threats paper_content: This paper describes our research on the insider threats of Social engineering. Social engineering is a method using interaction between humans to get the access of a system in an illegal way. Due to staff's lack of confidentiality, the confidentiality of records is compromised, data is stolen or financial damage is done. This is insider threat. Social engineering and insider threat are two of the most relevant subjects in cyber security today. 
This research summarizes and seeks solution for the drawback of Social engineering through analyzing the Insider Threat cases. The first stage is to introduce the importance of using social engineering to reduce internet crime by analyzing the past loss created by insider threats. The second test illustrates insider threats' hazards to network security are ongoing. The third part covers the situation of insider threats with the emphasis on the security side. The topic of security aspect is extended to the rest of internal control of system, data exchange, and management of employees and their communication content. Actually, by the time of this abstract, insider threats are still not being taken as seriously as it should be. Many companies and organizations have given little thought to the insider threat but have concentrated on keeping attackers outside the network. This research will directly focus on the insider threats of organizations and the ways hackers use social engineering with the latest analysis of technology involved and examples that are close to common cybercrime. We aim to reveal the importance of reducing insider threats in organizations. The further research will be focused on a group consisted of managers and engineers within a company and the communication means of staff to the outside world. The analysis of the related crime cases will help prevent similar tragedy and seek possible approaches. --- paper_title: Social engineering attack examples, templates and scenarios paper_content: The field of information security is a fast-growing discipline. Even though the effectiveness of security measures to protect sensitive information is increasing, people remain susceptible to manipulation and thus the human element remains a weak link. A social engineering attack targets this weakness by using various manipulation techniques to elicit sensitive information. The field of social engineering is still in its early stages with regard to formal definitions, attack frameworks and templates of attacks. This paper proposes detailed social engineering attack templates that are derived from real-world social engineering examples. Current documented examples of social engineering attacks do not include all the attack steps and phases. The proposed social engineering attack templates attempt to alleviate the problem of limited documented literature on social engineering attacks by mapping the real-world examples to the social engineering attack framework. Mapping several similar real-world examples to the social engineering attack framework allows one to establish a detailed flow of the attack whilst abstracting subjects and objects. This mapping is then utilised to propose the generalised social engineering attack templates that are representative of real-world examples, whilst still being general enough to encompass several different real-world examples. The proposed social engineering attack templates cover all three types of communication, namely bidirectional communication, unidirectional communication and indirect communication. In order to perform comparative studies of different social engineering models, processes and frameworks, it is necessary to have a formalised set of social engineering attack scenarios that are fully detailed in every phase and step of the process. 
The social engineering attack templates are converted to social engineering attack scenarios by populating the template with both subjects and objects from real-world examples whilst still maintaining the detailed flow of the attack as provided in the template. Furthermore, this paper illustrates how the social engineering attack scenarios are applied to verify a social engineering attack detection model. These templates and scenarios can be used by other researchers to either expand on, use for comparative measures, create additional examples or evaluate models for completeness. Additionally, the proposed social engineering attack templates can also be used to develop social engineering awareness material. --- paper_title: CANDY: A Social Engineering Attack to Leak Information from Infotainment System paper_content: The introduction of Information and Communications Technologies (ICT) systems into vehicles makes them more prone to cyber-security attacks that may impact vehicle capability and, consequently, the safety of drivers and passengers. In this paper, we focus on how to exploit security vulnerabilities affecting user-to-vehicle and intra-vehicle communications to hack the infotainment system and retrieve information about both vehicle and driver. Indeed, we designed and developed CANDY, a malicious app injected into a genuine Android app, acting as a Trojan horse on the Android In-Vehicle infotainment system. It opens a back-door that allows an attacker to remotely access the infotainment system. We use this back-door to compromise the driver's privacy by recording her voice and to collect information about the vehicle circulating on the CAN bus. CANDY is distributed by using social engineering techniques. --- paper_title: Reverse TCP and Social Engineering Attacks in the Era of Big Data paper_content: TCP is a connection-oriented protocol used for the transport of information across the Internet. The very nature of this powerful medium attracts cyber criminals who are continuously searching for new vulnerabilities within the TCP protocol to exploit for nefarious ends. Reverse TCP attacks are a relatively new approach to exploiting this connection process. The attacker is able to seize remote access to the victim end user's network. Success in this attack largely depends on skillful social engineering techniques to target specific end users in order to open the connection. In the era of Big Data, social engineering attacks are expected to become even more feasible. This paper examines various methods that adversaries may use to implement their attacks. Our work implements a reverse TCP attack in a virtualized environment, detailing the process used to gain unauthorized access to the victim's machine. This paper also discusses the key threats that the reverse TCP attack may pose to end users and provides a testbed to determine how effectively computer systems are able to defend against this attack. --- paper_title: Social engineering as an attack vector for ransomware paper_content: Ransomware attacks consistently manage to evade multiple security perimeters because their strategy is based on exploiting users, the weakest link in the security chain, and thereby achieve high levels of effectiveness. In this paper, an in-depth analysis is developed of the anatomy of ransomware and of how it combines technology with manipulation, through social engineering, to meet its goals and compromise thousands of computers.
Finally, simulations of the phases that it follows are carried out, leaving the necessary guidelines to evaluate the security of the information, and thus, generate policies that minimize the risks of loss of information. --- paper_title: Social Engineering Toolkit — A systematic approach to social engineering paper_content: Social engineering techniques, exploiting humans as information systems' security weakest link, are mostly the initial attack vectors within larger intrusions and information system compromises. In order to practically evaluate the risks of information leakage trough the target organizations' employees, when performing a penetration test, an ethical hacker must consider social engineering as a very important aspect of the performed test. Social Engineering Toolkit (SET) is an integrated set of tools designed specifically to perform advanced attacks against the human element, and is the most advanced, if not the only toolkit of such kind that is publicly available as open source software. Incorporating many social engineering attack vectors, it heavily depends on Metasploit, an integrated penetration testing framework. This paper gives a brief introduction to the Social Engineering Toolkit software architecture, and provides an overview of supported attack vectors. --- paper_title: Social engineering and digital technologies for the security of the social capital' development paper_content: Social capital is transformed due to the digitalized environment built with the ICT expansion. The evolution of the social capital' forming and development reflects the new forms and models of communication, the extended possibilities of choice and of presence into the Web as a unique space of exchange. This article presents the theoretical approach to the analysis of behavior' regulation within the framework of social technologies implemented to the digital space. Social engineering represents an important factor of the violation of information security' rules, the IT-specialists attribute the majority of the crime in the sector to the insiders of private business companies and to the imprudence of individuals. The social capital can play the both positive and negative roles to assure information security issues, the social engineering can be in the contradiction with the tendencies of the social capital development, but the social technologies used are based on the social capital development. --- paper_title: Security Analytics: Big Data Analytics for cybersecurity: A review of trends, techniques and tools paper_content: The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. 
It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future. --- paper_title: Security Evaluation of Wireless Network Access Points paper_content: Abstract The paper focuses on the real-world usage of IEEE 802.11 wireless network encryption and Wi-Fi Protected Setup (WPS) function. A brief history on the development of encryption methods and WPS is given. Wireless scanning of 802.11 networks in a capital city has been performed, and the results of it have been analysed. To ascertain the knowledge about the security of wireless networks of the average user, an online survey has been conducted. To test the security of encryption methods and WPS function, practical attacks against private test wireless networks have been made. The authors conclude that the safest way to set up 802.11 network with a pre-shared key is to use Wi-Fi Protected Access 2 (WPA2) encryption without support for WPS function. Statistics in Riga shows that networks are often configured otherwise and thus vulnerable to attacks. Survey results prove that respondents are not well informed regarding the security of wireless networks. --- paper_title: On tackling social engineering web phishing attacks utilizing software defined networks (SDN) approach paper_content: Web phishing attacks are one of the challenging security threats. Phishing depends on humans’ behavior but not protocols and devices vulnerabilities. In this work, software defined networking (SDN) will be tailored to tackle phishing attacks. In SDN, network devices forward received packets to a central point ‘controller’ that makes decision on behalf of them. This approach allows more control and management over network devices and protocol. In this work, we propose a neural network based phishing prevention algorithm (PPA) that is implemented utilizing Ryu, an open source, SDN controller. The PPA algorithm has been tested in a home network that is constructed with HP2920-24G switch. Moreover, a phished version of Facebook, Yahoo and Hotmail login pages have been written and hosted on three different free hosting domains. PPA has detected all of the phished versions and allowed the access to real version of these services. --- paper_title: Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble? paper_content: Robots such as information security and overtrust in them are gaining increasing relevance. This research aims at giving an insight into how trust toward robots could be exploited for the purpose of social engineering. Drawing on Mitnick's model, a well-known social engineering framework, an interactive scenario with the humanoid robot iCub was designed to emulate a social engineering attack. At first, iCub attempted to collect the kind of personal information usually gathered by social engineers by asking a series of private questions. Then, the robot tried to develop trust and rapport with participants by offering reliable clues during a treasure hunt game. At the end of the treasure hunt, the robot tried to exploit the gained trust in order to make participants gamble the money they won. The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformation to its suggestions and gambling. 
--- paper_title: A Serious Game for Eliciting Social Engineering Security Requirements paper_content: Social engineering is the acquisition of informationabout computer systems by methods that deeply include non-technical means. While technical security of most critical systemsis high, the systems remain vulnerable to attacks from socialengineers. Social engineering is a technique that: (i) does notrequire any (advanced) technical tools, (ii) can be used by anyone,(iii) is cheap. Traditional security requirements elicitation approaches oftenfocus on vulnerabilities in network or software systems. Fewapproaches even consider the exploitation of humans via socialengineering and none of them elicits personal behaviours of indi-vidual employees. While the amount of social engineering attacksand the damage they cause rise every year, the security awarenessof these attacks and their consideration during requirementselicitation remains negligible. We propose to use a card game to elicit these requirements, which all employees of a company can play to understand thethreat and document security requirements. The game considersthe individual context of a company and presents underlyingprinciples of human behaviour that social engineers exploit, aswell as concrete attack patterns. We evaluated our approachwith several groups of researchers, IT administrators, andprofessionals from industry. --- paper_title: Social Engineering and Insider Threats paper_content: This paper describes our research on the insider threats of Social engineering. Social engineering is a method using interaction between humans to get the access of a system in an illegal way. Due to staff's lack of confidentiality, the confidentiality of records is compromised, data is stolen or financial damage is done. This is insider threat. Social engineering and insider threat are two of the most relevant subjects in cyber security today. This research summarizes and seeks solution for the drawback of Social engineering through analyzing the Insider Threat cases. The first stage is to introduce the importance of using social engineering to reduce internet crime by analyzing the past loss created by insider threats. The second test illustrates insider threats' hazards to network security are ongoing. The third part covers the situation of insider threats with the emphasis on the security side. The topic of security aspect is extended to the rest of internal control of system, data exchange, and management of employees and their communication content. Actually, by the time of this abstract, insider threats are still not being taken as seriously as it should be. Many companies and organizations have given little thought to the insider threat but have concentrated on keeping attackers outside the network. This research will directly focus on the insider threats of organizations and the ways hackers use social engineering with the latest analysis of technology involved and examples that are close to common cybercrime. We aim to reveal the importance of reducing insider threats in organizations. The further research will be focused on a group consisted of managers and engineers within a company and the communication means of staff to the outside world. The analysis of the related crime cases will help prevent similar tragedy and seek possible approaches. --- paper_title: Social engineering attack examples, templates and scenarios paper_content: The field of information security is a fast-growing discipline. 
Even though the effectiveness of security measures to protect sensitive information is increasing, people remain susceptible to manipulation and thus the human element remains a weak link. A social engineering attack targets this weakness by using various manipulation techniques to elicit sensitive information. The field of social engineering is still in its early stages with regard to formal definitions, attack frameworks and templates of attacks. This paper proposes detailed social engineering attack templates that are derived from real-world social engineering examples. Current documented examples of social engineering attacks do not include all the attack steps and phases. The proposed social engineering attack templates attempt to alleviate the problem of limited documented literature on social engineering attacks by mapping the real-world examples to the social engineering attack framework. Mapping several similar real-world examples to the social engineering attack framework allows one to establish a detailed flow of the attack whilst abstracting subjects and objects. This mapping is then utilised to propose the generalised social engineering attack templates that are representative of real-world examples, whilst still being general enough to encompass several different real-world examples. The proposed social engineering attack templates cover all three types of communication, namely bidirectional communication, unidirectional communication and indirect communication. In order to perform comparative studies of different social engineering models, processes and frameworks, it is necessary to have a formalised set of social engineering attack scenarios that are fully detailed in every phase and step of the process. The social engineering attack templates are converted to social engineering attack scenarios by populating the template with both subjects and objects from real-world examples whilst still maintaining the detailed flow of the attack as provided in the template. Furthermore, this paper illustrates how the social engineering attack scenarios are applied to verify a social engineering attack detection model. These templates and scenarios can be used by other researchers to either expand on, use for comparative measures, create additional examples or evaluate models for completeness. Additionally, the proposed social engineering attack templates can also be used to develop social engineering awareness material. --- paper_title: CANDY: A Social Engineering Attack to Leak Information from Infotainment System paper_content: The introduction of Information and Communications Technologies (ICT) systems into vehicles make them more prone to cyber-security attacks that may impact of vehicles capability and, consequently, on the safety of drivers, passengers. In this paper, we focus on how to exploit security vulnerabilities affecting user-to-vehicle and intra- vehicle communications to hack the infotainment system to retrieve information about both vehicle and driver. Indeed, we designed and developed CANDY, a set of malicious APP injecting in a genuine Android APP, acting as a Trojan-horse on the Android In-Vehicle infotainment system. It opens a back-door that allows an attacker to remotely access to the infotainment system. We use this back-door to hit the privacy of the driver by recording her voice and collect information circulating on the CAN bus about the vehicle. CANDY is distributed by using social engineering techniques. 
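A minimal data-structure sketch of the template-to-scenario conversion described above is given below; the class and field names are illustrative assumptions rather than the notation of the cited framework:

```python
# Illustrative data structures for social engineering attack templates and
# scenarios; an abstract template is instantiated into a concrete scenario
# by filling in a subject (attacker) and object (target).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackTemplate:
    name: str
    communication: str            # "bidirectional", "unidirectional" or "indirect"
    phases: List[str]             # ordered attack steps with abstract roles

    def instantiate(self, subject: str, target: str) -> "AttackScenario":
        """Populate the abstract template with a concrete subject and object."""
        steps = [p.format(subject=subject, target=target) for p in self.phases]
        return AttackScenario(template=self.name, steps=steps)

@dataclass
class AttackScenario:
    template: str
    steps: List[str] = field(default_factory=list)

if __name__ == "__main__":
    pretexting = AttackTemplate(
        name="pretext phone call",
        communication="bidirectional",
        phases=["{subject} researches {target}'s helpdesk procedures",
                "{subject} calls {target} posing as IT support",
                "{subject} elicits credentials from {target}"])
    scenario = pretexting.instantiate(subject="Attacker A", target="Employee B")
    for step in scenario.steps:
        print(step)
```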
--- paper_title: Reverse TCP and Social Engineering Attacks in the Era of Big Data paper_content: TCP is a connection-oriented protocol used for the transport of information across the Internet. The very nature of this powerful medium attracts cyber criminals who are continuously searching for new vulnerabilities within the TCP protocol to exploit for nefarious ends. Reverse TCP attacks are a relatively new approach to exploiting this connection process. The attacker is able to seize remote access to the victim end user's network. Success in this attack largely depends on skillful social engineering techniques to target specific end users in order to open the connection. In the era of Big Data, social engineering attacks are expected to become even more feasible. This paper examines various methods that adversaries may use to implement their attacks. Our work implements a reverse TCP attack in a virtualized environment, detailing the process used to gain unauthorized access to the victim's machine. This paper also discusses the key threats that the reverse TCP attack may pose to end users and provides a testbed to determine how effectively computer systems are able to defend against this attack. --- paper_title: Security and privacy challenges in smart cities paper_content: The construction of smart cities will bring a higher quality of life to the masses through digital interconnectivity, leading to increased efficiency and accessibility in cities. Smart cities must ensure individual privacy and security so that their citizens will participate. If citizens are reluctant to participate, the core advantages of a smart city will dissolve. This article identifies and offers possible solutions to five smart city challenges, in the hope of anticipating destabilizing and costly disruptions. The challenges include privacy preservation with high-dimensional data, securing a network with a large attack surface, establishing trustworthy data sharing practices, properly utilizing artificial intelligence, and mitigating failures cascading through the smart network. Finally, further research directions are provided to encourage further exploration of smart city challenges before their construction. --- paper_title: A literature survey on social engineering attacks: Phishing attack paper_content: Phishing is a network-based attack in which the attacker creates a fake copy of an existing webpage to fool an online user into revealing personal information. The prime objective of this review is to survey the literature on the phishing social engineering attack and on techniques to detect it. Phishing is the combination of social engineering and technical methods used to convince the user to reveal their personal data. The paper discusses the phishing social engineering attack theoretically, together with the issues it creates for people's lives. Phishing is typically carried out by email spoofing or instant messaging. It targets users who have no knowledge of social engineering attacks or internet security, such as people who do not protect the privacy of their account details for Facebook, Gmail, bank and other financial accounts. The paper discusses various types of phishing attacks, such as tab-napping, spoofed emails, Trojan horses and hacking, and how to prevent them. At the same time, this paper also provides different techniques to detect these attacks so that they can be easily dealt with in case one of them occurs.
The paper gives a thorough analysis of various phishing attacks along with their advantages and disadvantages. --- paper_title: A FORMAL CLASSIFICATION OF INTERNET BANKING ATTACKS AND VULNERABILITIES paper_content: A formal classification of attacks and vulnerabilities that affect current internet banking systems is presented, along with two attacks which demonstrate the insecurity of such systems. Based on a thorough analysis of current security models, we propose guidelines for designing secure internet banking systems which are not affected by the presented attacks and vulnerabilities. --- paper_title: A framework to mitigate social engineering through social media within the enterprise paper_content: The influx of employees using social media throughout the working environment has presented information security professionals with an extensive array of challenges facing people, process and technology. Social engineering through social media is a formidable enterprise concern due to the proclivity of social engineers to target employees through these media in order to attack information assets residing within the organization. This research was motivated by a perceived gap in the academic literature on organizational controls and guidance for employees concerning social engineering threats arising from social media adoption. The purpose of this paper is to propose a social media policy framework focusing on the practical reduction of social engineering risk through ICT security policy control. The framework's development involved an analysis of the primary social engineering through social media challenges alongside current information security standard recommendations. The resulting proposal for the SESM (Social Engineering through Social Media) framework addresses these challenges by conceptualizing enterprise implementation through the interconnection of relevant existing IT security standards in juxtaposition with our own social media policy development framework. --- paper_title: Email trouble: Secrets of spoofing, the dangers of social engineering, and how we can help paper_content: Email spoofing is a method of scamming individuals by impersonating a trusted correspondent via email. Incidences of successful Business Email Compromise (BEC) implemented by email spoofing are rising astronomically. Existing security systems are not widely implemented and cannot provide perfect protection against a technological threat that relies on social engineering for success. When existing security systems are implemented, the settings are generally not restrictive enough to catch the more sophisticated email attacks. Businesses are not comfortable with legitimate emails being lost due to security false positives. Our idea for a solution would add a layer to existing precautions that would permit looser server-side security settings but would warn the user when discrepancies occur in the header source code that could result from a spoofed email. We suggest a client-side sentinel to vet email header source code and alert the user to potential problems. This software could log alerts, notify company officials, remind users of company policies to be followed in the event of suspicious email, and could increase user accountability by logging incidents. Users could have the option of white-listing frequently flagged trusted correspondents, which would decrease the annoyance of false positives.
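In the spirit of the client-side sentinel described above, a minimal header check might compare the From domain against the Return-Path and Reply-To domains and surface any server-reported SPF failure; the specific heuristics below are illustrative assumptions, not the authors' design:

```python
# Minimal client-side header vetting sketch: flag mismatches between the
# From domain and other sender-related headers in a raw email message.
from email import message_from_string
from email.utils import parseaddr

def _domain(addr: str) -> str:
    return parseaddr(addr)[1].rsplit("@", 1)[-1].lower() if addr else ""

def spoofing_warnings(raw_message: str) -> list:
    msg = message_from_string(raw_message)
    warnings = []
    from_dom = _domain(msg.get("From", ""))
    for header in ("Return-Path", "Reply-To"):
        other = _domain(msg.get(header, ""))
        if other and from_dom and other != from_dom:
            warnings.append(f"{header} domain '{other}' differs from From domain '{from_dom}'")
    if "spf=fail" in msg.get("Authentication-Results", "").lower():
        warnings.append("SPF check reported as failed by the receiving server")
    return warnings

if __name__ == "__main__":
    sample = ("From: CEO <ceo@company.example>\r\n"
              "Reply-To: attacker@free-mail.example\r\n"
              "Return-Path: <bounce@free-mail.example>\r\n"
              "Subject: Urgent wire transfer\r\n\r\nPlease handle today.")
    for w in spoofing_warnings(sample):
        print("WARNING:", w)
```

A tool of this kind could additionally log each warning and let users white-list correspondents that are repeatedly flagged, as the authors suggest.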
--- paper_title: Social Engineering Attack Strategies and Defence Approaches paper_content: This paper examines the role and value of information security awareness efforts in defending against social engineering attacks. It categories the different social engineering threats and tactics used in targeting employees and the approaches to defend against such attacks. While we review these techniques, we attempt to develop a thorough understanding of human security threats, with a suitable balance between structured improvements to defend human weaknesses, and efficiently focused security training and awareness building. Finally, the paper shows that a multi-layered shield can mitigate various security risks and minimize the damage to systems and data. --- paper_title: Security and privacy challenges in smart cities paper_content: Abstract The construction of smart cities will bring about a higher quality of life to the masses through digital interconnectivity, leading to increased efficiency and accessibility in cities. Smart cities must ensure individual privacy and security in order to ensure that its citizens will participate. If citizens are reluctant to participate, the core advantages of a smart city will dissolve. This article will identify and offer possible solutions to five smart city challenges, in hopes of anticipating destabilizing and costly disruptions. The challenges include privacy preservation with high dimensional data, securing a network with a large attack surface, establishing trustworthy data sharing practices, properly utilizing artificial intelligence, and mitigating failures cascading through the smart network. Finally, further research directions are provided to encourage further exploration of smart city challenges before their construction. --- paper_title: A framework to mitigate social engineering through social media within the enterprise paper_content: The influx of employees using social media throughout the working environment has presented information security professionals with an extensive array of challenges facing people, process and technology. Social engineering through social media is a formidable enterprise concern due to the proclivity of social engineers targeting employees through these mediums to attack information assets residing within the organization. This research was motivated by a perceived gap in academic literature of organizational controls and guidance for employees concerning social engineering threats through social media adoption. The purpose of this paper is to propose a social media policy framework focusing on the practical reduction of social engineering risk through ICT security policy control. The framework's development involved an analysis of the primary social engineering through social media challenges alongside current information security standard recommendations. The resulting proposal for the SESM (Social Engineering through Social Media) framework addresses these challenges by conceptualizing enterprise implementation through the interconnection of relevant existing IT security standards in juxtaposition with our own social media policy development framework. --- paper_title: Email trouble: Secrets of spoofing, the dangers of social engineering, and how we can help paper_content: Email spoofing is a method of scamming individuals by impersonating a trusted correspondent via email. Incidences of successful Business Email Compromise (BEC) implemented by email spoofing are rising astronomically. 
Existing security systems are not widely implemented and cannot provide perfect protection against a technological threat that relies on social engineering for success. When existing security systems are implemented the settings are generally not restrictive enough to catch the more sophisticated email attacks. Businesses are not comfortable with legitimate emails being lost due to security false positives. Our idea for a solution would add a layer to existing precautions that would permit looser server-side security settings but would warn the user when discrepancies occur in the header source code that could result from a spoofed email. We suggest a client-side sentinel to vet email header source code and alert the user to potential problems. This software could log alerts, notify company officials, remind users of company policies to be followed in the event of suspicious email, and could increase user accountability by logging incidents. Users could have the option of white-listing frequently flagged trusted correspondents which would decrease the annoyance of false positives. --- paper_title: CANDY: A Social Engineering Attack to Leak Information from Infotainment System paper_content: The introduction of Information and Communications Technologies (ICT) systems into vehicles make them more prone to cyber-security attacks that may impact of vehicles capability and, consequently, on the safety of drivers, passengers. In this paper, we focus on how to exploit security vulnerabilities affecting user-to-vehicle and intra- vehicle communications to hack the infotainment system to retrieve information about both vehicle and driver. Indeed, we designed and developed CANDY, a set of malicious APP injecting in a genuine Android APP, acting as a Trojan-horse on the Android In-Vehicle infotainment system. It opens a back-door that allows an attacker to remotely access to the infotainment system. We use this back-door to hit the privacy of the driver by recording her voice and collect information circulating on the CAN bus about the vehicle. CANDY is distributed by using social engineering techniques. --- paper_title: Social Engineering and Insider Threats paper_content: This paper describes our research on the insider threats of Social engineering. Social engineering is a method using interaction between humans to get the access of a system in an illegal way. Due to staff's lack of confidentiality, the confidentiality of records is compromised, data is stolen or financial damage is done. This is insider threat. Social engineering and insider threat are two of the most relevant subjects in cyber security today. This research summarizes and seeks solution for the drawback of Social engineering through analyzing the Insider Threat cases. The first stage is to introduce the importance of using social engineering to reduce internet crime by analyzing the past loss created by insider threats. The second test illustrates insider threats' hazards to network security are ongoing. The third part covers the situation of insider threats with the emphasis on the security side. The topic of security aspect is extended to the rest of internal control of system, data exchange, and management of employees and their communication content. Actually, by the time of this abstract, insider threats are still not being taken as seriously as it should be. Many companies and organizations have given little thought to the insider threat but have concentrated on keeping attackers outside the network. 
This research will directly focus on the insider threats of organizations and the ways hackers use social engineering with the latest analysis of technology involved and examples that are close to common cybercrime. We aim to reveal the importance of reducing insider threats in organizations. The further research will be focused on a group consisted of managers and engineers within a company and the communication means of staff to the outside world. The analysis of the related crime cases will help prevent similar tragedy and seek possible approaches. --- paper_title: Blockchain-based Mutual Authentication Security Protocol for Distributed RFID Systems paper_content: Since radio frequency identification (RFID) technology has been used in various scenarios such as supply chain, access control system and credit card, tremendous efforts have been made to improve the authentication between tags and readers to prevent potential attacks. Though effective in certain circumstances, these existing methods usually require a server to maintain a database of identity related information for every tag, which makes the system vulnerable to the SQL injection attack and not suitable for distributed environment. To address these problems, we now propose a novel blockchain-based mutual authentication security protocol. In this new scheme, there is no need for the trusted third parties to provide security and privacy for the system. Authentication is executed as an unmodifiable transaction based on blockchain rather than database, which applies to distributed RFID systems with high security demand and relatively low real-time requirement. Analysis shows that our protocol is logically correct and can prevent multiple attacks. --- paper_title: Social engineering as an attack vector for ransomware paper_content: Constant ransomware attacks achieve to evade multiple security perimeters, because its strategy is based on the exploitation of users-the weakest link in the security chain-getting high levels of effectiveness. In this paper, an in-depth analysis of the anatomy of the ransomware is develop, and how they combine technology with manipulation, through social engineering, to meet their goals and compromise thousands of computers. Finally, simulations of the phases that it follows are carried out, leaving the necessary guidelines to evaluate the security of the information, and thus, generate policies that minimize the risks of loss of information. --- paper_title: Dynamic ransomware protection using deterministic random bit generator paper_content: Ransomware has become a very significant cyber threat. The basic idea of ransomware was presented in the form of a cryptovirus in 1995. However, it was considered as merely a conceptual topic since then for over a decade. In 2017, ransomware has become a reality, with several famous cases of ransomware having compromised important computer systems worldwide. For example, the damage caused by CryptoLocker and WannaCry is huge, as well as global. They encrypt victims' files and require user's payment to decrypt them. Because they utilize public key cryptography, the key for recovery cannot be found in the footprint of the ransomware on the victim's system. Therefore, once infected, the system cannot be recovered without paying for restoration. Various methods to deal this threat have been developed by antivirus researchers and experts in network security. 
However, it is believed that cryptographic defense is infeasible because recovering a victim's files is computationally as difficult as breaking a public key cryptosystem. Quite recently, various approaches to protect the crypto-API of an OS from malicious codes have been proposed. Most ransomware generate encryption keys using the random number generation service provided by the victim's OS. Thus, if a user can control all random numbers generated by the system, then he/she can recover the random numbers used by the ransomware for the encryption key. In this paper, we propose a dynamic ransomware protection method that replaces the random number generator of the OS with a user-defined generator. As the proposed method causes the virus program to generate keys based on the output from the user-defined generator, it is possible to recover an infected file system by reproducing the keys the attacker used to perform the encryption. --- paper_title: Blockchain-based Mutual Authentication Security Protocol for Distributed RFID Systems paper_content: Since radio frequency identification (RFID) technology has been used in various scenarios such as supply chain, access control system and credit card, tremendous efforts have been made to improve the authentication between tags and readers to prevent potential attacks. Though effective in certain circumstances, these existing methods usually require a server to maintain a database of identity related information for every tag, which makes the system vulnerable to the SQL injection attack and not suitable for distributed environment. To address these problems, we now propose a novel blockchain-based mutual authentication security protocol. In this new scheme, there is no need for the trusted third parties to provide security and privacy for the system. Authentication is executed as an unmodifiable transaction based on blockchain rather than database, which applies to distributed RFID systems with high security demand and relatively low real-time requirement. Analysis shows that our protocol is logically correct and can prevent multiple attacks. --- paper_title: Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks paper_content: In this paper, we present the results of a long-term study of ransomware attacks that have been observed in the wild between 2006 and 2014. We also provide a holistic view on how ransomware attacks have evolved during this period by analyzing 1,359 samples that belong to 15 different ransomware families. Our results show that, despite a continuous improvement in the encryption, deletion, and communication techniques in the main ransomware families, the number of families with sophisticated destructive capabilities remains quite small. In fact, our analysis reveals that in a large number of samples, the malware simply locks the victim's computer desktop or attempts to encrypt or delete the victim's files using only superficial techniques.i?źOur analysis also suggests that stopping advanced ransomware attacks is not as complex as it has been previously reported. For example, we show that by monitoring abnormal file system activity, it is possible to design a practical defense system that could stop a large number of ransomware attacks, even those using sophisticated encryption capabilities. 
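As a toy illustration of such file-system-activity monitoring (a sketch under simplified assumptions, not the detection system evaluated in the cited work), a monitor could flag a process once it performs many high-entropy writes in a short window, since bulk encryption tends to produce near-random output:

```python
# Toy ransomware-behaviour monitor: count high-entropy write buffers per
# process over a sliding window and flag processes that exceed a threshold.
import math
from collections import defaultdict, deque

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = defaultdict(int)
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

class WriteMonitor:
    """Flag a process after `threshold` high-entropy writes within the window."""
    def __init__(self, window: int = 50, threshold: int = 40, min_entropy: float = 7.5):
        self.window, self.threshold, self.min_entropy = window, threshold, min_entropy
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe_write(self, pid: int, data: bytes) -> bool:
        hist = self.history[pid]
        hist.append(shannon_entropy(data) >= self.min_entropy)
        return sum(hist) >= self.threshold   # True means the process looks suspicious

if __name__ == "__main__":
    import os
    monitor = WriteMonitor()
    suspicious = False
    for _ in range(45):                      # simulate encrypted-looking writes
        suspicious = monitor.observe_write(pid=1234, data=os.urandom(4096))
    print("pid 1234 suspicious:", suspicious)
```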
A close examination of the file system activities of multiple ransomware samples suggests that, by looking at I/O requests and protecting the Master File Table (MFT) in the NTFS file system, it is possible to detect and prevent a significant number of zero-day ransomware attacks. --- paper_title: An Approach To Perceive Tabnabbing Attack paper_content: The growth of the Internet has many pros and cons for mankind, which are easily visible in day-to-day activities. The growth of the Internet has also manifested itself in the domain of cyber crime. Phishing, web defacement, money laundering, tax evasion, etc. are some examples of cyber crimes that have been reported in the literature. It has become vital to make technology reliable, so that it can record and intelligently apprise the user of illegal activity. The objective of this work is twofold. Firstly, tabnabbing, which is a type of phishing attack, is explored by developing an attack scenario. Secondly, a signature-based detection mechanism is proposed to handle the tabnabbing attack. --- paper_title: TabShots: client-side detection of tabnabbing attacks paper_content: As the web grows larger and larger and as the browser becomes the vehicle of choice for delivering many applications of daily use, the security and privacy of web users are under constant attack. Phishing is as prevalent as ever, with anti-phishing communities reporting thousands of new phishing campaigns each month. In 2010, tabnabbing, a variation of phishing, was introduced. In a tabnabbing attack, an innocuous-looking page, opened in a browser tab, disguises itself as the login page of a popular web application when the user's focus is on a different tab. The attack exploits the trust of users in already opened pages and the user habit of long-lived browser tabs. To combat this recent attack, we propose TabShots. TabShots is a browser extension that helps browsers and users to remember what each tab looked like before the user changed tabs. Our system compares the appearance of each tab and highlights the parts that were changed, allowing the user to distinguish between legitimate changes and malicious masquerading. Using an experimental evaluation on the most popular sites of the Internet, we show that TabShots has no impact on 78% of these sites, and very little on another 19%. Thereby, TabShots effectively protects users against tabnabbing attacks without affecting their browsing habits and without breaking legitimate popular sites.
--- paper_title: A Taxonomy for Social Engineering attacks paper_content: As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them. --- paper_title: SoK: Everyone Hates Robocalls: A Survey of Techniques Against Telephone Spam paper_content: Telephone spam costs United States consumers $8.6 billion annually. In 2014, the Federal Trade Commission received over 22 million complaints of illegal and unwanted calls. Telephone spammers today are leveraging recent technical advances in the telephony ecosystem to distribute massive automated spam calls known as robocalls. Given that anti-spam techniques and approaches are effective in the email domain, the question we address is: what are the effective defenses against spam calls? In this paper, we first describe the telephone spam ecosystem, specifically focusing on the differences between email and telephone spam. Then, we survey the existing telephone spam solutions and, by analyzing the failings of the current techniques, derive evaluation criteria that are critical to an acceptable solution. We believe that this work will help guide the development of effective telephone spam defenses, as well as provide a framework to evaluate future defenses. --- paper_title: Advanced social engineering attacks paper_content: Social engineering has emerged as a serious threat in virtual communities and is an effective means to attack information systems. The services used by today's knowledge workers prepare the ground for sophisticated social engineering attacks. The growing trend towards BYOD (bring your own device) policies and the use of online communication and collaboration tools in private and business environments aggravate the problem. In globally acting companies, teams are no longer geographically co-located, but staffed just-in-time.
The decrease in personal interaction combined with a plethora of tools used for communication (e-mail, IM, Skype, Dropbox, LinkedIn, Lync, etc.) create new attack vectors for social engineering attacks. Recent attacks on companies such as the New York Times and RSA have shown that targeted spear-phishing attacks are an effective, evolutionary step of social engineering attacks. Combined with zero-day-exploits, they become a dangerous weapon that is often used by advanced persistent threats. This paper provides a taxonomy of well-known social engineering attacks as well as a comprehensive overview of advanced social engineering attacks on the knowledge worker. --- paper_title: A client-side anti-pharming (CSAP) approach paper_content: Pharming is a type of social engineering attack wherein an attacker wants to steal sensitive information of internet users. Pharming is advanced technique than Phishing attack. In phishing, attacker's URL is different from targeted legitimate website URL but in case of pharming, attacker's URL is same to legitimate URL. Pharming attacks can be conducted by exploiting vulnerabilities in DNS server, which is more advanced type of attack than phishing. It's a special attack because the attacker doesn't have to target individual user. When attacker performs pharming on a Domain Name System (DNS) server, all users who are using DNS service through that server will fall victim of pharming attack. Various techniques are proposed for avoiding pharming attack. We present an approach which uses multiple DNS servers. --- paper_title: Social engineering: Revisiting end-user awareness and susceptibility to classic attack vectors paper_content: Social engineering relies on human vulnerability to exploit system security. Social engineering attacks are relatively harder to protect against as they mainly target the user, and not just hardware or software system defenses. End user awareness can be considered as one of the simplest yet most effective ways to protect the end user against social engineering vectors. The present study ascertains the level of user susceptibility to social engineering attacks in a cooperating corporate organization. Two attack scenarios, a spear-phishing campaign and a physical intrusion vector were designed targeting the organization's user population (employees) based on publicly available information from the Internet. Clues relating to social engineering techniques were included in the attacks to alert suspicious users. Despite the revealing signs of a social engineering campaign, the results indicated that a significantly high proportion (46–60%) of the users fell prey and failed to identify the attacks. It was observed that lack of user awareness remained the primary cause of the success of the attacks, requiring corrective action through post-incident training and regular IT security drills. --- paper_title: Mitigating social engineering for improved cybersecurity paper_content: In this paper, Social Engineering threats and trends are investigated and mitigation strategies recommended. The national and economic security of countries now relies on the Cyberspace because virtually all businesses processes are using the Internet. Unfortunately, Cyber-criminality is increasing and rated the fastest growing crime worldwide. Social engineering, a technique whereby cybercriminals trick their victims into disclosing log-in information without using any technical gadget has been identified as one of the most dangerous threats of our time. 
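A minimal sketch of the multiple-DNS-server idea behind the client-side anti-pharming (CSAP) approach summarized above is shown below; it assumes the third-party dnspython package, uses public resolvers purely as an example, and ignores the legitimate disagreement that CDN-style geo-DNS can introduce:

```python
# Sketch: resolve the same hostname through independent resolvers and warn
# when their answer sets do not overlap, which may indicate a poisoned or
# pharmed resolver. Requires the dnspython package (pip install dnspython).
import dns.resolver

RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

def resolve_with(nameserver: str, hostname: str) -> set:
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [nameserver]
    res.lifetime = 3.0
    try:
        return {rdata.to_text() for rdata in res.resolve(hostname, "A")}
    except Exception:
        return set()

def looks_pharmed(hostname: str) -> bool:
    answers = [resolve_with(ns, hostname) for ns in RESOLVERS]
    answers = [a for a in answers if a]          # drop resolvers that failed
    # Suspicious if no two resolvers share at least one address.
    return len(answers) >= 2 and not set.intersection(*answers)

if __name__ == "__main__":
    print("example.com suspicious:", looks_pharmed("example.com"))
```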
Due to the fact that any identity theft or breach in an organization's information system resulting in disclosure of sensitive information could have far-reaching consequences such as financial losses, disruption of services, damage to public image, or even bringing the organization or a nation to a complete standstill, this paper x-rayed most prevalent social engineering attacks, their typical tools and recommended mitigation strategies. --- paper_title: A Taxonomy for Social Engineering attacks paper_content: As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them. --- paper_title: An Approach To Perceive Tabnabbing Attack paper_content: The growth of Internet has many pros and cons to mankind, which is easily visible in day to day activities. The growth of Internet has also manifested into the other domain of cyber crimes. Phishing, web defacement, money laundering, tax evasion, etc. are some of the examples of cyber crimes that have been reported in literature. It has become vital to make technology reliable, so that it can record and intelligently apprise the user of illegal activity. The objective of this work is twofold. Firstly, in this paper tabnabbing which is a type of phishing attack is explored by developing an attack scenario. Secondly, the signature based detection mechanism is proposed to handle tabnabbing attack. --- paper_title: A layered defense mechanism for a social engineering aware perimeter paper_content: While many cyber security organizations urge the corporate world to use defence-in-depth to create vigilant network perimeters, the human factor is often overlooked. Security evaluation frameworks focus mostly on critical assets of an organization and technical aspects of prevailing risks. There is consequently no specific framework to identify, categorize, analyse and mitigate social engineering related risks. This paper identifies the requirement for such a framework through an in-depth investigation of an actual organization and extensive analysis of existing methodologies. On the basis of this a layered defence strategy SERA is developed, starting with the basic building blocks for social-engineering aware risk analysis. A chronological attack classification framework is presented as an enhancement of existing frameworks on social engineering. --- paper_title: An Overview of Social Engineering in the Context of Information Security paper_content: Social engineering in the context of information security is the exploitation of human psychology to gain access into secure data. Human emotion can act as both a strength and a weakness. When it comes to the world booming with technology, human emotions which are completely unrelated to the matter is made to relate through social engineering. Social engineering employs ‘traps’ to pry on human emotion and its vulnerability, taking advantage of the flaws of human psychology. 
Information security breaches utilising social engineering techniques are widespread, so social engineering in this context is a topic that cannot be neglected. This research paper presents an overview of social engineering attacks and suggested defence mechanisms. An introduction to social engineering attacks is given, with context on current trends and related vulnerabilities. The main reasons for the spread of social engineering attacks in the current context are also presented. Attack frameworks are presented and defence approaches are proposed at the end. --- paper_title: Vulnerability to social engineering in social networks: a proposed user-centric framework paper_content: Social networking sites have billions of users who communicate and share their personal information every day. Social engineering is considered one of the biggest threats to information security nowadays. Social engineering is an attacker technique for manipulating and deceiving users in order to access or gain privileged information. Such attacks are continuously developed to deceive a high number of potential victims. The number of social engineering attacks has risen dramatically in the past few years, causing unpleasant damage both to organizations and individuals. Yet little research has discussed social engineering in the virtual environments of social networks. One approach to counter these exploits is through research that aims to understand why people fall victim to such attacks. Previous social engineering and deception research has not satisfactorily identified the factors that influence users' ability to detect attacks. Characteristics that influence users' vulnerability must be investigated to address this issue and help to build a profile of vulnerable users in order to focus training programs and education on those users. In this context, the present study proposes a user-centric framework to understand the user's susceptibility and its relevant factors and dimensions. --- paper_title: Malicious PDF detection using metadata and structural features paper_content: Owing to their versatile functionality and widespread adoption, PDF documents have become a popular avenue for user exploitation, ranging from large-scale phishing attacks to targeted attacks. In this paper, we present a framework for robust detection of malicious documents through machine learning. Our approach is based on features extracted from document metadata and structure. Using real-world datasets, we demonstrate the adequacy of these document properties for malware detection and the durability of these features across new malware variants. Our analysis shows that the Random Forests classification method, an ensemble classifier that randomly selects features for each individual classification tree, yields the best detection rates, even on previously unseen malware. Indeed, using multiple datasets containing an aggregate of over 5,000 unique malicious documents and over 100,000 benign ones, our classification rates remain well above 99% while maintaining low false positives of 0.2% or less for different classification parameters and experimental scenarios. Moreover, the classifier has the ability to detect documents crafted for targeted attacks and separate them from broadly distributed malicious PDF documents.
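A toy illustration of classifying documents from a handful of metadata and structural features with a Random Forest is sketched below (assuming scikit-learn is available; the feature names and values are invented and far simpler than the feature set used in the paper):

```python
# Toy example: Random Forest over invented document metadata/structural features.
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["num_objects", "num_javascript_tags", "has_openaction", "num_embedded_files"]

# Tiny hand-made training set: each row is one document, label 1 = malicious.
X_train = [[12, 0, 0, 0], [30, 0, 0, 1], [9, 3, 1, 0],
           [15, 5, 1, 2], [40, 0, 0, 0], [8, 2, 1, 1]]
y_train = [0, 0, 1, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

suspect = [[11, 4, 1, 0]]   # unseen document with JavaScript and an OpenAction entry
print("predicted label:", clf.predict(suspect)[0])
print("P(malicious):", round(clf.predict_proba(suspect)[0][1], 2))
```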
Remarkably, we also discovered that by artificially reducing the influence of the top features in the classifier, we can still achieve a high rate of detection in an adversarial setting where the attacker is aware of both the top features utilized in the classifier and our normality model. Thus, the classifier is resilient against mimicry attacks even with knowledge of the document features, classification method, and training set. --- paper_title: Measuring Source Credibility of Social Engineering Attackers on Facebook paper_content: Past research has suggested that social networking sites are the most common source for social engineering-based attacks. Persuasion research shows that people are more likely to obey and accept a message when the source's presentation appears to be credible. However, many factors can impact the perceived credibility of a source, depending on its type and the characteristics of the environment. Our previous research showed that there are four dimensions of source credibility in terms of social engineering on Facebook: perceived sincerity, perceived competence, perceived attraction, and perceived worthiness. Because the dimensionalities of source credibility as well as their measurement scales can fluctuate from one type of source to another and from one type of context to another, our aim in this study includes validating the existence of those four dimensions of the credibility of social engineering attackers on Facebook and developing a valid measurement scale for each of them. --- paper_title: Flow whitelisting in SCADA networks paper_content: Supervisory Control And Data Acquisition (SCADA) networks are commonly deployed to aid the operation of large industrial facilities. Modern SCADA networks are becoming more vulnerable to network attacks, due to the now common use of standard communication protocols and increased interconnection to corporate networks and the Internet. In this work, we propose an approach to improve the security of these networks based on flow whitelisting. A flow whitelist describes the legitimate traffic solely using four properties of network packets: the client address, the server address, the server-side port, and the transport protocol. The proposed approach consists in learning a flow whitelist by capturing network traffic and aggregating it into flows for a given period of time. After this learning phase is complete, any non-whitelisted connection observed generates an alarm. The evaluation of the approach focuses on two important whitelist characteristics: size and stability. We demonstrate the applicability of the approach using real-world traffic traces, captured in two water treatment plants and a gas and electric utility. --- paper_title: A model for social engineering awareness program for schools paper_content: Advancements in security have, over the years of technological growth, been mainly focused on providing secure technological infrastructure. The developed security measures and counter-measures have played a major role in reducing the surge of cyber-attacks. However, hackers have continued to exploit vulnerabilities due to the human element to gain access into otherwise secured systems. Risks and potential for exploits are even greater in schools, where human vulnerability is heightened by young, impressionable pupils. Social engineering, the art of manipulating people so they give up confidential information, is increasingly the approach of choice for hackers who exploit the human element.
Social engineers bypass secured systems in schools by directly targeting and exploiting the human vulnerabilities of schools' students and staff. Education through awareness campaigns is typically used to counter the threat from social engineering. Such awareness campaigns, however, tend to be too holistic in focus to lead to the significant and sustainable change in behaviour required to counter social engineering. This paper presents a model for designing and implementing social engineering awareness programmes aimed at fostering behaviour change in schools. It demonstrates the process of designing a social engineering awareness program that accommodates all types of learning styles by using multiple communication methods. Evaluation and continuous reinforcement approaches are also presented. A pilot implementation of our proposed model for a social engineering awareness programme shows a significant change in the behaviour of a school's teaching staff. --- paper_title: Social Engineering: Hacking into Humans paper_content: In this world of digitalization, the need for data privacy and data security is very important. IT companies today value their data above everything else. Data privacy is important not only for companies but also for any individual. Yet no matter how secure a company is, how advanced the technology used, or how up to date its software is, there is still a vulnerability in every sector, known as the 'human'. The art of gathering sensitive information from a human being is known as Social Engineering. Technology has advanced drastically in the past few years, but the threat of social engineering is still a problem. Social engineering attacks are increasing day by day due to a lack of awareness and knowledge. Social engineering is a very common practice for gathering information and sensitive data through the use of mobile numbers, emails, SMS, or a direct approach. Social engineering can be very useful for the attacker if done in a proper manner. 'Kevin Mitnick' is one of the most renowned social engineers of all time. In this paper, we discuss Social Engineering, its types, how it affects us, and how to prevent these attacks. Many proofs of concept are also presented in this paper. --- paper_title: Social engineering attack modeling with the use of Bayesian networks paper_content: This paper addresses the problem of modeling social engineering attacks on the user's machine using Bayesian belief networks. The system models the "information system - personnel - critical documents" complex in the form of Bayesian belief networks containing a set of psychological characteristics. This approach can significantly improve the performance and productivity of software for analysing the security of a user's information system. --- paper_title: Protocols for mitigating blackhole attacks in delay tolerant networks paper_content: High node mobility and infrequent connectivity in delay tolerant networks (DTNs) make it challenging to implement traditional security algorithms for detecting malicious nodes. In DTNs, most routing algorithms are based on the announcement of routing metrics such as probability of delivery, contact strength, or social group strength by the nodes in contact. A blackhole in a DTN exploits these characteristics of routing protocols and either announces a high value for these metrics or tries to attain a high value for them by following fast, repeated movement patterns.
The dynamic social grouping (DSG) based routing algorithm shows that the social behavior of nodes helps to make better forwarding decisions and to achieve the highest message delivery ratio among existing routing algorithms. We examine the impact of blackholes, intermittent blackholes, and tailgating attacks on DSG. We propose a suite of three solutions. Our first solution detects blackhole and tailgating malicious nodes in the network but is not suitable for intermittent blackholes. The second solution handles intermittent blackholes and performs well when the nodes are well connected. The third and final solution handles intermittent blackholes in sparsely connected as well as in well-connected networks. In all proposed solutions, blackholes are not able to degrade the performance of the protocols by changing their geographical locations. We demonstrate through simulation that our protocols improve the message delivery ratio over the existing solutions. An appropriate protocol from the suite may be used depending upon the application. --- paper_title: A New Method for Detection of Phishing Websites: URL Detection paper_content: Phishing is an unlawful activity wherein people are misled into the wrong sites by various fraudulent methods. The aim of these phishing websites is to confiscate personal information or other financial details for personal benefit or misuse. As technology advances, phishing approaches keep evolving, and there is a dire need for better security and better mechanisms to prevent as well as detect them. The primary focus of this paper is to put forth a model for detecting phishing websites using the URL detection method with the Random Forest algorithm. The model has three major phases, namely Parsing, Heuristic Classification of data, and Performance Analysis, and each phase makes use of a different technique or algorithm for processing the data to give better results. --- paper_title: CryptoLock (and Drop It): Stopping Ransomware Attacks on User Data paper_content: Ransomware is a growing threat that encrypts a user's files and holds the decryption key until a ransom is paid by the victim. This type of malware is responsible for tens of millions of dollars in extortion annually. Worse still, developing new variants is trivial, facilitating the evasion of many antivirus and intrusion detection systems. In this work, we present CryptoDrop, an early-warning detection system that alerts a user during suspicious file activity. Using a set of behavior indicators, CryptoDrop can halt a process that appears to be tampering with a large amount of the user's data. Furthermore, by combining a set of indicators common to ransomware, the system can be parameterized for rapid detection with low false positives. In our experimental analysis, CryptoDrop stops ransomware from executing with a median loss of only 10 files (out of nearly 5,100 available files). Our results show that careful analysis of ransomware behavior can produce an effective detection system that significantly mitigates the amount of victim data loss. --- paper_title: Social engineering in social networking sites: Affect-based model paper_content: While social engineering represents a real and ominous threat to many organizations, companies, governments, and individuals, social networking sites (SNSs) have been identified as among the most common means of social engineering attacks.
Owing to factors that reduce the ability of users to detect social engineering tricks and increase the ability of attackers to launch them, SNSs seem to be perfect breeding ground for exploiting the vulnerabilities of people, and the weakest link in security. This work will contribute to the knowledge of social engineering by identifying different entities and subentities that affect social engineering based attacks in SNSs. Moreover, this paper includes an intensive and comprehensive overview of different aspects of social engineering threats in SNSs. --- paper_title: Improving Awareness of Social Engineering Attacks paper_content: Social engineering is a method of attack involving the exploitation of human weakness, gullibility and ignorance. Although related techniques have existed for some time, current awareness of social engineering and its many guises is relatively low and efforts are therefore required to improve the protection of the user community. This paper begins by examining the problems posed by social engineering, and outlining some of the previous efforts that have been made to address the threat. This leads toward the discussion of a new awareness-raising website that has been specifically designed to aid users in understanding and avoiding the risks. Findings from an experimental trial involving 46 participants are used to illustrate that the system served to increase users’ understanding of threat concepts, as well as providing an engaging environment in which they would be likely to persevere with their learning. --- paper_title: Solutions for counteracting human deception in social engineering attacks paper_content: The purpose of this paper is to investigate the top three cybersecurity issues in organizations related to social engineering and aggregate solutions for counteracting human deception in social engineering attacks.,A total of 20 experts within Information System Security Association participated in a three-round Delphi study for aggregating and condensing expert opinions. Three rounds moved participants toward consensus for solutions to counteract social engineering attacks in organizations.,Three significant issues: compromised data; ineffective practices; and lack of ongoing education produced three target areas for implementing best practices in countering social engineering attacks. The findings offer counteractions by including education, policies, processes and continuous training in security practices.,Study limitations include lack of prior data on effective social engineering defense. Research implications stem from the psychology of human deception and trust with the ability to detect deception.,Practical implications relate to human judgment in complying with effective security policies and programs and consistent education and training. Future research may include exploring financial, operational and educational costs of implementing social engineering solutions.,Social implications apply across all knowledge workers who benefit from technology and are trusted to protect organizational assets and intellectual property.,This study contributes to the field of cybersecurity with a focus on trust and human deception to investigate solutions to counter social engineering attacks. This paper adds to under-represented cybersecurity research regarding effective implementation for social engineering defense. 
--- paper_title: Cyber and Physical Security Vulnerability Assessment for IoT-Based Smart Homes paper_content: The Internet of Things (IoT) is an emerging paradigm focusing on the connection of devices, objects, or "things" to each other, to the Internet, and to users. IoT technology is anticipated to become an essential requirement in the development of smart homes, as it offers convenience and efficiency to home residents so that they can achieve a better quality of life. Application of the IoT model to smart homes, by connecting objects to the Internet, poses new security and privacy challenges in terms of the confidentiality, authenticity, and integrity of the data sensed, collected, and exchanged by the IoT objects. These challenges make smart homes extremely vulnerable to different types of security attacks, resulting in IoT-based smart homes being insecure. Therefore, it is necessary to identify the possible security risks to develop a complete picture of the security status of smart homes. This article applies the operationally critical threat, asset, and vulnerability evaluation (OCTAVE) methodology, known as OCTAVE Allegro, to assess the security risks of smart homes. The OCTAVE Allegro method focuses on information assets and considers different information containers such as databases, physical papers, and humans. The key goals of this study are to highlight the various security vulnerabilities of IoT-based smart homes, to present the risks to home inhabitants, and to propose approaches to mitigating the identified risks. The research findings can be used as a foundation for improving the security requirements of IoT-based smart homes. --- paper_title: Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks paper_content: In this paper, we present the results of a long-term study of ransomware attacks that have been observed in the wild between 2006 and 2014. We also provide a holistic view on how ransomware attacks have evolved during this period by analyzing 1,359 samples that belong to 15 different ransomware families. Our results show that, despite a continuous improvement in the encryption, deletion, and communication techniques in the main ransomware families, the number of families with sophisticated destructive capabilities remains quite small. In fact, our analysis reveals that in a large number of samples, the malware simply locks the victim's computer desktop or attempts to encrypt or delete the victim's files using only superficial techniques. Our analysis also suggests that stopping advanced ransomware attacks is not as complex as has been previously reported. For example, we show that by monitoring abnormal file system activity, it is possible to design a practical defense system that could stop a large number of ransomware attacks, even those using sophisticated encryption capabilities. A close examination of the file system activities of multiple ransomware samples suggests that by looking at I/O requests and protecting the Master File Table (MFT) in the NTFS file system, it is possible to detect and prevent a significant number of zero-day ransomware attacks. --- paper_title: From Intrusion Detection to an Intrusion Response System: Fundamentals, Requirements, and Future Directions paper_content: In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance.
To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of an IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response options for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staff in understanding how to tackle different attacks with state-of-the-art technologies. --- paper_title: Scoping the Cyber Security Body of Knowledge paper_content: Cybersecurity is becoming an important element in curricula at all education levels. However, the foundational knowledge on which the field of cybersecurity is being developed is fragmented, and as a result, it can be difficult for both students and educators to map coherent paths of progression through the subject. The Cyber Security Body of Knowledge (CyBOK) project (www.cybok.org) aims to codify the foundational and generally recognized knowledge on cybersecurity. --- paper_title: Online Social Networks Security: Threats, Attacks, and Future Directions paper_content: The list of well-known Online Social Networks extends to hundreds of available sites with hundreds of thousands, millions, and even billions of registered accounts; for instance, Facebook as of April 2016 has around two billion active users. Online Social Networks have made a difference in many people's lives and helped open avenues that were not possible before. However, as in any success story, there is a downside. Cyber-attacks that used to have a small or limited effect can now have a huge distributed effect through utilizing those social network sites. Some attacks are more apparent than others in this context; hence this chapter discusses how serious attacks are possible in online social networks and what has been done to counter them. It will discuss privacy, Sybil attacks, social engineering, spam, malware, botnet attacks, and the trade-off between services, security, and users' rights. --- paper_title: A Taxonomy of Attacks and a Survey of Defence Mechanisms for Semantic Social Engineering Attacks paper_content: Social engineering is used as an umbrella term for a broad spectrum of computer exploitations that employ a variety of attack vectors and strategies to psychologically manipulate a user. Semantic attacks are the specific type of social engineering attacks that bypass technical defences by actively manipulating object characteristics, such as platform or system applications, to deceive rather than directly attack the user. Commonly observed examples include obfuscated URLs, phishing emails, drive-by downloads, spoofed websites and scareware, to name a few. This article presents a taxonomy of semantic attacks, as well as a survey of applicable defences. By contrasting the threat landscape and the associated mitigation techniques in a single comparative matrix, we identify the areas where further research can be particularly beneficial. --- paper_title: The social engineering attack spiral (SEAS) paper_content: Cybercrime is on the increase and attacks are becoming ever more sophisticated.
Organisations are investing huge sums of money and vast resources in trying to establish effective and timely countermeasures. This is still a game of catch-up, where hackers have the upper hand and potential victims are trying to produce secure systems hardened against what feel like inevitable future attacks. The focus so far has been on technology rather than people, and the amount of resource allocated to countermeasures and research into cyber security attacks follows the same trend. This paper adds to the growing body of work looking at social engineering attacks and therefore seeks to redress this imbalance to some extent. The objective is to produce a model for social engineering that provides a better understanding of the attack process, such that improved and timely countermeasures can be applied and early interventions implemented. ---
Title: Social Engineering Attacks: A Survey
Section 1: Introduction
Description 1: Provide an introduction about social engineering attacks, their impact on cybersecurity, and the importance of the survey.
Section 2: Social Engineering Attacks
Description 2: Discuss the significance and different phases of social engineering attacks.
Section 3: Attacks Classification
Description 3: Classify social engineering attacks into human-based, computer-based, social, technical, and physical-based categories.
Section 4: Attacks Description
Description 4: Describe various types of social engineering attacks such as phishing, pretexting, baiting, tailgating, ransomware, fake software attacks, reverse social engineering, pop-up windows, phone/email scams, robocalls, and others.
Section 5: Prevention Techniques
Description 5: Discuss various techniques and strategies to prevent social engineering attacks including training, policies, anti-phishing tools, and other security measures.
Section 6: Mitigation Techniques
Description 6: Outline techniques to mitigate and handle social engineering attacks after they occur, including human-based and technology-based strategies.
Section 7: Comparison
Description 7: Compare different countermeasures and mitigation techniques for social engineering attacks.
Section 8: Challenges and Future Directions
Description 8: Identify key challenges and suggest future directions for improving the detection, prevention, and mitigation of social engineering attacks.
Section 9: Conclusions
Description 9: Summarize the findings of the survey, discuss the importance of novel detection techniques and programs, and emphasize the need for cybersecurity education.
Datasets on object manipulation and interaction: a survey
6
--- paper_title: Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data paper_content: Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them. --- paper_title: Script data for attribute-based recognition of composite activities paper_content: State-of-the-art human activity recognition methods build on discriminative learning which requires a representative training set for good performance. This leads to scalability issues for the recognition of large sets of highly diverse activities. In this paper we leverage the fact that many human activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. To share and transfer knowledge between composite activities we model them by a common set of attributes corresponding to basic actions and object participants. This attribute representation allows to incorporate script data that delivers new variations of a composite activity or even to unseen composite activities. In our experiments on 41 composite cooking tasks, we found that script data to successfully capture the high variability of composite activities. We show improvements in a supervised case where training data for all composite cooking tasks is available, but we are also able to recognize unseen composites by just using script data and without any manual video annotation. --- paper_title: Learning to Recognize Daily Actions using Gaze paper_content: We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. 
We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze. --- paper_title: Human Activity Detection from RGBD Images paper_content: Being able to detect and recognize human activities is important for making personal assistant robots useful in performing assistive tasks. The challenge is to develop a system that is low-cost, reliable in unstructured home settings, and also straightforward to use. In this paper, we use a RGBD sensor (Microsoft Kinect) as the input sensor, and present learning algorithms to infer the activities. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM). It considers a person's activity as composed of a set of sub-activities, and infers the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve an average performance of 84.3% when the person was seen before in the training set (and 64.2% when the person was not seen before). --- paper_title: Learning human activities and object affordances from RGB-D videos paper_content: Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given a RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks by a PR2 robot. --- paper_title: Slice&Dice: Recognizing Food Preparation Activities Using Embedded Accelerometers paper_content: Within the context of an endeavor to provide situated support for people with cognitive impairments in the kitchen, we developed and evaluated classifiers for recognizing 11 actions involved in food preparation. Data was collected from 20 lay subjects using four specially designed kitchen utensils incorporating embedded 3-axis accelerometers. Subjects were asked to prepare a mixed salad in our laboratory-based instrumented kitchen environment. Video of each subject's food preparation activities were independently annotated by three different coders. Several classifiers were trained and tested using these features. With an overall accuracy of 82.9% our investigation demonstrated that a broad set of food preparation actions can be reliably recognized using sensors embedded in kitchen utensils. 
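The Slice&Dice entry above describes recognizing food-preparation actions from 3-axis accelerometers embedded in kitchen utensils. As an editorial illustration only, the following minimal Python sketch shows one common way such a pipeline can be set up: slicing a continuous accelerometer stream into fixed-length windows, computing simple per-axis statistics, and training an off-the-shelf classifier. The window length, feature set, action labels, and random-forest choice are assumptions for the example, not the published Slice&Dice pipeline.

```python
# Illustrative sketch (assumptions noted above): window-level classification
# of 3-axis accelerometer data for food-preparation actions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window):
    # Simple per-axis statistics for one window of shape (n_samples, 3).
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(),
                  np.abs(np.diff(x)).mean()]  # mean absolute first difference
    return np.array(feats)

def make_dataset(recordings, labels, win=64, hop=32):
    # Slice each continuous recording (n_samples, 3) into overlapping windows.
    X, y = [], []
    for rec, lab in zip(recordings, labels):
        for start in range(0, len(rec) - win + 1, hop):
            X.append(window_features(rec[start:start + win]))
            y.append(lab)
    return np.array(X), np.array(y)

# Toy streams standing in for utensil-mounted accelerometer recordings.
rng = np.random.default_rng(0)
recordings = [rng.normal(scale=s, size=(500, 3)) for s in (0.5, 1.0, 2.0)]
labels = ["peel", "chop", "stir"]  # hypothetical action labels

X, y = make_dataset(recordings, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("window-level accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```

In practice the window size, features, and classifier would be tuned against annotated recordings such as those released with the dataset rather than the synthetic streams used here.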
--- paper_title: Slice&Dice: Recognizing Food Preparation Activities Using Embedded Accelerometers paper_content: Within the context of an endeavor to provide situated support for people with cognitive impairments in the kitchen, we developed and evaluated classifiers for recognizing 11 actions involved in food preparation. Data was collected from 20 lay subjects using four specially designed kitchen utensils incorporating embedded 3-axis accelerometers. Subjects were asked to prepare a mixed salad in our laboratory-based instrumented kitchen environment. Video of each subject's food preparation activities were independently annotated by three different coders. Several classifiers were trained and tested using these features. With an overall accuracy of 82.9% our investigation demonstrated that a broad set of food preparation actions can be reliably recognized using sensors embedded in kitchen utensils. --- paper_title: Learning to Recognize Daily Actions using Gaze paper_content: We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze. --- paper_title: Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data paper_content: Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them. 
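The composite-activity abstract that closes above describes composites by attributes (basic actions and object participants) obtained from scripts, so that new composites can be recognized without dedicated training videos. The snippet below is a hedged, simplified sketch of that idea, not the authors' method: composites are encoded as binary attribute vectors, and a video's predicted attribute probabilities are matched to the closest composite by cosine similarity. The attribute vocabulary and composite definitions are invented for illustration.

```python
# Simplified illustration of attribute-based composite recognition; the
# attribute vocabulary and composite definitions below are invented.
import numpy as np

ATTRIBUTES = ["cut", "peel", "stir", "pour", "knife", "bowl", "pan"]

# Composite activity -> attribute set, as could be derived from script data.
COMPOSITES = {
    "prepare_salad": {"cut", "peel", "stir", "knife", "bowl"},
    "fry_egg": {"pour", "stir", "pan"},
}

def composite_vector(attrs):
    return np.array([1.0 if a in attrs else 0.0 for a in ATTRIBUTES])

def recognize(attribute_scores):
    # attribute_scores: predicted per-attribute probabilities for one video.
    scores = {}
    for name, attrs in COMPOSITES.items():
        v = composite_vector(attrs)
        scores[name] = float(attribute_scores @ v /
                             (np.linalg.norm(attribute_scores) * np.linalg.norm(v) + 1e-9))
    return max(scores, key=scores.get), scores

# Example attribute-classifier output for one video (made-up numbers).
pred = np.array([0.9, 0.7, 0.6, 0.1, 0.8, 0.9, 0.05])
print(recognize(pred))
```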
--- paper_title: Script data for attribute-based recognition of composite activities paper_content: State-of-the-art human activity recognition methods build on discriminative learning which requires a representative training set for good performance. This leads to scalability issues for the recognition of large sets of highly diverse activities. In this paper we leverage the fact that many human activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. To share and transfer knowledge between composite activities we model them by a common set of attributes corresponding to basic actions and object participants. This attribute representation allows to incorporate script data that delivers new variations of a composite activity or even to unseen composite activities. In our experiments on 41 composite cooking tasks, we found that script data to successfully capture the high variability of composite activities. We show improvements in a supervised case where training data for all composite cooking tasks is available, but we are also able to recognize unseen composites by just using script data and without any manual video annotation. --- paper_title: The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities paper_content: This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities. --- paper_title: A database for fine grained activity detection of cooking activities paper_content: While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. 
--- paper_title: Combining embedded accelerometers with computer vision for recognizing food preparation activities paper_content: This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance. --- paper_title: Slice&Dice: Recognizing Food Preparation Activities Using Embedded Accelerometers paper_content: Within the context of an endeavor to provide situated support for people with cognitive impairments in the kitchen, we developed and evaluated classifiers for recognizing 11 actions involved in food preparation. Data was collected from 20 lay subjects using four specially designed kitchen utensils incorporating embedded 3-axis accelerometers. Subjects were asked to prepare a mixed salad in our laboratory-based instrumented kitchen environment. Video of each subject's food preparation activities were independently annotated by three different coders. Several classifiers were trained and tested using these features. With an overall accuracy of 82.9% our investigation demonstrated that a broad set of food preparation actions can be reliably recognized using sensors embedded in kitchen utensils. --- paper_title: Kitchen Scene Context Based Gesture Recognition: A Contest in ICPR2012 paper_content: This paper introduces a new open dataset "Actions for Cooking Eggs ACE Dataset" and summarizes results of the contest on "Kitchen Scene Context based Gesture Recognition", in conjunction with ICPR2012. The dataset consists of naturally performed actions in a kitchen environment. Five kinds of cooking menus were actually performed by five different actors, and the cooking actions were recorded by a Kinect Sensor. Color image sequences and depth image sequences are both available. Besides, action label was given to each frame. To estimate the action label, action recognition method has to analyze not only actor's action, but also scene contexts such as ingredients and cooking utensils. We compare the submitted algorithms and the results in this paper. --- paper_title: Audio-visual classification and detection of human manipulation actions paper_content: This thesis builds on the observation that robots cannot be programmed to handle any possible situation in the world. Like humans, they need mechanisms to deal with previously unseen situations and unknown objects. One of the skills humans rely on to deal with the unknown is the ability to learn by observing others. This thesis addresses the challenge of enabling a robot to learn from a human instructor. In particular, it is focused on objects. How can a robot find previously unseen objects? How can it track the object with its gaze? How can the object be employed in activities? 
Throughout this thesis, these questions are addressed with the end goal of allowing a robot to observe a human instructor and learn how to perform an activity. The robot is assumed to know very little about the world and it is supposed to discover objects autonomously. Given a visual input, object hypotheses are formulated by leveraging on common contextual knowledge often used by humans (e.g. gravity, compactness, convexity). Moreover, unknown objects are tracked and their appearance is updated over time since only a small fraction of the object is visible from the robot initially. Finally, object functionality is inferred by looking how the human instructor is manipulating objects and how objects are used in relation to others. All the methods included in this thesis have been evaluated on datasets that are publicly available or that we collected, showing the importance of these learning abilities. --- paper_title: The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities paper_content: This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities. --- paper_title: The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition paper_content: We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we both describe how the data was computed, in particular the motion tracker and the labeling, and give examples what it can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated in a knowledge-based framework for reasoning about the observations. 
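The Language of Actions entry above models action units with Hidden Markov Models and composes them with an action grammar, much like words in speech. Purely as an illustrative sketch of the action-unit part (the grammar and the HTK tooling are omitted), the code below fits one Gaussian HMM per unit using the hmmlearn library and classifies a new segment by the unit model with the highest log-likelihood; the feature dimensionality, state count, and toy data are assumptions, not the benchmark setup.

```python
# Illustrative sketch of per-unit HMMs (grammar and HTK tooling omitted);
# feature dimensionality, state count, and toy data are assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_unit_hmm(sequences, n_states=3):
    # Fit one Gaussian HMM on a list of (T_i, D) per-frame feature sequences.
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

# Toy per-frame descriptors for two hypothetical action units.
crack_egg = [rng.normal(0.0, 1.0, size=(40, 16)) for _ in range(5)]
stir_dough = [rng.normal(2.0, 1.0, size=(40, 16)) for _ in range(5)]

models = {"crack_egg": train_unit_hmm(crack_egg),
          "stir_dough": train_unit_hmm(stir_dough)}

# Classify a new segment by the unit model with the highest log-likelihood.
segment = rng.normal(2.0, 1.0, size=(40, 16))
print(max(models, key=lambda name: models[name].score(segment)))
```

A full system along these lines would additionally constrain the sequence of recognized units with a grammar over action units, which is where the speech-recognition analogy in the abstract comes in.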
--- paper_title: Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data paper_content: Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them. --- paper_title: The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities paper_content: This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities. --- paper_title: A database for fine grained activity detection of cooking activities paper_content: While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. 
Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. --- paper_title: Activity recognition using the velocity histories of tracked keypoints paper_content: We present an activity recognition feature inspired by human psychophysical performance. This feature is based on the velocity history of tracked keypoints. We present a generative mixture model for video sequences using this feature, and show that it performs comparably to local spatio-temporal features on the KTH activity recognition dataset. In addition, we contribute a new activity recognition dataset, focusing on activities of daily living, with high resolution video sequences of complex actions. We demonstrate the superiority of our velocity history feature on high resolution video sequences of complicated activities. Further, we show how the velocity history feature can be extended, both with a more sophisticated latent velocity model, and by combining the velocity history feature with other useful information, like appearance, position, and high level semantic information. Our approach performs comparably to established and state of the art methods on the KTH dataset, and significantly outperforms all other methods on our challenging new dataset. --- paper_title: The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition paper_content: We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we both describe how the data was computed, in particular the motion tracker and the labeling, and give examples what it can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated in a knowledge-based framework for reasoning about the observations. --- paper_title: Combining embedded accelerometers with computer vision for recognizing food preparation activities paper_content: This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance. 
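The 50 Salads entry above evaluates modality fusion at several stages of the recognition pipeline, including feature-level concatenation and classification-level combination of classifier outputs. The sketch below illustrates those two strategies on synthetic stand-ins for accelerometer and video features; the feature dimensions, logistic-regression classifiers, and equal fusion weights are assumptions for the example rather than the benchmark configuration.

```python
# Illustrative comparison of feature-level and classification-level fusion on
# synthetic stand-ins for accelerometer and video features (assumed setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 3, size=n)                        # three activity classes
X_acc = rng.normal(size=(n, 15)) + 0.5 * y[:, None]   # "accelerometer" features
X_vid = rng.normal(size=(n, 40)) + 0.3 * y[:, None]   # "video" features

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Feature-level fusion: concatenate modalities, train a single classifier.
X_cat = np.hstack([X_acc, X_vid])
clf_cat = LogisticRegression(max_iter=1000).fit(X_cat[idx_tr], y[idx_tr])
print("feature-level fusion:",
      accuracy_score(y[idx_te], clf_cat.predict(X_cat[idx_te])))

# Classification-level fusion: average per-modality class probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(X_acc[idx_tr], y[idx_tr])
clf_v = LogisticRegression(max_iter=1000).fit(X_vid[idx_tr], y[idx_tr])
proba = 0.5 * clf_a.predict_proba(X_acc[idx_te]) + 0.5 * clf_v.predict_proba(X_vid[idx_te])
print("classification-level fusion:",
      accuracy_score(y[idx_te], proba.argmax(axis=1)))
```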
--- paper_title: Human Activity Detection from RGBD Images paper_content: Being able to detect and recognize human activities is important for making personal assistant robots useful in performing assistive tasks. The challenge is to develop a system that is low-cost, reliable in unstructured home settings, and also straightforward to use. In this paper, we use a RGBD sensor (Microsoft Kinect) as the input sensor, and present learning algorithms to infer the activities. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM). It considers a person's activity as composed of a set of sub-activities, and infers the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve an average performance of 84.3% when the person was seen before in the training set (and 64.2% when the person was not seen before). --- paper_title: Learning human activities and object affordances from RGB-D videos paper_content: Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given a RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks by a PR2 robot. --- paper_title: Learning to Recognize Daily Actions using Gaze paper_content: We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze. 
--- paper_title: Detecting activities of daily living in first-person camera views paper_content: We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.” --- paper_title: A public domain dataset for ADL recognition using wrist-placed accelerometers paper_content: The automatic monitoring of specific Activities of Daily Living (ADL) can be a useful tool for Human-Robot Interaction in smart environments and Assistive Robotics applications. The qualitative definition that is given for most ADL and the lack of well-defined benchmarks, however, are obstacles toward the identification of the most effective monitoring approaches for different tasks. The contribution of the article is two-fold: (i) we propose a taxonomy of ADL allowing for their categorization with respect to the most suitable monitoring approach; (ii) we present a freely available dataset of acceleration data, coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities. --- paper_title: The Yale human grasping dataset: Grasp, object, and task data in household and machine shop environments paper_content: This paper presents a dataset of human grasping behavior in unstructured environments. Wide-angle head-mounted camera video was recorded from two housekeepers and two machinists during their regular work activities, and the grasp types, objects, and tasks were analyzed and coded by study staff. The full dataset contains 27.7 hours of tagged video and represents a wide range of manipulative behaviors spanning much of the typical human hand usage. We provide the original videos, a spreadsheet including the tagged grasp type, object, and task parameters, time information for each successive grasp, and video screenshots for each instance. Example code is provided for MATLAB and R, demonstrating how to load in the dataset and produce simple plots. --- paper_title: The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition paper_content: We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. 
Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we both describe how the data was computed, in particular the motion tracker and the labeling, and give examples what it can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated in a knowledge-based framework for reasoning about the observations. --- paper_title: Human Activity Detection from RGBD Images paper_content: Being able to detect and recognize human activities is important for making personal assistant robots useful in performing assistive tasks. The challenge is to develop a system that is low-cost, reliable in unstructured home settings, and also straightforward to use. In this paper, we use a RGBD sensor (Microsoft Kinect) as the input sensor, and present learning algorithms to infer the activities. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM). It considers a person's activity as composed of a set of sub-activities, and infers the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve an average performance of 84.3% when the person was seen before in the training set (and 64.2% when the person was not seen before). --- paper_title: Learning human activities and object affordances from RGB-D videos paper_content: Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given a RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks by a PR2 robot. --- paper_title: A public domain dataset for ADL recognition using wrist-placed accelerometers paper_content: The automatic monitoring of specific Activities of Daily Living (ADL) can be a useful tool for Human-Robot Interaction in smart environments and Assistive Robotics applications. 
The qualitative definition that is given for most ADL and the lack of well-defined benchmarks, however, are obstacles toward the identification of the most effective monitoring approaches for different tasks. The contribution of the article is two-fold: (i) we propose a taxonomy of ADL allowing for their categorization with respect to the most suitable monitoring approach; (ii) we present a freely available dataset of acceleration data, coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities. --- paper_title: Kitchen Scene Context Based Gesture Recognition: A Contest in ICPR2012 paper_content: This paper introduces a new open dataset "Actions for Cooking Eggs ACE Dataset" and summarizes results of the contest on "Kitchen Scene Context based Gesture Recognition", in conjunction with ICPR2012. The dataset consists of naturally performed actions in a kitchen environment. Five kinds of cooking menus were actually performed by five different actors, and the cooking actions were recorded by a Kinect Sensor. Color image sequences and depth image sequences are both available. Besides, action label was given to each frame. To estimate the action label, action recognition method has to analyze not only actor's action, but also scene contexts such as ingredients and cooking utensils. We compare the submitted algorithms and the results in this paper. --- paper_title: UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild paper_content: We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips. --- paper_title: Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data paper_content: Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them. 
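The composite-activity entries above share one idea: a composite task is scored through a common pool of attribute (basic-action and object) detectors, which is what lets script-derived composites be recognized without dedicated training videos. The snippet below is a deliberately simplified sketch of that scoring idea, with invented attribute names and detector scores; the actual papers use learned classifiers rather than a plain average:

```python
# Hypothetical per-video attribute scores from pre-trained detectors.
attribute_scores = {"crack": 0.7, "stir": 0.9, "egg": 0.8, "bowl": 0.6, "cut": 0.1, "cucumber": 0.2}

# Composites described by attribute sets, e.g. derived from textual scripts.
composites = {
    "scrambled eggs": {"crack", "stir", "egg", "bowl"},
    "cucumber salad": {"cut", "cucumber", "bowl"},
}

def composite_score(attrs, scores):
    """Average detector score over the attributes a composite is built from."""
    return sum(scores.get(a, 0.0) for a in attrs) / len(attrs)

ranked = sorted(composites, key=lambda c: composite_score(composites[c], attribute_scores), reverse=True)
print(ranked[0])  # -> 'scrambled eggs' for these toy scores
```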
--- paper_title: The Yale human grasping dataset: Grasp, object, and task data in household and machine shop environments paper_content: This paper presents a dataset of human grasping behavior in unstructured environments. Wide-angle head-mounted camera video was recorded from two housekeepers and two machinists during their regular work activities, and the grasp types, objects, and tasks were analyzed and coded by study staff. The full dataset contains 27.7 hours of tagged video and represents a wide range of manipulative behaviors spanning much of the typical human hand usage. We provide the original videos, a spreadsheet including the tagged grasp type, object, and task parameters, time information for each successive grasp, and video screenshots for each instance. Example code is provided for MATLAB and R, demonstrating how to load in the dataset and produce simple plots. --- paper_title: The YCB object and Model set: Towards common benchmarks for manipulation research paper_content: In this paper we present the Yale-CMU-Berkeley (YCB) Object and Model set, intended to be used for benchmarking in robotic grasping and manipulation research. The objects in the set are designed to cover various aspects of the manipulation problem; it includes objects of daily life with different shapes, sizes, textures, weight and rigidity, as well as some widely used manipulation tests. The associated database provides high-resolution RGBD scans, physical properties and geometric models of the objects for easy incorporation into manipulation and planning software platforms. A comprehensive literature survey on existing benchmarks and object datasets is also presented and their scope and limitations are discussed. The set will be freely distributed to research groups worldwide at a series of tutorials at robotics conferences, and will be otherwise available at a reasonable purchase cost. --- paper_title: Robot programming by demonstration paper_content: The presented paper shows a pilot development of a robot ‘task programming method’. In this method, the user programs the robot task by demonstrating it (Programming by Demonstration, PbD). PbD is applied on a robotic arm with two degrees-of-freedom for programming a constrained motion task. --- paper_title: The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition paper_content: We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we both describe how the data was computed, in particular the motion tracker and the labeling, and give examples what it can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated in a knowledge-based framework for reasoning about the observations. 
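Datasets such as the Yale grasping set above ship their annotations as a tagged spreadsheet (grasp type, object, task, timing) alongside the raw video. A first step for a user is usually just loading and summarizing that table; the sketch below shows what that might look like in Python, with a hypothetical file name and column names rather than the dataset's real schema:

```python
import pandas as pd

# Hypothetical path and column names; check the dataset's own documentation
# for the real schema before relying on them.
tags = pd.read_csv("grasp_tags.csv")          # one row per tagged grasp instance
counts = tags["grasp_type"].value_counts()    # how often each grasp type occurs
# Assumes numeric timestamps in seconds (again, a hypothetical schema).
mean_duration = (tags["end_time"] - tags["start_time"]).mean()

print(counts.head(10))
print(f"mean grasp duration: {mean_duration:.2f} s")
```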
--- paper_title: Script data for attribute-based recognition of composite activities paper_content: State-of-the-art human activity recognition methods build on discriminative learning which requires a representative training set for good performance. This leads to scalability issues for the recognition of large sets of highly diverse activities. In this paper we leverage the fact that many human activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. To share and transfer knowledge between composite activities we model them by a common set of attributes corresponding to basic actions and object participants. This attribute representation allows us to incorporate script data that delivers new variations of a composite activity or even to unseen composite activities. In our experiments on 41 composite cooking tasks, we found that script data successfully captures the high variability of composite activities. We show improvements in a supervised case where training data for all composite cooking tasks is available, but we are also able to recognize unseen composites by just using script data and without any manual video annotation. --- paper_title: Learning to Recognize Daily Actions using Gaze paper_content: We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze. --- paper_title: The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities paper_content: This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs).
Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities. --- paper_title: A database for fine grained activity detection of cooking activities paper_content: While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. --- paper_title: Detecting activities of daily living in first-person camera views paper_content: We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.” --- paper_title: Discovery of activity patterns using topic models paper_content: In this work we propose a novel method to recognize daily routines as a probabilistic combination of activity patterns. The use of topic models enables the automatic discovery of such patterns in a user's daily routine. We report experimental results that show the ability of the approach to model and recognize daily routines without user annotation. --- paper_title: BigBIRD: A large-scale 3D database of object instances paper_content: The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition—whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a highquality, large-scale dataset of 3D object instances, with accurate calibration information for every image. 
We anticipate that “solving” this dataset will effectively remove many perceptionrelated problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird. --- paper_title: Human Activity Detection from RGBD Images paper_content: Being able to detect and recognize human activities is important for making personal assistant robots useful in performing assistive tasks. The challenge is to develop a system that is low-cost, reliable in unstructured home settings, and also straightforward to use. In this paper, we use a RGBD sensor (Microsoft Kinect) as the input sensor, and present learning algorithms to infer the activities. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM). It considers a person's activity as composed of a set of sub-activities, and infers the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve an average performance of 84.3% when the person was seen before in the training set (and 64.2% when the person was not seen before). --- paper_title: Activity recognition using the velocity histories of tracked keypoints paper_content: We present an activity recognition feature inspired by human psychophysical performance. This feature is based on the velocity history of tracked keypoints. We present a generative mixture model for video sequences using this feature, and show that it performs comparably to local spatio-temporal features on the KTH activity recognition dataset. In addition, we contribute a new activity recognition dataset, focusing on activities of daily living, with high resolution video sequences of complex actions. We demonstrate the superiority of our velocity history feature on high resolution video sequences of complicated activities. Further, we show how the velocity history feature can be extended, both with a more sophisticated latent velocity model, and by combining the velocity history feature with other useful information, like appearance, position, and high level semantic information. Our approach performs comparably to established and state of the art methods on the KTH dataset, and significantly outperforms all other methods on our challenging new dataset. --- paper_title: Learning human activities and object affordances from RGB-D videos paper_content: Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. 
Given a RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks by a PR2 robot. --- paper_title: Combining embedded accelerometers with computer vision for recognizing food preparation activities paper_content: This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance. --- paper_title: Audio-visual classification and detection of human manipulation actions paper_content: This thesis builds on the observation that robots cannot be programmed to handle any possible situation in the world. Like humans, they need mechanisms to deal with previously unseen situations and unknown objects. One of the skills humans rely on to deal with the unknown is the ability to learn by observing others. This thesis addresses the challenge of enabling a robot to learn from a human instructor. In particular, it is focused on objects. How can a robot find previously unseen objects? How can it track the object with its gaze? How can the object be employed in activities? Throughout this thesis, these questions are addressed with the end goal of allowing a robot to observe a human instructor and learn how to perform an activity. The robot is assumed to know very little about the world and it is supposed to discover objects autonomously. Given a visual input, object hypotheses are formulated by leveraging on common contextual knowledge often used by humans (e.g. gravity, compactness, convexity). Moreover, unknown objects are tracked and their appearance is updated over time since only a small fraction of the object is visible from the robot initially. Finally, object functionality is inferred by looking how the human instructor is manipulating objects and how objects are used in relation to others. All the methods included in this thesis have been evaluated on datasets that are publicly available or that we collected, showing the importance of these learning abilities. 
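The accelerometer-plus-video dataset above compares modality fusion at feature level (concatenation) and at classification level (combining classifier outputs). The following sketch shows both options in their simplest form, using random placeholder features and scikit-learn-style classifiers; it illustrates the two fusion points, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_accel = rng.normal(size=(200, 10))   # placeholder accelerometer features
X_video = rng.normal(size=(200, 30))   # placeholder video features
y = rng.integers(0, 3, size=200)       # placeholder activity labels

# Feature-level fusion: concatenate the modalities, then train a single classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([X_accel, X_video]), y)

# Classification-level fusion: train per-modality classifiers, then average their probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(X_accel, y)
clf_v = LogisticRegression(max_iter=1000).fit(X_video, y)
late_probs = (clf_a.predict_proba(X_accel) + clf_v.predict_proba(X_video)) / 2
late_pred = late_probs.argmax(axis=1)
```

Averaging probabilities is just one of many possible ways to combine classifier outputs at the late-fusion stage.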
--- paper_title: Slice&Dice: Recognizing Food Preparation Activities Using Embedded Accelerometers paper_content: Within the context of an endeavor to provide situated support for people with cognitive impairments in the kitchen, we developed and evaluated classifiers for recognizing 11 actions involved in food preparation. Data was collected from 20 lay subjects using four specially designed kitchen utensils incorporating embedded 3-axis accelerometers. Subjects were asked to prepare a mixed salad in our laboratory-based instrumented kitchen environment. Video of each subject's food preparation activities were independently annotated by three different coders. Several classifiers were trained and tested using these features. With an overall accuracy of 82.9% our investigation demonstrated that a broad set of food preparation actions can be reliably recognized using sensors embedded in kitchen utensils. --- paper_title: A public domain dataset for ADL recognition using wrist-placed accelerometers paper_content: The automatic monitoring of specific Activities of Daily Living (ADL) can be a useful tool for Human-Robot Interaction in smart environments and Assistive Robotics applications. The qualitative definition that is given for most ADL and the lack of well-defined benchmarks, however, are obstacles toward the identification of the most effective monitoring approaches for different tasks. The contribution of the article is two-fold: (i) we propose a taxonomy of ADL allowing for their categorization with respect to the most suitable monitoring approach; (ii) we present a freely available dataset of acceleration data, coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities. --- paper_title: Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data paper_content: Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them. --- paper_title: Script data for attribute-based recognition of composite activities paper_content: State-of-the-art human activity recognition methods build on discriminative learning which requires a representative training set for good performance. This leads to scalability issues for the recognition of large sets of highly diverse activities. 
In this paper we leverage the fact that many human activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. To share and transfer knowledge between composite activities we model them by a common set of attributes corresponding to basic actions and object participants. This attribute representation allows us to incorporate script data that delivers new variations of a composite activity or even to unseen composite activities. In our experiments on 41 composite cooking tasks, we found that script data successfully captures the high variability of composite activities. We show improvements in a supervised case where training data for all composite cooking tasks is available, but we are also able to recognize unseen composites by just using script data and without any manual video annotation. --- paper_title: A database for fine grained activity detection of cooking activities paper_content: While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. ---
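The datasets above report labeling accuracies under a variety of protocols (frame-level, segment-level, seen vs. unseen subjects). One common and simple summary number is mean per-class accuracy, which keeps rare activities from being drowned out by frequent ones; the sketch below computes it over invented labels and is only meant to make the metric concrete, since each paper defines its own exact protocol:

```python
from collections import defaultdict

def mean_per_class_accuracy(true_labels, pred_labels):
    """Average the per-class accuracies so every activity class counts equally."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

true = ["pour", "pour", "stir", "cut", "cut", "cut"]
pred = ["pour", "stir", "stir", "cut", "cut", "pour"]
print(mean_per_class_accuracy(true, pred))  # (0.5 + 1.0 + 2/3) / 3 ~= 0.72
```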
Title: Datasets on Object Manipulation and Interaction: A Survey Section 1: INTRODUCTION Description 1: Write about the significance and the role of datasets in scientific fields, especially in object manipulation research, and introduce the purpose of the survey. Section 2: DATASETS OF COOKING ACTIVITY Description 2: Present datasets related to cooking activities, providing details on the modalities, activities performed, and annotations, along with their relevance to object manipulation. Section 3: DATASETS OF ACTIVITIES OF DAILY LIVING (ADL) Description 3: Discuss datasets encompassing general daily activities, specifying the included modalities, types of activities, annotations, and their application to object manipulation research. Section 4: DATASET SUMMARY Description 4: Summarize the findings from the reviewed datasets, highlighting shared annotated activities, object identifiability, and forms of temporal segmentation, and provide a complete list of annotated activities. Section 5: CONCLUSION Description 5: Provide a summary of the review, reiterate the significance of the surveyed datasets for object manipulation research, and offer suggestions for future datasets. Section 6: ACKNOWLEDGEMENT Description 6: Acknowledge the financial and academic support received for conducting the survey and compiling the review.
Survey on Prediction Algorithms in Smart Homes
16
--- paper_title: Bio-signal based control in assistive Robots: A survey paper_content: Recently, bio-signal based control has been gradually deployed in biomedical devices and assistive robots for improving the quality of life of disabled and elderly people, among which electromyography (EMG) and electroencephalography (EEG) bio-signals are being used widely. This paper reviews the deployment of these bio-signals in the state of the art of control systems. The main aim of this paper is to describe the techniques used for (i) collecting EMG and EEG signals and dividing these signals into segments (data acquisition and data segmentation stage), (ii) dividing the important data and removing redundant data from the EMG and EEG segments (feature extraction stage), and (iii) identifying categories from the relevant data obtained in the previous stage (classification stage). Furthermore, this paper presents a summary of applications controlled through these two bio-signals and some research challenges in the creation of these control systems. Finally, a brief conclusion is summarized. --- paper_title: The role of prediction algorithms in the MavHome smart home architecture paper_content: The goal of the MavHome project is to create a home that acts as a rational agent. The agent seeks to maximize inhabitant comfort and minimize operation cost. To achieve these goals, the agent must be able to predict the mobility patterns and device usages of the inhabitants. We introduce the MavHome project and its underlying architecture. The role of prediction algorithms within the architecture is discussed, and three prediction algorithms that are central to home operations are presented. We demonstrate the effectiveness of these algorithms on synthetic and/or actual smart home data. --- paper_title: A Review of Smart Homes—Past, Present, and Future paper_content: A smart home is an application of ubiquitous computing in which the home environment is monitored by ambient intelligence to provide context-aware services and facilitate remote home control. This paper presents an overview of previous smart home research as well as the associated technologies. A brief discussion on the building blocks of smart homes and their interrelationships is presented. It describes collective information about sensors, multimedia devices, communication protocols, and systems, which are widely used in smart home implementation. Special algorithms from different fields and their significance are explained according to their scope of use in smart homes. This paper also presents a concrete guideline for future researchers to follow in developing a practical and sustainable smart home. --- paper_title: MavHome: an agent-based smart home paper_content: The goal of the MavHome (Managing An Intelligent Versatile Home) project is to create a home that acts as an intelligent agent. In this paper we introduce the MavHome architecture. The role of prediction algorithms within the architecture is discussed, and a meta-predictor is presented which combines the strengths of multiple approaches to inhabitant action prediction. We demonstrate the effectiveness of these algorithms on smart home data. --- paper_title: Activity Recognition in the Home Using Simple and Ubiquitous Sensors paper_content: In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced.
The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used. --- paper_title: Behavioural pattern identification and prediction in intelligent environments paper_content: In this paper, the application of soft computing techniques in prediction of an occupant's behaviour in an inhabited intelligent environment is addressed. In this research, daily activities of elderly people who live in their own homes suffering from dementia are studied. Occupancy sensors are used to extract the movement patterns of the occupant. The occupancy data is then converted into temporal sequences of activities which are eventually used to predict the occupant behaviour. To build the prediction model, different dynamic recurrent neural networks are investigated. Recurrent neural networks have shown a great ability in finding the temporal relationships of input patterns. The experimental results show that non-linear autoregressive network with exogenous inputs model correctly extracts the long term prediction patterns of the occupant and outperformed the Elman network. The results presented here are validated using data generated from a simulator and real environments. --- paper_title: Predicting inhabitant action using action and task models with application to smart homes paper_content: An intelligent home is likely in the near future. An important ingredient in an intelligent environment such as a home is prediction – of the next low-level action, the next location, and the next high-level task that an inhabitant is likely to perform. In this paper we model inhabitant actions as states in a simple Markov model. We introduce an enhancement to this basic approach, the Task-based Markov model (TMM) method. TMM discovers high-level inhabitant tasks using the supplied unlabeled data. We investigate clustering of actions to identify tasks, and integrate clusters into a hidden Markov model that predicts the next inhabitant action. We validate our approach and observe that for simulated data we achieve good accuracy using both the simple Markov model and the TMM, whereas on real data we see that simple Markov models outperform the TMM. We also perform an analysis of the performance of the HMM in the framework of the TMM when diverse patterns are introduced into the data. --- paper_title: Bayesian Networks Structure Learning for Activity Prediction in Smart Homes paper_content: This paper presents a sequence-based activity prediction approach which uses Bayesian networks in a novel two-step process to predict both activities and their corresponding features. In addition to the proposed model, we also present the results of several search and score (SaS) and constraint-based (CB) Bayesian structure learning algorithms. The activity prediction performance of the proposed model is compared with the naive Bayes and the other aforementionedSaS and CB algorithms. 
The experimental results are performed on real data collected from a smart home over the period of five months. The results suggest the superior activity prediction accuracy of the proposed network over the resulting networks of the mentioned Bayesian network structure learning algorithms. --- paper_title: Discovering models of software processes from event-based data paper_content: Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study. --- paper_title: Modeling human behavior from simple sensors in the home paper_content: Pervasive sensors in the home have a variety of applications including energy minimization, activity monitoring for elders, and tutors for household tasks such as cooking. Many of the common sensors today are binary, e.g. IR motion sensors, door close sensors, and floor pressure pads. Predicting user behavior is one of the key enablers for applications. While we consider smart home data here, the general problem is one of predicting discrete human actions. Drawing on Activity Theory, the language as action principle, and speech understanding research, we argue that smoothed n-grams are very appropriate for this task. We built such a model and applied it to data gathered from 3 smart home installations. The data showed a classic Zipf or power-law distribution, similar to speech and language. We found that the predictive accuracy of the n-gram model ranges from 51% to 39%, which is significantly above the baseline for the deployments of 16, 76 and 70 sensors. While we cannot directly compare this result with other work (lack of shared data), by examination of high entropy zones in the datasets (e.g. the kitchen triangle) we argue that accuracies around 50% are best possible for this task. --- paper_title: A tutorial on hidden Markov models and selected applications in speech recognition paper_content: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. 
The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. --- paper_title: AN INTRODUCTION TO HIDDEN MARKOV MODELS AND BAYESIAN NETWORKS paper_content: We provide a tutorial on learning and inference in hidden Markov models in the context of the recent literature on Bayesian networks. This perspective makes it possible to consider novel generalizations to hidden Markov models with multiple hidden state variables, multiscale representations, and mixed discrete and continuous variables. Although exact inference in these generalizations is usually intractable, one can use approximate inference algorithms such as Markov chain sampling and variational methods. We describe how such methods are applied to these generalized hidden Markov models. We conclude this review with a discussion of Bayesian methods for model selection in generalized HMMs. --- paper_title: Location aware resource management in smart homes paper_content: The rapid advances in a wide range of wireless access technologies along with the efficient use of smart spaces have already set the stage for the development of smart homes. Context-awareness is perhaps the most salient feature in these intelligent computing platforms. The "location" information of the users plays a vital role in defining this context. To extract the best performance and efficacy of such smart computing environments, one needs a scalable, technology-independent location service. We have developed a predictive framework for location-aware resource optimization in smart homes. The underlying compression mechanism helps in efficient learning of an inhabitant's movement (location) profiles in the symbolic domain. The concept of Asymptotic Equipartition Property (AEP) in information theory helps to predict the inhabitant's future location as well as most likely path-segments with good accuracy. Successful prediction helps in pro-active resource management and on-demand operations of automated devices along the inhabitant's future paths and locations - thus providing the necessary comfort at a near-optimal cost. Simulation results on typical smart home floor plans corroborate this high prediction success and demonstrate sufficient reduction in daily energy-consumption, manual operations and time spent by the inhabitant which are considered as a fair measure of his/her comfort. --- paper_title: Energy storage management in smart homes based on resident activity of daily life recognition paper_content: Recently, home energy storage system is emerging as one of the main driving forces to prompt the development of the future smart grid. By leveraging time-based electricity pricing, the home energy storage system can store energy during off-peak periods and supply energy to residential customers during on-peak periods, such that the stress on the main power system can be relieved. Yet, an efficient use of home energy storage system is still a challenging issue, due to the nonlinear properties of the battery in terms of energy conversion loss and shortened battery life, and the randomness in residential energy demand.
In this paper, we investigate the utilization of smart home monitoring and communication technologies to recognize the resident activity of daily life (ADL) and propose a non-homogeneous hidden Markov model (NHMM) to characterize the residential energy demand. An optimal energy storage management problem is formulated by taking into account the NHMM and nonlinear battery properties. This problem belongs to a class of adaptive stochastic control problems in smart grid with nonlinear value functions. In order to solve this problem efficiently, piecewise linear approximation is applied to the energy conversion function, and a state-dependent multi-threshold policy is proposed and proved to be optimal. The performance of the proposed energy management scheme is evaluated via a case study based on CASAS smart home dataset collected in real life by Washington State University. Numerical results indicate that our proposed energy storage management scheme can achieve energy cost savings, in comparison with existing schemes with uniform and non-uniform discharging profiles. --- paper_title: Using duration to learn activities of daily living in a smart home environment paper_content: Recognition of inhabitants' activities of daily living (ADLs) is an important task in smart homes to support assisted living for elderly people aging in place. However, uncertain information brings challenge to activity recognition which can be categorised into environmental uncertainties from sensor readings and user uncertainties of variations in the ways to carry out activities in different contexts, or by different users within the same environment. To address the challenges of these two types of uncertainty, in this paper, we introduce the innovative idea of incorporating activity duration into the framework of learning inhabitants' behaviour patterns on carrying out ADLs in smart home environment. A probabilistic learning algorithm is proposed with duration information in the context of multi-inhabitants in a single home environment. The prediction is for both inhabitant and ADL using the learned model representing what activity is carried out and who performed it. Experiments are designed for the evaluation of duration information in identifying activities and inhabitants. Real data have been collected in a smart kitchen laboratory, and realistic synthetic data are generated for evaluation. Evaluations show encouraging results for higher-level activity identification and improvement on inhabitant and activity prediction in the challenging situation of incomplete observation due to unreliable sensors compared to models that are derived with no duration information. The approach also provides a potential opportunity to identify inhabitants' concept drift in long-term monitoring and respond to a deteriorating situation at as early stage as possible. --- paper_title: A Knowledge-Driven Approach to Activity Recognition in Smart Homes paper_content: This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. 
Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. --- paper_title: Intervention of non-inhabitant activities detection in smart home environment paper_content: Inhabitants' daily activities form patterns in their daily life that are important in a smart home. These patterns can be used to recognize inhabitant activities, which is useful for enhancing smart home services such as energy-efficiency services, where the patterns serve as a model of inhabitant behavior for reducing unnecessary appliance or lighting usage based on the activities being conducted. Recognition accuracy is important for providing the particular services needed in the smart home automation process, but activity recognition faces many challenges in the real world because of the diversity and complexity of the activities. Inter-subject variability in activities often appears in real-world situations, so the accuracy of the recognition process can be affected. For instance, there is a possible situation where family members or colleagues visit the inhabitant's home for a long period. Non-inhabitants may conduct activities in a different way, or with different behavior, than the inhabitant does. This situation produces activities that are not carried out by the legitimate inhabitant. In this paper, we propose a method to overcome this commonly occurring activity recognition issue. Our proposed method uses a temporal relation approach, which can detect a non-inhabitant activity. This approach separates such detected activities from the inhabitant's observed activities, so that activity recognition will perform effectively. We assess the effectiveness of our approach using Activities of Daily Living (ADL) data provided by the WSU Smart Home Project dataset. --- paper_title: Monitoring Activities of Daily Living in Smart Homes: Understanding human behavior paper_content: Monitoring the activities of daily living (ADLs) and detection of deviations from previous patterns is crucial to assessing the ability of an elderly person to live independently in their community and in early detection of upcoming critical situations. "Aging in place" for an elderly person is one key element in ambient assisted living (AAL) technologies. --- paper_title: Use of Prediction Algorithms in Smart Homes paper_content: 'Smart Homes' or 'Intelligent Homes' are capable of making smart or rational decisions and of increasing home automation. This is done to maximize inhabitant comfort and minimize operation cost. Tracing and predicting the mobility patterns and device usages of the inhabitant sets a step towards this objective.
The paper discusses in detail the role of certain Prediction algorithms to bring about next event recognition. Further, Episode Discovery helps in finding the frequency of occurrence of these events and targeting the particular events for automation. The effectiveness of the Prediction algorithms used is demonstrated, making it clear how they prove to be a key component in the efficient implementation of a Smart Home architecture. --- paper_title: Learning to Control a Smart Home Environment paper_content: The goal of the MavHome (Managing An Intelligent Versatile Home) project is to create a home that acts as an intelligent agent. In this paper, we introduce the MavHome architecture and two learning algorithms that play central roles in the smart home. The first algorithm predicts actions the inhabitant will take in the home. The second algorithm learns a policy to control the home. Effectiveness of the algorithms on smart home data is presented and we document the application of the technology in a working smart home environment. --- paper_title: LeZi-update: an information-theoretic approach to track mobile users in PCS networks paper_content: The complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework. Shannon's entropy measure is identified as a basis for comparing user mobility models. By building and maintaining a dictionary of individual user's path updates (as opposed to the widely used location updates), the proposed adaptive on-line algorithm can learn subscribers' profiles. This technique evolves out of the concepts of lossless compression. The compressibility of the variable-to-fixed length encoding of the acclaimed Lempel-Ziv family of algorithms reduces the update cost, whereas their built-in predictive power can be effectively used to reduce paging cost. --- paper_title: Active Lezi: An incremental parsing algorithm for sequential prediction paper_content: Prediction is an important component in a variety of domains in Artificial Intelligence and Machine Learning, in order that Intelligent Systems may make more informed and reliable decisions. Certain domains require that prediction be performed on sequences of events that can typically be modeled as stochastic processes. This work presents Active LeZi (ALZ), a sequential prediction algorithm that is founded on an Information Theoretic approach, and is based on the acclaimed LZ78 family of data compression algorithms. The efficacy of this algorithm in a typical Smart Environment – the Smart Home, is demonstrated by employing this algorithm to predict device usage in the home. The performance of this algorithm is tested on synthetic data sets that are representative of typical interactions between a Smart Home and the inhabitant. In addition, for the Smart Home environment, we introduce a method of learning a measure of the relative time between actions using ALZ, and demonstrate the efficacy of this approach on synthetic Smart Home data. --- paper_title: An Improved Position Prediction Algorithm Based on Active LeZi in Smart Home paper_content: In smart home environments, each inhabitant has their own habits, and these habits have certain periodic and time-varying characteristics. According to these characteristics, an improved Time-varying LeZi algorithm (TALZ) is proposed, which is based on the Active LeZi algorithm (ALZ).
Compared with other models, the results show that TALZ has lower time complexity and higher prediction accuracy on data collected in the laboratory, and that it is suitable for real-time, accurate position prediction in smart home environments. --- paper_title: Compression of Individual Sequences via Variable-Rate Coding paper_content: Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity ρ(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of ρ(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences. --- paper_title: A customized flocking algorithm for swarms of sensors tracking a swarm of targets paper_content: Wireless mobile sensor networks (WMSNs) are groups of mobile sensing agents with multi-modal sensing capabilities that communicate over wireless networks. WMSNs have more flexibility in terms of deployment and exploration abilities over static sensor networks. Sensor networks have a wide range of applications in security and surveillance systems, environmental monitoring, data gathering for network-centric healthcare systems, monitoring seismic activities and atmospheric events, tracking traffic congestion and air pollution levels, localization of autonomous vehicles in intelligent transportation systems, and detecting failures of sensing, storage, and switching components of smart grids. The above applications require target tracking for processes and events of interest occurring in an environment. Various methods and approaches have been proposed in order to track one or more targets in a pre-defined area. Usually, this turns out to be a complicated job involving higher order mathematics coupled with artificial intelligence due to the dynamic nature of the targets. To optimize the resources we need to have an approach that works in a more straightforward manner while resulting in fairly satisfactory data. In this paper we have discussed the various cases that might arise while flocking a group of sensors to track targets in a given environment. The approach has been developed from scratch although some basic assumptions have been made keeping in mind some previous theories. This paper outlines a customized approach for feasibly tracking swarms of targets in a specific area so as to minimize the resources and optimize tracking efficiency.
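LeZi-update, Active LeZi and TALZ above all grow an LZ78-style phrase trie over the symbolic event history and then reuse the phrase statistics to predict the next symbol. The toy predictor below sketches only that shared core idea (incremental parsing into a frequency trie plus longest-suffix lookup); it is not the published algorithms themselves, which layer further refinements on top of this parse:

```python
from collections import defaultdict

class LZ78Predictor:
    """Toy LZ78-style parser: phrases go into a frequency trie that doubles as a predictor."""

    def __init__(self):
        self.trie = defaultdict(int)   # phrase (tuple of symbols) -> count
        self.phrase = ()               # phrase currently being grown

    def add(self, symbol):
        candidate = self.phrase + (symbol,)
        self.trie[candidate] += 1
        # LZ78 rule: keep extending the phrase while it is already known, else start over.
        self.phrase = candidate if self.trie[candidate] > 1 else ()

    def predict(self, context):
        """Most frequent symbol following the longest matching suffix of `context`."""
        for start in range(len(context) + 1):            # try the longest suffix first
            suffix = tuple(context[start:])
            followers = {p[-1]: c for p, c in self.trie.items()
                         if len(p) == len(suffix) + 1 and p[:-1] == suffix}
            if followers:
                return max(followers, key=followers.get)
        return None

# Device-event history, e.g. from a smart-home event log (symbols are invented).
history = list("abcabdabcabdabc")
model = LZ78Predictor()
for s in history:
    model.add(s)
print(model.predict(list("ab")))   # -> 'c' for this toy history
```

For this toy history the longest matching suffix "ab" has been followed by 'c' more often than by 'd', so 'c' is returned.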
--- paper_title: A Knowledge-Driven Approach to Activity Recognition in Smart Homes paper_content: This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. --- paper_title: A Minimalist Flocking Algorithm for Swarm Robots paper_content: In this paper we describe a low-end and easy to implement flocking algorithm which was developed for very simple swarm robots and which works without communication, memory or global information. By adapting traditional flocking algorithms and eliminating the need for communication, we created an algorithm with emergent flocking properties. We analyse its potential for aggregating an initially scattered robot swarm, which is not a trivial task for robots that only have local information. --- paper_title: SPEED: An Inhabitant Activity Prediction Algorithm for Smart Homes paper_content: This paper proposes an algorithm, called sequence prediction via enhanced episode discovery (SPEED), to predict inhabitant activity in smart homes. SPEED is a variant of the sequence prediction algorithm. It works with the episodes of smart home events that have been extracted based on the on-off states of home appliances. An episode is a set of sequential user activities that periodically occur in smart homes. The extracted episodes are processed and arranged in a finite-order Markov model. A method based on the prediction by partial matching (PPM) algorithm is applied to predict the next activity from the previous history. The result shows that SPEED achieves an 88.3% prediction accuracy, which is better than LeZi Update, Active LeZi, IPAM, and C4.5. --- paper_title: Nash Q-Learning for General-Sum Stochastic Games paper_content: We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. 
Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance. --- paper_title: Context-aware resource management in multi-inhabitant smart homes: a Nash H-learning based approach paper_content: A smart home aims at building intelligent automation with the goal of providing its inhabitants with maximum possible comfort while minimizing resource consumption and thus the overall cost of maintaining the home. 'Context awareness' is perhaps the most salient feature of such an intelligent environment. Clearly, an inhabitant's mobility and activities play a significant role in defining his contexts in and around the home. Although there exists an optimal algorithm for location and activity tracking of a single inhabitant, the correlation and dependence between multiple inhabitants' contexts within the same environment make the location and activity tracking more challenging. In this paper, we first prove that the optimal location prediction across multiple inhabitants in smart homes is an NP-hard problem. Next, to capture the correlation and interactions of different inhabitants' movements (and hence activities), we develop a novel framework based on a game theoretic, Nash H-learning approach that attempts to minimize the joint location uncertainty. The framework achieves a Nash equilibrium such that no inhabitant is given preference over others. This results in more accurate prediction of contexts and better adaptive control of automated devices, leading to a mobility-aware resource (say, energy) management scheme in multi-inhabitant smart homes. Experimental results demonstrate that the proposed framework is capable of adaptively controlling a smart environment, thus reducing energy consumption and enhancing the comfort of the inhabitants. --- paper_title: Research on Mining Association Behavior of Smart Home Users Based on Apriori Algorithm paper_content: This paper mainly studies the improvement of the Apriori algorithm in the context of a smart home system. Association rule algorithms can facilitate the analysis of smart home user behavior. The Apriori algorithm is one of the classical association rule algorithms. However, the efficiency of the Apriori algorithm is not high, and there is still room for improvement in the smart home setting. Based on the characteristics of the smart home system and its terminal operation platform, this paper optimizes the Apriori algorithm. By introducing an auxiliary matrix, the number of database scans is reduced and the computation of support counts is accelerated. Simulation results show that the optimized Apriori algorithm is more efficient than the original algorithm. 
With the rapid advance of the Internet of Things, and with the improvement of people's living standards and housing conditions, digital intelligence has entered everyday life. Smart home products have already reached ordinary families and greatly simplified daily living, making a new lifestyle possible. Smart home products cannot remain limited to the original remote-control mode: bundling multiple user behaviors and supporting concurrent operation will further streamline operations and raise the level of intelligence. Custom profiles are one manifestation of this. Continually improving the user experience and simplifying user operations is the next target for smart products. By mining strongly associated user behaviors through data mining analysis and then recommending them as a personalized user profile, the burden of manually defining custom profiles can be removed, adding intelligence and matching user preferences during operation. In data mining, association rules are an important topic: association rules can uncover relationships that lie hidden behind the data, and they can be applied to the study of associations in user behavior in the smart home. The most classical association rule algorithm is the Apriori algorithm, an iterative mining method that searches for frequent itemsets. The mining process of the Apriori algorithm is complex because of this iteration. Based on the characteristics of the smart home system, this paper proposes an improved Apriori algorithm, which makes it more suitable for implementation and application on the terminal platform. --- paper_title: Keeping the Resident in the Loop: Adapting the Smart Home to the User paper_content: Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in residents' daily activities and to generate automation policies that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident's implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. --- paper_title: A unified view of the apriori-based algorithms for frequent episode discovery paper_content: The frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequencies. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. 
This unified view allows one to gain insights into different frequencies, and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various counting algorithms as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps to consider generalization of the algorithm to count episodes with general partial orders. --- paper_title: Improving home automation by discovering regularly occurring device usage patterns paper_content: The data stream captured by recording inhabitant-device interactions in an environment can be mined to discover significant patterns, which an intelligent agent could use to automate device interactions. However, this knowledge discovery problem is complicated by several challenges, such as excessive noise in the data, data that does not naturally exist as transactions, a need to operate in real time, and a domain where frequency may not be the best discriminator. We propose a novel data mining technique that addresses these challenges and discovers regularly-occurring interactions with a smart home. We also discuss a case study that shows the data mining technique can improve the accuracy of two prediction algorithms, thus demonstrating multiple uses for a home automation system. Finally, we present an analysis of the algorithm and results obtained using inhabitant interactions. --- paper_title: A Time Series Based Sequence Prediction Algorithm to Detect Activities of Daily Living in Smart Home paper_content: OBJECTIVES: The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists persons who need special care and safety in their daily life. This can be reached by collecting ADL (activities of daily living) data and analyzing them further within existing computing elements. In this research, a very recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified and, in order to improve accuracy, a time component is included. METHODS: The modified SPEED, or M-SPEED, is a sequence prediction algorithm that modifies the previous SPEED algorithm by using the time duration of appliances' ON-OFF states to decide the next state. M-SPEED discovers periodic episodes of inhabitant behavior, trains on the learned episodes, and makes decisions based on the obtained knowledge. RESULTS: The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than other time prediction algorithms like PUBS, ALZ with temporal rules, and the previous SPEED. CONCLUSIONS: Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people. --- paper_title: TEMPORAL MODELING OF HUMAN ACTIVITY IN SMART HOMES paper_content: Recognition of human activity is a key challenge in designing an effective smart home. The paper proposes a novel algorithm to recognize activities of daily living (ADL) of the resident. It provides analysis and mathematical modeling of the temporal intervals of events. The opposite entity states are used to extract the pattern of the event sequence. 
Each extracted episode represents a distinct task of the resident. Results show that the algorithm can successfully identify 135 unique tasks of different lengths with temporal characteristics. The analysis confirms that the temporal pattern follows a normal distribution, which can be modeled by a Gaussian function. --- paper_title: A Novel Deployment Scheme for Green Internet of Things paper_content: The Internet of Things (IoT) has been realized as one of the most promising networking paradigms that bridge the gap between the cyber and physical world. Developing green deployment schemes for IoT is a challenging issue since IoT achieves a larger scale and becomes more complex, so that most of the current schemes for deploying wireless sensor networks (WSNs) cannot be transplanted directly to IoT. This paper addresses this challenging issue by proposing a deployment scheme to achieve green networked IoT. The contributions made in this paper include: 1) a hierarchical system framework for a general IoT deployment, 2) an optimization model on the basis of the proposed system framework to realize green IoT, and 3) a minimal energy consumption algorithm for solving the presented optimization model. The numerical results on minimal energy consumption and network lifetime of the system indicate that the deployment scheme proposed in this paper is more flexible and energy efficient compared to a typical WSN deployment scheme, and is thus applicable to green IoT deployment. --- paper_title: Toward Efficient Multi-Keyword Fuzzy Search Over Encrypted Outsourced Data With Accuracy Improvement paper_content: Keyword-based search over encrypted outsourced data has become an important tool in the current cloud computing scenario. The majority of the existing techniques focus on multi-keyword exact match or single-keyword fuzzy search. However, those existing techniques find less practical significance in real-world applications compared with the multi-keyword fuzzy search technique over encrypted data. The first attempt to construct such a multi-keyword fuzzy search scheme was reported by Wang et al., who used locality-sensitive hashing functions and Bloom filtering to meet the goal of multi-keyword fuzzy search. Nevertheless, Wang's scheme was only effective for a one-letter mistake in a keyword and was not effective for other common spelling mistakes. Moreover, Wang's scheme was vulnerable to server out-of-order problems during the ranking process and did not consider the keyword weight. In this paper, we propose an efficient multi-keyword fuzzy ranked search scheme based on Wang et al.'s scheme that is able to address the aforementioned problems. First, we develop a new method of keyword transformation based on the uni-gram, which simultaneously improves the accuracy and creates the ability to handle other spelling mistakes. In addition, keywords with the same root can be queried using the stemming algorithm. Furthermore, we consider the keyword weight when selecting an adequate matching file set. Experiments using real-world data show that our scheme is practically efficient and achieves high accuracy. --- paper_title: Incremental learning for ν-Support Vector Regression paper_content: The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and to adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Scholkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. This is the main challenge in designing an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts. ---
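The Apriori-based references above (the association mining of smart home user behavior and the unified view of apriori-based frequent episode discovery) rely on the same level-wise candidate-generation-and-counting pattern, pruned by the anti-monotonicity of support. The following is a textbook-style sketch of that pattern over transaction-like sets of device events; it deliberately omits the auxiliary-matrix optimization and the episode-specific counting described in those papers, and the toy data and function names are illustrative assumptions.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic level-wise Apriori frequent-itemset mining.

    transactions: list of sets of items (e.g. device events observed together)
    min_support: minimum number of transactions an itemset must appear in
    Returns a dict mapping frozenset(itemset) -> support count.
    """
    # Level 1: frequent single items
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {k: v for k, v in counts.items() if v >= min_support}
    all_frequent = dict(frequent)

    k = 2
    while frequent:
        # Candidate generation over the items that survive, pruned by the
        # anti-monotonicity property: every (k-1)-subset must be frequent.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        # Support counting pass over the transaction database
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

# Toy usage: each transaction is the set of devices switched on in one evening.
logs = [{"lamp", "tv", "heater"}, {"lamp", "tv"},
        {"lamp", "kettle"}, {"lamp", "tv", "kettle"}]
print(apriori(logs, min_support=2))
```

The repeated database scans visible in the counting step are exactly what the auxiliary-matrix variant described above tries to reduce.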
Title: Survey on Prediction Algorithms in Smart Homes
Section 1: Introduction
Description 1: Introduce the concept of smart homes and the importance of prediction algorithms in enabling intelligent control and automation.
Section 2: Background of Prediction Algorithms
Description 2: Provide background information on prediction algorithms, including system models and data used in the context of smart homes.
Section 3: System Models
Description 3: Discuss various system models that are pertinent to prediction algorithms in smart homes, such as Markov models and Bayesian networks.
Section 4: Activities of Daily Living (ADLs)
Description 4: Explain how prediction algorithms utilize the concept of activities of daily living (ADLs) to forecast future events based on historical data in smart home environments.
Section 5: Location Prediction
Description 5: Discuss the importance of predicting the location of inhabitants within a smart environment and various methodologies used for this task.
Section 6: Review of Prediction Algorithms
Description 6: Extensively review several prediction algorithms used in smart homes, highlighting their features, strengths, and weaknesses.
Section 7: Active LeZi
Description 7: Detail the technical aspects and application of the Active LeZi algorithm in smart environments.
Section 8: Flocking
Description 8: Describe the flocking algorithm, its basis in clustering techniques, and its applications in smart homes.
Section 9: SPEED
Description 9: Examine the SPEED algorithm and how it uses periodic tendencies to predict user activities in smart homes.
Section 10: Nash H-Learning
Description 10: Analyze the Nash H-learning algorithm, focusing on its applications in predicting the location of multiple users within a smart home.
Section 11: Apriori Algorithm
Description 11: Explain the Apriori algorithm and its enhancements for use in smart home environments.
Section 12: Summary of Algorithms
Description 12: Summarize and compare the reviewed prediction algorithms, emphasizing their similarities and differences in approach and data handling.
Section 13: Algorithm Enhancements
Description 13: Discuss various enhancements and extensions to prediction algorithms that aim to improve performance in smart home applications.
Section 14: Episode Discovery
Description 14: Describe the episode discovery method and its role in enhancing prediction algorithms by identifying significant patterns in the data.
Section 15: Temporal Relations
Description 15: Explain the importance of temporal relations in improving prediction accuracy and anomaly detection within smart home environments.
Section 16: Conclusion
Description 16: Summarize the key points discussed in the paper and highlight the significance of prediction algorithms in the development of intelligent smart home systems.
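Several of the references above learn control policies by reinforcement learning: the MavHome work learns a policy to control the home, and the Nash Q-learning and Nash H-learning papers extend Q-value updates to multi-agent (multi-inhabitant) settings. As a point of reference, the sketch below shows a generic single-agent tabular Q-learning loop; it is not the MavHome or Nash H-learning implementation, and the environment interface (env_step), the state encoding, and the hyperparameters are assumptions made for illustration. In the Nash variants, the max over next-state Q-values is replaced by the value of a Nash equilibrium of the stage game defined by the agents' joint-action Q-values.

```python
import random
from collections import defaultdict

def q_learning(env_step, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Minimal tabular Q-learning loop (single-agent baseline).

    env_step(state, action) -> (next_state, reward, done) is assumed to be
    supplied by a smart-home simulator; rewards could encode inhabitant
    comfort and energy cost.
    """
    Q = defaultdict(float)                       # (state, action) -> value
    for _ in range(episodes):
        state, done = "start", False             # assumed initial state label
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env_step(state, action)
            # one-step temporal-difference update
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```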
Graphical Model-based Approaches to Target Tracking in Sensor Networks: An Overview of Some Recent Work and Challenges
9
--- paper_title: Probabilistic Networks and Expert Systems paper_content: From the Publisher: ::: Probabilistic expert systems are graphical networks that support the modelling of uncertainty and decisions in large complex domains, while retaining ease of calculation. Building on original research by the authors over a number of years, this book gives a thorough and rigorous mathematical treatment of the underlying ideas, structures, and algorithms, emphasizing those cases in which exact answers are obtainable. The book will be of interest to researchers and graduate students in artificial intelligence who desire an understanding of the mathematical and statistical basis of probabilistic expert systems, and to students and research workers in statistics wanting an introduction to this fascinating and rapidly developing field. The careful attention to detail will also make this work an important reference source for all those involved in the theory and applications of probabilistic expert systems. --- paper_title: Distributed fusion in sensor networks: a graphical models perspective paper_content: This paper presents an overview of research conducted to bridge the rich field of graphical models with the emerging field of data fusion for sensor networks. Both theoretical issues and prototyping applications are discussed in addition to suggesting new lines of reasoning. --- paper_title: Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues paper_content: Preface * 1 Probability Review * 2 Discrete Time Markov Models * 3 Recurrence and Ergodicity * 4 Long Run Behavior * 5 Lyapunov Functions and Martingales * 6 Eigenvalues and Nonhomogeneous Markov Chains * 7 Gibbs Fields and Monte Carlo Simulation * 8 Continuous-Time Markov Models 9 Poisson Calculus and Queues * Appendix * Bibliography * Author Index * Subject Index --- paper_title: Tree consistency and bounds on the performance of the max-product algorithm and its generalizations paper_content: Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles. We provide a novel perspective on the algorithm, which is based on the idea of reparameterizing the distribution in terms of so-called pseudo-max-marginals on nodes and edges of the graph. This viewpoint provides conceptual insight into the max-product algorithm in application to graphs with cycles. First, we prove the existence of max-product fixed points for positive distributions on arbitrary graphs. Next, we show that the approximate max-marginals computed by max-product are guaranteed to be consistent, in a suitable sense to be defined, over every tree of the graph. We then turn to characterizing the nature of the approximation to the MAP assignment computed by max-product. We generalize previous work by showing that for any graph, the max-product assignment satisfies a particular optimality condition with respect to any subgraph containing at most one cycle per connected component. We use this optimality condition to derive upper bounds on the difference between the log probability of the true MAP assignment, and the log probability of a max-product assignment. 
Finally, we consider extensions of the max-product algorithm that operate over higher-order cliques, and show how our reparameterization analysis extends in a natural manner. --- paper_title: Constructing free-energy approximations and generalized belief propagation algorithms paper_content: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a "valid" or "maxent-normal" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the "Bethe method", the "junction graph method", the "cluster variation method", and the "region graph method". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP. --- paper_title: Distributed data association for multi-target tracking in sensor networks paper_content: Associating sensor measurements with target tracks is a fundamental and challenging problem in multi-target tracking. The problem is even more challenging in the context of sensor networks, since association is coupled across the network, yet centralized data processing is in general infeasible due to power and bandwidth limitations. Hence efficient, distributed solutions are needed. We propose techniques based on graphical models to efficiently solve such data association problems in sensor networks. Our approach scales well with the number of sensor nodes in the network, and it is well-suited for distributed implementation. Distributed inference is realized by a message-passing algorithm which requires iterative, parallel exchange of information among neighboring nodes on the graph. So as to address trade-offs between inference performance and communication costs, we also propose a communication-sensitive form of message-passing that is capable of achieving near-optimal performance using far less communication. We demonstrate the effectiveness of our approach with experiments on simulated data. --- paper_title: A Distributed Algorithm for Managing Multi-target Identities in Wireless Ad-hoc Sensor Networks paper_content: This paper presents a scalable distributed algorithm for computing and maintaining multi-target identity information. 
The algorithm builds on a novel representational framework, Identity-Mass Flow, to overcome the problem of exponential computational complexity in managing multi-target identity explicitly. The algorithm uses local information to efficiently update the global multi-target identity information represented as a doubly stochastic matrix, and can be efficiently mapped to nodes in a wireless ad hoc sensor network. The paper describes a distributed implementation of the algorithm in sensor networks. Simulation results have validated the Identity-Mass Flow framework and demonstrated the feasibility of the algorithm. --- paper_title: Data association based on optimization in graphical models with application to sensor networks paper_content: We propose techniques based on graphical models for efficiently solving data association problems arising in multiple target tracking with distributed sensor networks. Graphical models provide a powerful framework for representing the statistical dependencies among a collection of random variables, and are widely used in many applications (e.g., computer vision, error-correcting codes). We consider two different types of data association problems, corresponding to whether or not it is known a priori which targets are within the surveillance range of each sensor. We first demonstrate how to transform these two problems to inference problems on graphical models. With this transformation, both problems can be solved efficiently by local message-passing algorithms for graphical models, which solve optimization problems in a distributed manner by exchange of information among neighboring nodes on the graph. Moreover, a suitably reweighted version of the max-product algorithm yields provably optimal data associations. These approaches scale well with the number of sensors in the network, and moreover are well suited to being realized in a distributed fashion. So as to address trade-offs between performance and communication costs, we propose a communication-sensitive form of message-passing that is capable of achieving near-optimal performance using far less communication. We demonstrate the effectiveness of our approach with experiments on simulated data. 
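Several of the abstracts above (the max-product reparameterization work, the GBP/Bethe free-energy paper, and the graphical-model data-association papers) rest on local message passing over a pairwise graphical model. As a concrete reference point, the following is a generic synchronous sum-product sketch on a pairwise model; it is a textbook illustration under simplifying assumptions (dense message updates, fixed iteration count), not the reweighted or communication-sensitive variants proposed in those papers. Replacing the summation with a maximization yields the max-product analogue used for MAP data association.

```python
import numpy as np

def sum_product(nodes, edges, node_pot, edge_pot, iters=50):
    """Synchronous sum-product belief propagation on a pairwise model.

    nodes: list of node ids; edges: list of (i, j) pairs
    node_pot[i]: 1-D array of local evidence for node i
    edge_pot[(i, j)]: compatibility matrix (states of i by states of j)
    Returns approximate marginals (exact if the graph is a tree).
    """
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.ones(len(node_pot[j]))
        msgs[(j, i)] = np.ones(len(node_pot[i]))
    neighbors = {n: [] for n in nodes}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)

    for _ in range(iters):
        new = {}
        for i, j in msgs:
            # product of local potential and incoming messages, except from j
            prod = node_pot[i].copy()
            for k in neighbors[i]:
                if k != j:
                    prod *= msgs[(k, i)]
            pot = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
            m = pot.T @ prod            # marginalise out the sender's state
            new[(i, j)] = m / m.sum()   # normalise for numerical stability
        msgs = new

    beliefs = {}
    for i in nodes:
        b = node_pot[i].copy()
        for k in neighbors[i]:
            b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# Toy example: two binary sensor states with a smoothing prior between them.
nodes = [0, 1]
edges = [(0, 1)]
node_pot = {0: np.array([0.9, 0.1]), 1: np.array([0.4, 0.6])}
edge_pot = {(0, 1): np.array([[0.8, 0.2], [0.2, 0.8]])}
print(sum_product(nodes, edges, node_pot, edge_pot))
```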
--- paper_title: Distributed Group Management in Sensor Networks: Algorithms and Applications to Localization and Tracking paper_content: The tradeoff between performance and scalability is a fundamental issue in distributed sensor networks. In this paper, we propose a novel scheme to efficiently organize and utilize network resources for target localization. Motivated by the essential role of geographic proximity in sensing, sensors are organized into geographically local collaborative groups. In a target tracking context, we present a dynamic group management method to initiate and maintain multiple tracks in a distributed manner. Collaborative groups are formed, each responsible for tracking a single target. The sensor nodes within a group coordinate their behavior using geographically-limited message passing. Mechanisms such as these for managing local collaborations are essential building blocks for scalable sensor network applications. --- paper_title: Information-driven dynamic sensor collaboration paper_content: This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications. --- paper_title: An Efficient Message-Passing Algorithm for Optimizing Decentralized Detection Networks paper_content: A promising feature of emerging wireless sensor networks is the opportunity for each spatially-distributed node to measure its local state and transmit only information relevant to effective global decision-making. An equally important design objective, as a result of each node's finite power, is for measurement processing to satisfy explicit constraints on, or perhaps make selective use of, the distributed algorithmic resources. 
We formulate this multi-objective design problem within the Bayesian decentralized detection paradigm, modeling resource constraints by a directed acyclic network with low-rate, unreliable communication links. Existing team theory establishes when necessary optimality conditions reduce to a convergent iterative algorithm to be executed offline (i.e., before measurements are processed). Even so, this offline algorithm has exponential complexity in the number of nodes, and its distributed implementation assumes a fully-connected communication network. We state conditions under which the offline algorithm admits an efficient message-passing interpretation, featuring linear complexity and a natural distributed implementation. We experiment with a simulated network of binary detectors, applying the message-passing algorithm to optimize the achievable tradeoff between global detection performance and network-wide online communication. The empirical analysis also exposes a design tradeoff between constraining in-network processing to preserve resources (per online measurement) and then having to consume resources (per offline reorganization) to maintain detection performance. --- paper_title: Loopy belief propagation: Convergence and effects of message errors paper_content: Belief propagation (BP) is an increasingly popular method of performing approximate inference on arbitrary graphical models. At times, even further approximations are required, whether due to quantization of the messages or model parameters, from other simplified message or model representations, or from stochastic approximation methods. The introduction of such errors into the BP message computations has the potential to affect the solution obtained adversely. We analyze the effect resulting from message approximation under two particular measures of error, and show bounds on the accumulation of errors in the system. This analysis leads to convergence conditions for traditional BP message passing, and both strict bounds and estimates of the resulting error in systems of approximate BP message passing. --- paper_title: A robust architecture for distributed inference in sensor networks paper_content: Many inference problems that arise in sensor networks require the computation of a global conclusion that is consistent with local information known to each node. A large class of these problems---including probabilistic inference, regression, and control problems---can be solved by message passing on a data structure called a junction tree. In this paper, we present a distributed architecture for solving these problems that is robust to unreliable communication and node failures. In this architecture, the nodes of the sensor network assemble themselves into a junction tree and exchange messages between neighbors to solve the inference problem efficiently and exactly. A key part of the architecture is an efficient distributed algorithm for optimizing the choice of junction tree to minimize the communication and computation required by inference. We present experimental results from a prototype implementation on a 97-node Mica2 mote network, as well as simulation results for three applications: distributed sensor calibration, optimal control, and sensor field modeling. These experiments demonstrate that our distributed architecture can solve many important inference problems exactly, efficiently, and robustly. 
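The decentralized detection abstract above formulates an offline optimization of local decision rules over a network of binary detectors. As background for that setting, here is a minimal sketch of the classical two-stage arrangement (local likelihood-ratio thresholding followed by Chair-Varshney style fusion); the Gaussian observation model, the operating points (pd, pf), and the thresholds are illustrative assumptions, and the sketch does not implement the paper's offline message-passing optimization.

```python
import numpy as np

def local_decision(x, mu0=0.0, mu1=1.0, sigma=1.0, tau=0.5):
    """Threshold the local log-likelihood ratio of a Gaussian observation."""
    llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    return int(llr > tau)

def fuse(decisions, pd=0.8, pf=0.2, prior1=0.5):
    """Chair-Varshney style fusion of binary local decisions.

    Each local decision u contributes log(pd/pf) if u = 1 and
    log((1-pd)/(1-pf)) if u = 0; the target is declared present when the
    total exceeds the prior log-odds threshold.
    """
    w1, w0 = np.log(pd / pf), np.log((1 - pd) / (1 - pf))
    score = sum(w1 if u else w0 for u in decisions)
    return int(score > np.log((1 - prior1) / prior1))

# Toy run: 10 sensors observing a target-present scene (mean 1) in unit noise.
rng = np.random.default_rng(0)
obs = rng.normal(1.0, 1.0, size=10)
u = [local_decision(x) for x in obs]
print(u, "->", fuse(u))
```

The offline design problem studied in the paper is, loosely, the choice of the local thresholds and fusion rule under communication constraints, which this fixed-parameter sketch leaves out.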
---
Title: Graphical Model-based Approaches to Target Tracking in Sensor Networks: An Overview of Some Recent Work and Challenges
Section 1: Introduction
Description 1: This section introduces sensor networks, the role of graphical models, and the context of target tracking in sensor networks.
Section 2: Graphical Models
Description 2: This section provides an overview of graphical models, focusing on Markov Random Fields and their relevance to sensor networks.
Section 3: Inference in Sensor Networks
Description 3: This section discusses how inference tasks are structured in sensor networks using graphical models and the importance of information structure.
Section 4: Inference Tasks in Target Tracking
Description 4: This section introduces key tasks in target tracking such as source localization and data association, and their graphical model representations.
Section 5: Sensor Characterization
Description 5: This section describes sensor measurement models and the general process of sensor characterization in the context of target tracking.
Section 6: Source Localization
Description 6: This section explores methodologies for localizing targets using sensor networks, including graphical models for multi-target scenarios.
Section 7: Data Association
Description 7: This section delves into the data association problem, particularly in multi-sensor, multi-target tracking and the use of graphical models for resolution.
Section 8: Track Maintenance
Description 8: This section covers the maintenance of tracks including initiation, updating, and finalization in a distributed sensor network environment.
Section 9: Discussion
Description 9: This section discusses the challenges and considerations in applying graphical models to sensor networks, especially under communication and energy constraints.
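The outline above lists sensor characterization and source localization as core inference tasks. One simple way to make the localization step concrete is Bayesian fusion of per-sensor likelihoods over a grid of candidate target positions; the sketch below does this for an assumed log-distance received-signal-strength model. The propagation constants, grid, and noise level are illustrative assumptions, and the referenced work uses richer sensor models and distributed message passing rather than a centralized grid search.

```python
import numpy as np

def grid_localization(sensor_xy, rssi, grid_x, grid_y,
                      p0=-40.0, path_loss=2.0, sigma=4.0):
    """Bayesian grid localization from received-signal-strength readings.

    Assumes rssi ~ N(p0 - 10*path_loss*log10(d), sigma^2) independently per
    sensor and fuses the log-likelihoods over a grid of candidate positions.
    """
    xs, ys = np.meshgrid(grid_x, grid_y, indexing="ij")
    log_post = np.zeros_like(xs, dtype=float)           # flat prior
    for (sx, sy), r in zip(sensor_xy, rssi):
        d = np.hypot(xs - sx, ys - sy) + 1e-6            # avoid log(0)
        mean = p0 - 10.0 * path_loss * np.log10(d)
        log_post += -0.5 * ((r - mean) / sigma) ** 2     # Gaussian log-likelihood
    i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
    return grid_x[i], grid_y[j]

# Toy example: three sensors, noiseless readings from a target near (3, 4).
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = np.array([3.0, 4.0])
readings = [-40.0 - 20.0 * np.log10(np.hypot(*(target - s))) for s in sensors]
gx = gy = np.linspace(0, 10, 101)
print(grid_localization(sensors, readings, gx, gy))
```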
Overview of Electrochemical DNA Biosensors: New Approaches to Detect the Expression of Life
10
--- paper_title: Whole genome analysis: experimental access to all genome sequenced segments through larger-scale efficient oligonucleotide synthesis and PCR. paper_content: The recent ability to sequence whole genomes allows ready access to all genetic material. The approaches outlined here allow automated analysis of sequence for the synthesis of optimal primers in an automated multiplex oligonucleotide synthesizer (AMOS). The efficiency is such that all ORFs for an organism can be amplified by PCR. The resulting amplicons can be used directly in the construction of DNA arrays or can be cloned for a large variety of functional analyses. These tools allow a replacement of single-gene analysis with a highly efficient whole-genome analysis. --- paper_title: Applications of DNA microarrays in biology. paper_content: DNA microarrays have enabled biology researchers to conduct large-scale quantitative experiments. This capacity has produced qualitative changes in the breadth of hypotheses that can be explored. In what has become the dominant mode of use, changes in the transcription rate of nearly all the genes in a genome, taking place in a particular tissue or cell type, can be measured in disease states, during development, and in response to intentional experimental perturbations, such as gene disruptions and drug treatments. The response patterns have helped illuminate mechanisms of disease and identify disease subphenotypes, predict disease progression, assign function to previously unannotated genes, group genes into functional pathways, and predict activities of new compounds. Directed at the genome sequence itself, microarrays have been used to identify novel genes, binding sites of transcription factors, changes in DNA copy number, and variations from a baseline sequence, such as in emerging strains of pathogens or complex mutations in disease-causing human genes. They also serve as a general demultiplexing tool to sort spatially the sequence-tagged products of highly parallel reactions performed in solution. A brief review of microarray platform technology options, and of the process steps involved in complete experiment workflows, is included. --- paper_title: Single-chain polymorphism analysis in long QT syndrome using planar waveguide fluorescent biosensors. paper_content: Rapid detection of single nucleotide polymorphisms (SNPs) has potential applications in both genetic screening and pharmacogenomics. Planar waveguide fluorescent biosensor technology was employed to detect SNPs using a simple hybridization assay with the complementary strand ("capture oligo") immobilized on the waveguide. This technology allows real-time measurements of DNA hybridization kinetics. Under normal conditions, both the wild-type sequence and the SNP-containing sequence will hybridize with the capture oligo, but with different reaction kinetics and equilibrium duplex concentrations. A "design of experiments" approach was used to maximize the differences in the kinetics profiles of the two. Nearly perfect discrimination can be achieved at short times (2 min) with temperatures that destabilize or melt the heteroduplex while maintaining the stability of the homoduplex. The counter ion content of the solvent was shown to have significant effect not only on the melting point of the heteroduplex and the homoduplex but also on the hybridization rate. 
Changes in both the stability and the difference between the hybridization rates of the hetero- and homoduplex were observed with varying concentrations of three different cations (Na(+), K(+), Mg(2+)). With the difference in hybridization rates maximized, discrimination between the hetero- and the homoduplex can be obtained at lower, less rigorous temperatures at hybridization times of 7.5 min or longer. --- paper_title: Next-generation DNA sequencing paper_content: DNA sequence represents a single format onto which a broad range of biological phenomena can be projected for high-throughput data collection. Over the past three years, massively parallel DNA sequencing platforms have become widely available, reducing the cost of DNA sequencing by over two orders of magnitude, and democratizing the field by putting the sequencing capacity of a major genome center in the hands of individual investigators. These new technologies are rapidly evolving, and near-term challenges include the development of robust protocols for generating sequencing libraries, building effective new approaches to data-analysis, and often a rethinking of experimental design. Next-generation DNA sequencing has the potential to dramatically accelerate biological and biomedical research, by enabling the comprehensive analysis of genomes, transcriptomes and interactomes to become inexpensive, routine and widespread, rather than requiring significant production-scale efforts. --- paper_title: Nucleic acid biosensors for environmental pollution monitoring. paper_content: Nucleic acid-based biosensors are finding increasing use for the detection of environmental pollution and toxicity. A biosensor is defined as a compact analytical device incorporating a biological or biologically-derived sensing element either integrated within or intimately associated with a physicochemical transducer. A nucleic acid-based biosensor employs as the sensing element an oligonucleotide, with a known sequence of bases, or a complex structure of DNA or RNA. Nucleic acid biosensors can be used to detect DNA/RNA fragments or either biological or chemical species. In the first application, DNA/RNA is the analyte and it is detected through the hybridization reaction (this kind of biosensor is also called a genosensor). In the second application, DNA/RNA plays the role of the receptor of specific biological and/or chemical species, such as target proteins, pollutants or drugs. Recent advances in the development and applications of nucleic acid-based biosensors for environmental application are reviewed in this article with special emphasis on functional nucleic acid elements (aptamers, DNAzymes, aptazymes) and lab-on-a-chip technology. --- paper_title: Genomic Profiling and Identification of High-Risk Uveal Melanoma by Array CGH Analysis of Primary Tumors and Liver Metastases paper_content: PURPOSE. Incurable metastases develop in approximately 50% of patients with uveal melanoma (UM). The purpose of this study was to analyze genomic profiles in a large series of ocular tumors and liver metastases and design a genome-based classifier for metastatic risk assessment. METHODS. A series of 86 UM tumors and 66 liver metastases were analyzed by using a BAC CGH (comparative genomic hybridization) microarray. A clustering was performed, and correlation with the metastatic status was sought among a subset of 71 patients with a minimum follow-up of 24 months. 
The status of chromosome 3 was further examined in the tumors, and metastases with disomy 3 were checked with an SNP microarray. A prognostic classifier was constructed using a log-linear model on minimal regions and leave-one-out cross-validation. RESULTS. The clustering divides the groups of tumors with disomy 3 and monosomy 3 into two and three subgroups, respectively. Same subgroups are found in primary tumors and in metastases, but with different frequencies. Isolated monosomy 3 was present in 0% of metastatic ocular tumors and in 3% of metastases. The highest metastatic rate in ocular tumors was observed in a subgroup defined by the gain of 8q with a proximal breakpoint, and losses of 3, 8p, and 16q, also most represented in metastases. A prognostic classifier that included the status of these markers led to an 85.9% classification accuracy. CONCLUSIONS. The analysis of the status of these specific chromosome regions by genome profiling on SNP microarrays should be a reliable tool for identifying high-risk patients in future adjuvant therapy protocols. --- paper_title: Light-generated oligonucleotide arrays for rapid DNA sequence analysis. paper_content: In many areas of molecular biology there is a need to rapidly extract and analyze genetic information; however, current technologies for DNA sequence analysis are slow and labor intensive. We report here how modern photolithographic techniques can be used to facilitate sequence analysis by generating miniaturized arrays of densely packed oligonucleotide probes. These probe arrays, or DNA chips, can then be applied to parallel DNA hybridization analysis, directly yielding sequence information. In a preliminary experiment, a 1.28 x 1.28 cm array of 256 different octanucleotides was produced in 16 chemical reaction cycles, requiring 4 hr to complete. The hybridization pattern of fluorescently labeled oligonucleotide targets was then detected by epifluorescence microscopy. The fluorescence signals from complementary probes were 5-35 times stronger than those with single or double base-pair hybridization mismatches, demonstrating specificity in the identification of complementary sequences. This method should prove to be a powerful tool for rapid investigations in human genetics and diagnostics, pathogen detection, and DNA molecular recognition. --- paper_title: Biochemical applications of ultrathin films of enzymes, polyions and DNA. paper_content: This feature article summarizes recent applications of ultrathin films of enzymes and DNA assembled layer-by-layer (LbL). Using examples mainly from our own research, we focus on systems developed for biocatalysis and biosensors for toxicity screening. Enzyme-poly(L-lysine) (PLL) films, especially when stabilized by crosslinking, can be used for biocatalysis at unprecedented high temperatures or in acidic or basic solutions on electrodes or sub-micron sized beads. Such films have bright prospects for chiral synthesis and biofuel cells. Excellent bioactivity and retention of enzyme structure in these films facilitates their use in detailed kinetic studies. Biosensors and arrays employing DNA-enzyme films show great promise in predicting genotoxicity of new drug and chemical product candidates. These devices combine metabolic biocatalysis, reactive metabolite-DNA reactions, and DNA damage detection. Catalytic voltammetry or electrochemiluminescence (ECL) can be used for high throughput arrays utilizing multiple LbL "spots" of DNA, enzyme and metallopolymer. 
DNA-enzyme films can also be used to produce nucleobase adduct toxicity biomarkers for detection by LC-MS. These approaches provide valuable high throughput tools for drug and chemical product development and toxicity prediction. --- paper_title: Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray paper_content: A high-capacity system was developed to monitor the expression of many genes in parallel. Microarrays prepared by high-speed robotic printing of complementary DNAs on glass were used for quantitative expression measurements of the corresponding genes. Because of the small format and high density of the arrays, hybridization volumes of 2 microliters could be used that enabled detection of rare transcripts in probe mixtures derived from 2 micrograms of total cellular messenger RNA. Differential expression measurements of 45 Arabidopsis genes were made by means of simultaneous, two-color fluorescence hybridization. --- paper_title: Expression profiling using cDNA microarrays paper_content: cDNA microarrays are capable of profiling gene expression patterns of tens of thousands of genes in a single experiment. DNA targets, in the form of 3´ expressed sequence tags (ESTs), are arrayed onto glass slides (or membranes) and probed with fluorescent– or radioactively–labelled cDNAs. Here, we review technical aspects of cDNA microarrays, including the general principles, fabrication of the arrays, target labelling, image analysis and data extraction, management and mining. --- paper_title: Nucleic acid approaches for detection and identification of biological warfare and infectious disease agents. paper_content: Biological warfare agents are the most problematic of the weapons of mass destruction and terror. Both civilian and military sources predict that over the next decade the threat from proliferation of these agents will increase significantly. In this review we summarize the state of the art in detection and identification of biological threat agents based on PCR technology with emphasis on the new technology of microarrays. The advantages and limitations of real-time PCR technology and a review of the literature as it applies to pathogen and virus detection are presented. The paper covers a number of issues related to the challenges facing biological threat agent detection technologies and identifies critical components that must be overcome for the emergence of reliable PCR-based DNA technologies as bioterrorism countermeasures and for environmental applications. The review evaluates various system components developed for an integrated DNA microchip and the potential applications of the next generation of fully automated DNA analyzers with integrated sample preparation and biosensing elements. The article also reviews promising devices and technologies that are near to being, or have been, commercialized. --- paper_title: Part II: coordinated biosensors – development of enhanced nanobiosensors for biological and medical applications paper_content: In this review, we summarize recent developments in nanobiosensors and their applications in biology and potential in medical diagnostics. We first highlight the concept of coordinated nanobiosensors, which integrate desirable properties of the individual components: protein machinery for sensitivity and specificity of binding, peptide or nucleic acid chemistry for aligning the various electron-transducing units and the nanoelectrodes for enhancing sensitivity in electronic detection. 
The fundamental basis of coordinated nanobiosensing is in applying the precise 3D atomic resolution structural information to rationally design and fabricate biosensors with high specificity and sensitivity. Additionally, we describe several biosensors developed for detecting biologically relevant compounds, including those for hydrogen peroxide, dopamine, glucose, DNA and cytochrome C. Results from these systems highlight the potential advantages of using nanoscale biosensors and how further developments in this area will c... --- paper_title: Use of a cDNA microarray to analyse gene expression patterns in human cancer paper_content: The development and progression of cancer and the experimental reversal of tumorigenicity are accompanied by complex changes in patterns of gene expression. Microarrays of cDNA provide a powerful tool for studying these complex phenomena. The tumorigenic properties of a human melanoma cell line, UACC-903, can be suppressed by introduction of a normal human chromosome 6, resulting in a reduction of growth rate, restoration of contact inhibition, and suppression of both soft agar clonogenicity and tumorigenicity in nude mice. We used a high density microarray of 1,161 DNA elements to search for differences in gene expression associated with tumour suppression in this system. Fluorescent probes for hybridization were derived from two sources of cellular mRNA [UACC-903 and UACC-903(+6)] which were labelled with different fluors to provide a direct and internally controlled comparison of the mRNA levels corresponding to each arrayed gene. The fluorescence signals representing hybridization to each arrayed gene were analysed to determine the relative abundance in the two samples of mRNAs corresponding to each gene. Previously unrecognized alterations in the expression of specific genes provide leads for further investigation of the genetic basis of the tumorigenic phenotype of these cells. --- paper_title: Exploring the Metabolic and Genetic Control of Gene Expression on a Genomic Scale paper_content: DNA microarrays containing virtually every gene of Saccharomyces cerevisiae were used to carry out a comprehensive investigation of the temporal program of gene expression accompanying the metabolic shift from fermentation to respiration. The expression profiles observed for genes with known metabolic functions pointed to features of the metabolic reprogramming that occur during the diauxic shift, and the expression patterns of many previously uncharacterized genes provided clues to their possible functions. The same DNA microarrays were also used to identify genes whose expression was affected by deletion of the transcriptional co-repressor TUP1 or overexpression of the transcriptional activator YAP1. These results demonstrate the feasibility and utility of this approach to genomewide exploration of gene expression patterns. --- paper_title: Options available for profiling small samples: a review of sample amplification technology when combined with microarray profiling paper_content: The possibility of performing microarray analysis on limited material has been demonstrated in a number of publications. In this review we approach the technical aspects of mRNA amplification and several important implicit consequences, for both linear and exponential procedures. Amplification efficiencies clearly allow profiling of extremely small samples. 
The conservation of transcript abundance is the most important issue regarding the use of sample amplification in combination with microarray analysis, and this aspect has generally been found to be acceptable, although demonstrated to decrease in highly diluted samples. The fact that variability and discrepancies in microarray profiles increase with minute sample sizes has been clearly documented, but for many studies this does not appear to have affected the biological conclusions. We suggest that this is due to the data analysis approach applied, and the consequence is the chance of presenting misleading results. We discuss the issue of amplification sensitivity limits in the light of reports on fidelity, published data from reviewed articles and data analysis approaches. These are important considerations to be reflected in the design of future studies and when evaluating biological conclusions from published microarray studies based on extremely low input RNA quantities. --- paper_title: Reconstruction and functional analysis of altered molecular pathways in human atherosclerotic arteries paper_content: Background: Atherosclerosis affects aorta, coronary, carotid, and iliac arteries more frequently than any other body vessel. There may be common molecular pathways sustaining this process. Plaque presence and diffusion are revealed by circulating factors that can mediate systemic reaction leading to plaque rupture and thrombosis. Results: We used DNA microarrays and meta-analysis to study how the presence of calcified plaque modifies human coronary and carotid gene expression. We identified a series of potential human atherogenic genes that are integrated in functional networks involved in atherosclerosis. Caveolae and JAK/STAT pathways, and S100A9/S100A8 interacting proteins are certainly involved in the development of vascular disease. We found that the system of caveolae is directly connected with genes that respond to hormone receptors, and indirectly with the apoptosis pathway. Cytokines, chemokines and growth factors released in the blood flux were investigated in parallel. High levels of RANTES, IL-1ra, MIP-1alpha, MIP-1beta, IL-2, IL-4, IL-5, IL-6, IL-7, IL-17, PDGF-BB, VEGF and IFN-gamma were found in plasma of atherosclerotic patients and might also be integrated in the molecular networks underlying atherosclerotic modifications of these vessels. Conclusion: The pattern of cytokine and S100A9/S100A8 up-regulation characterizes atherosclerosis as a proinflammatory disorder. Activation of the JAK/STAT pathway is confirmed by the up-regulation of IL-6, STAT1, ISGF3G and IL10RA genes in coronary and carotid plaques. The functional network constructed in our research is evidence of the central role of STAT proteins and the caveolae system in preserving the plaque. Moreover, Cav-1 is involved in SMC differentiation and dyslipidemia, confirming the importance of lipid homeostasis in the atherosclerotic phenotype. --- paper_title: Terminal continuation (TC) RNA amplification without second strand synthesis paper_content: Terminal continuation (TC) RNA amplification was developed originally to reproducibly and inexpensively amplify RNA. The TC RNA amplification method has been improved further by obviating second strand DNA synthesis, a cost-effective protocol that takes less time to perform with fewer manipulations required for RNA amplification.
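Before the amplification results that follow, the yield and fidelity questions raised above can be made concrete with a toy calculation: fold amplification as output over input mass, and fidelity scored as the correlation of log intensities between amplified and non-amplified profiles of the same sample. This is only a sketch under those assumptions; the input quantities and noise level are invented.

```python
import numpy as np

def amplification_fold(input_ng, output_ng):
    """Fold amplification as output mass over input mass (e.g., cRNA vs. input mRNA)."""
    return output_ng / input_ng

def fidelity(log_unamplified, log_amplified):
    """Pearson correlation of log-intensities from amplified vs. non-amplified
    profiles of the same sample; values near 1 suggest transcript abundance
    rankings were largely conserved."""
    return np.corrcoef(log_unamplified, log_amplified)[0, 1]

# Illustrative numbers only: 10 ng input amplified to 4 micrograms of cRNA.
print(f"fold amplification: {amplification_fold(10, 4000):.0f}x")
x = np.log2(np.random.lognormal(8, 1, 500))          # unamplified profile
y = x + np.random.normal(0, 0.4, 500)                # amplified profile with added noise
print(f"fidelity (Pearson r): {fidelity(x, y):.2f}")
```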
Results demonstrate that TC RNA amplification without second strand synthesis does not differ from the original protocol using RNA harvested from mouse brain and from hippocampal neurons obtained via laser capture microdissection from postmortem human brains. The modified TC RNA amplification method can discriminate single cell gene expression profiles between normal control and Alzheimer's disease hippocampal neurons indistinguishable from the original protocol. Thus, TC RNA amplification without second strand synthesis is a reproducible, time- and cost-effective method for RNA amplification from minute amounts of input RNA, and is compatible with microaspiration strategies and subsequent microarray analysis as well as quantitative real-time PCR. --- paper_title: Electrochemical detection of a breast cancer susceptible gene using cDNA immobilized chitosan-co-polyaniline electrode paper_content: Abstract An electrochemical breast cancer biosensor based on a chitosan-co-polyaniline (CHIT-co-PANI) copolymer coated onto indium–tin-oxide (ITO) was fabricated by immobilizing the complementary DNA (cDNA) probe (42 bases long) associated with the breast cancer susceptible gene BRCA1. Both the CHIT-co-PANI/ITO and the cDNA/CHIT-co-PANI/ITO electrodes were characterized with Fourier transform infrared (FTIR) spectroscopy, atomic force microscopy (AFM), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). For the cDNA/CHIT-co-PANI/ITO electrode, the amperometric current decreased linearly with an increasing logarithm of molar concentration of the single-stranded target DNA (ssDNA) within the range of 0.05–25 fmol. The bioelectrode exhibited a sensitivity of 2.104 μA/fmol with a response time of 16 s. The cDNA/CHIT-co-PANI/ITO electrode had a shelf life of about six months, even when stored at room temperature. --- paper_title: Optimization of experimental design parameters for high-throughput chromatin immunoprecipitation studies paper_content: High-throughput, microarray-based chromatin immunoprecipitation (ChIP-chip) technology allows in vivo elucidation of transcriptional networks. However this complex is not yet readily accessible, in part because its many parameters have not been systematically evaluated and optimized. We address this gap by systematically assessing experimental-design parameters including antibody purity, dye-bias, array-batch, inter-day hybridization bias, amplification method and choice of hybridization control. The combined performance of these optimized parameters shows a 90% validation rate in ChIP-chip analysis of Myc genomic binding in HL60 cells using two different microarray platforms. Increased sensitivity and decreased noise in ChIP-chip assays will enable wider use of this methodology to accurately and affordably elucidate transcriptional networks. --- paper_title: Disposable nucleic acid biosensors based on gold nanoparticle probes and lateral flow strip. paper_content: In this article, we describe a disposable nucleic acid biosensor (DNAB) for low-cost and sensitive detection of nucleic acid samples in 15 min. Combining the unique optical properties of gold nanoparticles (Au-NP) and the high efficiency of chromatographic separation, sandwich-type DNA hybridization reactions were realized on the lateral flow strips, which avoid multiple incubation, separation, and washing steps in the conventional nucleic acid biosensors. 
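The amperometric BRCA1 electrode above reports a current that is linear in the logarithm of target concentration; such data are typically used by fitting a least-squares calibration line and inverting it for unknown samples. A minimal sketch follows with invented calibration points (the reported 2.104 uA/fmol sensitivity is not reproduced here).

```python
import numpy as np

# Hypothetical calibration: currents (microamps) at known target amounts (fmol).
conc_fmol = np.array([0.05, 0.25, 1.0, 5.0, 25.0])
current_uA = np.array([3.1, 4.6, 5.9, 7.4, 8.9])     # illustrative values only

# Fit current = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(conc_fmol), current_uA, 1)

def estimate_concentration(i_measured):
    """Invert the calibration line to estimate an unknown target amount (fmol)."""
    return 10 ** ((i_measured - intercept) / slope)

print(f"slope = {slope:.2f} uA per decade, intercept = {intercept:.2f} uA")
print(f"unknown at 6.5 uA ~= {estimate_concentration(6.5):.2f} fmol")
```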
The captured Au-NP probes on the test zone and control zone of the biosensor produced the characteristic red bands, enabling visual detection of nucleic acid samples without instrumentation. The quantitative detection was performed by reading the intensities of the produced red bands with a portable strip reader. The parameters (e.g., the concentration of reporter probe, the size of Au-NP, the amount of Au-NP-DNA probe, lateral flow membranes, and the concentration of running buffer) that govern the sensitivity and reproducibility of the sensor were optimized. The response of the optimized device is highly linear over the range of 1-100 nM target DNA, and the limit of detection is estimated to be 0.5 nM in association with a 15 min assay time. The sensitivity of the biosensor was further enhanced by using horseradish peroxidase (HRP)-Au-NP dual labels which ensure a quite low detection limit of 50 pM. The DNAB has been applied for the detection of human genomic DNA directly with a detection limit of 2.5 microg/mL (1.25 fM) by adopting well-designed DNA probes. The new nucleic acid biosensor thus provides a rapid, sensitive, low cost, and quantitative tool for the detection of nucleic acid samples. It shows great promise for in-field and point-of-care diagnosis of genetic diseases and detection of infectious agents or warning against biowarfare agents. --- paper_title: Genomewide Identification of Protein Binding Locations Using Chromatin Immunoprecipitation Coupled with Microarray paper_content: Interactions between cis-acting elements and proteins play a key role in transcriptional regulation of all known organisms. To better understand these interactions, researchers developed a method that couples chromatin immunoprecipitation with microarrays (also known as ChIP-chip), which is capable of providing a whole-genome map of protein-DNA interactions. This versatile and high-throughput strategy is initiated by formaldehyde-mediated cross-linking of DNA and proteins, followed by cell lysis, DNA fragmentation, and immunopurification. The immunoprecipitated DNA fragments are then purified from the proteins by reverse-cross-linking followed by amplification, labeling, and hybridization to a whole-genome tiling microarray against a reference sample. The enriched signals obtained from the microarray then are normalized by the reference sample and used to generate the whole-genome map of protein-DNA interactions. The protocol described here has been used for discovering the genomewide distribution of RNA polymerase and several transcription factors of Escherichia coli. --- paper_title: An optical DNA-based biosensor for the analysis of bioactive constituents with application in drug and herbal drug screening. paper_content: The efficient and rapid detection of bioactive compounds in complex matrices of different origins (natural or synthetic) is a key step in the discovery of molecules with potential application in therapy. Among them, molecules able to interact with nucleic acids can represent important targets. In this study, an optical DNA biosensor, based on surface plasmon resonance (SPR) transduction, has been studied in its potential application as new analytical device for drug screening. This device was applied to the analysis of pure synthetic or natural molecules and also to some fractions obtained by chromatographic separation of an extract of Chelidonium majus L. (great celandine), a plant containing benzo[c]phenanthridinium alkaloids having intercalating properties. 
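Detection limits such as the 0.5 nM figure quoted for the strip reader are commonly derived from replicate blank measurements and the calibration slope, using the classic 3-sigma rule. A short sketch with invented blank readings:

```python
import numpy as np

def limit_of_detection(blank_signals, slope):
    """Classic 3-sigma estimate: LOD = 3 * SD(blank) / calibration slope."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical blank band intensities (arbitrary units) and slope in units per nM.
blanks = [12.1, 11.8, 12.6, 12.3, 11.9, 12.4]
slope_units_per_nM = 1.9
print(f"estimated LOD ~= {limit_of_detection(blanks, slope_units_per_nM):.2f} nM")
```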
The ability of these molecules to interact with the double stranded nucleic acid (dsDNA) immobilised on the sensor surface has been investigated. The optical sensing relies on the SPR-based bench instrument Biacore Xtrade mark and represents an example of multiuse sensor. The results obtained demonstrate the potential application of this device for the rapid screening of bioeffective compounds. The characteristics of the biosensor offer the possibility to be coupled to chemical analysis as in hyphenated technologies. --- paper_title: Differential gene expression of peripheral blood mononuclear cells from rheumatoid arthritis patients may discriminate immunogenetic, pathogenic and treatment features. paper_content: This study aimed to evaluate the association between the differential gene expression profiling of peripheral blood mononuclear cells of rheumatoid arthritis patients with their immunogenetic (human leucocyte antigen shared-epitope, HLA-SE), autoimmune response [anti-cyclic citrullinated peptide (CCP) antibodies], disease activity score (DAS-28) and treatment (disease-modifying antirheumatic drugs and tumour necrosis factor blocker) features. Total RNA samples were copied into Cy3-labelled complementary DNA probes, hybridized onto a glass slide microarray containing 4500 human IMAGE complementary DNA target sequences. The Cy3-monocolour microarray images from patients were quantified and normalized. Analysis of the data using the significance analysis of microarrays algorithm together with a Venn diagram allowed the identification of shared and of exclusively modulated genes, according to patient features. Thirteen genes were exclusively associated with the presence of HLA-SE alleles, whose major biological function was related to signal transduction, phosphorylation and apoptosis. Ninety-one genes were associated with disease activity, being involved in signal transduction, apoptosis, response to stress and DNA damage. One hundred and one genes were associated with the presence of anti-CCP antibodies, being involved in signal transduction, cell proliferation and apoptosis. Twenty-eight genes were associated with tumour necrosis factor blocker treatment, being involved in intracellular signalling cascade, phosphorylation and protein transport. Some of these genes had been previously associated with rheumatoid arthritis pathogenesis, whereas others were unveiled for future research. --- paper_title: Active microelectronic array system for DNA hybridization, genotyping and pharmacogenomic applications. paper_content: Microelectronic arrays have been developed for DNA hybridization analysis of point mutations, single nucleotide polymorphisms, short tandem repeats and gene expression. In addition to a variety of molecular biology and genomic research applications, such devices will also be used for infectious dise --- paper_title: MicroRNA detection by microarray paper_content: MicroRNAs (miRNAs) are a class of small noncoding RNAs ∼22 nt in length that regulate gene expression and play fundamental roles in multiple biological processes, including cell differentiation, proliferation and apoptosis as well as disease processes. The study of miRNA has thus become a rapidly emerging field in life science. The detection of miRNA expression is a very important first step in miRNA exploration. Several methodologies, including cloning, northern blotting, real-time RT-PCR, microRNA arrays and ISH (in situ hybridization), have been developed and applied successfully in miRNA profiling. 
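SPR instruments such as the Biacore platform mentioned above record binding as a resonance signal over time; the simplest interpretation model is 1:1 Langmuir kinetics, dR/dt = ka*C*(Rmax - R) - kd*R during association and dR/dt = -kd*R during dissociation. The sketch below integrates that model numerically; the rate constants and analyte concentration are illustrative assumptions, not values from the cited study.

```python
import numpy as np

def sensorgram(ka, kd, conc, rmax, t_assoc=120.0, t_dissoc=180.0, dt=0.1):
    """Simulate a 1:1 Langmuir SPR sensorgram (association then dissociation).

    dR/dt = ka*C*(Rmax - R) - kd*R  while analyte is flowing,
    dR/dt = -kd*R                    once running buffer replaces it.
    """
    times = np.arange(0.0, t_assoc + t_dissoc, dt)
    response = np.zeros_like(times)
    r = 0.0
    for i, t in enumerate(times):
        c = conc if t < t_assoc else 0.0
        r += dt * (ka * c * (rmax - r) - kd * r)   # simple Euler step
        response[i] = r
    return times, response

# Illustrative constants: ka in 1/(M*s), kd in 1/s, analyte at 100 nM.
t, r = sensorgram(ka=1e5, kd=1e-3, conc=100e-9, rmax=500.0)
print(f"response at end of association: {r[int(120 / 0.1) - 1]:.1f} RU")
```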
This review discusses the main existing microRNA detection technologies, while emphasizing microRNA arrays. --- paper_title: Differential expression of microRNA species in human gastric cancer versus non-tumorous tissues paper_content: BACKGROUND AND AIM ::: MicroRNAs (miRNAs) play important roles in carcinogenesis. The global miRNA expression profile of gastric cancer has not been reported. The purpose of the present study was to determine the miRNA expression profile of gastric cancer. ::: ::: ::: METHODS ::: Total RNA were first extracted from primary gastric cancer tissues and adjacent non-tumorous tissues and then small isolated RNAs (< 300 nt) were 3'-extended with a poly(A) tail. Hybridization was carried out on a microParaflo microfluidic chip (LC Sciences, Houston, TX, USA). After hybridization detection by fluorescence labeling using tag-specific Cy3 and Cy5 dyes, hybridization images were collected using a laser scanner and digitized using Array-Pro image analysis software (Media Cybernetics, Silver Spring, MD, USA). To validate the results and investigate the biological meaning of differential expressed miRNAs, immunohistochemistry was used to detect the differential expression of target genes. ::: ::: ::: RESULTS ::: The most highly expressed miRNAs in non-tumorous tissues were miR-768-3p, miR-139-5p, miR-378, miR-31, miR-195, miR-497 and miR-133b. Three of them, miR-139-5p, miR-497 and miR-768-3p, were first found in non-tumorous tissues. The most highly expressed miRNAs in gastric cancer tissues were miR-20b, miR-20a, miR-17, miR-106a, miR-18a, miR-21, miR-106b, miR-18b, miR-421, miR-340*, miR-19a and miR-658. Among them, miR-340*, miR-421 and miR-658 were first found highly expressed in cancer cells. The expression of some target genes (such as Rb and PTEN) in cancer tissues was found to be decreased. ::: ::: ::: CONCLUSION ::: To our knowledge, this is the first report about these miRNAs associated with gastric cancer. This new information may suggest the potential roles of these miRNAs in the diagnosis of gastric cancer. --- paper_title: Electrochemical detection of harmful algae and other microbial contaminants in coastal waters using hand-held biosensors paper_content: Abstract Standard methods to identify microbial contaminants in the environment are slow, laborious, and can require specialized expertise. This study investigated electrochemical detection of microbial contaminants using commercially available, hand-held instruments. Electrochemical assays were developed for a red tide dinoflagellate ( Karenia brevis ), fecal-indicating bacteria ( Enterococcus spp.), markers indicative of human sources of fecal pollution (human cluster Bacteroides and the esp gene of Enterococcus faecium ), bacterial pathogens ( Escherichia coli 0157:H7, Salmonella spp., Campylobacter jejuni , Staphylococcus aureus ), and a viral pathogen (adenovirus). For K. brevis , two assay formats (Rapid PCR-Detect and Hybrid PCR-Detect) were tested and both provided detection limits of 10 genome equivalents for DNA isolated from K. brevis culture and amplified by PCR. Sensitivity with coastal water samples was sufficient to detect K. brevis that was “present” (⩽1000 cells/l) without yielding false positive results and the electrochemical signal was significantly different than for samples containing cells at “medium” concentrations (100,000 to 6 cells/l). Detection of K. brevis RNA was also shown. 
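Differential expression calls like those reported for the gastric-cancer miRNA profiles above usually come down to a per-feature fold change combined with a significance test across replicate arrays. A minimal sketch using Welch's t-test on placeholder data (replicate counts and effect sizes are invented):

```python
import numpy as np
from scipy import stats

def differential_expression(group_a, group_b):
    """Per-feature log2 fold change (a vs. b) and Welch t-test p-value.

    group_a, group_b : 2-D arrays of shape (replicates, features) of log2 intensities.
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    log2_fc = a.mean(axis=0) - b.mean(axis=0)
    _, pvals = stats.ttest_ind(a, b, axis=0, equal_var=False)
    return log2_fc, pvals

rng = np.random.default_rng(0)
normal = rng.normal(8.0, 0.5, size=(4, 100))
tumour = rng.normal(8.0, 0.5, size=(4, 100))
tumour[:, :5] += 2.0                      # spike five up-regulated features
fc, p = differential_expression(tumour, normal)
print("called up (log2FC > 1 and p < 0.05):", np.where((fc > 1) & (p < 0.05))[0])
```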
Multi-target capability was demonstrated with an 8-plex assay for bacterial and viral targets using isolated DNA, natural beach water spiked with human feces, and water and sediments collected from New Orleans, Louisiana following Hurricane Katrina. Furthermore, direct detection of dinoflagellate and bacterial DNA was achieved using lysed cells rather than extracted nucleic acids, allowing streamlining of the process. The methods presented can be used to rapidly (3–5 h) screen environmental water samples for the presence of microbial contaminants and have the potential to be integrated into semi-automated detection platforms. --- paper_title: Evaluation of a microarray-hybridization based method applicable for discovery of single nucleotide polymorphisms (SNPs) in the Pseudomonas aeruginosa genome paper_content: BackgroundWhole genome sequencing techniques have added a new dimension to studies on bacterial adaptation, evolution and diversity in chronic infections. By using this powerful approach it was demonstrated that Pseudomonas aeruginosa undergoes intense genetic adaptation processes, crucial in the development of persistent disease. The challenge ahead is to identify universal infection relevant adaptive bacterial traits as potential targets for the development of alternative treatment strategies.ResultsWe developed a microarray-based method applicable for discovery of single nucleotide polymorphisms (SNPs) in P. aeruginosa as an easy and economical alternative to whole genome sequencing. About 50% of all SNPs theoretically covered by the array could be detected in a comparative hybridization of PAO1 and PA14 genomes at high specificity (> 0.996). Variations larger than SNPs were detected at much higher sensitivities, reaching nearly 100% for genetic differences affecting multiple consecutive probe oligonucleotides. The detailed comparison of the in silico alignment with experimental hybridization data lead to the identification of various factors influencing sensitivity and specificity in SNP detection and to the identification of strain specific features such as a large deletion within the PA4684 and PA4685 genes in the Washington Genome Center PAO1.ConclusionThe application of the genome array as a tool to identify adaptive mutations, to depict genome organizations, and to identify global regulons by the "ChIP-on-chip" technique will expand our knowledge on P. aeruginosa adaptation, evolution and regulatory mechanisms of persistence on a global scale and thus advance the development of effective therapies to overcome persistent disease. --- paper_title: Differential microRNA expression between hepatitis B and hepatitis C leading disease progression to hepatocellular carcinoma paper_content: MicroRNA (miRNA) plays an important role in the pathology of various diseases, including infection and cancer. Using real-time polymerase chain reaction, we measured the expression of 188 miRNAs in liver tissues obtained from 12 patients with hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) and 14 patients with hepatitis C virus (HCV)-related HCC, including background liver tissues and normal liver tissues obtained from nine patients. Global gene expression in the same tissues was analyzed via complementary DNA microarray to examine whether the differentially expressed miRNAs could regulate their target genes. 
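The SNP-array evaluation above summarizes detection performance as sensitivity and specificity; given a set of calls and a reference truth set, both are simple ratios over the confusion matrix. A short sketch with placeholder boolean arrays:

```python
import numpy as np

def sensitivity_specificity(truth, called):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for boolean arrays."""
    truth, called = np.asarray(truth, bool), np.asarray(called, bool)
    tp = np.sum(truth & called)
    fn = np.sum(truth & ~called)
    tn = np.sum(~truth & ~called)
    fp = np.sum(~truth & called)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 10 true SNP positions, 8 detected, 1 false positive among 990 invariant probes.
truth = np.zeros(1000, dtype=bool); truth[:10] = True
called = np.zeros(1000, dtype=bool); called[:8] = True; called[500] = True
sens, spec = sensitivity_specificity(truth, called)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.4f}")
```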
Detailed analysis of the differentially expressed miRNA revealed two types of miRNA, one associated with HBV and HCV infections (n = 19), the other with the stage of liver disease (n = 31). Pathway analysis of targeted genes using infection-associated miRNAs revealed that the pathways related to cell death, DNA damage, recombination, and signal transduction were activated in HBV-infected liver, and those related to immune response, antigen presentation, cell cycle, proteasome, and lipid metabolism were activated in HCV-infected liver. The differences in the expression of infection-associated miRNAs in the liver correlated significantly with those observed in Huh7.5 cells in which infectious HBV or HCV clones replicated. Out of the 31 miRNAs associated with disease state, 17 were down-regulated in HCC, which up-regulated cancer-associated pathways such as cell cycle, adhesion, proteolysis, transcription, and translation; 6 miRNAs were up-regulated in HCC, which down-regulated anti-tumor immune response. Conclusion: miRNAs are important mediators of HBV and HCV infection as well as liver disease progression, and therefore could be potential therapeutic target molecules. (HEPATOLOGY 2009.) --- paper_title: DNA arrays, electronic noses and tongues, biosensors and receptors for rapid detection of toxigenic fungi and mycotoxins: A review paper_content: This paper presents an overview of how microsystem technology tools can be applied to the development of rapid, out–of–laboratory measurement capabilities for the determinations of toxigenic fungi and mycotoxins in foodstuffs. Most of the topics discussed are all under investigation within the European Commission–sponsored project Good–Food (FP6–IST). These are DNA arrays, electronic noses and electronic tongues for the detection of fungal contaminants in feed, and biosensors and chemical sensors based on microfabricated electrode systems, antibodies and novel synthetic receptors for the detection of specific mycotoxins. The approach to resolution of these difficult measurement problems in real matrices requires a multidisciplinary approach. The technology tools discussed can provide a route to the rapid, on–site generation of data that can aid the safe production of high–quality foodstuffs. --- paper_title: Polymorphisms in XRCC1 and XPG and response to platinum-based chemotherapy in advanced non-small cell lung cancer patients. paper_content: Platinum-based chemotherapeutics is the most common regimens for advanced NSCLC patients. However, it is difficult to identify platinum resistance in clinical treatment. Genetic factors are thought to represent important determinants of drug efficacy. In this study, we investigated whether single nucleotide polymorphisms (SNPs) in Xeroderma pigmentosum group G (XPG) and X-ray repair cross complementing group 1 (XRCC1) were associated with the tumor response in non-small cell lung cancer (NSCLC) patients treated with platinum-based chemotherapy in Chinese population. Totally 82 patients with advanced NSCLC were routinely treated with cisplatin or carboplatin-based chemotherapy, and clinical response was evaluated after 2-3 cycles. And 3D (three dimensions) polyacrylamide gel-based DNA microarray method was used to evaluate the genotypes of XRCC1 194 Arg/Trp, XRCC1 399Arg/Gln, XPG 46His/His and XPG 1104His/Asp in DNA from peripheral lymphocytes. 
We found that there was a significantly increased chance of treatment response to platinum-based chemotherapy with the XRCC1 194Arg/Trp genotype (odds ratio 0.429; 95% CI 0.137-1.671; P=0.035). The polymorphism of XPG 46His/His was found to be associated with clinical response in NSCLC patients P=0.047, not detected between chemotherapy response and SNPs of XRCC1 399Arg/Gln or XPG 1104His/Asp (P=0.997 0.561, respectively). Our study showed that the polymorphic status of XRCC1 194Arg/Trp might be a predictive marker of treatment response for advanced NSCLC patients and those of XPG His46His was associated with susceptibility of chemotherapy. The 3D polyacrylamide gel-based DNA microarray method was accurate, high-throughput and inexpensive, especially suitable for a large scale of SNP genotyping in population. --- paper_title: A DNA microarray system for analyzing complex DNA samples using two-color fluorescent probe hybridization. paper_content: Detecting and determining the relative abundance of diverse individual sequences in complex DNA samples is a recurring experimental challenge in analyzing genomes. We describe a general experimental approach to this problem, using microscopic arrays of DNA fragments on glass substrates for differential hybridization analysis of fluorescently labeled DNA samples. To test the system, 864 physically mapped lambda clones of yeast genomic DNA, together representing >75% of the yeast genome, were arranged into 1.8-cm x 1.8-cm arrays, each containing a total of 1744 elements. The microarrays were characterized by simultaneous hybridization of two different sets of isolated yeast chromosomes labeled with two different fluorophores. A laser fluorescent scanner was used to detect the hybridization signals from the two fluorophores. The results demonstrate the utility of DNA microarrays in the analysis of complex DNA samples. This system should find numerous applications in genome-wide genetic mapping, physical mapping, and gene expression studies. --- paper_title: Surface Plasmon Resonance for Detection of Genetically Modified Organisms in the Food Supply paper_content: A review is presented demonstrating that biospecific interaction analysis, using surface plasmon resonance (SPR) and biosensor technologies is a simple, rapid, and automatable approach to detect genetically modified organisms (GMOs). Using SPR, we were able to monitor in real-time the hybridization between oligonucleotide or polymerase chain reaction (PCR)-generated probes and target single-stranded PCR products obtained by using as substrates DNA isolated from normal or transgenic soybean and maize. This procedure allows a one-step, nonradioactive detection of GMOs. PCR-generated probes are far more efficient in detecting GMOs than are oligodeoxyribonucleotide probes. This is expected to be a very important parameter, because information on low percentage of GMOs is of great value. Determination of the ability of SPR-based analysis to quantify GMOs should be considered a major research field for future studies, especially for the analyses of food supplies. --- paper_title: Biosensors: new approaches in drug discovery paper_content: The development of biosensors for analytical purposes has attracted a great deal of attention in recent years. A biosensor is defined as an analytical device consisting of a biological component (e.g., enzyme, antibody, entire cell, DNA) and a physical transducer (e.g., electrode, optical device). 
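Association measures like the XRCC1 odds ratio reported above can be computed from a 2x2 genotype-by-response table, with the Woolf (log) method giving an approximate 95% confidence interval. The counts in this sketch are invented for illustration only, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
               responder   non-responder
    genotype+      a             b
    genotype-      c             d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts.
print(odds_ratio_ci(a=12, b=18, c=20, d=32))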
Biosensors are mostly designed for routine analysis, such as clinical diagnosis, quality control of food, in-process control of fermentations, and in environmental analysis. Many of these sensors are also suitable for screening purposes in order to find new drugs. Such systems should yield information either about compounds with known bioactivity or about the bioactivity of samples with known or unknown chemical composition. Biosensors intended for the latter purpose are essentially based on whole cells carrying receptors and ion channels at their surfaces. Miniaturization of structures, primarily based on silicon, allows integration of many sensors into arrays, which may be suitable for the screening of natural and chemical products as well as combinatorial libraries. Until now, no commercially available sensors of this kind exist but they are expected in the near future. Different biosensors, based on enzymes, antibodies, cells, artificial membranes and entire animal tissues, which can be used in drug discovery and may lead to efficient screening systems in the future, are described in this review. --- paper_title: New developments in the analysis of gene expression. paper_content: An understanding of the relationship between gene expression, protein expression and the influences of genetic responses upon gene function is vital before we can understand the complexity of genomes. Traditional methods for the study of gene expression are limited to studying small groups of genes at a time and a source of pure starting material has been difficult to obtain. Recent technological advances have enabled large numbers of genes, from specific cell populations, to be studied in a single experiment. Laser capture microdissection (LCM) and microarray technology are providing the next revolution in the study of gene expression. LCM-based molecular analysis of histopathological lesions can be applied to any disease process that is accessible through tissue sampling. Examples include: (i) mapping the field of genetic changes associated with oxidative stress; (ii) analysis of gene expression patterns in atherosclerotic tissues, sites of inflammation and Alzheimer's disease plaques; (iii) infectious micro-organism diagnosis; and (iv) typing of cells within disease foci. Microarray hybridisation glass chips spotted with sets of genes can then be used to obtain a molecular fingerprint of gene expression in the microdissected cells. The variation of expressed genes or alterations in the cellular DNA that correlate with a particular disease state can be compared within or between individual samples. The identification of gene expression patterns may provide vital information for the understanding of the disease process and may contribute to diagnostic decisions and therapies tailored to the individual patient. Molecules found to be associated with defined pathological lesions may provide clues about new therapeutic targets in the future. --- paper_title: Nucleic acid based biosensors: the desires of the user. paper_content: The need for nucleic acid based diagnostic tests has increased enormously in the last few years. On the one hand, this has been stimulated by the discovery of new hereditary genetic disease loci following the completion of the Human Genome Project, but also by the presence of new rapidly spreading viral threats, such as that of the SARS epidemic, or even micro-organisms released for the purpose of biological warfare. 
As in many instances rapid diagnoses of specific target genetic loci is required, new strategies have to be developed, which will allow this to be achieved directly at the point-of-care setting. One of these avenues being explored is that of biosensors. In this review, we provide an overview of the current state of the art concerning the high-throughput analysis of nucleic acids, and address future requirements, which will hopefully be met by new biosensor-based developments. --- paper_title: Detection of heterozygous mutations in BRCA1 using high density oligonucleotide arrays and two–colour fluorescence analysis paper_content: The ability to scan a large gene rapidly and accurately for all possible heterozygous mutations in large numbers of patient samples will be critical for the future of medicine. We have designed high-density arrays consisting of over 96,600 oligonucleotides 20-nucleotides (nt) in length to screen for a wide range of heterozygous mutations in the 3.45-kilobases (kb) exon 11 of the hereditary breast and ovarian cancer gene BRCA1. Reference and test samples were co-hybridized to these arrays and differences in hybridization patterns quantitated by two-colour analysis. Fourteen of fifteen patient samples with known mutations were accurately diagnosed, and no false positive mutations were identified in 20 control samples. Eight single nucleotide polymorphisms were also readily detected. DNA chip-based assays may provide a valuable new technology for high-throughput cost-efficient detection of genetic alterations. --- paper_title: The Cerebral Microvasculature in Schizophrenia: A Laser Capture Microdissection Study paper_content: BACKGROUND ::: Previous studies of brain and peripheral tissues in schizophrenia patients have indicated impaired energy supply to the brain. A number of studies have also demonstrated dysfunction of the microvasculature in schizophrenia patients. Together these findings are consistent with a hypothesis of blood-brain barrier dysfunction in schizophrenia. In this study, we have investigated the cerebral vascular endothelium of schizophrenia patients at the level of transcriptomics. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We used laser capture microdissection to isolate both microvascular endothelial cells and neurons from post mortem brain tissue from schizophrenia patients and healthy controls. RNA was isolated from these cell populations, amplified, and analysed using two independent microarray platforms, Affymetrix HG133plus2.0 GeneChips and CodeLink Whole Human Genome arrays. In the first instance, we used the dataset to compare the neuronal and endothelial data, in order to demonstrate that the predicted differences between cell types could be detected using this methodology. We then compared neuronal and endothelial data separately between schizophrenic subjects and controls. Analysis of the endothelial samples showed differences in gene expression between schizophrenics and controls which were reproducible in a second microarray platform. Functional profiling revealed that these changes were primarily found in genes relating to inflammatory processes. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: This study provides preliminary evidence of molecular alterations of the cerebral microvasculature in schizophrenia patients, suggestive of a hypo-inflammatory state in this tissue type. Further investigation of the blood-brain barrier in schizophrenia is warranted. 
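In two-colour resequencing arrays of the kind used above for BRCA1 exon 11, a heterozygous site appears as partial loss of the test/reference ratio at the perfect-match probe together with a gain at the probe carrying the variant base. The rule below is a deliberately simplified illustration of that idea; the thresholds are invented and this is not the published calling algorithm.

```python
def call_site(ref_probe_ratio, alt_probe_ratios, loss_threshold=0.7, gain_threshold=1.5):
    """Crude per-site call from test/reference hybridization ratios.

    ref_probe_ratio  : ratio at the probe matching the reference base.
    alt_probe_ratios : dict {base: ratio} for probes carrying alternative bases.
    A heterozygote is flagged when reference signal is partially lost and exactly
    one alternative probe gains signal.
    """
    gained = [b for b, r in alt_probe_ratios.items() if r >= gain_threshold]
    if ref_probe_ratio <= loss_threshold and len(gained) == 1:
        return f"heterozygous {gained[0]}"
    if ref_probe_ratio > loss_threshold and not gained:
        return "reference"
    return "no call"

print(call_site(0.55, {"A": 1.9, "G": 1.0, "T": 0.9}))   # -> heterozygous A
print(call_site(1.02, {"A": 1.0, "G": 0.9, "T": 1.1}))   # -> reference
```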
--- paper_title: Genome-wide Analysis of BP1 Transcriptional Targets in Breast Cancer Cell Line Hs578T paper_content: Homeobox genes are known to be critically important in tumor development and progression. The BP1 (Beta Protein 1) gene, an isoform of DLX4, belongs to the Distal-less (DLX) subfamily of homeobox genes and encodes a homeodomain-containing transcription factor. Our studies have shown that the BP1 gene was overexpressed in 81% of primary breast cancer and its expression was closely correlated with the progression of breast cancer. However, the exact role of BP1 in breast has yet to be elucidated. Therefore, it is important to explore the potential transcriptional targets of BP1 via whole genome-scale screening. In this study, we used the chromatin immunoprecipitation on chip (ChIP-on-chip) and gene expression microarray assays to identify candidate target genes and gene networks, which are directly regulated by BP1 in ER negative (ER-) breast cancer cells. After rigorous bioinformatic and statistical analysis for both ChIP-on-chip and expression microarray gene lists, 18 overlapping genes were noted and verified. Those potential target genes are involved in a variety of tumorigenic pathways, which sheds light on the functional mechanisms of BP1 in breast cancer development and progression. --- paper_title: A genome-wide scalable SNP genotyping assay using microarray technology paper_content: Oligonucleotide probe arrays have enabled massively parallel analysis of gene expression levels from a single cDNA sample. Application of microarray technology to analyzing genomic DNA has been stymied by the sequence complexity of the entire human genome. A robust, single base-resolution direct genomic assay would extend the reach of microarray technology. We developed an array-based whole-genome genotyping assay that does not require PCR and enables effectively unlimited multiplexing. The assay achieves a high signal-to-noise ratio by combining specific hybridization of picomolar concentrations of whole genome-amplified DNA to arrayed probes with allele-specific primer extension and signal amplification. As proof of principle, we genotyped several hundred previously characterized SNPs. The conversion rate, call rate and accuracy were comparable to those of high-performance PCR-based genotyping assays. --- paper_title: A novel, sensitive detection system for high-density microarrays using dendrimer technology paper_content: To improve signal detection on cDNA microarrays, we adapted a fluorescent oligonucleotide dendrimeric signal amplification system to microarray technology. This signal detection method requires 16-fold less RNA for probe synthesis, does not depend on the incorporation of fluorescent dNTPs into a reverse transcription reaction, generates a high signal-to-background ratio, and can be used to allow for multichannel detection on a single chip. Furthermore, since the dendrimers can be detected individually, it may be possible, by employing dendrimer-binding standards, to calculate the numbers of bound cDNAs can be estimated. These features make the dendrimer signal detection reagent ideal for high-throughput functional genomics research. --- paper_title: Development of mussel mRNA profiling: Can gene expression trends reveal coastal water pollution? paper_content: Marine bivalves of the genus Mytilus are intertidal filter-feeders commonly used as biosensors of coastal pollution. Mussels adjust their functions to ordinary environmental changes, e.g. 
temperature fluctuations and emersion-related hypoxia, and react to various contaminants, accumulated from the surrounding water and defining a potential health risk for sea-food consumers. Despite the increasing use of mussels in environmental monitoring, their genome and gene functions are largely unexplored. Hence, we started the systematic identification of expressed sequence tags and prepared a cDNA microarray of Mytilus galloprovincialis including 1714 mussel probes (76% singletons, ∼50% putatively identified transcripts) plus unrelated controls. To assess the potential use of the gene set represented in MytArray 1.0, we tested different tissues and groups of mussels. The resulting data highlighted the transcriptional specificity of the mussel tissues. Further testing of the most responsive digestive gland allowed correct classification of mussels treated with mixtures of heavy metals or organic contaminants (expression changes of specific genes discriminated the two pollutant cocktails). Similar analyses made a distinction possible between mussels living in the Venice lagoon (Italy) at the petrochemical district and mussels close to the open sea. The suggestive presence of gene markers tracing organic contaminants more than heavy metals in mussels from the industrial district is consistent with reported trends of chemical contamination. Further study is necessary in order to understand how much gene expression profiles can disclose the signatures of pollutants in mussel cells and tissues. Nevertheless, the gene expression patterns described in this paper support a wider characterization of the mussel transcriptome and point to the development of novel environmental metrics. --- paper_title: Atypical 11q deletions identified by array CGH may be missed by FISH panels for prognostic markers in chronic lymphocytic leukemia paper_content: Atypical 11q deletions identified by array CGH may be missed by FISH panels for prognostic markers in chronic lymphocytic leukemia --- paper_title: Detection and haplotype differentiation of Southeast Asian alpha-thalassemia using polymerase chain reaction and a piezoelectric biosensor immobilized with a single oligonucleotide probe. paper_content: DNA-based diagnosis of alpha-thalassemias routinely relies on polymerase chain reaction (PCR) and gel electrophoresis. Here, we developed a new procedure for the detection and haplotype differentiation of Southeast Asian (SEA) alpha-thalassemia using a 3-primer system for PCR coupling with a DNA-based piezoelectric biosensor. PCR products amplified from genomic DNA were differentiated directly by using a quartz crystal microbalance immobilized with a single oligonucleotide probe. The frequency changes after hybridization of the PCR products amplified from a representative sample of normal alpha-globin, SEA alpha-thalassemia heterozygote, and homozygote were 206+/-11, 256+/-5, and 307+/-3 Hz, respectively. The fabricated biosensor was evaluated through an examination of 18 blind specimens. It could accurately discriminate between normal and SEA alpha-thalassemic samples, which suggests that this biosensor system is a promising alternative technique to detect SEA alpha-thalassemia because of its specificity and less hazardous exposure as compared with conventional methods. --- paper_title: Overview of mRNA expression profiling using DNA microarrays. paper_content: DNA microarray technology allows simultaneous measurement of the mRNA levels of thousands of genes. 
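The piezoelectric thalassemia sensor above separates genotypes by the magnitude of the post-hybridization frequency shift (roughly 206, 256 and 307 Hz for normal, heterozygote and homozygote samples). One simple way to turn a reading into a call is a nearest-mean rule over those reported shifts; the rejection distance below is an assumption, and a practical assay would need proper uncertainty handling.

```python
# Mean frequency shifts (Hz) reported for each genotype class.
CLASS_MEANS = {"normal": 206.0, "SEA heterozygote": 256.0, "SEA homozygote": 307.0}

def classify_shift(delta_f_hz, max_distance=30.0):
    """Assign a QCM frequency shift to the nearest genotype mean,
    rejecting readings too far from every class centre."""
    label, mean = min(CLASS_MEANS.items(), key=lambda kv: abs(kv[1] - delta_f_hz))
    return label if abs(mean - delta_f_hz) <= max_distance else "indeterminate"

for shift in (210.0, 259.0, 301.0, 400.0):
    print(shift, "->", classify_shift(shift))
```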
This powerful technology has applications in addressing many biological questions that were not approachable previously; however, the enormous size of microarray data sets leads to issues of experimental design and statistical analysis that are unfamiliar to many molecular biologists. The type of array used, the design of the biological experiment, the number of experimental replicates, and the statistical method for data analysis should all be chosen based on the scientific goals of the investigator. This overview presents a discussion of the relative merits and limitations of various methods with respect to some common applications of microarray experiments. --- paper_title: Gene transcription profiling in pollutant exposed mussels (Mytilus spp.) using a new low-density oligonucleotide microarray. paper_content: In this study we describe the design and implementation of a novel low-density oligonucleotide microarray (the "Mytox-chip"). It consists of 24 mussel genes involving both normalizing elements and stress response related genes, each represented on the array with one or two different 50 mer oligonucleotide-probe reporters spotted in replicated samples on glass-activated slides. Target genes were selected on the basis of their potential involvement in mechanisms of pollutant and xenobiotic response. They are implicated in both basic and stress related cellular processes such as shock response, biotransformation and excretion, cell-cycle regulation, immune defense, drug metabolism, etc. The microarray was tested on mussels exposed to sublethal concentrations of mercury or a crude North Sea oil mixture. RNA samples were extracted from digestive glands of control and treated mussels for the synthesis of fluorescence labeled cDNAs to be used in dual color hybridizations. Transcription rates of two metallothionein iso-genes (mt10 and mt20), a p53-like gene and actin were quantitatively estimated also by real-time PCR to confirm microarray data. Significant alterations in the gene transcription patterns were seen in response to both treatments. --- paper_title: Biosensors: Applications for Dairy Food Industry paper_content: Abstract Biosensors are defined as indicators of biological compounds that can be as simple as temperature-sensitive paint or as complex as DNA-RNA probes. Food microbiologists are constantly seeking rapid and reliable automated systems for the detection of biological activity. Biosensors provide sensitive, miniaturized systems that can be used to detect unwanted microbial activity or the presence of a biologically active compound, such as glucose or a pesticide. Immunodiagnostics and enzyme biosensors are two of the leading technologies that have had the greatest impact on the food industry. The use of these two systems has reduced the time for detection of pathogens such as Salmonella to 24 h and has provided detection of biological compounds such as cholesterol or chymotrypsin. The continued development of biosensor technology will soon make available "on-line quality control" of food production, which will not only reduce cost of food production but will also provide greater safety and increased food quality. --- paper_title: What would you do if you could sequence everything? paper_content: It could be argued that the greatest transformative aspect of the Human Genome Project has been not the sequencing of the genome itself, but the resultant development of new technologies. 
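One of the recurring statistical issues raised above is multiple testing across thousands of genes per array; a standard remedy is the Benjamini-Hochberg false discovery rate procedure. A minimal sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of features rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.where(passed)[0])           # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.22, 0.49, 0.74]
print(benjamini_hochberg(pvals, alpha=0.05))
```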
A host of new approaches has fundamentally changed the way we approach problems in basic and translational research. Now, a new generation of high-throughput sequencing technologies promises to again transform the scientific enterprise, potentially supplanting array-based technologies and opening up many new possibilities. By allowing DNA/RNA to be assayed more rapidly than previously possible, these next-generation platforms promise a deeper understanding of genome regulation and biology. Significantly enhancing sequencing throughput will allow us to follow the evolution of viral and bacterial resistance in real time, to uncover the huge diversity of novel genes that are currently inaccessible, to understand nucleic acid therapeutics, to better integrate biological information for a complete picture of health and disease at a personalized level and to move to advances that we cannot yet imagine. --- paper_title: Discovery and analysis of inflammatory disease-related genes using cDNA microarrays. paper_content: cDNA microarray technology is used to profile complex diseases and discover novel disease-related genes. In inflammatory disease such as rheumatoid arthritis, expression patterns of diverse cell types contribute to the pathology. We have monitored gene expression in this disease state with a microarray of selected human genes of probable significance in inflammation as well as with genes expressed in peripheral human blood cells. Messenger RNA from cultured macrophages, chondrocyte cell lines, primary chondrocytes, and synoviocytes provided expression profiles for the selected cytokines, chemokines, DNA binding proteins, and matrix-degrading metalloproteinases. Comparisons between tissue samples of rheumatoid arthritis and inflammatory bowel disease verified the involvement of many genes and revealed novel participation of the cytokine interleukin 3, chemokine Gro alpha and the metalloproteinase matrix metallo-elastase in both diseases. From the peripheral blood library, tissue inhibitor of metalloproteinase 1, ferritin light chain, and manganese superoxide dismutase genes were identified as expressed differentially in rheumatoid arthritis compared with inflammatory bowel disease. These results successfully demonstrate the use of the cDNA microarray system as a general approach for dissecting human diseases. --- paper_title: Optimized T7 amplification system for microarray analysis paper_content: Glass cDNA microarray technologies offer a highly parallel approach for profiling expressed gene sequences in disease-relevant tissues. However, standard hybridization and detection protocols are insufficient for milligram quantities of tissue, such as those derived from needle biopsies. Amplification systems utilizing T7 RNA polymerase can provide multiple cRNA copies from mRNA transcripts, permitting microarray studies with reduced sample inputs. Here, we describe an optimized T7-based amplification system for microarray analysis that yields between 200- and 700-fold amplification. This system was evaluated with both mRNA and total RNA samples and provided microarray sensitivity and precision that are comparable to our standard production process without amplification. The size distributions of amplified cRNA ranged from 200 bp to 4 kb and were similar to original mRNA profiles. These amplified cRNA samples were fluorescently labeled by reverse transcription and hybridized to microarrays comprising appr... 
--- paper_title: Yeast microarrays for genome wide parallel genetic and gene expression analysis paper_content: We have developed high-density DNA microarrays of yeast ORFs. These microarrays can monitor hybridization to ORFs for applications such as quantitative differential gene expression analysis and screening for sequence polymorphisms. Automated scripts retrieved sequence information from public databases to locate predicted ORFs and select appropriate primers for amplification. The primers were used to amplify yeast ORFs in 96-well plates, and the resulting products were arrayed using an automated micro arraying device. Arrays containing up to 2,479 yeast ORFs were printed on a single slide. The hybridization of fluorescently labeled samples to the array were detected and quantitated with a laser confocal scanning microscope. Applications of the microarrays are shown for genetic and gene expression analysis at the whole genome level. --- paper_title: Calibrating the Performance of SNP Arrays for Whole-Genome Association Studies paper_content: To facilitate whole-genome association studies (WGAS), several high-density SNP genotyping arrays have been developed. Genetic coverage and statistical power are the primary benchmark metrics in evaluating the performance of SNP arrays. Ideally, such evaluations would be done on a SNP set and a cohort of individuals that are both independently sampled from the original SNPs and individuals used in developing the arrays. Without utilization of an independent test set, previous estimates of genetic coverage and statistical power may be subject to an overfitting bias. Additionally, the SNP arrays' statistical power in WGAS has not been systematically assessed on real traits. One robust setting for doing so is to evaluate statistical power on thousands of traits measured from a single set of individuals. In this study, 359 newly sampled Americans of European descent were genotyped using both Affymetrix 500K (Affx500K) and Illumina 650Y (Ilmn650K) SNP arrays. From these data, we were able to obtain estimates of genetic coverage, which are robust to overfitting, by constructing an independent test set from among these genotypes and individuals. Furthermore, we collected liver tissue RNA from the participants and profiled these samples on a comprehensive gene expression microarray. The RNA levels were used as a large-scale set of quantitative traits to calibrate the relative statistical power of the commercial arrays. Our genetic coverage estimates are lower than previous reports, providing evidence that previous estimates may be inflated due to overfitting. The Ilmn650K platform showed reasonable power (50% or greater) to detect SNPs associated with quantitative traits when the signal-to-noise ratio (SNR) is greater than or equal to 0.5 and the causal SNP's minor allele frequency (MAF) is greater than or equal to 20% (N = 359). In testing each of the more than 40,000 gene expression traits for association to each of the SNPs on the Ilmn650K and Affx500K arrays, we found that the Ilmn650K yielded 15% times more discoveries than the Affx500K at the same false discovery rate (FDR) level. --- paper_title: Exploring nature's plasticity with a flexible probing tool, and finding new ways for its electronic distribution. paper_content: Concepts and results are described for the use of a single, but extremely flexible, probing tool to address a wide variety of genomic questions. 
This is achieved by transforming genomic questions into a software file that is used as the design scheme for potentially any genomic assay in a microarray format. Microarray fabrication takes place in three-dimensional microchannel reaction carriers by in situ synthesis based on spatial light modulation. This set-up allows for maximum flexibility in design and realization of genomic assays. Flexibility is achieved at the molecular, genomic and assay levels. We have applied this technology to expression profiling and genotyping experiments. --- paper_title: Fabrication of high quality microarrays. paper_content: Fabrication of DNA microarray demands that between ten (diagnostic microarrays) and many hundred thousands of probes (research or screening microarrays) are efficiently immobilised to a glass or plastic surface using a suitable chemistry. DNA microarray performance is measured by parameters like array geometry, spot density, spot characteristics (morphology, probe density and hybridised density), background, specificity and sensitivity. At least 13 factors affect these parameters and factors affecting fabrication of microarrays are used in this review to compare different fabrication methods (spotted microarrays and in situ synthesis of microarrays) and immobilisation chemistries. --- paper_title: Applications of DNA microarrays in biology. paper_content: DNA microarrays have enabled biology researchers to conduct large-scale quantitative experiments. This capacity has produced qualitative changes in the breadth of hypotheses that can be explored. In what has become the dominant mode of use, changes in the transcription rate of nearly all the genes in a genome, taking place in a particular tissue or cell type, can be measured in disease states, during development, and in response to intentional experimental perturbations, such as gene disruptions and drug treatments. The response patterns have helped illuminate mechanisms of disease and identify disease subphenotypes, predict disease progression, assign function to previously unannotated genes, group genes into functional pathways, and predict activities of new compounds. Directed at the genome sequence itself, microarrays have been used to identify novel genes, binding sites of transcription factors, changes in DNA copy number, and variations from a baseline sequence, such as in emerging strains of pathogens or complex mutations in disease-causing human genes. They also serve as a general demultiplexing tool to sort spatially the sequence-tagged products of highly parallel reactions performed in solution. A brief review of microarray platform technology options, and of the process steps involved in complete experiment workflows, is included. --- paper_title: A novel, high-performance random array platform for quantitative gene expression profiling. paper_content: We have developed a new microarray technology for quantitative gene-expression profiling on the basis of randomly assembled arrays of beads. Each bead carries a gene-specific probe sequence. There are multiple copies of each sequence-specific bead in an array, which contributes to measurement precision and reliability. 
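The random bead arrays described next rely on many copies of each probe-bearing bead; averaging replicate beads shrinks the standard error roughly as 1/sqrt(n), which is what makes small fold changes detectable. The calculation below is a rough back-of-the-envelope illustration under the simplifying assumption that a single bead's coefficient of variation can be treated as the standard deviation of its log2 intensity; the CV value is invented.

```python
import math

def detectable_fold_change(cv_single_bead, n_beads, z=1.96):
    """Rough minimum detectable fold change when comparing two conditions,
    each averaged over n replicate beads (normal approximation on log2 scale)."""
    se_single = cv_single_bead / math.sqrt(n_beads)
    se_difference = math.sqrt(2) * se_single          # difference of two means
    return 2 ** (z * se_difference)

for n in (1, 10, 30):
    print(f"{n:3d} beads/probe -> detectable fold change ~ {detectable_fold_change(0.25, n):.2f}")
```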
We optimized the system for specific and sensitive analysis of mammalian RNA, and using RNA controls of defined concentration, obtained the following estimates of system performance: specificity of 1:250,000 in mammalian poly(A(+)) mRNA; limit of detection 0.13 pM; dynamic range 3.2 logs; and sufficient precision to detect 1.3-fold differences with 95% confidence within the dynamic range. Measurements of expression differences between human brain and liver were validated by concordance with quantitative real-time PCR (R(2) = 0.98 for log-transformed ratios, and slope of the best-fit line = 1.04, for 20 genes). Quantitative performance was further verified using a mouse B- and T-cell model system. We found published reports of B- or T-cell-specific expression for 42 of 59 genes that showed the greatest differential expression between B- and T-cells in our system. All of the literature observations were concordant with our results. Our experiments were carried out on a 96-array matrix system that requires only 100 ng of input RNA and uses standard microtiter plates to process samples in parallel. Our technology has advantages for analyzing multiple samples, is scalable to all known genes in a genome, and is flexible, allowing the use of standard or custom probes in an array. --- paper_title: Next-generation DNA sequencing paper_content: DNA sequence represents a single format onto which a broad range of biological phenomena can be projected for high-throughput data collection. Over the past three years, massively parallel DNA sequencing platforms have become widely available, reducing the cost of DNA sequencing by over two orders of magnitude, and democratizing the field by putting the sequencing capacity of a major genome center in the hands of individual investigators. These new technologies are rapidly evolving, and near-term challenges include the development of robust protocols for generating sequencing libraries, building effective new approaches to data-analysis, and often a rethinking of experimental design. Next-generation DNA sequencing has the potential to dramatically accelerate biological and biomedical research, by enabling the comprehensive analysis of genomes, transcriptomes and interactomes to become inexpensive, routine and widespread, rather than requiring significant production-scale efforts. --- paper_title: Gene expression analysis using oligonucleotide arrays produced by maskless photolithography. paper_content: Microarrays containing 195,000 in situ synthesized oligonucleotide features have been created using a benchtop, maskless photolithographic instrument. This instrument, the Maskless Array Synthesizer (MAS), uses a digital light processor (DLP) developed by Texas Instruments. The DLP creates the patterns of UV light used in the light-directed synthesis of oligonucleotides. This digital mask eliminates the need for expensive and time-consuming chromium masks. In this report, we describe experiments in which we tested this maskless technology for DNA synthesis on glass surfaces. Parameters examined included deprotection rates, repetitive yields, and oligonucleotide length. Custom gene expression arrays were manufactured and hybridized to Drosophila melanogaster and mouse samples. Quantitative PCR was used to validate the gene expression data from the mouse arrays. --- paper_title: Light-generated oligonucleotide arrays for rapid DNA sequence analysis. 
paper_content: In many areas of molecular biology there is a need to rapidly extract and analyze genetic information; however, current technologies for DNA sequence analysis are slow and labor intensive. We report here how modern photolithographic techniques can be used to facilitate sequence analysis by generating miniaturized arrays of densely packed oligonucleotide probes. These probe arrays, or DNA chips, can then be applied to parallel DNA hybridization analysis, directly yielding sequence information. In a preliminary experiment, a 1.28 x 1.28 cm array of 256 different octanucleotides was produced in 16 chemical reaction cycles, requiring 4 hr to complete. The hybridization pattern of fluorescently labeled oligonucleotide targets was then detected by epifluorescence microscopy. The fluorescence signals from complementary probes were 5-35 times stronger than those with single or double base-pair hybridization mismatches, demonstrating specificity in the identification of complementary sequences. This method should prove to be a powerful tool for rapid investigations in human genetics and diagnostics, pathogen detection, and DNA molecular recognition. --- paper_title: Optical technologies for the read out and quality control of DNA and protein microarrays paper_content: Microarray formats have become an important tool for parallel (or multiplexed) monitoring of biomolecular interactions. Surface-immobilized probes like oligonucleotides, cDNA, proteins, or antibodies can be used for the screening of their complementary targets, covering different applications like gene or protein expression profiling, analysis of point mutations, or immunodiagnostics. Numerous reviews have appeared on this topic in recent years, documenting the intriguing progress of these miniaturized assay formats. Most of them highlight all aspects of microarray preparation, surface chemistry, and patterning, and try to give a systematic survey of the different kinds of applications of this new technique. This review places the emphasis on optical technologies for microarray analysis. As the fluorescent read out of microarrays is dominating the field, this topic will be the focus of the review. Basic principles of labeling and signal amplification techniques will be introduced. Recent developments in total internal reflection fluorescence, resonance energy transfer assays, and time-resolved imaging are addressed, as well as non-fluorescent imaging methods. Finally, some label-free detection modes are discussed, such as surface plasmon microscopy or ellipsometry, since these are particularly interesting for microarray development and quality control purposes. --- paper_title: Expression profiling using cDNA microarrays paper_content: cDNA microarrays are capable of profiling gene expression patterns of tens of thousands of genes in a single experiment. DNA targets, in the form of 3´ expressed sequence tags (ESTs), are arrayed onto glass slides (or membranes) and probed with fluorescent– or radioactively–labelled cDNAs. Here, we review technical aspects of cDNA microarrays, including the general principles, fabrication of the arrays, target labelling, image analysis and data extraction, management and mining. --- paper_title: Use of a cDNA microarray to analyse gene expression patterns in human cancer paper_content: The development and progression of cancer and the experimental reversal of tumorigenicity are accompanied by complex changes in patterns of gene expression. 
Microarrays of cDNA provide a powerful tool for studying these complex phenomena. The tumorigenic properties of a human melanoma cell line, UACC-903, can be suppressed by introduction of a normal human chromosome 6, resulting in a reduction of growth rate, restoration of contact inhibition, and suppression of both soft agar clonogenicity and tumorigenicity in nude mice. We used a high density microarray of 1,161 DNA elements to search for differences in gene expression associated with tumour suppression in this system. Fluorescent probes for hybridization were derived from two sources of cellular mRNA [UACC-903 and UACC-903(+6)] which were labelled with different fluors to provide a direct and internally controlled comparison of the mRNA levels corresponding to each arrayed gene. The fluorescence signals representing hybridization to each arrayed gene were analysed to determine the relative abundance in the two samples of mRNAs corresponding to each gene. Previously unrecognized alterations in the expression of specific genes provide leads for further investigation of the genetic basis of the tumorigenic phenotype of these cells. --- paper_title: DNA biochip arraying, detection and amplification strategies paper_content: Research on DNA biochips is advancing rapidly and has yielded several commercial platforms. This review presents, critically comments upon and compares DNA-biochip-arraying methods and techniques. It also discusses methods for detecting and amplifying hybridization events. The review focuses on miniaturization of these systems for diagnostic applications of biochip technology. --- paper_title: Reconstruction and functional analysis of altered molecular pathways in human atherosclerotic arteries paper_content: Background: Atherosclerosis affects the aorta and the coronary, carotid, and iliac arteries more frequently than any other body vessel. There may be common molecular pathways sustaining this process. Plaque presence and diffusion are revealed by circulating factors that can mediate a systemic reaction leading to plaque rupture and thrombosis. Results: We used DNA microarrays and meta-analysis to study how the presence of calcified plaque modifies human coronary and carotid gene expression. We identified a series of potential human atherogenic genes that are integrated in functional networks involved in atherosclerosis. Caveolae and JAK/STAT pathways, and S100A9/S100A8 interacting proteins are certainly involved in the development of vascular disease. We found that the system of caveolae is directly connected with genes that respond to hormone receptors, and indirectly with the apoptosis pathway. Cytokines, chemokines and growth factors released in the blood flux were investigated in parallel. High levels of RANTES, IL-1ra, MIP-1alpha, MIP-1beta, IL-2, IL-4, IL-5, IL-6, IL-7, IL-17, PDGF-BB, VEGF and IFN-gamma were found in plasma of atherosclerotic patients and might also be integrated in the molecular networks underlying atherosclerotic modifications of these vessels. Conclusion: The pattern of cytokine and S100A9/S100A8 up-regulation characterizes atherosclerosis as a proinflammatory disorder. Activation of the JAK/STAT pathway is confirmed by the up-regulation of IL-6, STAT1, ISGF3G and IL10RA genes in coronary and carotid plaques. The functional network constructed in our research is evidence of the central role of the STAT protein and the caveolae system in preserving the plaque.
Moreover, Cav-1 is involved in SMC differentiation and dyslipidemia confirming the importance of lipid homeostasis in the atherosclerotic phenotype. --- paper_title: Terminal continuation (TC) RNA amplification without second strand synthesis paper_content: Terminal continuation (TC) RNA amplification was developed originally to reproducibly and inexpensively amplify RNA. The TC RNA amplification method has been improved further by obviating second strand DNA synthesis, a cost-effective protocol that takes less time to perform with fewer manipulations required for RNA amplification. Results demonstrate that TC RNA amplification without second strand synthesis does not differ from the original protocol using RNA harvested from mouse brain and from hippocampal neurons obtained via laser capture microdissection from postmortem human brains. The modified TC RNA amplification method can discriminate single cell gene expression profiles between normal control and Alzheimer's disease hippocampal neurons indistinguishable from the original protocol. Thus, TC RNA amplification without second strand synthesis is a reproducible, time- and cost-effective method for RNA amplification from minute amounts of input RNA, and is compatible with microaspiration strategies and subsequent microarray analysis as well as quantitative real-time PCR. --- paper_title: The potential and challenges of nanopore sequencing paper_content: A nanopore-based device provides single-molecule detection and analytical capabilities that are achieved by electrophoretically driving molecules in solution through a nano-scale pore. The nanopore provides a highly confined space within which single nucleic acid polymers can be analyzed at high throughput by one of a variety of means, and the perfect processivity that can be enforced in a narrow pore ensures that the native order of the nucleobases in a polynucleotide is reflected in the sequence of signals that is detected. Kilobase length polymers (single-stranded genomic DNA or RNA) or small molecules (e.g., nucleosides) can be identified and characterized without amplification or labeling, a unique analytical capability that makes inexpensive, rapid DNA sequencing a possibility. Further research and development to overcome current challenges to nanopore identification of each successive nucleotide in a DNA strand offers the prospect of 'third generation' instruments that will sequence a diploid mammalian genome for ∼$1,000 in ∼24 h. --- paper_title: Fluorescent labelling of cRNA for microarray applications. paper_content: Microarrays of oligonucleotide expression libraries can be hybridised with either cDNA, generated from mRNA during reverse transcription, or cRNA, generated in an Eberwine mRNA amplification procedure. While methods for fluorescent labelling of cDNA have been thoroughly investigated, methods for cRNA labelling have not. To this purpose, we developed an aminoallyl-UTP (aa-UTP) driven cRNA labelling protocol and compared it in expression profiling studies using spotted 7.5 K 65mer murine oligonucleotide arrays with labelling via direct incorporation of Cy-UTPs. The presence of dimethylsulfoxide during coupling of aa-modified cRNA with N-hydroxysuccinimide-modified, fluorescent Cy dyes greatly enhanced the labelling efficiency, as analysed by spectrophotometry and fluorescent hybridisation signals. Indirect labelling using aa-UTP resulted in 2- to 3-fold higher degrees of labelling and fluorescent signals than labelling by direct incorporation of Cy-UTP. 
By variation of the aa-UTP:UTP ratio, a clear optimal degree of labelling was found (1 dye per 20-25 nt). Incorporation of more label increased Cy3 signal but lowered Cy5 fluorescence. This effect is probably due to quenching, which is more prominent for Cy5 than for Cy3. In conclusion, the currently developed method is an efficient, robust and inexpensive technique for fluorescent labelling of cRNA and allows sensitive detection of gene expression profiles on oligonucleotide microarrays. --- paper_title: Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray paper_content: The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. An in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes was validated, as well as a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction), which allow quantitative estimation of absolute endogenous transcript abundance. --- paper_title: Electrochemically Generated Acid and Its Containment to 100 Micron Reaction Areas for the Production of DNA Microarrays paper_content: An addressable electrode array was used for the production of acid at sufficient concentration to allow deprotection of the dimethoxytrityl (DMT) protecting group from an overlaying substrate bound to a porous reaction layer. Containment of the generated acid to an active electrode of 100 micron diameter was achieved by the presence of an organic base. This procedure was then used for the production of a DNA array, in which synthesis was directed by the electrochemical removal of the DMT group during synthesis. The product array was found to have a detection sensitivity to as low as 0.5 pM DNA in a complex background sample. --- paper_title: Electric field directed nucleic acid hybridization on microchips paper_content: Selection and adjustment of proper physical parameters enables rapid DNA transport, site selective concentration, and accelerated hybridization reactions to be carried out on active microelectronic arrays. These physical parameters include DC current, voltage, solution conductivity and buffer species. Generally, at any given current and voltage level, the transport or mobility of DNA is inversely proportional to electrolyte or buffer conductivity. However, only a subset of buffer species produce both rapid transport, site specific concentration and accelerated hybridization. These buffers include zwitterionic and low conductivity species such as: d- and l-histidine; 1- and 3-methylhistidines; carnosine; imidazole; pyridine; and collidine. In contrast, buffers such as glycine, beta-alanine and gamma-amino-butyric acid (GABA) produce rapid transport and site selective concentration but do not facilitate hybridization. Our results suggest that the ability of these buffers (histidine, etc.) to facilitate hybridization appears linked to their ability to provide electric field concentration of DNA; to buffer acidic conditions present at the anode; and in this process acquire a net positive charge which then shields or diminishes repulsion between the DNA strands, thus promoting hybridization. 
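The electric-field-directed hybridization entry above reports that DNA transport at a given current is inversely proportional to buffer conductivity, which is why low-conductivity zwitterionic buffers such as histidine are preferred. The sketch below is only a back-of-the-envelope illustration of that scaling, not a calculation from the cited study; the mobility, transport distance and current-density values are assumed for illustration.

```python
# Illustrative estimate (not from the cited work) of how electrophoretic DNA
# transport toward an active microelectrode scales with buffer conductivity
# when the array is driven at constant current density.

MU_DNA = 3.0e-8    # assumed free-solution DNA mobility, m^2 V^-1 s^-1 (order of magnitude)
DISTANCE = 1.0e-3  # assumed transport distance to the electrode, m (1 mm)

def transport_time(current_density_a_m2: float, conductivity_s_m: float) -> float:
    """Drift time t = d / (mu * E), with E = J / sigma at constant current density."""
    field = current_density_a_m2 / conductivity_s_m   # V/m
    velocity = MU_DNA * field                         # m/s
    return DISTANCE / velocity                        # s

# Low-conductivity zwitterionic buffer (e.g. histidine-like) vs. higher-salt buffers:
for sigma in (0.005, 0.05, 0.5):   # S/m, assumed values
    print(f"sigma = {sigma:5.3f} S/m -> t = {transport_time(50.0, sigma):6.1f} s")
```

Under these assumptions the drift time grows tenfold for every tenfold increase in conductivity, consistent with the qualitative claim in the abstract.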
--- paper_title: Creation of the whole human genome microarray paper_content: Several companies have recently announced the availability of products that enable a scientist to probe gene expression from the entire human genome on a single DNA microarray. This review will focus on the underlying technological trends that have made this achievement possible, the particular methodologies which are employed to create such microarrays and the implications of the whole human genome microarray for future biological studies. The single genome array represents an important milestone on the path to unraveling the complexity of the cellular networks that control living processes. The microarrays being designed today may, however, become distant ancestors to the whole human genome arrays of the future as our understanding of the functioning of the human genome increases. --- paper_title: Annotating genomes with massive-scale RNA sequencing paper_content: Next generation technologies enable massive-scale cDNA sequencing (so-called RNA-Seq). Mainly because of the difficulty of aligning short reads on exon-exon junctions, no attempts have been made so far to use RNA-Seq for building gene models de novo, that is, in the absence of a set of known genes and/or splicing events. We present G-Mo.R-Se (Gene Modelling using RNA-Seq), an approach aimed at building gene models directly from RNA-Seq and demonstrate its utility on the grapevine genome. --- paper_title: Zero-Mode Waveguides for Single-Molecule Analysis at High Concentrations paper_content: Optical approaches for observing the dynamics of single molecules have required pico- to nanomolar concentrations of fluorophore in order to isolate individual molecules. However, many biologically relevant processes occur at micromolar ligand concentrations, necessitating a reduction in the conventional observation volume by three orders of magnitude. We show that arrays of zero-mode waveguides consisting of subwavelength holes in a metal film provide a simple and highly parallel means for studying single-molecule dynamics at micromolar concentrations with microsecond temporal resolution. We present observations of DNA polymerase activity as an example of the effectiveness of zero-mode waveguides for performing single-molecule experiments at high concentrations. --- paper_title: Overview of DNA Sequencing Strategies paper_content: Efficient and cost-effective DNA sequencing technologies have been, and may continue to be, critical to the progress of molecular biology. This overview of DNA sequencing strategies provides a high-level review of six distinct approaches to DNA sequencing: (a) dideoxy sequencing; (b) cyclic array sequencing; (c) sequencing-by-hybridization; (d) microelectrophoresis; (e) mass spectrometry; and (f) nanopore sequencing. The primary focus is on dideoxy sequencing, which has been the dominant technology since 1977, and on cyclic array strategies, for which several competitive implementations have been developed since 2005. Because the field of DNA sequencing is changing rapidly, this unit represents a snapshot of this particular moment. --- paper_title: Discovery and analysis of inflammatory disease-related genes using cDNA microarrays. paper_content: cDNA microarray technology is used to profile complex diseases and discover novel disease-related genes. In inflammatory disease such as rheumatoid arthritis, expression patterns of diverse cell types contribute to the pathology. 
We have monitored gene expression in this disease state with a microarray of selected human genes of probable significance in inflammation as well as with genes expressed in peripheral human blood cells. Messenger RNA from cultured macrophages, chondrocyte cell lines, primary chondrocytes, and synoviocytes provided expression profiles for the selected cytokines, chemokines, DNA binding proteins, and matrix-degrading metalloproteinases. Comparisons between tissue samples of rheumatoid arthritis and inflammatory bowel disease verified the involvement of many genes and revealed novel participation of the cytokine interleukin 3, chemokine Gro alpha and the metalloproteinase matrix metallo-elastase in both diseases. From the peripheral blood library, tissue inhibitor of metalloproteinase 1, ferritin light chain, and manganese superoxide dismutase genes were identified as expressed differentially in rheumatoid arthritis compared with inflammatory bowel disease. These results successfully demonstrate the use of the cDNA microarray system as a general approach for dissecting human diseases. --- paper_title: Optimized T7 amplification system for microarray analysis paper_content: Glass cDNA microarray technologies offer a highly parallel approach for profiling expressed gene sequences in disease-relevant tissues. However, standard hybridization and detection protocols are insufficient for milligram quantities of tissue, such as those derived from needle biopsies. Amplification systems utilizing T7 RNA polymerase can provide multiple cRNA copies from mRNA transcripts, permitting microarray studies with reduced sample inputs. Here, we describe an optimized T7-based amplification system for microarray analysis that yields between 200- and 700-fold amplification. This system was evaluated with both mRNA and total RNA samples and provided microarray sensitivity and precision that are comparable to our standard production process without amplification. The size distributions of amplified cRNA ranged from 200 bp to 4 kb and were similar to original mRNA profiles. These amplified cRNA samples were fluorescently labeled by reverse transcription and hybridized to microarrays comprising appr... --- paper_title: Amplified detection of single-base mismatches in DNA using microgravimetric quartz-crystal-microbalance transduction. paper_content: Three different methods for the amplified detection of a single-base mismatch in DNA are described using microgravimetric quartz-crystal-microbalance as transduction means. All methods involve the primary incorporation of a biotinylated base complementary to the mutation site in the analyzed double-stranded primer/DNA assembly. The double-stranded assembly is formed between 25 complementary bases of the probe DNA assembled on the Au-quartz crystal and the target DNA. One method of amplification includes the association of avidin- and biotin-labeled liposomes to the sensing interface. The second method of amplified detection of the base mismatch includes the association of an Au-nanoparticle-avidin conjugate to the sensing interface, and the secondary Au-nanoparticle-catalyzed deposition of gold on the particles. The third amplification route includes the binding of the avidin-alkaline phosphatase biocatalytic conjugate to the double-stranded surface followed by the oxidative hydrolysis of 5-bromo-4-chloro-3-indolyl phosphate to the insoluble product indigo derivative that precipitates on the transducer. 
Comparison of the three amplification routes reveals that the catalytic deposition of gold on the Au-nanoparticle/avidin conjugate is the most sensitive method, and the single-base mismatch in the analyzed DNA is detected with a sensitivity that corresponds to 3x10(-16) M. --- paper_title: Enzymatically Amplified Surface Plasmon Resonance Imaging Method Using RNase H and RNA Microarrays for the Ultrasensitive Detection of Nucleic Acids paper_content: A novel surface enzymatic amplification method that utilizes RNA microarrays in conjunction with the enzyme RNase H is developed for the ultrasensitve detection and analysis of target DNA molecules. The enzyme RNase H is shown to selectively and repeatedly destroy RNA from RNA−DNA heteroduplexes on gold surfaces; when used in conjunction with the label-free technique of surface plasmon resonance imaging, multiple DNA targets can be detected at a concentration of 10 fM on a single chip. In addition, this method is utilized for the sequence-specific detection of the TSPY gene in both purified and unpurified PCR products. Finally, in a series of kinetics measurements, the initial rate of hydrolysis is shown to depend directly on the surface concentration of DNA−RNA heteroduplexes. --- paper_title: A novel, high-performance random array platform for quantitative gene expression profiling. paper_content: We have developed a new microarray technology for quantitative gene-expression profiling on the basis of randomly assembled arrays of beads. Each bead carries a gene-specific probe sequence. There are multiple copies of each sequence-specific bead in an array, which contributes to measurement precision and reliability. We optimized the system for specific and sensitive analysis of mammalian RNA, and using RNA controls of defined concentration, obtained the following estimates of system performance: specificity of 1:250,000 in mammalian poly(A(+)) mRNA; limit of detection 0.13 pM; dynamic range 3.2 logs; and sufficient precision to detect 1.3-fold differences with 95% confidence within the dynamic range. Measurements of expression differences between human brain and liver were validated by concordance with quantitative real-time PCR (R(2) = 0.98 for log-transformed ratios, and slope of the best-fit line = 1.04, for 20 genes). Quantitative performance was further verified using a mouse B- and T-cell model system. We found published reports of B- or T-cell-specific expression for 42 of 59 genes that showed the greatest differential expression between B- and T-cells in our system. All of the literature observations were concordant with our results. Our experiments were carried out on a 96-array matrix system that requires only 100 ng of input RNA and uses standard microtiter plates to process samples in parallel. Our technology has advantages for analyzing multiple samples, is scalable to all known genes in a genome, and is flexible, allowing the use of standard or custom probes in an array. --- paper_title: Electrostatic readout of DNA microarrays with charged microspheres paper_content: Microarray platforms usually rely on fluorescence detection. Clack et al. present an equally sensitive, label-free technique that electrostatically detects DNA or RNA hybridization after randomly dispersing charged microspheres onto the microarray surface. 
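The bead-array entry above quantifies precision as the ability to call 1.3-fold expression differences with 95% confidence. As a hedged illustration of where a figure like that can come from (the cited paper's actual statistical model is not reproduced here), the sketch below computes the minimum detectable fold-change under a simple normal approximation with an assumed 10% intensity CV and a single measurement per channel.

```python
# Generic sketch (not the cited platform's actual statistics): smallest fold-change
# detectable at 95% confidence if log-intensity ratios are roughly normal with
# standard deviation sigma_log and n replicate measurements.
import math

def min_detectable_fold_change(sigma_log: float, n_replicates: int = 1, z: float = 1.96) -> float:
    """Two-sided normal approximation: the log-ratio must exceed z * sigma / sqrt(n)."""
    return math.exp(z * sigma_log / math.sqrt(n_replicates))

# Assumed 10% CV per channel; for small CVs the SD of ln(intensity) ~ CV,
# and a ratio of two independent measurements has SD sqrt(2) * CV.
sigma_ratio = math.sqrt(2) * 0.10
print(f"{min_detectable_fold_change(sigma_ratio):.2f}-fold")   # ~1.3-fold under these assumptions
```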
--- paper_title: New trends in bioanalytical tools for the detection of genetically modified organisms: an update paper_content: Despite the controversies surrounding genetically modified organisms (GMOs), the production of GM crops is increasing, especially in developing countries. Thanks to new technologies involving genetic engineering and unprecedented access to genomic resources, the next decade will certainly see exponential growth in GMO production. Indeed, EU regulations based on the precautionary principle require any food containing more than 0.9% GM content to be labeled as such. The implementation of these regulations necessitates sampling protocols, the availability of certified reference materials and analytical methodologies that allow the accurate determination of the content of GMOs. In order to qualify for the validation process, a method should fulfil some criteria, defined as “acceptance criteria” by the European Network of GMO Laboratories (ENGL). Several methods have recently been developed for GMO detection and quantitation, mostly based on polymerase chain reaction (PCR) technology. PCR (including its different formats, e.g., double competitive PCR and real-time PCR) remains the technique of choice, thanks to its ability to detect even small amounts of transgenes in raw materials and processed foods. Other approaches relying on DNA detection are based on quartz crystal microbalance piezoelectric biosensors, dry reagent dipstick-type sensors and surface plasmon resonance sensors. The application of visible/near-infrared (vis/NIR) spectroscopy or mass spectrometry combined with chemometrics techniques has also been envisaged as a powerful GMO detection tool. Furthermore, in order to cope with the multiplicity of GMOs released onto the market, the new challenge is the development of routine detection systems for the simultaneous detection of numerous GMOs, including unknown GMOs. --- paper_title: Optical technologies for the read out and quality control of DNA and protein microarrays paper_content: Microarray formats have become an important tool for parallel (or multiplexed) monitoring of biomolecular interactions. Surface-immobilized probes like oligonucleotides, cDNA, proteins, or antibodies can be used for the screening of their complementary targets, covering different applications like gene or protein expression profiling, analysis of point mutations, or immunodiagnostics. Numerous reviews have appeared on this topic in recent years, documenting the intriguing progress of these miniaturized assay formats. Most of them highlight all aspects of microarray preparation, surface chemistry, and patterning, and try to give a systematic survey of the different kinds of applications of this new technique. This review places the emphasis on optical technologies for microarray analysis. As the fluorescent read out of microarrays is dominating the field, this topic will be the focus of the review. Basic principles of labeling and signal amplification techniques will be introduced. Recent developments in total internal reflection fluorescence, resonance energy transfer assays, and time-resolved imaging are addressed, as well as non-fluorescent imaging methods. Finally, some label-free detection modes are discussed, such as surface plasmon microscopy or ellipsometry, since these are particularly interesting for microarray development and quality control purposes. 
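The GMO-detection entry above notes that real-time PCR remains the reference technique for checking the EU 0.9% labelling threshold. The sketch below shows the familiar delta-Cq estimate of GM content in its simplest form; it assumes ideal (100%) amplification efficiency and single-copy targets, omits the calibration against certified reference materials that real assays require, and uses hypothetical Cq values.

```python
# Illustrative Delta-Cq estimate of GM content from real-time PCR (simplified;
# real assays are calibrated against certified reference materials).

def gm_content_percent(cq_transgene: float, cq_reference: float) -> float:
    """Copy ratio 2^(Cq_ref - Cq_transgene) with ideal efficiency, as a percentage."""
    return 100.0 * 2.0 ** (cq_reference - cq_transgene)

cq_event, cq_taxon = 31.5, 24.8   # hypothetical Cq values for one sample
estimate = gm_content_percent(cq_event, cq_taxon)
print(f"Estimated GM content: {estimate:.2f}% "
      f"({'above' if estimate > 0.9 else 'below'} the 0.9% threshold)")
```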
--- paper_title: Carbon and gold electrodes as electrochemical transducers for DNA hybridisation sensors paper_content: Genosensor technology relying on the use of carbon and gold electrodes is reviewed. The key steps of each analytical procedure, namely DNA-probe immobilisation, hybridisation, labelling and electrochemical investigation of the surface, are discussed in detail with separate sections devoted to label-free and newly emerging magnetic assays. Special emphasis has been given to protocols that have been used with real DNA samples. --- paper_title: Quartz crystal microbalance (QCM) affinity biosensor for genetically modified organisms (GMOs) detection. paper_content: Abstract A DNA piezoelectric sensor has been developed for the detection of genetically modified organisms (GMOs). Single stranded DNA (ssDNA) probes were immobilised on the sensor surface of a quartz crystal microbalance (QCM) device and the hybridisation between the immobilised probe and the target complementary sequence in solution was monitored. The probe sequences were internal to the sequence of the 35S promoter (P) and Nos terminator (T), which are inserted sequences in the genome of GMOs regulating the transgene expression. Two different probe immobilisation procedures were applied: (a) a thiol–dextran procedure and (b) a thiol-derivatised probe and blocking thiol procedure. The system has been optimised using synthetic oligonucleotides, which were then applied to samples of plasmidic and genomic DNA isolated from the pBI121 plasmid, certified reference materials (CRM), and real samples amplified by the polymerase chain reaction (PCR). The analytical parameters of the sensor have been investigated (sensitivity, reproducibility, lifetime etc.). The results obtained showed that both immobilisation procedures enabled sensitive and specific detection of GMOs, providing a useful tool for screening analysis in food samples. --- paper_title: Nanoparticles with Raman Spectroscopic Fingerprints for DNA and RNA Detection paper_content: Multiplexed detection of oligonucleotide targets has been performed with gold nanoparticle probes labeled with oligonucleotides and Raman-active dyes. The gold nanoparticles facilitate the formation of a silver coating that acts as a surface-enhanced Raman scattering promoter for the dye-labeled particles that have been captured by target molecules and an underlying chip in microarray format. The strategy provides the high-sensitivity and high-selectivity attributes of gray-scale scanometric detection but adds multiplexing and ratioing capabilities because a very large number of probes can be designed based on the concept of using a Raman tag as a narrow-band spectroscopic fingerprint. Six dissimilar DNA targets with six Raman-labeled nanoparticle probes were distinguished, as well as two RNA targets with single nucleotide polymorphisms. The current unoptimized detection limit of this method is 20 femtomolar. --- paper_title: BIACORE J: a new platform for routine biomolecular interaction analysis paper_content: SPR biosensor technology continues to evolve. The recently released platform from Biacore AB (Uppsala, Sweden), BIACORE J, is designed for the routine analysis of biomolecular interactions. Using an antibody-protein A and a ligand-receptor system, we demonstrate the utility of BIACORE J in determining active concentration and binding affinities. 
The results from these studies illustrate the high sensitivity of the instrument and its ability to generate reproducible binding responses. The BIACORE J is easy to operate and useful in diverse applications, making SPR technology widely accessible as a research tool. --- paper_title: Surface-Enhanced Raman Scattering Substrate Based on a Self-Assembled Monolayer for Use in Gene Diagnostics paper_content: The development of surface-enhanced Raman scattering (SERS)-active substrates for cancer gene detection is described. The detection method uses Raman-active dye-labeled DNA gene probes, self-assembled monolayers, and nanostructured metallic substrates as SERS-active platforms. The mercaptohexane-labeled single-stranded DNA (SH−(CH2)6-ssDNA)/6-mercapto-1-hexanol system formed on a silver surface is characterized by atomic force microscopy. The surface-enhanced Raman gene (SERGen) probes developed in this study can be used to detect DNA targets via hybridization to complementary DNA probes. The probes do not require the use of radioactive labels and have a great potential to provide both sensitivity and selectivity. The effectiveness of this approach and its application in cancer gene diagnostics (BRCA1 breast cancer gene) are investigated. --- paper_title: Application of a miniature biochip using the molecular beacon probe in breast cancer gene BRCA1 detection. paper_content: We report for the first time the application of a biochip using the molecular beacon (MB) detection scheme. The usability of this novel biochip detection system for the analysis of the breast cancer gene BRCA1 is demonstrated using molecular beacon probes. The MB is designed for the BRCA1 gene and a miniature biochip system is used for detection. The performance of the biochip-MB detection system is evaluated. The optimum conditions for the MB system for highest fluorescence detection sensitivity are investigated for the detection system. The detection of the BRCA1 gene is successfully demonstrated in solution and the limit of detection (LOD) is estimated as 70 nM. --- paper_title: Electrochemical DNA sensors paper_content: Electrochemistry-based sensors offer sensitivity, selectivity and low cost for the detection of selected DNA sequences or mutated genes associated with human disease. DNA-based electrochemical sensors exploit a range of different chemistries, but all take advantage of nanoscale interactions between the target in solution, the recognition layer and a solid electrode surface. Numerous approaches to electrochemical detection have been developed, including direct electrochemistry of DNA, electrochemistry at polymer-modified electrodes, electrochemistry of DNA-specific redox reporters, electrochemical amplifications with nanoparticles, and electrochemical devices based on DNA-mediated charge transport chemistry. --- paper_title: Piezoelectric Sensor for Determination of Genetically Modified Soybean Roundup Ready® in Samples not Amplified by PCR paper_content: The chemically modified piezoelectrodes were utilized to develop a relatively cheap and easy-to-use biosensor for determination of genetically modified Roundup Ready soybean (RR soybean). The biosensor relies on the immobilization onto gold piezoelectrodes of a 21-mer single-stranded oligonucleotide (probe) related to the 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene, which is an active component of an insert integrated into the RR soybean genome.
The hybridization reaction between the probe and the target complementary sequence in solution was monitored. The system was optimized using synthetic oligonucleotides, which were then applied for EPSPS gene detection in DNA samples extracted from animal feed containing 30% RR soybean, both amplified and not amplified by PCR. The detection limit for genomic DNA was in the range of 4.7·10^5 genome copies containing the EPSPS gene in the QCM cell. Properties such as the sensitivity and selectivity of the piezoelectric sensor presented here indicated that it could be applied for the direct determination of genetically modified RR soybean in samples not amplified by PCR. --- paper_title: Electrostatic surface plasmon resonance: direct electric field-induced hybridization and denaturation in monolayer nucleic acid films and label-free discrimination of base mismatches. paper_content: We demonstrate that in situ optical surface plasmon resonance spectroscopy can be used to monitor hybridization kinetics for unlabeled DNA in tethered monolayer nucleic acid films on gold in the presence of an applied electrostatic field. The dc field can enhance or retard hybridization and can also denature surface-immobilized DNA duplexes. Discrimination between matched and mismatched hybrids is achieved by simple adjustment of the electrode potential. Although the electric field at the interface is extremely large, the tethered single-stranded DNA thiol probes remain bound and can be reused for subsequent hybridization reactions without loss of efficiency. Only capacitive charging currents are drawn; redox reactions are avoided by maintaining the gold electrode potential within the ideally polarizable region. Because of potential-induced changes in the shape of the surface plasmon resonance curve, we account for the full curve rather than simply the shift in the resonance minimum. --- paper_title: DNA biochip arraying, detection and amplification strategies paper_content: Research on DNA biochips is advancing rapidly and has yielded several commercial platforms. This review presents, critically comments upon and compares DNA-biochip-arraying methods and techniques. It also discusses methods for detecting and amplifying hybridization events. The review focuses on miniaturization of these systems for diagnostic applications of biochip technology. --- paper_title: A fully electronic sensor for the measurement of cDNA hybridization kinetics paper_content: Ion-sensitive field effect transistors (ISFETs) are candidates for a new generation of fully electrical DNA sensors. For this purpose, we have modified ISFET sensors by adsorbing on their Si3N4 surface poly-L-lysine and single- (as well as double-) stranded DNA. Once coupled to an accurate model of the oppositely charged layers adsorbed on the surface, the proposed sensor allows quantitative evaluation of the adsorbed molecule densities, as well as estimation of DNA hybridization kinetics. --- paper_title: Piezoelectric crystal biosensors. paper_content: The recent development of piezoelectric devices as biosensors is reviewed. Biological materials, like enzymes, lipids, antibodies and antigens, have been used as specific coatings and were utilized for the determination of different substrates. Methods of protein coating and several applications are reported, including microgravimetric immunoassays, microbial assays, DNA hybridization, enzyme detection and gas-phase biosensors.
Although the piezoelectric immunochemical sensor is convenient to use and very promising, a thorough understanding of the different phenomena associated with crystals frequency measurement in biological reactions is still lacking and deserves further investigation. --- paper_title: DNA based biosensors paper_content: There is described a method of detection of the protein-dependent coincidence of DNA in a sample which comprises detection using luminescence of one or more luminophores introduced into DNA with one, two or more DNA fragments which fragments are bound using one or more DNA-binding proteins. --- paper_title: Microgravimetric DNA sensor based on quartz crystal microbalance: comparison of oligonucleotide immobilization methods and the application in genetic diagnosis. paper_content: We report on the study of immobilization DNA probes onto quartz crystal oscillators by self-assembly technique to form variety types of mono- and multi-layered sensing films towards the realization of DNA diagnostic devices. A 18-mer DNA probe complementary to the site of genetic beta-thalassaemia mutations was immobilized on the electrodes of QCM by covalent bonding or electrostatic adsorption on polyelectrolyte films to form mono- or multi-layered sensing films by self-assembled process. Hybridization was induced by exposure of the QCMs immobilized with DNA probe to a test solution containing the target nucleic acid sequences. The kinetics of DNA probe immobilization and hybridization with the fabricated DNA sensors were studied via in-situ frequency changes. The characteristics of QCM sensors containing mono- or multi-layered DNA probe constructed by direct chemical bonding, avidin-biotin interaction or electrostatic adsorption on polyelectrolyte films were compared. Results indicated that the DNA sensing films fabricated by immobilization of biotinylated DNA probe to avidin provide fast sensor response and high hybridization efficiencies. The effects of ionic strength of the buffer solution and the concentration of target nucleic acid used in hybridization were also studied. The fabricated DNA biosensor was used to detect a set of real samples. We conclude that the microgravimetric DNA sensor with its direct detection of amplified products provide a rapid, low cost and convenient diagnostic method for genetic disease. --- paper_title: Surface Plasmon Resonance Spectroscopy as a Probe of In-Plane Polymerization in Monolayer Organic Conducting Films paper_content: Several groups have shown that alkanethiol-modified pyrroles can be tethered to a gold surface, but there is often little evidence that, once oxidized, the resulting monolayer film is an organic conducting polymer. Using surface plasma resonance (SPR) spectroscopy, we demonstrate for the first time that, upon electrochemical oxidation, self-assembled alkanethiol−pyrrole films on gold show behavior characteristic of organic conducting polymers: we observe reversible changes in the optical constants of the organic film upon doping/dedoping. Since the optical constants are related to film conductivity, we show that the effective isotropic dielectric constant of the film obtained in the standard SPR data analysis can be interpreted in terms of in-plane and out-of-plane contributions to film conductivity. We find that the in-plane conductivity of oxidized 3-(ω-mercaptoundecyl)pyrrole is smaller, but of the same order of magnitude, than that found for thick films of polypyrrole. Most importantly, we observe re... 
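Several of the piezoelectric and QCM entries above read hybridization out as a shift in crystal resonance frequency. As a hedged aid to interpreting such shifts (none of the cited papers is being quoted here), the sketch below applies the standard Sauerbrey relation, which strictly holds only for thin, rigid films and ignores liquid-phase viscoelastic effects; the -25 Hz shift and 9 MHz fundamental frequency are assumed example numbers.

```python
# Standard Sauerbrey conversion from a QCM frequency shift to adsorbed areal mass,
# the usual first-order reading of microgravimetric DNA-sensor responses.
# Valid for thin, rigid films; viscoelastic / liquid-loading corrections are omitted.
import math

RHO_Q = 2.648      # quartz density, g cm^-3
MU_Q = 2.947e11    # AT-cut quartz shear modulus, g cm^-1 s^-2

def sauerbrey_mass_ng_cm2(delta_f_hz: float, f0_hz: float) -> float:
    """Delta_m = -Delta_f * sqrt(rho_q * mu_q) / (2 * f0^2), returned in ng/cm^2."""
    dm_g_cm2 = -delta_f_hz * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)
    return dm_g_cm2 * 1e9

# Example: an assumed -25 Hz hybridization shift on a 9 MHz crystal
print(f"{sauerbrey_mass_ng_cm2(-25.0, 9.0e6):.1f} ng/cm^2")
```

Because the mass sensitivity scales with the square of the fundamental frequency, the same adsorbed mass produces a much larger shift on a 9 or 10 MHz crystal than on a 5 MHz one, which is one reason higher-frequency crystals are common in DNA work.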
--- paper_title: Electrical biochip technology--a tool for microarrays and continuous monitoring. paper_content: Based on electrical biochips made in Si-technology cost effective portable devices have been constructed for field applications and point of care diagnosis. These miniaturized amperometric biosensor devices enable the evaluation of biomolecular interactions by measuring the redox recycling of ELISA products, as well as the electrical monitoring of metabolites. The highly sensitive redox recycling is facilitated by interdigitated ultramicroelectrodes of high spatial resolution. The application of these electrical biochips as DNA microarrays for the molecular diagnosis of viral infections demonstrates the measurement procedure. Self-assembling of capture oligonucleotides via thiol-gold coupling has been used to construct the DNA interface on-chip. Another application for this electrical detection principle is continuous measuring with bead-based biosensors. Here, paramagnetic nanoparticles are used as carriers of the bioanalytical interface in ELISA format. A Si-micromachined glucose sensor for continuous monitoring in interstitial fluid ex vivo shows the flexibility of the electrical platform. Here the novel approach is a pore membrane in micrometer-dimensions acting as a diffusion barrier. The electrochemical detection takes place in a cavity containing glucose oxidase and a Pt-electrode surface. The common hydrogen peroxide detection, together with Si technology, enable precise differential measurements using a second cavity. --- paper_title: DNA Biochip Using a Phototransistor Integrated Circuit paper_content: This work describes the development of an integrated biosensor based on phototransistor integrated circuits (IC) for use in medical detection, DNA diagnostics, and gene mapping. The evaluation of various system components developed for an integrated biosensor microchip is discussed. Methods to develop a microarray of DNA probes on nitrocellulose substrate are discussed. The biochip device has sensors, amplifiers, discriminators, and logic circuitry on board. Integration of light-emitting diodes into the device is also possible. To achieve improved sensitivity, we have designed an IC system having each phototransistor sensing element composed of 220 phototransistor cells connected in parallel. Measurements of fluorescent-labeled DNA probe microarrays and hybridization experiments with a sequence-specific DNA probe for the human immunodeficiency virus 1 system on nitrocellulose substrates illustrate the usefulness and potential of the DNA biochip. --- paper_title: Electronic detection of DNA by its intrinsic molecular charge. paper_content: We report the selective and real-time detection of label-free DNA using an electronic readout. Microfabricated silicon field-effect sensors were used to directly monitor the increase in surface charge when DNA hybridizes on the sensor surface. The electrostatic immobilization of probe DNA on a positively charged poly-l-lysine layer allows hybridization at low ionic strength where field-effect sensing is most sensitive. Nanomolar DNA concentrations can be detected within minutes, and a single base mismatch within 12-mer oligonucleotides can be distinguished by using a differential detection technique with two sensors in parallel. The sensors were fabricated by standard silicon microtechnology and show promise for future electronic DNA arrays and rapid characterization of nucleic acid samples. 
This approach demonstrates the most direct and simple translation of genetic information to microelectronics. --- paper_title: Plasmonics-Based Nanostructures for Surface-Enhanced Raman Scattering Bioanalysis paper_content: Surface-enhanced Raman scattering (SERS) spectroscopy is a plasmonics-based spectroscopic technique that combines modern laser spectroscopy with the unique optical properties of metallic nanostructures, resulting in strongly increased Raman signals when molecules are adsorbed on or near nanometer-size structures of special metals such as gold, silver, and transition metals. This chapter provides a synopsis of the development and application of SERS-active metallic nanostructures, especially for the analysis of biologically relevant compounds. Some highlights of this chapter include reports of SERS as an immunoassay readout method, SERS gene nanoprobes, near-field scanning optical microscopy SERS probes, SERS as a tool for single-molecule detection, and SERS nanoprobes for cellular studies. --- paper_title: A novel microgravimetric DNA sensor with high sensitivity. paper_content: A novel method that uses an amplifier with a cantilever and gold nanoparticles to extend the length of the target for specific and highly sensitive detection of DNA is reported. When the gold nanoparticle size is 50 nm, a sensitivity of 10^-15 M for single-base mutation detection has been achieved. --- paper_title: Label-free and high-resolution protein/DNA nanoarray analysis using Kelvin probe force microscopy. paper_content: Using the scanning probe technique known as Kelvin probe force microscopy it is possible to successfully devise a sensor for charged biomolecules. The Kelvin probe force microscope is a tool for measuring local variations in surface potential across a substrate of interest. Because many biological molecules have a native state that includes the presence of charge centres (such as the negatively charged backbone of DNA), the formation of highly specific complexes between biomolecules will often be accompanied by local changes in charge density. By spatially resolving this variation in surface potential it is possible to measure the presence of a specific bound target biomolecule on a surface without the aid of special chemistries or any form of labelling. The Kelvin probe force microscope presented here is based on an atomic force microscopy nanoprobe offering high resolution (<10 nm), sensitivity (<50 nM) and speed (>1,100 μm s^-1), and the ability to resolve as few as three nucleotide mismatches. --- paper_title: Biosensor technology and surface plasmon resonance for real-time detection of HIV-1 genomic sequences amplified by polymerase chain reaction. paper_content: Background: The recent development of biosensor technologies for biospecific interaction analysis enables the monitoring of a variety of molecular reactions in real time by surface plasmon resonance (SPR). If the ligand is a biotinylated single-stranded DNA, this technology could monitor DNA-DNA hybridization. This approach could be of great interest in virology, since a hybridization step is often required to confirm the specificity of molecular diagnosis. Objectives: To determine whether real-time molecular diagnosis of human immunodeficiency virus type 1 (HIV-1) could be performed using biosensors and SPR technology. Study design: Specific hybridization of a biotinylated HIV-1 oligonucleotide probe immobilized on a sensor chip to single-stranded DNA obtained by asymmetric polymerase chain reaction (PCR) was determined using the BIAcore biosensor. Results: Direct injection of asymmetric PCR products onto a sensor chip carrying an internal HIV-1 oligonucleotide probe allows detection of hybridization by SPR using biosensor technology. This enabled us to apply a real-time, one-step, non-radioactive protocol to demonstrate the specificity of amplification of HIV-1 genomic sequences by PCR. Conclusion: The procedure described in this study for HIV-1 detection is simple, fast (PCR and SPR analyses take 30 min), reproducible and could be proposed as an integral part of automated diagnostic systems based on the use of laboratory workstations and biosensors for DNA isolation, preparation of PCR reactions and analysis of PCR products. --- paper_title: Rapid detection of single nucleotide polymorphisms associated with spinal muscular atrophy by use of a reusable fibre-optic biosensor. paper_content: Rapid (<2 min) and quantitative genotyping for single nucleotide polymorphisms (SNPs) associated with spinal muscular atrophy was done using a reusable (approximately 80 cycles of application) fibre-optic biosensor over a clinically relevant range (0-4 gene copies). Sensors were functionalized with oligonucleotide probes immobilized at high density (approximately 7 pmol/cm2) to impart enhanced selectivity for SNP discrimination and used in a total internal reflection fluorescence detection motif to detect 202 bp PCR amplicons from patient samples. Real-time detection may be done over a range of ionic strength conditions (0.1-1.0 M) without stringency rinsing to remove non-selectively bound materials and without loss of selectivity, permitting a means for facile sample preparation. By using the time-derivative of fluorescence intensity as the analytical parameter, linearity of response may be maintained while allowing for significant reductions in analysis time (10-100-fold), permitting the completion of measurements in under 1 min. --- paper_title: Electrochemical Interrogation of DNA Monolayers on Gold Surfaces paper_content: In this report, we systematically investigated DNA immobilization at gold surfaces with electrochemical techniques. Comparative cyclic voltammetric and chronocoulometric studies suggested that DNA monolayers immobilized at gold surfaces were not homogeneous. Nonspecific Au-DNA interactions existed even with the treatment of mercaptohexanol, which was known to competitively remove loosely bound DNA at gold surfaces. While both thiolated and nonthiolated DNA formed monolayers on gold surfaces, their hybridization abilities were distinctly different. In contrast to thiolated DNA probes, nonthiolated DNA probes immobilized at gold surfaces were essentially nonhybridizable. The experimental results presented here might be useful for the design of high-performance electrochemical DNA sensors. --- paper_title: Carbon and gold electrodes as electrochemical transducers for DNA hybridisation sensors paper_content: Genosensor technology relying on the use of carbon and gold electrodes is reviewed. The key steps of each analytical procedure, namely DNA-probe immobilisation, hybridisation, labelling and electrochemical investigation of the surface, are discussed in detail with separate sections devoted to label-free and newly emerging magnetic assays.
Special emphasis has been given to protocols that have been used with real DNA samples. --- paper_title: Immobilization of single-stranded deoxyribonucleic acid on gold electrode with self-assembled aminoethanethiol monolayer for DNA electrochemical sensor applications. paper_content: A synthesized 24-mer single-stranded deoxyribonucleic acid (ssDNA) was covalently immobilized onto a self-assembled aminoethanethiol monolayer-modified gold electrode, using water-soluble 1-ethyl-3(3-dimethylaminopropyl)-carbodiimide (EDC). The covalently immobilized ssDNAs were hybridized with complementary ssDNA (cDNA) or the yAL(3) gene in solution, forming double-stranded DNAs (dsDNA). Meanwhile, daunomycin, an electrochemically active intercalator in the hybridization buffer solution, was intercalated into the dsDNA to form a dsDNA/daunomycin system on the gold electrode surface, which was used as the DNA electrochemical sensor. The cathodic waves of daunomycin bound to the double-stranded DNA (dsDNA), measured by linear sweep voltammetry, were utilized to detect the cDNA. The cathodic peak current (i_pc) of daunomycin was linearly related to the concentration of cDNA between 0.1 μg ml^-1 and 0.1 ng ml^-1. The detection limit was about 30 pg ml^-1. --- paper_title: Selective Release of DNA from the Surface of Indium−Tin Oxide Thin Electrode Films Using Thiol−Disulfide Exchange Chemistry paper_content: A new challenge in biointerfacial science is the development of dynamic surfaces with the ability to adjust and tune the chemical functionality at the interface between the biological and nonbiological entities. In this paper we describe fabrication of indium−tin oxide (ITO) electrodes and the design of a ligand that can be switched to enable selectively controlled interactions with DNA. Tailoring the surface composition of the ITO electrode to optimize its optical and electrical properties was also studied. The surface attachment chemistry investigated utilizes thiol−disulfide exchange chemistry. This chemistry involved the covalent attachment of a thiol-functionalized silane anchor to a hydroxyl-activated ITO electrode surface. Subsequent reaction with 2-(2-pyridinyldithio)ethanamine hydrochloride formed the disulfide bridge and provided the terminal amine group, which facilitates addition of a cross-linker. DNA was then covalently bound to the cross-linker, and hybridization with the complementary Cy3-... --- paper_title: A fully electronic sensor for the measurement of cDNA hybridization kinetics paper_content: Ion-sensitive field effect transistors (ISFETs) are candidates for a new generation of fully electrical DNA sensors. For this purpose, we have modified ISFET sensors by adsorbing on their Si3N4 surface poly-L-lysine and single- (as well as double-) stranded DNA. Once coupled to an accurate model of the oppositely charged layers adsorbed on the surface, the proposed sensor allows quantitative evaluation of the adsorbed molecule densities, as well as estimation of DNA hybridization kinetics. --- paper_title: Disposable DNA electrochemical sensor for hybridization detection. paper_content: A disposable electrochemical sensor for the detection of short DNA sequences is described. Synthetic single-stranded oligonucleotides have been immobilized onto graphite screen-printed electrodes with two procedures, the first involving the binding of avidin-biotinylated oligonucleotide and the second adsorption at a controlled potential.
The probes were hybridized with different concentrations of complementary sequences. The formed hybrids on the electrode surface were evaluated by differential pulse voltammetry and chronopotentiometric stripping analysis using daunomycin hydrochloride as indicator of hybridization reaction. The probe immobilization step, the hybridization event and the indicator detection, have been optimized. The DNA sensor obtained by adsorption at a controlled potential was able to detect 1 microgram/ml of target sequence in the buffer solution using chronopotentiometric stripping analysis. --- paper_title: Interface Layering Phenomena in Capacitance Detection of DNA with Biochips paper_content: Reliable DNA detection is of great importance for the development of the Lab-on-chip technology. The effort of the most recent projects on this field is to integrate all necessary operations, such as sample preparation (mixing, PCR amplification) together with the sensor user for DNA detection. Among the different ways to sense the DNA hybridization, fluorescence based detection has been favored by the market. However, fluorescence based approaches require that the DNA targets are labeled by means of chromophores. As an alternative label-free DNA detection method, capacitance detection was recently proposed by different authors. While this effect has been successfully demonstrated by several groups, the model used for data analysis is far too simple to describe the real behavior of a DNA sensor. The aim of the present paper is to propose a different electrochemical model to describe DNA capacitance detection. --- paper_title: Electrochemically Generated Acid and Its Containment to 100 Micron Reaction Areas for the Production of DNA Microarrays paper_content: An addressable electrode array was used for the production of acid at sufficient concentration to allow deprotection of the dimethoxytrityl (DMT) protecting group from an overlaying substrate bound to a porous reaction layer. Containment of the generated acid to an active electrode of 100 micron diameter was achieved by the presence of an organic base. This procedure was then used for the production of a DNA array, in which synthesis was directed by the electrochemical removal of the DMT group during synthesis. The product array was found to have a detection sensitivity to as low as 0.5 pM DNA in a complex background sample. --- paper_title: Electrochemical Biosensors for Sequence‐Specific DNA Detection paper_content: This review summarizes the most relevant work performed in the last years in the field of the DNA-based electrochemical biosensors for sequence-specific DNA detection. The approaches used for preparing the biosensing layer, as well as the schemes developed for the transduction of the hybridization event are also discussed. --- paper_title: Nanomaterial-based electrochemical biosensors. paper_content: The unique properties of nanoscale materials offer excellent prospects for interfacing biological recognition events with electronic signal transduction and for designing a new generation of bioelectronic devices exhibiting novel functions. In this Highlight I address recent research that has led to powerful nanomaterial-based electrical biosensing devices and examine future prospects and challenges. 
New nanoparticle-based signal amplification and coding strategies for bioaffinity assays are discussed, along with carbon-nanotube molecular wires for achieving efficient electrical communication with redox enzymes and nanowire-based label-free DNA sensors. --- paper_title: Array-Based Electrical Detection of DNA with Nanoparticle Probes paper_content: A DNA array detection method is reported in which the binding of oligonucleotides functionalized with gold nanoparticles leads to conductivity changes associated with target-probe binding events. The binding events localize gold nanoparticles in an electrode gap; silver deposition facilitated by these nanoparticles bridges the gap and leads to readily measurable conductivity changes. An unusual salt concentration–dependent hybridization behavior associated with these nanoparticle probes was exploited to achieve selectivity without a thermal-stringency wash. Using this method, we have detected target DNA at concentrations as low as 500 femtomolar with a point mutation selectivity factor of ∼ 100,000:1. --- paper_title: Electrochemical DNA biosensor based on silver nanoparticles/poly(3-(3-pyridyl) acrylic acid)/carbon nanotubes modified electrode. paper_content: In this work, we present an electrochemical DNA sensor based on a silver nanoparticles/poly(trans-3-(3-pyridyl) acrylic acid) (PPAA)/multiwalled carbon nanotubes with carboxyl groups (MWCNTs–COOH) modified glassy carbon electrode (GCE). The polymer film was electropolymerized onto the MWCNTs–COOH modified electrode by cyclic voltammetry (CV), and then silver nanoparticles were electrodeposited on the surface of the PPAA/MWCNTs–COOH composite film. A thiol-terminated single-stranded DNA (HS–ssDNA) probe was easily covalently linked onto the surface of the silver nanoparticles through a 5′ thiol linker. The DNA hybridization events were monitored based on the signal of the intercalated adriamycin by differential pulse voltammetry (DPV). Based on the response of adriamycin, only the complementary oligonucleotides gave an obvious current signal compared with the three-base mismatched and noncomplementary oligonucleotides. Under the optimal conditions, the increase of the reduction peak current of adriamycin was linear with the logarithm of the concentration of the complementary oligonucleotides from 9.0 × 10⁻¹² to 9.0 × 10⁻⁹ M, with a detection limit of 3.2 × 10⁻¹² M. In addition, this DNA sensor exhibited excellent reproducibility and stability during the DNA hybridization assay. --- paper_title: Carbon Nanotube DNA Sensor and Sensing Mechanism paper_content: We report the fabrication of single-walled carbon nanotube (SWNT) DNA sensors and the sensing mechanism. The simple and generic protocol for label-free detection of DNA hybridization is demonstrated with random-sequence 15mer and 30mer oligonucleotides. DNA hybridization on gold electrodes, instead of on SWNT sidewalls, is mainly responsible for the acute electrical conductance change due to the modulation of energy level alignment between the SWNT and the gold contact. This work provides concrete experimental evidence on the effect of SWNT−DNA binding on DNA functionality, which will help to pave the way for the future design of SWNT biocomplexes for applications in biotechnology in general and also DNA-assisted nanotube manipulation techniques. --- paper_title: Silicon nanowire arrays for label-free detection of DNA.
paper_content: Arrays of highly ordered n-type silicon nanowires (SiNW) are fabricated using complementary metal-oxide semiconductor (CMOS) compatible technology, and their applications in biosensors are investigated. Peptide nucleic acid (PNA) capture probe-functionalized SiNW arrays show a concentration-dependent resistance change upon hybridization to complementary target DNA that is linear over a large dynamic range with a detection limit of 10 fM. As with other SiNW biosensing devices, the sensing mechanism can be understood in terms of the change in charge density at the SiNW surface after hybridization, the so-called "field effect". The SiNW array biosensor discriminates satisfactorily against mismatched target DNA. It is also able to monitor directly the DNA hybridization event in situ and in real time. The SiNW array biosensor described here is ultrasensitive, non-radioactive, and more importantly, label-free, and is of particular importance to the development of gene expression profiling tools and point-of-care applications. --- paper_title: Integrated nanoparticle-biomolecule systems for biosensing and bioelectronics. paper_content: The similar dimensions of biomolecules such as enzymes, antibodies or DNA, and metallic or semiconductor nanoparticles (NPs) enable the synthesis of biomolecule–NP hybrid systems where the unique electronic, photonic and catalytic properties of NPs are combined with the specific recognition and biocatalytic properties of biomolecules. The unique functions of biomolecule–NP hybrid systems are discussed with several examples: (i) the electrical contacting of redox enzymes with electrodes is the basis for the development of enzymatic electrodes for amperometric biosensors or biofuel cell elements. The reconstitution of the apo-glucose oxidase or apo-glucose dehydrogenase on flavin adenine dinucleotide (FAD)-functionalized Au NPs (1.4 nm) associated with electrodes, or on pyrroloquinoline quinone (PQQ)-functionalized Au NPs (1.4 nm) associated with electrodes, respectively, yields electrically contacted enzyme electrodes. The aligned, reconstituted enzymes on the electrode surfaces reveal effective electrical contacting, and the glucose oxidase and glucose dehydrogenase reveal turnover rates of 5000 and 11,800 s⁻¹, respectively. (ii) The photoexcitation of semiconductor nanoparticles yields fluorescence with a wavelength controlled by the size of the NPs. The fluorescence functions of semiconductor NPs are used to develop a fluorescence resonance energy transfer (FRET) assay for nucleic acids, and specifically, for analyzing telomerase activity in cancer cells. CdSe–ZnS NPs are functionalized by a primer recognized by telomerase, and this is elongated by telomerase extracted from HeLa cancer cells in the presence of dNTPs and Texas-red-functionalized dUTP. The dye integrated into the telomers allows the FRET process that is intensified as telomerization proceeds. Also, the photoexcited electron–hole pair generated in semiconductor NPs is used to generate photocurrents in a CdS–DNA hybrid system associated with an electrode. A redox-active intercalator, methylene blue, was incorporated into a CdS–duplex DNA monolayer associated with a Au electrode, and this facilitated the electron transfer between the electrode and the CdS NPs. The direction of the photocurrent was controlled by the oxidation state of the intercalator.
(iii) Biocatalysts grow metallic NPs, and the absorbance of the NPs provides a means to assay the biocatalytic transformations. This is exemplified with the glucose oxidase-induced growth of Au NPs and with the tyrosinase-stimulated growth of Au NPs, in the presence of glucose or tyrosine, respectively. The biocatalytic growth of the metallic NPs is used to grow nanowires on surfaces. Glucose oxidase or alkaline phosphatase functionalized with Au NPs (1.4 nm) acted as ‘biocatalytic inks’ for the synthesis of metallic nanowires. The deposition of the Au NP-modified glucose oxidase or the Au NP-modified alkaline phosphatase on Si surfaces by dip-pen nanolithography led to biocatalytic templates that, after interaction with glucose/AuCl4⁻ or p-aminophenolphosphate/Ag⁺, allowed the synthesis of Au nanowires or Ag nanowires, respectively. --- paper_title: Carbon nanotubes for electrochemical biosensing. paper_content: The aim of this review is to summarize the most relevant contributions in the development of electrochemical (bio)sensors based on carbon nanotubes in recent years. Since the first application of carbon nanotubes in the preparation of an electrochemical sensor, an increasing number of publications involving carbon nanotube-based sensors have been reported, demonstrating that the particular structure of carbon nanotubes and their unique properties make them a very attractive material for the design of electrochemical biosensors. The advantages of carbon nanotubes in promoting different electron transfer reactions, especially those related to biomolecules; the different strategies for constructing carbon nanotube-based electrochemical sensors; and their analytical performance and future prospects are discussed in this article. --- paper_title: Allele-specific genotype detection of factor V Leiden mutation from polymerase chain reaction amplicons based on label-free electrochemical genosensor. paper_content: An electrochemical genosensor for the genotype detection of allele-specific factor V Leiden mutation from PCR amplicons using the intrinsic guanine signal is described. The biosensor relies on the immobilization of the 21-mer inosine-substituted oligonucleotide capture probes related to the wild-type or mutant-type amplicons, and these probes are hybridized with their complementary DNA sequences at a carbon paste electrode (CPE). The extent of hybridization between the probe and target sequences was determined by using the oxidation signal of guanine in connection with differential pulse voltammetry (DPV). The guanine signal was monitored as a result of the specific hybridization between the probe and amplicon at the CPE surface. No label-binding step was necessary, and the appearance of the guanine signal shortened the assay time and simplified the detection of the factor V Leiden mutation from polymerase chain reaction (PCR)-amplified amplicons. The discrimination between the homozygous and heterozygous mutations was also established by comparing the peak currents of the guanine signals. Numerous factors affecting the hybridization and nonspecific binding events were optimized to detect down to 51.14 fmol/mL target DNA. With the help of the appearance of the guanine signal, the yes/no system is established for the electrochemical detection of allele-specific mutation on factor V for the first time. Features of this protocol are discussed and optimized. --- paper_title: In situ DNA amplification with magnetic primers for the electrochemical detection of food pathogens.
paper_content: A sensitive and selective genomagnetic assay for the electrochemical detection of food pathogens based on in situ DNA amplification with magnetic primers has been designed. The performance of the genomagnetic assay was first demonstrated for a DNA synthetic target by its double-hybridization with both a digoxigenin probe and a biotinylated capture probe, and further binding to streptavidin-modified magnetic beads. The sandwiched DNA target bound on the magnetic beads is then separated by using a magneto electrode based on a graphite-epoxy composite. The electrochemical detection is finally achieved by an enzyme marker, anti-digoxigenin horseradish peroxidase (HRP). The novel strategy was used for the rapid and sensitive detection of polymerase chain reaction (PCR) amplified samples. Promising results were also achieved for the DNA amplification directly performed on magnetic beads by using a novel magnetic primer, i.e., the up PCR primer bound to magnetic beads. Moreover, the magneto DNA biosensing assay was able to detect changes at single nucleotide polymorphism (SNP) level, when stringent hybridization conditions were used. The reliability of the assay was tested for Salmonella spp., the most important pathogen affecting food safety. --- paper_title: Labelfree fully electronic nucleic acid detection system based on a field-effect transistor device. paper_content: The labelfree detection of nucleic acid sequences is one of the modern attempts to develop quick, cheap and miniaturised hand-held devices for future genetic testing in biotechnology and medical diagnostics. We present an approach to detect the hybridisation of DNA sequences using electrolyte-oxide-semiconductor field-effect transistors (EOSFETs) with micrometer dimensions. These semiconductor devices are sensitive to electrical charge variations that occur at the surface/electrolyte interface, i.e. upon hybridisation of oligonucleotides with complementary single-stranded (ss) oligonucleotides, which are immobilised on the oxide surface of the transistor gate. This method allows direct, time-resolved and in situ detection of specific nucleic acid binding events without any labelling. We focus on the detection mechanism of our sensors by using oppositely charged polyelectrolytes (PAH and PSS) subsequently attached to the transistor structures. Our results indicate that the sensor output is charge sensitive and distance dependent from the gate surface, which pinpoints the need for well-defined surface chemistry at the device surface. The hybridisation of natural 19 base-pair sequences has been successfully detected with the sensors. In combination with nano-transistors, a PCR-free detection system might be feasible in the future. --- paper_title: Electrostatic surface plasmon resonance: direct electric field-induced hybridization and denaturation in monolayer nucleic acid films and label-free discrimination of base mismatches. paper_content: We demonstrate that in situ optical surface plasmon resonance spectroscopy can be used to monitor hybridization kinetics for unlabeled DNA in tethered monolayer nucleic acid films on gold in the presence of an applied electrostatic field. The dc field can enhance or retard hybridization and can also denature surface-immobilized DNA duplexes. Discrimination between matched and mismatched hybrids is achieved by simple adjustment of the electrode potential.
Although the electric field at the interface is extremely large, the tethered single-stranded DNA thiol probes remain bound and can be reused for subsequent hybridization reactions without loss of efficiency. Only capacitive charging currents are drawn; redox reactions are avoided by maintaining the gold electrode potential within the ideally polarizable region. Because of potential-induced changes in the shape of the surface plasmon resonance curve, we account for the full curve rather than simply the shift in the resonance minimum. --- paper_title: Label-free detection of DNA hybridization using carbon nanotube network field-effect transistors paper_content: We report carbon nanotube network field-effect transistors (NTNFETs) that function as selective detectors of DNA immobilization and hybridization. NTNFETs with immobilized synthetic oligonucleotides have been shown to specifically recognize target DNA sequences, including H63D single-nucleotide polymorphism (SNP) discrimination in the HFE gene, responsible for hereditary hemochromatosis. The electronic responses of NTNFETs upon single-stranded DNA immobilization and subsequent DNA hybridization events were confirmed by using fluorescence-labeled oligonucleotides and then were further explored for label-free DNA detection at picomolar to micromolar concentrations. We have also observed a strong effect of DNA counterions on the electronic response, thus suggesting a charge-based mechanism of DNA detection using NTNFET devices. Implementation of label-free electronic detection assays using NTNFETs constitutes an important step toward low-cost, low-complexity, highly sensitive and accurate molecular diagnostics. --- paper_title: A fully electronic sensor for the measurement of cDNA hybridization kinetics paper_content: Ion-sensitive field-effect transistors (ISFET) are candidates for a new generation of fully electrical DNA sensors. To this purpose, we have modified ISFET sensors by adsorbing on their Si3N4 surface poly-L-lysine and single (as well as double) stranded DNA. Once coupled to an accurate model of the oppositely charged layers adsorbed on the surface, the proposed sensor allows quantitative evaluation of the adsorbed molecule densities, as well as estimation of DNA hybridization kinetics. --- paper_title: Microelectrodes on a Silicon Chip for Label-Free Capacitive DNA Sensing paper_content: This paper presents the experimental characterization of two-terminal microfabricated capacitors for microarrays with an electrical sensing of label-free deoxyribonucleic acid (DNA). So far, such a concept has been demonstrated only in experimental setups featuring dimensions much larger than those typical of microfabrication. Therefore, this paper investigates: 1) the compatibility of the silicon microelectronic processes with biological functionalization procedures; 2) the effects of parasitics when electrodes have realistic dimensions; 3) measurement stability and reproducibility; and 4) the possibility of a fully integrated stand-alone device. The obtained results clearly indicate that two-terminal capacitive sensing with fully integrated electronics represents a viable technology for label-free DNA detection/recognition. --- paper_title: An FET-type charge sensor for highly sensitive detection of DNA sequence.
paper_content: We have fabricated a field-effect transistor (FET)-type DNA charge sensor based on 0.5 µm standard complementary metal oxide semiconductor (CMOS) technology, which can detect the deoxyribonucleic acid (DNA) probe's immobilization and information on hybridization by sensing the variation of the drain current due to DNA charge, and have investigated its electrical characteristics. The FET-type charge sensor for detecting DNA sequences is a semiconductor sensor measuring the change of electric charge caused by the DNA probe's immobilization on the gate metal, based on the field-effect mechanism of the MOSFET. It was fabricated as a p-channel (P)MOSFET type because the phosphate groups present in DNA have a negative charge and this charge determines the effective gate potential of the PMOSFET. Gold (Au), which has a chemical affinity for thiols, was used as the gate metal in order to immobilize DNA. The gate potential is determined by the electric charge which DNA possesses. Variation of the drain current versus time was measured. The drain current increased when thiol DNA and target DNA were injected into the solution, because of the field effect due to the electrical charge of DNA molecules. The experimental validity was verified by the results of mass changes detected using a quartz crystal microbalance (QCM) under the same measurement conditions. Therefore it is confirmed that DNA sequences can be detected by measuring the variation of the drain current due to the variation of DNA charge, and the proposed FET-type DNA charge sensor might be useful in the development of DNA chips. --- paper_title: Surface Plasmon Resonance Spectroscopy as a Probe of In-Plane Polymerization in Monolayer Organic Conducting Films paper_content: Several groups have shown that alkanethiol-modified pyrroles can be tethered to a gold surface, but there is often little evidence that, once oxidized, the resulting monolayer film is an organic conducting polymer. Using surface plasmon resonance (SPR) spectroscopy, we demonstrate for the first time that, upon electrochemical oxidation, self-assembled alkanethiol−pyrrole films on gold show behavior characteristic of organic conducting polymers: we observe reversible changes in the optical constants of the organic film upon doping/dedoping. Since the optical constants are related to film conductivity, we show that the effective isotropic dielectric constant of the film obtained in the standard SPR data analysis can be interpreted in terms of in-plane and out-of-plane contributions to film conductivity. We find that the in-plane conductivity of oxidized 3-(ω-mercaptoundecyl)pyrrole is smaller than, but of the same order of magnitude as, that found for thick films of polypyrrole. Most importantly, we observe re... --- paper_title: Electronic detection of DNA by its intrinsic molecular charge. paper_content: We report the selective and real-time detection of label-free DNA using an electronic readout. Microfabricated silicon field-effect sensors were used to directly monitor the increase in surface charge when DNA hybridizes on the sensor surface. The electrostatic immobilization of probe DNA on a positively charged poly-L-lysine layer allows hybridization at low ionic strength where field-effect sensing is most sensitive. Nanomolar DNA concentrations can be detected within minutes, and a single base mismatch within 12-mer oligonucleotides can be distinguished by using a differential detection technique with two sensors in parallel.
The sensors were fabricated by standard silicon microtechnology and show promise for future electronic DNA arrays and rapid characterization of nucleic acid samples. This approach demonstrates the most direct and simple translation of genetic information to microelectronics. --- paper_title: A Fully Electronic Label-Free DNA Sensor Chip paper_content: This paper presents a microfabricated DNA chip for fully electronic, label-free DNA recognition based on capacitance measurements. The chip has been fabricated in 0.5-µm CMOS technology and it features an array of individually addressable sensing sites consisting of pairs of gold electrodes and addressing logic. Read-out circuitry is built externally using standard components to provide increased experimental flexibility. The chip has been electrically characterized and tested with various solutions containing DNA samples. Significant capacitance variations due to DNA hybridization have been measured, thus showing that the approach represents a viable solution for a single-chip DNA sensor array. --- paper_title: Label-free determination of picogram quantities of DNA by stripping voltammetry with solid copper amalgam or mercury electrodes in the presence of copper. paper_content: Highly sensitive label-free techniques of DNA determination are particularly interesting in relation to the present development of DNA sensors. We show that subnanomolar concentrations (related to monomer content) of unlabeled DNA can be determined using copper solid amalgam electrodes or hanging mercury drop electrodes in the presence of copper. DNA is first treated with acid (e.g., 0.5 M perchloric acid), and the acid-released purine bases are directly determined by cathodic stripping voltammetry. Volumes of 5−3 μL of acid-treated DNA can easily be analyzed, thus making possible the determination of picogram and subpicogram amounts of DNA corresponding to attomole and subattomole quantities of 1000-base pair DNA. Application of this determination in DNA hybridization detection is demonstrated using surface H for the hybridization (superparamagnetic beads with covalently attached DNA probe) and the mercury electrodes only for the determination of DNA selectively captured at surface H. --- paper_title: Room-temperature transistor based on a single carbon nanotube paper_content: The use of individual molecules as functional electronic devices was first proposed in the 1970s (ref. 1). Since then, molecular electronics (refs 2, 3) has attracted much interest, particularly because it could lead to conceptually new miniaturization strategies in the electronics and computer industry. The realization of single-molecule devices has remained challenging, largely owing to difficulties in achieving electrical contact to individual molecules. Recent advances in nanotechnology, however, have resulted in electrical measurements on single molecules (refs 4–7). Here we report the fabrication of a field-effect transistor—a three-terminal switching device—that consists of one semiconducting (refs 8–10) single-wall carbon nanotube (refs 11, 12) connected to two metal electrodes. By applying a voltage to a gate electrode, the nanotube can be switched from a conducting to an insulating state. We have previously reported (ref. 5) similar behaviour for a metallic single-wall carbon nanotube operated at extremely low temperatures. The present device, in contrast, operates at room temperature, thereby meeting an important requirement for potential practical applications.
Electrical measurements on the nanotube transistor indicate that its operation characteristics can be qualitatively described by the semiclassical band-bending models currently used for traditional semiconductor devices. The fabrication of the three-terminal switching device at the level of a single molecule represents an important step towards molecular electronics. --- paper_title: Modification of indium tin oxide electrodes with repeat polynucleotides: electrochemical detection of trinucleotide repeat expansion. paper_content: Genomic expansion of the triplet repeat sequences 5'-(CTG)n and 5'-(CGG)n leads to myotonic dystrophy and fragile X syndrome, respectively. Methods for determining the number of repeats in unprocessed nucleic acids would be useful in diagnosing diseases based on triplet repeat expansion. Electrochemical reactions based on the oxidation of guanine were expected to give larger signals per strand for expansion of repeats containing guanine. A novel PCR reaction was used to generate fragments containing 150, 230, 400, and 830 repeats of (CTG)n, which codes for myotonic dystrophy, and 130 and 600 repeats of (CGG)n, which codes for fragile X syndrome. These PCR fragments were immobilized to indium tin oxide electrodes, and oxidation of guanine in the fragments was realized using electrocatalysis by Ru(bpy)3(2+) (bpy = 2,2'-bipyridine). The catalytic currents due to oxidation of the immobilized guanines by Ru(bpy)3(3+) increased with the number of repeats and were a linear function of the repeat number when normalized to the number of strands immobilized. These results suggest a sensing strategy for repeat length based on the combination of the electrocatalytic strategy for determining the repeat length combined with existing methods for determining the number of strands. --- paper_title: Alkaline Phosphatase-Catalyzed Silver Deposition for Electrochemical Detection paper_content: Alkaline phosphatase (AP) is one of the most used enzymatic labels for the development of ELISAs, immunosensors, DNA hybridization assays, etc. This enzyme catalyzes the dephosphorylation of a substrate into a detectable product usually quantified by optical or electrochemical measurements. This work is based on a substrate (3-indoxyl phosphate) that produces a compound able to reduce silver ions in solution into a metallic deposit, which is localized where the enzymatic label AP is attached. The deposited silver is electrochemically stripped into solution and measured by anodic stripping voltammetry. Its application to an enzymatic genosensor on streptavidin-modified screen-printed carbon electrodes for the detection of virulence nucleic acid determinants of autolysin gene, exclusively present on the genome of the human pathogen Streptococcus pneumoniae, is described. Compared with the direct voltammetric detection of indigo carmine, the anodic stripping voltammetry of silver ions is 14-fold more sensitive. --- paper_title: Identification of Upper Respiratory Tract Pathogens Using Electrochemical Detection on an Oligonucleotide Microarray paper_content: Bacterial and viral upper respiratory infections (URI) produce highly variable clinical symptoms that cannot be used to identify the etiologic agent. Proper treatment, however, depends on correct identification of the pathogen involved as antibiotics provide little or no benefit with viral infections. 
Here we describe a rapid and sensitive genotyping assay and microarray for URI identification using standard amplification and hybridization techniques, with electrochemical detection (ECD) on a semiconductor-based oligonucleotide microarray. The assay was developed to detect four bacterial pathogens (Bordetella pertussis, Streptococcus pyogenes, Chlamydia pneumoniae and Mycoplasma pneumoniae) and nine viral pathogens (adenovirus 4, coronavirus OC43, 229E and HK, influenza A and B, parainfluenza types 1, 2, and 3, and respiratory syncytial virus). This new platform forms the basis for a fully automated diagnostics system that is very flexible and can be customized to suit different or additional pathogens. Multiple probes on a flexible platform allow one to test probes empirically and then select highly reactive probes for further iterative evaluation. Because ECD uses an enzymatic reaction to create electrical signals that can be read directly from the array, there is no need for image analysis or for expensive and delicate optical scanning equipment. We show assay sensitivity and specificity that are excellent for a multiplexed format. --- paper_title: Exploring the Metabolic and Genetic Control of Gene Expression on a Genomic Scale paper_content: DNA microarrays containing virtually every gene of Saccharomyces cerevisiae were used to carry out a comprehensive investigation of the temporal program of gene expression accompanying the metabolic shift from fermentation to respiration. The expression profiles observed for genes with known metabolic functions pointed to features of the metabolic reprogramming that occur during the diauxic shift, and the expression patterns of many previously uncharacterized genes provided clues to their possible functions. The same DNA microarrays were also used to identify genes whose expression was affected by deletion of the transcriptional co-repressor TUP1 or overexpression of the transcriptional activator YAP1. These results demonstrate the feasibility and utility of this approach to genomewide exploration of gene expression patterns. --- paper_title: Enzyme-catalyzed signal amplification for electrochemical DNA detection with a PNA-modified electrode paper_content: The signal amplification technique of a peptide nucleic acid (PNA)-based electrochemical DNA sensor was developed in a label-free, one-step method utilizing enzymatic catalysis. Electrochemical detection of DNA hybridization on a PNA-modified electrode is based on the change of surface charge caused by the hybridization of negatively charged DNA molecules. The negatively charged mediator, ferrocenedicarboxylic acid, cannot diffuse to the DNA-hybridized electrode surface due to the charge repulsion with the hybridized DNA molecule, while it can easily approach the neutral PNA-modified electrode surface without the hybridization. By employing glucose oxidase catalysis on this PNA-based electrochemical system, the oxidized mediator could be immediately reduced, leading to greatly increased electrochemical signals. Using the enzymatic strategy, we successfully demonstrated its clinical utility by detecting one of the mutation sequences of the breast cancer susceptibility gene BRCA1 at a sample concentration lower than 10⁻⁹ M. Furthermore, a single base-mismatched sample could also be discriminated from a perfectly matched sample.
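Several of the sensors summarized above report calibration curves that are linear in the logarithm of target concentration and quote detection limits in the picomolar to femtomolar range. As a purely illustrative aside, and not a procedure taken from any of the cited papers, the short Python sketch below shows one common way such a limit of detection is estimated from a log-linear calibration and replicate blank measurements using the 3-sigma criterion; all numerical values are invented.

```python
import numpy as np

# Hypothetical calibration data: voltammetric peak current (nA) measured for
# a dilution series of complementary target DNA (mol/L). The values are
# invented purely for illustration and do not come from any cited paper.
conc = np.array([1e-11, 1e-10, 1e-9, 1e-8])       # target concentration, M
i_peak = np.array([2.1, 4.0, 6.2, 8.1])           # peak current, nA
blank = np.array([0.50, 0.55, 0.48, 0.52, 0.51])  # replicate blank signals, nA

# Many of the sensors above report a response that is linear in the
# logarithm of target concentration, so fit i = slope*log10(c) + intercept.
slope, intercept = np.polyfit(np.log10(conc), i_peak, 1)

# Common 3-sigma criterion: the detection limit is the concentration whose
# expected signal exceeds the mean blank by three standard deviations.
i_lod = blank.mean() + 3 * blank.std(ddof=1)
log_c_lod = (i_lod - intercept) / slope
print(f"slope = {slope:.2f} nA/decade, estimated LOD ~ {10**log_c_lod:.2e} M")
```

The same arithmetic applies whether the raw signal is a reduction peak current, a capacitance change or a conductance change; only the calibration data and blank replicates differ.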
--- paper_title: DNA single-base mismatch study with an electrochemical enzymatic genosensor paper_content: A thorough selectivity study of DNA hybridization employing an electrochemical enzymatic genosensor is discussed here. After immobilizing on a gold film a 30-mer 3′-thiolated DNA strand, hybridization with a biotinylated complementary one takes place. Then, alkaline phosphatase is incorporated into the duplex through the streptavidin–biotin interaction. Enzymatic generation of indigo blue from 3-indoxyl phosphate and subsequent electrochemical detection were carried out. The influence of hybridization conditions was studied in order to better discern between fully complementary and mismatched strands. Detection of 3, 2 and 1 mismatches was possible. The type and location of the single-base mismatch, as well as the influence of the length of the strands, were also studied. Mutations that involve a displacement of the reading frame were also considered. The effect of the concentration on the selectivity was tested, resulting in a highly selective genosensor with adequate sensitivity and stability. --- paper_title: Allele-specific genotype detection of factor V Leiden mutation from polymerase chain reaction amplicons based on label-free electrochemical genosensor. paper_content: An electrochemical genosensor for the genotype detection of allele-specific factor V Leiden mutation from PCR amplicons using the intrinsic guanine signal is described. The biosensor relies on the immobilization of the 21-mer inosine-substituted oligonucleotide capture probes related to the wild-type or mutant-type amplicons, and these probes are hybridized with their complementary DNA sequences at a carbon paste electrode (CPE). The extent of hybridization between the probe and target sequences was determined by using the oxidation signal of guanine in connection with differential pulse voltammetry (DPV). The guanine signal was monitored as a result of the specific hybridization between the probe and amplicon at the CPE surface. No label-binding step was necessary, and the appearance of the guanine signal shortened the assay time and simplified the detection of the factor V Leiden mutation from polymerase chain reaction (PCR)-amplified amplicons. The discrimination between the homozygous and heterozygous mutations was also established by comparing the peak currents of the guanine signals. Numerous factors affecting the hybridization and nonspecific binding events were optimized to detect down to 51.14 fmol/mL target DNA. With the help of the appearance of the guanine signal, the yes/no system is established for the electrochemical detection of allele-specific mutation on factor V for the first time. Features of this protocol are discussed and optimized. --- paper_title: CombiMatrix oligonucleotide arrays: genotyping and gene expression assays employing electrochemical detection. paper_content: Electrochemical detection has been developed and assay performances studied for the CombiMatrix oligonucleotide microarray platform that contains 12,544 individually addressable microelectrodes (features) in a semiconductor matrix. The approach is based on the detection of redox active chemistries (such as horseradish peroxidase (HRP) and the associated substrate TMB) proximal to specific microarray electrodes. First, microarray probes are hybridized to biotin-labeled targets; second, the HRP–streptavidin conjugate binds to biotin, and enzymatic oxidation of the electron donor substrate then occurs.
The detection current is generated due to electro-reduction of the HRP reaction product, and it is measured with the CombiMatrix ElectraSense Reader. Performance of the ElectraSense platform has been characterized using gene expression and genotyping assays to analyze: (i) signal to concentration dependence, (ii) assay resolution, (iii) coefficients of variation, (CV) and (iv) array-to-array reproducibility and data correlation. The ElectraSense platform was also compared to the standard fluorescent detection, and good consistency was observed between these two different detection techniques. A lower detection limit of 0.75 pM was obtained for ElectraSense as compared to the detection limit of 1.5 pM obtained for fluorescent detection. Thus, the ElectraSense platform has been used to develop nucleic acid assays for highly accurate genotyping of a variety of pathogens including bio-threat agents (such as Bacillus anthracis, Yersinia pestis, and other microorganisms including Escherichia coli, Bacillus subtilis, etc.) and common pathogens of the respiratory tract (e.g. influenza A virus). --- paper_title: Electrochemical detection of gene expression in tumor samples: overexpression of Rak nuclear tyrosine kinase. paper_content: Absolute quantification of Rak nuclear tyrosine kinase mRNA in breast tissue samples was determined by competitive RT-PCR. The total RNA from the same samples was also chemically amplified through conventional RT-PCR, and the relative amounts of these amplified RT-PCR products were determined by adsorption onto an indium tin oxide (ITO) electrode followed by electrochemical detection. The electrochemical detection was performed using the inorganic metal complex Ru(bpy)32+ (bpy = 2,2‘ bipyridine) to catalyze the oxidation of the guanine residues of the immobilized RT-PCR products. Using the competitive RT-PCR values as standards, it was found that an optimized conventional RT-PCR coupled with electrochemical detection provides a simple method for measuring relative gene expression among a series of mRNA samples from breast tumors. The use of electrochemical detection potentially eliminates the need for gel electrophoresis and fluorescent or radioactive labels in detecting the target genes. --- paper_title: DNA aptamers that recognize fluorophore using on-chip screening in combination with an in silico evolution paper_content: We successfully developed a novel screening method for the acquisition of DNA aptamers. The technique selectively recognizes resorufin using on-chip screening in combination with an in silico evolution method. This method proved efficient for screening for DNA aptamers of single-stranded oligo-DNAs. A genetic algorithm was applied to make oligonucleotide sequences for the combinatorial library. A fluorophore, resorufin was applied to the ligand screening as a target. The affinity of the library was analyzed by the DNA microarray. This method for screening DNA ligands includes on-chip selection and a computer-evolved sequence, where the highest affinity was chosen. The fluorescence intensity of the library on the DNA microarray increased after three repetitions of the selection round. --- paper_title: Array-based evolution of DNA aptamers allows modelling of an explicit sequence-fitness landscape paper_content: Mapping the landscape of possible macromolecular polymer sequences to their fitness in performing biological functions is a challenge across the biosciences. 
A paradigm is the case of aptamers, nucleic acids that can be selected to bind particular target molecules. We have characterized the sequence-fitness landscape for aptamers binding allophycocyanin (APC) protein via a novel Closed Loop Aptameric Directed Evolution (CLADE) approach. In contrast to the conventional SELEX methodology, selection and mutation of aptamer sequences was carried out in silico, with explicit fitness assays for 44 131 aptamers of known sequence using DNA microarrays in vitro. We capture the landscape using a predictive machine learning model linking sequence features and function and validate this model using 5500 entirely separate test sequences, which give a very high observed versus predicted correlation of 0.87. This approach reveals a complex sequence-fitness mapping, and hypotheses for the physical basis of aptameric binding; it also enables rapid design of novel aptamers with desired binding properties. We demonstrate an extension to the approach by incorporating prior knowledge into CLADE, resulting in some of the tightest binding sequences. --- paper_title: Electrochemically Generated Acid and Its Containment to 100 Micron Reaction Areas for the Production of DNA Microarrays paper_content: An addressable electrode array was used for the production of acid at sufficient concentration to allow deprotection of the dimethoxytrityl (DMT) protecting group from an overlaying substrate bound to a porous reaction layer. Containment of the generated acid to an active electrode of 100 micron diameter was achieved by the presence of an organic base. This procedure was then used for the production of a DNA array, in which synthesis was directed by the electrochemical removal of the DMT group during synthesis. The product array was found to have a detection sensitivity to as low as 0.5 pM DNA in a complex background sample. --- paper_title: CombiMatrix oligonucleotide arrays: genotyping and gene expression assays employing electrochemical detection. paper_content: Electrochemical detection has been developed and assay performances studied for the CombiMatrix oligonucleotide microarray platform that contains 12,544 individually addressable microelectrodes (features) in a semiconductor matrix. The approach is based on the detection of redox active chemistries (such as horseradish peroxidase (HRP) and the associated substrate TMB) proximal to specific microarray electrodes. First, microarray probes are hybridized to biotin-labeled targets, second, the HRP-streptavidin conjugate binds to biotin, and enzymatic oxidation of the electron donor substrate then occurs. The detection current is generated due to electro-reduction of the HRP reaction product, and it is measured with the CombiMatrix ElectraSense Reader. Performance of the ElectraSense platform has been characterized using gene expression and genotyping assays to analyze: (i) signal to concentration dependence, (ii) assay resolution, (iii) coefficients of variation, (CV) and (iv) array-to-array reproducibility and data correlation. The ElectraSense platform was also compared to the standard fluorescent detection, and good consistency was observed between these two different detection techniques. A lower detection limit of 0.75 pM was obtained for ElectraSense as compared to the detection limit of 1.5 pM obtained for fluorescent detection. 
Thus, the ElectraSense platform has been used to develop nucleic acid assays for highly accurate genotyping of a variety of pathogens including bio-threat agents (such as Bacillus anthracis, Yersinia pestis, and other microorganisms including Escherichia coli, Bacillus subtilis, etc.) and common pathogens of the respiratory tract (e.g. influenza A virus). --- paper_title: In Vitro Selection of DNA Aptamers on Chips Using a Method for Generating Point Mutations paper_content: We successfully developed a novel selection method for the acquisition of DNA aptamers that selectively recognize resorufin using on‐chip selection in combination with a method for generating point mutations. This method proved efficient for the selection of DNA aptamers from single‐stranded oligo‐DNAs. A genetic algorithm was applied to produce oligonucleotides for the combinatorial library. A fluorescent molecule, resorufin, was used as the target for ligand selection. The binding affinity of the library was analyzed by the DNA chip. This selection method for DNA ligands combines on‐chip selection with point‐mutated sequences, from which the sequence with the highest affinity was selected. The fluorescence intensity of the library on the DNA chip increased after three repetitions of the selection round. The average response in the affinity test increased with each generation. --- paper_title: Mutation detection by electrocatalysis at DNA-modified electrodes paper_content: Detection of mutations and damaged DNA bases is important for the early diagnosis of genetic disease. Here we describe an electrocatalytic method for the detection of single-base mismatches as well as DNA base lesions in fully hybridized duplexes, based on charge transport through DNA films. Gold electrodes modified with preassembled DNA duplexes are used to monitor the electrocatalytic signal of methylene blue, a redox-active DNA intercalator, coupled to [Fe(CN)₆]³⁻. The presence of mismatched or damaged DNA bases substantially diminishes the electrocatalytic signal. Because this assay is not a measure of differential hybridization, all single-base mismatches, including thermodynamically stable GT and GA mismatches, can be detected without stringent hybridization conditions. Furthermore, many common DNA lesions and "hot spot" mutations in the human p53 genome can be distinguished from perfect duplexes. Finally, we have demonstrated the application of this technology in a chip-based format. This system provides a sensitive method for probing the integrity of DNA sequences and a completely new approach to single-base mismatch detection. --- paper_title: Charge transport in DNA. paper_content: The base pair stack within double helical DNA provides an effective medium for charge transport. The DNA pi-stack mediates oxidative DNA damage over long molecular distances in a reaction that is exquisitely sensitive to the sequence-dependent conformation and dynamics of DNA. A mixture of tunneling and hopping mechanisms has been proposed to account for this long-range chemistry, which is gated by dynamical variations within the stack. Electrochemical sensors have also been developed, based upon the sensitivity of DNA charge transport to base pair stacking, and these sensors provide a completely new approach to diagnosing single base mismatches in DNA and monitoring protein-DNA interactions electrically. DNA charge transport, furthermore, may play a role within the cell and, indeed, oxidative damage to DNA from a distance has been demonstrated in the cell nucleus.
As a result, the biological consequences of and opportunities for DNA-mediated charge transport now require consideration. --- paper_title: Sequence-selective biosensor for DNA based on electroactive hybridization indicators. paper_content: Deoxyribonucleic acid was covalently immobilized onto oxidized glassy carbon electrode surfaces that had been activated using 1-[3-(dimethylamino)propyl]-3-ethylcarbodiimide hydrochloride and N-hydroxysulfosuccinimide. This reaction is selective for immobilization through deoxyguanosine (dG) residues. Immobilized DNA was detected voltammetrically, using tris (2,2'-bipyridyl)cobalt(III) perchlorate and tris (1,10-phenanthroline)cobalt(III) perchlorate (Co(bpy)3(3+) and Co(phen)3(3+)). These complexes are reversibly electroactive (1e-) and preconcentrate at the electrode surface through association with double-stranded DNA. Voltammetric peak currents obtained with a poly(dG)poly(dC)-modified electrode depend on [Co(bpy)3(3+)] and [Co(phen)3(3+)] in a nonlinear fashion and indicate saturation binding with immobilized DNA. Voltammetric peak currents for Co(phen)3(3+) reduction were used to estimate the (constant) local DNA concentration at the modified electrode surface; a binding site size of 5 base pairs and an association constant of 1.74 x 10(3) M(-1) yield 8.6 +/- 0.2 mM base pairs. Cyclic voltammetric peak separations indicate that heterogeneous electron transfer is slower at DNA-modified electrodes than at unmodified glassy carbon electrodes. A prototype sequence-selective DNA sensor was constructed by immobilizing a 20-mer oligo (deoxythymidylic acid) (oligo(dT)20), following its enzymatic elongation with dG residues, which yielded the species oligo(dT)20(dG)98. Cyclic voltammograms of 0.12 mM Co(bpy)3(3+) obtained before and after hybridization with poly-(dA) and oligo(dA)20 show increased cathodic peaks after hybridization. The single-stranded form is regenerated on the electrode surface by rinsing with hot deionized water. These results demonstrate the use of electroactive hybridization indicators in a reusable sequence-selective biosensor for DNA. --- paper_title: Electrochemical Quantitation of DNA Immobilized on Gold paper_content: We have developed an electrochemical method to quantify the surface density of DNA immobilized on gold. The surface density of DNA, more specifically the number of nucleotide phosphate residues, is calculated from the amount of cationic redox marker measured at the electrode surface. DNA was immobilized on gold by forming mixed monolayers of thiol-derivatized, single-stranded oligonucleotide and 6-mercapto-1-hexanol. The saturated amount of charge-compensating redox marker in the DNA monolayer, determined using chronocoulometry, is directly proportional to the number of phosphate residues and thereby the surface density of DNA. This method permits quantitative determination of both single- and double-stranded DNA at electrodes. Surface densities of single-stranded DNA were precisely varied in the range of (1−10) × 10¹² molecules/cm², as determined by the electrochemical method, using mixed monolayers. We measured the hybridization efficiency of immobilized single-stranded DNA to complementary strands as a... --- paper_title: Applications of DNA microarrays in biology. paper_content: DNA microarrays have enabled biology researchers to conduct large-scale quantitative experiments. This capacity has produced qualitative changes in the breadth of hypotheses that can be explored.
In what has become the dominant mode of use, changes in the transcription rate of nearly all the genes in a genome, taking place in a particular tissue or cell type, can be measured in disease states, during development, and in response to intentional experimental perturbations, such as gene disruptions and drug treatments. The response patterns have helped illuminate mechanisms of disease and identify disease subphenotypes, predict disease progression, assign function to previously unannotated genes, group genes into functional pathways, and predict activities of new compounds. Directed at the genome sequence itself, microarrays have been used to identify novel genes, binding sites of transcription factors, changes in DNA copy number, and variations from a baseline sequence, such as in emerging strains of pathogens or complex mutations in disease-causing human genes. They also serve as a general demultiplexing tool to sort spatially the sequence-tagged products of highly parallel reactions performed in solution. A brief review of microarray platform technology options, and of the process steps involved in complete experiment workflows, is included. --- paper_title: Transcription profiling of rheumatic diseases paper_content: Rheumatic diseases are a diverse group of disorders. Most of these diseases are heterogeneous in nature and show varying responsiveness to treatment. Because our understanding of the molecular complexity of rheumatic diseases is incomplete and criteria for categorization are limited, we mainly refer to them in terms of group averages. The advent of DNA microarray technology has provided a powerful tool to gain insight into the molecular complexity of these diseases; this technology facilitates an open-ended survey to identify comprehensively the genes and biological pathways that are associated with clinically defined conditions. During the past decade, encouraging results have been generated in the molecular description of complex rheumatic diseases, such as rheumatoid arthritis, systemic lupus erythematosus, Sjögren syndrome and systemic sclerosis. Here, we describe developments in genomics research during the past decade that have contributed to our knowledge of pathogenesis, and to the identification of biomarkers for diagnosis, patient stratification and prognostication. --- paper_title: Microarray-Based Genomic DNA Profiling Technologies in Clinical Molecular Diagnostics paper_content: BACKGROUND: Microarray-based genomic DNA profiling (MGDP) technologies are rapidly moving from translational research to clinical diagnostics and have revolutionized medical practices. Such technologies have shown great advantages in detecting genomic imbalances associated with genomic disorders and single-gene diseases. CONTENT: We discuss the development and applications of the major array platforms that are being used in both academic and commercial laboratories. Although no standardized platform is expected to emerge soon, comprehensive oligonucleotide microarray platforms (both comparative genomic hybridization arrays and genotyping hybrid arrays) are rapidly becoming the methods of choice for their demonstrated analytical validity in detecting genomic imbalances, for their flexibility in incorporating customized designs and updates, and for the advantage of being easily manufactured. Copy number variants (CNVs), the form of genomic deletions/duplications detected through MGDP, are a common etiology for a variety of clinical phenotypes.
The widespread distribution of CNVs poses great challenges in interpretation. A broad survey of CNVs in the healthy population, combined with the data accumulated from the patient population in clinical laboratories, will provide a better understanding of the nature of CNVs and enhance the power of identifying genetic risk factors for medical conditions. SUMMARY: MGDP technologies for molecular diagnostics are still at an early stage but are rapidly evolving. We are in the process of extensive clinical validation and utility evaluation of different array designs and technical platforms. CNVs of currently unknown importance will be a rich source of novel discoveries. --- paper_title: A leukemia-enriched cDNA microarray platform identifies new transcripts with relevance to the biology of pediatric acute lymphoblastic leukemia. paper_content: BACKGROUND AND OBJECTIVES: Microarray gene expression profiling has been widely applied to characterize hematologic malignancies, has attributed a molecular signature to leukemia subclasses and has allowed new subclasses to be distinguished. We set out to use microarray technology to identify novel genes relevant for leukemogenesis. To this end we used a unique leukemia-enriched cDNA microarray platform. DESIGN AND METHODS: The systematic sequencing of cDNA libraries of normal and leukemic bone marrow allowed us to increase the number of genes to yield a new release of a previously generated cDNA microarray. Using this platform we analyzed the expression profiles of 4,670 genes in bone marrow samples from 18 pediatric patients with acute lymphoblastic leukemia (ALL). RESULTS: Expression profiling consistently separated the leukemia patients into three groups: those with T-ALL, B-ALL and B-ALL with MLL/AF4 rearrangement, in agreement with the clinical classification. Our platform identified 30 genes that best discriminate these three subtypes. Using mini-array technology these 30 genes were validated in another cohort of 17 patients. In particular we identified two novel genes not previously reported: endomucin (EMCN) and ubiquitin specific protease 33 (USP33) that appear to be over-expressed in B-ALL relative to their expression in T-ALL. INTERPRETATION AND CONCLUSIONS: Microarray technology not only allows the distinction between disease subclasses but also offers a chance to identify new genes involved in leukemogenesis. Our approach of using a unique platform has proven to be fruitful in identifying new genes and we suggest exploration of other malignancies using this approach. --- paper_title: Parallel protein and transcript profiles of FSHD patient muscles correlate to the D4Z4 arrangement and reveal a common impairment of slow to fast fibre differentiation and a general deregulation of MyoD‐dependent genes paper_content: Here, we present the first study of a human neuromuscular disorder at the transcriptional and proteomic levels. Autosomal dominant facio-scapulo-humeral muscular dystrophy (FSHD) is caused by a deletion of an integral number of 3.3-kb KpnI repeats inside the telomeric region D4Z4 at the 4q35 locus. We combined a muscle-specific cDNA microarray platform with a proteomic investigation to analyse muscle biopsies of patients carrying a variable number of KpnI repeats. Unsupervised cluster analysis divides patients into three classes, according to their KpnI repeat number.
Expression data reveal a transition from a fast-glycolytic to a slow-oxidative phenotype in FSHD muscle, which is accompanied by a deficit of proteins involved in the response to oxidative stress. In addition, FSHD individuals show a disruption in the MyoD-dependent gene network, suggesting coregulation at the transcriptional level during myogenesis. We also discuss the hypothesis that D4Z4 contraction may affect in trans the expression of a set of genes involved in myogenesis, as well as in the regeneration pathway of satellite cells in adult tissue. Muscular wasting could result from the inability of satellite cells to successfully differentiate into mature fibres and from the accumulation of structural damage caused by a reactive oxygen species (ROS) imbalance induced by an increased oxidative metabolism in fibres. --- paper_title: Reconstruction and functional analysis of altered molecular pathways in human atherosclerotic arteries paper_content: Background: Atherosclerosis affects aorta, coronary, carotid, and iliac arteries more frequently than any other body vessel. There may be common molecular pathways sustaining this process. Plaque presence and diffusion are revealed by circulating factors that can mediate a systemic reaction leading to plaque rupture and thrombosis. Results: We used DNA microarrays and meta-analysis to study how the presence of calcified plaque modifies human coronary and carotid gene expression. We identified a series of potential human atherogenic genes that are integrated in functional networks involved in atherosclerosis. Caveolae and JAK/STAT pathways, and S100A9/S100A8 interacting proteins are certainly involved in the development of vascular disease. We found that the system of caveolae is directly connected with genes that respond to hormone receptors, and indirectly with the apoptosis pathway. Cytokines, chemokines and growth factors released in the blood flux were investigated in parallel. High levels of RANTES, IL-1ra, MIP-1alpha, MIP-1beta, IL-2, IL-4, IL-5, IL-6, IL-7, IL-17, PDGF-BB, VEGF and IFN-gamma were found in plasma of atherosclerotic patients and might also be integrated in the molecular networks underlying atherosclerotic modifications of these vessels. Conclusion: The pattern of cytokine and S100A9/S100A8 up-regulation characterizes atherosclerosis as a proinflammatory disorder. Activation of the JAK/STAT pathway is confirmed by the up-regulation of IL-6, STAT1, ISGF3G and IL10RA genes in coronary and carotid plaques. The functional network constructed in our research is evidence of the central role of STAT proteins and the caveolae system in preserving the plaque. Moreover, Cav-1 is involved in SMC differentiation and dyslipidemia, confirming the importance of lipid homeostasis in the atherosclerotic phenotype. --- paper_title: Expression profiling characterization of laminin alpha-2 positive MDC. paper_content: In the Caucasian population, patients affected by the most frequent forms of congenital muscular dystrophies (MDC) are commonly divided into two groups. The first is characterized by mutations of the gene for laminin alpha-2 (LAMA2). The second is positive for this protein, highly heterogeneous, and has no specific genetic defect associated yet. We studied the skeletal muscle transcriptome of four LAMA2-deficient and six LAMA2-positive MDC patients by cDNA microarrays. The expression profiling defined two patient groups: one mild and one severe phenotype.
This result was in agreement with histopathological features but only partially with the clinical classification. The mild phenotype is characterized by a delayed maturation from slow to fast muscle fibers. Other muscle transcripts, such as telethonin, myosin light-chains 3 and 1V, are underexpressed in this group. We suggest that expression profiling will provide important information to improve our understanding of the molecular basis of laminin alpha-2 positive MDC. --- paper_title: Differential gene expression profiling of laryngeal squamous cell carcinoma by laser capture microdissection and complementary DNA microarrays. paper_content: Background and Aims Genetic alteration associated with initiation and progression of laryngeal squamous cell carcinoma (LSCC) is largely unknown. The aim of this study was to identify genetic changes associated with the disease pathogenesis and pinpoint genes whose expression is impacted by these genetic alterations. Methods Tumor cells were collected from eight matched pairs of specimens of glottic carcinoma of the larynx and histologically normal epithelium tissues adjacent to the carcinoma by laser capture microdissection (LCM). RNAs prepared from these cells were used for genome-wide transcriptome analysis by probing 16 cDNA microarrays. Real-time quantitative RT-PCR and immunohistochemistry of tissue microarrays were used to validate a group of the differentially expressed genes identified by the cDNA microarrays. Results Hierarchical cluster analysis of the expressed genes showed that 2351 genes were differentially expressed and could distinguish cancerous and noncancerous samples. We also found 761 differentially expressed genes that were consistently different between early stage and later stage specimens. Furthermore, abnormal expression of some relevant genes such as MMP12 , HMGA2, and TIMP4 were validated by real-time quantitative RT-PCR and immunohistochemistry. Analysis of gene ontology and pathway distributions then highlighted genes that may be critically important to laryngeal carcinogenesis. Conclusions Our results suggest that using LCM plus DNA microarray analysis may facilitate the identification of clinical molecular markers for disease and novel potential therapeutic targets for LSCC. --- paper_title: Allele-specific genotype detection of factor V Leiden mutation from polymerase chain reaction amplicons based on label-free electrochemical genosensor. paper_content: An electrochemical genosensor for the genotype detection of allele-specific factor V Leiden mutation from PCR amplicons using the intrinsic guanine signal is described. The biosensor relies on the immobilization of the 21-mer inosine-substituted oligonucleotide capture probes related to the wild-type or mutant-type amplicons, and these probes are hybridized with their complementary DNA sequences at a carbon paste electrode (CPE). The extent of hybridization between the probe and target sequences was determined by using the oxidation signal of guanine in connection with differential pulse voltammetry (DPV). The guanine signal was monitored as a result of the specific hybridization between the probe and amplicon at the CPE surface. No label-binding step was necessary, and the appearance of the guanine signal shortened the assay time and simplified the detection of the factor V Leiden mutation from polymerase chain reaction (PCR)-amplified amplicons. 
The discrimination between the homozygous and heterozygous mutations was also established by comparing the peak currents of the guanine signals. Numerous factors affecting the hybridization and nonspecific binding events were optimized to detect down to 51.14 fmol/mL target DNA. With the help of the appearance of the guanine signal, the yes/no system is established for the electrochemical detection of allele-specific mutation on factor V for the first time. Features of this protocol are discussed and optimized. --- paper_title: Recovery of developmentally defined gene sets from high-density cDNA macroarrays. paper_content: New technologies for isolating differentially expressed genes from large arrayed cDNA libraries are reported. These methods can be used to identify genes that lie downstream of developmentally important transcription factors and genes that are expressed in specific tissues, processes, or stages of embryonic development. Though developed for the study of gene expression during the early embryogenesis of the sea urchin Strongylocentrotus purpuratus, these technologies can be applied generally. Hybridization parameters were determined for the reaction of complex cDNA probes to cDNA libraries carried on six nylon filters, each containing duplicate spots from 18,432 bacterial clones (macroarrays). These libraries are of sufficient size to include nearly all genes expressed in the embryo. The screening strategy we have devised is designed to overcome inherent sensitivity limitations of macroarray hybridization and thus to isolate differentially expressed genes that are represented only by low-prevalence mRNAs. To this end, we have developed improved methods for the amplification of cDNA from small amounts of tissue (as little as approximately 300 sea urchin embryos, or 2 x 10(5) cells, or about 10 ng of mRNA) and for the differential enhancement of probe sequence concentration by subtractive hybridization. Quantitative analysis of macroarray hybridization shows that these probes now suffice for detection of differentially expressed mRNAs down to a level below five molecules per average embryo cell. ---
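To make the genotype-discrimination logic described in the genosensor entry above concrete, here is a minimal, hypothetical Python sketch: it assumes two capture probes (wild-type and mutant) whose guanine oxidation peak currents are compared against an illustrative detection threshold. The function name, the 5 nA threshold and the current units are editorial assumptions for illustration only and are not taken from the cited paper.

```python
def call_factor_v_genotype(wt_peak_nA: float, mut_peak_nA: float,
                           threshold_nA: float = 5.0) -> str:
    """Classify a sample as wild-type, heterozygous or homozygous mutant
    from the guanine oxidation peak currents measured after hybridising
    the sample to wild-type and mutant capture probes.

    The 5 nA threshold is purely illustrative; in practice it would be
    calibrated against blank/control electrodes.
    """
    wt_positive = wt_peak_nA >= threshold_nA    # hybridisation to the WT probe
    mut_positive = mut_peak_nA >= threshold_nA  # hybridisation to the mutant probe

    if wt_positive and mut_positive:
        return "heterozygous (one wild-type and one Leiden allele)"
    if mut_positive:
        return "homozygous mutant (factor V Leiden)"
    if wt_positive:
        return "homozygous wild-type"
    return "no call (signals below detection limit)"


if __name__ == "__main__":
    # Example: strong signal on both probes -> heterozygous carrier.
    print(call_factor_v_genotype(wt_peak_nA=12.3, mut_peak_nA=11.8))
```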
Title: Overview of Electrochemical DNA Biosensors: New Approaches to Detect the Expression of Life
Section 1: Introduction
Description 1: Introduce the background and significance of DNA biosensors, comparing them with DNA microarrays, and specify the focus of the paper.
Section 2: Conventional Microarrays
Description 2: Describe the basics of conventional microarrays, including their use, process, and limitations, as a benchmark for DNA biosensors.
Section 3: DNA Biosensors
Description 3: Outline the components of biosensors and different methods used for signal transduction, specifically for DNA biosensors.
Section 4: Electrochemical/Electrical DNA Biosensors
Description 4: Explain the development and methods for electrical and electrochemical DNA biosensors, highlighting their advantages and methodologies.
Section 5: Nano-Objects for the Electrochemical Biosensors
Description 5: Discuss the impact of nanomaterials on electrochemical biosensors, including various nanomaterials used and their benefits.
Section 6: Label-Free Electrochemical DNA Detection
Description 6: Detail the methods and advantages of label-free detection in electrochemical DNA biosensors and compare these to other detection methods.
Section 7: Indirect Electrochemical DNA Detection
Description 7: Describe the indirect methods for electrochemical DNA detection, including the use of mediators and their applications.
Section 8: CombiMatrix Chip: A High Throughput DNA Sensor
Description 8: Introduce and explain the CombiMatrix 12K ElectraSense microarray, its functionality, and its applications in genomic testing.
Section 9: Charge Transport by DNA
Description 9: Provide an overview of DNA-mediated charge transport as an alternative approach for DNA detection in electrochemical sensors.
Section 10: Conclusions
Description 10: Summarize the potential and future directions of DNA biosensors, emphasizing the advancements in electrochemical detection and the role of nanotechnology.
A Survey on Wireless Body Area Networks for eHealthcare Systems in Residential Environments
8
--- paper_title: Early diagnosis of Parkinson's disease: recommendations from diagnostic clinical guidelines. paper_content: Therapeutic options for Parkinson's disease (PD) are currently limited to symptomatic agents. Levodopa is the most efficacious treatment; however, higher doses and long-term use are associated with adverse effects such as motor fluctuations and dyskinesia. Early treatment of PD with other agents such as dopamine agonists and monoamine oxidase type B inhibitors can provide symptomatic benefit and delay initiation of levodopa therapy. Early treatment of PD is contingent upon early and accurate diagnosis of the disease, which can be challenging because there are no biomarkers or neuroimaging or other clinical tests available to confirm the diagnosis. PD diagnosis is currently based on the presence or absence of various clinical features and the experience of the treating physician. A definitive diagnosis can be made only after autopsy. Moreover, the signs and symptoms present in early PD can resemble those of a number of other movement disorders, particularly other forms of parkinsonism, such as multiple system atrophy, drug-induced parkinsonism, and vascular parkinsonism, as well as diffuse Lewy body disease and essential tremor. Nevertheless, diagnosis of PD based on clinical features and response to antiparkinsonian medication can be achieved with a fairly high level of accuracy, particularly when made by a physician specializing in movement disorders. This article reviews and summarizes published recommendations for the clinical diagnosis of PD. --- paper_title: A Survey on Wireless Body Area Networks paper_content: The increasing use of wireless networks and the constant miniaturization of electrical devices has empowered the development of Wireless Body Area Networks (WBANs). In these networks various sensors are attached on clothing or on the body or even implanted under the skin. The wireless nature of the network and the wide variety of sensors offer numerous new, practical and innovative applications to improve health care and the Quality of Life. The sensors of a WBAN measure for example the heartbeat, the body temperature or record a prolonged electrocardiogram. Using a WBAN, the patient experiences a greater physical mobility and is no longer compelled to stay in the hospital. This paper offers a survey of the concept of Wireless Body Area Networks. First, we focus on some applications with special interest in patient monitoring. Then the communication in a WBAN and its positioning between the different technologies is discussed. An overview of the current research on the physical layer, existing MAC and network protocols is given. Further, cross layer and quality of service is discussed. As WBANs are placed on the human body and often transport private data, security is also considered. An overview of current and past projects is given. Finally, the open research issues and challenges are pointed out. --- paper_title: A Comprehensive Overview of Wireless Body Area Networks WBAN paper_content: In recent years, the wireless body area network WBAN has emerged as a new technology for e-healthcare applications. The WBANs promise to revolutionize health monitoring. However, this technology remains in the first stages and much research is underway. Designers of such systems face a number of challenging tasks, as they need to address conflicting requirements. This includes managing the network, the data, while maximizing the autonomy of each network node. 
Reducing the consumption of a node, the management of network resources and security insurance are therefore major challenges. This paper presents a survey of body area networks including the WBANs challenges and -architecture, the most important body sensor devices, as well as sensor board hardware and platforms. Further, various applications of WBANs in the medical field are discussed, as well as wireless communications standards and technologies. The newest researches related to WBANs at physical and MAC layers are presented. Finally the paper identifies data security and privacy in WBANs as well as open research issues. --- paper_title: Body Area Networks: A Survey paper_content: Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications. --- paper_title: A Review on Telemedicine-Based WBAN Framework for Patient Monitoring paper_content: Abstract Objective: In this article, we describe the important aspects like major characteristics, research issues, and challenges with body area sensor networks in telemedicine systems for patient monitoring in different scenarios. Present and emerging developments in communications integrated with the developments in microelectronics and embedded system technologies will have a dramatic impact on future patient monitoring and health information delivery systems. The important challenges are bandwidth limitations, power consumption, and skin or tissue protection. Materials and Methods: This article presents a detailed survey on wireless body area networks (WBANs). Results and Conclusions: We have designed the framework for integrating body area networks on telemedicine systems. Recent trends, overall WBAN-telemedicine framework, and future research scope have also been addressed in this article. --- paper_title: Review: Health Care Utilization and Costs of Elderly Persons With Multiple Chronic Conditions paper_content: This systematic literature review identified and summarized 35 studies that investigated the relationship between multiple chronic conditions (MCCs) and health care utilization outcomes (i.e. physician use, hospital use, medication use) and health care cost outcomes (medication costs, out-of-pocket costs, total health care costs) for elderly general populations. 
Although synthesis of studies was complicated because of ambiguous definitions and measurements of MCCs, and because of the multitude of outcomes investigated, almost all studies observed a positive association of MCCs and use/costs, many of which found that use/costs significantly increased with each additional condition. Several studies indicate a curvilinear, near exponential relationship between MCCs and costs. The rising prevalence, substantial costs, and the fear that current care arrangements may be inappropriate for many patients with MCCs, bring about a multitude of implications for research and policy, of which the most important are presented and discussed. --- paper_title: Early Diagnosis of Alzheimer's Disease: Clinical and Economic Benefits paper_content: An estimated four million individuals in the United States have Alzheimer's disease (AD). This number is expected to more than triple by mid-century. Primary care physicians have a key role in evaluating older patients for early signs of dementia and in initiating treatment that can significantly retard its progression over the maximum period of time. That role and its challenges will inevitably grow along with the expected increase in the population aged 65 and older. The tendency for physicians to dismiss memory complaints as normal aging must be replaced by awareness of the need to assess and possibly intervene. Early intervention is the optimal strategy, not only because the patient's level of function will be preserved for a longer period, but also because community-dwelling patients with AD incur less societal cost than those who require long-term institutional placement. Institutionalization contributes heavily to the annual cost of care for AD in the United States, which is estimated to be $100 billion annually. --- paper_title: An Ambient Assisted Living System for Telemedicine with Detection of Symptoms paper_content: Elderly people have a high risk of health problems. Hence, we propose an architecture for Ambient Assisted Living (AAL) that supports pre-hospital health emergencies, remote monitoring of patients with chronic conditions and medical collaboration through sharing of health-related information resources (using the European electronic health records CEN/ISO EN13606). Furthermore, it is going to use medical data from vital signs for, on the one hand, the detection of symptoms using a simple rule system (e.g. fever), and on the other hand, the prediction of illness using chronobiology algorithms (e.g. prediction of myocardial infarction eight days before). So this architecture provides a great variety of communication interfaces to get vital signs of patients from a heterogeneous set of sources, as well as it supports the more important technologies for Home Automation. Therefore, we can combine security, comfort and ambient intelligence with a telemedicine solution, thereby, improving the quality of life in elderly people. --- paper_title: The global burden of chronic diseases: overcoming impediments to prevention and control. paper_content: Chronic diseases are the largest cause of death in the world. In 2002, the leading chronic diseases--cardiovascular disease, cancer, chronic respiratory disease, and diabetes--caused 29 million deaths worldwide. Despite growing evidence of epidemiological and economic impact, the global response to the problem remains inadequate. 
Stakeholders include governments, the World Health Organization and other United Nations bodies, academic and research groups, nongovernmental organizations, and the private sector. Lack of financial support retards capacity development for prevention, treatment, and research in most developing countries. Reasons for this include that up-to-date evidence related to the nature of the burden of chronic diseases is not in the hands of decision makers and strong beliefs persist that chronic diseases afflict only the affluent and the elderly, that they arise solely from freely acquired risks, and that their control is ineffective and too expensive and should wait until infectious diseases are addressed. The influence of global economic factors on chronic disease risks impedes progress, as does the orientation of health systems toward acute care. We identify 3 policy levers to address these impediments elevating chronic diseases on the health agenda of key policymakers, providing them with better evidence about risk factor control, and persuading them of the need for health systems change. A more concerted, strategic, and multisectoral policy approach, underpinned by solid research, is essential to help reverse the negative trends in the global incidence of chronic disease. --- paper_title: Early detection of CKD: the benefits, limitations and effects on prognosis paper_content: The past decade has seen an increasing focus on chronic kidney disease (CKD) and its attendant complications, which has resulted in improved understanding of their impact on health-care resources. The early detection of CKD has been facilitated by the implementation of routine reporting of estimated glomerular filtration rates (eGFRs) and by education of primary care physicians on the implications of detecting a decreased eGFR with respect to patient safety as well as to cardiovascular and renal outcomes. The goals of early CKD detection are to prevent CKD progression and associated complications, thus improving patient outcomes and reducing the impact of CKD on health-care resources. This Review examines the benefits of the early detection of CKD, and describes the limitations of current knowledge with respect to screening, early detection and treatment, as well as the unintended consequences of detection. In addition, this article highlights what is currently known about cardiovascular and renal outcomes and the effects of intervention in patients with CKD. --- paper_title: Effects of exercise and diet on chronic disease paper_content: Currently, modern chronic diseases, including cardiovascular diseases, Type 2 diabetes, metabolic syndrome, and cancer, are the leading killers in Westernized society and are increasing rampantly in developing nations. In fact, obesity, diabetes, and hypertension are now even commonplace in children. Clearly, however, there is a solution to this epidemic of metabolic disease that is inundating today’s societies worldwide: exercise and diet. Overwhelming evidence from a variety of sources, including epidemiological, prospective cohort, and intervention studies, links most chronic diseases seen in the world today to physical inactivity and inappropriate diet consumption. The purpose of this review is to 1 ) discuss the effects of exercise and diet in the prevention of chronic disease, 2 ) highlight the effects of lifestyle modification for both mitigating disease progression and reversing existing disease, and 3 ) suggest potential mechanisms for beneficial effects. 
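The ambient-assisted-living entry a little earlier mentions detecting symptoms from vital signs with a simple rule system (e.g., fever). The following sketch shows what such rules might look like; the function name and the thresholds (38 °C for fever, 100 bpm for resting tachycardia, 92% SpO2) are common rules of thumb used here only as assumptions for illustration and are not taken from that system.

```python
from typing import Dict, List


def detect_symptoms(vitals: Dict[str, float]) -> List[str]:
    """Very small rule system over a single vital-sign reading.

    Expected keys: 'temperature_c', 'heart_rate_bpm', 'spo2_percent'.
    Thresholds are illustrative defaults, not clinical guidance.
    """
    alerts: List[str] = []
    if vitals.get("temperature_c", 36.8) >= 38.0:
        alerts.append("fever")
    if vitals.get("heart_rate_bpm", 70) >= 100:
        alerts.append("resting tachycardia")
    if vitals.get("spo2_percent", 98) < 92:
        alerts.append("low blood oxygen saturation")
    return alerts


if __name__ == "__main__":
    reading = {"temperature_c": 38.4, "heart_rate_bpm": 104, "spo2_percent": 96}
    print(detect_symptoms(reading))  # ['fever', 'resting tachycardia']
```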
--- paper_title: A Comprehensive Survey of Wireless Body Area Networks paper_content: Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of a Wireless Body Area Network (WBAN). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive lightweight wireless sensor nodes that monitor the human body functions and the surrounding environment. In addition, it supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed. A comprehensive study of the proposed technologies for WBAN at Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. Finally, numerous WBAN applications are highlighted. --- paper_title: The Growing Burden of Chronic Disease in America paper_content: a In 2000, approximately 125 million Americans (45% of the population) had chronic conditions and 61 million (21% of the population) had multiple chronic conditions. The number of people with chronic conditions is projected to increase steadily for the next 30 years. While current health care financing and delivery systems are designed primarily to treat acute conditions, 78% of health spending is devoted to people with chronic conditions. Quality medical care for people with chronic conditions requires a new orientation toward prevention of chronic disease and provision of ongoing care and care manage- ment to maintain their health status and functioning. Specific focus should be applied to people with multiple chronic conditions. --- paper_title: Improving transmission reliability of low-power medium access control protocols using average diversity combining paper_content: Embedded computer systems equipped with wireless communication transceivers are nowadays used in a vast number of application scenarios. Energy consumption is important in many of these scenarios, as systems are battery operated and long maintenance-free operation is required. To achieve this goal, embedded systems employ low-power communication transceivers and protocols. However, currently used protocols cannot operate efficiently when communication channels are highly erroneous. In this study, we show how average diversity combining (ADC) can be used in state-of-the-art low-power communication protocols. This novel approach improves transmission reliability and in consequence energy consumption and transmission latency in the presence of erroneous channels. Using a testbed, we show that highly erroneous channels are indeed a common occurrence in situations, where low-power systems are used and we demonstrate that ADC improves low-power communication dramatically. --- paper_title: Reliability comparison of transmit/receive diversity and error control coding in low-power medium access control protocols paper_content: Low-power medium access control (MAC) protocols used for communication of energy constraint wireless embedded devices do not cope well with situations where transmission channels are highly erroneous. Existing MAC protocols discard corrupted messages which lead to costly retransmissions. 
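As a brief aside on why recovering corrupted packets matters here (rather than discarding them and retransmitting), the sketch below compares the delivery probability of an uncoded packet with that of a packet protected by a block code able to correct up to t bit errors, assuming independent bit errors with probability p. The packet length, code strength and bit error rate are arbitrary example values, not figures from the cited study.

```python
from math import comb


def uncoded_delivery_prob(n_bits: int, p: float) -> float:
    """Probability that an n-bit packet arrives with no bit errors."""
    return (1.0 - p) ** n_bits


def coded_delivery_prob(n_bits: int, p: float, t: int) -> float:
    """Probability that at most t bit errors occur, i.e. the packet is
    recoverable by a code correcting up to t errors (binomial error model)."""
    return sum(comb(n_bits, k) * p**k * (1.0 - p) ** (n_bits - k)
               for k in range(t + 1))


if __name__ == "__main__":
    n, p = 1024, 1e-3          # example: 128-byte packet, bit error rate 10^-3
    print(f"uncoded : {uncoded_delivery_prob(n, p):.3f}")   # ~0.36
    print(f"t=5 FEC : {coded_delivery_prob(n, p, 5):.3f}")  # ~0.999
```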
To improve transmission performance, it is possible to include an error correction scheme and transmit/receive diversity. It is possible to add redundant information to transmitted packets in order to recover data from corrupted packets. It is also possible to make use of transmit/receive diversity via multiple antennas to improve error resiliency of transmissions. Both schemes may be used in conjunction to further improve the performance. In this study, the authors show how an error correction scheme and transmit/receive diversity can be integrated in low-power MAC protocols. Furthermore, the authors investigate the achievable performance gains of both methods. This is important as both methods have associated costs (processing requirements; additional antennas and power) and for a given communication situation it must be decided which methods should be employed. The authors’ results show that, in many practical situations, error control coding outperforms transmission diversity; however, if very high reliability is required, it is useful to employ both schemes together. --- paper_title: Antennas and propagation for body centric communications paper_content: Body centric wireless communication is now accepted as an important part of 4th generation mobile communications systems. The design of antennas and the characterisation of radiowave propagation on the body are now being considered, by many groups around the world. The paper gives a brief overview of the current position and reports on some recent advances in the topic. --- paper_title: A Survey of Communications and Networking Technologies for Energy Management in Buildings and Home Automation paper_content: With the exploding power consumption in private households and increasing environmental and regulatory restraints, the need to improve the overall efficiency of electrical networks has never been greater. That being said, the most efficient way to minimize the power consumption is by voluntary mitigation of home electric energy consumption, based on energy-awareness and automatic or manual reduction of standby power of idling home appliances. Deploying bi-directional smart meters and home energy management (HEM) agents that provision real-time usage monitoring and remote control, will enable HEM in “smart households.” Furthermore, the traditionally inelastic demand curve has began to change, and these emerging HEM technologies enable consumers (industrial to residential) to respond to the energy market behavior to reduce their consumption at peak prices, to supply reserves on a as-needed basis, and to reduce demand on the electric grid. Because the development of smart grid-related activities has resulted in an increased interest in demand response (DR) and demand side management (DSM) programs, this paper presents some popular DR and DSM initiatives that include planning, implementation and evaluation techniques for reducing energy consumption and peak electricity demand. The paper then focuses on reviewing and distinguishing the various state-of-the-art HEM control and networking technologies, and outlines directions for promoting the shift towards a society with low energy demand and low greenhouse gas emissions. The paper also surveys the existing software and hardware tools, platforms, and test beds for evaluating the performance of the information and communications technologies that are at the core of future smart grids. 
It is envisioned that this paper will inspire future research and design efforts in developing standardized and user-friendly smart energy monitoring systems that are suitable for wide scale deployment in homes. --- paper_title: Antennas and propagation for on-body communication systems paper_content: Studies on the narrowband characterization of the on-body propagation channel show variations of up to 40dB in link loss for different antenna position and body posture. Channels on the body trunk show small variance of the path gain probability density function, whilst those involving the arms or legs a much greater variance. For most cases the monopole antenna located with the ground plane parallel to the body surface give least loss. For ultra wideband channels, path delay is highest for non line of sight links around the body and delay spreads are generally less than 10nsec. Link modeling using a free space assumption gives results within 10dB, but for more accurate modeling full body modeling is required and an example is given using locally distorted non-orthogonal FD-TD. --- paper_title: A Survey of Recent Developments in Home M2M Networks paper_content: Recent years have witnessed the emergence of machine-to-machine (M2M) networks as an efficient means for providing automated communications among distributed devices. Automated M2M communications can offset the overhead costs of conventional operations, thus promoting their wider adoption in fixed and mobile platforms equipped with embedded processors and sensors/actuators. In this paper, we survey M2M technologies for applications such as healthcare, energy management and entertainment. In particular, we examine the typical architectures of home M2M networks and discuss the performance tradeoffs in existing designs. Our investigation covers quality of service, energy efficiency and security issues. Moreover, we review existing home networking projects to better understand the real-world applicability of these systems. This survey contributes to better understanding of the challenges in existing M2M networks and further shed new light on future research directions. --- paper_title: A Power Line Communication Network Infrastructure for The Smart Home paper_content: Low voltage electrical wiring has largely been dismissed as too noisy and unpredictable to support high-speed communication signals. Advances in communication and modulation methodologies as well as in adaptive digital signal processing and error detection and correction have spawned novel protocols capable of supporting power line communication networks at speeds comparable to wired LANs. We motivate the use of power line LANs as a basic infrastructure for building integrated smart homes, wherein information appliances ranging from simple control or monitoring devices to multimedia entertainment systems are seamlessly interconnected by the very wires that provide them electricity. By simulation and actual measurements using "reference design" prototype commercial powerline products, we show that the HomePlug MAC and PHY layers can guarantee QoS for real-time communications, supporting delay-sensitive data streams for smart home applications. --- paper_title: VLSI Circuits for Biomedical Applications paper_content: VLSI (very large scale integration) is the process of creating integrated circuits by combining thousands of transistor based circuits into a single chip. 
Written by top-notch international experts in industry and academia, this groundbreaking resource presents a comprehensive, state-of-the-art overview of VLSI circuit design for a wide range of applications in biology and medicine.Supported with over 280 illustrations and over 160 equations, the book offers cutting-edge guidance on designing integrated circuits for wireless biosensing, body implants, biosensing interfaces, and molecular biology. Engineers discover innovative design techniques and novel materials to help them achieve higher levels circuit and system performance. This invaluable volume is essential reading for professionals and graduate students with a serious interest in circuit design and future biomedical technology. --- paper_title: Energy efficient communication in body area networks using collaborative communication in Rayleigh fading channel paper_content: Due to resource limited nature of nodes in body area networks (BAN), it is often very difficult to replace or recharge its power source. To prolong the network's life, only way out is energy efficient communication system. In this article an energy efficient communication system based on collaborative communication is proposed for BAN. Signals from the implanted nodes are received out-of-phase at the base station with no line-of-sight through an AWGN channel. Mathematical model derived here is based on three figures of merit i.e, received power, bit error rate and energy consumption. Analysis of the proposed model and Monte Carlo simulation show that the gain in received power increases as the number of collaborative nodes increase whereas BER is directly related to SNR $$(E_b/N_0)$$(Eb/N0). To evaluate energy consumption of the proposed system, it is compared with single-input-single-output (SISO) system. In this comparison it has been found that SISO performs well at short distances but collaborative communication outperforms SISO in case of long distances. It is also found that collaborative communication requires "N $$\times $$× Transmitted power", less transmission power in comparison to SISO systems. It is observed that collaborative communication achieve energy saving very close to 99 %. On the basis of these results it is safe to recommend collaborative communication for resource limited BAN. --- paper_title: Towards the fast and robust optimal design of Wireless Body Area Networks paper_content: Wireless body area networks are wireless sensor networks whose adoption has recently emerged and spread in important healthcare applications, such as the remote monitoring of health conditions of patients. A major issue associated with the deployment of such networks is represented by energy consumption: in general, the batteries of the sensors cannot be easily replaced and recharged, so containing the usage of energy by a rational design of the network and of the routing is crucial. Another issue is represented by traffic uncertainty: body sensors may produce data at a variable rate that is not exactly known in advance, for example because the generation of data is event-driven. Neglecting traffic uncertainty may lead to wrong design and routing decisions, which may compromise the functionality of the network and have very bad effects on the health of the patients. In order to address these issues, in this work we propose the first robust optimization model for jointly optimizing the topology and the routing in body area networks under traffic uncertainty. 
Since the problem may result challenging even for a state-of-the-art optimization solver, we propose an original optimization algorithm that exploits suitable linear relaxations to guide a randomized fixing of the variables, supported by an exact large variable neighborhood search. Experiments on realistic instances indicate that our algorithm performs better than a state-of-the-art solver, fast producing solutions associated with improved optimality gaps. --- paper_title: Energy efficient cooperative transmission in single-relay UWB based body area networks paper_content: Energy efficiency is one of the most critical parameters in ultra-wideband (UWB) based wireless body area networks (WBANs). In this paper, the energy efficiency optimization problem is investigated for cooperative transmission with a single relay in UWB based WBANs. Two practical onbody transmission scenarios are taken into account, namely, along-torso scenario and around-torso scenario. With a proposed single-relay WBAN model, a joint optimal scheme for the energy efficiency optimization is developed, which not only derives the optimal power allocation but also seeks the corresponding optimal relay location for each scenario. Simulation results show that the utilization of a relay node is necessary for the energy efficient transmission in particular for the around-torso scenario and the relay location is an important parameter. With the joint optimal relay location and power allocation, the proposed scheme is able to achieve up to 30 times improvement compared to direct transmission in terms of the energy efficiency when the battery of the sensor node is very limited, which indicates that it is an effective way to prolong the network lifetime in WBANs. --- paper_title: Energy-Efficient Resource Allocation with QoS Support in Wireless Body Area Networks paper_content: Wireless Body Area Network (WBAN) has become a promising type of networks to provide applications such as real-time health monitoring and ubiquitous e-Health services. One challenge in the design of WBAN is that energy efficiency needs to be ensured to increase the network lifetime in such a resourceconstrained network. Another critical challenge for WBAN is that quality of service (QoS) requirements, including packet loss rate (PLR), throughput and delay, should be guaranteed even under the highly dynamic environment due to changing of body postures. In this paper, we design a unified framework of energy efficient resource allocation scheme for WBAN, in which both constraints of QoS metrics and the characteristics of dynamic links are considered. A transmission rate allocation policy (TRAP) is proposed to carefully adjust the transmission rate at each sensor such that more strict PLR requirement could be achieved even when the link quality is very poor. A QoS optimization problem is then formulated to optimize the transmission power and allocated time slots for each sensor, which minimizes energy consumption subject to the QoS constraints. Numerical results demonstrate the effectiveness of the proposed transmission rate allocation policy and the resource allocation scheme. --- paper_title: An Energy-Efficient Hybrid System for Wireless Body Area Network Applications paper_content: Wireless Body Area Networks (WBANs) consist of a number of miniaturized wearable or implanted sensor nodes that are employed to monitor vital parameters of a patient over long duration of time. 
These sensors capture physiological data and wirelessly transfer the collected data to a local base station in order to be further processed. Almost all of these body sensors are expected to have low data-rate and to run on a battery. Since recharging or replacing the battery is not a simple task specifically in the case of implanted devices such as pacemakers, extending the lifetime of sensor nodes in WBANs is one of the greatest challenges. To achieve this goal, WBAN systems employ low-power communication transceivers and low duty cycle Medium Access Control (MAC) protocols. Although, currently used MAC protocols are able to reduce the energy consumption of devices for transmission and reception, yet they are still unable to offer an ultimate energy self-sustaining solution for low-power MAC protocols. This paper proposes to utilize energy harvesting technologies in low-power MAC protocols. This novel approach can further reduce energy consumption of devices in WBAN systems. --- paper_title: Power reduction by varying sampling rate paper_content: The rate at which a digital signal processing (DSP) system operates depends on the highest frequency component in the input signal. DSP applications must sample their inputs at a frequency at least twice the highest frequency in the input signal (i.e., the Nyquist rate) to accurately reproduce the signal. Typically a fixed sampling rate, guaranteed to always be high enough, is used. However, an input signal may have periods when the signal has little high frequency content as well as periods of silence. When the input signal has no perceptible high frequency components, the system can reduce its sampling rate, thereby reducing the number of samples processed per second, allowing the CPU speed to be scaled down without reducing output quality. This paper describes how to reduce power consumption in DSP applications by varying the amount of processing based on the input signal, and reports results of experiments with a prototype implementation. Experiments with a prototype show that when the system performs little processing, the added overhead of the variable sampling rate technique increased power consumption. When the system performs more processing, 18 FIR filters per frame, the power consumption was reduced to 40 % of the power required for a static sampling rate, while not reducing sound quality. --- paper_title: Co-LAEEBA: Cooperative link aware and energy efficient protocol for wireless body area networks paper_content: Proposed model gives energy efficient communications for human body in WBAN.Link aware communications.Cooperative routing decreases path loss.Cooperative routing allows more frequent data gathering. Performance evaluation of Wireless Body Area Networks (WBANs) is primarily conducted in terms of simulation based studies. From this perspective, recent research has focused on channel modeling, and energy conservation at Network/MAC layer. Most of these studies ignore collaborative learning and path loss. In this paper, we present Link-Aware and Energy Efficient protocol for wireless Body Area networks (LAEEBA) and Cooperative Link-Aware and Energy Efficient protocol for wireless Body Area networks (Co-LAEEBA) routing schemes. Unlike existing schemes, the proposed work factors in collaborative learning and path loss. Cost functions are introduced to learn and select the most feasible route from a given node to sink while sharing each others distance and residual energy information. 
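As an illustration of the kind of cost function just mentioned, the sketch below picks a forwarder by trading off distance to the sink against residual energy. The specific form cost = distance / residual energy is a common choice in WBAN routing papers and is used here as an assumption, not as the exact LAEEBA/Co-LAEEBA formula; all names, positions and energy values are hypothetical.

```python
import math
from typing import Dict, Tuple

Node = str
Position = Tuple[float, float]


def select_forwarder(candidates: Dict[Node, Position],
                     residual_energy_j: Dict[Node, float],
                     sink: Position) -> Node:
    """Pick the neighbour with the lowest cost = distance-to-sink / residual
    energy, so nodes that are closer to the sink and better charged win.
    (Illustrative cost function, assumed for this sketch.)"""
    def cost(node: Node) -> float:
        d = math.dist(candidates[node], sink)
        return d / max(residual_energy_j[node], 1e-9)
    return min(candidates, key=cost)


if __name__ == "__main__":
    sink = (0.0, 0.0)                       # e.g. sink worn at the waist
    neighbours = {"wrist": (0.6, 0.2), "chest": (0.2, 0.3)}
    energy = {"wrist": 0.8, "chest": 0.5}   # joules remaining (example values)
    print(select_forwarder(neighbours, energy, sink))  # 'chest' in this example
```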
Simulation results show improved performance of the proposed protocols in comparison to the selected existing ones in terms of the chosen performance metrics. --- paper_title: Trading off prediction accuracy and power consumption for context-aware wearable computing paper_content: Context-aware mobile computing requires wearable sensors to acquire information about the user. Continuous sensing rapidly depletes the -wearable system's energy, which is a critically constrained resource. In this paper, we analyze the trade-off between power consumption and prediction accuracy of context classifiers working on dual-axis accelerometer data collected from the eWaich sensing and notification platform. We improve power consumption techniques by providing competitive classification performance even in the low frequency region of 1-10 Hz and for the highly erratic wrist based sensing location. Furthermore, we propose and analyze a collection of selective sampling strategies in order to reduce the number of required sensor readings and the computation cycles even further. Our results indicate that optimized sampling schemes can increase the deployment lifetime of a wearable computing platform by a factor of four without a significant loss in prediction accuracy. --- paper_title: UWB on-body radio channel modeling using ray theory and subband FDTD method paper_content: This paper presents the ultra-wideband on-body radio channel modeling using a subband finite-difference time-domain (FDTD) method and a model combining the uniform geometrical theory of diffraction (UTD) and ray tracing (RT). In the subband FDTD model, the frequency band (3-9 GHz) is uniformly divided into 12 subbands in order to take into account the material frequency dispersion. Each subband is simulated separately and then a combination technique is used to recover all simulations at the receiver. In the UTD/RT model, the RT technique is used to find the surface diffracted ray path, while the UTD is applied for calculating the received signal. Respective modeling results from two- and three-dimensional subband FDTD and UTD/RT models indicate that antenna patterns have significant impacts on the on-body radio channel. The effect of different antenna types on on-body radio channels is also investigated through the UTD/RT approach. --- paper_title: Dynamic Channel Modeling for Multi-Sensor Body Area Networks paper_content: A channel model for time-variant multi-link wireless body area networks (WBANs) is proposed in this paper, based on an extensive measurement campaign using a multi-port channel sounder. A total of 12 nodes were placed on the body to measure the multi-link channel within the created WBAN. The resulting empirical model takes into account the received power, the link fading statistics, and the link auto- and cross-correlations. The distance dependence of the received power is investigated, and the link fading is modeled by a log-normal distribution. The link autocorrelation function is divided into a decaying component and a sinusoidal component to account for the periodical movement of the limbs caused by walking. The cross-correlation between different links is also shown to be high for a number of specific on-body links. Finally, the model is validated by considering several extraction-independent validation metrics: multi-hop link capacity, level crossing rate (LCR) and average fade duration (AFD). 
The capacity aims at validating the path-loss and fading model, while the LCR and AFD aim at validating the temporal behavior. For all validation metrics, the model is shown to satisfactorily reproduce the measurements, whereas its limits are pointed out. --- paper_title: Dynamic Channel Modeling at 2.4 GHz for On-Body Area Networks paper_content: In wireless body area networks, on-body radio propagation channels are typically time-varying, because of the frequent body movements. The dynamic local body scattering dominates the temporal and spatial properties of the on-body channels. The influence varies largely depending on the distribution of the channels and the modes of body movements. In this paper, we present some major achievements on the dynamic onbody channel modeling at 2.4 GHz under the framework of the COST 2100 action. Results of two complementary measurement campaigns are presented: a geometry-based one on a single subject, and a scenario-based one covering different subjects. Statistical models including the Doppler spectrum and the spatial correlation of on-body channels are presented. An analytical model is also introduced to offer a time-space description of the on-body channels, which is validated by the geometry-based measurement campaign. --- paper_title: A Power Line Communication Network Infrastructure for The Smart Home paper_content: Low voltage electrical wiring has largely been dismissed as too noisy and unpredictable to support high-speed communication signals. Advances in communication and modulation methodologies as well as in adaptive digital signal processing and error detection and correction have spawned novel protocols capable of supporting power line communication networks at speeds comparable to wired LANs. We motivate the use of power line LANs as a basic infrastructure for building integrated smart homes, wherein information appliances ranging from simple control or monitoring devices to multimedia entertainment systems are seamlessly interconnected by the very wires that provide them electricity. By simulation and actual measurements using "reference design" prototype commercial powerline products, we show that the HomePlug MAC and PHY layers can guarantee QoS for real-time communications, supporting delay-sensitive data streams for smart home applications. --- paper_title: A Review on Telemedicine-Based WBAN Framework for Patient Monitoring paper_content: Abstract Objective: In this article, we describe the important aspects like major characteristics, research issues, and challenges with body area sensor networks in telemedicine systems for patient monitoring in different scenarios. Present and emerging developments in communications integrated with the developments in microelectronics and embedded system technologies will have a dramatic impact on future patient monitoring and health information delivery systems. The important challenges are bandwidth limitations, power consumption, and skin or tissue protection. Materials and Methods: This article presents a detailed survey on wireless body area networks (WBANs). Results and Conclusions: We have designed the framework for integrating body area networks on telemedicine systems. Recent trends, overall WBAN-telemedicine framework, and future research scope have also been addressed in this article. 
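The two dynamic channel-modeling entries above describe distance-dependent received power with log-normally distributed link fading. A minimal sketch of such an on-body channel model is given below; the reference loss, path-loss exponent and shadowing standard deviation are placeholder values chosen for illustration, not the fitted parameters reported in those measurement campaigns.

```python
import math
import random


def onbody_path_loss_db(d_cm: float,
                        pl0_db: float = 40.0,   # loss at reference distance (assumed)
                        d0_cm: float = 10.0,    # reference distance
                        n_exp: float = 3.5,     # path-loss exponent (assumed)
                        sigma_db: float = 6.0   # log-normal shadowing std dev (assumed)
                        ) -> float:
    """Log-distance path loss with log-normal shadowing for an on-body link.

    PL(d) = PL(d0) + 10 * n * log10(d / d0) + X, with X ~ N(0, sigma^2), in dB.
    """
    shadowing = random.gauss(0.0, sigma_db)
    return pl0_db + 10.0 * n_exp * math.log10(d_cm / d0_cm) + shadowing


if __name__ == "__main__":
    random.seed(1)
    for d in (15, 30, 60):   # chest-to-wrist style distances in cm
        print(f"d = {d:3d} cm -> path loss ~ {onbody_path_loss_db(d):5.1f} dB")
```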
--- paper_title: A Review on Telemedicine-Based WBAN Framework for Patient Monitoring paper_content: Abstract Objective: In this article, we describe the important aspects like major characteristics, research issues, and challenges with body area sensor networks in telemedicine systems for patient monitoring in different scenarios. Present and emerging developments in communications integrated with the developments in microelectronics and embedded system technologies will have a dramatic impact on future patient monitoring and health information delivery systems. The important challenges are bandwidth limitations, power consumption, and skin or tissue protection. Materials and Methods: This article presents a detailed survey on wireless body area networks (WBANs). Results and Conclusions: We have designed the framework for integrating body area networks on telemedicine systems. Recent trends, overall WBAN-telemedicine framework, and future research scope have also been addressed in this article. --- paper_title: An Overview of IEEE 802.15.6 Standard paper_content: Wireless Body Area Networks (WBAN) has emerged as a key technology to provide real-time health monitoring of a patient and to diagnose and treat many life threatening diseases. WBAN operates in close vicinity to, on, or inside a human body and supports a variety of medical and non-medical applications. IEEE 802 has established a Task Group called IEEE 802.15.6 for the standardization of WBAN. The purpose of the group is to establish a communication standard optimized for low-power in-body/on-body nodes to serve a variety of medical and non-medical applications. This paper explains the most important features of the new IEEE 802.15.6 standard. The standard defines a Medium Access Control (MAC) layer supporting several Physical (PHY) layers. We briefly overview the PHY and MAC layers specifications together with the bandwidth efficiency of IEEE 802.15.6 standard. We also discuss the security paradigm of the standard. --- paper_title: Design and Application of RuBee-Based Telemedicine Data Acquisition System paper_content: Telemedicine can be defined as the delivery of health care and sharing of medical knowledge over a distance using telecommunication. This paper introduced the new technology of RuBee, RuBee fills the drawback of RFID tags which have no network and cannot be programmable, which has made tremendous progress for the development of the telemedicine. A brief introduction of the RuBee protocol, and analyzed the design of RuBee Router and its application in the Telemedicine System. Provide a snapshot of the applications of electronic patient record, emergency telemedicine and home monitoring etc. in wireless telemedicine systems. --- paper_title: A comparative study of short range wireless sensor network on high density networks paper_content: ZigBee, Wibree, Z-Wave, and RuBee are four protocol standards for short range wireless communications with low power consumption. From an application point of view, ZigBee is designed for reliable wirelessly networked monitoring and control networks, RuBee is proposed for high security applications and use in harsh environment, Wibree considered for sports and healthcare while Z-Wave is planned for residential control systems. In this paper, after an overview of the mentioned four short-range wireless protocols, we attempt to make a preliminary comparison of them and then specifically study their radio frequency, data coding, security etc. 
At last we have compared different protocols capabilities in high density network. --- paper_title: Link Technologies and BlackBerry Mobile Health (mHealth) Solutions: A Review paper_content: The number of wearable wireless sensors is expected to grow to 400 million by the year 2014, while the number of operational mobile subscribers has already passed the 5.2 billion mark in 2011. This growth results in an increasing number of mobile applications including: Machine-to-Machine (M2M) communications, Electronic-Health (eHealth), and Mobile-Health (mHealth). A number of emerging mobile applications that require 3G and 4G mobile networks for data transport relate to telemedicine, including establishing, maintaining, and transmitting health-related information, research, education, and training. This review paper takes a closer look at these applications, specifically with regard to the healthcare industry and their underlying link technologies. The authors believe that the BlackBerry platform and the associated infrastructure (i.e., BlackBerry Enterprise Server) is a logical and practical solution for eHealth, mHealth, sensor and M2M deployments, which are considered in this paper. --- paper_title: Survey of the DASH7 Alliance Protocol for 433 MHz Wireless Sensor Communication paper_content: 433 MHz is getting more attention for Machine-to-Machine communication. This paper presents the DASH7 Alliance Protocol, an active RFID alliance standard for 433 MHz wireless sensor communication based on the ISO/IEC 18000-7. First, the major differences of 433 MHz communication compared to more frequently used frequencies, such as 2.4 GHz and 868/920 MHz are explained. Subsequently, the general concepts of DASH7 Alliance Protocol are described, such as the BLAST networking topology and the different OSI layer implementations, in a top-down method. Basic DASH7 features such as the advertising protocol, ad-hoc synchronization and query based addressing are used to explain the different layers. Finally, the paper introduces a software stack implementation named OSS-7, which is an open source implementation of the DASH7 alliance protocol used for testing, rapid prototyping, and demonstrations. --- paper_title: DASH7 alliance protocol 1.0: Low-power, mid-range sensor and actuator communication paper_content: This paper presents the DASH7 Alliance Protocol 1.0. It is an industry alliance standard for wireless sensor and actuator communication using the unlicensed sub-1 GHz bands. The paper explains its historic relation to active RFID standards ISO 18000-7 for 433 MHz communication, the basic concepts and communication paradigms of the protocol. Since the protocol is a full OSI stack specification, the paper discusses the implementation of every OSI layer. --- paper_title: Wireless Network Standards for Building Automation paper_content: This chapter gives an overview over several existing wireless network standards with their main usage, used frequency band, modulation, transfer speed as well as features and limitations, based on their data sheets. The main intention of this chapter is an assistance to choose the best fitting protocol for a given application. Therefore some typical scenarios for applications are described and after a preselection based on the main application the wireless network protocols are compared to find the best protocol for each scenario. 
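One reason sub-GHz protocols such as DASH7 are attractive for the kind of links discussed above is the lower free-space path loss at 433 MHz compared with 2.4 GHz. The sketch below uses the standard Friis free-space formula to quantify the difference; real indoor and on-body losses will of course be higher and environment-dependent, and the 10 m distance is just an example.

```python
import math


def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20 * log10(4 * pi * d * f / c), in dB."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)


if __name__ == "__main__":
    d = 10.0  # metres, e.g. body-worn hub to a home gateway
    loss_433 = fspl_db(d, 433e6)
    loss_24g = fspl_db(d, 2.4e9)
    print(f"433 MHz : {loss_433:5.1f} dB")              # ~45.2 dB
    print(f"2.4 GHz : {loss_24g:5.1f} dB")              # ~60.1 dB
    print(f"difference: {loss_24g - loss_433:4.1f} dB") # ~14.9 dB in favour of 433 MHz
```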
--- paper_title: Link Technologies and BlackBerry Mobile Health (mHealth) Solutions: A Review paper_content: The number of wearable wireless sensors is expected to grow to 400 million by the year 2014, while the number of operational mobile subscribers has already passed the 5.2 billion mark in 2011. This growth results in an increasing number of mobile applications including: Machine-to-Machine (M2M) communications, Electronic-Health (eHealth), and Mobile-Health (mHealth). A number of emerging mobile applications that require 3G and 4G mobile networks for data transport relate to telemedicine, including establishing, maintaining, and transmitting health-related information, research, education, and training. This review paper takes a closer look at these applications, specifically with regard to the healthcare industry and their underlying link technologies. The authors believe that the BlackBerry platform and the associated infrastructure (i.e., BlackBerry Enterprise Server) is a logical and practical solution for eHealth, mHealth, sensor and M2M deployments, which are considered in this paper. --- paper_title: Mbps experimental acoustic through-tissue communications: MEAT-COMMS paper_content: Methods for digital, phase-coherent acoustic communication date to at least the work of Stojanjovic, et al [20], and the added robustness afforded by improved phase tracking and compensation of Johnson, et al [21]. This work explores the use of such methods for communications through tissue for potential biomedical applications, using the tremendous bandwidth available in commercial medical ultrasound transducers. While long-range ocean acoustic experiments have been at rates of under 100kbps, typically on the order of 1- 10kbps, data rates in excess of 120Mb/s have been achieved over cm-scale distances in ultrasonic testbeds [19]. This paper describes experimental transmission of digital communication signals through samples of real pork tissue and beef liver, achieving data rates of 20-30Mbps, demonstrating the possibility of real-time video-rate data transmission through tissue for inbody ultrasonic communications with implanted medical devices. --- paper_title: Signal Transmission by Galvanic Coupling Through the Human Body paper_content: Galvanic coupling is a promising approach for wireless intrabody data transmission between sensors. Using the human body as a transmission medium for electrical signals becomes a novel data communication technique in biomedical monitoring systems. In this paper, special attention is given to the coupling of the current into the human body. Safety requirements have to be fulfilled, and optimal signal coupling is of essence. Therefore, different electrodes are compared. A test system offers up to 1 mA contact current modulated in the frequency range of 10 kHz to 1 MHz. The injected current is up to 20 times below the maximum allowed contact current. Such a low-current approach enables data communication that is more energy saving than other wireless technologies. --- paper_title: Challenges and implications of using ultrasonic communications in intra-body area networks paper_content: Body area networks (BANs) promise to enable revolutionary biomedical applications by wirelessly interconnecting devices implanted or worn by humans. However, BAN wireless communications based on radio-frequency (RF) electromagnetic waves suffer from poor propagation of signals in body tissues, which leads to high levels of attenuation. 
In addition, in-body transmissions are constrained to be low-power to prevent overheating of tissues and consequent death of cells. To address the limitations of RF propagation in the human body, we propose a paradigm shift by exploring the use of ultrasonic waves as the physical medium to wirelessly interconnect in-body implanted devices. Acoustic waves are the transmission technology of choice for underwater communications, since they are known to propagate better than their RF counterpart in media composed mainly of water. Similarly, we envision that ultrasound (e.g., acoustic waves at non-audible frequencies) will provide support for communications in the human body, which is composed for 65% of water. In this paper, we first assess the feasibility of using ultrasonic communications in intra-body BANs, i.e., in-body networks where the devices are biomedical sensors that communicate with an actuator/gateway device located inside the body. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, bandwidth, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack. --- paper_title: Comparison of low-power wireless communication technologies for wearable health-monitoring applications paper_content: Health monitoring technologies such as Body Area Network (BAN) systems has gathered a lot of attention during the past few years. Largely encouraged by the rapid increase in the cost of healthcare services and driven by the latest technological advances in Micro-Electro-Mechanical Systems (MEMS) and wireless communications. BAN technology comprises of a network of body worn or implanted sensors that continuously capture and measure the vital parameters such as heart rate, blood pressure, glucose levels and movement. The collected data must be transferred to a local base station in order to be further processed. Thus, wireless connectivity plays a vital role in such systems. However, wireless connectivity comes at a cost of increased power usage, mainly due to the high energy consumption during data transmission. Unfortunately, battery-operated devices are unable to operate for ultra-long duration of time and are expected to be recharged or replaced once they run out of energy. This is not a simple task especially in the case of implanted devices such as pacemakers. Therefore, prolonging the network lifetime in BAN systems is one of the greatest challenges. In order to achieve this goal, BAN systems take advantage of low-power in-body and on-body/off-body wireless communication technologies. This paper compares some of the existing and emerging low-power communication protocols that can potentially be employed to support the rapid development and deployment of BAN systems. --- paper_title: A Survey on Intrabody Communications for Body Area Network Applications paper_content: The rapid increase in healthcare demand has seen novel developments in health monitoring technologies, such as the body area networks (BAN) paradigm. BAN technology envisions a network of continuously operating sensors, which measure critical physical and physiological parameters e.g., mobility, heart rate, and glucose levels. Wireless connectivity in BAN technology is key to its success as it grants portability and flexibility to the user. 
While radio frequency (RF) wireless technology has been successfully deployed in most BAN implementations, they consume a lot of battery power, are susceptible to electromagnetic interference and have security issues. Intrabody communication (IBC) is an alternative wireless communication technology which uses the human body as the signal propagation medium. IBC has characteristics that could naturally address the issues with RF for BAN technology. This survey examines the on-going research in this area and highlights IBC core fundamentals, current mathematical models of the human body, IBC transceiver designs, and the remaining research challenges to be addressed. IBC has exciting prospects for making BAN technologies more practical in the future. --- paper_title: Wireless sensor devices for animal tracking and control paper_content: This paper describes some new wireless sensor hardware ::: developed for pastoral and environmental applications. ::: From our early experiments with Mote hardware we ::: were inspired to develop our devices with improved radio ::: range, solar power capability, mechanical and electrical robustness, ::: and with unique combinations of sensors. Here we ::: describe the design and evolution of a small family of devices: ::: radio/processor board, a soil moisture sensor interface, ::: and a single board multi-sensor unit for animal tracking ::: experiments. --- paper_title: Performance comparison of frequency flopping and direct sequence spread spectrum systems in the 2.4 GHz range paper_content: The FHSS and DSSS technologies are described as well as different types of spread coding. DSSS systems can provide higher access data rates (up to 11 Mbit/s) than FHSS systems (up to 3 Mbit/s). More FHSS systems can operate simultaneously in the same area than DSSS systems. DSSS systems have higher interference susceptibility tolerance levels. However, if there is a strong broadband interferer in the operational spectrum, FHSS systems will suffer less due to frequency hopping. Assuming equal transmit powers, DSSS systems have lower power spectral density due to wider operating spectrum. Monte Carlo analysis results show that DSSS systems are more likely to cause interference than FHSS systems. The worst case interference levels are higher when the interferer is an FHSS system. It was shown that this case has a very low probability of occurring. --- paper_title: Ubiquitous WSN for Healthcare: Recent Advances and Future Prospects paper_content: Wireless sensor networks (WSNs) have witnessed rapid advancement in medical applications from real-time telemonitoring and computer-assisted rehabilitation to emergency response systems. In this paper, we present the state-of-the-art research from the ubiquity perspective, and discuss the insights as well as vision of future directions in WSN-based healthcare systems. First, we propose a novel tiered architecture that can be generally applied to WSN-based healthcare systems. Then, we analyze the IEEE 802 series standards in the access layer on their capabilities in setting up WSNs for healthcare. We also explore some of the up-to-date work in the application layer, mostly on the smartphone platforms. Furthermore, in order to develop and integrate effective ubiquitous sensing for healthcare (USH), we highlight four important design goals (i.e., proactiveness, transparency, awareness, and trustworthiness) that should be taken into account in future systems. 
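The FHSS versus DSSS comparison above notes that more FHSS systems can coexist in the same area. The sketch below estimates, by Monte Carlo simulation and by the closed-form birthday-problem expression, how often independently hopping FHSS systems land on the same channel in a given hop interval; the 79-channel count and the assumption of uniform, independent hopping are simplifications of ours, not parameters taken from the cited study.

```python
import random

def fhss_collision_prob(n_systems: int, n_channels: int = 79, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the probability that, in a single hop interval,
    at least two independently hopping FHSS systems land on the same channel."""
    collisions = 0
    for _ in range(trials):
        hops = [random.randrange(n_channels) for _ in range(n_systems)]
        if len(set(hops)) < n_systems:
            collisions += 1
    return collisions / trials

def fhss_collision_prob_exact(n_systems: int, n_channels: int = 79) -> float:
    """Closed-form 'birthday problem' probability for comparison."""
    p_no_collision = 1.0
    for k in range(n_systems):
        p_no_collision *= (n_channels - k) / n_channels
    return 1.0 - p_no_collision

if __name__ == "__main__":
    for n in (2, 5, 10, 15):
        print(f"{n:2d} FHSS systems: simulated {fhss_collision_prob(n):.3f}, "
              f"exact {fhss_collision_prob_exact(n):.3f}")
```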
--- paper_title: Mobile Phone Sensing Systems: A Survey paper_content: Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction. --- paper_title: The MONARCA self-assessment system: Persuasive personal monitoring for bipolar patients paper_content: An increasing number of persuasive personal healthcare monitoring systems are being researched, designed and tested, many of them being based on Smartphone technology. These systems could help patients and clinicians monitor and manage mental illness. Mental illness is complex, difficult to treat, and carries social stigma. We describe our setup to support the treatment of bipolar patients using a persuasive mobile phone monitoring system and a web portal. --- paper_title: DistancePPG: Robust non-contact vital signs monitoring using a camera paper_content: Vital signs such as pulse rate and breathing rate are currently measured using contact probes. But, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phone and computers with webcams). Recently, camera-based non-contact vital sign monitoring have been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tone, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of camera-based PPG estimated using distancePPG translate into reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising of synchronized video recordings of face and pulse oximeter based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions and for various motion scenarios. 
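The distancePPG abstract above describes combining skin-color traces from tracked facial regions with SNR-dependent weights. The sketch below illustrates the general idea with a simple weight of our own choosing (in-band versus out-of-band spectral power); it is not the goodness metric defined in that paper, and the function and variable names are hypothetical.

```python
import numpy as np

def combine_region_signals(region_signals: np.ndarray, fs: float,
                           hr_band=(0.7, 4.0)) -> np.ndarray:
    """Combine per-region color-change traces (n_regions x n_samples) into one pulse
    signal by weighting each region with a crude SNR proxy: spectral power inside a
    plausible heart-rate band divided by power outside it."""
    n_regions, n_samples = region_signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    weights = np.zeros(n_regions)
    for i in range(n_regions):
        x = region_signals[i] - region_signals[i].mean()
        psd = np.abs(np.fft.rfft(x)) ** 2
        in_band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
        weights[i] = psd[in_band].sum() / (psd[~in_band].sum() + 1e-12)
    weights /= weights.sum() + 1e-12
    return weights @ region_signals  # weighted average across regions

def estimate_heart_rate_bpm(pulse: np.ndarray, fs: float) -> float:
    """Report the dominant in-band frequency of the combined signal in beats/min."""
    pulse = pulse - pulse.mean()
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(pulse)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(psd[band])]
```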
--- paper_title: Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone paper_content: As a smartphone is becoming very popular and its performance is being improved fast, a smartphone shows its potential as a low-cost physiological measurement solution which is accurate and can be used beyond the clinical environment. Because cardiac pulse leads the subtle color change of a skin, a pulsatile signal which can be described as photoplethysmographic (PPG) signal can be measured through recording facial video using a digital camera. In this paper, we explore the potential that the reliable heart rate can be measured remotely by the facial video recorded using smartphone camera. First, using the front facing-camera of a smartphone, facial video was recorded. We detected facial region on the image of each frame using face detection, and yielded the raw trace signal from the green channel of the image. To extract more accurate cardiac pulse signal, we applied independent component analysis (ICA) to the raw trace signal. The heart rate was extracted using frequency analysis of the raw trace signal and the analyzed signal from ICA. The accuracy of the estimated heart rate was evaluated by comparing with the heart rate from reference electrocardiogram (ECG) signal. Finally, we developed FaceBEAT, an iPhone application for remote heart rate measurement, based on this study. --- paper_title: iSleep: unobtrusive sleep quality monitoring using smartphones paper_content: The quality of sleep is an important factor in maintaining a healthy life style. To date, technology has not enabled personalized, in-place sleep quality monitoring and analysis. Current sleep monitoring systems are often difficult to use and hence limited to sleep clinics, or invasive to users, e.g., requiring users to wear a device during sleep. This paper presents iSleep -- a practical system to monitor an individual's sleep quality using off-the-shelf smartphone. iSleep uses the built-in microphone of the smartphone to detect the events that are closely related to sleep quality, including body movement, couch and snore, and infers quantitative measures of sleep quality. iSleep adopts a lightweight decision-tree-based algorithm to classify various events based on carefully selected acoustic features, and tracks the dynamic ambient noise characteristics to improve the robustness of classification. We have evaluated iSleep based on the experiment that involves 7 participants and total 51 nights of sleep, as well the data collected from real iSleep users. Our results show that iSleep achieves consistently above 90% accuracy for event classification in a variety of different settings. By providing a fine-grained sleep profile that depicts details of sleep-related events, iSleep allows the user to track the sleep efficiency over time and relate irregular sleep patterns to possible causes. --- paper_title: Adoption of telemedicine: from pilot stage to routine delivery paper_content: Today there is much debate about why telemedicine has stalled. Teleradiology is the only widespread telemedicine application. Other telemedicine applications appear to be promising candidates for widespread use, but they remain in the early adoption stage. The objective of this debate paper is to achieve a better understanding of the adoption of telemedicine, to assist those trying to move applications from pilot stage to routine delivery. 
We have investigated the reasons why telemedicine has stalled by focusing on two, high-level topics: 1) the process of adoption of telemedicine in comparison with other technologies; and 2) the factors involved in the widespread adoption of telemedicine. For each topic, we have formulated hypotheses. First, the advantages for users are the crucial determinant of the speed of adoption of technology in healthcare. Second, the adoption of telemedicine is similar to that of other health technologies and follows an S-shaped logistic growth curve. Third, evidence of cost-effectiveness is a necessary but not sufficient condition for the widespread adoption of telemedicine. Fourth, personal incentives for the health professionals involved in service provision are needed before the widespread adoption of telemedicine will occur. The widespread adoption of telemedicine is a major -- and still underdeveloped -- challenge that needs to be strengthened through new research directions. We have formulated four hypotheses, which are all susceptible to experimental verification. In particular, we believe that data about the adoption of telemedicine should be collected from applications implemented on a large-scale, to test the assumption that the adoption of telemedicine follows an S-shaped growth curve. This will lead to a better understanding of the process, which will in turn accelerate the adoption of new telemedicine applications in future. Research is also required to identify suitable financial and professional incentives for potential telemedicine users and understand their importance for widespread adoption. --- paper_title: Body and Visual Sensor Fusion for Motion Analysis in Ubiquitous Healthcare Systems paper_content: Human motion analysis provides a valuable solution for monitoring the wellbeing of the elderly, quantifying post-operative patient recovery and monitoring the progression of neurodegenerative diseases such as Parkinson’s. The development of accurate motion analysis models, however, requires the integration of multi-sensing modalities and the utilization of appropriate data analysis techniques. This paper describes a robust framework for improved patient motion analysis by integrating information captured by body and visual sensor networks. Real-time target extraction is applied and a skeletonization procedure is subsequently carried out to quantify the internal motion of moving target and compute two metrics, spatiotemporal cyclic motion between leg segments and head trajectory, for each vision node. Extracted motion metrics from multiple vision nodes and accelerometer information from a wearable body sensor are then fused at the feature level by using K-Nearest Neighbor algorithm and used to classify target’s walking gait into normal or abnormal. The potential value of the proposed framework for patient monitoring is demonstrated and the results obtained from practical experiments are described. --- paper_title: A Wearable Assistant for Gait Training for Parkinson’s Disease with Freezing of Gait in Out-of-the-Lab Environments paper_content: People with Parkinson’s disease (PD) suffer from declining mobility capabilities, which cause a prevalent risk of falling. Commonly, short periods of motor blocks occur during walking, known as freezing of gait (FoG). To slow the progressive decline of motor abilities, people with PD usually undertake stationary motor-training exercises in the clinics or supervised by physiotherapists. 
We present a wearable system for the support of people with PD and FoG. The system is designed for independent use. It enables motor training and gait assistance at home and other unsupervised environments. The system consists of three components. First, FoG episodes are detected in real time using wearable inertial sensors and a smartphone as the processing unit. Second, a feedback mechanism triggers a rhythmic auditory signal to the user to alleviate freeze episodes in an assistive mode. Third, the smartphone-based application features support for training exercises. Moreover, the system allows unobtrusive and long-term monitoring of the user’s clinical condition by transmitting sensing data and statistics to a telemedicine service. We investigate the at-home acceptance of the wearable system in a study with nine PD subjects. Participants deployed and used the system on their own, without any clinical support, at their homes during three protocol sessions in 1 week. Users’ feedback suggests an overall positive attitude toward adopting and using the system in their daily life, indicating that the system supports them in improving their gait. Further, in a data-driven analysis with sensing data from five participants, we study whether there is an observable effect on the gait during use of the system. In three out of five subjects, we observed a decrease in FoG duration distributions over the protocol days during gait-training exercises. Moreover, sensing data-driven analysis shows a decrease in FoG duration and FoG number in four out of five participants when they use the system as a gait-assistive tool during normal daily life activities at home. --- paper_title: A Personalized Exercise Trainer for Elderly paper_content: Physical activity provides many physiological benefits. On the one hand it reduces the risk of disease outcomes. On the other hand it is the basis for proper rehabilitation in case of or after a severe disease. Both aspects are especially important for the elderly population. Within this context, the present paper proposes a personalized, home-based exercise trainer for elderly people. The system is based on a wearable sensor network that enables capturing the user's motions. These are then evaluated by comparing them to a prescribed exercise, taking both exercise load and technique into account. Moreover, the results are translated into appropriate feedback to the user to assist the correct exercise execution. A novel part of the system is the generic personalization by means of a supervised teach-in phase. --- paper_title: Anxiety detection using wearable monitoring paper_content: Social Anxiety Disorder (SAD) might be confused with shyness. However, experiencing anxiety can have profound short and long-term implications. During an anxiety span, the subject suffers from blushing, sweating or trembling. Social activities are harder to accomplish and the subject might tend to avoid them. Although there are tested methods to treat SAD such as Exposure Therapy (ET) and Pharmacotherapy, patients do not treat themselves or suspend treatment due economic, time or space barriers. Wearable computing technologies can be used to constantyly monitor user context offering the possibility to detect anxiety spans. In this work we used Google Glass and the Zephyr HxM Bluetooth band to monitor Spontaneous Blink Rate (SBR) and Heart Rate (HR) respectively. We conducted an experiment that involved 8 subjects in two groups: Mild SAD and No SAD. 
The experiment consisted on an induced anxiety situation where each participant gave a 10 minutes speech in front of 2 professors. We found higher average heart rates after induced anxiety spans on the mild SAD group. However, we found no evidence of increased SBR as an anxiety indicator. These results indicate that wearable devices can be used to detect anxiety. --- paper_title: Managing Wearable Sensor Data through Cloud Computing paper_content: Mobile pervasive healthcare technologies can support a wide range of applications and services including patient monitoring and emergency response. At the same time they introduce several challenges, like data storage and management, interoperability and availability of heterogeneous resources, unified and ubiquitous access issues. One potential solution for addressing all aforementioned issues is the introduction of the Cloud Computing concept. Within this context, in this work we have developed and present a wearable -- textile platform based on open hardware and software that collects motion and heartbeat data and stores them wirelessly on an open Cloud infrastructure for monitoring and further processing. The proposed system may be used to promote the independent living of patient and elderly requiring constant surveillance. --- paper_title: Direction sensitive fall detection using a triaxial accelerometer and a barometric pressure sensor paper_content: Falling is one of the leading causes of serious health decline or injury-related deaths in the elderly. For survivors of a fall, the resulting health expenses can be a devastating burden, largely because of the long recovery time and potential comorbidities that ensue. The detection of a fall is, therefore, important in care of the elderly for decreasing the reaction time by the care-givers especially for those in care who are particularly frail or living alone. Recent advances in motion-sensor technology have enabled wearable sensors to be used efficiently for pervasive care of the elderly. In addition to fall detection, it is also important to determine the direction of a fall, which could help in the location of joint weakness or post-fall fracture. This work uses a waist-worn sensor, encompassing a 3D accelerometer and a barometric pressure sensor, for reliable fall detection and the determination of the direction of a fall. Also assessed is an efficient analysis framework suitable for on-node implementation using a low-power micro-controller that involves both feature extraction and fall detection. A detailed laboratory analysis is presented validating the practical application of the system. --- paper_title: Advancetid network based wireless, single PMS for multiple-patient monitoring paper_content: Real-time monitoring of the physical condition of the patients is one of the major challenges faced by hospital authorities nowadays. All the hospitals today have one Patient Monitoring System (PMS) per patient. Also human intervention is needed frequently for critical patients. In this paper we propose a network based Wireless Single PMS (NWSPMS), which can monitor multiple patients to measure various physical parameters. This must be precise, fast and effective for transmission of information about their health condition to the concerned. Also there lies a need to transmit more parameters and more data for the convenience and fast response by the staffs in the hospital. Since we need an immediate response, we make use of the voice-call facility apart from the SMS mechanism via an GSM modem. 
The system proposed here monitors blood pressure, heart beat rate, body temperature and ECG of the elderly patient thus giving an overall condition of the patient. --- paper_title: HOPE: An electronic gadget for home-bound patients and elders paper_content: Home-bound patients, mostly elders face many problems regarding their critical health parameter variations and timely assistance in case of emergencies. It is really a malady when they suffer from other severe diseases, heart problems etc. A constant and reliable assistive technology is essential while taking care of home-bound patients. The system, HOPE we have proposed has sensors to monitor the heart rate, body temperature, tilt and fall. The sensors are attached to the body of the elderly patient in a contented manner. The data can be sent to any Smartphone with Bluetooth support. In case of any emergency the caretaker will be given a notification about the critical situation. The provision to monitor the posture of the patient in the bed helps to reduce the cases of bedsore in bedridden elders. Day by day the menace of weakening health and chances of skin related problems, bed sores etc. are becoming critical in case of bed ridden patients. The experimental results presented in this paper give some insight into the behavior of the proposed system. --- paper_title: Data set for fall events and daily activities from inertial sensors paper_content: Wearable sensors are becoming popular for remote health monitoring as technology improves and cost reduces. One area in which wearable sensors are increasingly being used is falls monitoring. The elderly, in particular are vulnerable to falls and require continuous monitoring. Indeed, many attempts, with insufficient success have been made towards accurate, robust and generic falls and Activities of Daily Living (ADL) classification. A major challenge in developing solutions for fall detection is access to sufficiently large data sets. This paper presents a description of the data set and the experimental protocols designed by the authors for the simulation of falls, near-falls and ADL. Forty-two volunteers were recruited to participate in an experiment that involved a set of scripted protocols. Four types of falls (forward, backward, lateral left and right) and several ADL were simulated. This data set is intended for the evaluation of fall detection algorithms by combining daily activities and transitions from one posture to another with falls. In our prior work, machine learning based fall detection algorithms were developed and evaluated. Results showed that our algorithm was able to discriminate between falls and ADL with an F-measure of 94%. --- paper_title: Smart Multi-Level Tool for Remote Patient Monitoring Based on a Wireless Sensor Network and Mobile Augmented Reality paper_content: Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. 
The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia. --- paper_title: An integrated system for wireless monitoring of chronic patients and elderly people paper_content: In the last years the demographic changes and ageing of population increase health care demand. Increasing number of chronic patients and elders requires close attention to their health conditions. In this paper we present the realization of a wireless monitoring system capable to measure process and transmit patient's physiologic signals (electrocardiogram, respiratory rhythm, arterial saturation of oxygen, blood pressure and body temperature) to a central medical server. The use of our system is suitable for continuous long-time monitoring, as a part of a diagnostic procedure or can achieve medical assistance of a chronic condition. We use custom developed and commercially available devices, low power microcontrollers and RF transceivers that perform the measurements and transmit them to the Personal Server. The Personal Server, in form of a PDA, that running a monitor application, receives the physiological signals from monitoring devices, activates the alarms when the monitored parameters exceed the preset limits, and communicates periodically to the central server by using WiFi or GSM/GPRS connection. --- paper_title: CoolSpots: reducing the power consumption of wireless mobile devices with multiple radio interfaces paper_content: CoolSpots enable a wireless mobile device to automatically switch between multiple radio interfaces, such as WiFi and Bluetooth, in order to increase battery lifetime. The main contribution of this work is an exploration of the policies that enable a system to switch among these interfaces, each with diverse radio characteristics and different ranges, in order to save power - supported by detailed quantitative measurements. The system and policies do not require any changes to the mobile applications themselves, and changes required to existing infrastructure are minimal. Results are reported for a suite of commonly used applications, such as file transfer, web browsing, and streaming media, across a range of operating conditions. Experimental validation of the CoolSpot system on a mobile research platform shows substantial energy savings: more than a 50% reduction in energy consumption of the wireless subsystem is possible, with an associated increase in the effective battery lifetime. --- paper_title: Multi-parameters wireless shirt for physiological monitoring paper_content: The ability to monitor the health status of elderly patients or patients undergoing home therapy allows significant advantages in terms of cost and convenience of the subject. However, these non-clinical applications of biomedical signals acquisition require different monitoring devices having, between the other characteristics, reduced size, low power and environment compatibility. 
The research activity concerns the development of a new wearable device that can monitor the main physiological parameters of a person in a non-invasive manner. All sensors have contactless characteristics that permit to avoid the direct contact with the skin. This system is a useful solution for monitoring the health condition of patients at home. The wearable monitoring system consists of two subsystems: first, a wearable data acquisition hardware, in which the sensors are integrated for the acquisition of biomedical parameters, and secondly, a remote monitoring station located separately and connected to the Internet for telemedicine applications. The physiological parameters that are monitored are electrocardiogram (ECG), heart rate (HR), derived from ECG signals through the determination of RR intervals, respiratory rate, and three-axis motion (acceleration and position) of the subject measured using an accelerometer. All sensors are designed using contactless measurement techniques, thus avoiding the use of gel for the conduction of the signal and possible skin irritation due to contact. The electrodes for measuring ECG signal are capacitive, while the measure of respiration is obtained by plethysmography, which does not require direct contact with skin. In order to design and construct the signal acquisition circuits in an efficient and simple manner, modular design concept is adopted in this research. The flexible signal conditioning modules are designed and assembled together. The human parameters can be recorded and analyzed continuously during work activities at home. The correct evaluation of these parameters allows the medical staff to assess to the state of health, to know accidental injury or other danger occurred in patients at home. --- paper_title: An Integrated Architecture for Remote Healthcare Monitoring paper_content: Remote healthcare monitoring has attracted the interest of many research projects during last years. The need to address the issue of ageing population has lead to the exploitation of modern communication and software technologies in this domain too. This is instigated by related technological evolutions such as the evolutions in sensor networks which can now support self organization. This paper presents the INCASA architecture that provides advances towards mainly two directions. To start with, by using the appropriate middleware, it manages to transform the network of devices to a network of services following a Service Oriented Architecture. In this way, it turns the implementation of applications (e.g. clinical applications) on top of the middleware to be easy and efficient. Furthermore, INCASA architecture provides an integrated solution for profiling user habits. This is particularly important in case of elderly people who tend to follow daily activities in a repeating manner. In the proposed architecture, the procedure of modeling user habits is the necessary step in order to generate alert actions in case of divergence. --- paper_title: Noise Filtering, Channel Modeling and Energy Utilization in Wireless Body Area Networks paper_content: Constant monitoring of patients without disturbing their daily activities can be achieved through mobile networks. Sensor nodes distributed in a home environment to provide home assistance gives concept of Wireless Wearable Body Area Networks. Gathering useful information and its transmission to the required destination may face several problems. 
In this paper we figure out different issues and discuss their possible solutions in order to obtain an optimized infrastructure for the care of elderly people. Different channel models along with their characteristics, noise filtering in different equalization techniques, energy consumption and effect of different impairments have been discussed in our paper. The novelty of this work is that we highlighted multiple issues along with their possible solutions that a BAN infrastructure is still facing. --- paper_title: Wireless ECG monitoring and alarm system using ZigBee paper_content: This paper presents the development of a system for wireless ECG monitoring and alarm using ZigBee. The system is intended for home use by patients that are not in a critical condition but need to be constant or periodically monitored by clinicians or family. Patient monitoring is the cornerstone of proper medical care. It provides clinicians the much needed information about a person's current health status, so that they can act accordingly if anything goes wrong. Nowadays, complex patient monitoring systems offer the possibility of continuously monitoring a multitude of biological signals, analyze them, interpret them and take the appropriate action; or alert clinicians if necessary [1]. The usual shortcomings of most of these systems reside in affecting patient mobility and home comfort. A patient would need to be sitting on a bed wired to these devices in order for his vital signs to be monitored. This system measures, records and presents in real-time the electrical activity of the heart while preserving comfort of the patient. The device is built as a low-power, small-sized, low-cost solution suitable for monitoring elderly people at home or in a nursing facility without interfering with the daily activity of a patient. It should give sufficient information in real time, and make it available remotely. The intention is not to achieve perfect clinical accuracy but the device is able to detect anomalies in the measured data and it also has alerting features. Authorized observers (clinicians or family) can monitor at any moment the state of the patient through the internet. --- paper_title: A distributed wireless body area network for medical supervision paper_content: The emerging of wireless body area network has profound impacts on our daily life, such as pervasive medical supervision and outdoor exercises, and the large scale application of wireless body area network can effectively reduce higher cost burden owing to the aging society and long term healthcare for the chronic illness. It can also enhance the quality of life for elderly people and chronic patients, and decrease the harm of the sudden diseases. The paper presents a distributed wireless body area network for medical supervision. The system contains three layers: sensor network tier, mobile computing network tier, and remote monitoring network tier. It provides collection, demonstration, and storage of the vital information such as ECG, blood oxygen, body temperature, respiration rate. Furthermore, it also provides medical service management and disease warning. The system has many advantages such as comfort, low-cost, low-power, easy configuration, convenient carrying, easy transplantation, real-time reliable data, and friendly human-machine interaction. And then the design and implementation issues of the system composition are discussed in this paper. 
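Several of the systems above (for example the ZigBee ECG monitor and the multi-patient PMS) raise alarms when monitored parameters leave preset limits. The sketch below shows one minimal way such a rule could be expressed; the thresholds are illustrative placeholders, not values taken from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class VitalLimits:
    # Illustrative thresholds only -- a real deployment would use clinician-set,
    # per-patient limits.
    hr_min: float = 50.0      # beats/min
    hr_max: float = 120.0
    temp_min: float = 35.0    # degrees Celsius
    temp_max: float = 38.5
    spo2_min: float = 92.0    # percent

def check_vitals(heart_rate: float, temperature: float, spo2: float,
                 limits: VitalLimits = VitalLimits()) -> list[str]:
    """Return alarm messages for any vital sign outside its preset limits."""
    alarms = []
    if not (limits.hr_min <= heart_rate <= limits.hr_max):
        alarms.append(f"Heart rate out of range: {heart_rate:.0f} bpm")
    if not (limits.temp_min <= temperature <= limits.temp_max):
        alarms.append(f"Body temperature out of range: {temperature:.1f} C")
    if spo2 < limits.spo2_min:
        alarms.append(f"Low blood oxygen saturation: {spo2:.0f} %")
    return alarms

if __name__ == "__main__":
    # One sampled reading from a body sensor node; any alarms would be forwarded
    # to the personal server / caregiver (e.g., via SMS or a voice call).
    print(check_vitals(heart_rate=134, temperature=36.8, spo2=90))
```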
--- paper_title: Mobile Messaging Services-Based Personal Electrocardiogram Monitoring System paper_content: A mobile monitoring system utilizing Bluetooth and mobile messaging services (MMS/SMSs) with low-cost hardware equipment is proposed. A proof of concept prototype has been developed and implemented to enable transmission of an Electrocardiogram (ECG) signal and body temperature of a patient, which can be expanded to include other vital signs. Communication between a mobile smart-phone and the ECG and temperature acquisition apparatus is implemented using the popular personal area network standard specification Bluetooth. When utilizing MMS for transmission, the mobile phone plots the received ECG signal and displays the temperature using special application software running on the client mobile phone itself, where the plot can be captured and saved as an image before transmission. Alternatively, SMS can be selected as a transmission means, where in this scenario, dedicated application software is required at the receiving device. The experimental setup can be operated formonitoring from anywhere in the globe covered by a cellular network that offers data services. --- paper_title: Real life applicable fall detection system based on wireless body area network paper_content: Real-time health monitoring with wearable sensors is an active area of research. In this domain, observing the physical condition of elderly people or patients in personal environments such as home, office, and restroom has special significance because they might be unassisted in these locations. The elderly people have limited physical abilities and are more vulnerable to serious physical damages even with small accidents, e.g. fall. The falls are unpredictable and unavoidable. In case of a fall, early detection and prompt notification to emergency services is essential for quick recovery. However, the existing fall detection devices are bulky and uncomfortable to wear. Also, detection system using the devices requires the higher computation overhead to detect falls from activities of daily living (ADL). In this paper, we propose a new fall detection system using one sensor node which can be worn as a necklace to provide both the comfortable wearing and low computation overhead. The proposed necklace-shaped sensor node includes tri-axial accelerometer and gyroscope sensors to classify the behaviour and posture of the detection subject. The simulated experimental results performed 5 fall scenarios 50 times by 5 persons show that our proposed detection approach can successfully distinguish between ADL and fall, with sensitivities greater than 80% and specificities of 100%. --- paper_title: Feasibility Study of a Wearable System Based on a Wireless Body Area Network for Gait Assessment in Parkinson's Disease Patients paper_content: Parkinson's disease (PD) alters the motor performance of affected individuals. The dopaminergic denervation of the striatum, due to substantia nigra neuronal loss, compromises the speed, the automatism and smoothness of movements of PD patients. The development of a reliable tool for long-term monitoring of PD symptoms would allow the accurate assessment of the clinical status during the different PD stages and the evaluation of motor complications. Furthermore, it would be very useful both for routine clinical care as well as for testing novel therapies. 
Within this context we have validated the feasibility of using a Body Area Network (BAN) of wireless accelerometers to perform continuous at-home gait monitoring of PD patients. The analysis addresses the assessment of the system performance working in real environments. --- paper_title: Leveraging Mobile Cloud for Telemedicine: A Performance Study in Medical Monitoring paper_content: Telemedicine has proven to be an effective and promising solution in promoting more affordable and higher quality healthcare. Wearable body sensors and mobile devices have been widely used to monitor the health status of patients or the elderly and to generate alarms in case of imminent medical conditions. However, the limited computation power and energy supply of mobile devices result in either a high false alarm rate or short battery life, prohibitive for medical monitoring. Cloud computing embraces new opportunities for transforming healthcare delivery into a more reliable and sustainable manner. In this paper, we propose a mobile cloud telemedicine framework and discuss its potential performance by taking advantage of the real-time, on-site monitoring capability of an Android mobile device and the abundant computing power of the cloud. --- paper_title: An Open, Ubiquitous and Adaptive Chronic Disease Management Platform for Chronic Respiratory and Renal Diseases (CHRONIOUS) paper_content: CHRONIOUS is a highly innovative Information and Communication Technologies (ICT) research initiative that aspires to implement its vision for ubiquitous health and lifestyle monitoring of people with chronic diseases at a European level. CHRONIOUS is funded by the European Union under its 7th Framework Program. The project started on February 1st, 2008 and has a planned duration of three and a half years. The CHRONIOUS team is a consortium of internationally renowned research labs in Europe, universities and private companies, who collaborate to define a European framework for a generic health status monitoring platform addressing people with chronic health conditions. CHRONIOUS addresses a smart wearable platform, based on multi-parametric sensor processing, for monitoring people suffering from chronic diseases. In particular, CHRONIOUS will be tested with chronic obstructive pulmonary disease (COPD) and chronic kidney disease (CKD) and renal insufficiency patients at their home. --- paper_title: Using physiological signals for authentication in a group key agreement protocol paper_content: A Body Area Network (BAN) can be used to monitor elderly people or patients with chronic diseases. Securing broadcasted data and commands within BANs is essential for preserving the privacy of health data and for ensuring the safety of the patient. We show how a group key can be securely established between the different sensors within a BAN. The proposed mechanism uses the inherent secure environmental values. An implementation of the protocols is carried out on mica2 motes and performance is examined in detail. The time elapsed, complexity of the code and memory requirements are analysed. The results confirm the potential benefits in real-world application. We show that a key establishment protocol based on RSA has advantages over a protocol based on ECC for this application. --- paper_title: Estimation of Basic Activities of Daily Living Using ZigBee 3D Accelerometer Sensor Network paper_content: How best to support and care for elderly people has recently become a serious problem.
Modern approaches try to help seniors living alone maintain their physical and mental health, but it is hard for them to do so entirely on their own. Our aim is to monitor a user's daily life in order to provide suitable services that help maintain his or her health. A life log is very useful information for providing such services. However, traditional monitoring systems focus only on elementary motions such as walking and sleeping. We deal with recognizing higher-level activities by using data on movements from one area of a house to another. These activity logs support not only the elderly but also the care staff or family members who plan their support. Daily motions are highly varied, so we focus on Basic Activities of Daily Living (ADL), the categories of activities normally performed in daily life such as feeding, bathing and homemaking. In this research, we select 11 motions from the ADL categories for recognition. --- paper_title: Automatic identification and placement verification of wearable wireless sensor nodes using atmospheric air pressure distribution paper_content: We present a new approach to identifying and verifying the location of wearable wireless sensor nodes (W2SNs) placed on a body by inferring differences in altitude using atmospheric air pressure sensors. This technique is aimed at long-term, in-home monitoring applications for the elderly and patients with chronic conditions, where the user has the freedom to install and remove the W2SNs as required without caregiver assistance. Our prototype shows that each IEEE 802.15.4-capable W2SN, employing pressure sensors, is capable of detecting altitude changes sufficient to distinguish between the relative elevation of a patient's arm and leg, and recognize which W2SN is placed on which limb; preliminary results show detection of altitude differences down to 39 cm. --- paper_title: On Power and Throughput Tradeoffs of WiFi and Bluetooth in Smartphones paper_content: This paper describes a combined power and throughput performance study of WiFi and Bluetooth usage in smartphones. The work measures the obtained throughput in various settings while employing each of these technologies, and the power consumption level associated with them. In addition, the power requirements of Bluetooth and WiFi in their respective noncommunicating modes are also compared. The study reveals several interesting phenomena and tradeoffs. In particular, the paper identifies many situations in which WiFi is superior to Bluetooth, countering previous reports. The study also identifies a couple of scenarios that are better handled by Bluetooth. The conclusions from this study suggest preferred usage patterns, as well as operative suggestions for researchers and smartphone developers. This includes a cross-layer optimization for TCP/IP that could greatly improve the throughput-to-power ratio whenever the transmitter is more capable than the receiver. --- paper_title: Wireless sensor network based on multilevel femtocells for home monitoring paper_content: An intelligent femtocell-based sensor network is proposed for home monitoring of elderly people or people with chronic diseases. The femtocell is considered as a small sensor network placed in the patient's house, consisting of both mobile and fixed sensors arranged in three layers. The first layer contains body sensors attached to the patient that monitor different health parameters, patient location, position and possible falls. The second layer is dedicated to ambient sensors and routing inside the cell.
The third layer contains emergency ambient sensors that cover burglary events or toxic gas concentration, distributed by necessities. Cell implementation is based on IRIS family of motes. In order to reduce energy consumption and radiation level, adaptive rates of acquisition and communication are used. Experimental results on body sensors and ambient ones are presented in the last section. --- paper_title: Assistive Technology for Elders: Wireless Intelligent Healthcare Gadget paper_content: Improving the quality of life for the elderly persons and giving them the proper care at the right time is the responsibility of the younger generation. But due to lack of awareness on proper elder care, unavoidable busy schedule etc. the elderly population is seem to be quite ignored. In such a scenario we are trying to find an amicable solution to this problem using a compact and user friendly system design. The proposed system is a compact device which has various wearable sensors all attached inside a glove. It has a process flow controller module and a communication control module. These modules perform together to excite an alert mechanism to the relatives or to the doctors via an SMS or a voice call to their mobile phones, whenever some critical situations arise. The system measures the heart rate, oxygen content in blood, body temperature and the pressure of the elderly person, which are the important parameters for a proper diagnosis. --- paper_title: Synchronous Wearable Wireless Body Sensor Network Composed of Autonomous Textile Nodes paper_content: A novel, fully-autonomous, wearable, wireless sensor network is presented, where each flexible textile node performs cooperative synchronous acquisition and distributed event detection. Computationally efficient situational-awareness algorithms are implemented on the low-power microcontroller present on each flexible node. The detected events are wirelessly transmitted to a base station, directly, as well as forwarded by other on-body nodes. For each node, a dual-polarized textile patch antenna serves as a platform for the flexible electronic circuitry. Therefore, the system is particularly suitable for comfortable and unobtrusive integration into garments. In the meantime, polarization diversity can be exploited to improve the reliability and energy-efficiency of the wireless transmission. Extensive experiments in realistic conditions have demonstrated that this new autonomous, body-centric, textile-antenna, wireless sensor network is able to correctly detect different operating conditions of a firefighter during an intervention. By relying on four network nodes integrated into the protective garment, this functionality is implemented locally, on the body, and in real time. In addition, the received sensor data are reliably transferred to a central access point at the command post, for more detailed and more comprehensive real-time visualization. This information provides coordinators and commanders with situational awareness of the entire rescue operation. A statistical analysis of measured on-body node-to-node, as well as off-body person-to-person channels is included, confirming the reliability of the communication system. 
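The textile body sensor network above performs event detection on the node and transmits only detected events. The sketch below illustrates that general pattern with an invented windowed-activity rule; the threshold, window length and message format are assumptions for illustration, not the algorithm used in the cited work.

```python
from collections import deque
from statistics import mean

class OnBodyNode:
    """Minimal sketch of on-node event detection: keep a short window of
    acceleration magnitudes and report an 'event' only when the windowed
    activity level crosses a threshold, instead of streaming raw samples."""

    def __init__(self, node_id: str, window: int = 50, threshold_g: float = 1.8):
        self.node_id = node_id
        self.samples = deque(maxlen=window)
        self.threshold_g = threshold_g

    def add_sample(self, accel_magnitude_g: float):
        """Feed one sample; return an event message when the rule fires, else None."""
        self.samples.append(accel_magnitude_g)
        if len(self.samples) == self.samples.maxlen and mean(self.samples) > self.threshold_g:
            return self.make_event(mean(self.samples))
        return None

    def make_event(self, level: float) -> dict:
        # In a real network this message would be radioed to the base station,
        # possibly relayed by other on-body nodes.
        return {"node": self.node_id, "type": "high_activity", "level": round(level, 2)}
```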
--- paper_title: Towards a low cost open architecture wearable sensor network for health care applications paper_content: Wireless sensor networks present a growing interest in health care applications since they can replace wired devices for detecting signals of physiological origin and continuously monitoring health parameters, offering a reliable and inexpensive solution. In this paper a low cost open architecture wearable sensor network for health care applications is presented. Through this study, an experimental wireless sensor network (WSN) architecture has been built from scratch in order to investigate and present the development procedure and the corresponding capabilities and limitations of such a system. Moreover, technological aspects regarding implementation are also presented. --- paper_title: Processing of wearable sensor data on the cloud - a step towards scaling of continuous monitoring of health and well-being paper_content: As part of a sleep monitoring project, we used actigraphy based on body-worn accelerometer sensors to remotely monitor and study the sleep-wake cycle of elderly staying at nursing homes. We have conducted a fifteen patient trial of a sleep activity pattern monitoring (SAPM) system at a local nursing home. The data was collected and stored in our server and the processing of the data was done offline after sleep diaries used for validation and ground truth were updated into the system. The processing algorithm matches and annotates the sensor data with manual sleep diary information and is processed asynchronously on the grid/cloud back end. In this paper we outline the mapping of the system for grid / cloud processing, and initial results that show expected near-linear performance for scaling the number of users. --- paper_title: Emergency Fall Incidents Detection in Assisted Living Environments Utilizing Motion, Sound, and Visual Perceptual Components paper_content: This paper presents the implementation details of a patient status awareness enabling human activity interpretation and emergency detection in cases, where the personal health is threatened like elder falls or patient collapses. The proposed system utilizes video, audio, and motion data captured from the patient's body using appropriate body sensors and the surrounding environment, using overhead cameras and microphone arrays. Appropriate tracking techniques are applied to the visual perceptual component enabling the trajectory tracking of persons, while proper audio data processing and sound directionality analysis in conjunction to motion information and subject's visual location can verify fall and indicate an emergency event. The postfall visual and motion behavior of the subject, which indicates the severity of the fall (e.g., if the person remains unconscious or patient recovers) is performed through a semantic representation of the patient's status, context and rules-based evaluation, and advanced classification. A number of advanced classification techniques have been examined in the framework of this study and their corresponding performance in terms of accuracy and efficiency in detecting an emergency situation has been thoroughly assessed. 
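Several of the fall-detection references above rely on an impact signature in the acceleration magnitude followed by a period of inactivity. The sketch below encodes that common heuristic in its simplest form; the thresholds and timing are illustrative and deliberately cruder than the sensor-fusion and classifier-based approaches evaluated in the cited papers.

```python
import numpy as np

def detect_fall(accel_mag_g: np.ndarray, fs: float,
                impact_thresh_g: float = 2.5,
                rest_thresh_g: float = 0.15,
                rest_window_s: float = 2.0) -> bool:
    """Very simple fall heuristic: an impact spike in the acceleration magnitude
    followed by a window of near-zero dynamic acceleration (the person lying still).
    Thresholds are illustrative, not taken from the cited studies."""
    dyn = np.abs(accel_mag_g - 1.0)                 # remove the 1 g gravity component
    impacts = np.where(accel_mag_g > impact_thresh_g)[0]
    rest_len = int(rest_window_s * fs)
    for idx in impacts:
        start = idx + int(1.0 * fs)                 # skip the first second after impact
        window = dyn[start: start + rest_len]
        if len(window) == rest_len and window.max() < rest_thresh_g:
            return True
    return False
```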
--- paper_title: Monitoring activities of daily living based on wearable wireless body sensor network paper_content: With recent advances in microprocessor chip technology, wireless communication, and biomedical engineering it is possible to develop miniaturized ubiquitous health monitoring devices that are capable of recording physiological and movement signals during daily life activities. The aim of the research is to implement and test the prototype of health monitoring system. The system consists of the body central unit with Bluetooth module and wearable sensors: the custom-designed ECG sensor, the temperature sensor, the skin humidity sensor and accelerometers placed on the human body or integrated with clothes and a network gateway to forward data to a remote medical server. The system includes custom-designed transmission protocol and remote web-based graphical user interface for remote real time data analysis. Experimental results for a group of humans who performed various activities (eg. working, running, etc.) showed maximum 5% absolute error compared to certified medical devices. The results are promising and indicate that developed wireless wearable monitoring system faces challenges of multi-sensor human health monitoring during performing daily activities and opens new opportunities in developing novel healthcare services. --- paper_title: Wireless sensor network based E-health system: Implementation and experimental results paper_content: With the increasing number senior citizens, E-health is targeted for home use with the special requirements of being usable in everyday life and low cost. A wireless sensor network application is proposed here for 24 hour constant monitoring without disturbing daily activities of elderly people and their caretakers. In the system proposed, both fixed and body (mobile) sensors are used. Since not every elder likes to have a sensor board attached to him/her, and in many cases, he/she may not carry the sensor; the home sensor network independently would have the ability to monitor the health status and living environment based on the multisensor data analysis and fusion. A mixed positioning algorithm is proposed to determine where the elderly person is. The purpose of the positioning is to help the system to determine the person's activities and further to make decisions about his/her health status. The system could take care of two types of the basic needs of an elderly person: everyday needs as abnormal events and emergency alarms to doctors and caretakers through telephone, SMS and e-mail, and day to day requirements such as taking of medicine, having lunch, turn off the microwave oven, and so on. At same time, the system is sensitive to security and privacy issues. --- paper_title: iCare: A Mobile Health Monitoring System for the Elderly paper_content: This paper describes a mobile health monitoring system called iCare for the elderly. We use wireless body sensors and smart phones to monitor the well being of the elderly. It can offer remote monitoring for the elderly anytime anywhere and provide tailored services for each person based on their personal health condition. When detecting an emergency, the smart phone will automatically alert pre-assigned people who could be the old people's family and friends, and call the ambulance of the emergency centre. 
It also acts as a personal health information system and a source of medical guidance, offering a communication platform and a medical knowledge database so that the family and friends of the monitored person can cooperate with doctors in his or her care. The system also features some unique functions that cater to the living demands of the elderly, including regular reminders, a quick alarm, medical guidance, etc. iCare is not only a real-time health monitoring system for the elderly, but also a living assistant that can make their lives more convenient and comfortable. ---
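The e-health system above locates the resident with a mixed positioning algorithm built on the fixed home sensors. As a generic stand-in for that idea (not the algorithm of the cited paper), the sketch below estimates a 2-D position as an RSSI-weighted centroid of fixed anchor nodes under an assumed log-distance path-loss model; the anchor coordinates, reference power, and path-loss exponent are illustrative assumptions.

```python
# Weighted-centroid position estimate from RSSI readings at fixed anchors.
# Anchor coordinates, path-loss exponent, and reference power are assumptions.

ANCHORS = {            # anchor id -> (x, y) position in metres (hypothetical)
    "kitchen":  (0.0, 0.0),
    "bedroom":  (6.0, 0.0),
    "bathroom": (0.0, 4.0),
}
P_REF_DBM = -40.0      # assumed RSSI at 1 m
PATH_LOSS_N = 2.2      # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert a log-distance path-loss model to get an approximate range in metres."""
    return 10 ** ((P_REF_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

def estimate_position(rssi_by_anchor):
    """Weighted centroid: nearer anchors (stronger RSSI) get larger weights."""
    weights, wx, wy = 0.0, 0.0, 0.0
    for anchor_id, rssi in rssi_by_anchor.items():
        x, y = ANCHORS[anchor_id]
        w = 1.0 / max(rssi_to_distance(rssi), 1e-6)
        weights += w
        wx += w * x
        wy += w * y
    return wx / weights, wy / weights

# Example reading: strongest signal near the kitchen pulls the estimate that way.
print(estimate_position({"kitchen": -55.0, "bedroom": -70.0, "bathroom": -62.0}))
```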
Title: A Survey on Wireless Body Area Networks for eHealthcare Systems in Residential Environments
Section 1: Introduction Description 1: Introduce the background and motivation for using Wireless Body Area Network (WBAN) systems in eHealthcare, especially in the context of an aging population and increasing chronic diseases.
Section 2: Residential Environment eHealthcare System Architecture Description 2: Describe a typical architecture of a residential environment eHealthcare system, detailing its different layers and components.
Section 3: Taxonomy and Requirements Description 3: Summarize the primary requirements and design considerations of wireless communication technologies applicable to WBAN systems, covering low power consumption, transmission reliability, latency, data rates, and security.
Section 4: Candidate Wireless Technologies Description 4: Review the latest wireless communication technologies that support the development and deployment of WBAN systems, including BLE, ZigBee, RuBee, Sensium, Zarlink, Z-Wave, and others.
Section 5: Discussion Description 5: Highlight important features of low-power communication technologies, discussing their protocol efficiency, user flexibility, communication range, and energy efficiency.
Section 6: Future Prospects Description 6: Explore the potential future growth areas for WBAN systems in healthcare, focusing on smartphone-based healthcare applications, their advantages, challenges, and fastest-growing mHealth areas.
Section 7: Summary of Recent Research Articles Description 7: Provide an overview of recent technological advances in eHealthcare systems, focusing on studies that use WBAN for remote patient monitoring and their findings.
Section 8: Conclusions Description 8: Summarize the key points and findings from the survey, highlighting the challenges, opportunities, and potential solutions in the context of WBAN systems for residential eHealthcare.
A Survey of Intelligent Control and Health Management Technologies for Aircraft Propulsion Systems
21
--- paper_title: Fault Diagnosis based on Analytical Models for Linear and Nonlinear Systems - A Tutorial paper_content: The diagnosis systems considered in this paper rely on the inconsistency between the actual process behaviour and its expected behaviour as described by an analytical model. The inconsistency is exhibited in signals called residuals. Two methods for residual generation are presented in a tutorial way: the parity space and the observer based approaches. Linear and nonlinear models are successively considered as a basis for the design of the residual generators.
--- paper_title: Engine data analysis using decision trees paper_content: Monitoring systems on aircraft engines typically record many parameters, sampled frequently, over the duration of a flight. On some newer engines such data have been recorded, and stored, over hundreds of flights. But data alone are not sufficient, and are too unwieldy, to help the analyst understand the state of the engine's health. Automated data mining, for knowledge discovery, is being successfully used in several fields. One well-known technique is a decision tree, which is induced from a subset of a known training set of engine data and outcome. Since engine parameters are continuous-valued, a splitting criterion, known as the minimum description length principle, is used to define the branch points of the tree. The decision tree is induced from a subset of a known training set of engine data and outcome. Issues such as overfitting the training data, and the problem of large, or randomly-chosen, training sets are also discussed in this paper. Background: Many airlines have invested in engine monitoring systems (EMS) on their newer aircraft. These systems record many parameters, such as temperatures, pressures, rotor speeds, fuel flow and others. The hope is that, by tracking the engines over hundreds of flights, one can discern component degradations which lead to diminished performance or safety. The large numbers of parameters, and the large volumes of time-series data, pose a challenge to the analyst sifting through the data. Simple checks such as minimum/maximum parameter bounds often lead to false alarms. Relaxing these bounds could lead to missed alerts. Several NASA Ames research efforts are directed at exploring ways of using computers to monitor the health of physical systems. In a previous paper [8] surveying aircraft engine health monitoring systems, various machine learning-based techniques that are being researched for engine data analysis were outlined. In this paper we focus on using the decision tree method for learning to classify an engine's health based on its sensor data.
--- paper_title: A SURVEY OF AIRCRAFT ENGINE HEALTH MONITORING SYSTEMS paper_content: This paper presents a survey of engine health monitoring systems for commercial aircraft. The state of practice is explored first, with the purpose of identifying the shortcomings of current systems. The state of the research to address these shortcomings is then surveyed to explore the alternatives. Research and monitoring applications for various other types of engines provide a good basis for further exploring the topic.
This survey is meant to serve as a precursor to engine health and monitoring research at the NASA Ames Research Center.
--- paper_title: Fundamentals of model-based diagnosis paper_content: Over the last 25 years, the Computer Science community, and particularly the Artificial Intelligence community, have developed a framework for system diagnosis, called Model-Based Diagnosis. This framework is extremely general and covers a broad range of capabilities including detecting malfunctions, isolating faulty components, handling multiple faults, identifying repair actions, and automatically generating embedded software. This field grew independently of the fault detection and isolation (FDI) community and has developed its own terminologies and conventions. This paper is an attempt to present the fundamental concepts of Model-Based Diagnosis (MBD) in one place and in one consistent terminology, and thus make the field much more accessible to the FDI community.
--- paper_title: Optimal discrete event supervisory control of aircraft gas turbine engines paper_content: This paper presents an application of the recently developed theory of optimal discrete event supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
--- paper_title: Pilot's Associate: a cooperative, knowledge-based system application paper_content: The Pilot's Associate program, a joint effort of the Defense Advanced Research Projects Agency (DARPA) and the US Air Force to build a cooperative, knowledge-based system to help pilots make decisions, is described, and the lessons learned are examined. The Pilot's Associate concept developed as a set of cooperating, knowledge-based subsystems: two assessor and two planning subsystems, and a pilot interface. The two assessors, situation assessment and system status, determine the state of the outside world and the aircraft systems, respectively. The two planners, tactics planner and mission planner, react to the dynamic environment by responding to immediate threats and their effects on the prebriefed mission plan. The pilot-vehicle interface subsystem provides the critical connection between the pilot and the rest of the system. The focus is on the air-to-air subsystems.
--- paper_title: Sensor Needs for Control and Health Management of Intelligent Aircraft Engines paper_content: NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective.
Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management, and distributed controls. In each of these three areas, individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.
--- paper_title: A Study on the Requirements for Fast Active Turbine Tip Clearance Control Systems paper_content: This paper addresses the requirements of a control system for active turbine tip clearance control in a generic commercial turbofan engine through design and analysis. The control objective is to articulate the shroud in the high pressure turbine section in order to maintain a certain clearance set point given several possible engine transient events. The system must also exhibit reasonable robustness to modeling uncertainties and reasonable noise rejection properties. Two actuators were chosen to fulfill such a requirement, both of which possess different levels of technological readiness: electrohydraulic servovalves and piezoelectric stacks. Identification of design constraints, desired actuator parameters, and actuator limitations are addressed in depth; all of which are intimately tied with the hardware and controller design process. Analytical demonstrations of the performance and robustness characteristics of the two axisymmetric LQG clearance control systems are presented. Takeoff simulation results show that both actuators are capable of maintaining the clearance within acceptable bounds and demonstrate robustness to parameter uncertainty. The present model-based control strategy was employed to demonstrate the tradeoff between performance, control effort, and robustness and to implement optimal state estimation in a noisy engine environment with intent to eliminate ad hoc methods for designing reliable control systems.
--- paper_title: An investigation of life extending control techniques for gas turbine engines paper_content: The consumption of engine life characterized by low EGT margin, expended life-limited parts, and slow engine accelerations is the principal cause of aircraft engine removal. Life extending control results from a conscious effort on the part of control system designers to extend the life of an engine by modifying the control logic or control hardware to influence one or more of these life-consuming factors. General Electric Aircraft Engines and NASA Glenn Research Center are currently engaged in a collaborative research programme to investigate control technologies applicable to extending on-wing life of aircraft engines. A trade study of potential schemes that may have a positive impact on engine life has been performed, and the results of this study are used to narrow the focus of further research under this programme.
--- paper_title: Intelligent Life-Extending Controls for Aircraft Engines paper_content: Aircraft engine controllers are designed and operated to provide desired performance and stability margins. The purpose of life-extending control (LEC) is to study the relationship between control action and engine component life usage, and to design an intelligent control algorithm to provide proper trade-offs between performance and engine life usage. The benefit of this approach is that it is expected to maintain safety while minimizing the overall operating costs.
With the advances of computer technology, engine operation models, and damage physics, it is necessary to reevaluate the control strategy for overall operating cost consideration. This paper uses the thermo-mechanical fatigue (TMF) of a critical component to demonstrate how an intelligent engine control algorithm can drastically reduce the engine life usage with minimum sacrifice in performance. A Monte Carlo simulation is also performed to evaluate the likely engine damage accumulation under various operating conditions. The simulation results show that an optimized acceleration schedule can provide a significant life saving in selected engine components.
--- paper_title: Model-Based Life Extending Control for Aircraft Engines paper_content: The objective of this work is to extend the on-wing life of aircraft engines using advanced control techniques. Previous studies demonstrated that, out of various schemes that have a positive impact on engine life, improved active clearance control (ACC) is the one with the best potential. Modern commercial turbofan engines are already implementing various forms of ACC schemes to maintain turbine tip clearances within appropriate ranges for safety and efficiency reasons. However, one limitation of these schemes is that they are designed for a "nominal" engine, and as the engine deteriorates over time the ACC performance degrades. The present paper proposes to improve the high-pressure turbine ACC in a high bypass commercial engine to account for deterioration. A key component of this new ACC scheme is an estimator of the clearance variation due to deterioration. This estimator processes the difference between some of the actual engine outputs and outputs from a model of the nominal engine running in real-time. The new ACC scheme was implemented on a high-fidelity model of the commercial engine. Extensive simulations show that exhaust gas temperatures are clearly lowered by the new ACC, which translates into more cycles between engine overhauls.
--- paper_title: Longitudinal emergency control system using thrust modulation demonstrated on an MD-11 airplane paper_content: This report describes how an MD-11 airplane landed using only thrust modulation, with the control surfaces locked. The propulsion-controlled aircraft system would be used if the aircraft suffered a major primary flight control system failure and lost most or all the hydraulics. The longitudinal and lateral–directional controllers were designed and flight tested, but only the longitudinal control of flightpath angle is addressed in this paper. A flight-test program was conducted to evaluate the aircraft's high-altitude flying characteristics and to demonstrate its capacity to perform safe landings. In addition, over 50 low approaches and three landings without the movement of any aerodynamic control surfaces were performed. The longitudinal control modes include a wing engines only mode for flightpath control and a three-engine operation mode with speed control and dynamic control of the flightpath angle using the tail engine. These modes were flown in either a pilot-commanded mode or an instrument landing system coupled mode. Also included are the results of an analytical study of an autothrottle longitudinal controller designed to improve the phugoid damping. This mode requires the pilot to use differential throttles for lateral control. Introduction: Aircraft flight control systems are designed with extensive redundancy to ensure a low probability of failure. During recent years, however, several aircraft have experienced major flight control system failures, leaving engine thrust as the only control effector. In some of these emergency situations, the engines were used to maintain control of the airplane flightpath angle, γ. In the majority of the cases surveyed, crashes resulted, and over 1200 people have died. The challenge was to create a sufficient degree of control through thrust modulation to control and safely land an airplane with severely damaged or inoperative flight control surfaces. Meeting this challenge is the objective of the Propulsion-Controlled Aircraft (PCA) Emergency Backup System. The PCA emergency backup flight control system requires that the airplane have at least two engines, preferably two wing engines. In addition, the normal control surfaces cannot be locked in a hardover position which could exceed the moments resulting from the thrust of the engines. The National Aeronautics and Space Administration, Dryden Flight Research Center, Edwards, California, has performed nonlinear and linear analytical studies and conducted several flight-test programs investigating the PCA concept. Results of these programs show that gross control can be obtained by manually moving the throttles. However, making a safe runway landing is exceedingly difficult because of low phugoid and Dutch roll damping coupled with the high pilot work load near the ground. To improve the performance and reduce the pilot work load, the PCA program was developed. The goal was to make flying an airplane with the PCA system a viable task with minimal or no previous pilot training with this system. This report describes the longitudinal PCA control systems and flight test results of four modes: • Mode A—using the wing engines only for control of flightpath angle, γ. • Mode B—using the tail engine for speed control in conjunction with mode A.
• Mode C—using all the wing and tail engines for dynamic control of γ and speed control. • Mode D—using an existing autothrottle system for γ control. The autothrottle system was developed to provide a simpler implementation that did not require changes to the engine controllers. This system was not flight tested, but simulation results are presented. Within control modes A, B, and C, the pilot has the option of selecting the instrument landing system (ILS) coupled with PCA for approach and landing. This option virtually eliminates the pilot work load. Two ILS landings using the wing engines (mode A) were performed, and one is presented in this report. The lateral–directional controller is described in reference 7. Test Vehicle Description: The MD-11 airplane is a large, long-range, three-engine, wide-body transport. This airplane is 202 ft long, has a wing span of 170 ft, and a maximum takeoff gross weight of 618,000 lb. Flight Control Systems: The MD-11 airplane has a mechanical flight control system with irreversible hydraulically powered actuators. The hydraulic power provided by three independent systems is intended for fail-safe capability. Essential control functions may be maintained by any one of these three systems. Pitch control is provided by dual elevators on each horizontal stabilizer, and pitch trim is provided by a moveable horizontal stabilizer. Inboard and outboard ailerons supplemented by wing spoilers provide roll control. A dual rudder mounted on a single vertical stabilizer provides yaw control. The lateral dynamics is controlled by the yaw damper. The longitudinal stability augmentation system controls the pitch dynamics. The aerodynamic surfaces are controlled by hydraulic actuators. The flight control computers (FCC) were built by Honeywell, Phoenix, Arizona, and operate at 20 samples/sec. The MD-11 airplane is equipped with a flight management system which integrates autopilot, navigation, and autoland functions. The automatic pilot control includes a thumbwheel for commanding flightpath angle.
--- paper_title: Fuzzy Fuel Flow Selection Logic for a Real Time Embedded Full Authority Digital Engine Control paper_content: The control logic of the modern full authority digital engine control is comprised of many control loops, each of which has a specific purpose. Typical control loops include (but are not limited to) a high or low rotor speed governor, an acceleration and deceleration loop, and various limiting loops for temperature, speed, fuel flow, and rate of change of fuel flow. The logic that determines which of these loops is in control at any particular time has a history of being very simplistic. This selection logic is usually nothing more than a cascade of minimum and maximum selection logic gates. Since this logic is so simplistic, the control engineer often has to fine-tune the compensator design for each loop, in order to achieve proper performance.
--- paper_title: Survivable Engine Control Algorithm Development (SECAD) paper_content: This presentation describes the work conducted as part of the Survivable Engine Control Algorithm Development (SECAD) project. The SECAD project was sponsored by the Joint Technical Coordinating Group on Aircraft Survivability (JTCG/AS) Vulnerability Reduction Subgroup and conducted by the Naval Air Warfare Center Weapons Division and General Electric Aircraft Engines, Lynn MA.
The overall objective of the SECAD project is to develop turbine engine control algorithms that have the potential to reduce aircraft engine vulnerability to combat damage, including foreign object damage (FOD). The initial development effort was laid out in three phases. Phase 1, conducted in early FY99, defined candidate engine damage modes to be addressed and approaches to mitigate the damage effects. In Phase 2, damage models, detection algorithms, and mitigation strategies were developed and evaluated using F414 computer engine models. Phase 3 integrated the damage detection algorithms with the F414 control system, and an engine test was conducted to demonstrate the capabilities of the concepts developed. During these tests, SECAD successfully detected fan and compressor damage, combustor airflow loss, and variable exhaust nozzle loss of control. This presentation documents the successful development and test of the damage detection and mitigation algorithms, and discusses the potential for merging these techniques with current prognostics and health monitoring technologies.
--- paper_title: Adaptive Control paper_content: From the Publisher: Written by two of the pioneers in the field, this book contains a wealth of practical information unavailable anywhere else. The authors give a comprehensive presentation of the field of adaptive control, carefully blending theory and implementation to provide the reader with insight and understanding. Benefiting from the feedback of users who are familiar with the first edition, the material has been reorganized and rewritten, giving a more balanced and teachable presentation of fundamentals and applications.
--- paper_title: Multi-Variable Control of the GE T700 Engine using the LQG/LTR Design Methodology paper_content: In this paper we examine the design of scalar and multi-variable feedback control systems for the GE T700 turboshaft engine coupled to a helicopter rotor system. A series of linearized models are presented and analyzed. Robustness and performance specifications are posed in the frequency domain. The LQG/LTR methodology is used to obtain a sequence of three feedback designs. Even in the single-input single-output case, comparison of the current control system with that derived from the LQG/LTR approach shows significant performance improvement. The multi-variable designs, evaluated using linear and nonlinear simulations, show even more potential for performance improvement.
--- paper_title: Performance benefits of adaptive in-flight propulsion system optimization paper_content: The communication throughput and data-processing capacities of integrated flight/propulsion control systems allow engine operating schedules to be adjusted in-flight, on the basis of adaptive optimization algorithms which identify engine component performance variations due to manufacturing, wear, and damage. A quantification is presently made of the performance benefits accruing to adaptive in-flight optimization, via comparisons of fuel consumption and turbine temperature data for variable geometry and component match optimized cases with conventional cases. A low-bypass mixed-flow turbofan and a high-bypass nonmixed turbofan are thus treated.
--- paper_title: MODEL-BASED INTELLIGENT DIGITAL ENGINE CONTROL (MoBIDEC) paper_content: A model-based control system for an advanced gas turbine engine is described, and some preliminary performance benefits are derived based on simulation testing.
The approach to plant model development and a method for adapting a nominal engine model to reflect the performance of a deteriorated engine is given. A general process for model-based control law design and validation, with emphasis on linear and nonlinear analysis, is then presented. Performance and Operability modes that implement closed loop control of model-derived parameters are demonstrated via simulation testing. Future plans include testing of the model-based control described in this paper on an IHPTET Phase II advanced technology engine. NOMENCLATURE --- paper_title: eSTORM: Enhanced self tuning on-board real-time engine model paper_content: Abstract : A key to producing reliable engine diagnostics and prognostics resides in the fusion of different processing techniques. Fusion of techniques has been shown to improve diagnostic performance while simultaneously reducing false alarms. Presented here is an approach that fuses a physical model called STORM (Self Tuning Onboard, Real-time engine Model) developed by Pratt & Whitney, with an empirical neural net model to provide a unique hybrid model called enhanced STORM (eSTORM) for engine diagnostics. STORM is a piecewise linear approximation of the engine cycle deck. Though STORM provides significant improvement over existing real-time engine model methods, there are several effects that impact engine performance that STORM does not capture. Integrating an empirical model with STORM accommodates the modeling errors. This paper describes the development of eSTORM for a Pratt & Whitney high bypass turbofan engine. Results of using STORM and eSTORM on simulated engine data are presented and compared. eSTORM is shown to work extremely well in reducing STORM modeling errors and biases for the conditions considered. --- paper_title: Performance-analysis-based gas turbine diagnostics: A review paper_content: Abstract Gas turbine diagnostics has a history almost as long as gas turbine development itself. Early engine fault diagnosis was carried out based on manufacturer information supplied in a technical manual combined with maintenance experience. In the late 1960s, when L. A. Urban introduced gas path analysis, gas turbine diagnostics made a big breakthrough. Since then different methods have been developed and used in both aerospace and industrial applications. To date, a substantial number of papers have been published in this area. This paper intends to give a comprehensive review of performance-analysis-based methods available thus far for gas turbine fault diagnosis in the open literature. --- paper_title: Sensor biases effect on the estimation algorithm for performance-seeking controllers paper_content: The performance-seeking-control algorithm (PSC) is designed to continuously optimize the performance of propulsion systems. The PSC uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters (EDPs) characterizing engine deviations with respect to nominal conditions. In practice, the measurement biases (or model uncertainties) may prevent the estimated EDPs from reflecting the engine's actual off-nominal condition. This factor has a direct impact on the PSC scheme exacerbated by the open-loop character of the algorithm. An observability analysis shows that the biases cannot be estimated together with the EDPs. 
Moreover, biases and EDPs turn out to have equivalent effects on the measurements, leaving it undecided whether the estimated EDPs represent the actual engine deviation or whether they simply reflect the measurement biases. In this article, the effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the PSC algorithm to an F100 engine.
--- paper_title: Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics paper_content: In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated. NOMENCLATURE: A16 Variable bypass duct area; A8 Nozzle area; BST Booster; CLM Component Level Model; FAN Fan; FDI Fault detection and isolation; FOD Foreign object damage; HPC High-pressure compressor; HPT High-pressure turbine; LPT Low-pressure turbine; P27 HPC inlet pressure; PS15 Bypass duct static pressure; PS3 Combustor inlet static pressure; PS56 LPT exit static pressure; T27D Booster inlet temperature; T56 LPT exit temperature; TMPC Burner exit heat soak; WF36 Fuel flow; XN2 Low-pressure spool speed, measured; XN25 High-pressure spool speed, measured; XNH High-pressure spool speed, state variable; XNL Low-pressure spool speed, state variable.
--- paper_title: Turbofan engine demonstration of sensor failure detection paper_content: In the paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full, non-afterburning, power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed. The objective of the advanced detection, isolation, and accommodation (ADIA) program was to improve demonstrated reliability of digital electronic control systems for turbine engines by detecting, isolating, and accommodating sensor failures using analytical redundancy methods. This paper discusses the results of an engine demonstration of an analytical-redundancy-based algorithm developed as part of the ADIA program. Over the past 45 years, hydromechanical implementations of turbine engine control systems have matured into highly reliable units. However, there is a trend toward increased engine complexity. Engine control has become increasingly complex and has evolved from a hydromechanical to a full authority digital electronic (FADEC) implementation. These FADEC-type controls have to demonstrate the same or improved levels of reliability as their hydromechanical predecessors. Various redundancy management techniques have been applied to both the total control system and to individual components. Studies have shown that the least reliable of the control system components are the engine sensors. Sensor redundancy will be required to achieve adequate control system reliability. One important type of sensor redundancy is analytical redundancy (AR), which uses a model to generate redundant information that can be compared to measured information to detect failures. AR-based systems can have cost and weight savings over hardware redundancy. Considerable progress has been made in the application of analytical redundancy to improve turbine engine control system reliability. However, little has been done to demonstrate AR-based techniques on real engine systems. One exception was a flight-test program where selected out-of-range (hard) sensor faults were induced and the resulting actions of the control evaluated. The test program included a ground-thrust stand evaluation and a flight test. The sensors that failed during the flight test included the compressor inlet variable geometry sensor, inlet static pressure, burner pressure, and fan turbine inlet temperature. Most failures were detected and accommodated. However, a recreation of a broken line (hard) burner pressure failure went undetected. Pilot response to aircraft performance after accommodation was favorable.
--- paper_title: Using neural networks for sensor validation paper_content: This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
--- paper_title: Evaluation of an Enhanced Bank of Kalman Filters for In-Flight Aircraft Engine Sensor Fault Diagnostics paper_content: In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored.
One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.Copyright © 2004 by ASME --- paper_title: Neural network-based sensor validation for turboshaft engines paper_content: Abstract : Sensor failure detection, isolation, and accommodation using a neural network approach is described. An autoassociative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures. --- paper_title: Sensor and Data Fusion Concepts and Applications paper_content: Multiple Sensor System Applications, Benefits, and Atmospheric Attenuation Data Fusion Algorithms and Architectures Bayesian Inference Dempster-Shafer Algorithm Artificial Neural Networks Voting Fusion Fuzzy Logic and Neural Networks Passive Data Association Techniques for Unambiguous Location of Targets. Appendices: Planck Radiation Law and Radiative Transfer Voting Fusion With Nested Confidence Levels. --- paper_title: Development of an information fusion system for engine diagnostics and health management paper_content: Aircraft gas-turbine engine data is available from a variety of sources including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data Fusion is the integration of data or information from multiple sources, to achieve improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This paper describes a basic PHM Data Fusion architecture being developed in alignment with the NASA C-17 Propulsion Health Management (PHM) Flight Test program. The challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. 
To address this challenge, NASA Glenn Research Center (GRC), NASA Dryden Flight Research Center (DFRC) and Pratt & Whitney (P&W) have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion as applied to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This paper will provide a broad overview of this work, discuss some of the methodologies employed and give some illustrative examples.
--- paper_title: Fuzzy belief networks paper_content: The most common method for knowledge representation in an expert system is the production rule [Waterman 1986]. Unfortunately, the modularity inherent in a rule-based system is limiting, especially in an uncertain environment [Morawski 1989]. A fuzzy belief network (FBN) provides a more holistic, graphical approach and lends itself well to implementation in expert systems on personal and small computers.
--- paper_title: Bayesian belief networks for fault identification in aircraft gas turbine engines paper_content: This paper describes the methodology for usage of Bayesian belief networks (BBNs) in fault detection for aircraft gas turbine engines. First, the basic theory of BBNs is discussed, followed by a discussion on the application of this theory to a specific engine. In particular, the selection of faults and the means by which operating regions for the BBN system are chosen are analyzed. This methodology is then illustrated using the GE CFM56-7 turbofan engine as an example.
--- paper_title: PCA-based Fuzzy Classification of Turbine Blade Fatigue Modes paper_content: An efficient classification method is needed for the health diagnosis of turbine blades. Traditional classification methods assign each data point or feature vector to one and only one of the categories, which poses significant problems in dealing with data sets containing overlapping structures. In this paper, a PCA-based fuzzy classification method is used to address this problem. After preprocessing of the data, Principal Component Analysis (PCA) is first used to reduce the excessive dimensionality in the feature space. Generalized multi-dimension Gaussian membership functions are then formulated for each category among the training patterns. Finally, every new pattern can be assigned an appropriate membership value for each category. The proposed algorithm has been tested using a set of historical flight data from a group of engines. Experimental results have shown that this approach is effective in classifying turbine blade fatigue modes and agrees with the manual inspection findings. ---
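Several of the abstracts above isolate a faulty sensor or actuator by running a bank of Kalman filters, one per fault hypothesis, and selecting the hypothesis whose filter keeps its residuals small. The sketch below illustrates only that selection logic on a generic linear state-space model; the matrices, noise covariances, and the simple mean-squared-residual score are placeholder assumptions, not the engine models or test statistics used in the cited work.

```python
import numpy as np

def kalman_residuals(A, C, Q, R, measurements, x0):
    """Run a linear Kalman filter and return the innovation (residual) sequence."""
    n = A.shape[0]
    x, P = x0.copy(), np.eye(n)
    residuals = []
    for z in measurements:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation and update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        r = z - C @ x
        residuals.append(r)
        x = x + K @ r
        P = (np.eye(n) - K @ C) @ P
    return np.array(residuals)

def isolate_fault(hypotheses, measurements, x0):
    """Pick the fault hypothesis whose filter produces the smallest residuals.

    hypotheses: dict name -> (A, C, Q, R), one model per hypothesised fault
    (including a 'no fault' model). Returns the winning name and all scores.
    """
    scores = {}
    for name, (A, C, Q, R) in hypotheses.items():
        res = kalman_residuals(A, C, Q, R, measurements, x0)
        scores[name] = float(np.mean(np.sum(res ** 2, axis=1)))  # crude WSSR-like score
    return min(scores, key=scores.get), scores
```

In practice the cited approaches apply thresholds and persistence logic to the residual statistics rather than simply taking the minimum, but the filter-bank structure is the same.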
Title: A Survey of Intelligent Control and Health Management Technologies for Aircraft Propulsion Systems
Section 1: Introduction Description 1: Write an introductory section that outlines the paper's intent, general background, and organization.
Section 2: Engine Overview Description 2: Provide a detailed explanation of the turbofan engine components, their function, and their operational relevance.
Section 3: Control Description 3: Discuss traditional and modern control techniques for aircraft engines, including specific applications and approaches beyond current practices.
Section 4: Diagnostics Description 4: Explore various systems and approaches for on-wing health monitoring, including sensor validation and fault detection.
Section 5: Intelligent Control Description 5: Explain modern intelligent control techniques and their potential to enhance aircraft propulsion system performance.
Section 6: Life Extending Control Description 6: Describe life-extending control techniques that minimize component damage while maintaining acceptable engine response.
Section 7: Adaptive Control Description 7: Discuss adaptive control techniques that adjust controller gains and schedules based on operating conditions.
Section 8: Multivariable Control Description 8: Review multivariable control schemes emphasizing the optimization of system performance through state space representation.
Section 9: Performance Seeking Control Description 9: Explain the concept and implementation of performance seeking control, aiming at optimizing engine operation based on current conditions.
Section 10: Model Predictive Control Description 10: Describe model predictive control (MPC), an open loop control scheme that uses real-time models for optimization.
Section 11: Gas Path Performance Diagnostics Description 11: Outline the techniques for on-line monitoring of gas path components and the challenges in estimating health parameters.
Section 12: Vibration Diagnostics Description 12: Discuss the role of vibration diagnostics in assessing structural health and future advancements in high frequency measurements.
Section 13: Lubrication System Diagnostics Description 13: Review the techniques used for monitoring lubrication systems and identifying part wear through contaminants.
Section 14: Control System Integrity Assurance Description 14: Explore methods for ensuring the integrity of control systems through various diagnostic sensors and validation techniques.
Section 15: Diagnostic Sensors Description 15: Discuss the types of sensors employed for propulsion diagnostics and the challenges associated with integrating new sensors.
Section 16: Data/Information Fusion Description 16: Explain the application of data fusion techniques to integrate data from multiple sources for enhanced engine health management.
Section 17: Prognostics Description 17: Describe prognostic methods for estimating the remaining useful life of engine components based on operating conditions.
Section 18: Integration Description 18: Discuss the integration of various algorithms and systems into a cohesive engine health management system.
Section 19: Integrated Fault Detection, Isolation, and Accommodation Logic Description 19: Elaborate on FDIA logic embedded within electronic engine controllers to detect and mitigate faults in real-time.
Section 20: MPC for Integrated Control and Diagnostics Description 20: Explain how MPC can integrate control and diagnostics by accounting for faults within its optimization framework.
Section 21: Engine Health Management, Support Engineering, Maintenance and Logistics Integration Description 21: Discuss the potential of automated engine health management to reduce maintenance effort and improve logistical processes.
Section 22: Final Comments Description 22: Conclude the paper with final remarks on the potential contributions of intelligent control and health management technologies to future engines.
Transfer in Reinforcement Learning: a Framework and a Survey
12
--- paper_title: Using Options for Knowledge Transfer in Reinforcement Learning paper_content: One of the original motivations for the use of temporally extended actions, or options, in reinforcement learning was to enable the transfer of learned value functions or policies to new problems. Many experimenters have used options to speed learning on single problems, but options have not been studied in depth as a tool for transfer. In this paper we introduce a formal model of a learning problem as a distribution of Markov Decision Problems (MDPs). Each MDP represents a task the agent will have to solve. Our model can also be viewed as a partially observable Markov decision problem (POMDP), with a special structure that we describe. We study two learning algorithms, one which keeps a single value function that generalizes across tasks, and an incremental POMDP-inspired method maintaining separate value functions for each task. We evaluate the learning algorithms on an extension of the Mountain Car domain, in terms of both learning speed and asymptotic performance. Empirically, we find that temporally extended options can facilitate transfer for both algorithms. In our domain, the single value function algorithm has much better learning speed because it generalizes its experience more broadly across tasks. We also observe that different sets of options can achieve tradeoffs of learning speed versus asymptotic performance.
--- paper_title: Schema induction and analogical transfer paper_content: An analysis of the process of analogical thinking predicts that analogies will be noticed on the basis of semantic retrieval cues and that the induction of a general schema from concrete analogs will facilitate analogical transfer. These predictions were tested in experiments in which subjects first read one or more stories illustrating problems and their solutions and then attempted to solve a disparate but analogous transfer problem. The studies in Part I attempted to foster the abstraction of a problem schema from a single story analog by means of summarization instructions, a verbal statement of the underlying principle, or a diagrammatic representation of it. None of these devices achieved a notable degree of success. In contrast, the experiments in Part II demonstrated that if two prior analogs were given, subjects often derived a problem schema as an incidental product of describing the similarities of the analogs. The quality of the induced schema was highly predictive of subsequent transfer performance. Furthermore, the verbal statements and diagrams that had failed to facilitate transfer from one analog proved highly beneficial when paired with two. The function of examples in learning was discussed in light of the present study.
--- paper_title: Proto-transfer learning in Markov decision processes using spectral methods paper_content: In this paper we introduce proto-transfer learning, a new framework for transfer learning. We explore solutions to transfer learning within reinforcement learning through the use of spectral methods. Proto-value functions (PVFs) are basis functions computed from a spectral analysis of random walks on the state space graph. They naturally lead to the ability to transfer knowledge and representation between related tasks or domains. We investigate task transfer by using the same PVFs in Markov decision processes (MDPs) with different reward functions.
Additionally, our experiments in domain transfer explore applying the Nystrom method for interpolation of PVFs between MDPs of different sizes. 1. Problem Statement: The aim of transfer learning is to reuse behavior by using the knowledge learned about one domain or task to accelerate learning in a related domain or task. In this paper we explore solutions to transfer learning within reinforcement learning (Sutton & Barto, 1998) through spectral methods. The new framework of proto-transfer learning transfers representations from one domain to another. This transfer entails the reuse of eigenvectors learned from one graph on another. We explore how to transfer knowledge learned on the source graph to a similar graph by modifying the eigenvectors of the Laplacian of the source domain to be reused for the target domain. Proto-value functions (PVFs) are a natural abstraction since they condense a domain by automatically learning an embedding of the state space based on its topology (Mahadevan, 2005). PVFs lead to the ability to transfer knowledge about domains and tasks, since they are constructed without taking reward into account. We define task transfer as the problem of transferring knowledge when the state space remains the same and only the reward differs. For task transfer, task-independent basis functions, such as PVFs, can be reused from one task to the next without modification. Domain transfer refers to the more challenging problem of the state space changing. This change in state space can be a change in topology (i.e. obstacles moving to different locations) or a change in scale (i.e. a smaller or larger domain of the same shape). For domain transfer, the basis functions may need to be modified to reflect the changes in the state space. (Foster & Dayan, 2002) study the task transfer problem by applying unsupervised, mixture model, learning methods to a collection of optimal value functions of different tasks in order to decompose and extract the underlying structure. In this paper, we investigate task transfer in discrete domains by reusing PVFs in MDPs with different reward functions. For domain transfer, we apply the Nystrom extension for interpolation of PVFs between MDPs of different sizes (Mahadevan et al., 2006). Previous work has accelerated learning when transferring behaviors between tasks and domains (Taylor et al., 2005), but we transfer representation and reuse knowledge to learn comparably on a new task or domain.
--- paper_title: Transfer Learning for Reinforcement Learning Domains: A Survey paper_content: The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
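The proto-transfer learning abstract above builds its basis functions (proto-value functions) from a spectral analysis of the state-space graph. The sketch below shows that construction in its simplest form, for an assumed small grid world and the combinatorial graph Laplacian L = D - A; it is illustrative only and does not reproduce the authors' experimental setup or the Nystrom interpolation step.

```python
import numpy as np

def grid_adjacency(width, height):
    """4-neighbour adjacency matrix for a width x height grid of states."""
    n = width * height
    A = np.zeros((n, n))
    for r in range(height):
        for c in range(width):
            i = r * width + c
            for dr, dc in ((1, 0), (0, 1)):
                rr, cc = r + dr, c + dc
                if rr < height and cc < width:
                    j = rr * width + cc
                    A[i, j] = A[j, i] = 1.0
    return A

def proto_value_functions(A, k):
    """Return the k smoothest eigenvectors of the combinatorial Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, :k]                  # columns are basis functions over states

basis = proto_value_functions(grid_adjacency(5, 5), k=4)
print(basis.shape)  # (25, 4): four basis functions over 25 grid states
```

Because these basis functions depend only on the graph topology and not on any reward, the same columns can be reused across tasks that share the state space, which is the task-transfer setting the abstract describes.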
--- paper_title: Learning and Transfer: A General Role for Analogical Encoding paper_content: Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problem-solving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.
--- paper_title: Tree-based batch mode reinforcement learning paper_content: Reinforcement learning aims to determine an optimal control policy from interaction with a system or from observations gathered from a system. In batch mode, it can be achieved by approximating the so-called Q-function based on a set of four-tuples (xt, ut, rt, xt+1) where xt denotes the system state at time t, ut the control action taken, rt the instantaneous reward obtained and xt+1 the successor state of the system, and by determining the control policy from this Q-function. The Q-function approximation may be obtained from the limit of a sequence of (batch mode) supervised learning problems. Within this framework we describe the use of several classical tree-based supervised learning methods (CART, Kd-tree, tree bagging) and two newly proposed ensemble algorithms, namely extremely and totally randomized trees. We study their performances on several examples and find that the ensemble methods based on regression trees perform well in extracting relevant information about the optimal control policy from sets of four-tuples. In particular, the totally randomized trees give good results while ensuring the convergence of the sequence, whereas by relaxing the convergence constraint even better accuracy results are provided by the extremely randomized trees.
--- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
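The tree-based batch-mode abstract above approximates the Q-function from four-tuples (x, u, r, x') by solving a sequence of supervised regression problems. The sketch below is a minimal fitted Q iteration loop using scikit-learn's extremely randomized trees as the regressor; the discrete-action interface, hyperparameters, and fixed iteration count are assumptions made for illustration, not the cited paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, gamma=0.95, n_iterations=30):
    """Batch-mode Q-function approximation from four-tuples (x, u, r, x_next).

    transitions: list of (x, u, r, x_next), with x and x_next 1-D numpy arrays
    and u an index into `actions`. Returns a regressor over (state, action) pairs.
    """
    X = np.array([np.append(x, u) for x, u, _, _ in transitions])
    rewards = np.array([r for _, _, r, _ in transitions])
    next_states = np.array([x_next for _, _, _, x_next in transitions])

    q = None
    targets = rewards.copy()  # first iteration: Q_1(x, u) ~ r
    for _ in range(n_iterations):
        q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
        # Bellman backup: r + gamma * max_a' Q(x', a') over the discrete action set
        next_q = np.column_stack([
            q.predict(np.column_stack([next_states,
                                       np.full(len(next_states), a)]))
            for a in range(len(actions))
        ])
        targets = rewards + gamma * next_q.max(axis=1)
    return q
```

A greedy policy is then recovered by evaluating the returned regressor at each candidate action and picking the maximizer, which is the standard way the Q-function is turned into a control policy in this framework.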
--- paper_title: Selective transfer of neural network task knowledge paper_content: Within the context of artificial neural networks (ANN), we explore the question: How can a learning system retain and use previously learned knowledge to facilitate future learning? The research objectives are to develop a theoretical model and test a prototype system which sequentially retains ANN task knowledge and selectively uses that knowledge to bias the learning of a new task in an efficient and effective manner. A theory of selective functional transfer is presented that requires a learning algorithm that employs a measure of task relatedness. ηMTL is introduced as a knowledge-based inductive learning method that learns one or more secondary tasks within a back-propagation ANN as a source of inductive bias for a primary task. ηMTL employs a separate learning rate, η_k, for each secondary task output k. η_k varies as a function of a measure of relatedness, R_k, between the k-th secondary task and the primary task of interest. Three categories of a priori measures of relatedness are developed for controlling inductive bias. The task rehearsal method (TRM) is introduced to address the issue of sequential retention and generation of learned task knowledge. The representations of successfully learned tasks are stored within a domain knowledge repository. Virtual training examples generated from domain knowledge are rehearsed as secondary tasks in parallel with each new task using either standard multiple task learning (MTL) or ηMTL. TRM using ηMTL is tested as a method of selective knowledge transfer and sequential learning on two synthetic domains and one medical diagnostic domain. Experiments show that the TRM provides an excellent method of retaining and generating accurate functional task knowledge. Hypotheses generated are compared statistically to single task learning and MTL hypotheses. We conclude that selective knowledge transfer with ηMTL develops more effective hypotheses but not necessarily with greater efficiency. The a priori measures of relatedness demonstrate significant value on certain domains of tasks but have difficulty scaling to large numbers of tasks. Several issues identified during the research indicate the importance of consolidating a representational form of domain knowledge. --- paper_title: A Survey on Transfer Learning paper_content: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift.
We also explore some potential future issues in transfer learning research. --- paper_title: Transfer in variable-reward hierarchical reinforcement learning paper_content: Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs. --- paper_title: Transfer learning via inter-task mappings for temporal difference learning paper_content: Temporal difference (TD) learning (Sutton and Barto, 1998) has become a popular reinforcement learning technique in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms have often been found slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. Using transfer via inter-task mapping (TVITM), agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain. ::: ::: This article contains and extends material published in two conference papers (Taylor and Stone, 2005; Taylor et al., 2005). --- paper_title: Autonomous transfer for reinforcement learning paper_content: Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. 
An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer. --- paper_title: A Survey on Transfer Learning paper_content: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. --- paper_title: Intrinsically Motivated Reinforcement Learning paper_content: Psychologists call behavior intrinsically motivated when it is engaged in for its own sake rather than as a step toward solving a specific problem of clear practical value. But what we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated reinforcement learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. --- paper_title: Transfer Learning for Reinforcement Learning Domains: A Survey paper_content: The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. 
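The MASTER method summarized at the start of this block learns an inter-task mapping from the agent's own transition experience. The toy sketch below is not the authors' algorithm; it only illustrates the general idea of scoring candidate state-variable mappings by how well a transition model fitted on source-task data predicts target-task transitions, assuming both tasks expose the same small number of state variables and that a linear one-step model is adequate.

import numpy as np
from itertools import permutations

def fit_linear_model(S, S_next):
    # Least-squares one-step transition model: S_next is approximated by S @ M.
    return np.linalg.lstsq(S, S_next, rcond=None)[0]

def score_mapping(M_src, T, T_next, perm):
    # Map target state variables onto source variables, predict with the source model,
    # map the prediction back, and measure one-step prediction error.
    P = np.eye(T.shape[1])[:, list(perm)]        # permutation matrix for the candidate mapping
    pred = (T @ P) @ M_src @ P.T
    return np.mean((pred - T_next) ** 2)

def select_mapping(S, S_next, T, T_next):
    # S, S_next: source-task transitions; T, T_next: target-task transitions (same dimensionality).
    M_src = fit_linear_model(S, S_next)
    perms = list(permutations(range(T.shape[1])))   # feasible only for few state variables
    errors = [score_mapping(M_src, T, T_next, p) for p in perms]
    return perms[int(np.argmin(errors))]            # lowest-error candidate mapping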
--- paper_title: Autonomous shaping: knowledge transfer in reinforcement learning paper_content: We introduce the use of learned shaping rewards in reinforcement learning tasks, where an agent uses prior experience on a sequence of tasks to learn a portable predictor that estimates intermediate rewards, resulting in accelerated learning in later tasks that are related but distinct. Such agents can be trained on a sequence of relatively easy tasks in order to develop a more informative measure of reward that can be transferred to improve performance on more difficult tasks without requiring a hand-coded shaping function. We use a rod positioning task to show that this significantly improves performance even after a very brief training period. --- paper_title: Knowledge Transfer in Reinforcement Learning Agent paper_content: This manuscript is focused on transfer learning methods for reinforcement learning agents. It gives a preview of contemporary papers in the area of transfer learning and knowledge transfer, and provides background and an overview of knowledge transfer methods with an emphasis on reinforcement learning. --- paper_title: Selective transfer of neural network task knowledge paper_content: Within the context of artificial neural networks (ANN), we explore the question: How can a learning system retain and use previously learned knowledge to facilitate future learning? The research objectives are to develop a theoretical model and test a prototype system which sequentially retains ANN task knowledge and selectively uses that knowledge to bias the learning of a new task in an efficient and effective manner. A theory of selective functional transfer is presented that requires a learning algorithm that employs a measure of task relatedness. ηMTL is introduced as a knowledge-based inductive learning method that learns one or more secondary tasks within a back-propagation ANN as a source of inductive bias for a primary task. ηMTL employs a separate learning rate, η_k, for each secondary task output k. η_k varies as a function of a measure of relatedness, R_k, between the k-th secondary task and the primary task of interest. Three categories of a priori measures of relatedness are developed for controlling inductive bias. The task rehearsal method (TRM) is introduced to address the issue of sequential retention and generation of learned task knowledge. The representations of successfully learned tasks are stored within a domain knowledge repository. Virtual training examples generated from domain knowledge are rehearsed as secondary tasks in parallel with each new task using either standard multiple task learning (MTL) or ηMTL. TRM using ηMTL is tested as a method of selective knowledge transfer and sequential learning on two synthetic domains and one medical diagnostic domain. Experiments show that the TRM provides an excellent method of retaining and generating accurate functional task knowledge. Hypotheses generated are compared statistically to single task learning and MTL hypotheses. We conclude that selective knowledge transfer with ηMTL develops more effective hypotheses but not necessarily with greater efficiency. The a priori measures of relatedness demonstrate significant value on certain domains of tasks but have difficulty scaling to large numbers of tasks. Several issues identified during the research indicate the importance of consolidating a representational form of domain knowledge.
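The ηMTL mechanism in the preceding entry assigns each secondary task its own learning rate scaled by a relatedness measure. The short sketch below shows that idea for a linear multi-output model trained by gradient descent; the relatedness values R_k are treated as given inputs, and how they are measured is outside the scope of this sketch.

import numpy as np

def eta_mtl_step(W, X, Y, relatedness, eta=0.01):
    # W: (n_features, n_tasks) weight matrix; column 0 is the primary task.
    # relatedness: one R_k in [0, 1] per task output, with R_0 = 1 for the primary task.
    preds = X @ W
    grad = X.T @ (preds - Y) / len(X)               # squared-error gradient, one column per task
    per_task_eta = eta * np.asarray(relatedness)    # eta_k = eta * R_k
    return W - grad * per_task_eta                  # each task column updates with its own rate

# usage (hypothetical names): W = eta_mtl_step(W, X_batch, Y_batch, relatedness=[1.0, 0.7, 0.2])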
--- paper_title: Proto-value functions: A laplacian framework for learning representation and control in markov decision processes paper_content: This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) A general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators (ii) A specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP (iii) A three-phased procedure called representation policy iteration comprising of a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions. (iv) A specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method (v) Several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nystrom extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs (vi) Finally, a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaboration of the proposed framework are briefly summarized at the end. --- paper_title: Regularized Policy Iteration paper_content: In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme we propose the use of non-parametric methods with regularization, providing a convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2-regularization to two widely-used policy evaluation methods: Bellman residual minimization (BRM) and least-squares temporal difference learning (LSTD). We derive efficient implementation for our algorithms when the approximate value-functions belong to a reproducing kernel Hilbert space. We also provide finite-sample performance bounds for our algorithms and show that they are able to achieve optimal rates of convergence under the studied conditions. --- paper_title: Finite-Sample Analysis of LSTD paper_content: In this paper we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is β-mixing. 
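In the linear case, the regularized policy-evaluation methods described in the last two entries (L2-regularized LSTD and its finite-sample analysis) reduce to solving a ridge-regularized linear system. The sketch below shows that computation for feature matrices built from a sampled trajectory; the feature construction and the regularization constant are placeholder choices of this sketch.

import numpy as np

def lstd_l2(Phi, Phi_next, rewards, gamma=0.99, lam=1e-3):
    # Phi:      (n, d) features of visited states s_t
    # Phi_next: (n, d) features of successor states s_{t+1}
    # Solves (A + lam * I) theta = b with A = Phi^T (Phi - gamma * Phi_next) and b = Phi^T r.
    d = Phi.shape[1]
    A = Phi.T @ (Phi - gamma * Phi_next)
    b = Phi.T @ rewards
    theta = np.linalg.solve(A + lam * np.eye(d), b)
    return theta    # value estimate: V(s) is approximately phi(s) @ theta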
--- paper_title: Transfer Learning for Reinforcement Learning Domains: A Survey paper_content: The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. --- paper_title: Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path paper_content: In this paper we consider the problem of finding a near-optimal policy in a continuous space, discounted Markovian Decision Problem (MDP) by employing value-function-based methods when only a single trajectory of a fixed policy is available as the input. We study a policy-iteration algorithm where the iterates are obtained via empirical risk minimization with a risk function that penalizes high magnitudes of the Bellman-residual. Our main result is a finite-sample, high-probability bound on the performance of the computed policy that depends on the mixing rate of the trajectory, the capacity of the function set as measured by a novel capacity concept (the VC-crossing dimension), the approximation power of the function set and the controllability properties of the MDP. Moreover, we prove that when a linear parameterization is used the new algorithm is equivalent to Least-Squares Policy Iteration. To the best of our knowledge this is the first theoretical result for off-policy control learning over continuous state-spaces using a single trajectory. --- paper_title: Finite-Sample Analysis of Bellman Residual Minimization paper_content: We consider the Bellman residual minimization approach for solving discounted Markov decision problems, where we assume that a generative model of the dynamics and rewards is available. At each policy iteration step, an approximation of the value function for the current policy is obtained by minimizing an empirical Bellman residual defined on a set of n states drawn i.i.d. from a distribution μ, the immediate rewards, and the next states sampled from the model. Our main result is a generalization bound for the Bellman residual in linear approximation spaces. In particular, we prove that the empirical Bellman residual approaches the true (quadratic) Bellman residual in μ-norm with a rate of order O(1/√n). This result implies that minimizing the empirical residual is indeed a sound approach for the minimization of the true Bellman residual which guarantees a good approximation of the value function for each policy. Finally, we derive performance bounds for the resulting approximate policy iteration algorithm in terms of the number of samples n and a measure of how well the function space is able to approximate the sequence of value functions. --- paper_title: Knowledge Transfer in Reinforcement Learning Agent paper_content: This manuscript is focused on transfer learning methods for reinforcement learning agents.
It gives a preview of contemporary papers in the area of transfer learning and knowledge transfer, and provides background and an overview of knowledge transfer methods with an emphasis on reinforcement learning. --- paper_title: Transferring instances for model-based reinforcement learning paper_content: Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress reducing sample complexity, but they have primarily been applied to model-free learning methods, not more data-efficient model-based learning methods. This paper introduces timbrel, a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that timbrel can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of timbrel's effectiveness. --- paper_title: Transfer in variable-reward hierarchical reinforcement learning paper_content: Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems.
--- paper_title: Incremental Skill Acquisition for Self-Motivated Learning Animats paper_content: A central role in the development process of children is played by self-exploratory activities Through a playful interaction with the surrounding environment, they test their own capabilities, explore novel situations, and understand how their actions affect the world During this kind of exploration, interesting situations may be discovered By learning to reach these situations, a child incrementally develops more and more complex skills Inspired by studies from psychology, neuroscience, and machine learning, we designed SMILe (Self-Motivated Incremental Learning), a learning framework that allows artificial agents to autonomously identify and learn a set of abilities useful to face several different tasks, through an iterated three phase process: by means of a random exploration of the environment (babbling phase), the agent identifies interesting situations and generates an intrinsic motivation (motivating phase) aimed at learning how to get into these situations (skill acquisition phase) This process incrementally increases the skills of the agent, so that new interesting configurations can be experienced We present results on two gridworld environments to show how SMILe makes it possible to learn skills that enable the agent to perform well and robustly in many different tasks. --- paper_title: Near-optimal regret bounds for reinforcement learning paper_content: For undiscounted reinforcement learning in Markov decision processes (MDPs) we consider the total regret of a learning algorithm with respect to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: An MDP has diameter D if for any pair of states s, s' there is a policy which moves from s to s' in at most D steps (on average). We present a reinforcement learning algorithm with total regret O(DS √AT) after T steps for any unknown MDP with S states, A actions per state, and diameter D. This bound holds with high probability. We also present a corresponding lower bound of Ω(√DSAT) on the total regret of any learning algorithm. --- paper_title: Using Homomorphisms to Transfer Options across Continuous Reinforcement Learning Domains paper_content: We examine the problem of Transfer in Reinforcement Learning and present a method to utilize knowledge acquired in one Markov Decision Process (MDP) to bootstrap learning in a more complex but related MDP. We build on work in model minimization in Reinforcement Learning to define relationships between state-action pairs of the two MDPs. Our main contribution in this work is to provide a way to compactly represent such mappings using relationships between state variables in the two domains. We use these functions to transfer a learned policy in the first domain into an option in the new domain, and apply intra-option learning methods to bootstrap learning in the new domain. We first evaluate our approach in the well known Blocksworld domain. We then demonstrate that our approach to transfer is viable in a complex domain with a continuous state space by evaluating it in the Robosoccer Keepaway domain. --- paper_title: Building Portable Options: Skill Transfer in Reinforcement Learning paper_content: The options framework provides methods for reinforcement learning agents to build new high-level skills. 
However, since options are usually learned in the same state space as the problem the agent is solving, they cannot be used in other tasks that are similar but have different state spaces. We introduce the notion of learning options in agentspace, the space generated by a feature set that is present and retains the same semantics across successive problem instances, rather than in problemspace. Agent-space options can be reused in later tasks that share the same agent-space but have different problem-spaces. We present experimental results demonstrating the use of agent-space options in building transferrable skills, and show that they perform best when used in conjunction with problem-space options. --- paper_title: Skill acquisition via transfer learning and advice taking paper_content: We describe a reinforcement learning system that transfers skills from a previously learned source task to a related target task. The system uses inductive logic programming to analyze experience in the source task, and transfers rules for when to take actions. The target task learner accepts these rules through an advice-taking algorithm, which allows learners to benefit from outside guidance that may be imperfect. Our system accepts a human-provided mapping, which specifies the similarities between the source and target tasks and may also include advice about the differences between them. Using three tasks in the RoboCup simulated soccer domain, we demonstrate that this system can speed up reinforcement learning substantially. --- paper_title: Identifying useful subgoals in reinforcement learning by local graph partitioning paper_content: We present a new subgoal-based method for automatically creating useful skills in reinforcement learning. Our method identifies subgoals by partitioning local state transition graphs---those that are constructed using only the most recent experiences of the agent. The local scope of our subgoal discovery method allows it to successfully identify the type of subgoals we seek---states that lie between two densely-connected regions of the state space while producing an algorithm with low computational cost. --- paper_title: Structure in the Space of Value Functions paper_content: Solving in an efficient manner many different optimal control tasks within the same underlying environment requires decomposing the environment into its computationally elemental fragments. We suggest how to find fragmentations using unsupervised, mixture model, learning methods on data derived from optimal value functions for multiple tasks, and show that these fragmentations are in accord with observable structure in the environments. Further, we present evidence that such fragments can be of use in a practical reinforcement learning context, by facilitating online, actor-critic learning of multiple goals MDPs. --- paper_title: Improving action selection in MDP’s via knowledge transfer paper_content: Temporal-difference reinforcement learning (RL) has been successfully applied in several domains with large state sets. Large action sets, however, have received considerably less attention. This paper demonstrates the use of knowledge transfer between related tasks to accelerate learning with large action sets. We introduce action transfer, a technique that extracts the actions from the (near-)optimal solution to the first task and uses them in place of the full action set when learning any subsequent tasks. 
When optimal actions make up a small fraction of the domain's action set, action transfer can substantially reduce the number of actions and thus the complexity of the problem. However, action transfer between dissimilar tasks can be detrimental. To address this difficulty, we contribute randomized task perturbation (RTP), an enhancement to action transfer that makes it robust to unrepresentative source tasks. We motivate RTP action transfer with a detailed theoretical analysis featuring a formalism of related tasks and a bound on the suboptimality of action transfer. The empirical results in this paper show the potential of RTP action transfer to substantially expand the applicability of RL to problems with large action sets. --- paper_title: Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning paper_content: Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options: closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem. --- paper_title: Transfer of Experience Between Reinforcement Learning Environments with Progressive Difficulty paper_content: This paper describes an extension to reinforcement learning (RL), in which a standard RL algorithm is augmented with a mechanism for transferring experience gained in one problem to new but related problems.
In this approach, named Progressive RL, an agent acquires experience of operating in a simple environment through experimentation, and then engages in a period of introspection, during which it rationalises the experience gained and formulates symbolic knowledge describing how to behave in that simple environment. When subsequently experimenting in a more complex but related environment, it is guided by this knowledge until it gains direct experience. A test domain with 15 maze environments, arranged in order of difficulty, is described. A range of experiments in this domain are presented, that demonstrate the benefit of Progressive RL relative to a basic RL approach in which each puzzle is solved from scratch. The experiments also analyse the knowledge formed during introspection, illustrate how domain knowledge may be incorporated, and show that Progressive Reinforcement Learning may be used to solve complex puzzles more quickly. --- paper_title: Accelerating Reinforcement Learning by Composing Solutions of Automatically Identified Subtasks paper_content: This paper discusses a system that accelerates reinforcement learning by using transfer from related tasks. Without such transfer, even if two tasks are very similar at some abstract level, an extensive re-learning effort is required. The system achieves much of its power by transferring parts of previously learned solutions rather than a single complete solution. The system exploits strong features in the multi-dimensional function produced by reinforcement learning in solving a particular task. These features are stable and easy to recognize early in the learning process. They generate a partitioning of the state space and thus the function. The partition is represented as a graph. This is used to index and compose functions stored in a case base to form a close approximation to the solution of the new task. Experiments demonstrate that function composition often produces more than an order of magnitude increase in learning rate compared to a basic reinforcement learning algorithm. --- paper_title: Convex multi-task feature learning paper_content: We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks. 
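One way to read the convex multi-task feature learning entry above is as an alternating scheme that interleaves per-task ridge-style regressions under a shared metric D with an update of D from the stacked task weights. The sketch below follows that reading; the epsilon smoothing, the data shapes, and the fixed iteration count are assumptions of this sketch rather than details taken from the paper.

import numpy as np
from scipy.linalg import sqrtm

def multitask_feature_learning(Xs, ys, gamma=1.0, iters=20, eps=1e-6):
    # Xs: list of (n_t, d) task design matrices; ys: list of (n_t,) target vectors.
    d = Xs[0].shape[1]
    D = np.eye(d) / d                                   # shared feature metric with trace 1
    for _ in range(iters):
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        # Supervised step: per-task regression with regularizer gamma * w^T D^{-1} w.
        W = np.column_stack([
            np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)
            for X, y in zip(Xs, ys)])
        # Unsupervised step: D proportional to (W W^T)^{1/2}, renormalized to unit trace.
        C = np.real(sqrtm(W @ W.T + eps * np.eye(d)))
        D = C / np.trace(C)
    return W, D    # rows of W with large norm indicate features shared across the tasks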
--- paper_title: Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning paper_content: We present the Q-Cut algorithm, a graph theoretic approach for automatic detection of sub-goals in a dynamic environment, which is used for acceleration of the Q-Learning algorithm. The learning agent creates an on-line map of the process history, and uses an efficient Max-Flow/Min-Cut algorithm for identifying bottlenecks. The policies for reaching bottlenecks are separately learned and added to the model in a form of options (macro-actions). We then extend the basic Q-Cut algorithm to the Segmented Q-Cut algorithm, which uses previously identified bottlenecks for state space partitioning, necessary for finding additional bottlenecks in complex environments. Experiments show significant performance improvements, particulary in the initial learning phase. --- paper_title: Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density paper_content: This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks. --- paper_title: Effective Control Knowledge Transfer through Learning Skill and Representation Hierarchies paper_content: Learning capabilities of computer systems still lag far behind biological systems. One of the reasons can be seen in the inefficient re-use of control knowledge acquired over the lifetime of the artificial learning system. To address this deficiency, this paper presents a learning architecture which transfers control knowledge in the form of behavioral skills and corresponding representation concepts from one task to subsequent learning tasks. The presented system uses this knowledge to construct a more compact state space representation for learning while assuring bounded optimality of the learned task policy by utilizing a representation hierarchy. Experimental results show that the presented method can significantly outperform learning on a flat state space representation and the MAXQ method for hierarchical reinforcement learning. --- paper_title: Proto-value functions: A laplacian framework for learning representation and control in markov decision processes paper_content: This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. 
The major components of the framework described in this paper include: (i) A general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators (ii) A specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP (iii) A three-phased procedure called representation policy iteration comprising of a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions. (iv) A specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method (v) Several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nystrom extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs (vi) Finally, a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaboration of the proposed framework are briefly summarized at the end. --- paper_title: Relativized Options: Choosing the Right Transformation paper_content: Relativized options combine model minimization methods and a hierarchical reinforcement learning framework to derive compact reduced representations of a related family of tasks. Relativized options are defined without an absolute frame of reference, and an option's policy is transformed suitably based on the circumstances under which the option is invoked. In earlier work we addressed the issue of learning the option policy online. In this article we develop an algorithm for choosing, from among a set of candidate transformations, the right transformation for each member of the family of tasks. --- paper_title: Transfer of samples in batch reinforcement learning paper_content: The main objective of transfer in reinforcement learning is to reduce the complexity of learning the solution of a target task by effectively reusing the knowledge retained from solving a set of source tasks. In this paper, we introduce a novel algorithm that transfers samples (i.e., tuples 〈s, a, s', r〉) from source to target tasks. Under the assumption that tasks have similar transition models and reward functions, we propose a method to select samples from the source tasks that are mostly similar to the target task, and, then, to use them as input for batch reinforcement-learning algorithms. As a result, the number of samples an agent needs to collect from the target task to learn its solution is reduced. We empirically show that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity, even when some source tasks are significantly different from the target task. --- paper_title: Multitask reinforcement learning on the distribution of MDPs paper_content: In this paper we address a new problem in reinforcement learning. Here we consider an agent that faces multiple learning tasks within its lifetime. 
The agent's objective is to maximize its total reward in the lifetime as well as a conventional return in each task. To realize this, it has to be endowed an important ability to keep its past learning experiences and utilize them for improving future learning performance. This time we try to phrase this problem formally. The central idea is to introduce an environmental class, BV-MDPs that is defined with the distribution of MDPs. As an approach to exploiting past learning experiences, we focus on statistics (mean and deviation) about the agent's value tables. The mean can be used as initial values of the table when a new task is presented. The deviation can be viewed as measuring reliability of the mean, and we utilize it in calculating priority of simulated backups. We conduct experiments in computer simulation to evaluate the effectiveness. --- paper_title: Reinforcement learning with Gaussian processes paper_content: Gaussian Process Temporal Difference (GPTD) learning offers a Bayesian solution to the policy evaluation problem of reinforcement learning. In this paper we extend the GPTD framework by addressing two pressing issues, which were not adequately treated in the original GPTD paper (Engel et al., 2003). The first is the issue of stochasticity in the state transitions, and the second is concerned with action selection and policy improvement. We present a new generative model for the value function, deduced from its relation with the discounted return. We derive a corresponding on-line algorithm for learning the posterior moments of the value Gaussian process. We also present a SARSA based extension of GPTD, termed GPSARSA, that allows the selection of actions and the gradual improvement of policies without requiring a world-model. --- paper_title: Using advice to transfer knowledge acquired in one reinforcement learning task to another paper_content: We present a method for transferring knowledge learned in one task to a related task. Our problem solvers employ reinforcement learning to acquire a model for one task. We then transform that learned model into advice for a new task. A human teacher provides a mapping from the old task to the new task to guide this knowledge transfer. Advice is incorporated into our problem solver using a knowledge-based support vector regression method that we previously developed. This advice-taking approach allows the problem solver to refine or even discard the transferred knowledge based on its subsequent experiences. We empirically demonstrate the effectiveness of our approach with two games from the RoboCup soccer simulator: KeepAway and BreakAway. Our results demonstrate that a problem solver learning to play BreakAway using advice extracted from KeepAway outperforms a problem solver learning without the benefit of such advice. --- paper_title: Transferring instances for model-based reinforcement learning paper_content: Reinforcement learningagents typically require a significant amount of data before performing well on complex tasks. Transfer learningmethods have made progress reducing sample complexity, but they have primarily been applied to model-free learning methods, not more data-efficient model-based learning methods. This paper introduces timbrel , a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. 
We demonstrate that timbrel can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of timbrel 's effectiveness. --- paper_title: Finite-Sample Analysis of LSTD paper_content: In this paper we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is β-mixing. --- paper_title: Transfer via inter-task mappings in policy search reinforcement learning paper_content: The ambitious goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task. While many past transfer methods have focused on transferring value-functions, this paper presents a method for transferring policies across tasks with different state and action spaces. In particular, this paper utilizes transfer via inter-task mappings for policy search methods (TVITM-PS) to construct a transfer functional that translates a population of neural network policies trained via policy search from a source task to a target task. Empirical results in robot soccer Keepaway and Server Job Scheduling show that TVITM-PS can markedly reduce learning time when full inter-task mappings are available. The results also demonstrate that TVITMPS still succeeds when given only incomplete inter-task mappings. Furthermore, we present a novel method for learning such mappings when they are not available, and give results showing they perform comparably to hand-coded mappings. --- paper_title: Multi-task reinforcement learning: a hierarchical Bayesian approach paper_content: We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for modelbased Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components. We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks. --- paper_title: Transfer in variable-reward hierarchical reinforcement learning paper_content: Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. 
In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs. --- paper_title: Transfer Learning for Reinforcement Learning Domains: A Survey paper_content: The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. --- paper_title: An Experts Algorithm for Transfer Learning paper_content: A long-lived agent continually faces new tasks in its environment. Such an agent may be able to use knowledge learned in solving earlier tasks to produce candidate policies for its current task. There may, however, be multiple reasonable policies suggested by prior experience, and the agent must choose between them potentially without any a priori knowledge about their applicability to its current situation. We present an "experts" algorithm for efficiently choosing amongst candidate policies in solving an unknown Markov decision process task. We conclude with the results of experiments on two domains in which we generate candidate policies from solutions to related tasks and use our experts algorithm to choose amongst them. --- paper_title: Autonomous transfer for reinforcement learning paper_content: Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. 
We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer. --- paper_title: Transferring instances for model-based reinforcement learning paper_content: Reinforcement learningagents typically require a significant amount of data before performing well on complex tasks. Transfer learningmethods have made progress reducing sample complexity, but they have primarily been applied to model-free learning methods, not more data-efficient model-based learning methods. This paper introduces timbrel , a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that timbrel can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of timbrel 's effectiveness. --- paper_title: Transfer of samples in batch reinforcement learning paper_content: The main objective of transfer in reinforcement learning is to reduce the complexity of learning the solution of a target task by effectively reusing the knowledge retained from solving a set of source tasks. In this paper, we introduce a novel algorithm that transfers samples (i.e., tuples 〈s, a, s', r〉) from source to target tasks. Under the assumption that tasks have similar transition models and reward functions, we propose a method to select samples from the source tasks that are mostly similar to the target task, and, then, to use them as input for batch reinforcement-learning algorithms. As a result, the number of samples an agent needs to collect from the target task to learn its solution is reduced. We empirically show that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity, even when some source tasks are significantly different from the target task. --- paper_title: Transfer learning via inter-task mappings for temporal difference learning paper_content: Temporal difference (TD) learning (Sutton and Barto, 1998) has become a popular reinforcement learning technique in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms have often been found slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. Using transfer via inter-task mapping (TVITM), agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain. 
This article contains and extends material published in two conference papers (Taylor and Stone, 2005; Taylor et al., 2005). --- paper_title: Regularized Policy Iteration paper_content: In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme we propose the use of non-parametric methods with regularization, providing a convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2-regularization to two widely-used policy evaluation methods: Bellman residual minimization (BRM) and least-squares temporal difference learning (LSTD). We derive efficient implementations for our algorithms when the approximate value-functions belong to a reproducing kernel Hilbert space. We also provide finite-sample performance bounds for our algorithms and show that they are able to achieve optimal rates of convergence under the studied conditions. --- paper_title: Finite-Sample Analysis of LSTD paper_content: In this paper we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is β-mixing. --- paper_title: Learning from multiple sources paper_content: We consider the problem of learning accurate models from multiple sources of "nearby" data. Given distinct samples from multiple data sources and estimates of the dissimilarities between these sources, we provide a general theory of which samples should be used to learn models for each source. This theory is applicable in a broad decision-theoretic learning framework, and yields general results for classification and regression. A key component of our approach is the development of approximate triangle inequalities for expected loss, which may be of independent interest. We discuss the related problem of learning parameters of a distribution from multiple data sources. Finally, we illustrate our theory through a series of synthetic simulations. --- paper_title: REGAL: A Regularization based Algorithm for Reinforcement Learning in Weakly Communicating MDPs paper_content: We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of O(HS√AT). We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
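As a concrete illustration of the regularized policy-evaluation idea summarized in the Regularized Policy Iteration abstract above, the sketch below implements L2-regularized (ridge) LSTD with plain linear features rather than the reproducing-kernel formulation used in that paper; the function and variable names are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def ridge_lstd(phi, phi_next, rewards, gamma=0.95, lam=1e-2):
    """L2-regularized LSTD for evaluating a fixed policy with linear features.

    phi      : (n, d) features of the visited states s_t
    phi_next : (n, d) features of the successor states s_{t+1}
    rewards  : (n,) immediate rewards r_t
    Returns theta such that V(s) is approximated by phi(s) @ theta.
    """
    n, d = phi.shape
    # LSTD normal equations A theta = b, with a ridge term lam * I added to A
    A = phi.T @ (phi - gamma * phi_next) / n + lam * np.eye(d)
    b = phi.T @ rewards / n
    return np.linalg.solve(A, b)

# Toy usage with random data standing in for a sampled trajectory.
rng = np.random.default_rng(0)
phi, phi_next = rng.normal(size=(500, 8)), rng.normal(size=(500, 8))
rewards = rng.normal(size=500)
theta = ridge_lstd(phi, phi_next, rewards)
```

The ridge term plays the role of the L2 penalty discussed above: it controls the complexity of the fitted value function and keeps the linear system well conditioned when samples are scarce.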
--- paper_title: Transfer Learning for Reinforcement Learning Domains: A Survey paper_content: The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. --- paper_title: Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path paper_content: In this paper we consider the problem of finding a near-optimal policy in a continuous space, discounted Markovian Decision Problem (MDP) by employing value-function-based methods when only a single trajectory of a fixed policy is available as the input. We study a policy-iteration algorithm where the iterates are obtained via empirical risk minimization with a risk function that penalizes high magnitudes of the Bellman-residual. Our main result is a finite-sample, high-probability bound on the performance of the computed policy that depends on the mixing rate of the trajectory, the capacity of the function set as measured by a novel capacity concept (the VC-crossing dimension), the approximation power of the function set and the controllability properties of the MDP. Moreover, we prove that when a linear parameterization is used the new algorithm is equivalent to Least-Squares Policy Iteration. To the best of our knowledge this is the first theoretical result for off-policy control learning over continuous state-spaces using a single trajectory. --- paper_title: Finite-Sample Analysis of Bellman Residual Minimization paper_content: We consider the Bellman residual minimization approach for solving discounted Markov decision problems, where we assume that a generative model of the dynamics and rewards is available. At each policy iteration step, an approximation of the value function for the current policy is obtained by minimizing an empirical Bellman residual defined on a set of n states drawn i.i.d. from a distribution μ, the immediate rewards, and the next states sampled from the model. Our main result is a generalization bound for the Bellman residual in linear approximation spaces. In particular, we prove that the empirical Bellman residual approaches the true (quadratic) Bellman residual in μ-norm with a rate of order O(1/√n). This result implies that minimizing the empirical residual is indeed a sound approach for the minimization of the true Bellman residual which guarantees a good approximation of the value function for each policy. Finally, we derive performance bounds for the resulting approximate policy iteration algorithm in terms of the number of samples n and a measure of how well the function space is able to approximate the sequence of value functions. ---
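The Bellman-residual abstracts above can be made more tangible with a small sketch of empirical Bellman residual minimization over a linear value-function space. This is an illustrative simplification (a single sampled successor per state, with no double-sampling or generative-model correction), and the names used are assumptions rather than the cited papers' notation.

```python
import numpy as np

def brm_policy_evaluation(phi, phi_next, rewards, gamma=0.95):
    """Empirical Bellman residual minimization with linear features.

    Minimizes sum_t (phi(s_t) @ theta - r_t - gamma * phi(s'_t) @ theta)^2,
    which is an ordinary least-squares problem in D = phi - gamma * phi_next.
    """
    D = phi - gamma * phi_next
    theta, *_ = np.linalg.lstsq(D, rewards, rcond=None)
    return theta
```

The papers above analyze when and how this empirical objective is a sound proxy for the true Bellman residual, including the bias introduced when only one sampled successor per state is available.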
<format>
Title: Transfer in Reinforcement Learning: a Framework and a Survey
Section 1: Introduction
Description 1: Summarize the main concepts of transfer learning in reinforcement learning, its origins, objectives, and significance in machine learning.
Section 2: A Framework and a Taxonomy for Transfer in Reinforcement Learning
Description 2: Introduce a formal framework and propose a taxonomy to classify transfer learning approaches in reinforcement learning.
Section 3: Transfer Framework
Description 3: Define the general transfer framework, including formal definitions and symbols, and illustrate how the framework applies to reinforcement learning.
Section 4: Taxonomy
Description 4: Propose a taxonomy categorizing transfer learning approaches based on the setting, transferred knowledge, and objectives.
Section 5: The Settings
Description 5: Discuss three different categories of transfer settings focusing on domain characteristics and transfer scenarios.
Section 6: The Knowledge
Description 6: Explain the different types of knowledge that can be transferred and how each approach uses this knowledge to improve learning performance.
Section 7: The Objectives
Description 7: Outline the performance evaluation metrics and objectives of transfer learning in reinforcement learning.
Section 8: The Survey
Description 8: Organize the surveyed transfer approaches based on the framework and taxonomy, and classify existing algorithms according to their settings, types of knowledge transfer, and performance objectives.
Section 9: Methods for Transfer from Source to Target with a Fixed State-Action Space
Description 9: Review approaches in the setting where transfer occurs from a single source task to a target task with a fixed state-action space.
Section 10: Methods for Transfer across Tasks with a Fixed State-Action Space
Description 10: Review transfer approaches that handle multiple source tasks sharing the same state-action space and discuss methods for merging and selectively transferring knowledge to avoid negative transfer.
Section 11: Methods for Transfer from Source to Target Tasks with Different State-Action Spaces
Description 11: Review transfer approaches in the most general setting where source and target tasks differ in their state-action spaces, and discuss methods for mapping state-action variables.
Section 12: Conclusions and Open Questions
Description 12: Summarize the key findings, propose potential lines of research, and discuss open questions relevant to the advancement of transfer learning in reinforcement learning.
</format>
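The kind of value-function transfer described in the abstracts and outline above (for example, TVITM-style transfer through hand-coded inter-task mappings) can be sketched with a plain tabular Q-function. The cited work transfers the weights of function approximators such as CMACs, ANNs, and RBFs; the tabular form and all names below are illustrative assumptions only.

```python
def transfer_q_table(q_source, target_states, target_actions,
                     map_state, map_action, default=0.0):
    """Initialize a target-task Q-table from a learned source-task Q-table.

    q_source   : dict {(source_state, source_action): value}
    map_state  : maps a target state to the corresponding source state
    map_action : maps a target action to the corresponding source action
    Pairs with no counterpart in the source task fall back to `default`.
    """
    return {
        (s, a): q_source.get((map_state(s), map_action(a)), default)
        for s in target_states
        for a in target_actions
    }

# Example: a 2-action source task warm-starts a 3-action target task,
# with the target's extra action mapped onto the closest source action.
q_src = {(0, "left"): 1.0, (0, "right"): -0.5, (1, "left"): 0.2, (1, "right"): 0.7}
q_init = transfer_q_table(
    q_source=q_src,
    target_states=[0, 1],
    target_actions=["left", "right", "jump"],
    map_state=lambda s: s,
    map_action=lambda a: "right" if a == "jump" else a,
)
```

Learning in the target task then starts from q_init instead of a zero-initialized table, which is the kind of jumpstart effect the transfer literature above measures.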
A survey on reliability in distributed systems
12
--- paper_title: Reliability in grid computing systems paper_content: In recent years, grid technology has emerged as an important tool for solving compute-intensive problems within the scientific community and in industry. To further the development and adoption of this technology, researchers and practitioners from different disciplines have collaborated to produce standard specifications for implementing large-scale, interoperable grid systems. The focus of this activity has been the Open Grid Forum, but other standards development organizations have also produced specifications that are used in grid systems. To date, these specifications have provided the basis for a growing number of operational grid systems used in scientific and industrial applications. However, if the growth of grid technology is to continue, it will be important that grid systems also provide high reliability. In particular, it will be critical to ensure that grid systems are reliable as they continue to grow in scale, exhibit greater dynamism, and become more heterogeneous in composition. Ensuring grid system reliability in turn requires that the specifications used to build these systems fully support reliable grid services. This study surveys work on grid reliability that has been done in recent years and reviews progress made toward achieving these goals. The survey identifies important issues and problems that researchers are working to overcome in order to develop reliability methods for large-scale, heterogeneous, dynamic environments. The survey also illuminates reliability issues relating to standard specifications used in grid systems, identifying existing specifications that may need to be evolved and areas where new specifications are needed to better support the reliability. Published in 2009 by John Wiley & Sons, Ltd. This article is a U.S. Government work and is in the public domain in the U.S.A. --- paper_title: Survey of reliability and availability prediction methods from the viewpoint of software architecture paper_content: Many future software systems will be distributed across a network, extensively providing different kinds of services for their users. These systems must be highly reliable and provide services when required. Reliability and availability must be engineered into software from the onset of its development, and potential problems must be detected in the early stages, when it is easier and less expensive to implement modifications. The software architecture design phase is the first stage of software development in which it is possible to evaluate how well the quality requirements are being met. For this reason, a method is needed for analyzing software architecture with respect to reliability and availability. In this paper, we define a framework for comparing reliability and availability analysis methods from the viewpoint of software architecture. Our contribution is the comparison of the existing analysis methods and techniques that can be used for reliability and availability prediction at the architectural level. The objective is to discover which methods are suitable for the reliability and availability prediction of today’s complex systems, what are the shortcomings of the methods, and which research activities need to be conducted in order to overcome these identified shortcomings. The comparison reveals that none of the existing methods entirely fulfill the requirements that are defined in the framework.
The comparison framework also defines the characteristics required of new reliability and availability analysis methods. Additionally, the framework is a valuable tool for selecting the best suitable method for architecture analysis. Furthermore, the framework can be extended and used for other evaluation methods as well. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Multimedia Object Placement for Transparent Data Replication paper_content: Transparent data replication is a promising technique for improving the system performance of a large distributed network. Transcoding is an important technology which adapts the same multimedia object to diverse mobile appliances; thus, users' requests for a specified version of a multimedia object could be served by a more detailed version cached according to transcoding. Therefore, it is particularly of theoretical and practical necessity to determine the proper version to be cached at each node such that the specified objective is achieved. In this paper, we address the problem of multimedia object placement for transparent data replication. The performance objective is to minimize the total access cost by considering both transmission cost and transcoding cost. We present optimal solutions for different cases for this problem. The performance of the proposed solutions is evaluated with a set of carefully designed simulation experiments for various performance metrics over a wide range of system parameters. The simulation results show that our solution consistently and significantly outperforms comparison solutions in terms of all the performance metrics considered --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. 
We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: On distributed computing systems reliability analysis under program execution constraints paper_content: Presents an algorithm for computing the reliability of distributed computing systems (DCS). The algorithm, called the Fast Reliability Evaluation Algorithm, is based on the factoring theorem employing several reliability preserving reduction techniques. The effect of file distributions, program distributions, and various topologies on reliability of the DCS is studied in detail using the proposed algorithm. Compared with existing algorithms on various network topologies, file distributions, and program distributions, the proposed algorithm is much more economical in both time and space. To compute the distributed program reliability, the ARPA network is studied to illustrate the feasibility of the proposed algorithm. > --- paper_title: Quality Prediction of Service Compositions through Probabilistic Model Checking paper_content: The problem of composing services to deliver integrated business solutions has been widely studied in the last years. Besides addressing functional requirements, services compositions should also provide agreed service levels. Our goal is to support model-based analysis of service compositions, with a focus on the assessment of non-functional quality attributes, namely performance and reliability. We propose a model-driven approach, which automatically transforms a design model of service composition into an analysis model, which then feeds a probabilistic model checker for quality prediction. To bring this approach to fruition, we developed a prototype tool called ATOP , and we demonstrate its use on a simple case study. --- paper_title: Real-time distributed program reliability analysis paper_content: Distributed program reliability has been proposed as a reliability index for distributed computing systems to analyze the probability of the successful execution of a program, task, or mission in the system. However, current reliability models proposed for distributed program reliability evaluation do not capture the effects of real-time constraints. We propose an approach to the reliability analysis of distributed programs that addresses real-time constraints. Our approach is based on a model for evaluating transmission time, which allow us to find the time needed to complete execution of the program, task, or mission under evaluation. With information on time-constraints, the corresponding Markov state space can then be defined for reliability computation. To speed up the evaluation process and reduce the size of the Markov state space, several dynamic reliability-preserving reductions are developed. A simple distributed real-time system is used as an example to illustrate the feasibility and uniqueness of the proposed approach. > --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. 
Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: Architecture-based reliability prediction for service-oriented computing paper_content: In service-oriented computing, services are dynamically built as an assembly of pre-existing, independently developed, network accessible services. Hence, predicting as much as possible automatically their dependability is important to appropriately drive the selection and assembly of services, in order to get some required dependability level. We present an approach to the reliability prediction of such services, based on the partial information published with each service, and that lends itself to automatization. The proposed methodology exploits ideas from the Software Architecture- and Component-based approaches to software design. --- paper_title: On distributed computing systems reliability analysis under program execution constraints paper_content: Presents an algorithm for computing the reliability of distributed computing systems (DCS). The algorithm, called the Fast Reliability Evaluation Algorithm, is based on the factoring theorem employing several reliability preserving reduction techniques. The effect of file distributions, program distributions, and various topologies on reliability of the DCS is studied in detail using the proposed algorithm. Compared with existing algorithms on various network topologies, file distributions, and program distributions, the proposed algorithm is much more economical in both time and space. To compute the distributed program reliability, the ARPA network is studied to illustrate the feasibility of the proposed algorithm. > --- paper_title: Real-time distributed program reliability analysis paper_content: Distributed program reliability has been proposed as a reliability index for distributed computing systems to analyze the probability of the successful execution of a program, task, or mission in the system. However, current reliability models proposed for distributed program reliability evaluation do not capture the effects of real-time constraints. We propose an approach to the reliability analysis of distributed programs that addresses real-time constraints. Our approach is based on a model for evaluating transmission time, which allow us to find the time needed to complete execution of the program, task, or mission under evaluation. With information on time-constraints, the corresponding Markov state space can then be defined for reliability computation. To speed up the evaluation process and reduce the size of the Markov state space, several dynamic reliability-preserving reductions are developed. A simple distributed real-time system is used as an example to illustrate the feasibility and uniqueness of the proposed approach. 
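To make the collaborative reliability prediction idea in the abstracts above more tangible, here is a deliberately simplified, user-based collaborative-filtering sketch: the failure rate a user would observe for a service is predicted from the failure rates observed by the most similar users. The published approaches use their own similarity measures, aggregation rules, and composition models, so everything below (the names, cosine similarity, and top-k weighted averaging) is an illustrative assumption.

```python
import math

def predict_failure_rate(target_user, service, observed, top_k=3):
    """observed: dict {user: {service: observed failure rate in [0, 1]}}."""
    def cosine(u, v):
        common = set(observed[u]) & set(observed[v])
        if not common:
            return 0.0
        dot = sum(observed[u][s] * observed[v][s] for s in common)
        nu = math.sqrt(sum(observed[u][s] ** 2 for s in common))
        nv = math.sqrt(sum(observed[v][s] ** 2 for s in common))
        return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

    # Users who have already invoked the service, ranked by similarity.
    neighbours = sorted(
        ((cosine(target_user, u), observed[u][service])
         for u in observed if u != target_user and service in observed[u]),
        reverse=True,
    )[:top_k]
    neighbours = [(sim, rate) for sim, rate in neighbours if sim > 0]
    if not neighbours:
        return None  # no informative neighbour; a global average could be used instead
    total = sum(sim for sim, _ in neighbours)
    return sum(sim * rate for sim, rate in neighbours) / total
```

A composition model (series, parallel, branching) can then combine such per-service predictions into a system-level estimate, which is the role of the reliability composition models mentioned in the abstracts above.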
--- paper_title: A quantitative and qualitative analysis of factors affecting software processes paper_content: Despite the growing body of research on software process improvement (SPI), there is still a great deal of variability in the success of SPI programmes. In this paper, we explore 26 factors that potentially affect SPI. We also consider the research strategies used to study these factors. We have used a multi-strategy approach for this study: first, by combining qualitative and quantitative analysis within case studies; second, by comparing our case study results with the results of a previously conducted survey study. Seven factors relevant to SPI (i.e. executive support, experienced staff, internal process ownership, metrics, procedures, reviews, and training) were identified by the case studies and the survey study. Two factors (reward schemes and estimating tools) were found, by both the case studies and the survey study, not to be relevant to SPI. Three additional factors (people, problems and change) were identified by the case studies. The frequency with which people, problems and change are discussed by practitioners suggests that these three factors may be pervasive in SPI, in a way that the other factors are not. These factors, however, require further investigation. --- paper_title: Reliability in grid computing systems paper_content: In recent years, grid technology has emerged as an important tool for solving compute-intensive problems within the scientific community and in industry. To further the development and adoption of this technology, researchers and practitioners from different disciplines have collaborated to produce standard specifications for implementing large-scale, interoperable grid systems. The focus of this activity has been the Open Grid Forum, but other standards development organizations have also produced specifications that are used in grid systems. To date, these specifications have provided the basis for a growing number of operational grid systems used in scientific and industrial applications. However, if the growth of grid technology is to continue, it will be important that grid systems also provide high reliability. In particular, it will be critical to ensure that grid systems are reliable as they continue to grow in scale, exhibit greater dynamism, and become more heterogeneous in composition. Ensuring grid system reliability in turn requires that the specifications used to build these systems fully support reliable grid services. This study surveys work on grid reliability that has been done in recent years and reviews progress made toward achieving these goals. The survey identifies important issues and problems that researchers are working to overcome in order to develop reliability methods for large-scale, heterogeneous, dynamic environments. The survey also illuminates reliability issues relating to standard specifications used in grid systems, identifying existing specifications that may need to be evolved and areas where new specifications are needed to better support the reliability. Published in 2009 by John Wiley & Sons, Ltd. This article is a U.S. Government work and is in the public domain in the U.S.A. --- paper_title: Evaluating the reliability of computational grids from the end user’s point of view paper_content: Reliability, in terms of Grid component fault tolerance and minimum quality of service, is an important aspect that has to be addressed to foster Grid technology adoption.
Software reliability is critically important in today's integrated and distributed systems, as is often the weak link in system performance. In general, reliability is difficult to measure, and specially in Grid environments, where evaluation methodologies are novel and controversial matters. This paper describes a straightforward procedure to analyze the reliability of computational grids from the viewpoint of an end user. The procedure is illustrated in the evaluation of a research Grid infrastructure based on Globus basic services and the GridWay meta-scheduler. The GridWay support for fault tolerance is also demonstrated in a production-level environment. Results show that GridWay is a reliable workload management tool for dynamic and faulty Grid environments. Transparently to the end user, GridWay is able to detect and recover from any of the Grid element failure, outage and saturation conditions specified by the reliability analysis procedure. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: Designing Fault Tolerant Web Services Using BPEL paper_content: The Web services technology provides an approach for developing distributed applications by using simple and well defined interfaces. 
Due to the flexibility of this architecture, it is possible to compose business processes integrating services from different domains. This paper presents an approach, which uses the specification of services orchestration, in order to create a fault tolerant model combining active and passive replication techniques. This model supports crash faults. The characteristics and the results obtained by implementing this model are described along this paper. --- paper_title: Implementation and evaluation of transparent fault-tolerant Web service with kernel-level support paper_content: Most of the techniques used for increasing the availability of Web services do not provide fault tolerance for requests being processed at the time of server failure. Other schemes require deterministic servers or changes to the Web client. These limitations are unacceptable for many current and future applications of the Web. We have developed an efficient implementation of a client-transparent mechanism for providing fault-tolerant Web service that does not have the limitations mentioned above. The scheme is based on a hot standby backup server that maintains logs of requests and replies. The implementation includes modifications to the Linux kernel and to the Apache Web server, using their respective module mechanisms. We describe the implementation and present an evaluation of the impact of the backup scheme in terms of throughput, latency, and CPU processing cycles overhead. --- paper_title: Making services fault tolerant paper_content: With ever growing use of Internet, Web services become increasingly popular and their growth rate surpasses even the most optimistic predictions. Services are self-descriptive, self-contained, platform-independent and openly-available components that interact over the network. They are written strictly according to open specifications and/or standards and provide important and often critical functions for many business-to-business systems. Failures causing either service downtime or producing invalid results in such systems may range from a mere inconvenience to significant monetary penalties or even loss of human lives. In applications where sensing and control of machines and other devices take place via services, making the services highly dependable is one of the main critical goals. Currently, there is no experimental investigation to evaluate the reliability and availability of Web services systems. In this paper, we identify parameters impacting the Web services dependability, describe the methods of dependability enhancement by redundancy in space and redundancy in time and perform a series of experiments to evaluate the availability of Web services. To increase the availability of the Web service, we use several replication schemes and compare them with a single service. The Web services are coordinated by a replication manager. The replication algorithm and the detailed system configuration are described in this paper. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases.
Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Estimation of the Reliability of Distributed Applications paper_content: In this paper the reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for the complex systems. The paper presents results of a study conducted by the students of Economic Informatics at the University "Lucian Blaga" Sibiu, over four months. The studied population was represented by several distributed applications made under the object oriented programming techniques. This study aimed to estimate the reliability of these applications using object-oriented design metrics validation techniques. --- paper_title: Grid workflow: a flexible failure handling framework for the grid paper_content: The generic, heterogeneous, and dynamic nature of the grid requires a new form of failure recovery mechanism to address its unique requirements such as support for diverse failure handling strategies, separation of failure handling strategies from application codes, and user-defined exception handling. We here propose a grid workflow system (grid-WFS), a flexible failure handling framework for the grid, which addresses these grid-unique failure recovery requirements. Central to the framework is flexibility by the use of workflow structure as a high-level recovery policy specification. We show how this use of high-level workflow structure allows users to achieve failure recovery in a variety of ways depending on the requirements and constraints of their applications. We also demonstrate that this use of workflow structure enables users to not only rapidly prototype and investigate failure handling strategies, but also easily change them by simply modifying the encompassing workflow structure, while the application code remains intact. Finally, we present an experimental evaluation of our framework using a simulation, demonstrating the value of supporting multiple failure recovery techniques in grid systems to achieve high performance in the presence of failures. --- paper_title: FTWeb: a fault tolerant infrastructure for Web services paper_content: The Web services architecture came as an answer to the search for interoperability among applications. There has been a growing interest in deploying on the Internet applications with high availability and reliability requirements.
However, the technologies associated with this architecture still do not deliver adequate support to this requirement. The model proposed in this article is located in this context and provides a new layer of software that acts as a proxy between client requests and service delivery by providers. The main objective is to ensure client transparent fault tolerance by means of the active replication technique. This model supports the following faults: value, omission and stops. This paper describes the features and outcomes obtained through the implementation of this model. --- paper_title: Fault-tolerant grid services using primary-backup: feasibility and performance paper_content: The combination of grid technology and Web services has produced an attractive platform for deploying distributed applications: grid services, as represented by the Open Grid Services Infrastructure (OGSI) and its Globus toolkit implementation. As the use of grid services grows in popularity, tolerating failures becomes increasingly important. This work addresses the problem of building a reliable and highly-available grid service by replicating the service on two or more hosts using the primary-backup approach. The primary goal is to evaluate the ease and efficiency with which this can be done, by first designing a primary-backup protocol using OGSI, and then implementing it using Globus to evaluate performance implications and tradeoffs. We compared three implementations: one that makes heavy use of the notification interface defined in OGSI, one that uses standard grid service requests instead of notification, and one that uses low-level socket primitives. The overall conclusion is that, while the performance penalty of using Globus primitives - especially notification - for replica coordination can be significant, the OGSI model is suitable for building highly-available services and it makes the task of engineering such services easier. --- paper_title: Transparent Fault Tolerance for Web Services Based Architectures paper_content: Service-based architectures enable the development of new classes of Grid and distributed applications. One of the main capabilities provided by such systems is the dynamic and flexible integration of services, according to which services are allowed to be a part of more than one distributed system and simultaneously serve different applications. This increased flexibility in system composition makes it difficult to address classical distributed system issues such as fault-tolerance. While it is relatively easy to make an individual service fault-tolerant, improving fault-tolerance of services collaborating in multiple application scenarios is a challenging task. In this paper, we look at the issue of developing fault-tolerant service-based distributed systems, and propose an infrastructure to implement fault tolerance capabilities transparent to services. --- paper_title: Implementing e-Transactions with Asynchronous Replication paper_content: This paper describes a distributed algorithm that implements the abstraction of e-Transaction: a transaction that executes exactly-once despite failures. Our algorithm is based on an asynchronous replication scheme that generalizes well-known active-replication and primary-backup schemes. We devised the algorithm with a three-tier architecture in mind: the end-user interacts with front-end clients (e.g., browsers) that invoke middle-tier application servers (e.g., web servers) to access back-end databases. 
The algorithm preserves the three-tier nature of the architecture and introduces a very acceptable overhead with respect to unreliable solutions. --- paper_title: Client-Transparent Fault-Tolerant Web Service paper_content: Most of the existing fault tolerance schemes for Web servers detect server failure and route future client requests to backup servers. These techniques typically do not provide transparent handling of requests whose processing was in progress when the failure occurred. Thus, the system may fail to provide the user with confirmation for a requested transaction or clear indication that the transaction was not performed. We describe a client-transparent fault tolerance scheme for Web servers that ensures correct handling of requests in progress at the time of server failure. The scheme is based on a standby backup server and simple proxies. The error handling mechanisms of TCP are used to multicast requests to the primary and backup as well as to reliably deliver replies from a server that may fail while sending the reply. Our scheme does not involve OS kernel changes or use of user-level TCP implementations and requires minimal changes to the Web server software. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. 
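Several of the fault-tolerant Web service abstracts above rely on redundancy in space (replicas) and redundancy in time (retries). The client-side sketch below shows the combination in its simplest passive-replication form; `send` is a placeholder for whatever transport the application uses, and none of this reflects the specific interfaces of the cited systems.

```python
import time

def invoke_with_failover(replicas, request, send, retries=2, backoff=0.5):
    """Try the primary first, then each backup, retrying every endpoint.

    replicas : list of endpoint identifiers, primary first
    send     : callable send(endpoint, request) returning a reply or raising
               an exception when the invocation fails
    """
    last_error = None
    for endpoint in replicas:                  # redundancy in space
        for attempt in range(retries + 1):     # redundancy in time
            try:
                return send(endpoint, request)
            except Exception as err:
                last_error = err
                time.sleep(backoff * (attempt + 1))
    raise RuntimeError(f"all replicas failed; last error: {last_error}")
```

Active replication, by contrast, sends the request to all replicas at once and votes on the replies, which is the strategy some of the surveyed systems adopt for stronger failure coverage at the cost of extra resource usage.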
--- paper_title: Tolerating hardware device failures in software paper_content: Hardware devices can fail, but many drivers assume they do not. When confronted with real devices that misbehave, these assumptions can lead to driver or system failures. While major operating system and device vendors recommend that drivers detect and recover from hardware failures, we find that there are many drivers that will crash or hang when a device fails. Such bugs cannot easily be detected by regular stress testing because the failures are induced by the device and not the software load. This paper describes Carburizer, a code-manipulation tool and associated runtime that improves system reliability in the presence of faulty devices. Carburizer analyzes driver source code to find locations where the driver incorrectly trusts the hardware to behave. Carburizer identified almost 1000 such bugs in Linux drivers with a false positive rate of less than 8 percent. With the aid of shadow drivers for recovery, Carburizer can automatically repair 840 of these bugs with no programmer involvement. To facilitate proactive management of device failures, Carburizer can also locate existing driver code that detects device failures and inserts missing failure-reporting code. Finally, the Carburizer runtime can detect and tolerate interrupt-related bugs, such as stuck or missing interrupts. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Implementation and evaluation of transparent fault-tolerant Web service with kernel-level support paper_content: Most of the techniques used for increasing the availability of Web services do not provide fault tolerance for requests being processed at the time of server failure. Other schemes require deterministic servers or changes to the Web client. These limitations are unacceptable for many current and future applications of the Web. We have developed an efficient implementation of a client-transparent mechanism for providing fault-tolerant Web service that does not have the limitations mentioned above. The scheme is based on a hot standby backup server that maintains logs of requests and replies. The implementation includes modifications to the Linux kernel and to the Apache Web server, using their respective module mechanisms. 
We describe the implementation and present an evaluation of the impact of the backup scheme in terms of throughput, latency, and CPU processing cycles overhead. --- paper_title: Making services fault tolerant paper_content: With ever growing use of Internet, Web services become increasingly popular and their growth rate surpasses even the most optimistic predictions. Services are self-descriptive, self-contained, platform-independent and openly-available components that interact over the network. They are written strictly according to open specifications and/or standards and provide important and often critical functions for many business-to-business systems. Failures causing either service downtime or producing invalid results in such systems may range from a mere inconvenience to significant monetary penalties or even loss of human lives. In applications where sensing and control of machines and other devices take place via services, making the services highly dependable is one of main critical goals. Currently, there is no experimental investigation to evaluate the reliability and availability of Web services systems. In this paper, we identify parameters impacting the Web services dependability, describe the methods of dependability enhancement by redundancy in space and redundancy in time and perform a series of experiments to evaluate the availability of Web services. To increase the availability of the Web service, we use several replication schemes and compare them with a single service. The Web services are coordinated by a replication manager. The replication algorithm and the detailed system configuration are described in this paper. --- paper_title: Software Reliability Engineering: A Roadmap paper_content: Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying software failure processes for accurate reliability analysis and forecasting. Although software reliability has remained an active research subject over the past 35 years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we review the history of software reliability engineering, the current trends and existing problems, and specific difficulties. Possible future directions and promising research subjects in software reliability engineering are also addressed. --- paper_title: Multimedia Object Placement for Transparent Data Replication paper_content: Transparent data replication is a promising technique for improving the system performance of a large distributed network. Transcoding is an important technology which adapts the same multimedia object to diverse mobile appliances; thus, users' requests for a specified version of a multimedia object could be served by a more detailed version cached according to transcoding. Therefore, it is particularly of theoretical and practical necessity to determine the proper version to be cached at each node such that the specified objective is achieved. 
In this paper, we address the problem of multimedia object placement for transparent data replication. The performance objective is to minimize the total access cost by considering both transmission cost and transcoding cost. We present optimal solutions for different cases for this problem. The performance of the proposed solutions is evaluated with a set of carefully designed simulation experiments for various performance metrics over a wide range of system parameters. The simulation results show that our solution consistently and significantly outperforms comparison solutions in terms of all the performance metrics considered --- paper_title: An Effective Cache Replacement Algorithm in Transcoding-Enabled Proxies paper_content: In this paper, we address the problem of cache replacement for transcoding proxy caching. Transcoding proxy is a proxy that has the functionality of transcoding a multimedia object into an appropriate format or resolution for each client. We first propose an effective cache replacement algorithm for transcoding proxy. In general, when a new object is to be cached, cache replacement algorithms evict some of the cached objects with the least profit to accommodate the new object. Our algorithm takes into account of the inter-relationships among different versions of the same multimedia object, and selects the versions to replace according to their aggregate profit which usually differs from simple summation of their individual profits as assumed in the existing algorithms. It also considers cache consistency, which is not considered in the existing algorithms. We then present a complexity analysis to show the efficiency of our algorithm. Finally, we give extensive simulation results to compare the performance of our algorithm with some existing algorithms. The results show that our algorithm outperforms others in terms of various performance metrics. --- paper_title: Estimation of the Reliability of Distributed Applications paper_content: In this paper the reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for the complex systems. The paper presents results of a study conducted by the students of Economic Informatics at the University "Lucian Blaga" Sibiu, over four months. The studied population was represented by several distributed applications made under the object oriented programming techniques. This study aimed to estimate the reliability of these applications using object-oriented design metrics validation techniques. --- paper_title: Fault-tolerant and reliable computation in cloud computing paper_content: Cloud computing, with its great potentials in low cost and on-demand services, is a promising computing platform for both commercial and non-commercial computation clients. In this work, we investigate the security perspective of scientific computation in cloud computing. We investigate a cloud selection strategy to decompose the matrix multiplication problem into several tasks which will be submitted to different clouds. 
In particular, we propose techniques to improve the fault-tolerance and reliability of a rather general scientific computation: matrix multiplication. Through our techniques, we demonstrate that fault-tolerance and reliability against faulty and even malicious clouds in cloud computing can be achieved. --- paper_title: Evaluating the reliability of computational grids from the end user’s point of view paper_content: Reliability, in terms of Grid component fault tolerance and minimum quality of service, is an important aspect that has to be addressed to foster Grid technology adoption. Software reliability is critically important in today's integrated and distributed systems, as is often the weak link in system performance. In general, reliability is difficult to measure, and specially in Grid environments, where evaluation methodologies are novel and controversial matters. This paper describes a straightforward procedure to analyze the reliability of computational grids from the viewpoint of an end user. The procedure is illustrated in the evaluation of a research Grid infrastructure based on Globus basic services and the GridWay meta-scheduler. The GridWay support for fault tolerance is also demonstrated in a production-level environment. Results show that GridWay is a reliable workload management tool for dynamic and faulty Grid environments. Transparently to the end user, GridWay is able to detect and recover from any of the Grid element failure, outage and saturation conditions specified by the reliability analysis procedure. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: WS-DREAM: A distributed reliability assessment Mechanism for Web Services paper_content: It is critical to guarantee the reliability of service-oriented applications. This is because they may employ remote Web services as components, which may easily become unavailable in the unpredictable Internet environment. This practical experience report presents a distributed reliability assessment mechanism for Web services (WS-DREAM), allowing users to carry out Web services reliability assessment in a collaborative manner. With WS-DREAM, users in different geographic locations help each other to carry out testing, and share test cases under the coordination of a centralized server. Based on this collaborative mechanism, reliability assessment for Web services in real environments from different locations of the world becomes seamless.
To illustrate the advantage of this mechanism, a prototype is implemented and a case study is carried out. Users from five locations all over the world perform reliability assessment on Web services distributed in six countries. Over 1,000,000 test cases are executed in a collaborative manner and detailed results are provided. --- paper_title: Real-time distributed program reliability analysis paper_content: Distributed program reliability has been proposed as a reliability index for distributed computing systems to analyze the probability of the successful execution of a program, task, or mission in the system. However, current reliability models proposed for distributed program reliability evaluation do not capture the effects of real-time constraints. We propose an approach to the reliability analysis of distributed programs that addresses real-time constraints. Our approach is based on a model for evaluating transmission time, which allows us to find the time needed to complete execution of the program, task, or mission under evaluation. With information on time constraints, the corresponding Markov state space can then be defined for reliability computation. To speed up the evaluation process and reduce the size of the Markov state space, several dynamic reliability-preserving reductions are developed. A simple distributed real-time system is used as an example to illustrate the feasibility and uniqueness of the proposed approach. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: Architecture-based reliability prediction for service-oriented computing paper_content: In service-oriented computing, services are dynamically built as an assembly of pre-existing, independently developed, network accessible services. Hence, predicting as much as possible automatically their dependability is important to appropriately drive the selection and assembly of services, in order to get some required dependability level. We present an approach to the reliability prediction of such services, based on the partial information published with each service, and that lends itself to automatization. The proposed methodology exploits ideas from the Software Architecture- and Component-based approaches to software design. --- paper_title: A software reliability model for Web services paper_content: This paper proposes a service-oriented software reliability model that dynamically evaluates the reliability of Web services.
There are two kinds of Web services: atomic services without the structural information and the composite services consisting of atomic services. The model first evaluates the reliability of atomic services based on group testing and majority voting. Group testing is the key technique proposed in this paper to support the service-oriented reliability model. Then, the reliability model evaluates the overall reliability of composite services using an architecture-based model and based on reliabilities of the atomic services, execution scenarios, and operational profiles. The reliability model is dynamic and the reliabilities of the services are evaluated in the actual operational environment. A case study is designed, implemented, and analyzed using the design of experiment technique. The results show the significances of the model and its components. --- paper_title: Quality Prediction of Service Compositions through Probabilistic Model Checking paper_content: The problem of composing services to deliver integrated business solutions has been widely studied in the last years. Besides addressing functional requirements, services compositions should also provide agreed service levels. Our goal is to support model-based analysis of service compositions, with a focus on the assessment of non-functional quality attributes, namely performance and reliability. We propose a model-driven approach, which automatically transforms a design model of service composition into an analysis model, which then feeds a probabilistic model checker for quality prediction. To bring this approach to fruition, we developed a prototype tool called ATOP , and we demonstrate its use on a simple case study. --- paper_title: Reliability in grid computing systems paper_content: In recent years, grid technology has emerged as an important tool for solving compute-intensive problems within the scientific community and in industry. To further the development and adoption of this technology, researchers and practitioners from different disciplines have collaborated to produce standard specifications for implementing large-scale, interoperable grid systems. The focus of this activity has been the Open Grid Forum, but other standards development organizations have also produced specifications that are used in grid systems. To date, these specifications have provided the basis for a growing number of operational grid systems used in scientific and industrial applications. However, if the growth of grid technology is to continue, it will be important that grid systems also provide high reliability. In particular, it will be critical to ensure that grid systems are reliable as they continue to grow in scale, exhibit greater dynamism, and become more heterogeneous in composition. Ensuring grid system reliability in turn requires that the specifications used to build these systems fully support reliable grid services. This study surveys work on grid reliability that has been done in recent years and reviews progress made toward achieving these goals. The survey identifies important issues and problems that researchers are working to overcome in order to develop reliability methods for large-scale, heterogeneous, dynamic environments. The survey also illuminates reliability issues relating to standard specifications used in grid systems, identifying existing specifications that may need to be evolved and areas where new specifications are needed to better support the reliability. Published in 2009 by John Wiley & Sons, Ltd. 
This article is a U.S. Government work and is in the public domain in the U.S.A. --- paper_title: Cloud Computing – Issues, Research and Implementations paper_content: "Cloud" computing – a relatively recent term, builds on decades of research in virtualization, distributed computing, utility computing, and more recently networking, web and software services. It implies a service oriented architecture, reduced information technology overhead for the end-user, great flexibility, reduced total cost of ownership, on-demand services and many other things. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today. --- paper_title: Elements of Applied Stochastic Processes paper_content: Fundamentals of Queueing Theory, 2nd Edition Donald Gross and Carl M. Harris A graduate text and reference treating queueing theory from the development of standard models to applications. The emphasis is on real analysis of queueing systems, applications, and problem solving. It has been brought up-to-date by modernizing older treatments. 1985 (0 471-89067-7) 475 pp. Multivariate Descriptive Analysis Correspondence Analysis and Related Techniques for Large Matrices Ludovic Lebart, Alain Morineau and Kenneth M. Warwick Presents a set of statistical methods for exploratory analysis of large data sets and categorical data. This unique approach uses graphical aspects of multidimensional scaling techniques within the context of exploratory data analysis. 1984 (0 471-86743-8) 231 pp. Introduction to Linear Regression Analysis Douglas C. Montgomery and Elizabeth A. Peck A definitive introduction to linear regression analysis covering basic topics as well as recent approaches in the field. It blends theory and application in a way that enables readers to apply regression methodology in a variety of practical settings. Many detailed examples drawn directly from various fields of engineering, physical science, and the management sciences provide clear guidance to the use of the techniques. The interface with widely available computer programs for regression analysis is illustrated throughout with numerous actual computer printouts. 1982 (0 471-05850-5) 504 pp. --- paper_title: Characterizing cloud computing hardware reliability paper_content: Modern day datacenters host hundreds of thousands of servers that coordinate tasks in order to deliver highly available cloud computing services. These servers consist of multiple hard disks, memory modules, network cards, processors etc., each of which, while carefully engineered, is capable of failing. While the probability of seeing any such failure in the lifetime (typically 3-5 years in industry) of a server can be somewhat small, these numbers get magnified across all devices hosted in a datacenter. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to a degradation in performance to end-users and can result in losses to the business. A sound understanding of the numbers as well as the causes behind these failures helps improve operational experience by not only allowing us to be better equipped to tolerate failures but also to bring down the hardware cost through engineering, directly leading to a saving for the company. To the best of our knowledge, this paper is the first attempt to study server failures and hardware repairs for large datacenters.
We present a detailed analysis of failure characteristics as well as a preliminary analysis on failure predictors. We hope that the results presented in this paper will serve as motivation to foster further research in this area. --- paper_title: Reliability analysis of grid computing systems paper_content: Grid computing systems are different from conventional distributed computing systems in their focus on large-scale resource sharing, where processors and communication have significant influence on grid computing reliability. Most previous research on conventional small-scale distributed systems ignored the communication time and processing time when studying the distributed program reliability, which is not practical in the analysis of grid computing systems. This paper describes the properties of grid computing systems and presents algorithms to analyze the grid program and system reliability. --- paper_title: Real-time distributed program reliability analysis paper_content: Distributed program reliability has been proposed as a reliability index for distributed computing systems to analyze the probability of the successful execution of a program, task, or mission in the system. However, current reliability models proposed for distributed program reliability evaluation do not capture the effects of real-time constraints. We propose an approach to the reliability analysis of distributed programs that addresses real-time constraints. Our approach is based on a model for evaluating transmission time, which allows us to find the time needed to complete execution of the program, task, or mission under evaluation. With information on time constraints, the corresponding Markov state space can then be defined for reliability computation. To speed up the evaluation process and reduce the size of the Markov state space, several dynamic reliability-preserving reductions are developed. A simple distributed real-time system is used as an example to illustrate the feasibility and uniqueness of the proposed approach. --- paper_title: A Hierarchical Modeling and Analysis for Grid Service Reliability paper_content: Grid computing is a recently developed technology. Although the developmental tools and techniques for the grid have been extensively studied, grid reliability analysis is not easy because of its complexity. This paper is the first one that presents a hierarchical model for the grid service reliability analysis and evaluation. The hierarchical modeling is mapped to the physical and logical architecture of the grid service system and makes the evaluation and calculation tractable by identifying the independence among layers. Various types of failures are interleaved in the grid computing environment, such as blocking failures, time-out failures, matchmaking failures, network failures, program failures, and resource failures. This paper investigates all of them to achieve a complete picture about grid service reliability. Markov models, queuing theory, and graph theory are mainly used to model, evaluate, and analyze the grid service reliability. Numerical examples are illustrated. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet.
Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. --- paper_title: A Taxonomy and Survey of Cloud Computing Systems paper_content: The computational world is becoming very large and complex. Cloud Computing has emerged as a popular computing model to support processing large volumetric data using clusters of commodity computers. According to J.Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data. It's some fascinating, large-scale processing of data that makes your head spin and appreciate the years of distributed computing fine-tuning applied to today's large problems. The evolution of cloud computing can handle such massive data as per on demand service. Nowadays the computational world is opting for pay-for-use models and Hype and discussion aside, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. Then we use this taxonomy to survey several existing cloud computing services developed by various projects world-wide such as Google, force.com, Amazon. We use the taxonomy and survey results not only to identify similarities and differences of the architectural approaches of cloud computing, but also to identify areas requiring further research. --- paper_title: Collaborative reliability prediction of service-oriented systems paper_content: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches. ---
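Several of the reliability abstracts above (the architecture-based prediction approach and the composite Web-service reliability model) estimate the reliability of a composed service from the reliabilities of its atomic services, the execution scenarios, and an operational profile. The sketch below illustrates only that general composition pattern in Python; the service names, reliability values, and scenario probabilities are invented for illustration and do not come from any of the cited papers.

```python
# Minimal sketch of scenario-based reliability composition (illustrative only).
# Assumption: a scenario succeeds only if every atomic service it invokes
# succeeds, and failures of different services are independent.

def scenario_reliability(service_reliability, services_used):
    """Reliability of a single execution scenario."""
    r = 1.0
    for s in services_used:
        r *= service_reliability[s]
    return r

def composite_reliability(service_reliability, operational_profile):
    """Operational-profile-weighted reliability of the composite service."""
    return sum(prob * scenario_reliability(service_reliability, used)
               for prob, used in operational_profile)

# Hypothetical per-service reliabilities (e.g., estimated from past invocations).
reliabilities = {"auth": 0.999, "search": 0.995, "payment": 0.990, "notify": 0.980}

# Hypothetical operational profile: (probability of scenario, services invoked).
profile = [
    (0.70, ["auth", "search"]),             # browse only
    (0.25, ["auth", "search", "payment"]),  # purchase
    (0.05, ["auth", "payment", "notify"]),  # re-order with notification
]

print(f"composite reliability: {composite_reliability(reliabilities, profile):.4f}")
```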
Title: A Survey on Reliability in Distributed Systems
Section 1: Overview
Description 1: Introduce the context of the survey and the importance of reliability in distributed systems.
Section 2: Pragmatic Requirements
Description 2: Explain the fundamental requirements for highly reliable distributed systems.
Section 3: Conceptual Framework
Description 3: Define basic concepts and highlight challenges and issues affecting reliability in various computing environments.
Section 4: Reliability
Description 4: Discuss the broader concept of reliability in software applications and its impact.
Section 5: Various Challenges and Factors Affecting Reliability
Description 5: Analyze different challenges and factors that impact the reliability of distributed systems.
Section 6: Existing Works
Description 6: Review existing models and approaches for fault tolerance and fault prediction in distributed systems.
Section 7: Fault Tolerant Techniques
Description 7: Describe the different fault tolerance mechanisms used in distributed systems.
Section 8: Fault or Failure Forecasting Techniques
Description 8: Analyze different techniques for forecasting faults or failures in distributed systems.
Section 9: User Centric Approaches
Description 9: Discuss approaches that focus on user behavior and usage to predict reliability.
Section 10: Architecture Based Approaches
Description 10: Explore approaches focusing on predicting reliability at the architectural design phase.
Section 11: State Based Approaches
Description 11: Describe the use of state-based models like the Markov chain for reliability prediction.
Section 12: Discussion and Limitations of Existing Models
Description 12: Critically analyze the limitations and open issues in existing reliability models for distributed systems.
Section 13: Conclusion and Future Work
Description 13: Summarize the findings of the survey, highlight the importance of comprehensive fault tolerance mechanisms, and propose areas for future research.
A Survey of Crowd Sensing Opportunistic Signals for Indoor Localization
12
--- paper_title: Opportunistic sensing: Security challenges for the new paradigm paper_content: We study the security challenges that arise in opportunistic people-centric sensing, a new sensing paradigm leveraging humans as part of the sensing infrastructure. Most prior sensor-network research has focused on collecting and processing environmental data using a static topology and an application-aware infrastructure, whereas opportunistic sensing involves collecting, storing, processing and fusing large volumes of data related to everyday human activities. This highly dynamic and mobile setting, where humans are the central focus, presents new challenges for information security, because data originates from sensors carried by people— not tiny sensors thrown in the forest or attached to animals. In this paper we aim to instigate discussion of this critical issue, because opportunistic people-centric sensing will never succeed without adequate provisions for security and privacy. To that end, we outline several important challenges and suggest general solutions that hold promise in this new sensing paradigm. --- paper_title: Challenge: ubiquitous location-aware computing and the "place lab" initiative paper_content: To be widely adopted, location-aware computing must be as effortless, familiar and rewarding as web search tools like Google. We envisage the global scale Place Lab, consisting of an open software base and a community building activity as a way to bootstrap the broad adoption of location-aware computing. The initiative is a laboratory because it will also be a vehicle for research and instruction, especially in the formative stages. The authors draw on their experiences with campus and building-scale location systems to identify the technological and social barriers to a truly ubiquitous deployment. With a grasp of these "barriers to adoption," we present a usage scenario, the problems in realizing this scenario, and how these problems will be addressed. We conclude with a sketch of the multi-organization cooperative being formed to move this effort forward. --- paper_title: Medusa: a programming framework for crowd-sensing applications paper_content: The ubiquity of smartphones and their on-board sensing capabilities motivates crowd-sensing, a capability that harnesses the power of crowds to collect sensor data from a large number of mobile phone users. Unlike previous work on wireless sensing, crowd-sensing poses several novel requirements: support for humans-in-the-loop to trigger sensing actions or review results, the need for incentives, as well as privacy and security. Beyond existing crowd-sourcing systems, crowd-sensing exploits sensing and processing capabilities of mobile devices. In this paper, we design and implement Medusa, a novel programming framework for crowd-sensing that satisfies these requirements. Medusa provides high-level abstractions for specifying the steps required to complete a crowd-sensing task, and employs a distributed runtime system that coordinates the execution of these tasks between smartphones and a cluster on the cloud. We have implemented ten crowd-sensing tasks on a prototype of Medusa. We find that Medusa task descriptions are two orders of magnitude smaller than standalone systems required to implement those crowd-sensing tasks, and the runtime has low overhead and is robust to dynamics and resource attacks. 
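The Medusa abstract above describes high-level abstractions for specifying the stages of a crowd-sensing task, some running on the phone and some requiring a human in the loop, coordinated by a runtime split between the device and the cloud. The Python sketch below is only a generic illustration of that staged pattern; it is not Medusa's actual task language or API, and the stage names, data, and behaviour are all hypothetical.

```python
# Generic illustration of a staged crowd-sensing task (sense -> human review ->
# upload). This is NOT Medusa's task language or runtime; it only sketches the
# pattern of chaining sensing stages with a human-in-the-loop step.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[list], list]   # transforms the data produced so far
    needs_human: bool = False     # human-in-the-loop stages would prompt the user

def sense_audio(_):
    # Placeholder for reading a short microphone sample on the phone.
    return [{"type": "audio", "payload": "<pcm bytes>"}]

def human_review(samples):
    # Placeholder: in a real deployment the user would accept/reject each sample.
    return [s for s in samples if s.get("payload")]

def upload(samples):
    # Placeholder for sending approved samples to a collection server.
    print(f"uploading {len(samples)} sample(s)")
    return samples

def run_task(stages: List[Stage]):
    data = []
    for stage in stages:
        print(f"running stage: {stage.name} (human={stage.needs_human})")
        data = stage.run(data)
    return data

run_task([
    Stage("sense", sense_audio),
    Stage("review", human_review, needs_human=True),
    Stage("upload", upload),
])
```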
--- paper_title: The case for crowd computing paper_content: We introduce and motivate "crowd computing", which combines mobile devices and social interactions to achieve large-scale distributed computation. An opportunistic network of mobile devices offers substantial aggregate bandwidth and processing power. In this paper, we analyse encounter traces to place an upper bound on the amount of computation that is possible in such networks. We also investigate a practical task-farming algorithm that approaches this upper bound, and show that exploiting social structure can dramatically increase its performance. --- paper_title: reCAPTCHA: Human-Based Character Recognition via Web Security Measures paper_content: CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99%, matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words. --- paper_title: Social sensing for epidemiological behavior change paper_content: An important question in behavioral epidemiology and public health is to understand how individual behavior is affected by illness and stress. Although changes in individual behavior are intertwined with contagion, epidemiologists today do not have sensing or modeling tools to quantitatively measure its effects in real-world conditions. In this paper, we propose a novel application of ubiquitous computing. We use mobile phone based co-location and communication sensing to measure characteristic behavior changes in symptomatic individuals, reflected in their total communication, interactions with respect to time of day (e.g., late night, early morning), diversity and entropy of face-to-face interactions and movement. Using these extracted mobile features, it is possible to predict the health status of an individual, without having actual health measurements from the subject. Finally, we estimate the temporal information flux and implied causality between physical symptoms, behavior and mental health. --- paper_title: Motion Recognition Assisted Indoor Wireless Navigation on a Mobile Phone paper_content: The paper presents an indoor navigation solution combining physical motion recognition with WLAN positioning on a smart phone. Orientation-independent features are extracted from vertical and horizontal components of acceleration. The simple features such as the mean of horizontal acceleration, the variance of acceleration magnitude, and the variance of horizontal acceleration are selected as the nodes in a decision tree. Six common motion modes during indoor navigation, e.g., static, standing with hand swinging, normal walking while holding the phone in hand, normal walking with hand swinging, fast walking, and U-turning are detected. A fingerprinting-based WLAN positioning approach offers the headings and initial positions for pedestrian dead reckoning at intervals of about 10 seconds.
The velocities for six motion modes are trained by five testers and applied in the dead reckoning during the WLAN positioning gap. Test results indicate that the motion mode is recognized correctly in 95% of test cases. The field test shows 4.6m horizontal mean error for motion recognition assisted indoor wireless positioning. --- paper_title: Indoor Positioning and Navigation with Camera Phones paper_content: This low-cost indoor navigation system runs on off-the-shelf camera phones. More than 2,000 users at four different large-scale events have already used it. The system uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers. The required infrastructure is limited to paper markers and static digital maps, and common devices are used, facilitating quick deployment in new environments. The authors have studied the application quantitatively in a controlled environment and qualitatively during deployment at four large international events. According to test users, marker-based navigation is easier to use than conventional mobile digital maps. Moreover, the users' location awareness in navigation tasks improved. Experiences drawn from questionnaires, usage log data, and user interviews further highlight the benefits of this approach. --- paper_title: iParking: An Intelligent Indoor Location-Based Smartphone Parking Service paper_content: Indoor positioning technologies have been widely studied with a number of solutions being proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through Internet in a distributed architecture. The client software is a light-weight application running on a smartphone, and it works essentially based on a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the built-in sensors of the smartphones. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. The iParking is a novel service in terms of cost- and energy-efficient solution. --- paper_title: Participatory Sensing: Crowdsourcing Data from Mobile Smartphones in Urban Spaces paper_content: The recent wave of sensor-rich, Internet-enabled, smart mobile devices such as the Apple iPhone has opened the door for a novel paradigm for monitoring the urban landscape known as participatory sensing. Using this paradigm, ordinary citizens can collect multi-modal data streams from the surrounding environment using their mobile devices and share the same using existing communication infrastructure (e.g., 3G service or WiFi access points). The data contributed from multiple participants can be combined to build a spatiotemporal view of the phenomenon of interest and also to extract important community statistics. 
Given the ubiquity of mobile phones and the high density of people in metropolitan areas, participatory sensing can achieve an unprecedented level of coverage in both space and time for observing events of interest in urban spaces. Several exciting participatory sensing applications have emerged in recent years. For example, GPS traces uploaded by drivers and passengers can be used to generate real time traffic statistics. Similarly, street-level audio samples collected by pedestrians can be aggregated to create a citywide noise map. In this advanced seminar, we will provide a comprehensive overview of this new and exciting paradigm and outline the major research challenges. --- paper_title: Using LS-SVM Based Motion Recognition for Smartphone Indoor Wireless Positioning paper_content: The paper presents an indoor navigation solution by combining physical motion recognition with wireless positioning. Twenty-seven simple features are extracted from the built-in accelerometers and magnetometers in a smartphone. Eight common motion states used during indoor navigation are detected by a Least Square-Support Vector Machines (LS-SVM) classification algorithm, e.g., static, standing with hand swinging, normal walking while holding the phone in hand, normal walking with hand swinging, fast walking, U-turning, going up stairs, and going down stairs. The results indicate that the motion states are recognized with an accuracy of up to 95.53% for the test cases employed in this study. A motion recognition assisted wireless positioning approach is applied to determine the position of a mobile user. Field tests show a 1.22 m mean error in “Static Tests” and a 3.53 m in “Stop-Go Tests”. --- paper_title: Human computation: a survey and taxonomy of a growing field paper_content: The rapid growth of human computation within research and industry has produced many novel ideas aimed at organizing web users to do great things. However, the growth is not adequately supported by a framework with which to understand each new system in the context of the old. We classify human computation systems to help identify parallels between different systems and reveal "holes" in the existing work as opportunities for new research. Since human computation is often confused with "crowdsourcing" and other terms, we explore the position of human computation with respect to these related topics. --- paper_title: RADAR: an in-building RF-based user location and tracking system paper_content: The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy. --- paper_title: Visual-aided Two-dimensional Pedestrian Indoor Navigation with a Smartphone paper_content: Indoor pedestrian positioning sets severe challenges for a navigation system. To be applicable for pedestrian navigation the platform used has to be small in size and reasonably priced. 
Smartphones fulfill these requirements satisfactorily. GNSS signals are degraded indoors and, in order to obtain accurate navigation, aiding from other sensors is needed. Self-contained sensors provide valuable information about the motion of the pedestrian and when integrated with GNSS measurements a position solution is typically obtainable indoors. The accuracy is however decreased due to errors in the measurements of the self-contained sensors introduced by various environmental disturbances. When the effect of the disturbance is constrained using visual aiding the accuracy can be increased to an acceptable level. This paper introduces a visual-aided two-dimensional indoor pedestrian navigation system integrating measurements from GNSS, Bluetooth, WLAN, self-contained sensors, and heading change information obtained from consecutive images. The integration is performed with an Extended Kalman filter. Reliability information of the heading change measurements calculated from images using vanishing points is provided to the filter and utilized in the integration. The visual-aiding algorithm is computationally lightweight taking into account the restricted resources of the smartphone. In the conducted experiment, the accuracy of the position solution is increased by 1.2 meters due to the visual-aiding. --- paper_title: Efficient WiFi fingerprint training using semi-supervised learning paper_content: Fingerprinting-based WiFi positioning approaches need an off-line training phase to build a radio map with a received signal strength indication vector for each reference point. In existing systems, this training phase may require a tremendous amount of work to achieve a satisfying location result. To cut down on the workload notably while still guaranteeing the location result, this article introduces an efficient WiFi fingerprint training method, Fa-Fi (fast fingerprint generation), which uses semi-supervised learning. The proposed method can reduce the training phase time cost to about 1/5 while guaranteeing the localization accuracy. --- paper_title: A Probabilistic Approach to WLAN User Location Estimation paper_content: We estimate the location of a WLAN user based on radio signal strength measurements performed by the user's mobile terminal. In our approach the physical properties of the signal propagation are not taken into account directly. Instead the location estimation is regarded as a machine learning problem in which the task is to model how the signal strengths are distributed in different geographical areas based on a sample of measurements collected at several known locations. We present a probabilistic framework for solving the location estimation problem. In the empirical part of the paper we demonstrate the feasibility of this approach by reporting results of field tests in which a probabilistic location estimation method is validated in a real-world indoor environment. --- paper_title: A wireless LAN-based indoor positioning technology paper_content: Context-aware computing is an emerging computing paradigm that can provide new or improved services by exploiting user context information. In this paper, we present a wireless-local-area-network-based (WLAN-based) indoor positioning technology. The wireless device deploys a position-determination model to gather location information from collected WLAN signals. A model-based signal distribution training scheme is proposed to trade off the accuracy of signal distribution and training workload.
A tracking-assistant positioning algorithm is presented to employ knowledge of the area topology to assist the procedure of position determination. We have set up a positioning system at the IBM China Research Laboratory. Our experimental results indicate an accuracy of 2 m with a 90% probability for static devices and, for moving (walking) devices, an accuracy of 5 m with a 90% probability. Moreover, the complexity of the training procedure is greatly reduced compared with other positioning algorithms. --- paper_title: WLAN location determination via clustering and probability distributions paper_content: We present a WLAN location determination technique, the Joint Clustering technique, that uses: (1) signal strength probability distributions to address the noisy wireless channel, and (2) clustering of locations to reduce the computational cost of searching the radio map. The Joint Clustering technique reduces computational cost by more than an order of magnitude, compared to the current state of the art techniques, allowing non-centralized implementation on mobile clients. Results from 802.11-equipped iPAQ implementations show that the new technique gives user location to within 7 feet with over 90% accuracy. --- paper_title: RADAR: an in-building RF-based user location and tracking system paper_content: The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy. --- paper_title: The uses of ambient light for ubiquitous positioning paper_content: This paper proposed ambient light (ambilight) as a new type of signal sources for positioning. The possibility and methods of ambilight positioning were presented in this paper. It has been shown that two kinds of observables of ambient light can be used for positioning through different principles. Ambilight intensity spectrum measurements have highly location dependency, and they can be used for positioning with the traditional fingerprinting approach. Total ambilight irradiance intensity is used to detect the proximity of a lighting source, and a location solution can be further resolved with the support of knowledge of lighting infrastructure. Ambilight positioning can work in areas where other traditional techniques are not able to function. An ambilight sensor is cost-efficient and miniature in size, and it can be easily integrated with other sensors to form a hybrid positioning system. This paper was concluded with discussions on the possibility, applicability, challenges and outlook of the new ambient light positioning techniques. --- paper_title: SurroundSense: mobile phone localization via ambience fingerprinting paper_content: A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks, McDonalds). 
While extensive research has been performed in physical localization, there have been few attempts in recognizing logical locations. This paper argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. We postulate that ambient sound, light, and color in a place convey a photo-acoustic signature that can be sensed by the phone's camera and microphone. In-built accelerometers in some phones may also be useful in inferring broad classes of user-motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close. We propose SurroundSense, a mobile phone based system that explores logical localization via ambience fingerprinting. Evaluation results from 51 different stores show that SurroundSense can achieve an average accuracy of 87% when all sensing modalities are employed. We believe this is an encouraging result, opening new possibilities in indoor localization. --- paper_title: WiFi-SLAM Using Gaussian Process Latent Variable Models paper_content: WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization. --- paper_title: FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem paper_content: The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data. 
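The FastSLAM abstract above factors the SLAM posterior into a particle filter over robot paths, with an independent low-dimensional filter per landmark inside each particle. The sketch below shows that structure in a deliberately simplified form (planar position only, direct linear observations of landmark positions in the robot frame, invented noise values); the actual algorithm additionally handles headings, range-bearing measurements, data association, and tree-based landmark storage for logarithmic scaling.

```python
# Simplified FastSLAM-style sketch: particles over poses, one small Kalman
# filter per landmark inside each particle. Observations are assumed to be
# direct (noisy) measurements of a landmark's position relative to the robot,
# so the per-landmark update is linear. All noise values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 100
MOTION_NOISE = 0.05                  # std-dev of per-step pose noise (assumed)
OBS_COV = 0.1 ** 2 * np.eye(2)       # observation noise covariance (assumed)

class Particle:
    def __init__(self):
        self.pose = np.zeros(2)      # planar position; heading omitted for brevity
        self.landmarks = {}          # id -> (mean 2-vector, 2x2 covariance)
        self.weight = 1.0

def predict(p, control):
    """Sample a new pose hypothesis from the (assumed) motion model."""
    p.pose = p.pose + control + rng.normal(0.0, MOTION_NOISE, size=2)

def update(p, lm_id, z_rel):
    """Kalman update of one landmark; z_rel is the landmark seen from the robot."""
    z_world = p.pose + z_rel
    if lm_id not in p.landmarks:
        p.landmarks[lm_id] = (z_world, OBS_COV.copy())   # initialise new landmark
        return
    mean, cov = p.landmarks[lm_id]
    S = cov + OBS_COV                                    # innovation covariance (H = I)
    K = cov @ np.linalg.inv(S)                           # Kalman gain
    innov = z_world - mean
    p.landmarks[lm_id] = (mean + K @ innov, (np.eye(2) - K) @ cov)
    # Importance weight: Gaussian likelihood of the innovation.
    p.weight *= float(np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)
                      / (2 * np.pi * np.sqrt(np.linalg.det(S))))

def resample(particles):
    """Draw a new particle set in proportion to the importance weights."""
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    new_set = []
    for i in idx:
        src = particles[i]
        q = Particle()
        q.pose = src.pose.copy()
        q.landmarks = {k: (m.copy(), c.copy()) for k, (m, c) in src.landmarks.items()}
        new_set.append(q)
    return new_set

# Tiny demo: two motion steps, each followed by an observation of landmark 0.
particles = [Particle() for _ in range(N_PARTICLES)]
for step in range(2):
    for p in particles:
        predict(p, control=np.array([1.0, 0.0]))
        update(p, lm_id=0, z_rel=np.array([2.0 - step, 1.0]))
    particles = resample(particles)
print("mean pose estimate:", np.round(np.mean([p.pose for p in particles], axis=0), 2))
```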
--- paper_title: A Tutorial on Graph-Based SLAM paper_content: Being able to build a map of the environment and to simultaneously localize within this map is an essential skill for mobile robots navigating in unknown environments in the absence of external referencing systems such as GPS. This so-called simultaneous localization and mapping (SLAM) problem has been one of the most popular research topics in mobile robotics for the last two decades and efficient approaches for solving this task have been proposed. One intuitive way of formulating SLAM is to use a graph whose nodes correspond to the poses of the robot at different points in time and whose edges represent constraints between the poses. The latter are obtained from observations of the environment or from movement actions carried out by the robot. Once such a graph is constructed, the map can be computed by finding the spatial configuration of the nodes that is mostly consistent with the measurements modeled by the edges. In this paper, we provide an introductory description to the graph-based SLAM problem. Furthermore, we discuss a state-of-the-art solution that is based on least-squares error minimization and exploits the structure of the SLAM problems during optimization. The goal of this tutorial is to enable the reader to implement the proposed methods from scratch. --- paper_title: Simultaneous Localization and Mapping for pedestrians using distortions of the local magnetic field intensity in large indoor environments paper_content: We present a Simultaneous Localization and Mapping (SLAM) algorithm based on measurements of the ambient magnetic field strength (MagSLAM) that allows quasi-real-time mapping and localization in buildings, where pedestrians with foot-mounted sensors are the subjects to be localized. We assume two components to be present: firstly a source of odometry (human step measurements), and secondly a sensor of the local magnetic field intensity. Our implementation follows the FastSLAM factorization using a particle filter. We augment the hexagonal transition map used in the pre-existing FootSLAM algorithm with local maps of the magnetic field strength, binned in a hierarchical hexagonal structure. We performed extensive experiments in a number of different buildings and present the results for five data sets for which we have ground truth location information. We consider the results obtained using MagSLAM to be strong evidence that scalable and accurate localization is possible without an a priori map. --- paper_title: Growing an organic indoor location system paper_content: Most current methods for 802.11-based indoor localization depend on surveys conducted by experts or skilled technicians. Some recent systems have incorporated surveying by users. Structuring localization systems "organically," however, introduces its own set of challenges: conveying uncertainty, determining when user input is actually required, and discounting erroneous and stale data. Through deployment of an organic location system in our nine-story building, which contains nearly 1,400 distinct spaces, we evaluate new algorithms for addressing these challenges. We describe the use of Voronoi regions for conveying uncertainty and reasoning about gaps in coverage, and a clustering method for identifying potentially erroneous user data. Our algorithms facilitate rapid coverage while maintaining positioning accuracy comparable to that achievable with survey-driven indoor deployments.
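The graph-based SLAM tutorial abstract above formulates mapping as least-squares optimization over a pose graph whose edges encode relative measurements between poses. Below is a deliberately tiny one-dimensional illustration of that formulation with made-up measurements and a simple anchored least-squares solve; real systems optimize over SE(2)/SE(3) poses, weight edges by their information matrices, and rely on sparse solvers.

```python
# 1-D pose-graph toy example (illustrative only): poses x_0..x_3, odometry
# edges plus one loop-closure edge, solved as an anchored linear least-squares
# problem. The measurement values are invented.

import numpy as np

n_poses = 4
# Edges: (i, j, measured offset x_j - x_i); the last edge is a loop closure.
edges = [(0, 1, 1.05), (1, 2, 0.98), (2, 3, 1.02), (0, 3, 2.90)]

rows, rhs = [], []
for i, j, z in edges:
    r = np.zeros(n_poses)
    r[j] += 1.0      # each edge contributes the residual (x_j - x_i) - z
    r[i] -= 1.0
    rows.append(r)
    rhs.append(z)

# Gauge constraint: anchor pose 0 at the origin with a strongly weighted row.
anchor = np.zeros(n_poses)
anchor[0] = 1.0
rows.append(1e6 * anchor)
rhs.append(0.0)

J = np.vstack(rows)
b = np.array(rhs)
x, *_ = np.linalg.lstsq(J, b, rcond=None)
print("estimated poses:", np.round(x, 3))
```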
--- paper_title: Crowdsourcing with Smartphones paper_content: Smartphones can reveal crowdsourcing's full potential and let users transparently contribute to complex and novel problem solving. This emerging area is illustrated through a taxonomy that classifies the mobile crowdsourcing field and through three new applications that optimize location-based search and similarity services based on crowd-generated data. Such applications can be deployed on SmartLab, a cloud of more than 40 Android devices deployed at the University of Cyprus that provides an open testbed to facilitate research and development of smartphone applications on a massive scale. --- paper_title: Growing an organic indoor location system paper_content: Most current methods for 802.11-based indoor localization depend on surveys conducted by experts or skilled technicians. Some recent systems have incorporated surveying by users. Structuring localization systems "organically," however, introduces its own set of challenges: conveying uncertainty, determining when user input is actually required, and discounting erroneous and stale data. Through deployment of an organic location system in our nine-story building, which contains nearly 1,400 distinct spaces, we evaluate new algorithms for addressing these challenges. We describe the use of Voronoi regions for conveying uncertainty and reasoning about gaps in coverage, and a clustering method for identifying potentially erroneous user data. Our algorithms facilitate rapid coverage while maintaining positioning accuracy comparable to that achievable with survey-driven indoor deployments. --- paper_title: Combining human and machine intelligence in large-scale crowdsourcing paper_content: We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowd-sourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens' science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility. --- paper_title: Quality Control in Crowdsourcing Systems: Issues and Directions paper_content: As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom toward solving problems. This article proposes a framework for characterizing various dimensions of quality control in crowdsourcing systems, a critical issue. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail. --- paper_title: Covariance consistency methods for fault-tolerant distributed data fusion paper_content: This paper presents a general, rigorous, and fault-tolerant framework for maintaining consistent mean and covariance estimates in an arbitrary, dynamic, distributed network of information processing nodes.
In particular, a solution is provided that addresses the information deconfliction problem that arises when estimates from two or more different nodes are determined to be inconsistent with each other, e.g., when two high precision (small covariance) estimates place the position of a particular object at very different locations. The challenge is to be able to resolve such inconsistencies without having to access and exploit global information to determine which of the estimates is spurious. The solution proposed in this paper is called Covariance Union. --- paper_title: Crowd IQ: aggregating opinions to boost performance paper_content: We show how the quality of decisions based on the aggregated opinions of the crowd can be conveniently studied using a sample of individual responses to a standard IQ questionnaire. We aggregated the responses to the IQ questionnaire using simple majority voting and a machine learning approach based on a probabilistic graphical model. The score for the aggregated questionnaire, Crowd IQ, serves as a quality measure of decisions based on aggregating opinions, which also allows quantifying individual and crowd performance on the same scale. We show that Crowd IQ grows quickly with the size of the crowd but saturates, and that for small homogeneous crowds the Crowd IQ significantly exceeds the IQ of even their most intelligent member. We investigate alternative ways of aggregating the responses and the impact of the aggregation method on the resulting Crowd IQ. We also discuss Contextual IQ, a method of quantifying the individual participant's contribution to the Crowd IQ based on the Shapley value from cooperative game theory. --- paper_title: Smartphone-Based Collaborative and Autonomous Radio Fingerprinting paper_content: Although active research has recently been conducted on received signal strength (RSS) fingerprint-based indoor localization, most of the current systems hardly overcome the costly and time-consuming offline training phase. In this paper, we propose an autonomous and collaborative RSS fingerprint collection and localization system. Mobile users track their position with inertial sensors and measure RSS from the surrounding access points. In this scenario, anonymous mobile users automatically collect data in daily life without purposefully surveying an entire building. The server progressively builds up a precise radio map as more users interact with their fingerprint data. The time drift error of inertial sensors is also compensated at run-time with the fingerprint-based localization, which runs with the collective fingerprints being currently built by the server. The proposed system has been implemented on a recent Android smartphone. The experiment results show that reasonable location accuracy is obtained with automatic fingerprinting in indoor environments. --- paper_title: LOF: identifying density-based local outliers paper_content: For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood.
We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical. --- paper_title: Learning From Crowds paper_content: For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline. --- paper_title: Database updating through user feedback in fingerprint-based Wi-Fi location systems paper_content: Wi-Fi fingerprinting is a technique which can provide location in GPS-denied environments, relying exclusively on Wi-Fi signals. It first requires the construction of a database of "fingerprints", i.e. signal strengths from different access points (APs) at different reference points in the desired coverage area. The location of the device is then obtained by measuring the signal strengths at its location, and comparing it with the different reference fingerprints in the database. The main disadvantage of this technique is the labour required to build and maintain the fingerprints database, which has to be rebuilt every time a significant change in the wireless environment occurs, such as installation or removal of new APs, changes in the layout of a building, etc. This paper investigates a new method to utilise user feedback as a way of monitoring changes in the wireless environment. It is based on a system of "points" given to each AP in the database. When an AP is switched off, the number of points associated with that AP will gradually reduce as the users give feedback, until it is eventually deleted from the database. If a new AP is installed, the system will detect it and update the database with new fingerprints. Our proposed system has two main advantages. First it can be used as a tool to monitor the wireless environment in a given place, detecting faulty APs or unauthorised installation of new ones. Second, it regulates the size of the database, unlike other systems where feedback is only used to insert new fingerprints in the database. --- paper_title: The multidimensional wisdom of crowds paper_content: Distributing labeling tasks among hundreds or thousands of annotators is an increasingly important method for annotating large datasets. We present a method for estimating the underlying value (e.g. the class) of each image from (noisy) annotations provided by multiple annotators. Our method is based on a model of the image formation and annotation process. Each image has different characteristics that are represented in an abstract Euclidean space.
Each annotator is modeled as a multidimensional entity with variables representing competence, expertise and bias. This allows the model to discover and represent groups of annotators that have different sets of skills and knowledge, as well as groups of images that differ qualitatively. We find that our model predicts ground truth labels on both synthetic and real data more accurately than state of the art methods. Experiments also show that our model, starting from a set of binary labels, may discover rich information, such as different "schools of thought" amongst the annotators, and can group together images belonging to separate categories. --- paper_title: Trust-based fusion of untrustworthy information in crowdsourcing applications paper_content: In this paper, we address the problem of fusing untrustworthy reports provided from a crowd of observers, while simultaneously learning the trustworthiness of individuals. To achieve this, we construct a likelihood model of the users's trustworthiness by scaling the uncertainty of its multiple estimates with trustworthiness parameters. We incorporate our trust model into a fusion method that merges estimates based on the trust parameters and we provide an inference algorithm that jointly computes the fused output and the individual trustworthiness of the users based on the maximum likelihood framework. We apply our algorithm to cell tower local- isation using real-world data from the OpenSignal project and we show that it outperforms the state-of-the-art methods in both accuracy, by up to 21%, and consistency, by up to 50% of its predictions. --- paper_title: Spatial Weighted Outlier Detection paper_content: Spatial outliers are the spatial objects with distinct features from their surrounding neighbors. Detection of spatial outliers helps reveal valuable information from large spatial data sets. In many real applications, spatial objects can not be simply abstracted as isolated points. They have different boundary, size, volume, and location. These spatial properties affect the impact of a spatial object on its neighbors and should be taken into consideration. In this paper, we propose two spatial outlier detection methods which integrate the impact of spatial properties to the outlierness measurement. Experimental results on a real data set demonstrate the effectiveness of the proposed algorithms. --- paper_title: Whose vote should count more: Optimal integration of labels from labelers of unknown expertise paper_content: Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. 
On both simulated and real data, we demonstrate that the model outperforms the commonly used "Majority Vote" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers. --- paper_title: WLAN location determination via clustering and probability distributions paper_content: We present a WLAN location determination technique, the Joint Clustering technique, that uses: (1) signal strength probability distributions to address the noisy wireless channel, and (2) clustering of locations to reduce the computational cost of searching the radio map. The Joint Clustering technique reduces computational cost by more than an order of magnitude, compared to the current state of the art techniques, allowing non-centralized implementation on mobile clients. Results from 802.11-equipped iPAQ implementations show that the new technique gives user location to within 7 feet with over 90% accuracy. --- paper_title: RADAR: an in-building RF-based user location and tracking system paper_content: The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy. --- paper_title: Prediction of Protein Structural Classes paper_content: A protein is usually classified into one of the following five structural classes: α, β, α+β, α/β, and ζ (irregular). The structural class of a protein is correlated with its amino acid composition. However, given the amino acid composition of a protein, how may one predict its structural class? Various efforts have been made in addressing this problem. This review addresses the progress in this field, with the focus on the state of the art, which is featured by a novel prediction algorithm and a recently developed database. The novel algorithm is characterized by a covariance matrix that takes into account the coupling effect among different amino acid components of a protein. The new database was established based on the requirement that the classes should have (1) as many nonhomologous structures as possible, (2) good quality structure, and (3) typical or distinguishable features for each of the structural classes concerned. The very high success rate for both the training-set proteins and the testing-set proteins, which has been further validated by a simulated analysis and a jackknife analysis, indicates that it is possible to predict the structural class of a protein according to its amino acid composition if an ideal and complete database can be established. It also suggests that the overall fold of a protein is basically determined by its amino acid composition. --- paper_title: Challenge: ubiquitous location-aware computing and the "place lab" initiative paper_content: To be widely adopted, location-aware computing must be as effortless, familiar and rewarding as web search tools like Google.
We envisage the global scale Place Lab, consisting of an open software base and a community building activity as a way to bootstrap the broad adoption of location-aware computing. The initiative is a laboratory because it will also be a vehicle for research and instruction, especially in the formative stages. The authors draw on their experiences with campus and building-scale location systems to identify the technological and social barriers to a truly ubiquitous deployment. With a grasp of these "barriers to adoption," we present a usage scenario, the problems in realizing this scenario, and how these problems will be addressed. We conclude with a sketch of the multi-organization cooperative being formed to move this effort forward. --- paper_title: Reliability considerations of multi-sensor multi-network pedestrian navigation paper_content: This study discusses a simple multi-sensor multi-network positioning system that integrates global positioning satellite (GPS) measurements, accelerometers and a digital compass with wireless network localisation utilising a pedestrian motion model and dead reckoning. The feasibility of the multi-technology system for seamless outdoor to indoor pedestrian navigation is discussed with the emphasis on reliability issues and adaptability requirements. The multi-sensor multi-network positioning system is developed for challenging navigation environments such as indoors and deep urban canyons. This study considers how to estimate and improve such a multi-sensor multi-network system's reliability and estimate its accuracy. An outdoor to indoor pedestrian test is conducted. Adaptive filtering performance of the multi-technology solution as well as general measurement quality monitoring and error detection when an over-determined solution is at hand is shown. Reliability estimation utilising adaptation in the form of environment detection to estimate the final positioning accuracy is also presented. --- paper_title: A Hybrid Smartphone Indoor Positioning Solution for Mobile LBS paper_content: Smartphone positioning is an enabling technology used to create new business in the navigation and mobile location-based services (LBS) industries. This paper presents a smartphone indoor positioning engine named HIPE that can be easily integrated with mobile LBS. HIPE is a hybrid solution that fuses measurements of smartphone sensors with wireless signals. The smartphone sensors are used to measure the user’s motion dynamics information (MDI), which represent the spatial correlation of various locations. Two algorithms based on hidden Markov model (HMM) problems, the grid-based filter and the Viterbi algorithm, are used in this paper as the central processor for data fusion to resolve the position estimates, and these algorithms are applicable for different applications, e.g., real-time navigation and location tracking, respectively. HIPE is more widely applicable for various motion scenarios than solutions proposed in previous studies because it uses no deterministic motion models, which have been commonly used in previous works. The experimental results showed that HIPE can provide adequate positioning accuracy and robustness for different scenarios of MDI combinations. HIPE is a cost-efficient solution, and it can work flexibly with different smartphone platforms, which may have different types of sensors available for the measurement of MDI data. The reliability of the positioning solution was found to increase with increasing precision of the MDI data. 
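The HiPE abstract above describes fusing smartphone motion dynamics information (MDI) with wireless measurements through HMM inference: a grid-based filter for real-time tracking and the Viterbi algorithm for offline tracking. The sketch below is not the HiPE implementation; it is a minimal illustration of the grid-based (forward) filtering idea, assuming a hypothetical one-dimensional corridor grid, a step-driven transition model, and Gaussian Wi-Fi RSS likelihoods whose parameters are invented for illustration.

```python
# Minimal sketch of a grid-based HMM filter that fuses step detection (motion
# dynamics) with Wi-Fi RSS observations. All numbers are illustrative
# assumptions, not values from the cited paper.
import numpy as np

# Hypothetical 1-D corridor discretised into 1 m grid cells.
N_CELLS = 20
cells = np.arange(N_CELLS)

# Assumed radio map: expected RSS (dBm) of one access point per cell,
# decaying with distance from an AP placed at cell 0.
radio_map = -40.0 - 2.5 * cells
RSS_SIGMA = 4.0  # assumed measurement noise (dB)

def transition_matrix(step_detected: bool) -> np.ndarray:
    """Motion model: after a detected step the user most likely moved one
    cell forward; otherwise the user most likely stayed in place."""
    T = np.zeros((N_CELLS, N_CELLS))
    for i in range(N_CELLS):
        moves = [(1, 0.7), (0, 0.2), (-1, 0.1)] if step_detected else \
                [(0, 0.9), (1, 0.05), (-1, 0.05)]
        for d, p in moves:
            j = min(max(i + d, 0), N_CELLS - 1)  # clamp at corridor ends
            T[i, j] += p
    return T

def rss_likelihood(rss: float) -> np.ndarray:
    """Gaussian likelihood of an observed RSS value for every grid cell."""
    return np.exp(-0.5 * ((rss - radio_map) / RSS_SIGMA) ** 2)

def grid_filter(observations):
    """Forward (filtering) pass: MAP cell after each observation.
    `observations` is a sequence of (step_detected, measured_rss) tuples."""
    belief = np.full(N_CELLS, 1.0 / N_CELLS)  # uniform prior
    estimates = []
    for step_detected, rss in observations:
        belief = belief @ transition_matrix(step_detected)   # predict
        belief *= rss_likelihood(rss)                        # update
        belief /= belief.sum()                               # normalise
        estimates.append(int(np.argmax(belief)))
    return estimates

if __name__ == "__main__":
    # Fabricated walk: the user takes a few steps away from the AP.
    obs = [(True, -43), (True, -45), (False, -46), (True, -48), (True, -51)]
    print("MAP cell after each observation:", grid_filter(obs))
```

Replacing the forward pass with a Viterbi maximisation over the same transition and likelihood terms would yield the offline, most-likely-path variant mentioned in the abstract.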
--- paper_title: Growing an organic indoor location system paper_content: Most current methods for 802.11-based indoor localization depend on surveys conducted by experts or skilled technicians. Some recent systems have incorporated surveying by users. Structuring localization systems "organically," however, introduces its own set of challenges: conveying uncertainty, determining when user input is actually required, and discounting erroneous and stale data. Through deployment of an organic location system in our nine-story building, which contains nearly 1,400 distinct spaces, we evaluate new algorithms for addressing these challenges. We describe the use of Voronoi regions for conveying uncertainty and reasoning about gaps in coverage, and a clustering method for identifying potentially erroneous user data. Our algorithms facilitate rapid coverage while maintaining positioning accuracy comparable to that achievable with survey-driven indoor deployments. --- paper_title: Redpin - adaptive, zero-configuration indoor localization through user collaboration paper_content: Redpin is a fingerprint-based indoor localization system designed and built to run on mobile phones. The basic principles of our system are based on known systems like Place Lab or Radar. However, with Redpin it is possible to consider the signal-strength of GSM, Bluetooth, and WiFi access points on a mobile phone. Moreover, we devised methods to omit the time-consuming training phase and instead incorporate a folksonomy-like approach where the users train the system while using it. Finally, this approach also enables the system to expeditiously adapt to changes in the environment, caused for example by replaced access points. --- paper_title: WiFi-SLAM Using Gaussian Process Latent Variable Models paper_content: WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization. --- paper_title: Crowdsourcing with Smartphones paper_content: Smartphones can reveal crowdsourcing's full potential and let users transparently contribute to complex and novel problem solving. This emerging area is illustrated through a taxonomy that classifies the mobile crowdsourcing field and through three new applications that optimize location-based search and similarity services based on crowd-generated data. Such applications can be deployed on SmartLab, a cloud of more than 40 Android devices deployed at the University of Cyprus that provides an open testbed to facilitate research and development of smartphone applications on a massive scale. 
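Several of the systems described above (Redpin, the organic location system, and the crowdsourcing platforms) share a common core: user-contributed RSS fingerprints are aggregated per place, and a new scan is matched against the aggregated radio map. The following sketch is only an illustrative reduction of that idea, assuming a simple per-place mean-RSS aggregation and a nearest-neighbour match; it does not reproduce any specific system's similarity metric or data model.

```python
# Illustrative sketch of organic/crowdsourced fingerprinting: users contribute
# (place label, AP -> RSS) scans, the server aggregates them into one
# fingerprint per place, and a new scan is matched to the closest aggregated
# fingerprint. The mean-RSS aggregation and Euclidean distance with a penalty
# for unheard APs are simplifying assumptions.
from collections import defaultdict

MISSING_RSS = -100.0  # assumed stand-in for APs not heard in a scan

class OrganicRadioMap:
    def __init__(self):
        # place -> AP id -> list of contributed RSS values
        self._contributions = defaultdict(lambda: defaultdict(list))

    def contribute(self, place: str, scan: dict) -> None:
        """Store one user-contributed scan {ap_id: rss_dbm} for a place."""
        for ap, rss in scan.items():
            self._contributions[place][ap].append(rss)

    def fingerprint(self, place: str) -> dict:
        """Aggregate contributions into a single mean-RSS fingerprint."""
        return {ap: sum(v) / len(v)
                for ap, v in self._contributions[place].items()}

    def locate(self, scan: dict) -> str:
        """Return the place whose aggregated fingerprint is closest to `scan`."""
        best_place, best_dist = None, float("inf")
        for place in self._contributions:
            fp = self.fingerprint(place)
            aps = set(fp) | set(scan)
            dist = sum((fp.get(ap, MISSING_RSS) - scan.get(ap, MISSING_RSS)) ** 2
                       for ap in aps) ** 0.5
            if dist < best_dist:
                best_place, best_dist = place, dist
        return best_place

if __name__ == "__main__":
    radio_map = OrganicRadioMap()
    # Fabricated contributions from two users for two places.
    radio_map.contribute("room_101", {"ap1": -48, "ap2": -70})
    radio_map.contribute("room_101", {"ap1": -52, "ap2": -66, "ap3": -85})
    radio_map.contribute("lobby", {"ap1": -75, "ap2": -50, "ap3": -60})
    print(radio_map.locate({"ap1": -50, "ap2": -68}))  # expected: room_101
```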
--- paper_title: Zee: zero-effort crowdsourcing for indoor localization paper_content: Radio Frequency (RF) fingerprinting, based on WiFi or cellular signals, has been a popular approach to indoor localization. However, its adoption in the real world has been stymied by the need for site-specific calibration, i.e., the creation of a training data set comprising WiFi measurements at known locations in the space of interest. While efforts have been made to reduce this calibration effort using modeling, the need for measurements from known locations still remains a bottleneck. In this paper, we present Zee -- a system that makes the calibration zero-effort, by enabling training data to be crowdsourced without any explicit effort on the part of users. Zee leverages the inertial sensors (e.g., accelerometer, compass, gyroscope) present in the mobile devices such as smartphones carried by users, to track them as they traverse an indoor environment, while simultaneously performing WiFi scans. Zee is designed to run in the background on a device without requiring any explicit user participation. The only site-specific input that Zee depends on is a map showing the pathways (e.g., hallways) and barriers (e.g., walls). A significant challenge that Zee surmounts is to track users without any a priori, user-specific knowledge such as the user's initial location, stride-length, or phone placement. Zee employs a suite of novel techniques to infer location over time: (a) placement-independent step counting and orientation estimation, (b) augmented particle filtering to simultaneously estimate location and user-specific walk characteristics such as the stride length, (c) back propagation to go back and improve the accuracy of localization in the past, and (d) WiFi-based particle initialization to enable faster convergence. We present an evaluation of Zee in a large office building. --- paper_title: Collaborative Pedestrian Mapping of Buildings Using Inertial Sensors and FootSLAM paper_content: The FeetSLAM technique builds on iterative processing of multiple sets of pedestrian odometry data, based on FootSLAM. The objective is to obtain maps of large areas based on many data sets. The central idea is that maps originating from other data sets are used as a so-called prior map for a given data set. We show that this follows from the optimal FeetSLAM derivation but is more suited to practical computation limitations such as limited memory. It also yields maps which are not overly dominated by one data set but rather balances the characteristics of each with the effect of averaging out errors. Over iterations, FootSLAM maps are gradually combined to yield a high-accuracy global map; the iteration speed is controlled by employing concepts from simulated annealing. We validate our approach using two data sets from two locations, consisting of four and five walks respectively. --- paper_title: Simultaneous Localization and Mapping for pedestrians using distortions of the local magnetic field intensity in large indoor environments paper_content: We present a Simultaneous Localization and Mapping (SLAM) algorithm based on measurements of the ambient magnetic field strength (MagSLAM) that allows quasi-real-time mapping and localization in buildings, where pedestrians with foot-mounted sensors are the subjects to be localized. We assume two components to be present: firstly a source of odometry (human step measurements), and secondly a sensor of the local magnetic field intensity.
Our implementation follows the FastSLAM factorization using a particle filter. We augment the hexagonal transition map used in the pre-existing FootSLAM algorithm with local maps of the magnetic field strength, binned in a hierarchical hexagonal structure. We performed extensive experiments in a number of different buildings and present the results for five data sets for which we have ground truth location information. We consider the results obtained using MagSLAM to be strong evidence that scalable and accurate localization is possible without an a priori map. --- paper_title: Locating in fingerprint space: wireless indoor localization with little human intervention paper_content: Indoor localization is of great importance for a range of pervasive applications, attracting many research efforts in the past decades. Most radio-based solutions require a process of site survey, in which radio signatures of an interested area are annotated with their real recorded locations. Site survey involves intensive costs on manpower and time, limiting the applicable buildings of wireless localization worldwide. In this study, we investigate novel sensors integrated in modern mobile phones and leverage user motions to construct the radio map of a floor plan, which is previously obtained only by site survey. On this basis, we design LiFS, an indoor localization system based on off-the-shelf WiFi infrastructure and mobile phones. LiFS is deployed in an office building covering over 1600m2, and its deployment is easy and rapid since little human intervention is needed. In LiFS, the calibration of fingerprints is crowdsourced and automatic. Experiment results show that LiFS achieves comparable location accuracy to previous approaches even without site survey. --- paper_title: HiMLoc: Indoor smartphone localization via activity aware Pedestrian Dead Reckoning with selective crowdsourced WiFi fingerprinting paper_content: The large number of applications that rely on indoor positioning encourages more advancement in this field. Smartphones are becoming a common presence in our daily life, so taking advantage of their sensors can help to provide ubiquitous positioning solution. We propose HiMLoc, a novel solution that synergistically uses Pedestrian Dead Reckoning (PDR) and WiFi fingerprinting to exploit their positive aspects and limit the impact of their negative aspects. Specifically, HiMLoc combines location tracking and activity recognition using inertial sensors on mobile devices with location-specific weighted assistance from a crowd-sourced WiFi fingerprinting system via a particle filter. By using just the most common sensors available on the large majority of smartphones (accelerometer, compass, and WiFi card) and offering an easily deployable method (requiring just the locations of stairs, elevators, corners and entrances), HiMLoc is shown to achieve median accuracies lower than 3 meters in most cases. --- paper_title: No need to war-drive: unsupervised indoor localization paper_content: We propose UnLoc, an unsupervised indoor localization scheme that bypasses the need for war-driving. Our key observation is that certain locations in an indoor environment present identifiable signatures on one or more sensing dimensions. An elevator, for instance, imposes a distinct pattern on a smartphone's accelerometer; a corridor-corner may overhear a unique set of WiFi access points; a specific spot may experience an unusual magnetic fluctuation. 
We hypothesize that these kinds of signatures naturally exist in the environment, and can be envisioned as internal landmarks of a building. Mobile devices that "sense" these landmarks can recalibrate their locations, while dead-reckoning schemes can track them between landmarks. Results from 3 different indoor settings, including a shopping mall, demonstrate median location errors of 1.69 m. War-driving is not necessary, and neither are floorplans; the system simultaneously computes the locations of users and landmarks, in a manner that they converge reasonably quickly. We believe this is an unconventional approach to indoor localization, holding promise for real-world deployment. --- paper_title: Elekspot: A Platform for Urban Place Recognition via Crowdsourcing paper_content: The proliferation of Wi-Fi infrastructures has facilitated numerous indoor localization techniques using Wi-Fi location fingerprints. They make it possible to identify a room or a place in an urban environment, which is especially important in enabling many interesting location-based services. As there are too many rooms and places such as cafes and restaurants to be recognized in an urban environment, the crowdsourcing approach has been proposed to collect Wi-Fi location fingerprints based on user participation. However, its actual deployment in a large-scale urban environment presents numerous design and implementation challenges due to urban characteristics such as a large crowd, dense region, and device diversity. This paper presents the Elekspot system, whose design goal is to support system scalability, device heterogeneity, robustness against lack of contributions, and localization accuracy. Through several experiments and implementation of actual applications targeting urban places, we confirmed that the architecture and methods of Elekspot can effectively meet the design goals. --- paper_title: FreeLoc: Calibration-free crowdsourced indoor localization paper_content: Many indoor localization techniques that rely on RF signals from wireless Access Points have been proposed in the last decade. In recent years, research on crowdsourced (also known as "Organic") Wi-Fi fingerprint positioning systems has been attracting much attention. This participatory approach introduces new challenges that no previously proposed techniques have taken into account. This paper proposes "FreeLoc", an efficient localization method addressing three major technical issues posed in crowdsourcing based systems. Our novel solution facilitates 1) extracting accurate fingerprint values from short RSS measurement times, 2) calibration-free positioning across different devices, and 3) maintaining a single fingerprint for each location in a radio map, irrespective of any number of uploaded data sets for a given location. Through experiments using four different smartphones, we evaluate our new indoor positioning method. The experimental results confirm that the proposed scheme provides consistent localization accuracy in an environment where the device heterogeneity and the multiple surveyor problems exist. --- paper_title: WicLoc: An indoor localization system based on WiFi fingerprints and crowdsourcing paper_content: WiFi fingerprint-based indoor localization techniques have been proposed and widely used in recent years. Most solutions need a site survey to collect fingerprints from interested locations to construct the fingerprint database. However, the site survey is labor-intensive and time-consuming.
To overcome this shortcoming, we record user motions as well as WiFi signals without the active participation of the users to construct the fingerprint database, in place of the previous site survey. In this paper, we develop an indoor localization system called WicLoc, which is based on WiFi fingerprinting and crowdsourcing. We design a fingerprint model to form fingerprints of each location of interest after fingerprint collection. We propose a weighted KNN (K-Nearest Neighbor) algorithm to assign different weights to APs and achieve room-level localization. To obtain the absolute coordinate of users, we design a novel MDS (Multi-Dimensional Scaling) algorithm called MDS-C (Multi-Dimensional Scaling with Calibrations) to calculate coordinates of interested locations in the corridor and rooms, where anchor points are used to calibrate absolute coordinates of users. Experimental results show that our system can achieve a competitive localization accuracy compared with state-of-the-art WiFi fingerprint-based methods while avoiding the labor-intensive site survey. --- paper_title: Rank based fingerprinting algorithm for indoor positioning paper_content: A novel Received Signal Strength (RSS) rank based fingerprinting algorithm for indoor positioning is presented. Because RSS rank is invariant to bias and scaling, the algorithm provides the same accuracy for any receiver device, without the need for RSS calibration. Similarity measures to compare ranked vectors are introduced and their impact on positioning accuracy is investigated in experiments. Experimental results show that the proposed algorithm can achieve better accuracy than the commonly used NN and WKNN fingerprinting algorithms. ---
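The last two abstracts describe two complementary fingerprint-matching ideas: WicLoc's weighted KNN over AP measurements and a rank-based similarity that is invariant to per-device RSS bias and scaling. The sketch below is a generic illustration of how such a pipeline could look, assuming a Spearman-footrule-style rank distance and inverse-distance WKNN weighting; the specific similarity measures and weighting formulas in the cited papers may differ.

```python
# Illustrative rank-based fingerprinting with WKNN position estimation.
# A sum of absolute rank differences is used because AP ranks are unaffected
# by device-specific RSS bias/scaling; the exact measures in the cited papers
# may differ from this sketch.
import numpy as np

def rss_ranks(fingerprint: dict) -> dict:
    """Map each AP to its rank (0 = strongest RSS) within the fingerprint."""
    ordered = sorted(fingerprint, key=fingerprint.get, reverse=True)
    return {ap: rank for rank, ap in enumerate(ordered)}

def rank_distance(fp_a: dict, fp_b: dict, missing_penalty: int = 10) -> float:
    """Sum of absolute rank differences over APs seen in either fingerprint."""
    ra, rb = rss_ranks(fp_a), rss_ranks(fp_b)
    dist = 0.0
    for ap in set(ra) | set(rb):
        dist += abs(ra.get(ap, missing_penalty) - rb.get(ap, missing_penalty))
    return dist

def wknn_locate(scan: dict, radio_map: list, k: int = 3) -> tuple:
    """radio_map: list of ((x, y), fingerprint). Returns the weighted centroid
    of the k reference points with the smallest rank distance to `scan`."""
    scored = sorted(radio_map, key=lambda entry: rank_distance(scan, entry[1]))[:k]
    eps = 1e-6  # avoids division by zero on an exact rank match
    weights = np.array([1.0 / (rank_distance(scan, fp) + eps) for _, fp in scored])
    coords = np.array([pos for pos, _ in scored], dtype=float)
    return tuple(np.average(coords, axis=0, weights=weights))

if __name__ == "__main__":
    # Fabricated radio map with three reference points.
    radio_map = [
        ((0.0, 0.0), {"ap1": -40, "ap2": -60, "ap3": -75}),
        ((5.0, 0.0), {"ap1": -60, "ap2": -45, "ap3": -70}),
        ((0.0, 5.0), {"ap1": -70, "ap2": -72, "ap3": -50}),
    ]
    # A scan from a different device: same AP ordering, shifted RSS values.
    scan = {"ap1": -50, "ap2": -68, "ap3": -80}
    print("Estimated position:", wknn_locate(scan, radio_map, k=2))
```

Because the scan's AP ordering matches the first reference point exactly, the estimate collapses onto it even though every raw RSS value is offset, which is the device-calibration-free property the rank-based abstract emphasises.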
Title: A Survey of Crowd Sensing Opportunistic Signals for Indoor Localization
Section 1: Introduction
Description 1: This section provides background information on mobile indoor localization and crowd sensing, setting the context for the survey.
Section 2: Foundation of Crowd Sensing for Indoor Localization
Description 2: This section describes the basic principles of crowd sensing and its application in fingerprinting-based indoor localization, including the learning and positioning phases.
Section 3: Scheme of a Crowd Sensing Based Mobile Indoor Localization Approach
Description 3: This section outlines the structure of a crowd sensing-based localization system, focusing on the roles of the frontend and backend components.
Section 4: Crowd Sensing versus Expert Survey
Description 4: This section compares crowd sensing with traditional expert surveys, evaluating factors like time consumption, labor cost, data quality, and more.
Section 5: Opportunistic Signals
Description 5: This section examines various types of opportunistic signals used for fingerprinting-based localization, including Wi-Fi, Bluetooth, magnetic fields, image features, and others.
Section 6: Walking Trajectory
Description 6: This section discusses methods for obtaining the walking trajectory of participants, focusing on Pedestrian Dead Reckoning (PDR) and Simultaneous Localization and Mapping (SLAM).
Section 7: Indoor Maps
Description 7: This section explores the use of indoor maps, such as raster and vector maps, in indoor localization.
Section 8: Organic Fingerprint
Description 8: This section defines the concept of organic fingerprints and discusses the challenges and solutions related to data fusion in crowd sensing.
Section 9: Fingerprinting-Based Positioning Algorithms
Description 9: This section reviews various positioning algorithms, including deterministic, probabilistic, and hybrid solutions for indoor localization.
Section 10: The State-of-the-Art Solutions
Description 10: This section presents a comparison of recent state-of-the-art solutions in crowd sensing-based indoor localization.
Section 11: Challenges
Description 11: This section outlines the challenges facing crowd sensing-based indoor localization, such as device diversity, quality control, power consumption, and privacy protection.
Section 12: Conclusion and Future Trends
Description 12: This section concludes the survey by summarizing the findings and discussing future research directions in crowd sensing-based indoor localization.
Variability in quality attributes of service-based software systems: A systematic literature review
37
--- paper_title: Variability management in software product lines: an investigation of contemporary industrial challenges paper_content: Variability management is critical for achieving the large scale reuse promised by the software product line paradigm. It has been studied for almost 20 years. We assert that it is important to explore how well the body of knowledge of variability management solves the challenges faced by industrial practitioners, and what are the remaining and (or) emerging challenges. To gain such understanding of the challenges of variability management faced by practitioners, we have conducted an empirical study using focus group as data collection method. The results of the study highlight several technical challenges that are often faced by practitioners in their daily practices. Different from previous studies, the results also reveal and shed light on several non-technical challenges that were almost neglected by existing research. --- paper_title: Systematic literature reviews in software engineering - A tertiary study paper_content: Context: In a previous study, we reported on a systematic literature review (SLR), based on a manual search of 13 journals and conferences undertaken in the period 1st January 2004 to 30th June 2007. Objective: The aim of this on-going research is to provide an annotated catalogue of SLRs available to software engineering researchers and practitioners. This study updates our previous study using a broad automated search. Method: We performed a broad automated search to find SLRs published in the time period 1st January 2004 to 30th June 2008. We contrast the number, quality and source of these SLRs with SLRs found in the original study. Results: Our broad search found an additional 35 SLRs corresponding to 33 unique studies. Of these papers, 17 appeared relevant to the undergraduate educational curriculum and 12 appeared of possible interest to practitioners. The number of SLRs being published is increasing. The quality of papers in conferences and workshops has improved as more researchers use SLR guidelines. Conclusion: SLRs appear to have gone past the stage of being used solely by innovators but cannot yet be considered a main stream software engineering research methodology. They are addressing a wide range of topics but still have limitations, such as often failing to assess primary study quality. --- paper_title: Systematic Review in Software Engineering paper_content: A kit of assemblable components for implantation into the bone of a mammal for use in distraction osteogenesis. The kit comprises a fixture, a footing and a distracter, the fixture including a longitudinally extending body portion with a proximal end and a distal end, the body portion having an exterior surface adapted for contact and integration with bone tissue, the body portion having a generally longitudinally extending bore extending from a proximal opening adjacent the proximal end to a distal opening adjacent the distal end. The footing includes a proximal surface and a distal surface. The distracter comprises a generally rod-shaped body including a distal end and a proximal end, and the proximal end of the distracter is adapted to bear against the footing. There are first and second engaging means on the fixture and the distracter respectively for adjustably locating the fixture relative to the distracter. 
--- paper_title: An Overview of Software Engineering Approaches to Service Oriented Architectures in Various Fields paper_content: For the last few years, a rise has been observed in research activity in Service Oriented Architectures, with applications in different sectors. Several new technologies have been introduced and even more are being currently researched and aimed to the future. In this paper we present and analyze some of the most influential approaches from a software engineer’s point of view that belong either to the academic or to the industrial field. Despite their differences though, all of these approaches share a service oriented mentality, with the purpose of lessening the issues of clients and companies, students and teachers, citizens and government employees alike. Lastly, we discuss our findings from the comparison and present possible new research opportunities for the immediate future. --- paper_title: Exploring service-oriented system engineering challenges: a systematic literature review paper_content: Service-oriented system engineering (SOSE) has drawn increasing attention since service-oriented computing was introduced in the beginning of this decade. A large number of SOSE challenges that call for special software engineering efforts have been proposed in the research community. Our goal is to gain insight into the current status of SOSE research issues as published to date. To this end, we conducted a systematic literature review exploring SOSE challenges that have been claimed between January 2000 and July 2008. This paper presents the results of the systematic review as well as the empirical research method we followed. In this review, of the 729 publications that have been examined, 51 were selected as primary studies, from which more than 400 SOSE challenges were elicited. By applying qualitative data analysis methods to the extracted data from the review, we proved our hypotheses about the classification scheme. We are able to conclude that the SOSE challenges can be classified along two dimensions: (a) based on themes (or topics) that they cover and (b) based on characteristics (or types) that they reveal. By analyzing the distribution of the SOSE challenges on the topics and types in the years 2000–2008, we are able to point out the trend in SOSE research activities. The findings of this review further provide empirical evidence for establishing future SOSE research agendas. --- paper_title: Systematic literature reviews in software engineering – A systematic literature review paper_content: Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference. Aims: This study assesses the impact of systematic literature reviews (SLRs) which are the recommended EBSE method for aggregating evidence. Method: We used the standard systematic literature review method employing a manual search of 10 journals and 4 conference proceedings. Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of SLRs was fair with only three scoring less than 2 out of 4. Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrate the potential value of EBSE for synthesising evidence and making it available to practitioners. 
--- paper_title: Experiences conducting systematic reviews from novices' perspective paper_content: A systematic review (SR) is a sound methodology for collecting evidence on a research topic of interest and establishing the context of future research. Unlike ordinary or even expert literature reviews, SRs are systematic thus increasing the confidence in the findings from the previous published literature. SRs can be carried out by both experienced and novice researchers; however, while expert researchers? experiences with conducting SRs are important for improving the SR body of knowledge, we believe that novice researchers? experiences are equally important to establish what distinct problems they face while carrying out SRs. With a prior knowledge of these issues, novice researchers can better plan their SRs and seek guidance from expert researchers. --- paper_title: Variability management in software product lines: a systematic review paper_content: Variability Management (VM) in Software Product Line (SPL) is a key activity that usually affects the degree to which a SPL is successful. SPL community has spent huge amount of resources on developing various approaches to dealing with variability related challenges over the last decade. To provide an overview of different aspects of the proposed VM approaches, we carried out a systematic literature review of the papers reporting VM in SPL. This paper presents and discusses the findings from this systematic literature review. The results reveal the chronological backgrounds of various approaches over the history of VM research, and summarize the key issues that drove the evolution of different approaches. This study has also identified several gaps that need to be filled by future efforts in this line of research. --- paper_title: Variability management in software product lines: an investigation of contemporary industrial challenges paper_content: Variability management is critical for achieving the large scale reuse promised by the software product line paradigm. It has been studied for almost 20 years. We assert that it is important to explore how well the body of knowledge of variability management solves the challenges faced by industrial practitioners, and what are the remaining and (or) emerging challenges. To gain such understanding of the challenges of variability management faced by practitioners, we have conducted an empirical study using focus group as data collection method. The results of the study highlight several technical challenges that are often faced by practitioners in their daily practices. Different from previous studies, the results also reveal and shed light on several non-technical challenges that were almost neglected by existing research. --- paper_title: An Overview of Software Engineering Approaches to Service Oriented Architectures in Various Fields paper_content: For the last few years, a rise has been observed in research activity in Service Oriented Architectures, with applications in different sectors. Several new technologies have been introduced and even more are being currently researched and aimed to the future. In this paper we present and analyze some of the most influential approaches from a software engineer’s point of view that belong either to the academic or to the industrial field. 
Despite their differences though, all of these approaches share a service oriented mentality, with the purpose of lessening the issues of clients and companies, students and teachers, citizens and government employees alike. Lastly, we discuss our findings from the comparison and present possible new research opportunities for the immediate future. --- paper_title: Exploring service-oriented system engineering challenges: a systematic literature review paper_content: Service-oriented system engineering (SOSE) has drawn increasing attention since service-oriented computing was introduced in the beginning of this decade. A large number of SOSE challenges that call for special software engineering efforts have been proposed in the research community. Our goal is to gain insight into the current status of SOSE research issues as published to date. To this end, we conducted a systematic literature review exploring SOSE challenges that have been claimed between January 2000 and July 2008. This paper presents the results of the systematic review as well as the empirical research method we followed. In this review, of the 729 publications that have been examined, 51 were selected as primary studies, from which more than 400 SOSE challenges were elicited. By applying qualitative data analysis methods to the extracted data from the review, we proved our hypotheses about the classification scheme. We are able to conclude that the SOSE challenges can be classified along two dimensions: (a) based on themes (or topics) that they cover and (b) based on characteristics (or types) that they reveal. By analyzing the distribution of the SOSE challenges on the topics and types in the years 2000–2008, we are able to point out the trend in SOSE research activities. The findings of this review further provide empirical evidence for establishing future SOSE research agendas. --- paper_title: Software Architecture in Practice paper_content: The award-winning and highly influential Software Architecture in Practice, Third Edition, has been substantially revised to reflect the latest developments in the field. In a real-world setting, the book once again introduces the concepts and best practices of software architecture: how a software system is structured and how that system's elements are meant to interact. Distinct from the details of implementation, algorithm, and data representation, an architecture holds the key to achieving system quality, is a reusable asset that can be applied to subsequent systems, and is crucial to a software organization's business strategy. The authors have structured this edition around the concept of architecture influence cycles. Each cycle shows how architecture influences, and is influenced by, a particular context in which architecture plays a critical role. Contexts include technical environment, the life cycle of a project, an organization's business profile, and the architect's professional practices. The authors also have greatly expanded their treatment of quality attributes, which remain central to their architecture philosophy, with an entire chapter devoted to each attribute, and broadened their treatment of architectural patterns. If you design, develop, or manage large software systems (or plan to do so), you will find this book to be a valuable resource for getting up to speed on the state of the art.
Totally new material covers Contexts of software architecture: technical, project, business, and professional Architecture competence: what this means both for individuals and organizations The origins of business goals and how this affects architecture Architecturally significant requirements, and how to determine them Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation; architecture and testing; and architecture and agile development Architecture and current technologies, such as the cloud, social networks, and end-user devices --- paper_title: Quality Attributes for Service-Oriented Architectures paper_content: The SOA approach is a very popular choice today for the implementation of distributed systems. The use of SOA or more specifically the Web services technology is an important architecture decision. An architect should understand how different quality attributes for a system are impacted by that decision. While there are significant benefits with respect to interoperability and modifiability, other qualities such as performance, security and testability are concerns. This paper discusses how the different quality attributes of a system can be positively or negatively affected by the use of such technology. It describes the factors related to each attribute, as well as possible tradeoffs and existing efforts to achieve that quality. The paper also discusses open issues in service level agreements that are used to contract the level of service quality between service providers and users. --- paper_title: Managing Variation in Services in a Software Product Line Context paper_content: Abstract : Software product line (SPL) and service-oriented architecture (SOA) approaches both enable an organization to reuse existing assets and capabilities rather than repeatedly redeveloping them for new systems. Organizations can capitalize on such reuse in software-reliant systems to achieve business goals such as productivity gains, decreased development costs, improved time to market, increased reliability, increased agility, and competitive advantage. Both approaches accommodate variation in the software that is being reused or the way in which it is employed. Meeting business goals through a product line or a set of service-oriented systems requires managing the variation of assets, including services. This report examines combining existing SOA and software product line approaches for variation management. This examination has two objectives: (1) for service-oriented systems development, to present an approach for managing variation by identifying and designing services explicitly targeted to multiple service-oriented systems, and (2) for SPL systems, to present an approach for managing variation where services are a mechanism for variation within a product line or for expanding the product line scope. --- paper_title: Quality Attributes and Service-Oriented Architectures paper_content: Abstract : This report examines the relationship between service-oriented architectures (SOAs) and quality attributes. Because software architecture is the bridge between mission/business goals and a software-intensive system, and quality attribute requirements drive software architecture design, it is important to understand how SOAs support these requirements. This report gives a short introduction to SOAs and outlines some of the main business goals that may lead an organization to choose an SOA to design and implement a system. 
The report outlines a set of quality attributes that may be derived from an organization's business goals and examines how those attributes relate to an SOA. In addition, the report describes how the SOA impacts those attributes and how choosing an SOA can help an organization achieve its business goals. --- paper_title: Requirements and Tools for Variability Management paper_content: Explicit and software-supported Business Process Management has become the core infrastructure of any medium and large organization that has a need to be efficient and effective. The number of processes of a single organization can be very high, furthermore, they might be very similar, be in need of momentary change, or evolve frequently. If the adhoc adaptation and customization of processes is currently the dominant way, it clearly is not the best. In fact, providing tools for supporting the explicit management of variation in processes (due to customization or evolution needs) has a profound impact on the overall life-cycle of processes in organizations. Additionally, with the increasing adoption of Service-Oriented Architectures, the infrastructure to support automatic reconfiguration and adaptation of business process is solid. In this paper, after defining variability in business process management, we consider the requirements for explicit variation handling for (service based) business process systems. eGovernment serves as an illustrative example of reuse. In this case study, all local municipalities need to implement the same general legal process while adapting it to the local business practices and IT infrastructure needs. Finally, an evaluation of existing tools for explicit variability management is provided with respect to the requirements identified. --- paper_title: The Landscape of Service-Oriented Systems: A Research Perspective paper_content: Service orientation has been touted as one of the most important technologies for designing, implementing and deploying large scale service provision software systems. In this position paper we attempt to investigate an initial classification of challenge areas related to service orientation and service-oriented systems. We start by organizing the research issues related to service orientation in three general categories- business, engineering and operations, plus a set of cross-cutting concerns across domain. We further propose the notion of Service Strategy as a binding model for these three categories. Finally, concluding this position paper, we outline a set of emerging opportunities to be used for further discussion. --- paper_title: A systematic review of quality attributes and measures for software product lines paper_content: It is widely accepted that software measures provide an appropriate mechanism for understanding, monitoring, controlling, and predicting the quality of software development projects. In software product lines (SPL), quality is even more important than in a single software product since, owing to systematic reuse, a fault or an inadequate design decision could be propagated to several products in the family. Over the last few years, a great number of quality attributes and measures for assessing the quality of SPL have been reported in literature. However, no studies summarizing the current knowledge about them exist. This paper presents a systematic literature review with the objective of identifying and interpreting all the available studies from 1996 to 2010 that present quality attributes and/or measures for SPL. 
These attributes and measures have been classified using a set of criteria that includes the life cycle phase in which the measures are applied; the corresponding quality characteristics; their support for specific SPL characteristics (e.g., variability, compositionality); the procedure used to validate the measures, etc. We found 165 measures related to 97 different quality attributes. The results of the review indicated that 92% of the measures evaluate attributes that are related to maintainability. In addition, 67% of the measures are used during the design phase of Domain Engineering, and 56% are applied to evaluate the product line architecture. However, only 25% of them have been empirically validated. In conclusion, the results provide a global vision of the state of the research within this area in order to help researchers in detecting weaknesses, directing research efforts, and identifying new research lines. In particular, there is a need for new measures with which to evaluate both the quality of the artifacts produced during the entire SPL life cycle and other quality characteristics. There is also a need for more validation (both theoretical and empirical) of existing measures. In addition, our results may be useful as a reference guide for practitioners to assist them in the selection or the adaptation of existing measures for evaluating their software product lines. --- paper_title: Issues in the Design of Flexible and Dynamic Service-Oriented Systems paper_content: Due to the use of the concepts embedded in Service- Oriented Architecture (SOA), software design, now more than ever, involves the use of incomplete information. Applications that utilizeWeb Services are also highly impacted by the problem of deployment and subsequent undeployment of services. Specifically, there is a level of uncertainty caused by the potential for services to become unavailable (either temporarily or permanently). In a scenario where an application must switch from one service to another due to the undeployment problem, the client application may require that new or different handlers be used to cope with the properties of the alternative service. In this current development climate, the design issues become clear: there is a need to reason about how a design is impacted by discovered services, design analysis must consider the transaction and event properties of discovered services, and design of systems must incorporate fault tolerance and high-integrity issues to cope with the dynamic landscape caused by the uncertainty associated with using services. --- paper_title: Testing in Service Oriented Architectures with Dynamic Binding: A Mapping Study paper_content: Context: Service Oriented Architectures (SOA) have emerged as a new paradigm to develop interoperable and highly dynamic applications. Objective: This paper aims to identify the state of the art in the research on testing in Service Oriented Architectures with dynamic binding. Method: A mapping study has been performed employing both manual and automatic search in journals, conference/workshop proceedings and electronic databases. Results: A total of 33 studies have been reviewed in order to extract relevant information regarding a previously defined set of research questions. The detection of faults and the decision making based on the information gathered from the tests have been identified as the main objectives of these studies. 
To achieve these goals, monitoring and test case generation are the most proposed techniques testing both functional and non-functional properties. Furthermore, different stakeholders have been identified as participants in the tests, which are performed in specific points in time during the life cycle of the services. Finally, it has been observed that a relevant group of studies have not validated their approach yet. Conclusions: Although we have only found 33 studies that address the testing of SOA where the discovery and binding of the services are performed at runtime, this number can be considered significant due to the specific nature of the reviewed topic. The results of this study have contributed to provide a body of knowledge that allows identifying current gaps in improving the quality of the dynamic binding in SOA using testing approaches. --- paper_title: Adaptation of service-based systems paper_content: The advances in modern technology development and future technology changes dictate new challenges and requirements to the engineering and provision of services and service-based systems (SBS). These services and systems should become drastically more flexible; they should be able to operate and evolve in highly dynamic environments and to adequately react to various changes in these environments. In these settings, adaptability becomes a key feature of services as it provides a way for an application to continuously change itself in order to satisfy new contextual requirements. --- paper_title: Exploring service-oriented system engineering challenges: a systematic literature review paper_content: Service-oriented system engineering (SOSE) has drawn increasing attention since service-oriented computing was introduced in the beginning of this decade. A large number of SOSE challenges that call for special software engineering efforts have been proposed in the research community. Our goal is to gain insight into the current status of SOSE research issues as published to date. To this end, we conducted a systematic literature review exploring SOSE challenges that have been claimed between January 2000 and July 2008. This paper presents the results of the systematic review as well as the empirical research method we followed. In this review, of the 729 publications that have been examined, 51 were selected as primary studies, from which more than 400 SOSE challenges were elicited. By applying qualitative data analysis methods to the extracted data from the review, we proved our hypotheses about the classification scheme. We are able to conclude that the SOSE challenges can be classified along two dimensions: (a) based on themes (or topics) that they cover and (b) based on characteristics (or types) that they reveal. By analyzing the distribution of the SOSE challenges on the topics and types in the years 2000–2008, we are able to point out the trend in SOSE research activities. The findings of this review further provide empirical evidence for establishing future SOSE research agendas. --- paper_title: Systematic literature reviews in software engineering – A systematic literature review paper_content: Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference. Aims: This study assesses the impact of systematic literature reviews (SLRs) which are the recommended EBSE method for aggregating evidence. 
Method: We used the standard systematic literature review method employing a manual search of 10 journals and 4 conference proceedings. Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of SLRs was fair with only three scoring less than 2 out of 4. Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrate the potential value of EBSE for synthesising evidence and making it available to practitioners. --- paper_title: The Landscape of Service-Oriented Systems: A Research Perspective paper_content: Service orientation has been touted as one of the most important technologies for designing, implementing and deploying large scale service provision software systems. In this position paper we attempt to investigate an initial classification of challenge areas related to service orientation and service-oriented systems. We start by organizing the research issues related to service orientation in three general categories- business, engineering and operations, plus a set of cross-cutting concerns across domain. We further propose the notion of Service Strategy as a binding model for these three categories. Finally, concluding this position paper, we outline a set of emerging opportunities to be used for further discussion. --- paper_title: Variability management in software product lines: a systematic review paper_content: Variability Management (VM) in Software Product Line (SPL) is a key activity that usually affects the degree to which a SPL is successful. SPL community has spent huge amount of resources on developing various approaches to dealing with variability related challenges over the last decade. To provide an overview of different aspects of the proposed VM approaches, we carried out a systematic literature review of the papers reporting VM in SPL. This paper presents and discusses the findings from this systematic literature review. The results reveal the chronological backgrounds of various approaches over the history of VM research, and summarize the key issues that drove the evolution of different approaches. This study has also identified several gaps that need to be filled by future efforts in this line of research. --- paper_title: The Goal Question Metric Approach paper_content: As with any engineering discipline, software development requires a measurement mechanism for feedback and evaluation. Measurement is a mechanism for creating a corporate memory and an aid in answering a variety of questions associated with the enactment of any software process. It helps support project planning (e.g., How much will a new project cost?); it allows us to determine the strengths and weaknesses of the current processes and products (e.g., What is the frequency of certain types of errors?); it provides a rationale for adopting/refining techniques (e.g., What is the impact of the technique XX on the productivity of the projects?); it allows us to evaluate the quality of specific processes and products (e.g., What is the defect density in a specific system after deployment?). Measurement also helps, during the course of a project, to assess its progress, to take corrective action based on this assessment, and to evaluate the impact of such action. 
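The GQM abstract above describes a hierarchical measurement model in which goals are refined into questions that are in turn answered by metrics. As a purely illustrative aid, the sketch below shows one way such a model could be represented in code; the example goal, questions, and metrics are invented and only loosely tied to the topic of this review (quality attributes of service-based systems), and are not taken from the cited paper.

```python
# Minimal, illustrative representation of a GQM (Goal/Question/Metric) model.
# The concrete goal, questions, and metrics are invented examples, not content
# from the cited paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str      # e.g. "evaluate"
    issue: str        # the quality issue under study
    obj: str          # the object of measurement
    viewpoint: str    # whose perspective the goal is stated from
    questions: List[Question] = field(default_factory=list)

    def all_metrics(self) -> List[Metric]:
        """Collect every metric that contributes to answering this goal."""
        return [m for q in self.questions for m in q.metrics]

if __name__ == "__main__":
    goal = Goal(
        purpose="evaluate",
        issue="impact of service variability on quality attributes",
        obj="service-based software system",
        viewpoint="software architect",
        questions=[
            Question("How does replacing a service variant affect performance?",
                     [Metric("response time", "ms"), Metric("throughput", "req/s")]),
            Question("How much rework does adding a service variant require?",
                     [Metric("changed components", "count")]),
        ],
    )
    print([m.name for m in goal.all_metrics()])
```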
--- paper_title: On searching relevant studies in software engineering paper_content: BACKGROUND: Systematic Literature Review (SLR) has become an important research methodology in software engineering since 2004. One critical step in applying this methodology is to design and execute an appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries. OBJECTIVE: The main objective of the research reported in this paper is to improve the search step of doing SLRs in SE by devising and evaluating systematic and practical approaches to identifying relevant studies in SE. OUTCOMES: We have systematically selected and analytically studied a large number of papers to understand the state of the practice of search strategies in EBSE. Having identified the limitations of the current ad-hoc nature of search strategies used by SE researchers for SLRs, we have devised a systematic approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of a 'quasi-gold standard', which consists of a collection of known studies, and a corresponding 'quasi-sensitivity' into the search process for evaluating search performance. We report a case study and its findings to demonstrate that the approach is able to improve the rigor of the search process in an SLR and can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using several case studies with varying topics in software engineering. --- paper_title: A systematic review of comparative evidence of aspect-oriented programming paper_content: Context: Aspect-oriented programming (AOP) promises to improve many facets of software quality by providing better modularization and separation of concerns, which may have a system-wide effect. There have been numerous claims in favor of and against AOP compared with traditional programming languages such as Object-Oriented and Structured Programming Languages. However, there has been no attempt to systematically review and report the available evidence in the literature to support the claims made in favor of or against AOP compared with non-AOP approaches. Objective: This research aimed to systematically identify, analyze, and report the evidence published in the literature to support the claims made in favor of or against AOP compared with non-AOP approaches. Method: We performed a systematic literature review of empirical studies of AOP-based development, published in major software engineering journals and conference proceedings. Results: Our search strategy identified 3307 papers, of which 22 were identified as reporting empirical studies comparing AOP with non-AOP approaches. Based on the analysis of the data extracted from those 22 papers, our findings show that for performance, code size, modularity, and evolution-related characteristics, a majority of the studies reported positive effects, a few studies reported insignificant effects, and no study reported negative effects; however, for cognition and language mechanism, negative effects were reported. Conclusion: AOP is likely to have a positive effect on performance, code size, modularity, and evolution. However, its effect on cognition and language mechanism is less likely to be positive. Care should be taken using AOP outside the context in which it has been validated. --- paper_title: Refining the systematic literature review process—two participant-observer case studies paper_content: Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to meet the needs of software engineering and its literature remains an ongoing process. As part of this process of refinement, we undertook two case studies which aimed 1) to compare the use of targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews published between January 1, 2004 and June 30, 2007, which used a manual search of selected journals and conferences, and a replication of that study based on a broad automated search. We found that broad automated searches find more studies than manual restricted searches, but they may be of poor quality.
Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low-quality papers, or they are assessing research trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators incorporating at least two rounds of discussion. --- paper_title: Generalizing a Model of Software Architecture Design from Five Industrial Approaches paper_content: We compare five industrial software architecture design methods and we extract from their commonalities a general software architecture design approach. Using this general approach, we compare across the five methods the artifacts and activities they use or recommend, and we pinpoint similarities and differences. Once we get beyond the great variance in terminology and description, we find that the five approaches have a lot in common and match more or less the "ideal" pattern we introduced. --- paper_title: Requirements Engineering for Software Product Lines: A Systematic Literature Review paper_content: Context: Software product line engineering (SPLE) is a growing area showing promising results in research and practice. In order to foster its further development and acceptance in industry, it is necessary to assess the quality of the research so that proper evidence for adoption and validity are ensured. This holds in particular for requirements engineering (RE) within SPLE, where a growing number of approaches have been proposed. Objective: This paper focuses on RE within SPLE and has the following goals: assess research quality, synthesize evidence to suggest important implications for practice, and identify research trends, open problems, and areas for improvement. Method: A systematic literature review was conducted with three research questions and assessed 49 studies, dated from 1990 to 2009. Results: The evidence for adoption of the methods is not mature, given the primary focus on toy examples. The proposed approaches still have serious limitations in terms of rigor, credibility, and validity of their findings. Additionally, most approaches still lack tool support addressing the heterogeneity and mostly textual nature of requirements formats, and address only the proactive SPLE adoption strategy. Conclusions: Further empirical studies should be performed with sufficient rigor to enhance the body of evidence in RE within SPLE. In this context, there is a clear need for conducting studies comparing alternative methods. In order to address scalability and popularization of the approaches, future research should be invested in tool support and in addressing combined SPLE adoption strategies. --- paper_title: Applying Systematic Reviews to Diverse Study Types: An Experience Report paper_content: Systematic reviews are one of the key building blocks of evidence-based software engineering. Current guidelines for such reviews are, for a large part, based on standard meta-analytic techniques. However, such quantitative techniques have only limited applicability to software engineering research. In this paper, therefore, we describe our experience with an approach to combine diverse study types in a systematic review of empirical research of agile software development. --- paper_title: A systematic review of evaluation of variability management approaches in software product lines paper_content: Context: Variability management (VM) is one of the most important activities of software product-line engineering (SPLE), which intends to develop software-intensive systems using platforms and mass customization. VM encompasses the activities of eliciting and representing variability in software artefacts, establishing and managing dependencies among different variabilities, and supporting the exploitation of the variabilities for building and evolving a family of software systems. The software product line (SPL) community has allocated a huge amount of effort to developing various approaches to dealing with variability-related challenges during the last two decades. Several dozens of VM approaches have been reported. However, there has been no systematic effort to study how the reported VM approaches have been evaluated. Objective: The objectives of this research are to review the status of evaluation of reported VM approaches and to synthesize the available evidence about the effects of the reported approaches. Method: We carried out a systematic literature review of the VM approaches in SPLE reported from the 1990s until December 2007. Results: We selected 97 papers according to our inclusion and exclusion criteria. The selected papers appeared in 56 publication venues. We found that only a small number of the reviewed approaches had been evaluated using rigorous scientific methods. A detailed investigation of the reviewed studies employing empirical research methods revealed significant quality deficiencies in various aspects of the used quality assessment criteria.
The synthesis of the available evidence showed that all studies, except one, reported only positive effects. Conclusion: The findings from this systematic review show that a large majority of the reported VM approaches have not been sufficiently evaluated using scientifically rigorous methods. The available evidence is sparse and the quality of the presented evidence is quite low. The findings highlight the areas in need of improvement, i.e., rigorous evaluation of VM approaches. However, the reported evidence is quite consistent across different studies. That means the proposed approaches may be very beneficial when they are applied properly in appropriate situations. Hence, it can be concluded that further investigations need to pay more attention to the contexts under which different approaches can be more beneficial. --- paper_title: Increasing business flexibility and SOA adoption through effective SOA governance paper_content: Most organizations understand the need to address service-oriented architecture (SOA) governance during SOA adoption. An abundance of information is available defining SOA governance: what it is and what it is not, why it is important, and why organizational change must be addressed. Increasingly, business and information technology (IT) stakeholders, executive and technical, acknowledge that SOA governance is essential for realizing the benefits of SOA adoption: building more-flexible IT architectures, improving the fusion between business and IT models, and making business processes more flexible and reusable. However, what is not clear is how an organization gets started. What works and what does not work? More importantly, what is required in SOA governance for organizations to see sustained and realized benefits? This paper describes a framework, the SOA governance model, that can be used to scope and identify what is required for effective SOA governance. Based on client experiences, we describe four approaches to getting started with SOA governance, and we describe how to use these four approaches to make shared services (services used by two or more consumers), reuse, and flexibility a reality. We also discuss lessons learned in using these four approaches. --- paper_title: A method for evaluating rigor and industrial relevance of technology evaluations paper_content: One of the main goals of an applied research field such as software engineering is the transfer and widespread use of research results in industry. To impact industry, researchers developing technologies in academia need to provide tangible evidence of the advantages of using them. This can be done through step-wise validation, enabling researchers to gradually test and evaluate technologies to finally try them in real settings with real users and applications. The evidence obtained, together with detailed information on how the validation was conducted, offers rich decision-support material for industry practitioners seeking to adopt new technologies and researchers looking for an empirical basis on which to build new or refined technologies. This paper presents a model for evaluating the rigor and industrial relevance of technology evaluations in software engineering. The model is applied and validated in a comprehensive systematic literature review of evaluations of requirements engineering technologies published in software engineering journals. The aim is to show the applicability of the model and to characterize how evaluations are carried out and reported in order to evaluate the state of research. The review shows that the model can be applied to characterize evaluations in requirements engineering. The findings from applying the model also show that the majority of technology evaluations in requirements engineering lack both industrial relevance and rigor. In addition, the research field does not show any improvements in terms of industrial relevance over time. --- paper_title: Mathematical notation in formal specification: too difficult for the masses? paper_content: The phrase "not much mathematics required" can imply a variety of skill levels. When this phrase is applied to computer scientists, software engineers, and clients in the area of formal specification, the word "much" can be widely misinterpreted with disastrous consequences.
A small experiment in reading specifications revealed that students already trained in discrete mathematics and the specification notation performed very poorly; much worse than could reasonably be expected if formal methods proponents are to be believed. --- paper_title: Systematic literature reviews in software engineering - A tertiary study paper_content: Context: In a previous study, we reported on a systematic literature review (SLR), based on a manual search of 13 journals and conferences undertaken in the period 1st January 2004 to 30th June 2007. Objective: The aim of this on-going research is to provide an annotated catalogue of SLRs available to software engineering researchers and practitioners. This study updates our previous study using a broad automated search. Method: We performed a broad automated search to find SLRs published in the time period 1st January 2004 to 30th June 2008. We contrast the number, quality and source of these SLRs with SLRs found in the original study. Results: Our broad search found an additional 35 SLRs corresponding to 33 unique studies. Of these papers, 17 appeared relevant to the undergraduate educational curriculum and 12 appeared of possible interest to practitioners. The number of SLRs being published is increasing. The quality of papers in conferences and workshops has improved as more researchers use SLR guidelines. Conclusion: SLRs appear to have gone past the stage of being used solely by innovators but cannot yet be considered a main stream software engineering research methodology. They are addressing a wide range of topics but still have limitations, such as often failing to assess primary study quality.
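The search-strategy study cited above evaluates an automated search against a 'quasi-gold standard', i.e., a set of known relevant studies assembled by a manual search; its 'quasi-sensitivity' is then the share of those known studies that the automated search retrieves. A minimal sketch of that calculation (the function and variable names are illustrative, not taken from the cited paper):

```python
def quasi_sensitivity(retrieved_ids, quasi_gold_standard_ids):
    """Fraction of the quasi-gold-standard studies found by the automated search."""
    retrieved = set(retrieved_ids)
    qgs = set(quasi_gold_standard_ids)
    if not qgs:
        raise ValueError("The quasi-gold standard must contain at least one study.")
    return len(retrieved & qgs) / len(qgs)

# Example: a manual search of selected venues produced 20 known studies (the quasi-gold
# standard); the automated search string retrieved 350 papers, 16 of which are among them.
qgs = {f"S{i}" for i in range(1, 21)}
automated = {f"S{i}" for i in range(1, 17)} | {f"X{i}" for i in range(334)}
print(f"quasi-sensitivity = {quasi_sensitivity(automated, qgs):.2f}")  # 0.80
```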
---
Title: Variability in Quality Attributes of Service-Based Software Systems: A Systematic Literature Review
Section 1: Introduction
Description 1: Introduce the concept of variability and its significance, especially in the context of service-based software systems. State the objective of the paper.
Section 2: Background
Description 2: Provide definitions and descriptions of service-based systems, quality attributes, and variability as used in the paper.
Section 2.1: Service-based systems and quality attributes
Description 2.1: Describe the principles of service-orientation and the adopted definitions of quality attributes.
Section 2.2: Variability
Description 2.2: Explain the concept of variability, its introduction through variation points, and differentiate between design time and runtime variability.
Section 2.3: Lack of existing reviews
Description 2.3: Highlight the existing gap in systematic reviews specifically focusing on variability in quality attributes of service-based systems.
Section 3: Paper goal and contributions
Description 3: Articulate the specific goals of the paper following the Goal-Question-Metric (GQM) perspectives and identify the target audience.
Section 4: Paper structure
Description 4: Provide an overview of the paper's organization and structure.
Section 5: Research method
Description 5: Describe the systematic literature review method used, its significance, and the protocol followed.
Section 5.1: Research questions
Description 5.1: List and explain the research questions that the review seeks to answer.
Section 5.2: Search strategy
Description 5.2: Detail the search strategy used to identify relevant studies, including preliminary searches, reviews, trial searches, and expert consultation.
Section 5.3: Search method
Description 5.3: Explain the method of conducting an automatic search using search strings on electronic data sources and manual searches to form a "quasi-gold" standard.
Section 5.4: Search terms for automatic search
Description 5.4: Outline the steps and terms used to generate search strings for automatic searches.
Section 5.5: Search scope and sources to be searched
Description 5.5: Define the scope of the search in terms of publication period and source.
Section 5.6: Search process
Description 5.6: Describe the stages of the study selection process, from searching databases to final inclusion/exclusion decisions.
Section 5.7: Inclusion and exclusion criteria
Description 5.7: List the inclusion and exclusion criteria used to filter relevant studies.
Section 5.8: Quality criteria
Description 5.8: Explain the quality criteria used to evaluate the papers, relating to rationale, context, research design, data analysis, findings, bias, and credibility.
Section 5.9: Data collection
Description 5.9: Describe the data extraction process and the use of data extraction forms for analysis.
Section 5.10: Data analysis
Description 5.10: Explain how data were summarized, synthesized, and analyzed using descriptive statistics and frequency analysis.
Section 6: Results and analysis
Description 6: Present the findings of the systematic review in response to the research questions.
Section 6.1: Results overview and demographics
Description 6.1: Provide an overview of the identified studies and highlight trends in published papers.
Section 6.2: RQ1: What quality attributes do existing methods for variability in quality attributes of service-based systems handle?
Description 6.2: Analyze the quality attributes addressed by the identified studies, identifying common sets and specific domains.
Section 6.3: RQ2: What software development activities are addressed by existing methods for handling variability in quality attributes of service-based systems?
Description 6.3: Evaluate and categorize the software development activities considered by the studies.
Section 6.4: RQ3: What solution types are used by methods to handle variability in quality attributes of service-based systems?
Description 6.4: Identify and classify the types of solutions used in the proposed methods.
Section 6.5: RQ4: What evidence is available to adopt proposed methods for handling variability in quality attributes of service-based systems?
Description 6.5: Assess the credibility of the studies based on citation counts, quality scores, evidence levels, and evaluation approaches.
Section 6.6: RQ5: Are methods only applicable to variability of design-time or run-time quality attributes?
Description 6.6: Determine the focus of the methods on design-time or runtime quality attributes.
Section 6.7: RQ6: Is there support for practitioners concerning how to use current methods?
Description 6.7: Evaluate the support available for practitioners in terms of practical implementations and tool support.
Section 7: Discussion of results
Description 7: Summarize the main findings, discuss limitations, and address threats to validity.
Section 7.1: Focus on certain quality attributes
Description 7.1: Discuss the emphasis on specific quality attributes in current research.
Section 7.2: Impact of product line engineering
Description 7.2: Reflect on the influence of product line engineering on current variability methods.
Section 7.3: Poor evidence of proposed methods
Description 7.3: Evaluate the evidence supporting proposed methods, highlighting gaps in industrial evidence.
Section 7.4: Implications for practitioners and researchers
Description 7.4: Discuss the relevance of the findings for both academic and industrial audiences.
Section 7.5: Research direction for future work
Description 7.5: Suggest future research directions based on the identified gaps and limitations.
Section 7.6: Inaccuracy and bias in selected papers for review
Description 7.6: Address potential inaccuracies and biases during the paper selection process.
Section 7.7: Inaccuracy and bias in data extraction
Description 7.7: Discuss measures taken to mitigate bias and inaccuracy in data extraction.
Section 7.8: Deviations from the procedures for systematic reviews
Description 7.8: Note deviations from standard systematic review procedures and their implications.
Section 7.9: Evaluation of review
Description 7.9: Evaluate the systematic review based on predefined quality questions.
Section 8: Conclusions
Description 8: Summarize the conclusions of the systematic review, highlighting key findings and recommendations for future research.
A survey on heterogeneous transfer learning
18
--- paper_title: A Survey on Transfer Learning paper_content: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. --- paper_title: Cross-Domain Learning Based Traditional Chinese Medicine Medical Record Classification paper_content: In Traditional Chinese Medicine(TCM) area, medical records are the objective record of a doctor's diagnosis and treatment and they are the basis of the TCM development. However, existing medical records of TCM are derived from books, medical cases, Web and most of them lack the categories information. In this paper, we propose a text classification method for the TCM medical record based on cross-domain topic model. First, we transform the physical books into the digital documents, then tokenize and filter the documents with domain lexicons to achieve the significative sequences of words which largely maintain the topics of original documents. Second, we use the cross domain topic model named Topic Relevance Weighting Model(TRWM) to generate the features. Finally, the generated features are leveraged for the medical records classification and compared with the baselines. The experimental results validate the effectiveness of our method. --- paper_title: A Survey on Transfer Learning paper_content: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. 
We also explore some potential future issues in transfer learning research. --- paper_title: A survey of transfer learning paper_content: Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is that the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments.
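The two survey abstracts above define transfer learning informally; the formalization commonly used in this literature (summarized here rather than quoted from either survey) distinguishes domains from tasks and, in the heterogeneous setting, allows the feature spaces themselves to differ:

```latex
% Standard domain/task notation used throughout the transfer learning literature.
\begin{align*}
&\text{Domain: } \mathcal{D} = \{\mathcal{X}, P(X)\}, \qquad
 \text{Task: } \mathcal{T} = \{\mathcal{Y}, f(\cdot)\}, \ \text{with } f(x) \approx P(y \mid x).\\
&\text{Given a source pair } (\mathcal{D}_S, \mathcal{T}_S) \text{ and a target pair } (\mathcal{D}_T, \mathcal{T}_T)
 \text{ with } \mathcal{D}_S \neq \mathcal{D}_T \text{ or } \mathcal{T}_S \neq \mathcal{T}_T,\\
&\text{transfer learning aims to improve the target predictive function } f_T(\cdot)
 \text{ using knowledge from } \mathcal{D}_S \text{ and } \mathcal{T}_S.\\
&\text{Heterogeneous transfer learning is the special case } \mathcal{X}_S \neq \mathcal{X}_T
 \text{ (e.g., } d_S \neq d_T\text{), not merely } P_S(X) \neq P_T(X).
\end{align*}
```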
--- paper_title: Deep learning applications and challenges in big data analytics paper_content: Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning. --- paper_title: A survey of open source tools for machine learning with big data in the Hadoop ecosystem paper_content: With an ever-increasing amount of options, the task of selecting machine learning tools for big data can be difficult. The available tools have advantages and drawbacks, and many have overlapping uses. The world’s data is growing rapidly, and traditional tools for machine learning are becoming insufficient as we move towards distributed and real-time processing. This paper is intended to aid the researcher or professional who understands machine learning but is inexperienced with big data. In order to evaluate tools, one should have a thorough understanding of what to look for. To that end, this paper provides a list of criteria for making selections along with an analysis of the advantages and drawbacks of each. We do this by starting from the beginning, and looking at what exactly the term “big data” means. From there, we go on to the Hadoop ecosystem for a look at many of the projects that are part of a typical machine learning architecture and an understanding of how everything might fit together. We discuss the advantages and disadvantages of three different processing paradigms along with a comparison of engines that implement them, including MapReduce, Spark, Flink, Storm, and H2O. We then look at machine learning libraries and frameworks including Mahout, MLlib, SAMOA, and evaluate them based on criteria such as scalability, ease of use, and extensibility. 
There is no single toolkit that truly embodies a one-size-fits-all solution, so this paper aims to help make decisions smoother by providing as much information as possible and quantifying what the tradeoffs will be. Additionally, throughout this paper, we review recent research in the field using these tools and talk about possible future directions for toolkit-based learning. --- paper_title: Heterogeneous Unsupervised Cross-domain Transfer Learning. paper_content: Transfer learning addresses the problem of how to leverage previously acquired knowledge (a source domain) to improve the efficiency of learning in a new domain (the target domain). Although transfer learning has been widely researched in the last decade, existing research still has two restrictions: 1) the feature spaces of the domains must be homogeneous; and 2) the target domain must have at least a few labeled instances. These restrictions significantly limit transfer learning models when transferring knowledge across domains, especially in the big data era. To completely break through both of these bottlenecks, a theorem for reliable unsupervised knowledge transfer is proposed to avoid negative transfers, and a Grassmann manifold is applied to measure the distance between heterogeneous feature spaces. Based on this theorem and the Grassmann manifold, this study proposes two heterogeneous unsupervised knowledge transfer (HeUKT) models - known as RLG and GLG. The RLG uses a linear monotonic map (LMM) to reliably project two heterogeneous feature spaces onto a latent feature space and applies a geodesic flow kernel (GFK) model to transfer knowledge between the two projected domains. The GLG optimizes the LMM to achieve the highest possible accuracy and guarantees that the geometric properties of the domains remain unchanged during the transfer process. To test the overall effectiveness of the two models, this paper reorganizes five public datasets into ten heterogeneous cross-domain tasks across three application fields: credit assessment, text classification, and cancer detection. Extensive experiments demonstrate that the proposed models deliver superior performance over current benchmarks, and that these HeUKT models are a promising way to give computers the associative ability to judge unknown things using related known knowledge.
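The HeUKT abstract above measures how far apart two feature subspaces are on a Grassmann manifold; a minimal sketch of the standard principal-angle computation behind such distances is below (it assumes the two subspace bases have already been brought into a common ambient space, e.g., by zero-padding the lower-dimensional domain, and it illustrates the general technique rather than the paper's RLG/GLG models):

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the column spaces of A and B (both D x d)
    on the Grassmann manifold, computed from their principal angles."""
    # Orthonormal bases for the two subspaces.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return np.linalg.norm(angles)

# Example: two 3-dimensional subspaces of R^50 (e.g., PCA bases of two domains
# after padding them to a shared ambient dimension).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
B = rng.standard_normal((50, 3))
print(grassmann_distance(A, B))
```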
--- paper_title: A Survey on Transfer Learning paper_content: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. --- paper_title: Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR) paper_content: Transfer learning aims to improve performance on a target task by utilizing previous knowledge learned from source tasks. In this paper we introduce a novel heterogeneous transfer learning technique, Feature-Space Remapping (FSR), which transfers knowledge between domains with different feature spaces. This is accomplished without requiring typical feature-feature, feature instance, or instance-instance co-occurrence data. Instead we relate features in different feature-spaces through the construction of metafeatures. We show how these techniques can utilize multiple source datasets to construct an ensemble learner which further improves performance. We apply FSR to an activity recognition problem and a document classification problem. The ensemble technique is able to outperform all other baselines and even performs better than a classifier trained using a large amount of labeled data in the target domain. These problems are especially difficult because, in addition to having different feature-spaces, the marginal probability distributions and the class labels are also different. This work extends the state of the art in transfer learning by considering large transfer across dramatically different spaces. --- paper_title: A survey of transfer learning paper_content: Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. 
Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments. --- paper_title: Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation paper_content: In this paper, we study the heterogeneous domain adaptation (HDA) problem, in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. By introducing two different projection matrices, we first transform the data from two domains into a common subspace such that the similarity between samples across different domains can be measured. We then propose a new feature mapping function for each domain, which augments the transformed samples with their original features and zeros. Existing supervised learning methods ( e.g., SVM and SVR) can be readily employed by incorporating our newly proposed augmented feature representations for supervised HDA. As a showcase, we propose a novel method called Heterogeneous Feature Augmentation (HFA) based on SVM. We show that the proposed formulation can be equivalently derived as a standard Multiple Kernel Learning (MKL) problem, which is convex and thus the global solution can be guaranteed. To additionally utilize the unlabeled data in the target domain, we further propose the semi-supervised HFA (SHFA) which can simultaneously learn the target classifier as well as infer the labels of unlabeled target samples. Comprehensive experiments on three different applications clearly demonstrate that our SHFA and HFA outperform the existing HDA methods. --- paper_title: Semi-supervised Subspace Co-Projection for Multi-class Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation aims to exploit labeled training data from a source domain for learning prediction models in a target domain under the condition that the two domains have different input feature representation spaces. In this paper, we propose a novel semi-supervised subspace co-projection method to address multi-class heterogeneous domain adaptation. The proposed method projects the instances of the two domains into a co-located latent subspace to bridge the feature divergence gap across domains, while simultaneously training prediction models in the co-projected representation space with labeled training instances from both domains. It also exploits the unlabeled data to promote the consistency of co-projected subspaces from the two domains based on a maximum mean discrepancy criterion. Moreover, to increase the stability and discriminative informativeness of the subspace co-projection, we further exploit the error-correcting output code schemes to incorporate more binary prediction tasks shared across domains into the learning process. We formulate this semi-supervised learning process as a non-convex joint minimization problem and develop an alternating optimization algorithm to solve it. To investigate the empirical performance of the proposed approach, we conduct experiments on cross-lingual text classification and cross-domain digit image classification tasks with heterogeneous feature spaces. The experimental results demonstrate the efficacy of the proposed method on these heterogeneous domain adaptation problems. 
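To make the augmented-feature construction described in the HFA entry above more concrete, the hedged sketch below pads projected source and target samples with their original features and zeros so that one standard classifier can be trained on both domains. HFA learns the projection matrices jointly with the classifier; here fixed PCA projections and synthetic data are placeholders, and scikit-learn is assumed.

```python
# Hedged sketch of zero-padded feature augmentation for heterogeneous domain
# adaptation: project each domain into a common subspace, then append the original
# features of that domain and zeros for the other domain's block.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
ds, dt, dc = 40, 25, 10                          # source dim, target dim, common dim
Xs, ys = rng.normal(size=(300, ds)), rng.integers(0, 2, 300)
Xt, yt = rng.normal(size=(60, dt)), rng.integers(0, 2, 60)

P = PCA(n_components=dc).fit(Xs).components_.T   # (ds, dc) stand-in for learned P
Q = PCA(n_components=dc).fit(Xt).components_.T   # (dt, dc) stand-in for learned Q

def augment_source(X):
    # [common-subspace features | original source features | zeros for target block]
    return np.hstack([X @ P, X, np.zeros((X.shape[0], dt))])

def augment_target(X):
    # [common-subspace features | zeros for source block | original target features]
    return np.hstack([X @ Q, np.zeros((X.shape[0], ds)), X])

X_all = np.vstack([augment_source(Xs), augment_target(Xt)])
y_all = np.concatenate([ys, yt])
clf = LinearSVC(dual=False).fit(X_all, y_all)    # any standard SVM/SVR fits here
print("training accuracy on the augmented space:", clf.score(X_all, y_all))
```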
--- paper_title: Heterogeneous domain adaptation network based on autoencoder paper_content: Heterogeneous domain adaptation is a more challenging problem than homogeneous domain adaptation. The transfer effect achieved by shallow structures is not ideal, because such structures cannot adequately describe the probability distribution or extract sufficiently effective features. In this paper, we propose a heterogeneous domain adaptation network based on autoencoder, in which two sets of autoencoder networks are used to project the source-domain and target-domain data to a shared feature space to obtain more abstract feature representations. In the last feature and classification layer, the marginal and conditional distributions can be matched by an empirical maximum mean discrepancy metric to reduce distribution difference. To preserve the consistency of geometric structure and label information, a manifold alignment term based on labels is introduced. The classification performance can be improved further by making full use of label information of both domains. The experimental results of 16 cross-domain transfer tasks verify that HDANA outperforms several state-of-the-art methods. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: Transfer Neural Trees for Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation (HDA) addresses the task of associating data not only across dissimilar domains but also described by different types of features. Inspired by the recent advances of neural networks and deep learning, we propose Transfer Neural Trees (TNT) which jointly solves cross-domain feature mapping, adaptation, and classification in an NN-based architecture. As the prediction layer in TNT, we further propose Transfer Neural Decision Forest (Transfer-NDF), which effectively adapts the neurons in TNT for adaptation by stochastic pruning. Moreover, to address semi-supervised HDA, a unique embedding loss term for preserving prediction and structural consistency between target-domain data is introduced into TNT. Experiments on classification tasks across features, datasets, and modalities successfully verify the effectiveness of our TNT. --- paper_title: Feature Space Independent Semi-Supervised Domain Adaptation via Kernel Matching paper_content: Domain adaptation methods aim to learn a good prediction model in a label-scarce target domain by leveraging labeled patterns from a related source domain where there is a large amount of labeled data. However, in many practical domain adaptation learning scenarios, the feature distribution in the source domain is different from that in the target domain.
In the extreme, the two distributions could differ completely when the feature representation of the source domain is totally different from that of the target domain. To address the problems of substantial feature distribution divergence across domains and heterogeneous feature representations of different domains, we propose a novel feature space independent semi-supervised kernel matching method for domain adaptation in this work. Our approach learns a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. We evaluate the proposed kernel matching method using both cross domain sentiment classification tasks of Amazon product reviews and cross language text classification tasks of Reuters multilingual newswire stories. Our empirical results demonstrate that the proposed kernel matching method consistently and significantly outperforms comparison methods on both cross domain classification problems with homogeneous feature spaces and cross domain classification problems with heterogeneous feature spaces. --- paper_title: Supervised and Semi-Supervised Multi-View Canonical Correlation Analysis Ensemble for Heterogeneous Domain Adaptation in Remote Sensing Image Classification paper_content: In this paper, we present the supervised multi-view canonical correlation analysis ensemble (SMVCCAE) and its semi-supervised version (SSMVCCAE), which are novel techniques designed to address heterogeneous domain adaptation problems, i.e., situations in which the data to be processed and recognized are collected from different heterogeneous domains. Specifically, the multi-view canonical correlation analysis scheme is utilized to extract multiple correlation subspaces that are useful for joint representations for data association across domains. This scheme makes homogeneous domain adaption algorithms suitable for heterogeneous domain adaptation problems. Additionally, inspired by fusion methods such as Ensemble Learning (EL), this work proposes a weighted voting scheme based on canonical correlation coefficients to combine classification results in multiple correlation subspaces. Finally, the semi-supervised MVCCAE extends the original procedure by incorporating multiple speed-up spectral regression kernel discriminant analysis (SRKDA). To validate the performances of the proposed supervised procedure, a single-view canonical analysis (SVCCA) with the same base classifier (Random Forests) is used. Similarly, to evaluate the performance of the semi-supervised approach, a comparison is made with other techniques such as Logistic label propagation (LLP) and the Laplacian support vector machine (LapSVM). All of the approaches are tested on two real hyperspectral images, which are considered the target domain, with a classifier trained from synthetic low-dimensional multispectral images, which are considered the original source domain. The experimental results confirm that multi-view canonical correlation can overcome the limitations of SVCCA. Both of the proposed procedures outperform the ones used in the comparison with respect to not only the classification accuracy but also the computational efficiency. 
Moreover, this research shows that canonical correlation weighted voting (CCWV) is a valid option with respect to other ensemble schemes and that because of their ability to balance diversity and accuracy, canonical views extracted using partially joint random view generation are more effective than those obtained by exploiting disjoint random view generation. --- paper_title: Co-transfer learning via joint transition probability graph based method paper_content: This paper studies a new machine learning strategy called co-transfer learning. Unlike many previous learning problems, we focus on how to use labeled data of different feature spaces to enhance the classification of different learning spaces simultaneously. For instance, we make use of both labeled images and labeled text data to help learn models for classifying image data and text data together. An important component of co-transfer learning is to build different relations to link different feature spaces, thus knowledge can be co-transferred across different spaces. Our idea is to model the problem as a joint transition probability graph. The transition probabilities can be constructed by using the intra-relationships based on an affinity metric among instances and the inter-relationships based on co-occurrence information among instances from different spaces. The proposed algorithm computes a ranking of labels to indicate the importance of a set of labels to an instance by propagating the ranking score of labeled instances via the random walk with restart. The main contribution of this paper is to (i) propose a co-transfer learning (CT-Learn) framework that can perform learning simultaneously by co-transferring knowledge across different spaces; (ii) show the theoretical properties of the random walk for such a joint transition probability graph so that the proposed learning model can be used effectively; (iii) develop an efficient algorithm to compute ranking scores and generate the possible labels for a given instance. Experimental results on benchmark data (image-text and English-Chinese-French classification data sets) have shown that the proposed algorithm is computationally efficient, and effective in learning across different spaces. In the comparison, we find that the classification performance of the CT-Learn algorithm is better than those of the other tested transfer learning algorithms. --- paper_title: Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace paper_content: We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) if nonlinear correlation subspaces are desirable. Such techniques not only make KCCA computationally more efficient, but also alleviate potential over-fitting problems. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate the exploitation of domain transfer ability in this subspace, in which each dimension has a unique capability in associating cross-domain data.
In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates the domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that our proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance. --- paper_title: Cross-Language Text Classification Using Structural Correspondence Learning paper_content: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of inter-language correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas. --- paper_title: Heterogeneous defect prediction paper_content: Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance. --- paper_title: Heterogeneous domain adaptation method for video annotation paper_content: In this study, the authors study the video annotation problem over heterogeneous domains, in which data from the image source domain and the video target domain is represented by heterogeneous features with different dimensions and physical meanings. A novel feature learning method, called heterogeneous discriminative analysis of canonical correlation (HDCC), is proposed to discover a common feature subspace in which heterogeneous features can be compared. The HDCC utilises discriminative information from the source domain as well as topology information from the target domain to learn two different projection matrices.
By using these two matrices, heterogeneous data can be projected onto a common subspace and different features can be compared. They additionally design a group weighting learning framework for multi-domain adaptation to effectively leverage knowledge learned from the source domain. Under this framework, source domain images are organised in groups according to their semantic meanings, and different weights are assigned to these groups according to their relevancies to the target domain videos. Extensive experiments on the Columbia Consumer Video and Kodak datasets demonstrate the effectiveness of their HDCC and group weighting methods. --- paper_title: Hybrid heterogeneous transfer learning through deep learning paper_content: Most previous heterogeneous transfer learning methods learn a cross-domain feature mapping between heterogeneous feature spaces based on a few cross-domain instance-correspondences, and these corresponding instances are assumed to be representative in the source and target domains respectively. However, in many real-world scenarios, this assumption may not hold. As a result, the constructed feature mapping may not be precise due to the bias issue of the correspondences in the target or (and) source domain(s). In this case, a classifier trained on the labeled transformed-source-domain data may not be useful for the target domain. In this paper, we present a new transfer learning framework called Hybrid Heterogeneous Transfer Learning (HHTL), which allows the corresponding instances across domains to be biased in either the source or target domain. Specifically, we propose a deep learning approach to learn a feature mapping between cross-domain heterogeneous features as well as a better feature representation for mapped data to reduce the bias issue caused by the cross-domain correspondences. Extensive experiments on several multilingual sentiment classification tasks verify the effectiveness of our proposed approach compared with some baseline methods. --- paper_title: Heterogeneous transfer learning for image classification paper_content: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. 
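The text-aided image classification entry above enriches image features with latent semantic concepts mined from auxiliary tagged data. The rough sketch below imitates that idea with an off-the-shelf non-negative matrix factorization and a least-squares bridge from visual features to topics; the synthetic data, the factorization, and the bridge are illustrative assumptions, not the paper's collective factorization or its Caltech-256 pipeline.

```python
# Rough sketch of text-aided image classification: learn latent "semantic concept"
# factors from an auxiliary image-tag matrix, then enrich target image features
# with their activations in that latent space before training a classifier.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_aux, n_tags, d_img, n_topics = 500, 100, 64, 12

# Auxiliary data: visual features of tagged web images and their tag counts.
V_aux = np.abs(rng.normal(size=(n_aux, d_img)))
T_aux = rng.poisson(0.2, size=(n_aux, n_tags)).astype(float)

# Factorize the tag matrix to obtain per-image topic activations.
nmf = NMF(n_components=n_topics, init="nndsvda", max_iter=400, random_state=0)
H_aux = nmf.fit_transform(T_aux)                    # (n_aux, n_topics)

# Simple least-squares bridge from visual features to topic activations
# (a stand-in for the joint factorization used in the paper).
B, *_ = np.linalg.lstsq(V_aux, H_aux, rcond=None)   # (d_img, n_topics)

# Enrich the label-scarce target images with the inferred semantic features.
X_tgt = np.abs(rng.normal(size=(80, d_img)))
y_tgt = rng.integers(0, 2, 80)
X_enriched = np.hstack([X_tgt, X_tgt @ B])

clf = LogisticRegression(max_iter=1000).fit(X_enriched, y_tgt)
print("enriched-feature training accuracy:", clf.score(X_enriched, y_tgt))
```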
--- paper_title: Large Margin Classification Using the Perceptron Algorithm paper_content: We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time. We also show that our algorithm can be efficiently used in very high dimensional spaces using kernel functions. We performed some experiments using our algorithm, and some variants of it, for classifying images of handwritten digits. The performance of our algorithm is close to, but not as good as, the performance of maximal-margin classifiers on the same problem, while saving significantly on computation time and programming effort. --- paper_title: OTL: A Framework of Online Transfer Learning paper_content: In this paper, we investigate a new machine learning framework called Online Transfer Learning (OTL) that aims to transfer knowledge from some source domain to an online learning task on a target domain. We do not assume the target data follows the same class or generative distribution as the source data, and our key motivation is to improve a supervised online learning task in a target domain by exploiting the knowledge that had been learned from large amount of training data in source domains. OTL is in general challenging since data in both domains not only can be different in their class distributions but can be also different in their feature representations. As a first attempt to this problem, we propose techniques to address two kinds of OTL tasks: one is to perform OTL in a homogeneous domain, and the other is to perform OTL across heterogeneous domains. We show the mistake bounds of the proposed OTL algorithms, and empirically examine their performance on several challenging OTL tasks. Encouraging results validate the efficacy of our techniques. --- paper_title: Online Passive-Aggressive Algorithms paper_content: We present a unified view for online classification, regression, and uni-class problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. A conversion of our main online algorithm to the setting of batch learning is also discussed. The end result is new algorithms and accompanying loss bounds for the hinge-loss. --- paper_title: Online Heterogeneous Transfer Learning by Weighted Offline and Online Classifiers paper_content: In this paper, we study online heterogeneous transfer learning (HTL) problems where offline labeled data from a source domain is transferred to enhance the online classification performance in a target domain. The main idea of our proposed algorithm is to build an offline classifier based on heterogeneous similarity constructed by using labeled data from a source domain and unlabeled co-occurrence data which can be easily collected from web pages and social networks. We also construct an online classifier based on data from a target domain, and combine the offline and online classifiers by using the Hedge weighting strategy to update their weights for ensemble prediction. The theoretical analysis of error bound of the proposed algorithm is provided. 
Experiments on a real-world data set demonstrate the effectiveness of the proposed algorithm. --- paper_title: Online learning: Theory, algorithms, and applications paper_content: Online learning is the process of answering a sequence of questions given knowledge of the correct answers to previous questions and possibly additional available information. Answering questions in an intelligent fashion and being able to make rational decisions as a result is a basic feature of everyday life. Will it rain today (so should I take an umbrella)? Should I fight the wild animal that is after me, or should I run away? Should I open an attachment in an email message or is it a virus? The study of online learning algorithms is thus an important domain in machine learning, and one that has interesting theoretical properties and practical applications. This dissertation describes a novel framework for the design and analysis of online learning algorithms. We show that various online learning algorithms can all be derived as special cases of our algorithmic framework. This unified view explains the properties of existing algorithms and also enables us to derive several new interesting algorithms. Online learning is performed in a sequence of consecutive rounds, where at each round the learner is given a question and is required to provide an answer to this question. After predicting an answer, the correct answer is revealed and the learner suffers a loss if there is a discrepancy between his answer and the correct one. The algorithmic framework for online learning we propose in this dissertation stems from a connection that we make between the notions of regret in online learning and weak duality in convex optimization. Regret bounds are the common thread in the analysis of online learning algorithms. A regret bound measures the performance of an online algorithm relative to the performance of a competing prediction mechanism, called a competing hypothesis. The competing hypothesis can be chosen in hindsight from a class of hypotheses, after observing the entire sequence of question- answer pairs. Over the years, competitive analysis techniques have been refined and extended to numerous prediction problems by employing complex and varied notions of progress toward a good competing hypothesis. We propose a new perspective on regret bounds which is based on the notion of duality in convex optimization. Regret bounds are universal in the sense that they hold for any possible fixed hypothesis in a given hypothesis class. We therefore cast the universal bound as a lower bound --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. 
This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Online Heterogeneous Transfer Learning by Weighted Offline and Online Classifiers paper_content: In this paper, we study online heterogeneous transfer learning (HTL) problems where offline labeled data from a source domain is transferred to enhance the online classification performance in a target domain. The main idea of our proposed algorithm is to build an offline classifier based on heterogeneous similarity constructed by using labeled data from a source domain and unlabeled co-occurrence data which can be easily collected from web pages and social networks. We also construct an online classifier based on data from a target domain, and combine the offline and online classifiers by using the Hedge weighting strategy to update their weights for ensemble prediction. The theoretical analysis of error bound of the proposed algorithm is provided. Experiments on a real-world data set demonstrate the effectiveness of the proposed algorithm. --- paper_title: Frustratingly Easy Domain Adaptation paper_content: We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough ``target'' data to do slightly better than just using only ``source'' data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. 
It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast this into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. Finally, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50%, compared with the methods using only the examples from the target task. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. --- paper_title: Kernel Methods for Pattern Analysis paper_content: Kernel methods provide a powerful and unified framework for pattern discovery, motivating algorithms that can act on general types of data (e.g. strings, vectors or text) and look for general types of relations (e.g. rankings, classifications, regressions, clusters). The application areas range from neural networks and pattern recognition to machine learning and data mining. This book, developed from lectures and tutorials, fulfils two major roles: firstly it provides practitioners with a large toolkit of algorithms, kernels and solutions ready to use for standard pattern discovery problems in fields such as bioinformatics, text analysis, image analysis. Secondly it provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so.
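For the spectral-transformation entry above, which unifies heterogeneous feature spaces through spectral embedding, the minimal sketch below embeds source and target instances jointly by running Laplacian eigenmaps on a bipartite cross-domain co-occurrence graph. The co-occurrence weights are synthetic and the embedding is a generic stand-in for the paper's structure-preserving objective; numpy and scipy are assumed.

```python
# Minimal sketch: place source and target instances in one bipartite graph using
# cross-domain co-occurrence weights, then embed everything into a shared
# low-dimensional space via the smallest non-trivial Laplacian eigenvectors.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_src, n_tgt, k = 120, 80, 5
C = rng.random((n_src, n_tgt))                  # cross-domain co-occurrence weights

# Bipartite affinity matrix over all source + target instances.
W = np.zeros((n_src + n_tgt, n_src + n_tgt))
W[:n_src, n_src:] = C
W[n_src:, :n_src] = C.T

# Symmetric normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt

# Smallest non-trivial eigenvectors give the unified embedding for both domains.
vals, vecs = eigh(L)
embedding = vecs[:, 1:k + 1]
Z_src, Z_tgt = embedding[:n_src], embedding[n_src:]
print("shared-space shapes:", Z_src.shape, Z_tgt.shape)
```

In the shared space, instances from the two feature spaces become directly comparable, which is the precondition for the sample-selection and Bayesian output-mapping steps described in that entry.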
--- paper_title: Heterogeneous domain adaptation using previously learned classifier for object detection problem paper_content: When a classifier trained on a specific domain (the source domain) is applied in a different domain (the target domain), the accuracy is degraded significantly. The main reason for this degradation is the distribution difference between the source and target domains. Domain adaptation aims to lessen this accuracy degradation. In this paper, we focus on adaptation for heterogeneous domains (where the source and target domain may have different feature spaces) and propose a novel algorithm which uses the pre-learned source classifier to adapt a trained target classifier. In this method, a max-margin classifier is trained on the target data and is adapted using the offset of the source classifier. The main strength of this adaptation is its low complexity and high speed, which makes it a proper adaptation choice for problems with large-size datasets such as object detection. We test our method on human detection datasets and the experimental results show the significant improvement in accuracy, in comparison to several baselines. --- paper_title: A SVM-based model-transferring method for heterogeneous domain adaptation paper_content: In many real classification scenarios the distribution of the test (target) domain is different from that of the training (source) domain. The distribution shift between the source and target domains may cause the source classifier not to gain the expected accuracy on the target data. Domain adaptation has been introduced to solve the accuracy-dropping problem caused by the distribution shift phenomenon between domains. In this paper, we study model-transferring methods as a practical branch of adaptation methods, which adapt the source classifier to new domains without using the source samples. We introduce a new SVM-based model-transferring method, in which a max-margin classifier is trained on labeled target samples and is adapted using the offset of the source classifier. We call it Heterogeneous Max-Margin Classifier Adaptation Method, abbreviated as HMCA. The main strength of HMCA is its applicability for heterogeneous domains where the source and target domains may have different feature types. This property is important because the previously proposed model-transferring methods do not provide any solution for heterogeneous problems. We also introduce a new similarity metric that reliably measures adaptability between two domains according to HMCA structure. In the situation that we have access to several source classifiers, the metric can be used to select the most appropriate one for adaptation. We test HMCA on two different computer vision problems (pedestrian detection and image classification). The experimental results show the advantage in accuracy rate for our approach in comparison to several baselines. We propose a new SVM-based model-transferring method for adaptation. Our method applies adaptation in the one-dimensional discrimination space. The proposed method can handle heterogeneous domains. Based on the proposed model-transferring method, we design a new metric for measuring the adaptability between two domains. --- paper_title: Heterogeneous Domain Adaptation for Multiple Classes paper_content: In this paper, we present an efficient multi-class heterogeneous domain adaptation method, where data from source and target domains are represented by heterogeneous features of different dimensions.
Specifically, we propose to reconstruct a sparse feature transformation matrix to map the weight vector of classifiers learned from the source domain to the target domain. We cast this learning task as a compressed sensing problem, where each binary classifier induced from multiple classes can be deemed as a measurement sensor. Based on the compressive sensing theory, the estimation error of the transformation matrix decreases with the increasing number of classifiers. Therefore, to guarantee reconstruction performance, we construct sufficiently many binary classifiers based on the error correcting output coding. Extensive experiments are conducted on both a toy dataset and three real-world datasets to verify the superiority of our proposed method over existing state-of-the-art HDA methods in terms of prediction accuracy. --- paper_title: Compressed sensing paper_content: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2 - 1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds.
This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Principal Component Analysis paper_content: When large multivariate datasets are analyzed, it is often desirable to reduce their dimensionality. Principal component analysis is one technique for doing this. It replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables. Often, it is possible to retain most of the variability in the original variables with q very much smaller than p. Despite its apparent simplicity, principal component analysis has a number of subtleties, and it has many uses and extensions. A number of choices associated with the technique are briefly discussed, namely, covariance or correlation, how many components, and different normalization constraints, as well as confusion with factor analysis. Various uses and extensions are outlined. Keywords: dimension reduction; factor analysis; multivariate analysis; variance maximization. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Solving Multiclass Learning Problems via Error-Correcting Output Codes paper_content: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form (x_i, f(x_i)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks.
We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems. --- paper_title: A framework for learning predictive structures from multiple tasks and unlabeled data paper_content: One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. --- paper_title: Heterogeneous transfer learning for image classification paper_content: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. 
While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. --- paper_title: Towards Semantic Knowledge Propagation from Text Corpus to Web Images paper_content: In this paper, we study the problem of transfer learning from text to images in the context of network data in which link based bridges are available to transfer the knowledge between the different domains. The problem of classification of image data is often much more challenging than text data because of the following two reasons: (a) Labeled text data is very widely available for classification purposes. On the other hand, this is often not the case for image data, in which a lot of images are available from many sources, but many of them are often not labeled. (b) The image features are not directly related to semantic concepts inherent in class labels. On the other hand, since text data tends to have natural semantic interpretability (because of their human origins), they are often more directly related to class labels. Therefore, the relationships between the images and text features also provide additional hints for the classification process in terms of the image feature transformations which provide the most effective results. The semantic challenges of image features are glaringly evident, when we attempt to recognize complex abstract concepts, and the visual features often fail to discriminate such concepts. However, the copious availability of bridging relationships between text and images in the context of web and social network data can be used in order to design for effective classifiers for image data. One of our goals in this paper is to develop a mathematical model for the functional relationships between text and image features, so as indirectly transfer semantic knowledge through feature transformations. This feature transformation is accomplished by mapping instances from different domains into a common space of unspecific topics. This is used as a bridge to semantically connect the two heterogeneous spaces. This is also helpful for the cases where little image data is available for the classification process. We evaluate our knowledge transfer techniques on an image classification task with labeled text corpora and show the effectiveness with respect to competing algorithms. --- paper_title: Translated Learning : Transfer Learning across Different Feature Spaces † paper_content: This paper investigates a new machine learning strategy called translated learning. 
Unlike many previous learning tasks, we focus on how to use labeled data from one feature space to enhance the classification of other entirely different learning spaces. For example, we might wish to use labeled text data to help learn a model for classifying image data, when the labeled images are difficult to obtain. An important aspect of translated learning is to build a "bridge" to link one feature space (known as the "source space") to another space (known as the "target space") through a translator in order to migrate the knowledge from source to target. The translated learning solution uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Finally, this chain of linkages is completed by tracing back to the instances in the target spaces. We show that this path of linkage can be modeled using a Markov chain and risk minimization. Through experiments on the text-aided image classification and cross-language classification tasks, we demonstrate that our translated learning framework can greatly outperform many state-of-the-art baseline methods. --- paper_title: Document Language Models, Query Models, and Risk Minimization for Information Retrieval paper_content: We present a framework for information retrieval that combines document models and query models using a probabilistic ranking function based on Bayesian decision theory. The framework suggests an operational retrieval model that extends recent developments in the language modeling approach to information retrieval. A language model for each document is estimated, as well as a language model for each query, and the retrieval problem is cast in terms of risk minimization. The query language model can be exploited to model user preferences, the context of a query, synonymy and word senses. While recent work has incorporated word translation models for this purpose, we introduce a new method using Markov chains defined on a set of documents to estimate the query models. The Markov chain method has connections to algorithms from link analysis and social networks. The new approach is evaluated on TREC collections and compared to the basic language modeling approach and vector space models together with query expansion using Rocchio. Significant improvements are obtained over standard query expansion methods for strong baseline TF-IDF systems, with the greatest improvements attained for short queries on Web data.
--- paper_title: Information-theoretic metric learning paper_content: In this paper, we present an information-theoretic approach to learning a Mahalanobis distance function. We formulate the problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function. We express this problem as a particular Bregman optimization problem---that of minimizing the LogDet divergence subject to linear constraints. Our resulting algorithm has several advantages over existing methods. First, our method can handle a wide variety of constraints and can optionally incorporate a prior on the distance function. Second, it is fast and scalable. Unlike most existing methods, no eigenvalue computations or semi-definite programming are required. We also present an online version and derive regret bounds for the resulting algorithm.
Finally, we evaluate our method on a recent error reporting system for software called Clarify, in the context of metric learning for nearest neighbor classification, as well as on standard data sets. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Frustratingly Easy Domain Adaptation paper_content: We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough ``target'' data to do slightly better than just using only ``source'' data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Random Forests paper_content: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. 
Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. --- paper_title: Supervised heterogeneous domain adaptation via random forests paper_content: Heterogeneity of features and lack of correspondence between data points of different domains are the two primary challenges while performing feature transfer. In this paper, we present a novel supervised domain adaptation algorithm (SHDA-RF) that learns the mapping between heterogeneous features of different dimensions. Our algorithm uses the shared label distributions present across the domains as pivots for learning a sparse feature transformation. The shared label distributions and the relationship between the feature spaces and the label distributions are estimated in a supervised manner using random forests. We conduct extensive experiments on three diverse datasets of varying dimensions and sparsity to verify the superiority of the proposed approach over other baseline and state of the art transfer approaches. --- paper_title: Relations Between Two Sets of Variates paper_content: Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each --- paper_title: Manifold regularization: A geometric framework for learning from labeled and unlabeled examples paper_content: We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. 
As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: A General Framework for Manifold Alignment paper_content: Manifold alignment has been found to be useful in many fields of machine learning and data mining. In this paper we summarize our work in this area and introduce a general framework for manifold alignment. This framework generates a family of approaches to align manifolds by simultaneously matching the corresponding instances and preserving the local geometry of each given manifold. Some approaches like semi-supervised alignment and manifold projections can be obtained as special cases. Our framework can also solve multiple manifold alignment problems and be adapted to handle the situation when no correspondence information is available. The approaches are described and evaluated both theoretically and experimentally, providing results showing useful knowledge transfer from one domain to another. Novel applications of our methods including identification of topics shared by multiple document collections, and biological structure alignment are discussed in the paper. --- paper_title: Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation paper_content: In this paper, we study the heterogeneous domain adaptation (HDA) problem, in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. By introducing two different projection matrices, we first transform the data from two domains into a common subspace such that the similarity between samples across different domains can be measured. We then propose a new feature mapping function for each domain, which augments the transformed samples with their original features and zeros. Existing supervised learning methods ( e.g., SVM and SVR) can be readily employed by incorporating our newly proposed augmented feature representations for supervised HDA. As a showcase, we propose a novel method called Heterogeneous Feature Augmentation (HFA) based on SVM. 
We show that the proposed formulation can be equivalently derived as a standard Multiple Kernel Learning (MKL) problem, which is convex and thus the global solution can be guaranteed. To additionally utilize the unlabeled data in the target domain, we further propose the semi-supervised HFA (SHFA) which can simultaneously learn the target classifier as well as infer the labels of unlabeled target samples. Comprehensive experiments on three different applications clearly demonstrate that our SHFA and HFA outperform the existing HDA methods. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: Abstract: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Learning Cross-Domain Landmarks for Heterogeneous Domain Adaptation paper_content: While domain adaptation (DA) aims to associate the learning tasks across data domains, heterogeneous domain adaptation (HDA) particularly deals with learning from cross-domain data which are of different types of features. In other words, for HDA, data from source and target domains are observed in separate feature spaces and thus exhibit distinct distributions. In this paper, we propose a novel learning algorithm of Cross-Domain Landmark Selection (CDLS) for solving the above task. With the goal of deriving a domain-invariant feature subspace for HDA, our CDLS is able to identify representative cross-domain data, including the unlabeled ones in the target domain, for performing adaptation. In addition, the adaptation capabilities of such cross-domain landmarks can be determined accordingly. This is the reason why our CDLS is able to achieve promising HDA performance when comparing to state-of-the-art HDA methods. 
We conduct classification experiments using data across different features, domains, and modalities. The effectiveness of our proposed method can be successfully verified. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Incremental Discriminant Learning for Heterogeneous Domain Adaptation paper_content: This paper proposes a new incremental learning method for heterogeneous domain adaptation, in which the training data from both source domain and target domains are acquired sequentially, represented by heterogeneous features. Two different projection matrices are learned to map the data from two domains into a discriminative common subspace, where the intra-class samples are closely-related to each other, the inter-class samples are well-separated from each other, and the data distribution mismatch between the source and target domains is reduced. Different from previous work, our method is capable of incrementally optimizing the projection matrices when the training data becomes available as a data stream instead of being given completely in advance. With the gradually coming training data, the new projection matrices are computed by updating the existing ones using an eigenspace merging algorithm, rather than repeating the learning from the begin by keeping the whole training data set. Therefore, our incremental learning solution for the projection matrices can significantly reduce the computational complexity and memory space, which makes it applicable to a wider set of heterogeneous domain adaptation scenarios with a large training dataset. Furthermore, our method is neither restricted to the corresponding training instances in the source and target domains nor restricted to the same type of feature, which meaningfully relaxes the requirement of training data. Comprehensive experiments on three benchmark datasets clearly demonstrate the effectiveness and efficiency of our method. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. 
Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. At last, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as~50\%, compared with the methods using only the examples from the target task. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. 
Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. --- paper_title: Kernel Methods for Pattern Analysis paper_content: Kernel methods provide a powerful and unified framework for pattern discovery, motivating algorithms that can act on general types of data (e.g. strings, vectors or text) and look for general types of relations (e.g. rankings, classifications, regressions, clusters). The application areas range from neural networks and pattern recognition to machine learning and data mining. This book, developed from lectures and tutorials, fulfils two major roles: firstly it provides practitioners with a large toolkit of algorithms, kernels and solutions ready to use for standard pattern discovery problems in fields such as bioinformatics, text analysis, image analysis. Secondly it provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so. --- paper_title: Learning from Multiple Outlooks paper_content: We propose a novel problem formulation of learning a single task when the data are provided in different feature spaces. Each such space is called an outlook, and is assumed to contain both labeled and unlabeled data. The objective is to take advantage of the data from all the outlooks to better classify each of the outlooks. We devise an algorithm that computes optimal affine mappings from different outlooks to a target outlook by matching moments of the empirical distributions. We further derive a probabilistic interpretation of the resulting algorithm and a sample complexity bound indicating how many samples are needed to adequately find the mapping. We report the results of extensive experiments on activity recognition tasks that show the value of the proposed approach in boosting performance. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. At last, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. 
By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50%, compared with the methods using only the examples from the target task. --- paper_title: Predictive Modeling with Heterogeneous Sources paper_content: Lack of labeled training examples is a common problem for many applications. At the same time, there is usually an abundance of labeled data from related tasks. But they have different distributions and outputs (e.g., different class labels, and different scales of regression values). Suppose that there is only a limited number of vaccine efficacy examples against the new epidemic swine flu H1N1, whereas there exists a large amount of labeled vaccine data against previous years’ flu. However, it is difficult to directly apply the older flu vaccine data as training examples because of the difference in data distribution and efficacy output criteria between different viruses. To increase the sources of labeled data, we propose a method to utilize these examples whose marginal distribution and output criteria can be different. The idea is to first select a subset of source examples similar in distribution to the target data; all the selected instances are then “re-scaled” and assigned new output values from the labeled space of the target task. A new predictive model is built on the enlarged training set. We derive a generalization bound that specifically considers distribution difference and further evaluate the model on a number of applications. For an siRNA efficacy prediction problem, we extract examples from 4 heterogeneous regression tasks and 2 classification tasks to learn the target model, and achieve an average improvement of 30% in accuracy.
--- paper_title: Efficient Estimation of Word Representations in Vector Space paper_content: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
--- paper_title: Cotransfer Learning Using Coupled Markov Chains with Restart paper_content: This article studies cotransfer learning, a machine learning strategy that uses labeled data to enhance the classification of different learning spaces simultaneously. The authors model the problem as a coupled Markov chain with restart. The transition probabilities in the coupled Markov chain can be constructed using the intrarelationships based on the affinity metric among instances in the same space, and the interrelationships based on co-occurrence information among instances from different spaces. The learning algorithm computes ranking of labels to indicate the importance of a set of labels to an instance by propagating the ranking score of labeled instances via the coupled Markov chain with restart. Experimental results on benchmark data (multiclass image-text and English-Spanish-French classification datasets) have shown that the learning algorithm is computationally efficient, and effective in learning across different spaces.
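For concreteness, the core computation behind the coupled-Markov-chain-with-restart entry above (and the random-walk-with-restart entries that follow) is the propagation of ranking scores from labeled instances over a transition-probability graph until a restart-damped fixed point is reached. The following is a minimal, self-contained Python sketch of that propagation step only; the affinity matrix, the restart probability, and the label-seeding scheme are illustrative assumptions, not the exact constructions used in the cited papers.

# Minimal sketch of ranking-score propagation with a restart term, in the spirit of
# the coupled-Markov-chain co-transfer entry above. The toy graph, the restart value,
# and the seeding are illustrative assumptions, not the authors' exact formulation.
import numpy as np

def propagate_labels(W, seed_scores, restart=0.15, n_iter=200, tol=1e-8):
    """Iterate r <- (1 - restart) * P^T r + restart * seeds until convergence.

    W           : (n, n) nonnegative affinity / co-occurrence matrix.
    seed_scores : (n, k) one-hot scores for labeled instances, zeros elsewhere.
    Returns     : (n, k) ranking scores of the k labels for every instance.
    """
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # row-stochastic transition matrix
    r = seed_scores.astype(float).copy()
    for _ in range(n_iter):
        r_new = (1.0 - restart) * (P.T @ r) + restart * seed_scores
        if np.abs(r_new - r).max() < tol:
            return r_new
        r = r_new
    return r

# Toy usage: 5 instances, 2 labels, instances 0 and 4 are labeled.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
seeds = np.zeros((5, 2))
seeds[0, 0] = 1.0   # instance 0 carries label 0
seeds[4, 1] = 1.0   # instance 4 carries label 1
print(propagate_labels(W, seeds).argmax(axis=1))   # predicted label per instance

Because the damping factor keeps the iteration a contraction, the scores converge regardless of the starting point, which is why the cited methods can precompute or approximate the fixed point efficiently.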
--- paper_title: Co-transfer learning via joint transition probability graph based method paper_content: This paper studies a new machine learning strategy called co-transfer learning. Unlike many previous learning problems, we focus on how to use labeled data of different feature spaces to enhance the classification of different learning spaces simultaneously. For instance, we make use of both labeled images and labeled text data to help learn models for classifying image data and text data together. An important component of co-transfer learning is to build different relations to link different feature spaces, thus knowledge can be co-transferred across different spaces. Our idea is to model the problem as a joint transition probability graph. The transition probabilities can be constructed by using the intra-relationships based on affinity metric among instances and the inter-relationships based on co-occurrence information among instances from different spaces. The proposed algorithm computes ranking of labels to indicate the importance of a set of labels to an instance by propagating the ranking score of labeled instances via the random walk with restart. The main contribution of this paper is to (i) propose a co-transfer learning (CT-Learn) framework that can perform learning simultaneously by co-transferring knowledge across different spaces; (ii) show the theoretical properties of the random walk for such joint transition probability graph so that the proposed learning model can be used effectively; (iii) develop an efficient algorithm to compute ranking scores and generate the possible labels for a given instance. Experimental results on benchmark data (image-text and English-Chinese-French classification data sets) have shown that the proposed algorithm is computationally efficient, and effective in learning across different spaces. In the comparison, we find that the classification performance of the CT-Learn algorithm is better than those of the other tested transfer learning algorithms.
--- paper_title: Random walk with restart: fast solutions and applications paper_content: How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the “connection subgraphs”, personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block-wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman–Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP datasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150 × speed up with 90%+ quality preservation.
--- paper_title: Measuring statistical dependence with Hilbert-Schmidt norms paper_content: We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
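The Hilbert-Schmidt Independence Criterion described in the preceding entry reduces to a very compact empirical statistic: with kernel matrices K and L computed on the two samples and the centering matrix H = I - (1/n)11^T, the biased estimate is HSIC = tr(KHLH) / (n - 1)^2. A minimal sketch of that estimate follows; the Gaussian kernels, the median-distance bandwidth heuristic, and the toy data are assumptions made here purely for illustration, not the only choices supported by the cited work.

# Minimal sketch of the biased empirical HSIC estimate, trace(K H L H) / (n - 1)^2,
# with Gaussian kernels. The median-distance bandwidth heuristic is an assumption.
import numpy as np

def gaussian_kernel(Z):
    # Pairwise squared distances, then a Gaussian kernel with a median-distance bandwidth.
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (Z @ Z.T)
    d2 = np.maximum(d2, 0.0)                      # guard against tiny negative round-off
    sigma2 = np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
    return np.exp(-d2 / (2.0 * sigma2))

def hsic(X, Y):
    # Biased empirical HSIC between samples X (n, dx) and Y (n, dy).
    n = X.shape[0]
    K, L = gaussian_kernel(X), gaussian_kernel(Y)
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

# Dependent samples should score clearly higher than independent ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
print(hsic(X, X + 0.1 * rng.normal(size=(200, 1))))   # strongly dependent pair
print(hsic(X, rng.normal(size=(200, 1))))             # (nearly) independent pair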
--- paper_title: Feature Space Independent Semi-Supervised Domain Adaptation via Kernel Matching paper_content: Domain adaptation methods aim to learn a good prediction model in a label-scarce target domain by leveraging labeled patterns from a related source domain where there is a large amount of labeled data. However, in many practical domain adaptation learning scenarios, the feature distribution in the source domain is different from that in the target domain. In the extreme, the two distributions could differ completely when the feature representation of the source domain is totally different from that of the target domain. To address the problems of substantial feature distribution divergence across domains and heterogeneous feature representations of different domains, we propose a novel feature space independent semi-supervised kernel matching method for domain adaptation in this work. Our approach learns a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. We evaluate the proposed kernel matching method using both cross domain sentiment classification tasks of Amazon product reviews and cross language text classification tasks of Reuters multilingual newswire stories. Our empirical results demonstrate that the proposed kernel matching method consistently and significantly outperforms comparison methods on both cross domain classification problems with homogeneous feature spaces and cross domain classification problems with heterogeneous feature spaces. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? 
This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. At last, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as~50\%, compared with the methods using only the examples from the target task. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. --- paper_title: Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation paper_content: In this paper, we study the heterogeneous domain adaptation (HDA) problem, in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. By introducing two different projection matrices, we first transform the data from two domains into a common subspace such that the similarity between samples across different domains can be measured. We then propose a new feature mapping function for each domain, which augments the transformed samples with their original features and zeros. Existing supervised learning methods ( e.g., SVM and SVR) can be readily employed by incorporating our newly proposed augmented feature representations for supervised HDA. As a showcase, we propose a novel method called Heterogeneous Feature Augmentation (HFA) based on SVM. 
We show that the proposed formulation can be equivalently derived as a standard Multiple Kernel Learning (MKL) problem, which is convex and thus the global solution can be guaranteed. To additionally utilize the unlabeled data in the target domain, we further propose the semi-supervised HFA (SHFA) which can simultaneously learn the target classifier as well as infer the labels of unlabeled target samples. Comprehensive experiments on three different applications clearly demonstrate that our SHFA and HFA outperform the existing HDA methods. --- paper_title: Integrating structured biological data by kernel maximum mean discrepancy paper_content: MOTIVATION ::: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. ::: ::: ::: RESULTS ::: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. ::: ::: ::: CONCLUSIONS ::: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. ::: ::: ::: AVAILABILITY ::: http://www.dbs.ifi.lmu.de/~borgward/MMD. --- paper_title: Semi-supervised Subspace Co-Projection for Multi-class Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation aims to exploit labeled training data from a source domain for learning prediction models in a target domain under the condition that the two domains have different input feature representation spaces. In this paper, we propose a novel semi-supervised subspace co-projection method to address multi-class heterogeneous domain adaptation. The proposed method projects the instances of the two domains into a co-located latent subspace to bridge the feature divergence gap across domains, while simultaneously training prediction models in the co-projected representation space with labeled training instances from both domains. It also exploits the unlabeled data to promote the consistency of co-projected subspaces from the two domains based on a maximum mean discrepancy criterion. Moreover, to increase the stability and discriminative informativeness of the subspace co-projection, we further exploit the error-correcting output code schemes to incorporate more binary prediction tasks shared across domains into the learning process. We formulate this semi-supervised learning process as a non-convex joint minimization problem and develop an alternating optimization algorithm to solve it. 
To investigate the empirical performance of the proposed approach, we conduct experiments on cross-lingual text classification and cross-domain digit image classification tasks with heterogeneous feature spaces. The experimental results demonstrate the efficacy of the proposed method on these heterogeneous domain adaptation problems. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: Abstract: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Solving Multiclass Learning Problems via Error-Correcting Output Codes paper_content: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form (xi, f(xi)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. 
We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast the problem as an optimization objective that preserves the original structure of the data while, at the same time, maximizing the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. Finally, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. For example, among the 12 experimental data sets, images with wavelet-transform-based features are used to predict another set of images whose features are constructed from a color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50%, compared with the methods using only the examples from the target task. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: Abstract: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms paper_content: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next.
In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. --- paper_title: Random Forests paper_content: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. 
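To make the random forest construction just summarized concrete, the following minimal Python sketch grows each tree on a bootstrap sample, restricts every split to a random subset of features, and combines the trees by majority vote. The iris data, the 25-tree ensemble size, and the sqrt feature-subset rule are illustrative choices, not settings taken from the cited paper.

```python
# A minimal sketch of the random forest idea: bagging plus random feature
# subsets per split, with predictions aggregated by majority vote.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

forest = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))            # bootstrap sample of the training set
    tree = DecisionTreeClassifier(max_features="sqrt",    # random feature subset at every split
                                  random_state=int(rng.integers(1_000_000)))
    forest.append(tree.fit(X[idx], y[idx]))

votes = np.stack([t.predict(X) for t in forest])           # one row of class votes per tree
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble training accuracy:", (majority == y).mean())
```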
--- paper_title: Supervised and Semi-Supervised Multi-View Canonical Correlation Analysis Ensemble for Heterogeneous Domain Adaptation in Remote Sensing Image Classification paper_content: In this paper, we present the supervised multi-view canonical correlation analysis ensemble (SMVCCAE) and its semi-supervised version (SSMVCCAE), which are novel techniques designed to address heterogeneous domain adaptation problems, i.e., situations in which the data to be processed and recognized are collected from different heterogeneous domains. Specifically, the multi-view canonical correlation analysis scheme is utilized to extract multiple correlation subspaces that are useful for joint representations for data association across domains. This scheme makes homogeneous domain adaption algorithms suitable for heterogeneous domain adaptation problems. Additionally, inspired by fusion methods such as Ensemble Learning (EL), this work proposes a weighted voting scheme based on canonical correlation coefficients to combine classification results in multiple correlation subspaces. Finally, the semi-supervised MVCCAE extends the original procedure by incorporating multiple speed-up spectral regression kernel discriminant analysis (SRKDA). To validate the performances of the proposed supervised procedure, a single-view canonical analysis (SVCCA) with the same base classifier (Random Forests) is used. Similarly, to evaluate the performance of the semi-supervised approach, a comparison is made with other techniques such as Logistic label propagation (LLP) and the Laplacian support vector machine (LapSVM). All of the approaches are tested on two real hyperspectral images, which are considered the target domain, with a classifier trained from synthetic low-dimensional multispectral images, which are considered the original source domain. The experimental results confirm that multi-view canonical correlation can overcome the limitations of SVCCA. Both of the proposed procedures outperform the ones used in the comparison with respect to not only the classification accuracy but also the computational efficiency. Moreover, this research shows that canonical correlation weighted voting (CCWV) is a valid option with respect to other ensemble schemes and that because of their ability to balance diversity and accuracy, canonical views extracted using partially joint random view generation are more effective than those obtained by exploiting disjoint random view generation. --- paper_title: Semi-supervised Subspace Co-Projection for Multi-class Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation aims to exploit labeled training data from a source domain for learning prediction models in a target domain under the condition that the two domains have different input feature representation spaces. In this paper, we propose a novel semi-supervised subspace co-projection method to address multi-class heterogeneous domain adaptation. The proposed method projects the instances of the two domains into a co-located latent subspace to bridge the feature divergence gap across domains, while simultaneously training prediction models in the co-projected representation space with labeled training instances from both domains. It also exploits the unlabeled data to promote the consistency of co-projected subspaces from the two domains based on a maximum mean discrepancy criterion. 
Moreover, to increase the stability and discriminative informativeness of the subspace co-projection, we further exploit the error-correcting output code schemes to incorporate more binary prediction tasks shared across domains into the learning process. We formulate this semi-supervised learning process as a non-convex joint minimization problem and develop an alternating optimization algorithm to solve it. To investigate the empirical performance of the proposed approach, we conduct experiments on cross-lingual text classification and cross-domain digit image classification tasks with heterogeneous feature spaces. The experimental results demonstrate the efficacy of the proposed method on these heterogeneous domain adaptation problems. --- paper_title: Heterogeneous Domain Adaptation for Multiple Classes paper_content: In this paper, we present an efficient multi-class heterogeneous domain adaptation method, where data from source and target domains are represented by heterogeneous features of different dimensions. Specifically, we propose to reconstruct a sparse feature transformation matrix to map the weight vector of classifiers learned from the source domain to the target domain. We cast this learning task as a compressed sensing problem, where each binary classifier induced from multiple classes can be deemed as a measurement sensor. Based on the compressive sensing theory, the estimation error of the transformation matrix decreases with the increasing number of classifiers. Therefore, to guarantee reconstruction performance, we construct sufficiently many binary classifiers based on the error correcting output coding. Extensive experiments are conducted on both a toy dataset and three real-world datasets to verify the superiority of our proposed method over existing state-of-the-art HDA methods in terms of prediction accuracy. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: Abstract: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: Transfer Neural Trees for Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation (HDA) addresses the task of associating data not only across dissimilar domains but also described by different types of features. Inspired by the recent advances of neural networks and deep learning, we propose Transfer Neural Trees (TNT) which jointly solves cross-domain feature mapping, adaptation, and classification in a NN-based architecture. As the prediction layer in TNT, we further propose Transfer Neural Decision Forest (Transfer-NDF), which effectively adapts the neurons in TNT for adaptation by stochastic pruning. 
Moreover, to address semi-supervised HDA, a unique embedding loss term for preserving prediction and structural consistency between target-domain data is introduced into TNT. Experiments on classification tasks across features, datasets, and modalities successfully verify the effectiveness of our TNT. --- paper_title: Deep Neural Decision Forests paper_content: We present a novel approach to enrich classification trees with the representation learning ability of deep (neural) networks within an end-to-end trainable architecture. We combine these two worlds via a stochastic and differentiable decision tree model, which steers the formation of latent representations within the hidden layers of a deep network. The proposed model differs from conventional deep networks in that a decision forest provides the final predictions and it differs from conventional decision forests by introducing a principled, joint and global optimization of split and leaf node parameters. Our approach compares favourably to other state-of-the-art deep models on a large-scale image classification task like ImageNet. --- paper_title: Random Forests paper_content: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. --- paper_title: Learning with Augmented Features for Heterogeneous Domain Adaptation paper_content: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods. 
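The augmented-feature construction described in the HFA abstracts above can be sketched as follows: source and target samples are projected into a common subspace by two matrices and then padded with their original features and zero blocks, so that one standard classifier can be trained on both domains at once. In HFA the projections are learned jointly with the SVM through an MKL formulation; the random projections, toy Gaussian data, and LinearSVC below are placeholders for illustration only.

```python
# A minimal sketch of HFA-style feature augmentation (P and Q are placeholder
# projections here; in HFA they are learned jointly with the classifier).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d_s, d_t, d_c = 50, 30, 10                                      # source / target / common dims
Xs, ys = rng.normal(size=(100, d_s)), rng.integers(0, 2, 100)   # labelled source data
Xt, yt = rng.normal(size=(20, d_t)), rng.integers(0, 2, 20)     # a few labelled target samples

P = rng.normal(size=(d_c, d_s))                                 # placeholder source projection
Q = rng.normal(size=(d_c, d_t))                                 # placeholder target projection

def augment_source(x):                                          # [P x ; x ; 0_{d_t}]
    return np.concatenate([P @ x, x, np.zeros(d_t)])

def augment_target(x):                                          # [Q x ; 0_{d_s} ; x]
    return np.concatenate([Q @ x, np.zeros(d_s), x])

X_aug = np.vstack([np.apply_along_axis(augment_source, 1, Xs),
                   np.apply_along_axis(augment_target, 1, Xt)])
y_aug = np.concatenate([ys, yt])

clf = LinearSVC(max_iter=5000).fit(X_aug, y_aug)                # one classifier over both domains
print("augmented feature dimension:", X_aug.shape[1])           # d_c + d_s + d_t = 90
```

The zero blocks ensure that the two domains only interact through the shared d_c-dimensional component, which is exactly the part the HFA optimization is designed to learn.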
--- paper_title: Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation paper_content: In this paper, we study the heterogeneous domain adaptation (HDA) problem, in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. By introducing two different projection matrices, we first transform the data from two domains into a common subspace such that the similarity between samples across different domains can be measured. We then propose a new feature mapping function for each domain, which augments the transformed samples with their original features and zeros. Existing supervised learning methods (e.g., SVM and SVR) can be readily employed by incorporating our newly proposed augmented feature representations for supervised HDA. As a showcase, we propose a novel method called Heterogeneous Feature Augmentation (HFA) based on SVM. We show that the proposed formulation can be equivalently derived as a standard Multiple Kernel Learning (MKL) problem, which is convex and thus the global solution can be guaranteed. To additionally utilize the unlabeled data in the target domain, we further propose the semi-supervised HFA (SHFA) which can simultaneously learn the target classifier as well as infer the labels of unlabeled target samples. Comprehensive experiments on three different applications clearly demonstrate that our SHFA and HFA outperform the existing HDA methods. --- paper_title: Heterogeneous domain adaptation network based on autoencoder paper_content: Heterogeneous domain adaptation is a more challenging problem than homogeneous domain adaptation. The transfer effect is often not ideal when shallow structures are used, because they cannot adequately describe the probability distributions or obtain effective features. In this paper, we propose a heterogeneous domain adaptation network based on autoencoders, in which two sets of autoencoder networks are used to project the source-domain and target-domain data to a shared feature space to obtain more abstract feature representations. In the last feature and classification layer, the marginal and conditional distributions can be matched by an empirical maximum mean discrepancy metric to reduce the distribution difference. To preserve the consistency of geometric structure and label information, a manifold alignment term based on labels is introduced. The classification performance can be improved further by making full use of label information of both domains. The experimental results on 16 cross-domain transfer tasks verify that HDANA outperforms several state-of-the-art methods. --- paper_title: Recognizing heterogeneous cross-domain data via generalized joint distribution adaptation paper_content: In this paper, we propose a novel algorithm of Generalized Joint Distribution Adaptation (G-JDA) for heterogeneous domain adaptation (HDA), which associates and recognizes cross-domain data observed in different feature spaces (and thus with different dimensionality). With the objective to derive a domain-invariant feature subspace for relating source and target-domain data, our G-JDA learns a pair of feature projection matrices (one for each domain), which allows us to eliminate the difference between projected cross-domain heterogeneous data by matching their marginal and class-conditional distributions. We conduct experiments on cross-domain classification tasks using data across different features, datasets, and modalities.
We confirm that our G-JDA would perform favorably against state-of-the-art HDA approaches. --- paper_title: Efficient Learning of Domain-invariant Image Representations paper_content: Abstract: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches. --- paper_title: Transfer Neural Trees for Heterogeneous Domain Adaptation paper_content: Heterogeneous domain adaptation (HDA) addresses the task of associating data not only across dissimilar domains but also described by different types of features. Inspired by the recent advances of neural networks and deep learning, we propose Transfer Neural Trees (TNT) which jointly solves cross-domain feature mapping, adaptation, and classification in a NN-based architecture. As the prediction layer in TNT, we further propose Transfer Neural Decision Forest (Transfer-NDF), which effectively adapts the neurons in TNT for adaptation by stochastic pruning. Moreover, to address semi-supervised HDA, a unique embedding loss term for preserving prediction and structural consistency between target-domain data is introduced into TNT. Experiments on classification tasks across features, datasets, and modalities successfully verify the effectiveness of our TNT. --- paper_title: Heterogeneous Domain Adaptation Using Manifold Alignment paper_content: We propose a manifold alignment based approach for heterogeneous domain adaptation. A key aspect of this approach is to construct mappings to link different feature spaces in order to transfer knowledge across domains. The new approach can reuse labeled data from multiple source domains in a target domain even in the case when the input domains do not share any common features or instances. As a pre-processing step, our approach can also be combined with existing domain adaptation approaches to learn a common feature space for all input domains. This paper extends existing manifold alignment approaches by making use of labels rather than correspondences to align the manifolds. This extension significantly broadens the application scope of manifold alignment, since the correspondence relationship required by existing alignment approaches is hard to obtain in many applications. --- paper_title: Deep learning paper_content: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. 
This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. --- paper_title: Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace paper_content: We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) if nonlinear correlation subspaces are desirable. Such techniques not only make KCCA computationally more efficient, but also alleviate potential over-fitting problems. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate the exploitation of domain transfer ability in this subspace, in which each dimension has a unique capability in associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates the domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that our proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance. --- paper_title: Relations Between Two Sets of Variates paper_content: Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting.
The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each --- paper_title: Marginalized Denoising Autoencoders for Domain Adaptation paper_content: Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters--in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB™, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks. --- paper_title: Multimodal Deep Learning paper_content: Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning. --- paper_title: Hybrid heterogeneous transfer learning through deep learning paper_content: Most previous heterogeneous transfer learning methods learn a cross-domain feature mapping between heterogeneous feature spaces based on a few cross-domain instance-correspondences, and these corresponding instances are assumed to be representative in the source and target domains respectively. However, in many real-world scenarios, this assumption may not hold. 
As a result, the constructed feature mapping may not be precise due to the bias issue of the correspondences in the target or (and) source domain(s). In this case, a classifier trained on the labeled transformed-source-domain data may not be useful for the target domain. In this paper, we present a new transfer learning framework called Hybrid Heterogeneous Transfer Learning (HHTL), which allows the corresponding instances across domains to be biased in either the source or target domain. Specifically, we propose a deep learning approach to learn a feature mapping between cross-domain heterogeneous features as well as a better feature representation for mapped data to reduce the bias issue caused by the cross-domain correspondences. Extensive experiments on several multilingual sentiment classification tasks verify the effectiveness of our proposed approach compared with some baseline methods. --- paper_title: Inferring a Semantic Representation of Text via Cross-Language Correlation Analysis paper_content: The problem of learning a semantic representation of a text document from data is addressed, in the situation where a corpus of unlabeled paired documents is available, each pair being formed by a short English document and its French translation. This representation can then be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case simple bag-of-words inner products, each part of the corpus is mapped to a high-dimensional space. The correlations between the two spaces are then learnt by using kernel Canonical Correlation Analysis. A set of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across the corpus. Using the semantic representation obtained in this way we first demonstrate that the correlations detected between the two versions of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that should reflect semantic information. Then we use such representation both in cross-language and in single-language retrieval tasks, observing performance that is consistently and significantly superior to LSI on the same data. --- paper_title: Deep learning paper_content: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. 
It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast the problem as an optimization objective that preserves the original structure of the data while, at the same time, maximizing the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. Finally, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. For example, among the 12 experimental data sets, images with wavelet-transform-based features are used to predict another set of images whose features are constructed from a color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50%, compared with the methods using only the examples from the target task. --- paper_title: Extracting and composing robust features with denoising autoencoders paper_content: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
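A compact way to illustrate the denoising principle just described is the one-layer marginalized variant from the mSDA abstract earlier in this section, which solves the expected reconstruction objective under feature-dropout corruption in closed form instead of sampling corrupted copies. The corruption level, the tanh nonlinearity, and the small ridge term in this sketch are conventional but illustrative choices.

```python
# A minimal sketch of a marginalized denoising autoencoder layer: reconstruct
# the clean input from its (expected) corrupted version, W = P Q^{-1}.
import numpy as np

def mda_layer(X, p=0.5, ridge=1e-5):
    """X: (n, d) data; returns the (n, d) hidden representation tanh(W x)."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])              # append a bias feature
    q = np.full(d + 1, 1.0 - p)                        # per-feature survival probability
    q[-1] = 1.0                                        # the bias is never corrupted
    S = Xb.T @ Xb                                      # scatter matrix
    Q = S * np.outer(q, q)                             # E[x~ x~^T], off-diagonal terms
    np.fill_diagonal(Q, np.diag(S) * q)                # diagonal: E[x~_i^2] = q_i S_ii
    P = S[:d, :] * q                                   # E[x x~^T] for the original d features
    W = np.linalg.solve(Q + ridge * np.eye(d + 1), P.T).T   # closed-form reconstruction map
    return np.tanh(Xb @ W.T)                           # nonlinear "denoised" features

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
H = mda_layer(X)                                       # layers can be stacked: mda_layer(H), ...
print(H.shape)                                         # (200, 20)
```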
--- paper_title: Domain Adaptation Problems: A DASVM Classification Technique and a Circular Validation Strategy paper_content: This paper addresses pattern classification in the framework of domain adaptation by considering methods that solve problems in which training data are assumed to be available only for a source domain different (even if related) from the target domain of (unlabeled) test data. Two main novel contributions are proposed: 1) a domain adaptation support vector machine (DASVM) technique which extends the formulation of support vector machines (SVMs) to the domain adaptation framework and 2) a circular indirect accuracy assessment strategy for validating the learning of domain adaptation classifiers when no true labels for the target--domain instances are available. Experimental results, obtained on a series of two-dimensional toy problems and on two real data sets related to brain computer interface and remote sensing applications, confirmed the effectiveness and the reliability of both the DASVM technique and the proposed circular validation strategy. --- paper_title: Heterogeneous domain adaptation method for video annotation paper_content: In this study, the authors study the video annotation problem over heterogeneous domains, in which data from the image source domain and the video target domain is represented by heterogeneous features with different dimensions and physical meanings. A novel feature learning method, called heterogeneous discriminative analysis of canonical correlation (HDCC), is proposed to discover a common feature subspace in which heterogeneous features can be compared. The HDCC utilises discriminative information from the source domain as well as topology information from the target domain to learn two different projection matrices. By using these two matrices, heterogeneous data can be projected onto a common subspace and different features can be compared. They additionally design a group weighting learning framework for multi-domain adaptation to effectively leverage knowledge learned from the source domain. Under this framework, source domain images are organised in groups according to their semantic meanings, and different weights are assigned to these groups according to their relevancies to the target domain videos. Extensive experiments on the Columbia Consumer Video and Kodak datasets demonstrate the effectiveness of their HDCC and group weighting methods. --- paper_title: Domain Adaptation With Structural Correspondence Learning paper_content: Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP tasks, however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cases, we seek to adapt existing models from a resource-rich source domain to a resource-poor target domain. We introduce structural correspondence learning to automatically induce correspondences among features from different domains. We test our technique on part of speech tagging and show performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger. 
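The structural correspondence learning idea summarized above can be sketched in a few lines: select pivot features that occur frequently in both domains, learn to predict each pivot from the remaining features on pooled unlabeled data, and take the top singular vectors of the stacked predictor weights as a shared projection that is appended to the original representation. Frequency-based pivot selection, 20 pivots, 10 SCL dimensions, and ridge regression in place of the paper's modified Huber loss are illustrative simplifications, not the exact settings of the cited work.

```python
# A minimal sketch of structural correspondence learning (SCL): pivot
# predictors on unlabeled data from both domains yield a shared projection.
import numpy as np

def scl_projection(X_unlabeled, n_pivots=20, k=10, ridge=1.0):
    """X_unlabeled: (n, d) binary/count features pooled from both domains."""
    freq = (X_unlabeled > 0).sum(axis=0)
    pivots = np.argsort(-freq)[:n_pivots]                  # most frequent features as pivots
    rest = np.setdiff1d(np.arange(X_unlabeled.shape[1]), pivots)
    Z = X_unlabeled[:, rest]
    Y = (X_unlabeled[:, pivots] > 0).astype(float)
    A = Z.T @ Z + ridge * np.eye(len(rest))                # ridge predictors of each pivot
    W = np.linalg.solve(A, Z.T @ Y)                        # (d - n_pivots, n_pivots) weights
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return rest, U[:, :k]                                  # shared projection theta

def scl_features(X, rest, theta):
    """Append the k SCL features to the original representation."""
    return np.hstack([X, X[:, rest] @ theta])

rng = np.random.default_rng(0)
X_pool = (rng.random((300, 200)) < 0.05).astype(float)     # toy unlabeled pool, both domains
rest, theta = scl_projection(X_pool)
print(scl_features(X_pool[:5], rest, theta).shape)         # (5, 200 + 10)
```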
--- paper_title: Cross-Language Text Classification Using Structural Correspondence Learning paper_content: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of inter-language correspondence modeling. ::: ::: We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas. --- paper_title: Feature Selection with High-Dimensional Imbalanced Data paper_content: Feature selection is an important topic in data mining, especially for high dimensional datasets. Filtering techniques in particular have received much attention, but detailed comparisons of their performance is lacking. This work considers three filters using classifier performance metrics and six commonly-used filters. All nine filtering techniques are compared and contrasted using five different microarray expression datasets. In addition, given that these datasets exhibit an imbalance between the number of positive and negative examples, the utilization of sampling techniques in the context of feature selection is examined. --- paper_title: Heterogeneous defect prediction paper_content: Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance. --- paper_title: Reducing Features to Improve Code Change-Based Bug Prediction paper_content: Machine learning classifiers have recently emerged as a way to predict the introduction of bugs in changes made to source code files. The classifier is first trained on software history, and then used to predict if an impending change causes a bug. Drawbacks of existing classifier-based bug prediction techniques are insufficient performance for practical use and slow prediction times due to a large number of machine learned features. This paper investigates multiple feature selection techniques that are generally applicable to classification-based bug prediction methods. 
The techniques discard less important features until optimal classification performance is reached. The total number of features used for training is substantially reduced, often to less than 10 percent of the original. The performance of Naive Bayes and Support Vector Machine (SVM) classifiers when using this technique is characterized on 11 software projects. Naive Bayes using feature selection provides significant improvement in buggy F-measure (21 percent improvement) over prior change classification bug prediction results (by the second and fourth authors [28]). The SVM's improvement in buggy F-measure is 9 percent. Interestingly, an analysis of performance for varying numbers of features shows that strong performance is achieved at even 1 percent of the original number of features. --- paper_title: Choosing software metrics for defect prediction: an investigation on feature selection techniques paper_content: The selection of software metrics for building software quality prediction models is a search-based software engineering problem. An exhaustive search for such metrics is usually not feasible due to limited project resources, especially if the number of available metrics is large. Defect prediction models are necessary in aiding project managers for better utilizing valuable project resources for software quality improvement. The efficacy and usefulness of a fault-proneness prediction model is only as good as the quality of the software measurement data. This study focuses on the problem of attribute selection in the context of software quality estimation. A comparative investigation is presented for evaluating our proposed hybrid attribute selection approach, in which feature ranking is first used to reduce the search space, followed by a feature subset selection. A total of seven different feature ranking techniques are evaluated, while four different feature subset selection approaches are considered. The models are trained using five commonly used classification algorithms. The case study is based on software metrics and defect data collected from multiple releases of a large real-world software system. The results demonstrate that while some feature ranking techniques performed similarly, the automatic hybrid search algorithm performed the best among the feature subset selection methods. Moreover, performances of the defect prediction models either improved or remained unchanged when over 85% of the metrics were eliminated. --- paper_title: Understanding and using linear programming paper_content: What Is It, and What For?; Examples; Integer Programming and LP Relaxation; Theory of Linear Programming: First Steps; The Simplex Method; Duality of Linear Programming; Not Only the Simplex Method; More Applications; Software and Further Reading. --- paper_title: A survey of transfer learning paper_content: Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is that the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning.
This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications of transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments. --- paper_title: The WM method completed: a flexible fuzzy system approach to data mining paper_content: In this paper, the so-called Wang-Mendel (WM) method for generating fuzzy rules from data is enhanced to make it a comprehensive and flexible fuzzy system approach to data description and prediction. In the description part, the core ideas of the WM method are used to develop three methods to extract fuzzy IF-THEN rules from data. The first method shows how to extract rules for the user-specified cases, the second method generates all the rules that can be generated directly from the data, and the third method extrapolates the rules generated by the second method over the entire domain of interest. In the prediction part, two fuzzy predictive models are constructed based on the fuzzy IF-THEN rules extracted by the methods of the description part. The first model gives a continuous output and is suitable for predicting continuous variables, and the second model gives a piecewise constant output and is suitable for predicting categorical variables. We show that by comparing the prediction accuracy of the fuzzy predictive models with different numbers of fuzzy sets covering the input variables, we can rank the importance of the input variables. We also propose an algorithm to optimize the fuzzy predictive models, and show how to use the models to solve pattern recognition problems. Throughout this paper, we use a set of real data from a steel rolling plant to demonstrate the ideas and test the models. --- paper_title: Towards fuzzy transfer learning for intelligent environments paper_content: By their very nature, Intelligent Environments (IEs) are infused with complexity, unreliability and uncertainty due to a combination of sensor noise and the human element. The quantity, type and availability of data to model these applications can be a major issue. Each situation is contextually different and constantly changing. The dynamic nature of the implementations presents a challenging problem when attempting to model or learn a model of the environment. Training data to construct the model must be within the same feature space and have the same distribution as the target task data; however, this is often highly costly and time-consuming. There can even be occurrences where labelled target data is completely lacking. It is within these situations that our study is focussed. In this paper we propose a framework to dynamically model IEs through the use of data sets from differing feature spaces and domains. The framework is constructed using a novel Fuzzy Transfer Learning (FuzzyTL) process. --- paper_title: Generating fuzzy rules by learning from examples paper_content: A general method is developed for generating fuzzy rules from numerical data.
The method consists of five steps: dividing the input and output spaces of the given numerical data into fuzzy regions; generating fuzzy rules from the given data; assigning a degree to each of the generated rules for the purpose of resolving conflicts among the generated rules; creating a combined fuzzy-associative-memory (FAM) bank based on both the generated rules and linguistic rules of human experts; and determining a mapping from input space to output space based on the combined FAM bank using a defuzzifying procedure. The mapping is proved to be capable of approximating any real continuous function on a compact set to arbitrary accuracy. The method is applied to predicting a chaotic time series. > --- paper_title: Stacked Generalization paper_content: This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory. --- paper_title: Heterogeneous transfer learning for activity recognition using heuristic search techniques paper_content: Purpose – The purpose of this paper is to study heterogeneous transfer learning for activity recognition using heuristic search techniques. Many pervasive computing applications require information about the activities currently being performed, but activity recognition algorithms typically require substantial amounts of labeled training data for each setting. One solution to this problem is to leverage transfer learning techniques to reuse available labeled data in new situations. Design/methodology/approach – This paper introduces three novel heterogeneous transfer learning techniques that reverse the typical transfer model and map the target feature space to the source feature space and apply them to activity recognition in a smart apartment. 
This paper evaluates the techniques on data from 18 different smart apartments located in an assisted-care facility and compares the results against several baselines. Findings – The three transfer learning techniques are all able to outperform the baseline comparisons. --- paper_title: Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR) paper_content: Transfer learning aims to improve performance on a target task by utilizing previous knowledge learned from source tasks. In this paper we introduce a novel heterogeneous transfer learning technique, Feature-Space Remapping (FSR), which transfers knowledge between domains with different feature spaces. This is accomplished without requiring typical feature-feature, feature instance, or instance-instance co-occurrence data. Instead we relate features in different feature-spaces through the construction of metafeatures. We show how these techniques can utilize multiple source datasets to construct an ensemble learner which further improves performance. We apply FSR to an activity recognition problem and a document classification problem. The ensemble technique is able to outperform all other baselines and even performs better than a classifier trained using a large amount of labeled data in the target domain. These problems are especially difficult because, in addition to having different feature-spaces, the marginal probability distributions and the class labels are also different. This work extends the state of the art in transfer learning by considering large transfer across dramatically different spaces. --- paper_title: Towards fuzzy transfer learning for intelligent environments paper_content: By their very nature, Intelligent Environments (IEs) are infused with complexity, unreliability and uncertainty due to a combination of sensor noise and the human element. The quantity, type and availability of data to model these applications can be a major issue. Each situation is contextually different and constantly changing. The dynamic nature of the implementations presents a challenging problem when attempting to model or learn a model of the environment. Training data to construct the model must be within the same feature space and have the same distribution as the target task data; however, this is often highly costly and time consuming. There can even be cases where labelled target data is completely lacking. It is within these situations that our study is focussed. In this paper we propose a framework to dynamically model IEs through the use of data sets from differing feature spaces and domains. The framework is constructed using a novel Fuzzy Transfer Learning (FuzzyTL) process. --- paper_title: Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace paper_content: We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) if nonlinear correlation subspaces are desirable. Such techniques not only make KCCA computationally more efficient, but also alleviate potential over-fitting problems.
Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate the exploitation of domain transfer ability in this subspace, in which each dimension has a unique capability in associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates the domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that our proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance. --- paper_title: Heterogeneous Unsupervised Cross-domain Transfer Learning. paper_content: Transfer learning addresses the problem of how to leverage previously acquired knowledge (a source domain) to improve the efficiency of learning in a new domain (the target domain). Although transfer learning has been widely researched in the last decade, existing research still has two restrictions: 1) the feature spaces of the domains must be homogeneous; and 2) the target domain must have at least a few labeled instances. These restrictions significantly limit transfer learning models when transferring knowledge across domains, especially in the big data era. To completely break through both of these bottlenecks, a theorem for reliable unsupervised knowledge transfer is proposed to avoid negative transfers, and a Grassmann manifold is applied to measure the distance between heterogeneous feature spaces. Based on this theorem and the Grassmann manifold, this study proposes two heterogeneous unsupervised knowledge transfer (HeUKT) models - known as RLG and GLG. The RLG uses a linear monotonic map (LMM) to reliably project two heterogeneous feature spaces onto a latent feature space and applies the geodesic flow kernel (GFK) model to transfer knowledge between the two projected domains. The GLG optimizes the LMM to achieve the highest possible accuracy and guarantees that the geometric properties of the domains remain unchanged during the transfer process. To test the overall effectiveness of the two models, this paper reorganizes five public datasets into ten heterogeneous cross-domain tasks across three application fields: credit assessment, text classification, and cancer detection. Extensive experiments demonstrate that the proposed models deliver superior performance over current benchmarks, and that these HeUKT models are a promising way to give computers the associative ability to judge unknown things using related known knowledge. --- paper_title: Learning Kernels for Unsupervised Domain Adaptation with Applications to Visual Object Recognition paper_content: Domain adaptation aims to correct the mismatch in statistical properties between the source domain on which a classifier is trained and the target domain to which the classifier is to be applied. In this paper, we address the challenging scenario of unsupervised domain adaptation, where the target domain does not provide any annotated data to assist in adapting the classifier.
Our strategy is to learn robust features which are resilient to the mismatch across domains and then use them to construct classifiers that will perform well on the target domain. To this end, we propose novel kernel learning approaches to infer such features for adaptation. Concretely, we explore two closely related directions. In the first direction, we propose unsupervised learning of a geodesic flow kernel (GFK). The GFK summarizes the inner products in an infinite sequence of feature subspaces that smoothly interpolates between the source and target domains. In the second direction, we propose supervised learning of a kernel that discriminatively combines multiple base GFKs. Those base kernels model the source and the target domains at fine-grained granularities. In particular, each base kernel pivots on a different set of landmarks--the most useful data instances that reveal the similarity between the source and the target domains, thus bridging them to achieve adaptation. Our approaches are computationally convenient, automatically infer important hyper-parameters, and are capable of learning features and classifiers discriminatively without demanding labeled data from the target domain. In extensive empirical studies on standard benchmark recognition datasets, our approaches yield state-of-the-art results compared to a variety of competing methods. --- paper_title: Geodesic flow kernel for unsupervised domain adaptation paper_content: In real-world applications of visual recognition, many factors — such as pose, illumination, or image quality — can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods. --- paper_title: Heterogeneous transfer learning for image classification paper_content: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning.
While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Indexing by Latent Semantic Analysis paper_content: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising. --- paper_title: Relational learning via collective matrix factorization paper_content: Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities. An example of relational learning is movie rating prediction, where entities could include users, movies, genres, and actors. Relations encode users' ratings of movies, movies' genres, and actors' roles in movies. A common prediction technique given one pairwise relation, for example a #users x #movies ratings matrix, is low-rank matrix factorization. In domains with multiple relations, represented as multiple matrices, we may improve predictive accuracy by exploiting information from one relation while predicting another. To this end, we propose a collective matrix factorization model: we simultaneously factor several matrices, sharing parameters among factors when an entity participates in multiple relations.
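The collective matrix factorization idea just summarized (several relation matrices factored jointly, with factors shared by entities that appear in more than one relation) can be sketched with a small alternating least-squares loop; the two-relation setup, ridge term, and dimensions below are illustrative assumptions, not the authors' Bregman-divergence formulation. The collective matrix factorization abstract continues after the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_movies, n_genres, k = 50, 40, 8, 5

# Two observed relations sharing the "movie" entity:
#   R1: users x movies (ratings), R2: movies x genres (indicators)
R1 = rng.random((n_users, n_movies))
R2 = (rng.random((n_movies, n_genres)) > 0.7).astype(float)

U = rng.normal(scale=0.1, size=(n_users, k))    # user factors
M = rng.normal(scale=0.1, size=(n_movies, k))   # shared movie factors
G = rng.normal(scale=0.1, size=(n_genres, k))   # genre factors
lam = 0.1                                       # ridge regularizer

def solve(A, B, reg):
    """Ridge least-squares update: argmin_X ||A - X B^T||^2 + reg ||X||^2."""
    return A @ B @ np.linalg.inv(B.T @ B + reg * np.eye(B.shape[1]))

for _ in range(30):
    U = solve(R1, M, lam)
    G = solve(R2.T, M, lam)
    # the shared movie factors see both relations, so stack the two systems
    A = np.hstack([R1.T, R2])        # (movies) x (users + genres)
    B = np.vstack([U, G])            # (users + genres) x k
    M = solve(A, B, lam)

err1 = np.linalg.norm(R1 - U @ M.T) / np.linalg.norm(R1)
err2 = np.linalg.norm(R2 - M @ G.T) / np.linalg.norm(R2)
print(f"relative reconstruction error: R1={err1:.3f}, R2={err2:.3f}")
```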
Each relation can have a different value type and error distribution; so, we allow nonlinear relationships between the parameters and outputs, using Bregman divergences to measure error. We extend standard alternating projection algorithms to our model, and derive an efficient Newton update for the projection. Furthermore, we propose stochastic optimization methods to deal with large, sparse matrices. Our model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems. Our model can handle any pairwise relational schema and a wide variety of error models. We demonstrate its efficiency, as well as the benefit of sharing parameters among relations. --- paper_title: Building text features for object image classification paper_content: We introduce a text-based image feature and demonstrate that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. We do not inspect or correct the tags and expect that they are noisy. We obtain the text feature of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples. Our text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. We test the performance of this feature using PASCAL VOC 2006 and 2007 datasets. Our feature performs well, consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. --- paper_title: Heterogeneous Transfer Learning for Image Clustering via the SocialWeb paper_content: In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to imagedata sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising. --- paper_title: CLUSTERING-BASED NETWORK INTRUSION DETECTION paper_content: Recently data mining methods have gained importance in addressing network security issues, including network intrusion detection — a challenging task in network security. Intrusion detection systems aim to identify attacks with a high detection rate and a low false alarm rate. Classification-based data mining models for intrusion detection are often ineffective in dealing with dynamic changes in intrusion patterns and characteristics. 
Consequently, unsupervised learning methods have been given a closer look for network intrusion detection. We investigate multiple centroid-based unsupervised clustering algorithms for intrusion detection, and propose a simple yet effective self-labeling heuristic for detecting attack and normal clusters of network traffic audit data. The clustering algorithms investigated include k-means, Mixture-Of-Spherical Gaussians, Self-Organizing Map, and Neural-Gas. The network traffic datasets provided by the DARPA 1998 offline intrusion detection project are used in our empirical investigation, which demonstrates the feasibility and promise of unsupervised learning methods for network intrusion detection. In addition, a comparative analysis shows the advantage of clustering-based methods over supervised classification techniques in identifying new or unseen attack types. --- paper_title: Some methods for classification and analysis of multivariate observations paper_content: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if $p$ is the probability mass function for the population, $S = \{S_1, S_2, \ldots, S_k\}$ is a partition of $E^N$, and $u_i$, $i = 1, 2, \ldots, k$, is the conditional mean of $p$ over the set $S_i$, then $W^2(S) = \sum_{i=1}^{k} \int_{S_i} \lvert z - u_i \rvert^2 \, dp(z)$ tends to be low for the partitions $S$ generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special --- paper_title: Self-taught clustering paper_content: This paper focuses on a new clustering task, called self-taught clustering. Self-taught clustering is an instance of unsupervised transfer learning, which aims at clustering a small collection of target unlabeled data with the help of a large amount of auxiliary unlabeled data. The target and auxiliary data can be different in topic distribution.
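For reference, the within-class-variance criterion $W^2(S)$ and the partitioning procedure described in the MacQueen abstract above reduce to a few lines of code; this is a minimal illustrative sketch (random initialization, fixed iteration count), not the original 1967 implementation, and the reference list continues after it.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign each point to the nearest mean, then recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    within = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, centers, within          # `within` is the W^2(S) criterion above

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
labels, centers, w2 = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels), "within-class variance:", round(w2, 2))
```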
We show that even when the target data are not sufficient to allow effective learning of a high quality feature representation, it is possible to learn the useful features with the help of the auxiliary data on which the target data can be clustered effectively. We propose a co-clustering based self-taught clustering algorithm to tackle this problem, by clustering the target and auxiliary data simultaneously to allow the feature representation from the auxiliary data to influence the target data through a common set of features. Under the new data representation, clustering on the target data can be improved. Our experiments on image clustering show that our algorithm can greatly outperform several state-of-the-art clustering methods when utilizing irrelevant unlabeled auxiliary data. --- paper_title: Probabilistic Latent Semantic Analysis paper_content: Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments. --- paper_title: Unsupervised Learning by Probabilistic Latent Semantic Analysis paper_content: This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis. --- paper_title: Heterogeneous Transfer Learning for Image Clustering via the SocialWeb paper_content: In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. 
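A minimal sketch of the EM updates behind the PLSA mixture decomposition described in the two PLSA abstracts above (the aspect model with p(w|z) and p(z|d)); the plain (untempered) EM loop, matrix shapes, and toy term-document counts are illustrative assumptions. The image-clustering entry quoted above continues after the sketch.

```python
import numpy as np

def plsa(N, n_topics, iters=100, seed=0):
    """Plain EM for PLSA on a term-document count matrix N (docs x words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = N.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)   # p(z|d)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)  # p(w|z)
    for _ in range(iters):
        # E-step: p(z|d,w) proportional to p(z|d) p(w|z); shape (docs, words, topics)
        post = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        post /= post.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate p(w|z) and p(z|d) from expected counts n(d,w) p(z|d,w)
        weighted = N[:, :, None] * post
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# toy corpus: 6 documents over 8 terms with two obvious topics
N = np.array([[5, 4, 3, 0, 0, 0, 1, 0],
              [4, 5, 2, 0, 1, 0, 0, 0],
              [3, 3, 4, 1, 0, 0, 0, 1],
              [0, 0, 1, 4, 5, 3, 0, 0],
              [0, 1, 0, 5, 4, 4, 1, 0],
              [1, 0, 0, 3, 3, 5, 0, 1]], dtype=float)
p_z_d, p_w_z = plsa(N, n_topics=2)
print(np.round(p_z_d, 2))   # documents 0-2 and 3-5 should favour different topics
```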
In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to imagedata sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising. --- paper_title: Heterogeneous transfer learning for image classification paper_content: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. --- paper_title: Learning Transferred Weights From Co-Occurrence Data for Heterogeneous Transfer Learning paper_content: One of the main research problems in heterogeneous transfer learning is to determine whether a given source domain is effective in transferring knowledge to a target domain, and then to determine how much of the knowledge should be transferred from a source domain to a target domain. The main objective of this paper is to solve this problem by evaluating the relatedness among given domains through transferred weights. We propose a novel method to learn such transferred weights with the aid of co-occurrence data, which contain the same set of instances but in different feature spaces. Because instances with the same category should have similar features, our method is to compute their principal components in each feature space such that co-occurrence data can be rerepresented by these principal components. The principal component coefficients from different feature spaces for the same instance in the co-occurrence data have the same order of significance for describing the category information. By using these principal component coefficients, the Markov Chain Monte Carlo method is employed to construct a directed cyclic network where each node is a domain and each edge weight is the conditional dependence from one domain to another domain. 
Here, the edge weight of the network can be employed as the transferred weight from a source domain to a target domain. The weight values can be taken as a prior for setting parameters in the existing heterogeneous transfer learning methods to control the amount of knowledge transferred from a source domain to a target domain. The experimental results on synthetic and real-world data sets are reported to illustrate the effectiveness of the proposed method that can capture strong or weak relations among feature spaces, and enhance the learning performance of heterogeneous transfer learning. --- paper_title: Co-transfer learning via joint transition probability graph based method paper_content: This paper studies a new machine learning strategy called co-transfer learning. Unlike many previous learning problems, we focus on how to use labeled data of different feature spaces to enhance the classification of different learning spaces simultaneously. For instance, we make use of both labeled images and labeled text data to help learn models for classifying image data and text data together. An important component of co-transfer learning is to build different relations to link different feature spaces, thus knowledge can be co-transferred across different spaces. Our idea is to model the problem as a joint transition probability graph. The transition probabilities can be constructed by using the intra-relationships based on affinity metric among instances and the inter-relationships based on co-occurrence information among instances from different spaces. The proposed algorithm computes ranking of labels to indicate the importance of a set of labels to an instance by propagating the ranking score of labeled instances via the random walk with restart. The main contribution of this paper is to (i) propose a co-transfer learning (CT-Learn) framework that can perform learning simultaneously by co-transferring knowledge across different spaces; (ii) show the theoretical properties of the random walk for such joint transition probability graph so that the proposed learning model can be used effectively; (iii) develop an efficient algorithm to compute ranking scores and generate the possible labels for a given instance. Experimental results on benchmark data (image-text and English-Chinese-French classification data sets) have shown that the proposed algorithm is computationally efficient, and effective in learning across different spaces. In the comparison, we find that the classification performance of the CT-Learn algorithm is better than those of the other tested transfer learning algorithms. --- paper_title: Improving Markov Chain Monte Carlo Model Search for Data Mining paper_content: The motivation of this paper is the application of MCMC model scoring procedures to data mining problems, involving a large number of competing models and other relevant model choice aspects. --- paper_title: Heterogeneous Transfer Learning for Image Clustering via the SocialWeb paper_content: In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. 
In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to the image-data sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising. --- paper_title: Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace paper_content: We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) if nonlinear correlation subspaces are desirable. Such techniques not only make KCCA computationally more efficient, but also alleviate potential over-fitting problems. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate the exploitation of domain transfer ability in this subspace, in which each dimension has a unique capability in associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates the domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that our proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance. --- paper_title: Heterogeneous transfer learning for image classification paper_content: Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related source domains for learning. While most of the existing works in this area only focused on using the source data with the same structure as the target data, in this paper, we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text can be arbitrarily found. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method.
By using the latent semantic features generated by the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset. --- paper_title: Learning Transferred Weights From Co-Occurrence Data for Heterogeneous Transfer Learning paper_content: One of the main research problems in heterogeneous transfer learning is to determine whether a given source domain is effective in transferring knowledge to a target domain, and then to determine how much of the knowledge should be transferred from a source domain to a target domain. The main objective of this paper is to solve this problem by evaluating the relatedness among given domains through transferred weights. We propose a novel method to learn such transferred weights with the aid of co-occurrence data, which contain the same set of instances but in different feature spaces. Because instances with the same category should have similar features, our method is to compute their principal components in each feature space such that co-occurrence data can be rerepresented by these principal components. The principal component coefficients from different feature spaces for the same instance in the co-occurrence data have the same order of significance for describing the category information. By using these principal component coefficients, the Markov Chain Monte Carlo method is employed to construct a directed cyclic network where each node is a domain and each edge weight is the conditional dependence from one domain to another domain. Here, the edge weight of the network can be employed as the transferred weight from a source domain to a target domain. The weight values can be taken as a prior for setting parameters in the existing heterogeneous transfer learning methods to control the amount of knowledge transferred from a source domain to a target domain. The experimental results on synthetic and real-world data sets are reported to illustrate the effectiveness of the proposed method that can capture strong or weak relations among feature spaces, and enhance the learning performance of heterogeneous transfer learning. --- paper_title: Cross-Language Text Classification Using Structural Correspondence Learning paper_content: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of inter-language correspondence modeling. ::: ::: We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas. --- paper_title: A Probabilistic Programming Approach for Outlier Detection in Healthcare Claims paper_content: Healthcare is an integral component in people's lives, especially for the rising elderly population. Medicare is one such healthcare program that provides for the needs of the elderly. It is imperative that these healthcare programs are affordable, but this is not always the case. 
Out of the many possible factors for the rising cost of healthcare, claims fraud is a major contributor, but its impact can be lessened through effective fraud detection. We propose a general outlier detection model, based on Bayesian inference, using probabilistic programming. Our model provides probability distributions rather than just point values, as with most common outlier detection methods. Credible intervals are also generated to further enhance confidence that the detected outliers should in fact be considered outliers. Two case studies are presented demonstrating our model's effectiveness in detecting outliers. The first case study uses temperature data in order to provide a clear comparison of several outlier detection techniques. The second case study uses a Medicare dataset to showcase our proposed outlier detection model. Our results show that the successful detection of outliers, which indicate possible fraudulent activities, can provide effective and meaningful results for further investigation within medical specialties or by using real-world, medical provider fraud investigation cases. --- paper_title: Analysis of Transfer Learning Performance Measures paper_content: In machine learning applications, there are scenarios of having no labeled training data, due to the data being rare or too expensive to obtain. In these cases, it is desirable to use readily available labeled data, that is similar to, but not the same as, the domain application of interest. Transfer learning algorithms are used to build high-performance classifiers, when the training data has different distribution characteristics from the testing data. For a transfer learning environment, it is not possible to use validation techniques (such as cross validation or data splitting) to set the desired performance of a classifier, due to the lack of labeled training data from the test domain. As a result, the area under the receiver operating characteristic curve (AUC) performance measure may not be predictive of the actual classifier performance. In an environment where validation techniques are not possible, the relationship between AUC and classification accuracy is needed to better characterize transfer learning algorithm performance. This paper provides relative performance analysis of state-of-the-art transfer learning algorithms and traditional machine learning algorithms, addressing the correlation between AUC and classification accuracy under domain class imbalance conditions with statistical analysis provided. --- paper_title: Towards Semantic Knowledge Propagation from Text Corpus to Web Images paper_content: In this paper, we study the problem of transfer learning from text to images in the context of network data in which link based bridges are available to transfer the knowledge between the different domains. The problem of classification of image data is often much more challenging than text data because of the following two reasons: (a) Labeled text data is very widely available for classification purposes. On the other hand, this is often not the case for image data, in which a lot of images are available from many sources, but many of them are often not labeled. (b) The image features are not directly related to semantic concepts inherent in class labels. On the other hand, since text data tends to have natural semantic interpretability (because of their human origins), they are often more directly related to class labels. 
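A minimal sketch of the "flag points outside a posterior credible interval" idea described in the healthcare-claims outlier abstract above; it uses a conjugate normal model with a noninformative prior (so the posterior predictive is a Student-t) as a stand-in for a full probabilistic-programming stack, and the toy claim amounts are synthetic and purely illustrative.

```python
import numpy as np
from scipy import stats

def credible_outliers(x, level=0.99):
    """Flag values outside the posterior-predictive credible interval of a normal model.

    Under a noninformative prior, the posterior predictive for a new observation
    is Student-t with n-1 dof, location x_bar and scale s*sqrt(1 + 1/n).
    """
    x = np.asarray(x, dtype=float)
    n, mean, s = len(x), x.mean(), x.std(ddof=1)
    scale = s * np.sqrt(1.0 + 1.0 / n)
    lo, hi = stats.t.interval(level, df=n - 1, loc=mean, scale=scale)
    return (x < lo) | (x > hi), (lo, hi)

rng = np.random.default_rng(3)
claims = np.concatenate([rng.normal(120.0, 15.0, 500), [450.0, 600.0]])  # two injected anomalies
flags, (lo, hi) = credible_outliers(claims, level=0.999)
print(f"credible interval: ({lo:.1f}, {hi:.1f}); flagged values: {claims[flags]}")
```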
Therefore, the relationships between the images and text features also provide additional hints for the classification process in terms of the image feature transformations which provide the most effective results. The semantic challenges of image features are glaringly evident when we attempt to recognize complex abstract concepts, and the visual features often fail to discriminate such concepts. However, the copious availability of bridging relationships between text and images in the context of web and social network data can be used to design effective classifiers for image data. One of our goals in this paper is to develop a mathematical model for the functional relationships between text and image features, so as to indirectly transfer semantic knowledge through feature transformations. This feature transformation is accomplished by mapping instances from different domains into a common space of unspecific topics. This is used as a bridge to semantically connect the two heterogeneous spaces. This is also helpful for the cases where little image data is available for the classification process. We evaluate our knowledge transfer techniques on an image classification task with labeled text corpora and show the effectiveness with respect to competing algorithms. --- paper_title: An Investigation of Transfer Learning and Traditional Machine Learning Algorithms paper_content: Previous research focusing on the evaluation of transfer learning algorithms has predominantly used real-world datasets to measure an algorithm's performance. A test with a real-world dataset exposes an algorithm to a single instance of distribution difference between the training (source) and test (target) datasets. These previous works have not measured performance over a wide range of source and target distribution differences. We propose to use a test framework that creates many source and target datasets from a single base dataset, representing a diverse range of distribution differences. These datasets will be used as a stress test to measure an algorithm's performance. The stress test process will measure and compare different transfer learning algorithms and traditional learning algorithms. The unique contributions of this paper, with respect to transfer learning, are defining a test framework, defining multiple distortion profiles, defining a stress test suite, and the evaluation and comparison of different transfer learning and traditional machine learning algorithms over a wide range of distributions. --- paper_title: Comparing Boosting and Bagging Techniques With Noisy and Imbalanced Data paper_content: This paper compares the performance of several boosting and bagging techniques in the context of learning from imbalanced and noisy binary-class data. Noise and class imbalance are two well-established data characteristics encountered in a wide range of data mining and machine learning initiatives. The learning algorithms studied in this paper, which include SMOTEBoost, RUSBoost, Exactly Balanced Bagging, and Roughly Balanced Bagging, combine boosting or bagging with data sampling to make them more effective when data are imbalanced. These techniques are evaluated in a comprehensive suite of experiments, for which nearly four million classification models were trained.
All classifiers are assessed using seven different performance metrics, providing a complete perspective on the performance of these techniques, and results are tested for statistical significance via analysis-of-variance modeling. The experiments show that the bagging techniques generally outperform boosting, and hence in noisy data environments, bagging is the preferred method for handling class imbalance. --- paper_title: Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR) paper_content: Transfer learning aims to improve performance on a target task by utilizing previous knowledge learned from source tasks. In this paper we introduce a novel heterogeneous transfer learning technique, Feature-Space Remapping (FSR), which transfers knowledge between domains with different feature spaces. This is accomplished without requiring typical feature-feature, feature instance, or instance-instance co-occurrence data. Instead we relate features in different feature-spaces through the construction of metafeatures. We show how these techniques can utilize multiple source datasets to construct an ensemble learner which further improves performance. We apply FSR to an activity recognition problem and a document classification problem. The ensemble technique is able to outperform all other baselines and even performs better than a classifier trained using a large amount of labeled data in the target domain. These problems are especially difficult because, in addition to having different feature-spaces, the marginal probability distributions and the class labels are also different. This work extends the state of the art in transfer learning by considering large transfer across dramatically different spaces. --- paper_title: Online Heterogeneous Transfer Learning by Weighted Offline and Online Classifiers paper_content: In this paper, we study online heterogeneous transfer learning (HTL) problems where offline labeled data from a source domain is transferred to enhance the online classification performance in a target domain. The main idea of our proposed algorithm is to build an offline classifier based on heterogeneous similarity constructed by using labeled data from a source domain and unlabeled co-occurrence data which can be easily collected from web pages and social networks. We also construct an online classifier based on data from a target domain, and combine the offline and online classifiers by using the Hedge weighting strategy to update their weights for ensemble prediction. The theoretical analysis of error bound of the proposed algorithm is provided. Experiments on a real-world data set demonstrate the effectiveness of the proposed algorithm. --- paper_title: Supervised and Semi-Supervised Multi-View Canonical Correlation Analysis Ensemble for Heterogeneous Domain Adaptation in Remote Sensing Image Classification paper_content: In this paper, we present the supervised multi-view canonical correlation analysis ensemble (SMVCCAE) and its semi-supervised version (SSMVCCAE), which are novel techniques designed to address heterogeneous domain adaptation problems, i.e., situations in which the data to be processed and recognized are collected from different heterogeneous domains. Specifically, the multi-view canonical correlation analysis scheme is utilized to extract multiple correlation subspaces that are useful for joint representations for data association across domains. 
This scheme makes homogeneous domain adaption algorithms suitable for heterogeneous domain adaptation problems. Additionally, inspired by fusion methods such as Ensemble Learning (EL), this work proposes a weighted voting scheme based on canonical correlation coefficients to combine classification results in multiple correlation subspaces. Finally, the semi-supervised MVCCAE extends the original procedure by incorporating multiple speed-up spectral regression kernel discriminant analysis (SRKDA). To validate the performances of the proposed supervised procedure, a single-view canonical analysis (SVCCA) with the same base classifier (Random Forests) is used. Similarly, to evaluate the performance of the semi-supervised approach, a comparison is made with other techniques such as Logistic label propagation (LLP) and the Laplacian support vector machine (LapSVM). All of the approaches are tested on two real hyperspectral images, which are considered the target domain, with a classifier trained from synthetic low-dimensional multispectral images, which are considered the original source domain. The experimental results confirm that multi-view canonical correlation can overcome the limitations of SVCCA. Both of the proposed procedures outperform the ones used in the comparison with respect to not only the classification accuracy but also the computational efficiency. Moreover, this research shows that canonical correlation weighted voting (CCWV) is a valid option with respect to other ensemble schemes and that because of their ability to balance diversity and accuracy, canonical views extracted using partially joint random view generation are more effective than those obtained by exploiting disjoint random view generation. --- paper_title: Heterogeneous domain adaptation method for video annotation paper_content: In this study, the authors study the video annotation problem over heterogeneous domains, in which data from the image source domain and the video target domain is represented by heterogeneous features with different dimensions and physical meanings. A novel feature learning method, called heterogeneous discriminative analysis of canonical correlation (HDCC), is proposed to discover a common feature subspace in which heterogeneous features can be compared. The HDCC utilises discriminative information from the source domain as well as topology information from the target domain to learn two different projection matrices. By using these two matrices, heterogeneous data can be projected onto a common subspace and different features can be compared. They additionally design a group weighting learning framework for multi-domain adaptation to effectively leverage knowledge learned from the source domain. Under this framework, source domain images are organised in groups according to their semantic meanings, and different weights are assigned to these groups according to their relevancies to the target domain videos. Extensive experiments on the Columbia Consumer Video and Kodak datasets demonstrate the effectiveness of their HDCC and group weighting methods. --- paper_title: Co-transfer learning via joint transition probability graph based method paper_content: This paper studies a new machine learning strategy called co-transfer learning. Unlike many previous learning problems, we focus on how to use labeled data of different feature spaces to enhance the classification of different learning spaces simultaneously. 
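A minimal sketch of the correlation-subspace idea running through the CCA-based methods above: fit CCA on paired (co-occurring) samples from the two feature spaces, project each domain into the shared subspace, then train an off-the-shelf classifier there. scikit-learn's plain CCA and a linear SVM are used as stand-ins; the correlation-regularized SVM and the ensemble weighting schemes of the cited papers are not reproduced, and the synthetic paired data is an illustrative assumption.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_pairs, d_src, d_tgt, k = 300, 20, 12, 5

# Paired co-occurrence data: the same latent signal observed in two feature spaces.
latent = rng.normal(size=(n_pairs, k))
Xs = latent @ rng.normal(size=(k, d_src)) + 0.1 * rng.normal(size=(n_pairs, d_src))
Xt = latent @ rng.normal(size=(k, d_tgt)) + 0.1 * rng.normal(size=(n_pairs, d_tgt))
y = (latent[:, 0] > 0).astype(int)          # labels assumed available on the source side

cca = CCA(n_components=k).fit(Xs, Xt)       # learn the correlation subspace from the pairs
Zs, Zt = cca.transform(Xs, Xt)              # both domains mapped into the shared subspace

clf = LinearSVC().fit(Zs, y)                # train on projected source data
acc = (clf.predict(Zt) == y).mean()         # evaluate on projected target data
print(f"target-domain accuracy in the correlation subspace: {acc:.2f}")
```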
For instance, we make use of both labeled images and labeled text data to help learn models for classifying image data and text data together. An important component of co-transfer learning is to build different relations to link different feature spaces, thus knowledge can be co-transferred across different spaces. Our idea is to model the problem as a joint transition probability graph. The transition probabilities can be constructed by using the intra-relationships based on affinity metric among instances and the inter-relationships based on co-occurrence information among instances from different spaces. The proposed algorithm computes ranking of labels to indicate the importance of a set of labels to an instance by propagating the ranking score of labeled instances via the random walk with restart. The main contribution of this paper is to (i) propose a co-transfer learning (CT-Learn) framework that can perform learning simultaneously by co-transferring knowledge across different spaces; (ii) show the theoretical properties of the random walk for such joint transition probability graph so that the proposed learning model can be used effectively; (iii) develop an efficient algorithm to compute ranking scores and generate the possible labels for a given instance. Experimental results on benchmark data (image-text and English-Chinese-French classification data sets) have shown that the proposed algorithm is computationally efficient, and effective in learning across different spaces. In the comparison, we find that the classification performance of the CT-Learn algorithm is better than those of the other tested transfer learning algorithms. --- paper_title: Towards fuzzy transfer learning for intelligent environments paper_content: By their very nature, Intelligent Environments (IE’s) are infused with complexity, unreliability and uncertainty due to a combination of sensor noise and the human element. The quantity, type and availability of data to model these applications can be a major issue. Each situation is contextually different and constantly changing. The dynamic nature of the implementations present a challenging problem when attempting to model or learn a model of the environment. Training data to construct the model must be within the same feature space and have the same distribution as the target task data, however this is often highly costly and time consuming. There can even be occurrences were a complete lack of labelled target data occurs. It is within these situations that our study is focussed. In this paper we propose a framework to dynamically model IE’s through the use of data sets from differing feature spaces and domains. The framework is constructed using a novel Fuzzy Transfer Learning (FuzzyTL) process. --- paper_title: The pairwise attribute noise detection algorithm paper_content: Analyzing the quality of data prior to constructing data mining models is emerging as an important issue. Algorithms for identifying noise in a given data set can provide a good measure of data quality. Considerable attention has been devoted to detecting class noise or labeling errors. In contrast, limited research work has been devoted to detecting instances with attribute noise, in part due to the difficulty of the problem. We present a novel approach for detecting instances with attribute noise and demonstrate its usefulness with case studies using two different real-world software measurement data sets. 
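A minimal sketch of the random-walk-with-restart label propagation used by the co-transfer framework described above: build a joint transition matrix over instances from both feature spaces (intra-domain affinities plus inter-domain co-occurrence links), then iterate r <- (1-c) W^T r + c e until the ranking scores stabilize. The tiny hand-built graph and restart constant are illustrative assumptions.

```python
import numpy as np

def rwr(W, seed_idx, restart=0.3, iters=200):
    """Random walk with restart on a row-stochastic transition matrix W."""
    n = W.shape[0]
    e = np.zeros(n); e[seed_idx] = 1.0 / len(seed_idx)   # restart distribution over labeled seeds
    r = e.copy()
    for _ in range(iters):
        r = (1 - restart) * W.T @ r + restart * e
    return r                                             # ranking score of each instance

# joint graph over 3 image instances (0-2) and 3 text instances (3-5):
# intra-domain affinity blocks on the diagonal, co-occurrence links off-diagonal
A = np.array([
    [0, 1, 0,   1, 0, 0],
    [1, 0, 1,   0, 1, 0],
    [0, 1, 0,   0, 0, 1],
    [1, 0, 0,   0, 1, 0],
    [0, 1, 0,   1, 0, 1],
    [0, 0, 1,   0, 1, 0],
], dtype=float)
W = A / A.sum(axis=1, keepdims=True)       # row-normalize into transition probabilities

scores = rwr(W, seed_idx=[0, 3])           # instances 0 (image) and 3 (text) carry the label
print(np.round(scores, 3))                 # higher score = stronger association with that label
```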
Our approach, called Pairwise Attribute Noise Detection Algorithm (PANDA), is compared with a nearest neighbor, distance-based outlier detection technique (denoted DM) investigated in related literature. Since what constitutes noise is domain specific, our case studies use a software engineering expert to inspect the instances identified by the two approaches to determine whether they actually contain noise. It is shown that PANDA provides better noise detection performance than the DM algorithm. --- paper_title: Translated Learning : Transfer Learning across Different Feature Spaces † paper_content: This paper investigates a new machine learning strategy called translated learning. Unlike many previous learning tasks, we focus on how to use labeled data from one feature space to enhance the classification of other entirely different learning spaces. For example, we might wish to use labeled text data to help learn a model for classifying image data, when the labeled images are difficult to obtain. An important aspect of translated learning is to build a "bridge" to link one feature space (known as the "source space") to another space (known as the "target space") through a translator in order to migrate the knowledge from source to target. The translated learning solution uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Finally, this chain of linkages is completed by tracing back to the instances in the target spaces. We show that this path of linkage can be modeled using a Markov chain and risk minimization. Through experiments on the text-aided image classification and cross-language classification tasks, we demonstrate that our translated learning framework can greatly outperform many state-of-the-art baseline methods. --- paper_title: Transfer Learning on Heterogenous Feature Spaces via Spectral Transformation paper_content: Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast this into an optimization objective that preserves the original structure of the data, while at the same time maximizing the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. Lastly, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50%, compared with the methods using only the examples from the target task.
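A minimal sketch of the first step of the spectral-transformation pipeline just described: build one joint affinity graph over source and target instances (within-domain k-NN similarities plus any known cross-domain correspondences) and embed both domains into a common low-dimensional space with a spectral embedding. scikit-learn's SpectralEmbedding with a precomputed affinity is used as a stand-in; the paper's specific objective, sample selection step, and Bayesian output mapping are not reproduced, and the random data and correspondence pattern are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(11)
ns, nt = 80, 60
Xs = rng.normal(size=(ns, 30))                       # source domain, 30-d features
Xt = rng.normal(size=(nt, 12))                       # target domain, 12-d features

# within-domain affinities from symmetrized k-NN graphs
As = kneighbors_graph(Xs, n_neighbors=5, mode='connectivity').toarray()
At = kneighbors_graph(Xt, n_neighbors=5, mode='connectivity').toarray()
As, At = np.maximum(As, As.T), np.maximum(At, At.T)

# assumed cross-domain correspondences (e.g., co-occurring instance pairs)
C = np.zeros((ns, nt))
for i in range(20):
    C[i, i] = 1.0

A = np.block([[As, C], [C.T, At]])                   # joint affinity over ns + nt instances
emb = SpectralEmbedding(n_components=5, affinity='precomputed').fit_transform(A)
Zs, Zt = emb[:ns], emb[ns:]                          # unified 5-d representations of both domains
print(Zs.shape, Zt.shape)
```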
--- paper_title: Designing a Testing Framework for Transfer Learning Algorithms (Application Paper) paper_content: Most works covering the topic of transfer learning propose an algorithm to solve a given domain adaptation problem, then test the algorithm using real-world datasets. A test with a real-world dataset represents a single transfer learning test condition, which partially measures an algorithm's performance. Previous research has placed little emphasis on developing a comprehensive and uniform test for transfer learning algorithms. With this in mind, a test framework is proposed, comprising of distortion profiles which define a comprehensive test suite. The unique contribution of this paper is the definition of a test framework that measures a more complete profile of a transfer learning algorithm's capability, facilitating the identification of relative poor and good performance areas. As a proof of concept, the test framework is used to test a homogeneous transfer learning algorithm. The test framework will be the basis for a number of future applications. ---
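The co-transfer learning entry in the reference list above ranks candidate labels by propagating scores from labeled instances over a joint transition probability graph via random walk with restart. The minimal sketch below shows only that scoring step on a toy column-stochastic matrix; the matrix entries, the restart probability and the seed vector are illustrative assumptions, not values taken from the cited work.

```python
import numpy as np

# Toy joint transition matrix over 5 instances (columns sum to 1). In a
# CT-Learn-style setup this would mix intra-space affinities with cross-space
# co-occurrence links; the numbers here are made up for illustration.
P = np.array([
    [0.0, 0.5, 0.2, 0.0, 0.3],
    [0.4, 0.0, 0.3, 0.2, 0.1],
    [0.3, 0.2, 0.0, 0.5, 0.1],
    [0.1, 0.2, 0.3, 0.0, 0.5],
    [0.2, 0.1, 0.2, 0.3, 0.0],
])
P = P / P.sum(axis=0, keepdims=True)        # make each column stochastic

restart = 0.15                               # restart probability (assumed value)
e = np.array([1.0, 0.0, 0.0, 1.0, 0.0])      # instances labeled with the class of interest
e = e / e.sum()

# Iterate r <- (1 - c) * P @ r + c * e until the scores stop changing.
r = np.full(len(e), 1.0 / len(e))
for _ in range(1000):
    r_next = (1 - restart) * P @ r + restart * e
    if np.abs(r_next - r).sum() < 1e-10:
        r = r_next
        break
    r = r_next

print("ranking scores:", np.round(r, 4))     # higher = more strongly associated with the class
```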
Title: A survey on heterogeneous transfer learning Section 1: Introduction Description 1: Write an introduction about traditional machine learning assumptions and the necessity for transfer learning in real-world scenarios. Section 2: Transfer learning Description 2: Describe what transfer learning (TL) is, its goals, and the distinction between homogeneous and heterogeneous transfer learning. Section 3: Homogeneous transfer learning Description 3: Explain homogeneous transfer learning, its characteristics, and various approaches to bridge the data distribution gap between domains. Section 4: Heterogeneous transfer learning Description 4: Discuss heterogeneous transfer learning, its complexities, and how it addresses non-equivalent and non-overlapping feature spaces. Section 5: Negative transfer Description 5: Describe the concept of negative transfer, its implications on TL, and why it occurs when the source domain is not sufficiently related to the target domain. Section 6: Big data application Description 6: Explain how transfer learning can be applied to big data environments, leveraging available datasets to enhance target tasks while avoiding costly data collection efforts. Section 7: Paper overview/contributions Description 7: Provide an overview of the paper’s objectives, including the survey of 38 heterogeneous transfer learning methods, their discussion, analysis, and future research directions. Section 8: Methods which require limited target labels Description 8: Survey techniques that require labeled source data and limited labeled target data, enhancing performance using supplementary labeled data from a related source domain. Section 9: Methods which require limited target labels and accept unlabeled target instances Description 9: Survey semi-supervised techniques that incorporate unlabeled target instances to enhance model performance using both labeled source data and limited labeled target data. Section 10: Methods which require no target labels Description 10: Survey techniques that require labeled source data but do not rely on labeled target instances, utilizing unlabeled target data for training. Section 11: Methods which require limited target labels and no source labels Description 11: Discuss techniques that use limited labeled target data and only unlabeled source data for enhancing target classifier performance. Section 12: Methods which require no target or source labels Description 12: Present unsupervised HTL methods which operate without any labeled data, leveraging auxiliary domains for tasks like clustering. Section 13: Methods for HTL preprocessing Description 13: Discuss preprocessing methods used before employing HTL algorithms to select optimal parameters and improve overall performance. Section 14: Discussion Description 14: Compare and analyze characteristics and empirical studies of the surveyed HTL methods, identifying patterns, differences, and shortcomings. Section 15: Comparative analysis Description 15: Conduct a comparative analysis of all surveyed HTL methods, summarizing commonalities, application specifics, and research gaps. Section 16: Performance analysis Description 16: Analyze the performance of HTL methods based on empirical studies, comparing them with common baseline algorithms. Section 17: Conclusion Description 17: Write a conclusion summarizing the importance of HTL, the challenges it addresses, and the insights gained from surveyed methods. 
Section 18: Future work Description 18: Discuss potential directions for future research in HTL, focusing on areas like scalability, negative transfer prevention, and addressing label space differences.
A Survey of Continuous-Time Computation Theory
4
--- paper_title: Some mathematical limitations of the general-purpose analog computer paper_content: We prove that the Dirichlet problem on the disc cannot be solved by the general-purpose analog computer, by constructing, on the boundary, a function u_0 that does satisfy an algebraic differential equation, but whose Poisson integral u satisfies no algebraic differential equation on some line segment inside the disc. --- paper_title: DNA solution of hard computational problems paper_content: DNA experiments are proposed to solve the famous "SAT" problem of computer science. This is a special case of a more general method that can solve NP-complete problems. The advantage of these results is the huge parallelism inherent in DNA-based computing. It has the potential to yield vast speedups over conventional electronic-based computers for such search problems. --- paper_title: Algorithms for quantum computation: discrete logarithms and factoring paper_content: A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor. It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. --- paper_title: On the computational power of neural nets paper_content: This paper deals with finite networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a “sigmoidal” scalar nonlinearity to a linear combination of the previous states of all units. We prove that one may simulate all Turing Machines by rational nets. In particular, one can do this in linear time, and there is a net made up of about 1,000 processors which computes a universal partial-recursive function. Products (high order nets) are not required, contrary to what had been stated in the literature. Furthermore, we assert a similar theorem about non-deterministic Turing Machines. Consequences for undecidability and complexity issues about nets are discussed too. --- paper_title: Universal Computation and Other Capabilities of Hybrid and Continuous Dynamical Systems paper_content: We explore the simulation and computational capabilities of hybrid and continuous dynamical systems. The continuous dynamical systems considered are ordinary differential equations (ODEs). For hybrid systems we concentrate on models that combine ODEs and discrete dynamics (e.g., finite automata). We review and compare four such models from the literature. Notions of simulation of a discrete dynamical system by a continuous one are developed. We show that hybrid systems whose equations can describe a precise binary timing pulse (exact clock) can simulate arbitrary reversible discrete dynamical systems defined on closed subsets of R^n. The simulations require continuous ODEs in R^{2n} with the exact clock as input. All four hybrid systems models studied here can implement exact clocks. We also prove that any discrete dynamical system in R^n can be simulated by continuous ODEs in R^{2n+1}. We use this to show that smooth ODEs in R^3 can simulate arbitrary Turing machines, and hence possess the power of universal computation. We use the famous asynchronous arbiter problem to distinguish between hybrid and continuous dynamical systems. We prove that one cannot build an arbiter with devices described by a system of Lipschitz ODEs. On the other hand, all four hybrid systems models considered can implement arbiters even if their ODEs are Lipschitz.
--- paper_title: On some Relations between Dynamical Systems and Transition Systems paper_content: In this paper we define a precise notion of abstraction relation between continuous dynamical systems and discrete state-transition systems. Our main result states that every Turing Machine can be realized by a dynamical system with piecewise-constant derivatives in a 3-dimensional space and thus the reachability problem for such systems is undecidable for 3 dimensions. A decision procedure for 2-dimensional systems has been recently reported by Maler and Pnueli. On the other hand we show that some non-deterministic finite automata cannot be realized by any continuous dynamical system with less than 3 dimensions. --- paper_title: The computability and complexity of optical beam tracing paper_content: The ray-tracing problem is considered for optical systems consisting of a set of refractive or reflective surfaces. It is assumed that the position and the tangent of the incident angle of the initial light ray are rational. The computability and complexity of the ray-tracing problems are investigated for various optical models. The results show that, depending on the optical model, ray tracing is sometimes undecidable, sometimes PSPACE-hard, and sometimes in PSPACE. --- paper_title: On a theory of computation over the real numbers; NP completeness, recursive functions and universal machines paper_content: A model for computation over an arbitrary (ordered) ring R is presented. In this general setting, universal machines, partial recursive functions, and NP-complete problems are obtained. While the theory reflects that of classical computation over Z (e.g. the computable functions are the recursive functions), it also reflects the special mathematical character of the underlying ring R (e.g. complements of Julia sets provide natural examples of recursively enumerable undecidable sets over the reals) and provides a natural setting for studying foundational issues concerning algorithms in numerical analysis. --- paper_title: Analog computation with continuous ODEs paper_content: Demonstrates simple, low-dimensional systems of ODEs that can simulate arbitrary finite automata, push-down automata, and Turing machines. We conclude that there are systems of ODEs in R^3 with continuous vector fields possessing the power of universal computation. Further, such computations can be made robust to small errors in coding of the input or measurement of the output. As such, they represent physically realizable computation. We make precise what we mean by "simulation" of digital machines by continuous dynamical systems. We also discuss elements that a more comprehensive ODE-based model of analog computation should contain. The "axioms" of such a model are based on considerations from physics.
--- paper_title: The Computational Power of Continuous Time Neural Networks paper_content: We investigate the computational power of continuous-time neural networks with Hopfield-type units. We prove that polynomial-size networks with saturated-linear response functions are at least as powerful as polynomially space-bounded Turing machines. --- paper_title: Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems paper_content: The author establishes a number of properties associated with the dynamical system Ḣ = [H, [H, N]], where H and N are symmetric n-by-n matrices and [A, B] = AB - BA. The most important of these come from the fact that this equation is equivalent to a certain gradient flow on the space of orthogonal matrices. Particular emphasis is placed on the role of this equation as an analog computer. For example, it is shown how to map the data associated with a linear programming problem into H(0) and N in such a way as to have Ḣ = [H, [H, N]] evolve to a solution of the linear programming problem. This result can be applied to find systems that solve a variety of generic combinatorial optimization problems, and it also provides an algorithm for diagonalizing symmetric matrices. --- paper_title: Computability with low-dimensional dynamical systems paper_content: It has been known for a short time that a class of recurrent neural networks has universal computational abilities. These networks can be viewed as iterated piecewise-linear maps in a high-dimensional space. In this paper, we show that similar systems in dimension two are also capable of universal computations. On the contrary, it is necessary to resort to more complex systems (e.g., iterated piecewise-monotone maps) in order to retain this capability in dimension one. --- paper_title: Complexity theory and genetics paper_content: We introduce a population genetics model in which the operators are effectively computable, that is, computable in polynomial time on probabilistic Turing machines. We shall show that in this model a population can easily encode a large amount of information from the environment into its genetic code. Then it can process the information as a parallel computer. More precisely, we show that it can simulate polynomial space computations in polynomially many steps, even if the recombination rules are very simple. --- paper_title: Optical Computing: A Survey for Computer Scientists paper_content: Optical Computers provides the first in-depth review of the possibilities and limitations of optical data processing. --- paper_title: Logical reversibility of computation paper_content: The usual general-purpose computing automaton (e.g., a Turing machine) is logically irreversible: its transition function lacks a single-valued inverse. Here it is shown that such machines may be made logically reversible at every step, while retaining their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step. In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure. The second stage consists of printing out the desired output. The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition. The final machine configuration thus contains the desired output and a reconstructed copy of the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine. The biosynthesis of messenger RNA is discussed as a physical example of reversible computation. ---
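The entry above on dynamical systems that sort lists is built around the double-bracket flow Ḣ = [H, [H, N]]. As a rough numerical illustration of the sorting behaviour it describes, the sketch below integrates the flow with forward Euler and reads the sorted values off the diagonal of H; the input list, the random orthogonal conjugation and the step size are ad-hoc assumptions.

```python
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(0)
values = np.array([3.0, 1.0, 4.0, 1.5, 2.0])      # list to sort (illustrative)
n = len(values)

# Hide the values as eigenvalues of a dense symmetric matrix H(0).
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = q @ np.diag(values) @ q.T
N = np.diag(np.arange(1, n + 1, dtype=float))

dt = 2e-3                                          # ad-hoc Euler step; a real solver would adapt it
for _ in range(50_000):
    H = H + dt * bracket(H, bracket(H, N))

print(np.round(np.diag(H), 3))                     # diagonal approaches the values in increasing order
```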
Title: A Survey of Continuous-Time Computation Theory Section 1: Introduction Description 1: Introduce the renewed interest in analog computation, the advancements prompting this interest, and set the stage for the theoretical discussions to follow. Section 2: Unconstrained Models Description 2: Discuss mathematically-based continuous-time computation models that are implementation-wise unconstrained. Section 3: Constrained Models Description 3: Detail the models of analog computation that correspond to idealized versions of existing devices, such as mechanical and electronic differential analyzers and neural networks. Section 4: Computational Complexity Description 4: Examine the limited existing work on computational complexity in continuous-time systems and suggest possible directions for defining and understanding complexity in these systems. Section 5: Conclusion and Open Problems Description 5: Summarize the current state of research in continuous-time computation theory, highlight open problems, and suggest directions for future work.
Particle Swarm Optimization in Wireless-Sensor Networks: A Brief Survey
7
--- paper_title: Particle swarm optimization paper_content: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described. --- paper_title: The Design Space of Wireless Sensor Networks ∗ paper_content: In the recent past, wireless sensor networks have found their way into a wide variety of applications and systems with vastly varying requirements and characteristics. As a consequence, it is becoming increasingly difficult to discuss typical requirements regarding hardware issues and software support. This is particularly problematic in a multidisciplinary research area such as wireless sensor networks, where close collaboration between users, application domain experts, hardware designers, and software developers is needed to implement efficient systems. In this article we discuss the consequences of this fact with regard to the design space of wireless sensor networks by considering its various dimensions. We justify our view by demonstrating that specific existing applications occupy different points in the design space. --- paper_title: Wireless sensor network survey paper_content: A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges. --- paper_title: A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems paper_content: Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally outperforms the other algorithms. 
However, on two noisy functions, both DE and PSO were outperformed by the EA. --- paper_title: Foundations of Global Genetic Optimization paper_content: This book is devoted to the application of genetic algorithms in continuous global optimization. Some of their properties and behavior are highlighted and formally justified. Various optimization techniques and their taxonomy are the background for detailed discussion. The nature of continuous genetic search is explained by studying the dynamics of probabilistic measure, which is utilized to create subsequent populations. This approach shows that genetic algorithms can be used to extract some areas of the search domain more effectively than to find isolated local minima. The biological metaphor of such behavior is the whole population surviving by rapid exploration of new regions of feeding rather than caring for a single individual. One group of strategies that can make use of this property are two-phase global optimization methods. In the first phase the central parts of the basins of attraction are distinguished by genetic population analysis. Afterwards, the minimizers are found by convex optimization methods executed in parallel. --- paper_title: Biomimicry of bacterial foraging for distributed optimization and control paper_content: We explain the biology and physics underlying the chemotactic (foraging) behavior of E. coli bacteria. We explain a variety of bacterial swarming and social foraging behaviors and discuss the control system on the E. coli that dictates how foraging should proceed. Next, a computer program that emulates the distributed optimization process represented by the activity of social bacterial foraging is presented. To illustrate its operation, we apply it to a simple multiple-extremum function minimization problem and briefly discuss its relationship to some existing optimization algorithms. The article closes with a brief discussion on the potential uses of biomimicry of social foraging to develop adaptive controllers and cooperative control strategies for autonomous vehicles. For this, we provide some basic ideas and invite the reader to explore the concepts further. --- paper_title: Bio-inspired node localization in wireless sensor networks paper_content: Many applications of wireless sensor networks (WSNs) require location information of the randomly deployed nodes. A common solution to the localization problem is to deploy a few special beacon nodes having location awareness, which help the ordinary nodes to localize. In this approach, non-beacon nodes estimate their locations using noisy distance measurements from three or more non-collinear beacons they can receive signals from. In this paper, the ranging-based localization task is formulated as a multidimensional optimization problem, and addressed using bio-inspired algorithms, exploiting their quick convergence to quality solutions. An investigation on distributed iterative localization is presented in this paper. Here, the nodes that get localized in an iteration act as references for remaining nodes to localize. The problem has been addressed using particle swarm optimization (PSO) and bacterial foraging algorithm (BFA). A comparison of the performances of PSO and BFA in terms of the number of nodes localized, localization accuracy and computation time is presented. 
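The ranging-based localization formulation described above, minimizing the squared mismatch between measured and implied beacon distances, maps directly onto a global-best PSO. The following sketch is a self-contained toy version; the beacon layout, noise level and PSO coefficients (w, c1, c2) are assumed values rather than those used in the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: three non-collinear beacons and one node to localize.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([6.0, 4.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0.0, 0.1, 3)  # noisy ranging

def cost(p):
    # Mean squared mismatch between measured and implied beacon distances.
    return np.mean((np.linalg.norm(beacons - p, axis=1) - ranges) ** 2)

# Plain global-best PSO with commonly used (assumed) coefficients.
n_particles, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 10.0, (n_particles, 2))
vel = np.zeros((n_particles, 2))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated position:", np.round(gbest, 2), "true position:", true_pos)
```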
--- paper_title: Performance Comparison of Optimization Algorithms for Clustering in Wireless Sensor Networks paper_content: Clustering in wireless sensor networks (WSNs) is one of the techniques that can expand the lifetime of the whole network through data aggregation at the cluster head. This paper presents a performance comparison between particle swarm optimization (PSO) and genetic algorithms (GA) with a new cost function that has the objective of simultaneously minimizing the intra-cluster distance and optimizing the energy consumption of the network. Furthermore, a comparison is made with the well known cluster-based protocols developed for WSNs, LEACH (low-energy adaptive clustering hierarchy) and LEACH-C, the latter being an improved version of LEACH, as well as the traditional K-means clustering algorithm. Simulation results demonstrate that the proposed protocol using the PSO algorithm has higher efficiency and can achieve better network lifetime and data delivery at the base station over its comparatives. --- paper_title: A Survey on Wireless Sensor Networks Deployment paper_content: In recent years extensive research has opened challenging issues for wireless sensor networks (WSNs) deployment. Among numerous challenges faced while designing architectures and protocols, maintaining connectivity and maximizing the network lifetime stand out as critical considerations. WSNs are formed by a large number of resource-constrained and inexpensive nodes, which has an impact on protocol design and network scalability. Sensor networks have enabled a range of applications where the objective is to observe an environment and collect information about the observed phenomena or events. This has led to the emergence of a new generation of sensor networks called sensor actuator networks. Approaches developed to query sensor-actuator networks (SANETs) are either application-specific or generic. Application-specific SANETs provide limited reusability, are not cost effective and may require extensive programming efforts to make the network able to serve new applications. A WSN should be able to operate for a long time with little or no external management. The sensor nodes must be able to configure themselves in the presence of adverse situations. In this work, dealing with challenges for WSNs deployment, we start with mobility-based communication in WSNs. Then, we introduce service-oriented SANETs (SOSANETs) as an approach to build customizable SANETs. In the second part, we describe localization systems and analyze self configurability, situation awareness and intrusion detection systems. In the third part, we present wireless distributed detection as well as a model for WSN simulation. Finally, conclusions and proposals for future research are given. --- paper_title: Improving sensing coverage of wireless sensor networks by employing mobile robots paper_content: To provide proper coverage of their random deployment regions, wireless sensor networks (WSN) should employ abundant nodes. In this paper we correct such situations by employing some mobile robots as mobile nodes in the WSN which can actively move to desired locations for repairing the broken networks. According to the pre-research work we know that the number of nodes employed by a WSN is closely relevant to the quality of service (QoS) in sensing coverage when the nodes are randomly deployed in the target region.
A modified particle swarm optimization (PSO) named particle swarm genetic optimization (PSGO), which imports selection and mutation operators in PSO to overcome the premature fault of classical PSO, is proposed to redeploy the mobile robots according to the node density for repairing the sensing coverage hole after their initial random deployment. It is suggested by the simulated experiment results that the WSN employing the mobile robots can improve the QoS in sensing coverage than the stationary WSN by redeploying mobile robots according to the node density. --- paper_title: An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment paper_content: The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the computation time required is the big bottleneck. This paper proposes a dynamic deployment algorithm which is named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), since this algorithm combines the co-evolutionary particle swarm optimization (CPSO) with the VF algorithm, whereby the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively and the velocity of each particle is updated according to not only the historical local and global optimal solutions, but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms. --- paper_title: Allocating Multiple Base Stations under General Power Consumption by the Particle Swarm Optimization paper_content: In this paper, a two-tiered wireless sensor networks consisting of small sensor nodes, application nodes and base-stations is considered. An algorithm based on particle swarm optimization (PSO) is proposed for multiple base stations under general power-consumption constraints. The proposed approach can search for nearly optimal BS locations in heterogeneous sensor networks, where application nodes may own different data transmission rates, initial energies and parameter values. Experimental results also show the good performance of the proposed PSO approach and the effects of the parameters on the results --- paper_title: A Theory of Network Localization paper_content: In this paper, we provide a theoretical foundation for the problem of network localization in which some nodes know their locations and other nodes determine their locations by measuring the distances to their neighbors. We construct grounded graphs to model network localization and apply graph rigidity theory to test the conditions for unique localizability and to construct uniquely localizable networks. We further study the computational complexity of network localization and investigate a subclass of grounded graphs where localization can be computed efficiently. 
We conclude with a discussion of localization in sensor networks where the sensors are placed randomly --- paper_title: Wireless Sensor Network Localization Techniques paper_content: Wireless sensor network localization is an important area that attracted significant research interest. This interest is expected to grow further with the proliferation of wireless sensor network applications. This paper provides an overview of the measurement techniques in sensor network localization and the one-hop localization algorithms based on these measurements. A detailed investigation on multi-hop connectivity-based and distance-based localization algorithms are presented. A list of open research problems in the area of distance-based sensor network localization is provided with discussion on possible approaches to them. --- paper_title: Optimizing the Localization of a Wireless Sensor Network in Real Time Based on a Low-Cost Microcontroller paper_content: In this paper, a low-cost microcontroller-based system that uses the pedometer measurement and communication ranging between neighboring nodes of a wireless sensor network for localization is presented. Unlike most of the existing methods that require good network connectivity, the proposed system works well in a sparse network. As the localization requires solving of nonlinear equations in real time, two optimization approaches, namely, the Gauss-Newton algorithm and the particle swarm optimization have been studied. The localization and optimization algorithms have been implemented with a microcontroller. The performance has been evaluated with experimental results. --- paper_title: Localization in wireless sensor networks using particle swarm optimization paper_content: This paper proposes a novel and computationally efficient global optimization method based on swarm intelligence for locating nodes in a WSN environment. The mean squared range error of all neighbouring anchor nodes is taken as the objective function for this non linear optimization problem. The Particle Swarm Optimization (PSO) is a high performance stochastic global optimization tool that ensures the minimization of the objective function, without being trapped into local optima. The easy implementation and low memory requirement features of PSO make it suitable for highly resource constrained WSN environments. Computational experiments on data drawn from simulated WSNs show better convergence characteristics than the existing Simulated Annealing based WSN localization. --- paper_title: A particle swarm optimization approach for the localization of a wireless sensor network paper_content: For many wireless sensor network applications, the localization of sensor nodes is an essential requirement. In this paper, a low cost localization system based on the measurements from a pedometer and communication ranging between neighboring nodes is presented. Unlike most of the existing methods that require good network connectivity, the proposed system works well in a sparse network. The localization information is obtained through a probability based algorithm that requires the solving of a nonlinear optimization problem. To obtain the optimum location of the sensor nodes, the particle swarm optimization (PSO) scheme that can be realized with a microcontroller for real time application is investigated in this paper. Experimental results show that the proposed approach yields good performance. 
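Several of the localization entries above solve the same nonlinear range equations with a Gauss-Newton iteration instead of, or alongside, PSO. A compact sketch of that iteration on range residuals is given below; the anchor layout, noise level and starting point are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.05, len(anchors))

x = np.array([5.0, 5.0])               # initial guess (assumed)
for _ in range(20):
    diff = x - anchors
    dist = np.linalg.norm(diff, axis=1)
    r = dist - d                       # residuals r_i = ||x - a_i|| - d_i
    J = diff / dist[:, None]           # Jacobian of the residuals with respect to x
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + step
    if np.linalg.norm(step) < 1e-9:
        break

print("estimate:", np.round(x, 3), "true position:", true_pos)
```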
--- paper_title: Optimization of Sensor Node Locations in a Wireless Sensor Network paper_content: In this paper, a localization system for unknown emitter nodes in a wireless sensor network (WSN) system is presented. For this system, it is assumed that there are four anchor nodes with known locations and one or more unknown nodes which transmit RF signals that can be received by the four anchor nodes. The only available information to the system is the received signal strength indicator, which is in general not very accurate. To obtain a better estimated location of the sensor nodes, the particle swarm optimization (PSO) scheme that can be realized in real time is investigated in this paper. Both simulation and experimental results of the proposed approach are presented. --- paper_title: Energy-efficient communication protocol for wireless microsensor networks paper_content: Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
--- paper_title: Particle Swarm Optimisers for Cluster formation in Wireless Sensor Networks paper_content: We describe the results of a performance evaluation of four extensions of Particle Swarm Optimisation (PSO) to reduce energy consumption in wireless sensor networks. Communication distances are an important factor to be reduced in sensor networks. By using clustering in a sensor network we can reduce the total communication distance, thus increasing the life of a network. We adopt a distance based clustering criterion for sensor network optimisation. From a PSO perspective, we study the suitability of four different PSO algorithms for our application and propose modifications. An important modification proposed is to use a boundary checking routine for re-initialisation of a particle which moves outside the set boundary. --- paper_title: An application-specific protocol architecture for wireless microsensor networks paper_content: Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches. --- paper_title: Cluster Heads Election Analysis for Multi-hop Wireless Sensor Networks Based on Weighted Graph and Particle Swarm Optimization paper_content: Continued advances of wireless communication technologies have enabled the deployment of large scale wireless sensor networks. The sensors' limited power makes energy consumption a critical issue. In single-hop wireless sensor networks, a cluster heads election method based on residual energy can obtain better energy efficiency than the method in which cluster heads are elected in turns or by probabilities. Is it the same in multi-hop wireless sensor networks? In this paper we proposed and evaluated a routing optimization scheme based on graph theory and the particle swarm optimization algorithm for multi-hop wireless sensor networks. Our algorithm synthesized the intuitive advantages of graph theory and the optimal search capability of PSO. The result in multi-hop networks is completely different from that in single-hop wireless sensor networks. The result shows that there is very little difference between these methods. The reason is discussed in detail.
--- paper_title: Energy-Aware Clustering for Wireless Sensor Networks using Particle Swarm Optimization paper_content: Wireless sensor networks (WSNs) are mainly characterized by their limited and non-replenishable energy supply. Hence, the need for energy efficient infrastructure is becoming increasingly more important since it impacts upon the network operational lifetime. Sensor node clustering is one of the techniques that can expand the lifespan of the whole network through data aggregation at the cluster head. In this paper, we present an energy-aware clustering approach for wireless sensor networks using the particle swarm optimization (PSO) algorithm, which is implemented at the base station. We define a new cost function, with the objective of simultaneously minimizing the intra-cluster distance and optimizing the energy consumption of the network. The performance of our protocol is compared with the well known cluster-based protocol developed for WSNs, LEACH (low-energy adaptive clustering hierarchy), and LEACH-C, the latter being an improved version of LEACH. Simulation results demonstrate that our proposed protocol can achieve better network lifetime and data delivery at the base station over its comparatives. --- paper_title: Data-aggregation techniques in sensor networks: a survey paper_content: Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data-aggregation problems in energy-constrained sensor networks. The main goal of data-aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this article we present a survey of data-aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency, and data accuracy. We conclude with possible future research directions. --- paper_title: Optimal Power Scheduling for Correlated Data Fusion in Wireless Sensor Networks via Constrained PSO paper_content: Optimal power scheduling for distributed detection in a Gaussian sensor network is addressed for both independent and correlated observations. We assume amplify-and-forward local processing at each node. The wireless link between sensors and the fusion center is assumed to undergo fading and coefficients are assumed to be available at the transmitting sensors. The objective is to minimize the total network power to achieve a desired fusion error probability at the fusion center. For i.i.d. observations, the optimal power allocation is derived analytically in closed form. When observations are correlated, first, an easy to optimize upper bound is derived for sufficiently small correlations and the power allocation scheme is derived accordingly.
Next, an evolutionary computation technique based on particle swarm optimization is developed to find the optimal power allocation for arbitrary correlations. The optimal power scheduling scheme suggests that the sensors with poor observation quality and bad channels should be inactive to save the total power expenditure of the system. It is shown that the probability of fusion error performance based on the optimal power allocation scheme outperforms the uniform power allocation scheme especially when either the number of sensors is large or the local observation quality is good. --- paper_title: Swarm intelligence based optimization and control of decentralized serial sensor networks paper_content: In this paper threshold design and hierarchy management of serial sensor networks employed for distributed detection is accomplished using a hybrid of ant colony optimization and particle swarm optimization. The particle swarm optimization determines the optimal thresholds, decision rules for the sensors. The ant colony optimization algorithm determines the hierarchy of sensor decision communication, affecting the accuracy. The problem of hierarchy management is known as the “who reports to whom?” problem in sensor networks. The new algorithm is tested on a suite of 10 heterogeneous sensors. Probabilistic measures including probability of error and Bayesian risk are adopted to evaluate the performance of the sensor network. The new sensor management methodology is compared to (a) static hierarchy network, (b) a network with the best sensor at the top of the hierarchy and (c) incrementally best hierarchy. Results show 40% performance improvements in terms of Bayesian risk value. --- paper_title: Dynamic sensor management using multi-objective particle swarm optimizer paper_content: This paper presents a Swarm Intelligence based approach for sensor management of a multi sensor network. Alternate sensor configurations and fusion strategies are evaluated by swarm agents, and an optimum configuration and fusion strategy evolves. An evolutionary algorithm, particle swarm optimization, is modified to optimize two objectives: accuracy and time. The output of the algorithm is the choice of sensors, individual sensor’s thresholds and the optimal decision fusion rule. The results achieved show the capability of the algorithm in selecting optimal configuration for a given requirement consisting of multiple objectives. ---
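The clustering entries above repeatedly describe a PSO fitness that trades off the worst average intra-cluster distance against the residual energy of the chosen cluster heads. The sketch below shows one plausible form of such a cost and scores random candidate head sets with it (a full implementation would let PSO evolve the candidates); the node positions, energies and weighting alpha are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.uniform(0.0, 100.0, (50, 2))    # sensor positions (illustrative)
energy = rng.uniform(0.5, 2.0, 50)          # residual energies (illustrative)

def cluster_cost(head_idx, alpha=0.5):
    # Weighted mix of the worst average head-to-member distance and the ratio
    # of total node energy to total head energy; both terms shrink for good
    # head choices. The exact form and weighting are assumptions, not a
    # formula lifted from any one of the cited papers.
    heads = nodes[head_idx]
    assign = np.argmin(np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2), axis=1)
    f1 = max(np.mean(np.linalg.norm(nodes[assign == k] - heads[k], axis=1))
             for k in range(len(head_idx)))
    f2 = energy.sum() / energy[head_idx].sum()
    return alpha * f1 + (1.0 - alpha) * f2

# Score a few random candidate head sets; a PSO would evolve these instead.
candidates = [rng.choice(50, size=5, replace=False) for _ in range(200)]
best = min(candidates, key=cluster_cost)
print("best cluster heads:", sorted(map(int, best)), "cost:", round(cluster_cost(best), 3))
```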
Title: Particle Swarm Optimization in Wireless-Sensor Networks: A Brief Survey Section 1: PSO Algorithm Description 1: This section briefly outlines the Particle Swarm Optimization (PSO) algorithm, including its principles, implementation, and the parameters typically considered in PSO research. Section 2: Other Optimization Algorithms Description 2: This section provides an overview of traditional optimization methods and other heuristic algorithms, comparing their computational complexities and advantages against PSO. Section 3: Optimal WSN Deployment Description 3: This section discusses the applications of PSO in optimal Wireless Sensor Network (WSN) deployment, covering strategies for stationary and mobile node positioning and base station positioning. Section 4: Node Localization in WSNs Description 4: This section explores how PSO is used for node localization in WSNs, detailing various approaches and their effectiveness in different conditions, including stationary and dynamic scenarios. Section 5: Energy-Aware Clustering (EAC) in WSNs Description 5: This section delves into the role of PSO in energy-aware clustering to enhance the network's lifespan by optimizing cluster-head selection and clustering strategies. Section 6: Data Aggregation in WSNs Description 6: This section discusses how PSO contributes to data aggregation, optimizing transmission power allocation, defining local thresholds, and configuring sensors for improved data fusion and reduced communication overhead. Section 7: Conclusion Description 7: This section summarizes the advantages and limitations of PSO in solving various WSN issues, projects future research directions, and provides concluding remarks on the applicability of PSO in WSN environments.
COMPUTATIONAL ASPECTS OF MONOTONE DUALIZATION: A BRIEF SURVEY
9
--- paper_title: An O(nm)-time algorithm for computing the dual of a regular Boolean function paper_content: We consider the problem of dualizing a positive Boolean function ƒ: B^n → B given in irredundant disjunctive normal form (DNF), that is, obtaining the irredundant DNF form of its dual ƒ^d(x) = ¬ƒ(¬x). The function ƒ is said to be regular if there is a linear order ≳ on {1,…,n} such that i ≳ j, x_i = 0, and x_j = 1 imply ƒ(x) ⩽ ƒ(x + u_i − u_j), where the u_k denote unit vectors. A previous algorithm of the authors, the Hop-Skip-and-Jump algorithm, dualizes a regular function in polynomial time. We use this algorithm to give an explicit expression for the irredundant DNF of ƒ^d in terms of the one for ƒ. We show that if the irredundant DNF for ƒ has m ⩾ 2 terms, then the one for ƒ^d has at most (n − 2)m + 1 terms, and can be computed in O(nm) time. This can be applied to solve regular set-covering problems in O(nm) time. --- paper_title: Average Case Self-Duality of Monotone Boolean Functions paper_content: The problem of determining whether a monotone boolean function is self-dual has numerous applications in Logic and AI. The applications include theory revision, model-based diagnosis, abductive explanations and learning monotone boolean functions. It is not known whether self-duality of monotone boolean functions can be tested in polynomial time, though a quasi-polynomial time algorithm exists. We describe another quasi-polynomial time algorithm for solving the self-duality problem of monotone boolean functions and analyze its average-case behaviour on a set of randomly generated instances. --- paper_title: Abduction and the Dualization Problem paper_content: Computing abductive explanations is an important problem, which has been studied extensively in Artificial Intelligence (AI) and related disciplines. While computing some abductive explanation for a literal χ with respect to a set of abducibles A from a Horn propositional theory Σ is intractable under the traditional representation of Σ by a set of Horn clauses, the problem is polynomial under model-based theory representation, where Σ is represented by its characteristic models. Furthermore, computing all the (possibly exponentially) many explanations is polynomial-time equivalent to the problem of dualizing a positive CNF, which is a well-known problem whose precise complexity in terms of the theory of NP-completeness is not known yet. In this paper, we first review the monotone dualization problem and its connection to computing all abductive explanations for a query literal and some related problems in knowledge discovery. We then investigate possible generalizations of this connection to abductive queries beyond literals. Among other results, we find that the equivalence for generating all explanations for a clause query (resp., term query) χ to the monotone dualization problem holds if χ contains at most k positive (resp., negative) literals for constant k, while the problem is not solvable in polynomial total-time, i.e., in time polynomial in the combined size of the input and the output, unless P=NP for general clause resp. term queries. Our results shed new light on the computational nature of abduction and Horn theories in particular, and might be interesting also for related problems, which remain to be explored.
--- paper_title: Generating all maximal independent sets of bounded-degree hypergraphs paper_content: We show that any monotone function with a read-k CNF representation can be learned in terms of its DNF representation with membership queries alone in time polynomial in the DNF size and n (the number of variables) assuming k is some fixed constant. The problem is motivated by the well-studied open problem of enumerating all maximal independent sets of a given hypergraph. Our algorithm gives a solution for the bounded degree case and works even if the hypergraph is not input, but rather only queries are available as to which sets are independent. --- paper_title: Generating All Maximal Independent Sets: NP-Hardness and Polynomial-Time Algorithms paper_content: Suppose that an independence system $(E,\mathcal {I})$ is characterized by a subroutine which indicates in unit time whether or not a given subset of E is independent. It is shown that there is no algorithm for generating all the K maximal independent sets of such an independence system in time polynomial in $|E|$ and K, unless $\mathcal {P} = \mathcal {NP}$. However, it is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem. The algorithmic techniques bear an interesting relationship with those of Read for the enumeration of graphs and other combinatorial configurations. --- paper_title: Polynomial-time algorithms for regular set-covering and threshold synthesis paper_content: Abstract A set-covering problem is called regular if a cover always remains a cover when any column in it is replaced by an earlier column. From the input of the problem - the coefficient matrix of the set-covering inequalities - it is possible to check in polynomial time whether the problem is regular or can be made regular by permuting the columns. If it is, then all the minimal covers are generated in polynomial time, and one of them is an optimal solution. The algorithm also yields an explicit bound for the number of minimal covers. These results can be used to check in polynomial time whether a given set-covering problem is equivalent to some knapsack problem without additional variables, or equivalently to recognize positive threshold functions in polynomial time. However, the problem of recognizing when an arbitrary Boolean function is threshold is NP-complete. It is also shown that the list of maximal non-covers is essentially the most compact input possible, even if it is known in advance that the problem is regular. --- paper_title: Dualization of regular Boolean functions paper_content: Abstract A monotonic Boolean function is regular if its variables are naturally ordered by decreasing ‘strength’, so that shifting to the right the non-zero entries of any binary false point always yields another false point. Peled and Simeone recently published a polynomial algorithm to generate the maximal false points (MFP's) of a regular function from a list of its minimal true points (MTP's). Another efficient algorithm for this problem is presented here, based on characterization of the MFP's of a regular function in terms of its MTP's. This result is also used to derive a new upper bound on the number of MFP's of a regular function. 
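The maximal independent sets that the works above generate incrementally can also be enumerated exhaustively for tiny instances, which makes the object itself concrete. The routine below is an illustrative assumption, not one of the cited algorithms: a vertex set is independent if it contains no hyperedge entirely, and maximal if adding any further vertex destroys that.

    # Brute-force enumeration of all maximal independent sets of a small hypergraph.
    from itertools import combinations

    def maximal_independent_sets(n, hyperedges):
        edges = [set(e) for e in hyperedges]
        def independent(s):
            return not any(e <= s for e in edges)
        result = []
        for k in range(n, -1, -1):                 # try the largest sets first
            for cand in combinations(range(n), k):
                s = set(cand)
                if independent(s) and not any(s <= m for m in result):
                    result.append(s)
        return result

    print(maximal_independent_sets(4, [{0, 1}, {1, 2, 3}]))
    # -> [{0, 2, 3}, {1, 2}, {1, 3}]  (the complements of the minimal transversals)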
--- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of sizencan be tested inno(logn)time. --- paper_title: A New Algorithm for Generating Prime Implicants paper_content: This paper describes an algorithm which will generate all the prime implicants of a Boolean function. The algorithm is different from those previously given in the literature, and in many cases it is more efficient. It is proved that the algorithm will find all the prime implicants. The algorithm may possibly generate some nonprime implicants. However, using frequency orderings on literals, the experiments with the algorithm show that it usually generates very few ( possibly none) nonprime implicants. Furthermore, the algorithm may be used to find the minimal sums of a Boolean function. The algorithm is implemented by a computer program in the LISP language. --- paper_title: An incremental method for generating prime implicants/implicates paper_content: Given the recent investigation of Clause Management Systems (CMSs) for ArtificialIntelligence applications, there is an urgent need for an efficient incremental method for generating prime implicants. Given a set of clauses F, a set of prime implicants II of F and a clause C"1 the problem can be formulated as finding the set of prime implicants for II U {C}. Intuitively, the property of implicants, being prime implies that any effort to generate prime implicants from a set of prime implicants will not yield any new prime implicants but themselves. In this paper, we exploit the properties of prime implicants and propose an incremental method for generating prime implicants from a set of existing prime implicants plus a new clause. The correctness proof and complexity analysis of the incremental method are presented, and the intricacy of subsumptions in the incremental method is also examined. Additionally, the role of prime implicants in the CMS is also mentioned. --- paper_title: Learning conjunctions of Horn clauses paper_content: An algorithm for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses is presented. (A Horn clause is a disjunction of literals, all but at most one of which is a negated variable). The algorithm uses equivalence queries and membership queries to produce a formula that is logically equivalent to the unknown formula to be learned. The amount of time used by the algorithm is polynomial in the number of variables and the number of clauses in the unknown formula. > --- paper_title: Hypergraph Transversal Computation and Related Problems in Logic and AI paper_content: Generating minimal transversals of a hypergraph is an important problem which has many applications in Computer Science. In the present paper, we address this problem and its decisional variant, i.e., the recognition of the transversal hypergraph for another hypergraph. We survey some results on problems which are known to be related to computing the transversal hypergraph, where we focus on problems in propositional Logic and AI. Some of the results have been established already some time ago, and were announced but their derivation was not widely disseminated. We then address recent developments on the computational complexity of computing resp. recognizing the transversal hypergraph. The precise complexity of these problems is not known to date, and is in fact open for more than 20 years now. 
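The duality-testing problem behind the quasi-polynomial bound cited above has a very direct formulation: two positive DNFs f and g are mutually dual exactly when f(x) differs from g applied to the complemented point, for every binary point x. The exhaustive check below is a sketch under that definition; it is exponential in n and is meant only as a contrast to the quasi-polynomial tests discussed in these abstracts.

    # Brute-force duality check for two positive DNFs given as families of index sets.
    from itertools import product

    def eval_dnf(terms, x):
        return any(all(x[i] for i in t) for t in terms)

    def are_dual(f_terms, g_terms, n):
        for x in product([0, 1], repeat=n):
            x_bar = tuple(1 - v for v in x)
            if eval_dnf(f_terms, x) == eval_dnf(g_terms, x_bar):
                return False        # duality requires f(x) to be the negation of g(not x)
        return True

    # f = x0 OR x1 x2 and g = x0 x1 OR x0 x2 form a dual pair.
    print(are_dual([(0,), (1, 2)], [(0, 1), (0, 2)], 3))   # -> True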
--- paper_title: Queries and Concept Learning paper_content: We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of prepositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling. --- paper_title: An efficient incremental algorithm for generating all maximal independent sets in hypergraphs of bounded dimension paper_content: We show that for hypergraphs of bounded edge size, the problem of extending a given list of maximal independent sets is NC-reducible to the computation of an arbitrary maximal independent set for an induced sub-hypergraph. The latter problem is known to be in RNC. In particular, our reduction yields an incremental RNC dualization algorithm for hypergraphs of bounded edge size, a problem previously known to be solvable in polynomial incremental time. We also give a similar parallel algorithm for the dualization problem on the product of arbitrary lattices which have a bounded number of immediate predecessors for each element. --- paper_title: Almost all monotone Boolean functions are polynomially learnable using membership queries paper_content: We consider exact learning or identification of monotone Boolean functions by only using membership queries. It is shown that almost all monotone Boolean functions are polynomially identifiable in the input number of variables as well as the output being the sum of the sizes of the CNF and DNF representations. --- paper_title: Exact Transversal Hypergraphs and Application to Boolean µ-Functions paper_content: Call an hypergraph, that is a family of subsets (edges) from a finite vertex set, an exact transversal hypergraph iff each of its minimal transversals, i.e., minimal vertex subsets that intersect each edge, meets each edge in a singleton. We show that such hypergraphs are recognizable in polynomial time and that their minimal transversals as well as their maximal independent sets can be generated in lexicographic order with polynomial delay between subsequent outputs, which is impossible in the general case unless P= NP. The results obtained are applied to monotone Boolean ?-functions, that are Boolean functions defined by a monotone Boolean expression (that is, built with ?, ? only) in which no variable occurs repeatedly. We also show that recognizing such functions from monotone Boolean expressions is co-NP-hard, thus complementing Mundici's result that this problem is in co-NP. --- paper_title: Dual-Bounded Generating Problems: All Minimal Integer Solutions for a Monotone System of Linear Inequalities paper_content: We consider the problem of enumerating all minimal integer solutions of a monotone system of linear inequalities. We first show that, for any monotone system of r linear inequalities in n variables, the number of maximal infeasible integer vectors is at most rn times the number of minimal integer solutions to the system. 
This bound is accurate up to a polylog(r) factor and leads to a polynomial-time reduction of the enumeration problem to a natural generalization of the well-known dualization problem for hypergraphs, in which dual pairs of hypergraphs are replaced by dual collections of integer vectors in a box. We provide a quasi-polynomial algorithm for the latter dualization problem. These results imply, in particular, that the problem of incrementally generating all minimal integer solutions to a monotone system of linear inequalities can be done in quasi-polynomial time. --- paper_title: An O(mn) algorithm for regular set-covering problems paper_content: A clutter L is a collection of m subsets of a ground set E(L) = {x1,…, xn} with the property that, for every pair Ai, Aj ϵ L, Ai is neither contained nor contains Aj, A transversal of L is a subset of E(L) intersecting every member of L. ::: ::: If we associate with each element xj ϵ E(L) a weight cj, the problem of finding a transversal having minimum weight is equivalent to the following set-covering problem ::: min{cTx|MLx ⩾ 1m, xj ϵ {0, 1}, j = 1,…, n} ::: where ML is the matrix whose rows are the incidence vectors of the subsets Ai ϵ L and 1m denotes the vector with m ones. ::: ::: A set-covering problem is regular if there exists an ordering of the variables σ = (x1,…, xn) such that, for every feasible solution x with xi = 1, xj = 0, j < i, the vector x + ej − ei is also a feasible solution, where ei is the ith unit vector. The matrix M of a regular set-covering problem is said to be regular. ::: ::: A regular clutter is any clutter whose incidence matrix is regular. In this paper we describe some properties of regular clutters and propose an algorithm which, in O(mn) steps, generates all the minimal transversals of a regular clutter L and produces the transversal having minimum weight. --- paper_title: Efficient dualization of O(log n)-term monotone disjunctive normal forms paper_content: This paper shows that O(logn)-term monotone disjunctive normal forms (DNFs) φ can be dualized in incremental polynomial time, where n is the number of variables in φ. This improves upon the trivial result that k-term monotone DNFs can be dualized in polynomial time, where k is bounded by some constant. --- paper_title: Generating Dual-Bounded Hypergraphs paper_content: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics. Given a monotone property ~ over the subsets of a finite set V, we consider the problem of incrementally generating the family F π of all minimal subsets satisfying property ~ , when ~ is given by a polynomial-time satisfiability oracle. For a number of interesting monotone properties, the family F π turns out to be uniformly dual-bounded , allowing for the incrementally efficient enumeration of the members of F π. Important applications include the efficient generation of minimal infrequent sets of a database (data mining), minimal connectivity ensuring collections of subgraphs from a given list (reliability theory), minimal feasible solutions to a system of monotone inequalities in integer variables (integer programming), minimal spanning collections of subspaces from a given list (linear algebra) and max... 
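The generation problem in the abstract above on minimal integer solutions of a monotone linear system can be illustrated by exhaustive search over a small box. The toy system, the box, and the helper name below are assumptions; the cited work replaces this enumeration by incremental quasi-polynomial algorithms.

    # Brute-force sketch: list all minimal non-negative integer vectors x in a box
    # satisfying a monotone system A x >= b with non-negative coefficients.
    from itertools import product

    def minimal_integer_solutions(A, b, box):
        def feasible(x):
            return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) >= b_i
                       for row, b_i in zip(A, b))
        sols = [x for x in product(*(range(c + 1) for c in box)) if feasible(x)]
        # keep x only if no other feasible vector is componentwise smaller or equal
        return [x for x in sols
                if not any(y != x and all(yi <= xi for yi, xi in zip(y, x))
                           for y in sols)]

    A = [[1, 2], [2, 1]]       # assumed toy system: x1 + 2*x2 >= 2 and 2*x1 + x2 >= 2
    b = [2, 2]
    print(minimal_integer_solutions(A, b, box=(2, 2)))
    # -> [(0, 2), (1, 1), (2, 0)]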
--- paper_title: Polynomial Time Recognition Of 2-Monotonic Positive Boolean Functions Given By An Oracle paper_content: We consider the problem of identifying an unknown Boolean function $f$ by asking an oracle the functional values $f(a)$ for a selected set of test vectors $a \in \{0,1\}^{n}$. Furthermore, we assume that $f$ is a positive (or monotone) function of $n$ variables. It is not yet known whether or not the whole task of generating test vectors and checking if the identification is completed can be carried out in polynomial time in $n$ and $m$, where $m=|\min T(f)| + |\max F(f)|$ and $\min T(f)$ (respectively, $\max F(f))$ denotes the set of minimal true (respectively, maximal false) vectors of $f$. To partially answer this question, we propose here two polynomial-time algorithms that, given an unknown positive function $f$ of $n$ variables, decide whether or not $f$ is 2-monotonic and, if $f$ is 2-monotonic, output both sets $\min T(f)$ and $\max F(f)$. The first algorithm uses $O(nm^{2} + n^{2}m)$ time and $O(nm)$ queries, while the second one uses $O(n^{3}m)$ time and $O(n^{3}m)$ queries. --- paper_title: A Fast and Simple Algorithm for Identifying 2-Monotonic Positive Boolean Functions paper_content: Consider the problem of identifying minT(f) and maxF(f) of a positive (i.e., monotone) Boolean functionf, by using membership queries only, where minT(f) (maxF(f)) denotes the set of minimal true vectors (maximum false vectors) off. Moreover, as the existence of a polynomial total time algorithm (i.e., polynomial time in the length of input and output) for this problem is still open, we consider here a restricted problem: given an unknown positive functionfofnvariables, decide whetherfis 2-monotonic or not, and iffis 2-monotonic, output both minT(f) and maxF(f). For this problem, we propose a simple algorithm, which is based on the concept of maximum latency, and we show that it usesO(n2m) time andO(n2m) queries, wherem=|minT(f)|+|maxF(f)|. This answers affirmatively the conjecture raised in Boroset al.Lecture Notes in Comput. Sci.557(1991), 104?115, Boroset al.SIAM J. Comput.26(1997), 93?109, and is an improvement over the two algorithms discussed therein: one usesO(n3m) time andO(n3m) queries, and the other usesO(nm2+n2m) time andO(nm) queries. --- paper_title: New results on monotone dualization and generating hypergraph transversals paper_content: We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by \chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem. --- paper_title: The Maximum Latency and Identification of Positive Boolean Functions paper_content: Consider the problem of identifying min T(f) and max F(f) of a positive (i.e., monotone) Boolean function f, by using membership queries only, where min T(f) (maxF(f)) denotes the set of minimal true vectors (maximal false vectors) of f. 
It is known that an incrementally polynomial algorithm exists if and only if there is a polynomial time algorithm to check the existence of an unknown vector for given sets MT\(\subseteq \) min T(f) and MF\(\subseteq \) max F(f). Unfortunately, however, the complexity of this problem is still unknown. To answer this question partially, we introduce in this paper a measure for the difficulty of finding an unknown vector, which is called the maximum latency. If the maximum latency is constant, then an unknown vector can be found in polynomial time and there is an incrementally polynomial algorithm for identification. Several subclasses of positive functions are shown to have constant maximum latency, e.g., 2-monotonic positive functions, Δ-partial positive threshold functions and matroid functions, while the class of general positive functions has maximum latency not smaller than [n/4]+1 and the class of positive k-DNF functions has Ω(√n) maximum latency. --- paper_title: Generating Maximal Independent Sets for Hypergraphs with Bounded Edge-Intersections paper_content: Given a finite set V, and integers k > 1 and r > 0, denote by A(k, r) the class of hypergraphs A ⊆ 2 V with (k, r)-bounded intersections, i.e. in which the intersection of any k distinct hyperedges has size at most r. We consider the problem MIS(A,I): given a hypergraph A and a subfamily I C I(A), of its maximal independent sets (MIS) I(A), either extend this subfamily by constructing a new MIS I ∈ I(A) \ I or prove that there are no more MIS, that is I = I(A). We show that for hypergraphs A E A(k,r) with k + r ≤ const, problem MIS(A,I) is NC-reducible to problem MIS(A',O) of generating a single MIS for a partial subhypergraph A' of A. In particular, for this class of hypergraphs, we get an incremental polynomial algorithm for generating all MIS. Furthermore, combining this result with the currently known algorithms for finding a single maximal independent set of a hypergraph, we obtain efficient parallel algorithms for incrementally generating all MIS for hypergraphs in the classes A(1, c), A(c, 0), and A(2,1), where c is a constant. We also show that, for A ∈ A(k,r), where k + r < const, the problem of generating all MIS of A can be solved in incremental polynomial-time with space polynomial only in the size of A. --- paper_title: NP-Completeness: A Retrospective paper_content: For a quarter of a century now, NP-completeness has been computer science's favorite paradigm, fad, punching bag, buzzword, alibi, and intellectual export. This paper is a fragmentary commentary on its origins, its nature, its impact, and on the attributes that have made it so pervasive and contagious. --- paper_title: An SE-tree-based prime implicant generation algorithm paper_content: Prime implicants/implicates (PIs) have been shown to be a useful tool in several problem domains. In model-based diagnosis (MBD), de Kleer et al. (Proc. AAAI-90) have used PIs to characterize diagnoses. We present a PI generation algorithm which, although based on thegeneral SE-tree-based search framework, is effectively an improvement of aparticular PI generation algorithm proposed by Slagle et al. (IEEE Trans. Comput. 19(4) (1970)). The improvement is achieved via adecomposition tactic which is boosted by the SE-tree-based framework. The new algorithm is also more flexible in a number of ways. We present empirical results comparing the new algorithm to the old one, as well as to current PI generation algorithms. 
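A standard building block of membership-query identification of a positive function, which the identification results above reason about, is turning an arbitrary true point into a minimal true vector: greedily switch 1-coordinates off while the oracle still answers true. The sketch below shows only this subroutine; how to drive the whole identification process is exactly what the cited papers analyse. The example oracle is an assumption.

    # Minimize a true point of a monotone function f using membership queries only.
    def minimize_true_point(f, x):
        x = list(x)
        for i in range(len(x)):
            if x[i] == 1:
                x[i] = 0
                if not f(x):       # switching coordinate i off falsified f, keep the 1
                    x[i] = 1
        return tuple(x)            # a minimal true vector, i.e. a prime implicant of f

    f = lambda x: (x[0] and x[1]) or x[2]          # assumed example: f = x0 x1 OR x2
    print(minimize_true_point(f, (1, 1, 1)))       # -> (0, 0, 1)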
--- paper_title: On Generating the Irredundant Conjunctive and Disjunctive Normal Forms of Monotone Boolean Functions paper_content: Let f : {0,1}^n → {0,1} be a monotone Boolean function whose value at any point x ∈ {0,1}^n can be determined in time t. Denote by c = ∧_{I∈C} ∨_{i∈I} x_i the irredundant CNF of f, where C is the set of the prime implicates of f. Similarly, let d = ∨_{J∈D} ∧_{j∈J} x_j be the irredundant DNF of the same function, where D is the set of the prime implicants of f. We show that given subsets C' ⊆ C and D' ⊆ D such that (C',D') ≠ (C,D), a new term in (C\C')∪(D\D') can be found in time O(n(t+n)) + m^{o(log m)}, where m = |C'|+|D'|. In particular, if f(x) can be evaluated for every x ∈ {0,1}^n in polynomial time, then the forms c and d can be jointly generated in incremental quasi-polynomial time. On the other hand, even for the class of ∧,∨-formulae f of depth 2, i.e., for CNFs or DNFs, it is unlikely that uniform sampling from within the set of the prime implicates and implicants of f can be carried out in time bounded by a quasi-polynomial 2^{polylog(·)} in the input size of f. We also show that for some classes of polynomial-time computable monotone Boolean functions it is NP-hard to test either of the conditions D' = D or C' = C. This provides evidence that for each of these classes neither conjunctive nor disjunctive irredundant normal forms can be generated in total (or incremental) quasi-polynomial time. Such classes of monotone Boolean functions naturally arise in game theory, networks and relay contact circuits, convex programming, and include a subset of ∧,∨-formulae of depth 3. --- paper_title: Self-Duality of Bounded Monotone Boolean Functions and Related Problems paper_content: In this paper we show the equivalence between the problem of determining self-duality of a boolean function in DNF and a special type of satisfiability problem called NAESPI. Eiter and Gottlob [8] use a result from [2] to show that self-duality of monotone boolean functions which have bounded clause sizes (by some constant) can be determined in polynomial time. We show that the self-duality of instances in the class studied by Eiter and Gottlob can be determined in time linear in the number of clauses in the input, thereby strengthening their result. Domingo [7] recently showed that self-duality of boolean functions where each clause is bounded by √(log n) can be solved in polynomial time. Our linear time algorithm for solving the clauses with bounded size in fact solves the √(log n)-bounded self-duality problem in O(n^2 √(log n)) time, which is a better bound than the O(n^3) algorithm of Domingo [7]. Another class of self-dual functions arising naturally in application domains has the property that every pair of terms in f intersects in at most a constant number of variables. The equivalent subclass of NAESPI is the c-bounded NAESPI. We also show that c-bounded NAESPI can be solved in polynomial time when c is some constant. We also give an alternative characterization of almost self-dual functions proposed by Bioch and Ibaraki [5] in terms of NAESPI instances which admit solutions of a 'particular' type. --- paper_title: Complexity of Identification and Dualization of Positive Boolean Functions paper_content: Abstract We consider in this paper the problem of identifying min T(ƒ) and max F(ƒ) of a positive (i.e., monotone) Boolean function ƒ, by using membership queries only, where min T(ƒ) (max F(ƒ)) denotes the set of minimal true vectors (maximal false vectors) of ƒ.
It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions: • An incrementally polynomial algorithm to dualize ƒ; • An incrementally polynomial algorithm to self-dualize ƒ; • A polynomial algorithm to decide if ƒ and are mutually dual; • A polynomial algorithm to decide if ƒ is self-dual; • A polynomial algorithm to decide if ƒ is saturated; • A polynomial algorithm in |min (ƒ)| + |max (ƒ)| to identify min (ƒ) only. Some of these are already well known open problems in the respective fields. Other related topics, including various equivalent problems encountered in hypergraph theory and theory of coteries (used in distributed systems), are also discussed. --- paper_title: Identifying the Minimal Transversals of a Hypergraph and Related Problems paper_content: The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation (i.e., given a hypergraph $\cal H$, decide if every subset of vertices is contained in or contains some edge of $\cal H$) is shown to be co-NP-complete. A certain subproblem of hypergraph saturation, the saturation of simple hypergraphs (i.e., Sperner families), is shown to be under polynomial transformation equivalent to transversal hypergraph recognition; i.e., given two hypergraphs ${\cal H}_{1}, {\cal H}_{2}$, decide if the sets in ${\cal H}_{2}$ are all the minimal transversals of ${\cal H}_{1}$. The complexity of the search problem related to the recognition of the transversal hypergraph, the computation of the transversal hypergraph, is an open problem. This task needs time exponential in the input size; it is unknown whether an output-polynomial algorithm exists. For several important subcases, for instance if an upper or lower bound is imposed on the edge size or for acyclic hypergraphs, output-polynomial algorithms are presented. Computing or recognizing the minimal transversals of a hypergraph is a frequent problem in practice, which is pointed out by identifying important applications in database theory, Boolean switching theory, logic, and artificial intelligence (AI), particularly in model-based diagnosis. --- paper_title: Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries paper_content: We consider exact learning monotone CNF formulas in which each variable appears at most some constant k times (“read-k” monotone CNF). Let f : l0,1r^n → l0,1r be expressible as a read-k monotone CNF formula for some natural number k. We give an incremental output polynomial time algorithm for exact learning both the read-k CNF and (not necessarily read restricted) DNF descriptions of f. The algorithm‘s only method of obtaining information about f is through membership queries, i.e., by inquiring about the value f(x) for points x ∈ l0,1r^n. The algorithm yields an incremental polynomial output time solution to the (read-k) monotone CNF/DNF dualization problem. The unrestricted versions remain open problems of importance. --- paper_title: A New Algorithm for Generating Prime Implicants paper_content: This paper describes an algorithm which will generate all the prime implicants of a Boolean function. The algorithm is different from those previously given in the literature, and in many cases it is more efficient. 
It is proved that the algorithm will find all the prime implicants. The algorithm may possibly generate some nonprime implicants. However, using frequency orderings on literals, the experiments with the algorithm show that it usually generates very few ( possibly none) nonprime implicants. Furthermore, the algorithm may be used to find the minimal sums of a Boolean function. The algorithm is implemented by a computer program in the LISP language. --- paper_title: Evaluation of an Algorithm for the Transversal Hypergraph Problem paper_content: The Transversal Hypergraph Problem is the problem of computing, given a hypergraph, the set of its minimal transversals, i.e. the hypergraph whose hyperedges are all minimal hitting sets of the given one. This problem turns out to be central in various fields of Computer Science. We present and experimentally evaluate a heuristic algorithm for the problem, which seems able to handle large instances and also possesses some nice features especially desirable in problems with large output such as the Transversal Hypergraph Problem. --- paper_title: Algorithms for inferring functional dependencies from relations paper_content: Abstract The dependency inference problem is to find a cover of the set of functional dependencies that hold in a given relation. The problem has applications in relational database design, in query optimization, and in artificial intelligence. The problem is exponential in the number of attributes. We develop two algorithms with better best case behavior than the simple one. One algorithm reduces the problem to computing the transversal of a hypergraph. The other is based on repeatedly sorting the relation with respect to a set of attributes. --- paper_title: New results on monotone dualization and generating hypergraph transversals paper_content: We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by \chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem. --- paper_title: An SE-tree-based prime implicant generation algorithm paper_content: Prime implicants/implicates (PIs) have been shown to be a useful tool in several problem domains. In model-based diagnosis (MBD), de Kleer et al. (Proc. AAAI-90) have used PIs to characterize diagnoses. We present a PI generation algorithm which, although based on thegeneral SE-tree-based search framework, is effectively an improvement of aparticular PI generation algorithm proposed by Slagle et al. (IEEE Trans. Comput. 19(4) (1970)). The improvement is achieved via adecomposition tactic which is boosted by the SE-tree-based framework. The new algorithm is also more flexible in a number of ways. We present empirical results comparing the new algorithm to the old one, as well as to current PI generation algorithms. 
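The transversal-hypergraph computations evaluated and applied in the abstracts above (for example, to infer functional dependencies) can be grounded in the textbook sequential method: process the hyperedges one at a time, cross the current minimal transversals with each new edge, and discard non-minimal candidates. The sketch below implements only this basic scheme, not the specific heuristics studied in the cited papers.

    # Sequential (Berge-style) computation of all minimal transversals of a hypergraph.
    def minimal_transversals(hyperedges):
        transversals = [frozenset()]
        for edge in hyperedges:
            extended = []
            for t in transversals:
                if t & edge:                          # already hits the new edge
                    extended.append(t)
                else:
                    extended.extend(t | {v} for v in edge)
            # prune candidates that strictly contain another candidate, drop duplicates
            extended = [s for s in extended if not any(o < s for o in extended)]
            transversals = list(dict.fromkeys(extended))
        return transversals

    print(minimal_transversals([{0, 1}, {1, 2}, {0, 2}]))
    # -> the three 2-element hitting sets of the triangle (printed as frozensets, order may vary)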
--- paper_title: A fast algorithm for computing hypergraph transversals and its application in mining emerging patterns paper_content: Computing the minimal transversals of a hypergraph is an important problem in computer science that has significant applications in data mining. We present a new algorithm for computing hypergraph transversals and highlight their close connection to an important class of patterns known as emerging patterns. We evaluate our technique on a number of large datasets and show that it outperforms previous approaches by a factor of 9-29 times. --- paper_title: A Practical Fast Algorithm for Enumerating Minimal SetCoverings paper_content: For a set family F defined on a grand set E, a subset of F covering all the elements of E is called a set covering. The enumeration problem of minimal set covering is equal to the enumeration problem of hypergraph dualization, minimal hitting sets, and other other many problems, and have been studied intensively. However, the existence of an output polynomial algorithm for this problem is still open. However, there proposed an algorithm whose average computation time on computational experiments for random generated instances is output polynomial. In this paper, we propose a practical fast algorithm obtained by improving this algorithm, and by computational experiments, show that the average computation time for randomly generated instances is O(|E|) per output, by computational experiments. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of sizencan be tested inno(logn)time. --- paper_title: Learning conjunctions of Horn clauses paper_content: An algorithm for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses is presented. (A Horn clause is a disjunction of literals, all but at most one of which is a negated variable). The algorithm uses equivalence queries and membership queries to produce a formula that is logically equivalent to the unknown formula to be learned. The amount of time used by the algorithm is polynomial in the number of variables and the number of clauses in the unknown formula. > --- paper_title: Exact Transversal Hypergraphs and Application to Boolean µ-Functions paper_content: Call an hypergraph, that is a family of subsets (edges) from a finite vertex set, an exact transversal hypergraph iff each of its minimal transversals, i.e., minimal vertex subsets that intersect each edge, meets each edge in a singleton. We show that such hypergraphs are recognizable in polynomial time and that their minimal transversals as well as their maximal independent sets can be generated in lexicographic order with polynomial delay between subsequent outputs, which is impossible in the general case unless P= NP. The results obtained are applied to monotone Boolean ?-functions, that are Boolean functions defined by a monotone Boolean expression (that is, built with ?, ? only) in which no variable occurs repeatedly. We also show that recognizing such functions from monotone Boolean expressions is co-NP-hard, thus complementing Mundici's result that this problem is in co-NP. --- paper_title: On Generating the Irredundant Conjunctive and Disjunctive Normal Forms of Monotone Boolean Functions paper_content: Let f : {0,1} n → {0,1} be a monotone Boolean function whose value at any point x ∈ {0,1} n can be determined in time t. 
Denote by c=Λ I ∈ C ∨ i ∈ I x i the irredundant CNF off, where C is the set of the prime implicates of f. Similarly, let d=∨ J ∈ D Λ j ∈ J x j be the irredundant DNF of the same function, where D is the set of the prime implicants of f. We show that given subsets C' ⊆ C and D' ⊆ D such that (C',D') ¬= (C,D), a new term in (C\C')∪(D\D') can be found in time O(n(t+n))+m o(log m) , where m=|C'|+|D'|. In particular, if f(x) can be evaluated for every x ∈ {0,1} n in polynomial time, then the forms c and d can be jointly generated in incremental quasi-polynomial time. On the other hand, even for the class of Λ, ∨-formulae f of depth 2, i.e., for CNFs or DNFs, it is unlikely that uniform sampling from within the set of the prime implicates and implicants of f can be carried out in time bounded by a quasi-polynomial 2 polylog(.) in the input size of f. We also show that for some classes of polynomial-time computable monotone Boolean functions it is NP-hard to test either of the conditions D' =D or C' =C. This provides evidence that for each of these classes neither conjunctive nor disjunctive irredundant normal forms can be generated in total (or incremental) quasi-polynomial time. Such classes of monotone Boolean functions naturally arise in game theory, networks and relay contact circuits, convex programming, and include a subset of Λ, ∨-formulae of depth 3. --- paper_title: On the frequency of the most frequently occurring variable in dual monotone DNFs paper_content: Abstract Let f ( X l ,…, X N )=⋁ I ∈ F ⋀ i ∈ I X i and g ( X l ,…, X N )=⋁ I ∈ G ⋀ i ∈ I X i be a pair of dual monotone irredundant disjunctive normal forms, where F and G are the sets of the prime implicants of tf and g , respectively. For a variable x i , i = 1, …, n , let μ i = # { I ∈ F | i ∈ I }/| F | and v i = # { I ∈ G | i ∈ I }/| G | be the frequencies with which x i occurs in f and g . It is easily seen that max { μ 1 , v 1 , …, μ n , v n } ⩾ 1/log(| F | + | G |). We give examples of arbitrarily large F and G for which the above bound is tight up to a factor of 2. --- paper_title: Dual-Bounded Generating Problems: Partial And Multiple Transversals Of A Hypergraph paper_content: We consider two generalizations of the notion of transversal to a finite hypergraph, the so-called multiple and partial transversals. Multiple transversals naturally arise in 0-1 programming, while partial transversals are related to data mining and machine learning. We show that for an arbitrary hypergraph the families of multiple and partial transversals are both dual-bounded in the sense that the size of the corresponding dual hypergraph is bounded by a polynomial in the cardinality and the length of description of the input hypergraph. Our bounds are based on new inequalities of extremal set theory and threshold Boolean logic, which may be of independent interest. We also show that the problems of generating all multiple and all partial transversals for a given hypergraph are polynomial-time reducible to the generation of all ordinary transversals for another hypergraph, i.e., to the well-known dualization problem for hypergraphs. As a corollary, we obtain incremental quasi-polynomial-time algorithms for both of the above problems, as well as for the generation of all the minimal binary solutions for an arbitrary monotone system of linear inequalities. 
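The frequency bound in the abstract above can be checked numerically on a tiny dual pair. The sketch below takes the logarithm to base 2, which is an assumption about the intended base, and uses the self-dual majority-of-three function as the example.

    # Illustrative check of the bound: for a dual pair of irredundant monotone DNFs
    # F and G, the most frequent variable occurs in at least a 1/log2(|F|+|G|)
    # fraction of the terms of F or of G.
    import math

    F = [{0, 1}, {1, 2}, {0, 2}]     # prime implicants of maj(x0, x1, x2)
    G = [{0, 1}, {1, 2}, {0, 2}]     # its dual: the same family (maj is self-dual)

    def max_frequency(terms, n):
        return max(sum(1 for t in terms if i in t) for i in range(n)) / len(terms)

    bound = 1 / math.log2(len(F) + len(G))
    print(max(max_frequency(F, 3), max_frequency(G, 3)), ">=", bound)
    # -> 0.666... >= 0.386...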
--- paper_title: An intersection inequality for discrete distributions and related generation problems paper_content: Given two finite sets of points X, Y in Rn which can be separated by a nonnegative linear function, and such that the componentwise minimum of any two distinct points in X is dominated by some point in Y, we show that |X| = n|Y|. As a consequence of this result, we obtain quasi-polynomial time algorithms for generating all maximal integer feasible solutions for a given monotone system of separable inequalities, for generating all p-inefficient points of a given discrete probability distribution, and for generating all maximal empty hyper-rectangles for a given set of points in Rn. This provides a substantial improvement over previously known exponential algorithms for these generation problems related to Integer and Stochastic Programming, and Data Mining. Furthermore, we give an incremental polynomial time generation algorithm for monotone systems with fixed number of separable inequalities, which, for the very special case of one inequality, implies that for discrete probability distributions with independent coordinates, both p-efficient and p-inefficient points can be separately generated in incremental polynomial time. --- paper_title: Abduction and the Dualization Problem paper_content: Computing abductive explanations is an important problem, which has been studied extensively in Artificial Intelligence (AI) and related disciplines. While computing some abductive explanation for a literal χ with respect to a set of abducibles A from a Horn propositional theory Σ is intractable under the traditional representation of Σ by a set of Horn clauses, the problem is polynomial under model-based theory representation, where Σ is represented by its characteristic models. Furthermore, computing all the (possibly exponentially) many explanations is polynomial-time equivalent to the problem of dualizing a positive CNF, which is a well-known problem whose precise complexity in terms of the theory of NP-completeness is not known yet. In this paper, we first review the monotone dualization problem and its connection to computing all abductive explanations for a query literal and some related problems in knowledge discovery. We then investigate possible generalizations of this connection to abductive queries beyond literals. Among other results, we find that the equivalence for generating all explanations for a clause query (resp., term query) χ to the monotone dualization problem holds if χ contains at most k positive (resp., negative) literals for constant k, while the problem is not solvable in polynomial total-time, i.e., in time polynomial in the combined size of the input and the output, unless P=NP for general clause resp. term queries. Our results shed new light on the computational nature of abduction and Horn theories in particular, and might be interesting also for related problems, which remains to be explored. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of sizencan be tested inno(logn)time. --- paper_title: Dual-Bounded Generating Problems: Weighted Transversals of a Hypergraph paper_content: Abstract We consider a generalization of the notion of transversal to a finite hypergraph, the so-called weighted transversals. 
Given a non-negative weight vector assigned to each hyperedge of an input hypergraph A and a non-negative threshold vector, we define a weighted transversal as a minimal vertex set which intersects all the hyperedges of A except for a sub-family of total weight not exceeding the given threshold vector. Weighted transversals generalize partial and multiple transversals introduced in Boros et al. (SIAM J. Comput. 30 (6) (2001)) and also include minimal binary solutions to non-negative systems of linear inequalities and minimal weighted infrequent sets in databases. We show that the hypergraph of all weighted transversals is dual-bounded, i.e., the size of its transversal hypergraph is polynomial in the number of weighted transversals and the size of the input hypergraph. Our bounds are based on new inequalities of extremal set theory and threshold Boolean logic, which may be of independent interest. For instance, we show that for any row-weighted m×n binary matrix and any threshold weight t, the number of maximal sets of columns whose row support has weight above t is at most m times the number of minimal sets of columns with row support of total weight below t. We also prove that the problem of generating all weighted transversals for a given hypergraph is polynomial-time reducible to the generation of all ordinary transversals for another hypergraph, i.e., to the well-known hypergraph dualization problem. As a corollary, we obtain an incremental quasi-polynomial-time algorithm for generating all weighted transversals for a given hypergraph. This result includes as special cases the generation of all the minimal Boolean solutions to a given system of non-negative linear inequalities and the generation of all minimal weighted infrequent sets of columns for a given binary matrix. --- paper_title: Dual-Bounded Generating Problems: All Minimal Integer Solutions for a Monotone System of Linear Inequalities paper_content: We consider the problem of enumerating all minimal integer solutions of a monotone system of linear inequalities. We first show that, for any monotone system of r linear inequalities in n variables, the number of maximal infeasible integer vectors is at most rn times the number of minimal integer solutions to the system. This bound is accurate up to a polylog(r) factor and leads to a polynomial-time reduction of the enumeration problem to a natural generalization of the well-known dualization problem for hypergraphs, in which dual pairs of hypergraphs are replaced by dual collections of integer vectors in a box. We provide a quasi-polynomial algorithm for the latter dualization problem. These results imply, in particular, that the problem of incrementally generating all minimal integer solutions to a monotone system of linear inequalities can be done in quasi-polynomial time. --- paper_title: An inequality for polymatroid functions and its applications paper_content: An integral-valued set function f: 2^V ↦ Z is called polymatroid if it is submodular, nondecreasing, and f(∅) = 0. Given a polymatroid function f and an integer threshold t ≥ 1, let α = α(f,t) denote the number of maximal sets X ⊆ V satisfying f(X) < t, let β = β(f,t) be the number of minimal sets X ⊆ V for which f(X) ≥ t, and let n = |V|. We show that if β ≥ 2 then α ≤ β^{(log t)/c}, where c = c(n,β) is the unique positive root of the equation 1 = 2^c (n^{c/log β} − 1). In particular, our bound implies that α ≤ (nβ)^{log t} for all β ≥ 1.
We also give examples of polymatroid functions with arbitrarily large t, n, α and β for which α ≥ β^{(0.551 log t)/c}. More generally, given a polymatroid function f : 2^V ↦ Z and an integral threshold t ≥ 1, consider an arbitrary hypergraph H' such that |H'| ≥ 2 and f(H) ≥ t for all H ∈ H'. Let f' be the family of all maximal independent sets X of H' for which f(X) < t. Then |f'| ≤ |H'|^{(log t)/c(n,|H'|)}. As an application, we show that given a system of polymatroid inequalities f_1(X) ≥ t_1, ..., f_m(X) ≥ t_m with quasi-polynomially bounded right-hand sides t_1,…,t_m, all minimal feasible solutions to this system can be generated in incremental quasi-polynomial time. In contrast to this result, the generation of all maximal infeasible sets is an NP-hard problem for many polymatroid inequalities of small range. --- paper_title: On Maximal Frequent and Minimal Infrequent Sets in Binary Matrices paper_content: Given an m×n binary matrix A, a subset C of the columns is called t-frequent if there are at least t rows in A in which all entries belonging to C are non-zero. Let us denote by α the number of maximal t-frequent sets of A, and let β denote the number of those minimal column subsets of A which are not t-frequent (so-called t-infrequent sets). We prove that the inequality α ≤ (m−t+1)β holds for any binary matrix A in which not all column subsets are t-frequent. This inequality is sharp, and allows for an incremental quasi-polynomial algorithm for generating all minimal t-infrequent sets. We also prove that the analogous generation problem for maximal t-frequent sets is NP-hard. Finally, we discuss the complexity of generating closed frequent sets and some other related problems. --- paper_title: Generating Dual-Bounded Hypergraphs paper_content: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics. Given a monotone property π over the subsets of a finite set V, we consider the problem of incrementally generating the family F_π of all minimal subsets satisfying property π, when π is given by a polynomial-time satisfiability oracle. --- paper_title: Dual-Bounded Generating Problems: Partial And Multiple Transversals Of A Hypergraph paper_content: We consider two generalizations of the notion of transversal to a finite hypergraph, the so-called multiple and partial transversals. Multiple transversals naturally arise in 0-1 programming, while partial transversals are related to data mining and machine learning. We show that for an arbitrary hypergraph the families of multiple and partial transversals are both dual-bounded in the sense that the size of the corresponding dual hypergraph is bounded by a polynomial in the cardinality and the length of description of the input hypergraph.
Our bounds are based on new inequalities of extremal set theory and threshold Boolean logic, which may be of independent interest. We also show that the problems of generating all multiple and all partial transversals for a given hypergraph are polynomial-time reducible to the generation of all ordinary transversals for another hypergraph, i.e., to the well-known dualization problem for hypergraphs. As a corollary, we obtain incremental quasi-polynomial-time algorithms for both of the above problems, as well as for the generation of all the minimal binary solutions for an arbitrary monotone system of linear inequalities. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of sizencan be tested inno(logn)time. --- paper_title: Dual-Bounded Generating Problems: Weighted Transversals of a Hypergraph paper_content: Abstract We consider a generalization of the notion of transversal to a finite hypergraph, the so-called weighted transversals. Given a non-negative weight vector assigned to each hyperedge of an input hypergraph A and a non-negative threshold vector, we define a weighted transversal as a minimal vertex set which intersects all the hyperedges of A except for a sub-family of total weight not exceeding the given threshold vector. Weighted transversals generalize partial and multiple transversals introduced in Boros et al. (SIAM J. Comput. 30 (6) (2001)) and also include minimal binary solutions to non-negative systems of linear inequalities and minimal weighted infrequent sets in databases. We show that the hypergraph of all weighted transversals is dual-bounded, i.e., the size of its transversal hypergraph is polynomial in the number of weighted transversals and the size of the input hypergraph. Our bounds are based on new inequalities of extremal set theory and threshold Boolean logic, which may be of independent interest. For instance, we show that for any row-weighted m×n binary matrix and any threshold weight t, the number of maximal sets of columns whose row support has weight above t is at most m times the number of minimal sets of columns with row support of total weight below t. We also prove that the problem of generating all weighted transversals for a given hypergraph is polynomial-time reducible to the generation of all ordinary transversals for another hypergraph, i.e., to the well-known hypergraph dualization problem. As a corollary, we obtain an incremental quasi-polynomial-time algorithm for generating all weighted transversals for a given hypergraph. This result includes as special cases the generation of all the minimal Boolean solutions to a given system of non-negative linear inequalities and the generation of all minimal weighted infrequent sets of columns for a given binary matrix. --- paper_title: Dual-Bounded Generating Problems: All Minimal Integer Solutions for a Monotone System of Linear Inequalities paper_content: We consider the problem of enumerating all minimal integer solutions of a monotone system of linear inequalities. We first show that, for any monotone system of r linear inequalities in n variables, the number of maximal infeasible integer vectors is at most rn times the number of minimal integer solutions to the system. 
This bound is accurate up to a polylog(r) factor and leads to a polynomial-time reduction of the enumeration problem to a natural generalization of the well-known dualization problem for hypergraphs, in which dual pairs of hypergraphs are replaced by dual collections of integer vectors in a box. We provide a quasi-polynomial algorithm for the latter dualization problem. These results imply, in particular, that the problem of incrementally generating all minimal integer solutions to a monotone system of linear inequalities can be done in quasi-polynomial time. --- paper_title: An inequality for polymatroid functions and its applications paper_content: An integral-valued set function f:2v ↦ Z is called polymatroid if it is submodular, nondecreasing, and f(φ) = 0. Given a polymatroid function f and an integer threshold t ≥ 1, let α = α(f,t) denote the number of maximal sets X ⊆ V satisfying f(X) < t, let β = β(f,t) be the number of minimal sets X ⊆ V for which f(X) ≥ t, and let n = |V|. We show that if β ≥ 2 then α ≤ β(log t)/c, where c = c(n,β) is the unique positive root of the equation 1 = 2c(nc/log β - 1). In particular, our bound implies that α ≤ (nβ)log t for all β ≥ 1. We also give examples of polymatroid functions with arbitrarily large t, n, α and β for which α ≥ β(0.551 log t)/c. More generally, given a polymatroid function f : 2v ↦ Z and an integral threshold t ≥ 1, consider an arbitrary hypergraph H' such that |H'| ≥ 2 and f(H) ≥ t for all H ∈ H'. Let f' be the family of all maximal independent sets X of H' for which f(X) < t. Then |f'| ≤ |H'|(log t)/c(n,|H'|). As an application, we show that given a system of polymatroid inequalities f1(X) ≥ t1,..., fm(X) ≥ tm with quasi-polynomially bounded right-hand sides t1,....,tm, all minimal feasible solutions to this system can be generated in incremental quasi-polynomial time. In contrast to this result, the generation of all maximal infeasible sets is an NP-hard problem for many polymatroid inequalities of small range. --- paper_title: On Maximal Frequent and Minimal Infrequent Sets in Binary Matrices paper_content: Given an m×n binary matrix A, a subset C of the columns is called t-frequent if there are at least t rows in A in which all entries belonging to C are non-zero. Let us denote by α the number of maximal t-frequent sets of A, and let β denote the number of those minimal column subsets of A which are not t-frequent (so called t-infrequent sets). We prove that the inequality α≤(m−t+1)β holds for any binary matrix A in which not all column subsets are t-frequent. This inequality is sharp, and allows for an incremental quasi-polynomial algorithm for generating all minimal t-infrequent sets. We also prove that the analogous generation problem for maximal t-frequent sets is NP-hard. Finally, we discuss the complexity of generating closed frequent sets and some other related problems. --- paper_title: Generating Dual-Bounded Hypergraphs paper_content: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics. Given a monotone property ~ over the subsets of a finite set V, we consider the problem of incrementally generating the family F π of all minimal subsets satisfying property ~ , when ~ is given by a polynomial-time satisfiability oracle. 
For a number of interesting monotone properties, the family F π turns out to be uniformly dual-bounded , allowing for the incrementally efficient enumeration of the members of F π. Important applications include the efficient generation of minimal infrequent sets of a database (data mining), minimal connectivity ensuring collections of subgraphs from a given list (reliability theory), minimal feasible solutions to a system of monotone inequalities in integer variables (integer programming), minimal spanning collections of subspaces from a given list (linear algebra) and max... --- paper_title: New results on monotone dualization and generating hypergraph transversals paper_content: We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by \chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem. --- paper_title: On Generating the Irredundant Conjunctive and Disjunctive Normal Forms of Monotone Boolean Functions paper_content: Let f : {0,1} n → {0,1} be a monotone Boolean function whose value at any point x ∈ {0,1} n can be determined in time t. Denote by c=Λ I ∈ C ∨ i ∈ I x i the irredundant CNF off, where C is the set of the prime implicates of f. Similarly, let d=∨ J ∈ D Λ j ∈ J x j be the irredundant DNF of the same function, where D is the set of the prime implicants of f. We show that given subsets C' ⊆ C and D' ⊆ D such that (C',D') ¬= (C,D), a new term in (C\C')∪(D\D') can be found in time O(n(t+n))+m o(log m) , where m=|C'|+|D'|. In particular, if f(x) can be evaluated for every x ∈ {0,1} n in polynomial time, then the forms c and d can be jointly generated in incremental quasi-polynomial time. On the other hand, even for the class of Λ, ∨-formulae f of depth 2, i.e., for CNFs or DNFs, it is unlikely that uniform sampling from within the set of the prime implicates and implicants of f can be carried out in time bounded by a quasi-polynomial 2 polylog(.) in the input size of f. We also show that for some classes of polynomial-time computable monotone Boolean functions it is NP-hard to test either of the conditions D' =D or C' =C. This provides evidence that for each of these classes neither conjunctive nor disjunctive irredundant normal forms can be generated in total (or incremental) quasi-polynomial time. Such classes of monotone Boolean functions naturally arise in game theory, networks and relay contact circuits, convex programming, and include a subset of Λ, ∨-formulae of depth 3. --- paper_title: Complexity of Identification and Dualization of Positive Boolean Functions paper_content: Abstract We consider in this paper the problem of identifying min T(ƒ) and max F(ƒ) of a positive (i.e., monotone) Boolean function ƒ, by using membership queries only, where min T(ƒ) (max F(ƒ)) denotes the set of minimal true vectors (maximal false vectors) of ƒ. 
It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions: • An incrementally polynomial algorithm to dualize ƒ; • An incrementally polynomial algorithm to self-dualize ƒ; • A polynomial algorithm to decide if ƒ and g are mutually dual; • A polynomial algorithm to decide if ƒ is self-dual; • A polynomial algorithm to decide if ƒ is saturated; • A polynomial algorithm in |min T(ƒ)| + |max F(ƒ)| to identify min T(ƒ) only. Some of these are already well known open problems in the respective fields. Other related topics, including various equivalent problems encountered in hypergraph theory and theory of coteries (used in distributed systems), are also discussed. --- paper_title: Identifying the Minimal Transversals of a Hypergraph and Related Problems paper_content: The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation (i.e., given a hypergraph $\cal H$, decide if every subset of vertices is contained in or contains some edge of $\cal H$) is shown to be co-NP-complete. A certain subproblem of hypergraph saturation, the saturation of simple hypergraphs (i.e., Sperner families), is shown to be under polynomial transformation equivalent to transversal hypergraph recognition; i.e., given two hypergraphs ${\cal H}_{1}, {\cal H}_{2}$, decide if the sets in ${\cal H}_{2}$ are all the minimal transversals of ${\cal H}_{1}$. The complexity of the search problem related to the recognition of the transversal hypergraph, the computation of the transversal hypergraph, is an open problem. This task needs time exponential in the input size; it is unknown whether an output-polynomial algorithm exists. For several important subcases, for instance if an upper or lower bound is imposed on the edge size or for acyclic hypergraphs, output-polynomial algorithms are presented. Computing or recognizing the minimal transversals of a hypergraph is a frequent problem in practice, which is pointed out by identifying important applications in database theory, Boolean switching theory, logic, and artificial intelligence (AI), particularly in model-based diagnosis. --- paper_title: An Efficient Implementation of a Quasi-polynomial Algorithm for Generating Hypergraph Transversals paper_content: Given a finite set V, and a hypergraph H ⊆ 2^V, the hypergraph transversal problem calls for enumerating all minimal hitting sets (transversals) for H. This problem plays an important role in practical applications as many other problems were shown to be polynomially equivalent to it. Fredman and Khachiyan (1996) gave an incremental quasi-polynomial time algorithm for solving the hypergraph transversal problem (9). In this paper, we present an efficient implementation of this algorithm. While we show that our implementation achieves the same bound on the running time as in (9), practical experience with this implementation shows that it can be substantially faster. We also show that a slight modification of the algorithm in (9) can be used to give a stronger bound on the running time. --- paper_title: Average Case Self-Duality of Monotone Boolean Functions paper_content: The problem of determining whether a monotone boolean function is self-dual has numerous applications in Logic and AI.
The applications include theory revision, model-based diagnosis, abductive explanations and learning monotone boolean functions. It is not known whether self-duality of monotone boolean functions can be tested in polynomial time, though a quasi-polynomial time algorithm exists. We describe another quasi-polynomial time algorithm for solving the self-duality problem of monotone boolean functions and analyze its average-case behaviour on a set of randomly generated instances. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of size n can be tested in n^{o(log n)} time. --- paper_title: Almost all monotone Boolean functions are polynomially learnable using membership queries paper_content: We consider exact learning or identification of monotone Boolean functions by only using membership queries. It is shown that almost all monotone Boolean functions are polynomially identifiable in the input number of variables as well as the output being the sum of the sizes of the CNF and DNF representations. --- paper_title: Complexity of Identification and Dualization of Positive Boolean Functions paper_content: We consider in this paper the problem of identifying min T(ƒ) and max F(ƒ) of a positive (i.e., monotone) Boolean function ƒ, by using membership queries only, where min T(ƒ) (max F(ƒ)) denotes the set of minimal true vectors (maximal false vectors) of ƒ. It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions: • An incrementally polynomial algorithm to dualize ƒ; • An incrementally polynomial algorithm to self-dualize ƒ; • A polynomial algorithm to decide if ƒ and g are mutually dual; • A polynomial algorithm to decide if ƒ is self-dual; • A polynomial algorithm to decide if ƒ is saturated; • A polynomial algorithm in |min T(ƒ)| + |max F(ƒ)| to identify min T(ƒ) only. Some of these are already well known open problems in the respective fields. Other related topics, including various equivalent problems encountered in hypergraph theory and theory of coteries (used in distributed systems), are also discussed. --- paper_title: An intersection inequality for discrete distributions and related generation problems paper_content: Given two finite sets of points X, Y in R^n which can be separated by a nonnegative linear function, and such that the componentwise minimum of any two distinct points in X is dominated by some point in Y, we show that |X| ≤ n|Y|. As a consequence of this result, we obtain quasi-polynomial time algorithms for generating all maximal integer feasible solutions for a given monotone system of separable inequalities, for generating all p-inefficient points of a given discrete probability distribution, and for generating all maximal empty hyper-rectangles for a given set of points in R^n. This provides a substantial improvement over previously known exponential algorithms for these generation problems related to Integer and Stochastic Programming, and Data Mining.
Furthermore, we give an incremental polynomial time generation algorithm for monotone systems with fixed number of separable inequalities, which, for the very special case of one inequality, implies that for discrete probability distributions with independent coordinates, both p-efficient and p-inefficient points can be separately generated in incremental polynomial time. --- paper_title: Dual-Bounded Generating Problems: All Minimal Integer Solutions for a Monotone System of Linear Inequalities paper_content: We consider the problem of enumerating all minimal integer solutions of a monotone system of linear inequalities. We first show that, for any monotone system of r linear inequalities in n variables, the number of maximal infeasible integer vectors is at most rn times the number of minimal integer solutions to the system. This bound is accurate up to a polylog(r) factor and leads to a polynomial-time reduction of the enumeration problem to a natural generalization of the well-known dualization problem for hypergraphs, in which dual pairs of hypergraphs are replaced by dual collections of integer vectors in a box. We provide a quasi-polynomial algorithm for the latter dualization problem. These results imply, in particular, that the problem of incrementally generating all minimal integer solutions to a monotone system of linear inequalities can be done in quasi-polynomial time. --- paper_title: Generating Dual-Bounded Hypergraphs paper_content: This article surveys some recent results on the generation of implicitly given hypergraphs and their applications in Boolean and integer programming, data mining, reliability theory, and combinatorics. Given a monotone property π over the subsets of a finite set V, we consider the problem of incrementally generating the family F_π of all minimal subsets satisfying property π, when π is given by a polynomial-time satisfiability oracle. For a number of interesting monotone properties, the family F_π turns out to be uniformly dual-bounded, allowing for the incrementally efficient enumeration of the members of F_π. Important applications include the efficient generation of minimal infrequent sets of a database (data mining), minimal connectivity ensuring collections of subgraphs from a given list (reliability theory), minimal feasible solutions to a system of monotone inequalities in integer variables (integer programming), minimal spanning collections of subspaces from a given list (linear algebra) and max... --- paper_title: On dualization in products of forests paper_content: Let P = P_1 × … × P_n be the product of n partially ordered sets, each with an acyclic precedence graph in which either the in-degree or the out-degree of each element is bounded. Given a subset A ⊆ P, it is shown that the set of maximal independent elements of A in P can be incrementally generated in quasi-polynomial time. We discuss some applications in data mining related to this dualization problem. --- paper_title: On one criterion of the optimality of an algorithm for evaluating monotonic Boolean functions paper_content: A criterion of the optimality of an algorithm for evaluating monotonic Boolean functions, which is different from the Shannon criterion, is considered and its practical significance is proved. Upper and lower estimates are obtained for the efficiency of the algorithm for evaluating monotonic Boolean functions which is optimal with respect to the criterion which has been introduced.
An algorithm is constructed for evaluating the class of monotonic Boolean functions which are generated by incompatible systems of linear inequalities. This algorithm is optimal with respect to the criterion introduced in this paper, the Shannon criterion, and a number of other criteria subject to certain additional conditions. --- paper_title: A fast parallel algorithm for the maximal independent set problem paper_content: A parallel algorithm is presented which accepts as input a graph G and produces a maximal independent set of vertices in G. On a P-RAM without the concurrent write or concurrent read features, the algorithm executes in O((log n)^4) time and uses O((n/log n)^3) processors, where n is the number of vertices in G. The algorithm has several novel features that may find other applications. These include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a “dynamic pigeonhole principle” that generalizes the conventional pigeonhole principle. --- paper_title: Generating All Maximal Independent Sets: NP-Hardness and Polynomial-Time Algorithms paper_content: Suppose that an independence system $(E,\mathcal {I})$ is characterized by a subroutine which indicates in unit time whether or not a given subset of E is independent. It is shown that there is no algorithm for generating all the K maximal independent sets of such an independence system in time polynomial in $|E|$ and K, unless $\mathcal {P} = \mathcal {NP}$. However, it is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem. The algorithmic techniques bear an interesting relationship with those of Read for the enumeration of graphs and other combinatorial configurations. --- paper_title: A New Algorithm for the Hypergraph Transversal Problem paper_content: We consider the problem of finding all minimal transversals of a hypergraph ${\mathcal H}\subseteq 2^V$, given by an explicit list of its hyperedges. We give a new decomposition technique for solving the problem with the following advantages: (i) Global parallelism: for certain classes of hypergraphs, e.g. hypergraphs of bounded edge size, and any given integer k, the algorithm outputs k minimal transversals of ${\mathcal H}$ in time bounded by ${\rm polylog}(|V|,|{\mathcal H}|,k)$ assuming ${\rm poly}(|V|,|{\mathcal H}|,k)$ number of processors. Except for the case of graphs, none of the previously known algorithms for solving the same problem exhibit this feature. (ii) Using this technique, we also obtain new results on the complexity of generating minimal transversals for new classes of hypergraphs, namely hypergraphs of bounded dual-conformality, and hypergraphs in which every edge intersects every minimal transversal in a bounded number of vertices. --- paper_title: A variant of Reiter's hitting-set algorithm paper_content: In this paper we introduce a variant of Reiter's hitting-set algorithm. This variant produces a hitting-set tree instead of an acyclic directed graph. As an advantage some subset checks necessary for reducing the graph during search can be avoided. --- paper_title: Hypergraph Transversal Computation and Related Problems in Logic and AI paper_content: Generating minimal transversals of a hypergraph is an important problem which has many applications in Computer Science.
In the present paper, we address this problem and its decisional variant, i.e., the recognition of the transversal hypergraph for another hypergraph. We survey some results on problems which are known to be related to computing the transversal hypergraph, where we focus on problems in propositional Logic and AI. Some of the results have been established already some time ago, and were announced but their derivation was not widely disseminated. We then address recent developments on the computational complexity of computing resp. recognizing the transversal hypergraph. The precise complexity of these problems is not known to date, and is in fact open for more than 20 years now. --- paper_title: Queries and Concept Learning paper_content: We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of prepositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling. --- paper_title: An efficient incremental algorithm for generating all maximal independent sets in hypergraphs of bounded dimension paper_content: We show that for hypergraphs of bounded edge size, the problem of extending a given list of maximal independent sets is NC-reducible to the computation of an arbitrary maximal independent set for an induced sub-hypergraph. The latter problem is known to be in RNC. In particular, our reduction yields an incremental RNC dualization algorithm for hypergraphs of bounded edge size, a problem previously known to be solvable in polynomial incremental time. We also give a similar parallel algorithm for the dualization problem on the product of arbitrary lattices which have a bounded number of immediate predecessors for each element. --- paper_title: Almost all monotone Boolean functions are polynomially learnable using membership queries paper_content: We consider exact learning or identification of monotone Boolean functions by only using membership queries. It is shown that almost all monotone Boolean functions are polynomially identifiable in the input number of variables as well as the output being the sum of the sizes of the CNF and DNF representations. --- paper_title: The computation of hitting sets: review and new algorithms paper_content: In model-based diagnosis or other research fields, the hitting sets of a set cluster are usually used. In this paper we introduce some algorithms, including the new BHS-tree and Boolean algebraic algorithms. In the BHS-tree algorithm, a binary-tree is used for the computation of hitting sets, and in the Boolean algebraic algorithm, components are represented by Boolean variables. It runs just for one time to catch the minimal hitting sets. We implemented the algorithms and present empirical results in order to show their superiority over other algorithms for computing hitting sets. --- paper_title: A Correction to the Algorithm in Reiter's Theory of Diagnosis paper_content: Reiter [3] has developed a general theory of diagnosis based on first principles. 
His algorithm computes all diagnoses which explain the differences between the predicted and observed behavior of a given system. Unfortunately, Reiter's description of the algorithm is incorrect in that some diagnoses can be missed under certain conditions. This note presents a revised algorithm and a proof of its correctness. --- paper_title: Polynomial Time Recognition Of 2-Monotonic Positive Boolean Functions Given By An Oracle paper_content: We consider the problem of identifying an unknown Boolean function $f$ by asking an oracle the functional values $f(a)$ for a selected set of test vectors $a \in \{0,1\}^{n}$. Furthermore, we assume that $f$ is a positive (or monotone) function of $n$ variables. It is not yet known whether or not the whole task of generating test vectors and checking if the identification is completed can be carried out in polynomial time in $n$ and $m$, where $m=|\min T(f)| + |\max F(f)|$ and $\min T(f)$ (respectively, $\max F(f)$) denotes the set of minimal true (respectively, maximal false) vectors of $f$. To partially answer this question, we propose here two polynomial-time algorithms that, given an unknown positive function $f$ of $n$ variables, decide whether or not $f$ is 2-monotonic and, if $f$ is 2-monotonic, output both sets $\min T(f)$ and $\max F(f)$. The first algorithm uses $O(nm^{2} + n^{2}m)$ time and $O(nm)$ queries, while the second one uses $O(n^{3}m)$ time and $O(n^{3}m)$ queries. --- paper_title: A Fast and Simple Algorithm for Identifying 2-Monotonic Positive Boolean Functions paper_content: Consider the problem of identifying min T(f) and max F(f) of a positive (i.e., monotone) Boolean function f, by using membership queries only, where min T(f) (max F(f)) denotes the set of minimal true vectors (maximal false vectors) of f. Moreover, as the existence of a polynomial total time algorithm (i.e., polynomial time in the length of input and output) for this problem is still open, we consider here a restricted problem: given an unknown positive function f of n variables, decide whether f is 2-monotonic or not, and if f is 2-monotonic, output both min T(f) and max F(f). For this problem, we propose a simple algorithm, which is based on the concept of maximum latency, and we show that it uses O(n^2 m) time and O(n^2 m) queries, where m = |min T(f)| + |max F(f)|. This answers affirmatively the conjecture raised in Boros et al., Lecture Notes in Comput. Sci. 557 (1991), 104–115, and Boros et al., SIAM J. Comput. 26 (1997), 93–109, and is an improvement over the two algorithms discussed therein: one uses O(n^3 m) time and O(n^3 m) queries, and the other uses O(nm^2 + n^2 m) time and O(nm) queries. --- paper_title: Data mining, hypergraph transversals, and machine learning paper_content: Several data mining problems can be formulated as problems of finding maximally specific sentences that are interesting in a database. We first show that this problem has a close relationship with the hypergraph transversal problem. We then analyze two algorithms that have been previously used in data mining, proving upper bounds on their complexity. The first algorithm is useful when the maximally specific interesting sentences are “small”. We show that this algorithm can also be used to efficiently solve a special case of the hypergraph transversal problem, improving on previous results. The second algorithm utilizes a subroutine for hypergraph transversals, and is applicable in more general situations, with complexity close to a lower bound for the problem.
We also relate these problems to the model of exact learning in computational learning theory, and use the correspondence to derive some corollaries. --- paper_title: Algorithms for inferring functional dependencies from relations paper_content: The dependency inference problem is to find a cover of the set of functional dependencies that hold in a given relation. The problem has applications in relational database design, in query optimization, and in artificial intelligence. The problem is exponential in the number of attributes. We develop two algorithms with better best case behavior than the simple one. One algorithm reduces the problem to computing the transversal of a hypergraph. The other is based on repeatedly sorting the relation with respect to a set of attributes. --- paper_title: New results on monotone dualization and generating hypergraph transversals paper_content: We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(χ(n) · log n) suitably guessed bits, where χ(n) is given by χ(n)^{χ(n)} = n; note that χ(n) = o(log n). This result sheds new light on the complexity of this important problem. --- paper_title: The Maximum Latency and Identification of Positive Boolean Functions paper_content: Consider the problem of identifying min T(f) and max F(f) of a positive (i.e., monotone) Boolean function f, by using membership queries only, where min T(f) (max F(f)) denotes the set of minimal true vectors (maximal false vectors) of f. It is known that an incrementally polynomial algorithm exists if and only if there is a polynomial time algorithm to check the existence of an unknown vector for given sets MT ⊆ min T(f) and MF ⊆ max F(f). Unfortunately, however, the complexity of this problem is still unknown. To answer this question partially, we introduce in this paper a measure for the difficulty of finding an unknown vector, which is called the maximum latency. If the maximum latency is constant, then an unknown vector can be found in polynomial time and there is an incrementally polynomial algorithm for identification. Several subclasses of positive functions are shown to have constant maximum latency, e.g., 2-monotonic positive functions, Δ-partial positive threshold functions and matroid functions, while the class of general positive functions has maximum latency not smaller than [n/4]+1 and the class of positive k-DNF functions has Ω(√n) maximum latency. --- paper_title: Generating Maximal Independent Sets for Hypergraphs with Bounded Edge-Intersections paper_content: Given a finite set V, and integers k > 1 and r > 0, denote by A(k, r) the class of hypergraphs A ⊆ 2^V with (k, r)-bounded intersections, i.e. in which the intersection of any k distinct hyperedges has size at most r. We consider the problem MIS(A,I): given a hypergraph A and a subfamily I ⊆ I(A) of its maximal independent sets (MIS) I(A), either extend this subfamily by constructing a new MIS I ∈ I(A) \ I or prove that there are no more MIS, that is I = I(A).
We show that for hypergraphs A ∈ A(k,r) with k + r ≤ const, problem MIS(A,I) is NC-reducible to problem MIS(A', ∅) of generating a single MIS for a partial subhypergraph A' of A. In particular, for this class of hypergraphs, we get an incremental polynomial algorithm for generating all MIS. Furthermore, combining this result with the currently known algorithms for finding a single maximal independent set of a hypergraph, we obtain efficient parallel algorithms for incrementally generating all MIS for hypergraphs in the classes A(1, c), A(c, 0), and A(2,1), where c is a constant. We also show that, for A ∈ A(k,r), where k + r < const, the problem of generating all MIS of A can be solved in incremental polynomial-time with space polynomial only in the size of A. --- paper_title: Exact Learning of Subclasses of CDNF Formulas with Membership Queries paper_content: We consider the exact learnability of subclasses of Boolean formulas from membership queries alone. We show how to combine known learning algorithms that use membership and equivalence queries to obtain new learning results only with membership queries. In particular we show the exact learnability of read-k monotone formulas, Sat-k O(log n)-CDNF, and O(√(log n))-size CDNF from membership queries only. --- paper_title: A theory of the learnable paper_content: Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. In this paper we regard learning as the phenomenon of knowledge acquisition in the absence of explicit programming. We give a precise methodology for studying this phenomenon from a computational viewpoint. It consists of choosing an appropriate information gathering mechanism, the learning protocol, and exploring the class of concepts that can be learnt using it in a reasonable (polynomial) number of steps. We find that inherent algorithmic complexity appears to set serious limits to the range of concepts that can be so learnt. The methodology and results suggest concrete principles for designing realistic learning systems. --- paper_title: Complexity of Identification and Dualization of Positive Boolean Functions paper_content: We consider in this paper the problem of identifying min T(ƒ) and max F(ƒ) of a positive (i.e., monotone) Boolean function ƒ, by using membership queries only, where min T(ƒ) (max F(ƒ)) denotes the set of minimal true vectors (maximal false vectors) of ƒ. It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions: • An incrementally polynomial algorithm to dualize ƒ; • An incrementally polynomial algorithm to self-dualize ƒ; • A polynomial algorithm to decide if ƒ and g are mutually dual; • A polynomial algorithm to decide if ƒ is self-dual; • A polynomial algorithm to decide if ƒ is saturated; • A polynomial algorithm in |min T(ƒ)| + |max F(ƒ)| to identify min T(ƒ) only. Some of these are already well known open problems in the respective fields. Other related topics, including various equivalent problems encountered in hypergraph theory and theory of coteries (used in distributed systems), are also discussed. --- paper_title: Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries paper_content: We consider exact learning monotone CNF formulas in which each variable appears at most some constant k times (“read-k” monotone CNF).
Let f : {0,1}^n → {0,1} be expressible as a read-k monotone CNF formula for some natural number k. We give an incremental output polynomial time algorithm for exact learning both the read-k CNF and (not necessarily read restricted) DNF descriptions of f. The algorithm's only method of obtaining information about f is through membership queries, i.e., by inquiring about the value f(x) for points x ∈ {0,1}^n. The algorithm yields an incremental polynomial output time solution to the (read-k) monotone CNF/DNF dualization problem. The unrestricted versions remain open problems of importance. --- paper_title: On one criterion of the optimality of an algorithm for evaluating monotonic Boolean functions paper_content: A criterion of the optimality of an algorithm for evaluating monotonic Boolean functions, which is different from the Shannon criterion, is considered and its practical significance is proved. Upper and lower estimates are obtained for the efficiency of the algorithm for evaluating monotonic Boolean functions which is optimal with respect to the criterion which has been introduced. An algorithm is constructed for evaluating the class of monotonic Boolean functions which are generated by incompatible systems of linear inequalities. This algorithm is optimal with respect to the criterion introduced in this paper, the Shannon criterion, and a number of other criteria subject to certain additional conditions. --- paper_title: An Efficient Implementation of a Quasi-polynomial Algorithm for Generating Hypergraph Transversals paper_content: Given a finite set V, and a hypergraph H ⊆ 2^V, the hypergraph transversal problem calls for enumerating all minimal hitting sets (transversals) for H. This problem plays an important role in practical applications as many other problems were shown to be polynomially equivalent to it. Fredman and Khachiyan (1996) gave an incremental quasi-polynomial time algorithm for solving the hypergraph transversal problem (9). In this paper, we present an efficient implementation of this algorithm. While we show that our implementation achieves the same bound on the running time as in (9), practical experience with this implementation shows that it can be substantially faster. We also show that a slight modification of the algorithm in (9) can be used to give a stronger bound on the running time. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of size n can be tested in n^{o(log n)} time. --- paper_title: Detailed Description of an Algorithm for Enumeration of Maximal Frequent Sets with Irredundant Dualization paper_content: We describe an implementation of an algorithm for enumerating all maximal frequent sets using irredundant dualization, which is an improved version of that of Gunopulos et al. The algorithm of Gunopulos et al. solves many dualization problems, and takes long computation time. We interleave dualization with the main algorithm, and reduce the computation time spent on dualization to roughly that of a single dualization. This also reduces the space complexity. Moreover, we accelerate the computation by using sparseness. --- paper_title: An Efficient Algorithm for the Transversal Hypergraph Generation paper_content: The Transversal Hypergraph Generation is the problem of generating, given a hypergraph, the set of its minimal transversals, i.e., the hypergraph whose hyperedges are the minimal hitting sets of the given one.
The purpose of this paper is to present an efficient and practical algorithm for solving this problem. We show that the proposed algorithm operates in a way that rules out regeneration and, thus, its memory requirements are polynomially bounded to the size of the input hypergraph. Although no time bound for the algorithm is given, experimental evaluation and comparison with other approaches have shown that it behaves well in practice and it can successfully handle large problem instances. --- paper_title: Evaluation of an Algorithm for the Transversal Hypergraph Problem paper_content: The Transversal Hypergraph Problem is the problem of computing, given a hypergraph, the set of its minimal transversals, i.e. the hypergraph whose hyperedges are all minimal hitting sets of the given one. This problem turns out to be central in various fields of Computer Science. We present and experimentally evaluate a heuristic algorithm for the problem, which seems able to handle large instances and also possesses some nice features especially desirable in problems with large output such as the Transversal Hypergraph Problem. --- paper_title: Minimizing the Average Query Complexity of Learning Monotone Boolean Functions paper_content: This paper addresses the problem of completely reconstructing deterministic monotone Boolean functions via membership queries. The minimum average query complexity is guaranteed via recursion, where partially ordered sets (posets) make up the overlapping subproblems. For problems with up to 4 variables, the posets' optimality conditions are summarized in the form of an evaluative criterion. The evaluative criterion extends then computational feasibility to problems involving up to about 20 variables. A frameworkfor unbiased average case comparison of monotone Boolean function inference algorithms is developed using unequal probability sampling. The unbiased empirical results show that an implementation of the subroutine considered as a standard in the literature performs almost twice as many queries as the evaluative criterion on the average. It should also be noted that the first algorithm ever designed for this problem performed consistently within two percentage points of the evaluative criterion. As such, it prevails, by far, as the most efficient of the many preexisting algorithms. --- paper_title: The computation of hitting sets: review and new algorithms paper_content: In model-based diagnosis or other research fields, the hitting sets of a set cluster are usually used. In this paper we introduce some algorithms, including the new BHS-tree and Boolean algebraic algorithms. In the BHS-tree algorithm, a binary-tree is used for the computation of hitting sets, and in the Boolean algebraic algorithm, components are represented by Boolean variables. It runs just for one time to catch the minimal hitting sets. We implemented the algorithms and present empirical results in order to show their superiority over other algorithms for computing hitting sets. --- paper_title: A fast algorithm for computing hypergraph transversals and its application in mining emerging patterns paper_content: Computing the minimal transversals of a hypergraph is an important problem in computer science that has significant applications in data mining. We present a new algorithm for computing hypergraph transversals and highlight their close connection to an important class of patterns known as emerging patterns. 
We evaluate our technique on a number of large datasets and show that it outperforms previous approaches by a factor of 9-29 times. --- paper_title: A Practical Fast Algorithm for Enumerating Minimal Set Coverings paper_content: For a set family F defined on a grand set E, a subset of F covering all the elements of E is called a set covering. The enumeration problem for minimal set coverings is equivalent to the enumeration problems of hypergraph dualization, minimal hitting sets, and many other problems, and has been studied intensively. However, the existence of an output polynomial algorithm for this problem is still open, although an algorithm has been proposed whose average computation time on randomly generated instances is output polynomial in computational experiments. In this paper, we propose a practical fast algorithm obtained by improving this algorithm, and show by computational experiments that the average computation time for randomly generated instances is O(|E|) per output. --- paper_title: Bidual Horn Functions and Extensions paper_content: Partially defined Boolean functions (pdBfs) (T, F), where T and F are disjoint sets of true and false vectors, generalize total Boolean functions by allowing that the function values on some input vectors are unknown. The main issue with pdBfs is the extension problem, which is deciding, given a pdBf, whether it is interpolated by a function f from a given class of total Boolean functions, and computing a formula for f. In this paper, we consider extensions of bidual Horn functions, which are the Boolean functions f such that both f and its dual function f^d are Horn. They are intuitively appealing for considering extensions because they give a symmetric role to positive and negative information (i.e., true and false vectors) of a pdBf, which is not possible with arbitrary Horn functions. Bidual Horn functions turn out to constitute an intermediate class between positive and Horn functions which retains several benign properties of positive functions. Besides the extension problem, we study recognition of bidual Horn functions from Boolean formulas and properties of normal form expressions. We show that finding a bidual Horn extension and checking biduality of a Horn DNF is feasible in polynomial time, and that the latter is intractable from arbitrary formulas. We also give characterizations of shortest DNF expressions of a bidual Horn function f and show how to compute such an expression from a Horn DNF for f in polynomial time; for arbitrary Horn functions, this is NP-hard. Furthermore, we show that a polynomial total algorithm for dualizing a bidual Horn function exists if and only if there is such an algorithm for dualizing a positive function. --- paper_title: Translating between Horn Representations and their Characteristic Models paper_content: Characteristic models are an alternative, model based, representation for Horn expressions. It has been shown that these two representations are incomparable and each has its advantages over the other. It is therefore natural to ask what is the cost of translating, back and forth, between these representations. Interestingly, the same translation questions arise in database theory, where it has applications to the design of relational databases. This paper studies the computational complexity of these problems. Our main result is that the two translation problems are equivalent under polynomial reductions, and that they are equivalent to the corresponding decision problem.
Namely, translating is equivalent to deciding whether a given set of models is the set of characteristic models for a given Horn expression. ::: ::: We also relate these problems to the hypergraph transversal problem, a well known problem which is related to other applications in AI and for which no polynomial time algorithm is known. It is shown that in general our translation problems are at least as hard as the hypergraph transversal problem, and in a special case they are equivalent to it. --- paper_title: An Efficient Implementation of a Quasi-polynomial Algorithm for Generating Hypergraph Transversals paper_content: Given a finite set V , and a hypergraph H⊆ 2 V , the hyper- graph transversal problem calls for enumerating all minimal hitting sets (transversals) for H. This problem plays an important role in practi- cal applications as many other problems were shown to be polynomially equivalent to it. Fredman and Khachiyan (1996) gave an incremental quasi-polynomial time algorithm for solving the hypergraph transversal problem (9). In this paper, we present an efficient implementation of this algorithm. While we show that our implementation achieves the same bound on the running time as in (9), practical experience with this im- plementation shows that it can be substantially faster. We also show that a slight modification of the algorithm in (9) can be used to give a stronger bound on the running time. --- paper_title: On the Complexity of Dualization of Monotone Disjunctive Normal Forms paper_content: We show that the duality of a pair of monotone disjunctive normal forms of sizencan be tested inno(logn)time. --- paper_title: A New Algorithm for the Hypergraph Transversal Problem paper_content: We consider the problem of finding all minimal transversals of a hypergraph ${\mathcal H}\subseteq 2^V$, given by an explicit list of its hyperedges. We give a new decomposition technique for solving the problem with the following advantages: (i) Global parallelism: for certain classes of hypergraphs, e.g. hypergraphs of bounded edge size, and any given integer k, the algorithm outputs k minimal transversals of ${\mathcal H}$ in time bounded by ${\rm polylog}(|V|,|{\mathcal H}|,k)$ assuming ${\rm poly}(|V|,|{\mathcal H}|,k)$ number of processors. Except for the case of graphs, none of the previously known algorithms for solving the same problem exhibit this feature. (ii) Using this technique, we also obtain new results on the complexity of generating minimal transversals for new classes of hypergraphs, namely hypergraphs of bounded dual-conformality, and hypergraphs in which every edge intersects every minimal transversal in a bounded number of vertices. --- paper_title: Detailed Description of an Algorithm for Enumeration of Maximal Frequent Sets with Irredundant Dualization paper_content: We describe an implementation of an algorithm for enumerating all maximal frequent sets using irredundant dualization, which is an improved version of that of Gunopulos et al. The algorithm of Gunopulos et al. solves many dualization problems, and takes long computation time. We interleaves dualization with the main algorithm, and reduce the computation time for dualization by as long as one dualization. This also reduces the space complexity. Moreover, we accelerate the computation by using sparseness. 
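Several of the entries above (the Fredman–Khachiyan implementation, the transversal-hypergraph generators, and the irredundant-dualization enumerator) revolve around one primitive: maintaining the minimal transversals (minimal hitting sets) of a hypergraph as hyperedges are taken into account one by one. The sketch below is a generic Berge-style baseline written in Python from the textbook description of that primitive; it is not taken from any of the cited implementations, and it is worst-case exponential, so it only serves to make the object being enumerated concrete.

```python
def minimize(families):
    """Keep only the inclusion-minimal sets of a family."""
    families = sorted(families, key=len)
    minimal = []
    for s in families:
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

def minimal_transversals(hyperedges):
    """Berge-style sequential computation of all minimal hitting sets.

    hyperedges: an iterable of iterables of hashable vertices.
    Returns a list of frozensets, each hitting every hyperedge and minimal
    with that property.  Worst-case exponential; a readable baseline only.
    """
    transversals = [frozenset()]
    for edge in hyperedges:
        edge = frozenset(edge)
        extended = []
        for t in transversals:
            if t & edge:                         # t already hits the new edge
                extended.append(t)
            else:                                # otherwise branch on its vertices
                extended.extend(t | {v} for v in edge)
        transversals = minimize(extended)
    return transversals

# Edges {1,2} and {2,3}: the minimal transversals are {2} and {1,3}.
print(minimal_transversals([[1, 2], [2, 3]]))
```

The output-sensitive and quasi-polynomial methods discussed in the surrounding entries can be read, roughly, as increasingly careful replacements for the brute-force minimization step in this sketch.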
--- paper_title: Parameterized enumeration, transversals, and imperfect phylogeny reconstruction paper_content: We study parameterized enumeration problems where we are interested in all solutions of limited size rather than just some solution of minimum cardinality. (Actually, we have to enumerate the inclusion-minimal solutions in order to get fixed-parameter tractable (FPT) results.) Two novel concepts are the notion of a full kernel that contains all small solutions and implicit enumeration of solutions in form of compressed descriptions. In particular, we study combinatorial and computational bounds for the transversal hypergraph (vertex covers in graphs is a special case), restricted to hyperedges with at most k elements. As an example, we apply the results and further special-purpose techniques to almost-perfect phylogeny reconstruction, a problem in computational biology. --- paper_title: Evaluation of an Algorithm for the Transversal Hypergraph Problem paper_content: The Transversal Hypergraph Problem is the problem of computing, given a hypergraph, the set of its minimal transversals, i.e. the hypergraph whose hyperedges are all minimal hitting sets of the given one. This problem turns out to be central in various fields of Computer Science. We present and experimentally evaluate a heuristic algorithm for the problem, which seems able to handle large instances and also possesses some nice features especially desirable in problems with large output such as the Transversal Hypergraph Problem. --- paper_title: Minimizing the Average Query Complexity of Learning Monotone Boolean Functions paper_content: This paper addresses the problem of completely reconstructing deterministic monotone Boolean functions via membership queries. The minimum average query complexity is guaranteed via recursion, where partially ordered sets (posets) make up the overlapping subproblems. For problems with up to 4 variables, the posets' optimality conditions are summarized in the form of an evaluative criterion. The evaluative criterion extends then computational feasibility to problems involving up to about 20 variables. A frameworkfor unbiased average case comparison of monotone Boolean function inference algorithms is developed using unequal probability sampling. The unbiased empirical results show that an implementation of the subroutine considered as a standard in the literature performs almost twice as many queries as the evaluative criterion on the average. It should also be noted that the first algorithm ever designed for this problem performed consistently within two percentage points of the evaluative criterion. As such, it prevails, by far, as the most efficient of the many preexisting algorithms. --- paper_title: The computation of hitting sets: review and new algorithms paper_content: In model-based diagnosis or other research fields, the hitting sets of a set cluster are usually used. In this paper we introduce some algorithms, including the new BHS-tree and Boolean algebraic algorithms. In the BHS-tree algorithm, a binary-tree is used for the computation of hitting sets, and in the Boolean algebraic algorithm, components are represented by Boolean variables. It runs just for one time to catch the minimal hitting sets. We implemented the algorithms and present empirical results in order to show their superiority over other algorithms for computing hitting sets. 
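The query-complexity and hitting-set entries directly above all work in the membership-oracle model: the only access to the monotone Boolean function (equivalently, to the family of minimal hitting sets it encodes) is evaluating it at chosen points. The following Python fragment is a minimal illustration of that model, under the stated assumption that the oracle f is monotone; it is the standard greedy descent to one minimal true vector, not a reconstruction of any of the cited algorithms.

```python
def minimal_true_vector(f, n):
    """Greedy descent to one minimal true point of a monotone function,
    using membership queries only (at most n + 1 of them).

    f: oracle mapping a tuple of n bits in {0, 1} to True/False, assumed monotone.
    Returns None when even the all-ones vector is false (so f is identically false).
    """
    x = [1] * n
    if not f(tuple(x)):
        return None
    for i in range(n):
        x[i] = 0
        if not f(tuple(x)):          # this bit cannot be dropped
            x[i] = 1
    return tuple(x)

# Hypothetical oracle: f(x) = (x0 and x1) or x2, a monotone function.
f = lambda x: bool((x[0] and x[1]) or x[2])
print(minimal_true_vector(f, 3))     # (0, 0, 1) with this left-to-right order
```

Identification procedures of the kind analyzed above interleave many such descents (and the symmetric ascents to maximal false vectors) with a dualization step that proposes where to query next.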
--- paper_title: New Algorithms for Enumerating All Maximal Cliques paper_content: In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G=(V,E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n^2) space and the other runs with O(Δ^4) time delay and in O(n+m) space, where Δ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n × n matrices, and the latter one requires O(nm) time as a preprocessing. --- paper_title: A fast algorithm for computing hypergraph transversals and its application in mining emerging patterns paper_content: Computing the minimal transversals of a hypergraph is an important problem in computer science that has significant applications in data mining. We present a new algorithm for computing hypergraph transversals and highlight their close connection to an important class of patterns known as emerging patterns. We evaluate our technique on a number of large datasets and show that it outperforms previous approaches by a factor of 9-29 times. ---
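The clique-enumeration entry above asks for all maximal cliques with bounded delay. For orientation only, here is the classical Bron–Kerbosch recursion in Python, without the pivoting or matrix-multiplication refinements behind the delay bounds quoted in that abstract; it is a generic reference point, not the paper's algorithm.

```python
def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    """Enumerate all maximal cliques of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours.
    Yields each maximal clique as a frozenset.  Basic variant without pivoting,
    shown only to make the enumeration task concrete.
    """
    if p is None:
        p = frozenset(adj)
    if not p and not x:
        yield r                                  # r cannot be extended: maximal
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p = p - {v}                              # v has been fully explored
        x = x | {v}

# Triangle {0,1,2} plus a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(map(sorted, bron_kerbosch(adj))))   # [[0, 1, 2], [2, 3]]
```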
Title: COMPUTATIONAL ASPECTS OF MONOTONE DUALIZATION: A BRIEF SURVEY Section 1: Introduction Description 1: Introduce the monotone dualization problem, its significance, historical context, and application areas. Section 2: Preliminaries Description 2: Provide basic definitions and concepts related to Boolean functions, CNF, DNF, and their properties necessary for understanding monotone dualization. Section 3: Simple Algorithms Description 3: Discuss basic algorithms for dualization, including clause-based and variable-based decomposition methods, and their limitations. Section 4: Fredman and Khachiyan's Results Description 4: Detail the seminal results by Fredman and Khachiyan, their algorithms A and B, and the theoretical implications of their work on the complexity of the dualization problem. Section 5: Follow-Up Work Description 5: Summarize the subsequent research influenced by Fredman and Khachiyan's work, focusing on different computational aspects, limited nondeterminism, and probabilistic approaches. Section 6: Generalizations to Posets Description 6: Explain extensions of the dualization problem to partially ordered sets (posets) and the associated computational challenges and results. Section 7: Other Algorithms Description 7: Cover various alternative algorithms for dualization, including those based on learning theory, partitioning methods, and parallel computation. Section 8: Implementations and Experiments Description 8: Review practical implementations and experimental evaluations of dualization algorithms, discussing performance aspects and comparative studies. Section 9: Discussion and Conclusion Description 9: Conclude with a discussion on open problems, future research directions, and a summary of the state of the art in the computational aspects of monotone dualization.
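For reference alongside the outline above, the decision problem at its core can be spelled out by brute force: monotone functions f and g, given by their minimal true sets, are mutually dual exactly when g(X) agrees with the negation of f on the complementary set, for every X. The Python check below does this over all 2^n subsets, so it is only a definitional sketch; the point of the quasi-polynomial n^{o(log n)} test attributed to Fredman and Khachiyan in the references is precisely to avoid this exhaustive loop.

```python
from itertools import chain, combinations

def dual_pair(A, B, n):
    """Brute-force test that two monotone Boolean functions are mutually dual.

    A, B: collections of sets of variable indices drawn from range(n), read as
    the minimal true sets (prime implicants) of monotone functions f and g.
    Checks the definition g(X) == not f(complement of X) on every subset X,
    so it runs in time 2^n and is usable only for tiny n.
    """
    V = set(range(n))
    f = lambda X: any(a <= X for a in A)
    g = lambda X: any(b <= X for b in B)
    all_subsets = chain.from_iterable(combinations(V, k) for k in range(n + 1))
    return all(g(set(X)) == (not f(V - set(X))) for X in all_subsets)

A = [{0, 1}, {2}]           # f   = x0 x1 + x2
B = [{0, 2}, {1, 2}]        # f^d = (x0 + x1) x2 = x0 x2 + x1 x2
print(dual_pair(A, B, 3))   # True
print(dual_pair(A, A, 3))   # False: f is not self-dual
```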
Tools for simulating humanoid robot dynamics: A survey based on user feedback
11
--- paper_title: Extensive analysis of Linear Complementarity Problem (LCP) solver performance on randomly generated rigid body contact problems paper_content: The Linear Complementarity Problem (LCP) is a key problem in robot dynamics, optimization, and simulation. Common experience with dynamic robotic simulations suggests that the numerical robustness of the LCP solver often determines simulation usability: if the solver fails to find a solution or finds a solution with significant residual error, interpenetration can result, the simulation can gain energy, or both. This paper undertakes the first comprehensive evaluation of LCP solvers across the space of multi-rigid body contact problems. We evaluate the performance of these solvers along the dimensions of solubility, solution quality, and running time. --- paper_title: MIRA - middleware for robotic applications paper_content: In this paper, we present MIRA, a new middleware for robotic applications. It is designed for use in real-world applications and for research and teaching. In comparison to many other existing middlewares, MIRA employs novel techniques for communication that are described in this paper. Moreover, we present benchmarks that analyze the performance of the most commonly used middlewares ROS, Yarp, LCM, Player, Urbi, and MOOS. Using these benchmarks, we can show that MIRA outperforms the other middlewares in terms of latency and computation time. --- paper_title: An evaluation of methods for modeling contact in multibody simulation paper_content: Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation. --- paper_title: Three-dimensional impact: energy-based modeling of tangential compliance paper_content: Impact is indispensable in robotic manipulation tasks in which objects and/or manipulators move at high speeds. Applied research using impact has been hindered by underdeveloped computational foundations for rigid-body collision. This paper studies the computation of tangential impulse as two rigid bodies in the space collide at a point with both tangential compliance and friction. It extends Stronge's spring-based planar contact structure to three dimensions by modeling the contact point as a massless particle able to move tangentially on one body while connected to an infinitesimal region on the other body via three orthogonal springs. Slip or stick is indicated by whether the particle is still or moving. Impact analysis is carried out using normal impulse rather than time as the only independent variable, unlike in previous work on tangential compliance. This is due to the ability to update the energies stored in the three springs. Collision is governed by a system of differential equations that are solvable numerically. Modularity of the impact model makes it easy to be integrated into a multibody system, with one copy at each contact, in combination with a model for multiple impacts that governs normal impulses at different contacts. 
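The LCP-solver study at the head of this reference block treats contact simulation as repeatedly solving LCP(M, q): find z ≥ 0 such that w = Mz + q ≥ 0 and zᵀw = 0, where z collects contact impulses and w the resulting separating velocities. As a point of reference only, the NumPy sketch below implements projected Gauss–Seidel, one of the simplest iterative schemes used on such problems; the matrix and vector in the example are invented for illustration, and none of this is code from the cited evaluation.

```python
import numpy as np

def pgs_lcp(M, q, iterations=200, tol=1e-10):
    """Projected Gauss-Seidel for the Linear Complementarity Problem
    w = M z + q,  z >= 0,  w >= 0,  z^T w = 0.

    A deliberately simple baseline; it converges for suitable M (e.g. symmetric
    positive definite) but carries no guarantees for general contact problems.
    """
    n = len(q)
    z = np.zeros(n)
    for _ in range(iterations):
        delta = 0.0
        for i in range(n):
            r = M[i] @ z + q[i]                    # residual of row i
            z_new = max(0.0, z[i] - r / M[i, i])   # exact row solve, projected to z_i >= 0
            delta = max(delta, abs(z_new - z[i]))
            z[i] = z_new
        if delta < tol:
            break
    return z

# Tiny made-up frictionless two-contact example (M symmetric positive definite).
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 0.3])
z = pgs_lcp(M, q)
print(z, M @ z + q)   # z = [0.5, 0], w = [0, 0.55]: nonnegative and complementary
```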
--- paper_title: Control of elastic soft robots based on real-time finite element method paper_content: In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone. --- paper_title: Realistic Haptic Rendering of Interacting Deformable Objects in Virtual Environments paper_content: A new computer haptics algorithm to be used in general interactive manipulations of deformable virtual objects is presented. In multimodal interactive simulations, haptic feedback computation often comes from contact forces. Subsequently, the fidelity of haptic rendering depends significantly on contact space modeling. Contact and friction laws between deformable models are often simplified in up to date methods. They do not allow a "realistic" rendering of the subtleties of contact space physical phenomena (such as slip and stick effects due to friction or mechanical coupling between contacts). In this paper, we use Signorini's contact law and Coulomb's friction law as a computer haptics basis. Real-time performance is made possible thanks to a linearization of the behavior in the contact space, formulated as the so-called Delassus operator, and iteratively solved by a Gauss-Seidel type algorithm. Dynamic deformation uses corotational global formulation to obtain the Delassus operator in which the mass and stiffness ratio are dissociated from the simulation time step. This last point is crucial to keep stable haptic feedback. This global approach has been packaged, implemented, and tested. Stable and realistic 6D haptic feedback is demonstrated through a clipping task experiment. --- paper_title: Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo paper_content: We describe a full-featured simulation pipeline implemented in the MuJoCo physics engine. It includes multi-joint dynamics in generalized coordinates, holonomic constraints, dry joint friction, joint and tendon limits, frictionless and frictional contacts that can have sliding, torsional and rolling friction. The forward dynamics of a 27-dof humanoid with 10 contacts are evaluated in 0.1 msec. Since the simulation is stable at 10 msec timesteps, it can run 100 times faster than real-time on a single core of a desktop processor. Furthermore the entire simulation pipeline can be inverted analytically, an order-ofmagnitude faster than the corresponding forward dynamics. 
We soften all constraints, in a way that avoids instabilities and unrealistic penetrations associated with earlier spring-damper methods and yet is sufficient to allow inversion. Constraints are imposed via impulses, using an extended version of the velocity-stepping approach. For holonomic constraints the extension involves a soft version of the Gauss principle. For all other constraints we extend our earlier work on complementarity-free contact dynamics – which were already known to be invertible via an iterative solver – and develop a new formulation allowing analytical inversion. --- paper_title: Robot and Multibody Dynamics: Analysis and Algorithms paper_content: Part I Basics: Serial Chain Dynamics.- Spatial vectors.- Single rigid body dynamics.- Differential kinematics for a serial-chain system.- The mass matrix.- Equations of motion for a serial chain system.- Articulated body models for serial chains.- Operator factorization and inversion of the mass matrix.- Forward dynamics.- Part II General Multibody Systems.- Tree topology systems.- The operational space inertia.- Closed chain system dynamics.- Multi-arm manipulators.- Systems with hinge flexibility.- Systems with link flexibility.- Part III Advanced Topics and Applications.- Under-actuated systems.- Free-flying space manipulators.- Mass matrix sensitivities.- Linearized dynamics models.- Sensitivity of innovations factors.- Diagonalized Lagrangian dynamics.- Overview of optimal linear estimation theory. --- paper_title: Extensive analysis of Linear Complementarity Problem (LCP) solver performance on randomly generated rigid body contact problems paper_content: The Linear Complementarity Problem (LCP) is a key problem in robot dynamics, optimization, and simulation. Common experience with dynamic robotic simulations suggests that the numerical robustness of the LCP solver often determines simulation usability: if the solver fails to find a solution or finds a solution with significant residual error, interpenetration can result, the simulation can gain energy, or both. This paper undertakes the first comprehensive evaluation of LCP solvers across the space of multi-rigid body contact problems. We evaluate the performance of these solvers along the dimensions of solubility, solution quality, and running time. --- paper_title: A convex, smooth and invertible contact model for trajectory optimization paper_content: Trajectory optimization is done most efficiently when an inverse dynamics model is available. Here we develop the first model of contact dynamics defined in both the forward and inverse directions. The contact impulse is the solution to a convex optimization problem: minimize kinetic energy in contact space subject to non-penetration and friction-cone constraints. We use a custom interior-point method to make the optimization problem unconstrained; this is key to defining the forward and inverse dynamics in a consistent way. The resulting model has a parameter which sets the amount of contact smoothing, facilitating continuation methods for optimization. We implemented the proposed contact solver in our new physics engine (MuJoCo). A full Newton step of trajectory optimization for a 3D walking gait takes only 160 msec, on a 12-core PC. --- paper_title: MuJoCo: A physics engine for model-based control paper_content: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms.
Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available. --- paper_title: An evaluation of methods for modeling contact in multibody simulation paper_content: Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation. --- paper_title: Extensive analysis of Linear Complementarity Problem (LCP) solver performance on randomly generated rigid body contact problems paper_content: The Linear Complementarity Problem (LCP) is a key problem in robot dynamics, optimization, and simulation. Common experience with dynamic robotic simulations suggests that the numerical robustness of the LCP solver often determines simulation usability: if the solver fails to find a solution or finds a solution with significant residual error, interpenetration can result, the simulation can gain energy, or both. This paper undertakes the first comprehensive evaluation of LCP solvers across the space of multi-rigid body contact problems. We evaluate the performance of these solvers along the dimensions of solubility, solution quality, and running time. --- paper_title: An opensource simulator for cognitive robotics research: The prototype of the icub humanoid robot simulator paper_content: This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform. --- paper_title: MuJoCo: A physics engine for model-based control paper_content: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. 
Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available. --- paper_title: Yarp Based Plugins for Gazebo Simulator paper_content: This paper presents a set of plugins for the Gazebo simulator that enables the interoperability between a robot, controlled using the YARP framework, and Gazebo itself. Gazebo is an open-source simulator that can handle different Dynamic Engines developed by the Open Source Robotics Foundation. Since our plugins conform with the YARP layer used on the real robot, applications written for our robots, COMAN and iCub, can be run on the simulator with no changes. Our plugins have two main components: a YARP interface with the same API as the real robot interface, and a Gazebo plugin which handles simulated joints, encoders, IMUs, force/torque sensors and synchronization. Different modules and tasks for COMAN and iCub have been developed using Gazebo and our plugins as a testbed before moving to the real robots. --- paper_title: Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo paper_content: We describe a full-featured simulation pipeline implemented in the MuJoCo physics engine. It includes multi-joint dynamics in generalized coordinates, holonomic constraints, dry joint friction, joint and tendon limits, frictionless and frictional contacts that can have sliding, torsional and rolling friction. The forward dynamics of a 27-dof humanoid with 10 contacts are evaluated in 0.1 msec. Since the simulation is stable at 10 msec timesteps, it can run 100 times faster than real-time on a single core of a desktop processor. Furthermore the entire simulation pipeline can be inverted analytically, an order-ofmagnitude faster than the corresponding forward dynamics. We soften all constraints, in a way that avoids instabilities and unrealistic penetrations associated with earlier spring-damper methods and yet is sufficient to allow inversion. Constraints are imposed via impulses, using an extended version of the velocitystepping approach. For holomonic constraints the extension involves a soft version of the Gauss principle. For all other constraints we extend our earlier work on complementarity-free contact dynamics – which were already known to be invertible via an iterative solver – and develop a new formulation allowing analytical inversion. 
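The "Convex and analytically-invertible dynamics" entry that closes the line above describes softening constraints and imposing them via impulses within a velocity-stepping scheme. As a loose illustration of that idea only (not the cited formulation), the Python sketch below computes a regularized impulse for a single frictionless contact; the mass matrix M, contact Jacobian row J, time step h and regularization reg are assumed inputs invented for the example.

```python
import numpy as np

def soft_contact_impulse(M, J, v, f_ext, h, reg=1e-4):
    """Illustrative frictionless contact impulse with a softened constraint.

    M     : (n, n) generalized mass matrix
    J     : (n,)   contact-normal Jacobian row (joint velocity -> normal velocity)
    v     : (n,)   current generalized velocity
    f_ext : (n,)   applied generalized force
    h     : time step
    reg   : softening term added to the contact-space inverse inertia
    """
    Minv_J = np.linalg.solve(M, J)              # M^-1 J^T for this single contact
    v_free = v + h * np.linalg.solve(M, f_ext)  # velocity after a step with no impulse
    A = J @ Minv_J + reg                        # softened contact-space operator
    v_n_free = J @ v_free                       # normal velocity of the free motion
    lam = max(0.0, -v_n_free / A)               # impulse only pushes, never pulls
    v_next = v_free + Minv_J * lam              # post-impulse generalized velocity
    return lam, v_next

# Tiny usage example: a 1-DOF point mass falling onto the ground.
M = np.array([[1.0]])
J = np.array([1.0])      # normal velocity equals the joint velocity
v = np.array([-1.0])     # approaching the contact
f = np.array([-9.81])    # gravity
lam, v_next = soft_contact_impulse(M, J, v, f, h=0.01)
print(lam, v_next)       # impulse > 0, normal velocity driven (approximately) to zero
```

The regularization term is what keeps the contact-space operator invertible even in degenerate configurations, which is the property the softened-constraint formulation above relies on.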
--- paper_title: Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo paper_content: We describe a full-featured simulation pipeline implemented in the MuJoCo physics engine. It includes multi-joint dynamics in generalized coordinates, holonomic constraints, dry joint friction, joint and tendon limits, frictionless and frictional contacts that can have sliding, torsional and rolling friction. The forward dynamics of a 27-dof humanoid with 10 contacts are evaluated in 0.1 msec. Since the simulation is stable at 10 msec timesteps, it can run 100 times faster than real-time on a single core of a desktop processor. Furthermore the entire simulation pipeline can be inverted analytically, an order-of-magnitude faster than the corresponding forward dynamics. We soften all constraints, in a way that avoids instabilities and unrealistic penetrations associated with earlier spring-damper methods and yet is sufficient to allow inversion. Constraints are imposed via impulses, using an extended version of the velocity-stepping approach. For holonomic constraints the extension involves a soft version of the Gauss principle. For all other constraints we extend our earlier work on complementarity-free contact dynamics – which were already known to be invertible via an iterative solver – and develop a new formulation allowing analytical inversion. --- paper_title: Robot and Multibody Dynamics: Analysis and Algorithms paper_content: Part I Basics: Serial Chain Dynamics.- Spatial vectors.- Single rigid body dynamics.- Differential kinematics for a serial-chain system.- The mass matrix.- Equations of motion for a serial chain system.- Articulated body models for serial chains.- Operator factorization and inversion of the mass matrix.- Forward dynamics.- Part II General Multibody Systems.- Tree topology systems.- The operational space inertia.- Closed chain system dynamics.- Multi-arm manipulators.- Systems with hinge flexibility.- Systems with link flexibility.- Part III Advanced Topics and Applications.- Under-actuated systems.- Free-flying space manipulators.- Mass matrix sensitivities.- Linearized dynamics models.- Sensitivity of innovations factors.- Diagonalized Lagrangian dynamics.- Overview of optimal linear estimation theory. --- paper_title: Extensive analysis of Linear Complementarity Problem (LCP) solver performance on randomly generated rigid body contact problems paper_content: The Linear Complementarity Problem (LCP) is a key problem in robot dynamics, optimization, and simulation. Common experience with dynamic robotic simulations suggests that the numerical robustness of the LCP solver often determines simulation usability: if the solver fails to find a solution or finds a solution with significant residual error, interpenetration can result, the simulation can gain energy, or both. This paper undertakes the first comprehensive evaluation of LCP solvers across the space of multi-rigid body contact problems. We evaluate the performance of these solvers along the dimensions of solubility, solution quality, and running time. --- paper_title: Tools for dynamics simulation of robots: a survey based on user feedback paper_content: The number of tools for dynamics simulation has grown in the last years. It is necessary for the robotics community to have elements to ponder which of the available tools is the best for their research. 
As a complement to an objective and quantitative comparison, difficult to obtain since not all the tools are open-source, an element of evaluation is user feedback. With this goal in mind, we created an online survey about the use of dynamical simulation in robotics. This paper reports the analysis of the participants' answers and a descriptive information fiche for the most relevant tools. We believe this report will be helpful for roboticists to choose the best simulation tool for their research. --- paper_title: A convex, smooth and invertible contact model for trajectory optimization paper_content: Trajectory optimization is done most efficiently when an inverse dynamics model is available. Here we develop the first model of contact dynamics defined in both the forward and inverse directions. The contact impulse is the solution to a convex optimization problem: minimize kinetic energy in contact space subject to non-penetration and friction-cone constraints. We use a custom interior-point method to make the optimization problem unconstrained; this is key to defining the forward and inverse dynamics in a consistent way. The resulting model has a parameter which sets the amount of contact smoothing, facilitating continuation methods for optimization. We implemented the proposed contact solver in our new physics engine (MuJoCo). A full Newton step of trajectory optimization for a 3D walking gait takes only 160 msec, on a 12-core PC. --- paper_title: MuJoCo: A physics engine for model-based control paper_content: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D humanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available. --- paper_title: An evaluation of methods for modeling contact in multibody simulation paper_content: Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation. 
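Several entries above (the LCP solver evaluation and the velocity-stepping contact models) reduce contact resolution to a linear complementarity problem over the contact impulses. The snippet below is a minimal sketch of one standard iterative method, projected Gauss-Seidel, for the frictionless LCP w = A z + b with z >= 0, w >= 0, z^T w = 0; A is assumed to have a strictly positive diagonal (for example a regularized Delassus operator J M^-1 J^T), and this is not the solver of any particular engine cited here.

```python
import numpy as np

def projected_gauss_seidel(A, b, iters=100, tol=1e-10):
    """Solve the LCP  w = A z + b,  z >= 0,  w >= 0,  z^T w = 0
    with projected Gauss-Seidel sweeps (A assumed to have a positive diagonal)."""
    n = len(b)
    z = np.zeros(n)
    for _ in range(iters):
        delta = 0.0
        for i in range(n):
            # Residual of row i excluding the diagonal term, then project onto z[i] >= 0.
            r = b[i] + A[i] @ z - A[i, i] * z[i]
            z_new = max(0.0, -r / A[i, i])
            delta = max(delta, abs(z_new - z[i]))
            z[i] = z_new
        if delta < tol:
            break
    return z

# Usage: two coupled contacts whose free-motion normal velocities b are approaching (b < 0).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([-1.0, -0.2])
z = projected_gauss_seidel(A, b)
print(z, A @ z + b)   # impulses z >= 0 and post-step normal velocities w >= 0
```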
--- paper_title: Three-dimensional impact: energy-based modeling of tangential compliance paper_content: Impact is indispensable in robotic manipulation tasks in which objects and/or manipulators move at high speeds. Applied research using impact has been hindered by underdeveloped computational foundations for rigid-body collision. This paper studies the computation of tangential impulse as two rigid bodies in the space collide at a point with both tangential compliance and friction. It extends Stronge's spring-based planar contact structure to three dimensions by modeling the contact point as a massless particle able to move tangentially on one body while connected to an infinitesimal region on the other body via three orthogonal springs. Slip or stick is indicated by whether the particle is still or moving. Impact analysis is carried out using normal impulse rather than time as the only independent variable, unlike in previous work on tangential compliance. This is due to the ability to update the energies stored in the three springs. Collision is governed by a system of differential equations that are solvable numerically. Modularity of the impact model makes it easy to be integrated into a multibody system, with one copy at each contact, in combination with a model for multiple impacts that governs normal impulses at different contacts. --- paper_title: Realistic Haptic Rendering of Interacting Deformable Objects in Virtual Environments paper_content: A new computer haptics algorithm to be used in general interactive manipulations of deformable virtual objects is presented. In multimodal interactive simulations, haptic feedback computation often comes from contact forces. Subsequently, the fidelity of haptic rendering depends significantly on contact space modeling. Contact and friction laws between deformable models are often simplified in up to date methods. They do not allow a "realistic" rendering of the subtleties of contact space physical phenomena (such as slip and stick effects due to friction or mechanical coupling between contacts). In this paper, we use Signorini's contact law and Coulomb's friction law as a computer haptics basis. Real-time performance is made possible thanks to a linearization of the behavior in the contact space, formulated as the so-called Delassus operator, and iteratively solved by a Gauss-Seidel type algorithm. Dynamic deformation uses corotational global formulation to obtain the Delassus operator in which the mass and stiffness ratio are dissociated from the simulation time step. This last point is crucial to keep stable haptic feedback. This global approach has been packaged, implemented, and tested. Stable and realistic 6D haptic feedback is demonstrated through a clipping task experiment. --- paper_title: Three-dimensional impact: energy-based modeling of tangential compliance paper_content: Impact is indispensable in robotic manipulation tasks in which objects and/or manipulators move at high speeds. Applied research using impact has been hindered by underdeveloped computational foundations for rigid-body collision. This paper studies the computation of tangential impulse as two rigid bodies in the space collide at a point with both tangential compliance and friction. It extends Stronge's spring-based planar contact structure to three dimensions by modeling the contact point as a massless particle able to move tangentially on one body while connected to an infinitesimal region on the other body via three orthogonal springs. 
Slip or stick is indicated by whether the particle is still or moving. Impact analysis is carried out using normal impulse rather than time as the only independent variable, unlike in previous work on tangential compliance. This is due to the ability to update the energies stored in the three springs. Collision is governed by a system of differential equations that are solvable numerically. Modularity of the impact model makes it easy to be integrated into a multibody system, with one copy at each contact, in combination with a model for multiple impacts that governs normal impulses at different contacts. --- paper_title: Control of elastic soft robots based on real-time finite element method paper_content: In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone. --- paper_title: Realistic Haptic Rendering of Interacting Deformable Objects in Virtual Environments paper_content: A new computer haptics algorithm to be used in general interactive manipulations of deformable virtual objects is presented. In multimodal interactive simulations, haptic feedback computation often comes from contact forces. Subsequently, the fidelity of haptic rendering depends significantly on contact space modeling. Contact and friction laws between deformable models are often simplified in up to date methods. They do not allow a "realistic" rendering of the subtleties of contact space physical phenomena (such as slip and stick effects due to friction or mechanical coupling between contacts). In this paper, we use Signorini's contact law and Coulomb's friction law as a computer haptics basis. Real-time performance is made possible thanks to a linearization of the behavior in the contact space, formulated as the so-called Delassus operator, and iteratively solved by a Gauss-Seidel type algorithm. Dynamic deformation uses corotational global formulation to obtain the Delassus operator in which the mass and stiffness ratio are dissociated from the simulation time step. This last point is crucial to keep stable haptic feedback. This global approach has been packaged, implemented, and tested. Stable and realistic 6D haptic feedback is demonstrated through a clipping task experiment. --- paper_title: Tools for dynamics simulation of robots: a survey based on user feedback paper_content: The number of tools for dynamics simulation has grown in the last years. It is necessary for the robotics community to have elements to ponder which of the available tools is the best for their research. 
As a complement to an objective and quantitative comparison, difficult to obtain since not all the tools are open-source, an element of evaluation is user feedback. With this goal in mind, we created an online survey about the use of dynamical simulation in robotics. This paper reports the analysis of the participants' answers and a descriptive information fiche for the most relevant tools. We believe this report will be helpful for roboticists to choose the best simulation tool for their researches. --- paper_title: An opensource simulator for cognitive robotics research: The prototype of the icub humanoid robot simulator paper_content: This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform. --- paper_title: Yarp Based Plugins for Gazebo Simulator paper_content: This paper presents a set of plugins for the Gazebo simulator that enables the interoperability between a robot, controlled using the YARP framework, and Gazebo itself. Gazebo is an open-source simulator that can handle different Dynamic Engines developed by the Open Source Robotics Foundation. Since our plugins conform with the YARP layer used on the real robot, applications written for our robots, COMAN and iCub, can be run on the simulator with no changes. Our plugins have two main components: a YARP interface with the same API as the real robot interface, and a Gazebo plugin which handles simulated joints, encoders, IMUs, force/torque sensors and synchronization. Different modules and tasks for COMAN and iCub have been developed using Gazebo and our plugins as a testbed before moving to the real robots. ---
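The "Control of elastic soft robots" entries above describe an iterative algorithm that uses a reduced compliance matrix to find actuator contributions steering the end-effector toward a target. The following is only a rough, hypothetical analogue of that loop: a damped least-squares step on an assumed linearized compliance matrix W with box limits on the actuator efforts. It is not the cited FEM-based controller, and every quantity below is invented for the example.

```python
import numpy as np

def actuation_step(W, x_err, u, u_min, u_max, damping=1e-3):
    """One corrective step for actuator efforts u.

    W     : (3, m) linearized compliance: end-effector displacement ~= W @ du
    x_err : (3,)   target position minus current end-effector position
    u     : (m,)   current actuator efforts (e.g., cable tensions)
    """
    # Damped least-squares (Tikhonov-regularized) solve for the effort change.
    m = W.shape[1]
    du = np.linalg.solve(W.T @ W + damping * np.eye(m), W.T @ x_err)
    # Respect actuator limits (e.g., cables can only pull, up to a maximum tension).
    return np.clip(u + du, u_min, u_max)

# Usage with a made-up 3-actuator compliance matrix and a small position error.
W = np.array([[0.02, -0.01, 0.00],
              [0.00,  0.02, -0.01],
              [0.01,  0.00,  0.02]])
u = np.zeros(3)
x_err = np.array([0.005, -0.002, 0.001])   # 5 mm / 2 mm / 1 mm error
u = actuation_step(W, x_err, u, u_min=0.0, u_max=10.0)
print(u)
```

Repeating such a step while re-linearizing the model is the general pattern an iterative compliance-based controller follows; the real method additionally handles obstacles and actuator characteristics as constraints.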
Title: Tools for simulating humanoid robot dynamics: A survey based on user feedback Section 1: INTRODUCTION Description 1: Introduce the importance of dynamics simulation in humanoid robotics, the purpose of the survey, and the lack of existing comparisons among simulation tools. Section 2: SIMULATION TECHNOLOGIES Description 2: Discuss the origin of the technologies used for robot dynamics simulation, challenges for robotics simulation, and introduce the user experience based survey. Section 3: Challenges for robotics simulation Description 3: Explore the stricter requirements for robotics simulation compared to virtual character animation, including numerical stability, contact forces, and actuation systems. Section 4: Physics engines and system simulators Description 4: Differentiate between physics engines and system simulators, and elaborate on their specific characteristics and use cases in robot simulation. Section 5: Assorted software tools Description 5: List and briefly describe a variety of simulation tools, highlighting their main features and intended applications. Section 6: THE USERS POINT OF VIEW Description 6: Present the results of the online survey, detailing the demographics of participants, their preferences, and their evaluation criteria for simulation tools. Section 7: A diversity of tools Description 7: Examine the abundance of available dynamics simulators and the familiarity of researchers with these tools, as well as their experiences in abandoning or adopting them. Section 8: The main currently used tools Description 8: Identify the most currently used simulation tools and provide user ratings and satisfaction levels for specific aspects of these tools. Section 9: A diversity of tools: too many? Description 9: Highlight the lack of a dominant simulator and the significant variants used by researchers for different types of robotics applications. Section 10: A use-case: the iCub simulator Description 10: Detail the decision process and justification for choosing a new simulation platform for the iCub project based on the survey results. Section 11: CONCLUSIONS Description 11: Summarize the key findings of the survey, stressing the importance of open-source projects and community efforts towards improving simulation tools for robotics applications.
A Systematic Literature Review of Flexible E-Procurement Marketplace
7
--- paper_title: Business strategy, human resources, labour market flexibility and competitive advantage paper_content: Citation: Michie, J. & Sheehan, M. (2005). 'Business strategy, human resources, labour market flexibility and competitive advantage', International Journal of Human Resource Management, 16(3), 445-464. --- paper_title: Manufacturing flexibility and business strategy: An empirical study of small and medium sized firms paper_content: Abstract This study investigates the practice of manufacturing flexibility in small and medium sized firms. Using the data collected from 87 firms from machinery and machine tool industries in Taiwan, we analyzed and prescribed the alignment of various manufacturing flexibility dimensions with business strategies. Several practical approaches to developing manufacturing flexibility in small and medium sized firms were discussed. In addition, statistical results indicate that the compatibility between business strategy and manufacturing flexibility is critical to business performance. The one-to-one relationship between business strategy and manufacturing flexibility is established to enable managers to set clear priorities in investing and developing necessary manufacturing flexibility. --- paper_title: A strategic analysis of electronic marketplaces paper_content: Information systems can serve as intermediaries between the buyers and the sellers in a vertical market, thus creating an "electronic marketplace". A major impact of these electronic market systems is that they typically reduce the search costs buyers must pay to obtain information about the prices and product offerings available in the market. Economic theory suggests that this reduction in search costs plays a major role in determining the implications of these systems for market efficiency and competitive behavior. This article draws on economic models of search and examines how prices, seller profits, and buyer welfare are affected by reducing search costs in commodity and differentiated markets. This reduction results in direct efficiency gains from reduced intermediation costs and in indirect but possibly larger gains in allocational efficiency from better-informed buyers. Because electronic market systems generally reduce buyers' search costs, they ultimately increase the efficiency of interorganizational transactions, in the process affecting the market power of buyers and sellers. The economic characteristics of electronic marketplaces, in addition to their ability to reduce search costs, create numerous possibilities for the strategic use of these systems. --- paper_title: Migrating Agile Methods to Standardized Development Practice paper_content: Situated process and quality frameworks offer a way to resolve the tensions that arise when introducing agile methods into standardized software development engineering. For these to be successful, however, organizations must grasp the opportunity to reintegrate software development management, theory, and practice. 
--- paper_title: Motives for e-marketplace Participation: Differences and Similarities between Buyers and Suppliers paper_content: The motivation of suppliers as well as buyers for e-marketplace participation is closely linked to the perceived outcome of participation, not only in terms of the benefits of joining an e-marketplace, but also in terms of the possible consequences of not joining. The key issue, therefore, is why organizations decide to buy and/or sell goods or services in e-marketplaces. We develop a theoretical framework for the categorization of motivational factors, resulting in four different types of motives. We then apply the framework to a dataset consisting of 41 case studies covering 20 industries in 12 countries. We conclude that buyers and suppliers have different motives for engaging in e-marketplace activities. Although e-marketplaces are a way of increasing the efficiency of supply chain activities, this is not necessarily done with the sole aim of exploiting suppliers: buyers also use e-marketplaces to find new or alternative suppliers. Similarly, even though demands from existing customers have spurred th... --- paper_title: Electronic marketplaces and price transparency: strategy, information technology, and success paper_content: Electronic marketplaces (EMPs) are widely assumed to increase price transparency and hence lower product prices. Results of empirical studies have been mixed, with several studies showing that product prices have not decreased and others showing that prices have increased in some cases. One explanation is that sellers prefer not to join EMPs with high price transparency, leading highly price transparent EMPs to fail. Therefore, in order to be successful, EMPs might be expected to avoid high price transparency. But that strategy creates a catch-22 for EMPs on the buy side: Why would buyers want to join EMPs in the absence of price transparency and the benefit of lower prices? We argue that successful EMPs must provide compensatory benefits for sellers in the case of high price transparency and for buyers in the case of low price transparency. ::: ::: To understand how EMPs could succeed, regardless of price transparency, we examined the relationships among EMP strategy, price transparency, and performance by analyzing all 19 EMPs that compete by selling a broad range of standard electronics components. We found that all EMPs pursuing a low cost strategy had high price transparency and performed poorly. All EMPs that performed well pursued strategies of differentiation, but, interestingly, not all successful EMPs avoided price transparency: Some EMPs succeeded despite enabling high price transparency. We therefore examined two differentiated EMPs in greater depth-one with high price transparency, the other with low price transparency-to show how they achieved strategic alignment of activities and resources and provided compensatory benefits for their customers. --- paper_title: Electronic marketplaces: A literature review and a call for supply chain management research paper_content: Abstract These days, Internet-based electronic marketplaces (EMs) are getting more and more popular. They emerge in different industries, supporting the exchange of goods and services of different kinds, with and for different types of actors, and are following different architectural principles. Most observers have assumed that EM would come to dominate the e-business landscape. 
Once you look beyond the publicity, however, you quickly see that most EMs are struggling. The supply chain dimension of an EM is largely neglected and poorly managed, while basic logistics operation is currently hampering turnover and revenues. The Paper at hand examines, based on a critical literature review, the actual EM discussion and calls for more supply chain management research within this field. --- paper_title: A Study of the Value and Impact of B2B E-Commerce: The Case of Web-Based Procurement paper_content: Web-enabled business-to-business (B2B) e-commerce enhances interorganizational coordination, resulting in transaction cost savings and competitive sourcing opportunities for the buyer organization. However, organizations are uncertain whether this is an improvement over existing information technology, such as EDI. In particular, what is the value of B2B e-commerce to a buyer organization, and how can it be measured? What factors most affect the realization of the value of B2B e-commerce? Using the case of Web-based B2B procurement systems, a framework is proposed to quantify and measure the value of B2B e-commerce systems and identify the factors that determine it. The methodology is applied to help a major heavy-equipment manufacturer located in the midwestern United States evaluate the potential of its Web-based procurement system. The preliminary results indicate that, even though all stages of B2B procurement are affected by the Web, the value of Web-based procurement is most determined by the process characteristics, organization of business units, and the "extended enterprise." --- paper_title: Multi-Criteria Markets: An Exploratory Study of Market Process Design paper_content: This paper explores market process design for multi-criteria markets using the electronic market process design work of Ribbers et al. (2002) and Kambil and van Heck (1998). The study utilizes a case study of a market intermediary in the utilities sector to examine how multi-criteria markets differ from price-only alternatives. The study reveals significant differences in the role of the intermediary in the operation of multi-criteria markets, as well as marked differences in market process design in the areas of authentication, product representation and communications/computing. We conclude that these differences represent a fundamental shift in B2B procurement relationships from those described by Kaplan and Sawhney (2000) towards strategic sourcing. --- paper_title: A Descriptive Content Analysis of Trust-Building Measures in B2B Electronic Marketplaces paper_content: Because business-to-business (B2B) electronic marketplaces (e-marketplaces) facilitate transactions between buyers and sellers, they strive to foster a trustworthy trading environment with a variety of trust-building measures. However, little research has been undertaken to explore trust-building measures used in B2B e-marketplaces, or to determine to what extent these measures are applied in B2B e-marketplaces and how they are applied. Based on reviews of the scholarly, trade, and professional literature on trust in electronic commerce, we identified 11 trustbuilding measures used to create trust in B2B e-marketplaces. Zucker’s trust production theory [1986] was applied to understand how these trust-building measures will enhance participants’ trust in buyers and sellers in B2B e-marketplaces or in B2B e-marketplace providers. 
A descriptive content analysis of 100 B2B e-marketplaces was conducted to survey the current usage of the 11 trust-building measures. Many of the trust-building measures were found to be widely used in the B2B e-marketplaces. However, although they were proven to be effective in building trust-related beliefs in online business environments, several institutional-based trust-building measures, such as escrow services, insurance and third-party assurance seals, are not widely used in B2B e-marketplaces. --- paper_title: Online reverse auctions and their role in buyer–supplier relationships paper_content: Abstract Despite the move in recent years towards supplier partnerships, buying firms need at times to make use of competitive procurement strategies for certain purchases. This study examines the impact of reverse auctions on buyer–supplier relationships through six case studies, analysing primarily the supplier perspective through participant interviews. The authors identify that there are potential benefits for both parties in a reverse auction, which can offer tendering and transactional cost advantages. For buyers, it offers a competitive procurement process. The effect on relationships will depend on the extent to which buyers employ the auction as a price weapon, or whether it is used primarily as a process improvement tool. --- paper_title: Adoption of e-procurement and participation of e-marketplace on firm performance: Trust as a moderator paper_content: Today, IT has a major influence on commercial activities, accelerating the adoption of e-procurement and e-marketplace participation in many industries. We examined firm motivations for adopting e-procurement for their operations in the e-marketplace and measured their performance to assess its benefits. Trust was considered as a moderating variable between the relationship of e-procurement adoption and e-marketplace participation. A two-stage analysis, including both a qualitative and quantitative approach, was applied. Hypotheses were developed and a model constructed. A research questionnaire was developed and distributed followed by data analysis and testing. The results showed that firms that adopted e-procurement were more likely to participate in the e-marketplace and that the firm's performance was enhanced after such participation. Trust was shown to have a moderating effect upon firm willingness to adopt e-procurement when it was considering participation in the e-marketplace. --- paper_title: Organizational Buyers' Adoption and Use of B2B Electronic Marketplaces: Efficiency-and Legitimacy-Oriented Perspectives paper_content: Despite the significant opportunities to transform the way that organizations conduct trading activities, few studies have investigated the impetus for organizational strategic moves toward business-to-business (B2B) electronic marketplaces. Drawing on transaction cost theory and institutional theory, this paper identifies two groups of factors-efficiency-and legitimacy-oriented factors, respectively-that can influence organizational buyers' initial adoption of, and the level of participation in, B2B e-marketplaces. The effects of these factors on initial adoption of and participation level in B2B e-marketplaces are empirically tested with data collected, respectively, from 98 potential adopter and 85 current adopter organizations. 
The results of a partial least squares analysis of the data indicate that the two groups of factors exhibit different patterns in explaining initial adoption in the preadoption period and participation level in the postadoption period. Specifically, all three of the efficiency-oriented factors investigated in this study-product characteristics, demand uncertainty, and market volatility-and their subconstructs exhibit a significant influence on adoption intent or participation level, or both. The results demonstrate that two legitimacy-oriented factors-mimetic pressures and normative pressures-and their subconstructs have a significant impact on adoption intent, but not on participation level. Our findings also indicate that clearly different patterns exist between the two groups of factors in explaining adoption intent and participation level. --- paper_title: Configurable offers and winner determination in multi-attribute auctions paper_content: Abstract The theory of procurement auctions traditionally assumes that the offered quantity and quality is fixed prior to source selection. Multi-attribute reverse auctions allow negotiation over price and qualitative attributes such as color, weight, or delivery time. They promise higher market efficiency through a more effective information exchange of buyer’s preferences and supplier’s offerings. This paper focuses on a number of winner determination problems in multi-attribute auctions. Previous work assumes that multi-attribute bids are described as attribute value pairs and that the entire demand is purchased from a single supplier. Our contribution is twofold: First, we will analyze the winner determination problem in case of multiple sourcing. Second, we will extend the concept of multi-attribute auctions to allow for configurable offers. Configurable offers enable suppliers to specify multiple values and price markups for each attribute. In addition, suppliers can define configuration and discount rules in form of propositional logic statements. These extensions provide suppliers with more flexibility in the specification of their bids and allow for an efficient information exchange among market participants. We will present MIP formulations for the resulting allocation problems and an implementation. --- paper_title: A framework for analyzing flexibility of generic objects paper_content: Flexibility is a loosely defined term used in a number of application areas with different and frequently contradicting views. In this paper, adopting as a starting point the use of the term in the manufacturing and information systems domains, we present a framework for the examination of generic objects utilizing cloud diagrams. We first portray the potential flexibility of a designed object and then its actual flexibility expressing its ability to adapt to changes. Examples of its use illustrate the ideas and their application. We believe that this research will be helpful in offering guidelines to designers of new systems where flexibility is important. --- paper_title: Manufacturing flexibility and business strategy: An empirical study of small and medium sized firms paper_content: Abstract This study investigates the practice of manufacturing flexibility in small and medium sized firms. Using the data collected from 87 firms from machinery and machine tool industries in Taiwan, we analyzed and prescribed the alignment of various manufacturing flexibility dimensions with business strategies. 
Several practical approaches to developing manufacturing flexibility in small and medium sized firms were discussed. In addition, statistical results indicate that the compatibility between business strategy and manufacturing flexibility is critical to business performance. The one-to-one relationship between business strategy and manufacturing flexibility is established to enable managers to set clear priorities in investing and developing necessary manufacturing flexibility. --- paper_title: A framework for the selection of electronic marketplaces : a content analysis approach paper_content: Although there has recently been an increased practitioner and media focus on electronic marketplaces, there still remains confusion over the advantages of participation. As a consequence organisations are finding difficulty in developing strategies, policies and procedures in relation to the e‐marketplace selection process. In some cases due to the dynamic and evolving environment of electronic trading, there can be a reluctance on the part of buyers and suppliers to participate in e‐marketplaces. Several classification models offer assessments of which type of marketplace are most suitable for different procurement purposes, but they fail to remain relevant in this dynamically changing environment. In this paper a content analysis of research and practitioner articles is carried out to evaluate the issues that prospective participants, seeking to purchase goods and services online, need to address in their selection process. A framework to support electronic marketplace related decision making is proposed, which is based within the contexts of business drivers, internal company issues and e‐marketplace facilitators. --- paper_title: A framework for the sustainability of e‐marketplaces paper_content: Electronic marketplaces have promised many benefits to participants, and hence have aroused considerable interest in the business community. However, the failure of some marketplaces and the success of others have led business managers to question which marketplaces will be successful in the future, and even whether the entire idea is viable. This question is particularly pressing for those considering sponsoring or participating in a marketplace. This exploratory study seeks to address these issues by proposing a framework of the factors that help explain the sustainability of e‐marketplaces. The framework proposed is based upon the findings of interviews carried out with 14 managers based in 11 companies active in the field of e‐marketplaces, and findings from the current literature from this domain. The framework proposed identifies seven factors that can be categorised according to three levels of influence, i.e. the macroeconomic/regulatory level, the industry level, and the firm level. Further work to validate the proposed framework would provide practitioners with additional insight to apply to their e‐marketplace strategies. --- paper_title: A strategic analysis of electronic marketplaces paper_content: Information systems can serve as intermediaries between the buyers and the sellers in a vertical market, thus creating an "electronic marketplace" A major impact of these electronic market systems is that they typically reduce the search costs buyers must pay to obtain information about the prices and product offerings available in the market. 
Economic theory suggests that this reduction in search costs plays a major role in determining the implications of these systems for market efficiency and competitive behavior. This article draws on economic models of search and examines how prices, seller profits, and buyer welfare are affected by reducing search costs in commodity and differentiated markets. This reduction results in direct efficiency gains from reduced intermediation costs and in indirect but possibly larger gains in allocational efficiency from better-informed buyers. Because electronic market systems generally reduce buyers' search costs, they ultimately increase the efficiency of interorganizational transactions, in the process affecting the market power of buyers and sellers. The economic characteristics of electronic marketplaces, in addition to their ability to reduce search costs, create numerous possibilities for the strategic use of these systems. --- paper_title: Business models for Internet-based e-procurement systems and B2B electronic markets: an exploratory assessment paper_content: Information technology (IT) has long been applied to support the exchange of goods, services and information between organizations. It is with the advent of Internet-based e-procurement systems and business-to-business (B2B) electronic markets that the real opportunities for online transactions have opened up across space and over time. The authors draw on IS and economics theory to investigate the motivation for the various online business models, and the adoption requirements of purchasing firms, through the examination of a set of mini-cases. Our exploratory study finds that private aggregating and negotiating mechanisms are being adopted for large quantity business supply purchases, while public market mechanisms are more often adopted when firms face uncertain and high variance demand. Moreover, market facilitation, expertise sharing and collaboration are gradually attracting more attention, and call for future investigation. --- paper_title: Reshaping It for Business Flexibility paper_content: From the Publisher: Dealing with the issues surrounding the growing need for IT and business integration, this text takes a detailed look at the concepts, developments and "waves of change" in each area. These ideas are then drawn together in a practical synthesis, enabling the reader to create an IT architecture. The emphasis is on practical details and guidelines, reflecting the authors' experience of large organizations. --- paper_title: On the critical success factors for B2B e-marketplace paper_content: B2B e-marketplaces have a profound influence on the traditional market and on the way business is conducted. All kinds of e-marketplaces are emerging and developing now, and we have witnessed both successes and failures among these e-marketplaces. However, the failures far outnumber the successes. Under this background, the paper discusses the critical success factors for operating e-marketplaces from different perspectives. It is shown that the core of e-marketplaces is to build liquidity and capture value. Based on these analyses, comprehensive critical factors including functional factors, strategic factors and technical factors are discussed for the success of electronic marketplaces; then a tentative framework for the analysis on the critical success factors is proposed. 
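The "strategic analysis of electronic marketplaces" entry above rests on economic models of search in which lower buyer search costs change the prices buyers end up paying. As a back-of-the-envelope illustration only (not the article's model), the simulation below uses textbook sequential price search with prices drawn uniformly on [0, 1], where the optimal rule is to accept the first quote below the reservation price r = sqrt(2c); reducing the per-search cost c lowers the average price paid at the expense of inspecting more quotes.

```python
import random

def expected_purchase_outcome(search_cost, trials=100_000, seed=0):
    """Monte Carlo of textbook sequential price search with prices ~ U[0, 1].

    For this distribution the optimal stopping rule is to accept the first
    quote below the reservation price r = sqrt(2 * search_cost)."""
    rng = random.Random(seed)
    r = (2.0 * search_cost) ** 0.5
    total_price, total_searches = 0.0, 0
    for _ in range(trials):
        searches = 0
        while True:
            searches += 1
            quote = rng.random()
            if quote <= r:
                break
        total_price += quote
        total_searches += searches
    return total_price / trials, total_searches / trials

# Lower search cost (e.g., an electronic marketplace) -> lower average price paid,
# but more quotes inspected before purchase.
for c in (0.05, 0.005):
    price, searches = expected_purchase_outcome(c)
    print(f"search cost {c}: avg price {price:.3f}, avg searches {searches:.1f}")
```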
--- paper_title: Assessing the benefits from e-business transformation through effective enterprise management paper_content: This paper reports on research carried out in 1999-2001 on the use of e-business applications in enterprise resource planning (ERP)-based organisations. Multiple structured interviews were used to collect data on 11 established organisations from a diverse range of industries. The findings are analysed according to the level of sophistication of e-business models and their transformational impact on the organisation. Early adopters of e-business show a trend towards cost reductions and administrative efficiencies from e-procurement and self-service applications used by customers and employees. More mature users focus on strategic advantage and generate this through an evolutionary model of organisational change. Two complex case studies of e-business integration with global suppliers and their corporate customers are analysed to identify specific stages of benefits accrual through the e-business transformation process. Collectively, the set of case studies is used to demonstrate the increased benefits derived from an e-business architecture based on a network of ERP-enabled organisations. --- paper_title: Online reverse auctions and their role in buyer–supplier relationships paper_content: Abstract Despite the move in recent years towards supplier partnerships, buying firms need at times to make use of competitive procurement strategies for certain purchases. This study examines the impact of reverse auctions on buyer–supplier relationships through six case studies, analysing primarily the supplier perspective through participant interviews. The authors identify that there are potential benefits for both parties in a reverse auction, which can offer tendering and transactional cost advantages. For buyers, it offers a competitive procurement process. The effect on relationships will depend on the extent to which buyers employ the auction as a price weapon, or whether it is used primarily as a process improvement tool. --- paper_title: A cross-industry review of B2B critical success factors paper_content: Business-to-business international Internet marketing (B2B IIM) has emerged as one of the key drivers in sustaining an organisation's competitive advantage. However, market entry and communication via the Internet have affected the dynamics and traditional process in B2B commerce. Difficulties resulting from these new trends have been cited in the literature. Research into identifying what are the critical success factors for global market entry is rare. This research presents a comprehensive review in this field. The study identified 21 critical success factors applicable to most of the B2B IIM. These factors were classified into five categories: marketing strategy, Web site, global, internal and external related factors. The significance, importance and implications for each category are discussed and then recommendations are made. --- paper_title: A conceptual model of supply chain flexibility paper_content: This paper presents an integrated conceptual model of supply chain flexibility. It examines flexibility classification schemes and the commonalities of flexibility typologies published in the literature to create a theoretical foundation for analyzing the components of supply chain flexibility. Even though there has been a tremendous amount of research on the topic of flexibility, most of it has been confined to intra-firm flexibility concerns. 
As supply chain management goes beyond a firm's boundaries, the flexibility strategies must also extend beyond the firm. This paper identifies the cross-enterprise nature of supply chain flexibility and the need to improve flexibility measures across firms. Opportunities are identified for future cross-functional research that builds on this theoretical foundation and leads to more effective formulation of supply chain strategies. --- paper_title: Organizational Buyers' Adoption and Use of B2B Electronic Marketplaces: Efficiency-and Legitimacy-Oriented Perspectives paper_content: Despite the significant opportunities to transform the way that organizations conduct trading activities, few studies have investigated the impetus for organizational strategic moves toward business-to-business (B2B) electronic marketplaces. Drawing on transaction cost theory and institutional theory, this paper identifies two groups of factors-efficiency-and legitimacy-oriented factors, respectively-that can influence organizational buyers' initial adoption of, and the level of participation in, B2B e-marketplaces. The effects of these factors on initial adoption of and participation level in B2B e-marketplaces are empirically tested with data collected, respectively, from 98 potential adopter and 85 current adopter organizations. The results of a partial least squares analysis of the data indicate that the two groups of factors exhibit different patterns in explaining initial adoption in the preadoption period and participation level in the postadoption period. Specifically, all three of the efficiency-oriented factors investigated in this study-product characteristics, demand uncertainty, and market volatility-and their subconstructs exhibit a significant influence on adoption intent or participation level, or both. The results demonstrate that two legitimacy-oriented factors-mimetic pressures and normative pressures-and their subconstructs have a significant impact on adoption intent, but not on participation level. Our findings also indicate that clearly different patterns exist between the two groups of factors in explaining adoption intent and participation level. --- paper_title: Building trust in business-to-business electronic marketplaces paper_content: Although the concept of trust has been studied in many areas, the trust building strategies of business to business electronic marketplaces are not well studied. This paper analyzed the impact of B2B EMs on trust, and proposed a three dimensional trust building strategy for EMs, including inter-organizational trust between intermediaries and participants, between buyers and sellers, and man-machine trust between participants and servers. This paper has important managerial implications for B2B EM operators. --- paper_title: A framework for analyzing flexibility of generic objects paper_content: Flexibility is a loosely defined term used in a number of application areas with different and frequently contradicting views. In this paper, adopting as a starting point the use of the term in the manufacturing and information systems domains, we present a framework for the examination of generic objects utilizing cloud diagrams. We first portray the potential flexibility of a designed object and then its actual flexibility expressing its ability to adapt to changes. Examples of its use illustrate the ideas and their application. We believe that this research will be helpful in offering guidelines to designers of new systems where flexibility is important. 
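The "Configurable offers and winner determination in multi-attribute auctions" entry earlier in this reference list treats winner determination as an optimization over price and qualitative attributes. The toy sketch below is a deliberately simplified, single-sourcing variant: each bid is scored with an additive weighted utility and the highest-scoring bid wins. The weights and bids are invented, and the cited paper's MIP formulations (multiple sourcing, configurable offers, discount rules) are not reproduced here.

```python
# Toy single-sourcing winner determination for a multi-attribute reverse auction.
# Each bid quotes a price plus qualitative attribute levels; the buyer scores
# bids with an additive utility (higher is better) and awards the best bid.

def score(bid, weights, price_weight):
    quality = sum(weights[attr] * bid["attributes"][attr] for attr in weights)
    return quality - price_weight * bid["price"]

def determine_winner(bids, weights, price_weight):
    return max(bids, key=lambda bid: score(bid, weights, price_weight))

# delivery_days is entered as a negative number so that faster delivery scores higher.
bids = [
    {"supplier": "A", "price": 100.0, "attributes": {"quality": 0.8, "delivery_days": -5}},
    {"supplier": "B", "price": 90.0,  "attributes": {"quality": 0.6, "delivery_days": -9}},
    {"supplier": "C", "price": 110.0, "attributes": {"quality": 0.9, "delivery_days": -3}},
]
weights = {"quality": 50.0, "delivery_days": 1.0}   # buyer's (assumed) preferences
winner = determine_winner(bids, weights, price_weight=0.5)
print(winner["supplier"], round(score(winner, weights, price_weight=0.5), 2))
```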
--- paper_title: Relating strategy and structure to flexible automation: A test of fit and performance implications paper_content: This study analyzed various strategy and structure choices to determine their fit relationship with flexible automation (FA). 1Using the moderator hypothesis, we proposed that the more strategy and structure choices complemented FA's competences, the higher would be the performance impact of FA. Data from 87 FA users indicate that quality and flexibility strategies, described as complementary to FA's strengths, interact positively with FA. Low cost strategy, described as conflicting with FA, interacts negatively. Organic structure, viewed as complementary to FA, has only main effects whereas a mechanistic structure interacts negatively. At the manufacturing level, skill diversity and team approaches, considered as complementary to FA, interact positively. While a subgroup analysis of high-low performers lends additional support to these relationships, analysis of industry subgroups indicates that some relationships are industry specific. We discuss the implications of these findings for research and practice. --- paper_title: Flexibility as Process Mobility: The Management of Plant Capabilities for Quick Response Manufacturing paper_content: This paper examines the relationship between one form of manufacturing flexibility - operational mobility (or the ability to change quickly between products) - and structure, infrastructure and managerial policy at the plant level. Using data from a broader study aimed at exploring the general sources of manufacturing flexibility, the paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration and workforce management. There has been little empirical work on this subject, partly as a result of the difficulties of defining and measuring flexibility. The type of flexibility explored in this paper is specifically the capability of a plant to change between process states quickly. I find that most of the variance in flexibility across plants can be explained by a combination of structural factors, such as the scale of the plant; infrastructural factors, such as the length of service of the operators; and measures of managerial emphasis, such as the perceived importance of making quick changeovers. Of these factors, I find that the scale of a plant does not strongly inhibit its flexibility and that computer integration is either insignificant or detrimental to the flexibility of a plant. Workforce characteristics are also important determinants of flexibility, and the results suggest that less experienced operators may be more flexible in their ability to make certain types of change quickly between products. A strong determinant of the flexibility of a plant is the extent to which operators perceive managers to have emphasized the importance of various performance dimensions. Finally, I show that different forms of flexibility are not necessarily related to each other. This emphasizes the importance of identifying precisely the particular type of flexibility that is to be developed when building manufacturing capabilities. --- paper_title: The flexibility of manufacturing systems paper_content: Many of the new pressures from today's manufacturing environment are turning manufacturing managers' attention to the virtues of developing a flexible manufacturing function. 
Flexibility, however, has different meanings for different managers and several perfectly legitimate alternative paths exist towards flexible manufacturing. The paper reports how managers in ten companies view manufacturing flexibility in terms of: how they see the contribution of manufacturing flexibility to overall company performance; what types of flexibility they regard as important; and what their desired degree of flexibility is. The results of the investigations in these ten companies are summarised in the form of ten empirical “observations”. Based on these “observations”, a check‐list of prescriptions is presented and a hierarchical framework developed into which the various issues raised by the “observations” can be incorporated. --- paper_title: A theoretical framework for analyzing the dimensions of manufacturing flexibility paper_content: Abstract The competitive environment of today has generated an increased interest in flexibility as a response mechanism. While the potential benefits of flexibility are familiar, the concept of flexibility itself is not well-understood. Neither practitioners nor academics agree upon, or know, how flexibility can be gauged or measured in its totality. Consequently, this study seeks to provide a framework for understanding this complex concept and to create a theoretical foundation for the development of generalizable measures for manufacturing flexibility. With this objective in mind, we first critically examine diverse streams of literature to define four constituent elements of flexibility: range-number (R-N), range-heterogeneity (R-H), mobility (M), and uniformity (U). The R-H element is new and has not been proposed in prior literature. These four elements can be applied to consistently define different types or dimensions of flexibility. Definitions for 10 flexibility dimensions pertaining to manufacturing are thus obtained. These definitions serve a dual purpose. First, they capture the domain of flexibility. Second, we show in this study how these definitions can be used to generate scale items, thereby facilitating the development of generalizable manufacturing flexibility measures. Several research avenues that can be explored once such measures are developed are also highlighted. --- paper_title: Making manufacturing flexibility operational – part 2: distinctions and an example paper_content: This paper structures the concept of flexibility by making clear distinctions in three generic dimensions and describes the use of the framework for manufacturing flexibility by working through a concrete example. The framework was presented in “Making manufacturing flexibility operational – part 1: a framework”, IMS, Vol. 6 No. 2. It makes distinctions between the concept of flexibility in the three generic dimensions: utilized flexibility versus potential flexibility, external flexibility versus internal flexibility, and requested flexibility versus replied flexibility. The framework makes a clear distinction between the internal and the external factors impinging on the company, and brings together the market demand for flexibility, the characteristics of the production system, and the flexibility of the suppliers. Furthermore, it pursues the connection from the strategic level to the single resource characteristics in the production system. Using the framework as a systematization for handling flexibility-related issues in companies can be especially useful for managers.
--- paper_title: Process range in manufacturing: an empirical study of flexibility paper_content: This paper examines the relationship between one form of manufacturing flexibility---process range---and structure, infrastructure, and managerial policy at the plant level. The paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration, and workforce management. Data from 54 plants in the fine-paper industry are presented, and a model of the determinants of short-term flexibility is developed. The plants examined differed by a factor of 20 in their ability to accommodate large process variation. The evidence suggests that flexibility is strongly negatively related to scale and degree of computer integration, yet positively related to newer vintages of mechanical technology and workforce experience. Some results differ significantly from the prevailing view of the industry, in particular, that newer plants are less flexible. The paper shows that newer machine technology is more flexible once other factors are controlled for. In the longer term, the results show that management has a significant impact on the improvement of flexibility in operations, regardless of the technology and infrastructure in place. Plant network managers' views of flexibility are important. The data suggest that inflexible plants may be inflexible partly as a result of their being considered inflexible by network managers, and never being assigned the product range needed to improve the capability. --- paper_title: Flexibility in Manufacturing: A Survey paper_content: This article is an attempt to survey the vast literature on flexibility in manufacturing that has accumulated over the last 10 to 20 years. The survey begins with a brief review of the classical literature on flexibility in economics and organization theory, which provides a background for manufacturing flexibility. Several kinds of flexibilities in manufacturing are then defined carefully along with their purposes, the means to obtain them, and some suggested measurements and valuations. Then we examine the interrelationships among the several flexibilities. Various empirical studies and analytical/optimization models dealing with these flexibilities are reported and discussed. The article concludes with suggestions for some possible future research directions. --- paper_title: The flexibility of production processes: a general framework paper_content: Various models have been developed over the years to analyze the many facets of the flexibility of production and operations systems. This paper proposes a general framework for the modeling and analysis of flexibility. The argument hinges upon the distinction between flexibility---a property of the technology---and diversity---a property of the environment in which the technology is operated. Flexibility is characterized as a hedge against diversity. Intuitive strategic properties that are conventionally attributed to flexibility are shown to follow directly from this framework. As illustrated by the different examples that are discussed, many existing models can be naturally interpreted in this context. As an application, the effect of load imbalance on a set of parallel machines is analyzed. The problem sheds light on the role of flexibility in queuing network and lot-sizing models of production. 
--- paper_title: Managing the Flexibility of Flexible Manufacturing Systems for Competitive Edge paper_content: Flexible Manufacturing Systems (FMS) are believed to be a major means for improving both production flexibility and productivity. If well managed, FMS can be a formidable competitive weapon for manufacturing firms. However, there exists confusion concerning how to define the concept of flexibility. One result is the misconception that flexibility may cause the decline of productivity. This paper reviews the existing studies on flexibility and presents a “flexibility map” which clarifies the relationship between various flexibility concepts and measures. This map also provides a foundation for exploring the issue of how flexibility can contribute to the firm’s competitiveness. We further point out the importance of a Total System Flexibility (TSF) concept which considers two important flexibility factors: quickness of response to a change and economic response to the change. A numerical example of routing flexibility is used to demonstrate how flexibility can enhance the competitive or strategic value of FMS. Indeed, total system flexibility can increase rather than reduce productivity, and therefore enhance the firm’s competitiveness. --- paper_title: On measurement and valuation of manufacturing flexibility paper_content: Abstract The concept of flexibility has been a recurring theme in recent manufacturing literature. Manufacturing flexibility is widely acclaimed as a formidable competitive weapon in the arsenal of any manufacturing firm. Yet critical underpinnings of this concept are not well understood despite efforts by various researchers to classify different flexibility types in order to facilitate its measurement and valuation. This paper examines reasons why measurement of flexibility has remained difficult. It proposes that any measure must inevitably depend on factors such as the degree of uncertainty in the environment, management objectives, machine capabilities and configuration (control). Therefore it advocates surrogate measures such as the value of flexibility. A model illustrating how flexibility might be measured and its appropriate level chosen, in a specific scenario, is presented. Numerical examples reveal additional insights that might be useful to firms with characteristics that match those of the m... --- paper_title: Manufacturing flexibility: a strategic perspective paper_content: To help meet competitive realities operations managers need to know more about the strategic aspects of manufacturing flexibility. This paper takes steps toward meeting that need by critically reviewing the literature and establishing a research agenda for the area. A conceptual model, which places flexibility within a broad context, helps to identify certain assumptions of theoretical studies which need to be challenged. The model also provides a basis for identifying specific flexibility dimensions. The manner in which these dimensions may limit the effectiveness of a manufacturing process, and the problems in operationalizing them are discussed. Focusing next on the neglected area of applied work, concepts are presented for analyzing whether desired amounts of flexibility are being achieved and whether the potential for flexibility built into a manufacturing process is being tapped. Once more, a procedure is outlined for altering a plant's types and amounts of flexibility over time. 
The research agenda, which grows out of the appraisal of theoretical and applied work, indicates the value in studying generic flexibility strategies, the flexibility dimensions, methods of delivery, ways of evaluating and changing a process's flexibility, and above all measurement problems. The conclusions indicate principles for strategic research, some of which have relevance for the development of mathematical models. --- paper_title: Flexibility configurations for the supply chain management paper_content: Abstract Facing a dynamic and complex environment, networked companies often require the coordination of many plants, which produce and deliver goods to customers located in different places, and suppliers, which provide each plant with the required components. A way to optimize the product flows in supply chains (SCs) is to adopt the concept of limited flexibility, that is, a particular configuration of product assignments to plants and components to suppliers which can yield many benefits without dramatically increasing the flexibility costs. In this paper, a simulation model is proposed to evaluate the performance of different configurations of an SC. In particular, based on a work-in-process and time performance analysis, the different configurations are analyzed in order to support the selection of suitable flexibility degrees of the operations network. --- paper_title: A survey and critical review of flexibility measures in manufacturing systems paper_content: Abstract Flexibility of manufacturing systems is currently under intensive study, and the need for objective measurement of this important characteristic is widely expressed. Measurement of manufacturing flexibility is being increasingly discussed in the literature on manufacturing systems. This paper surveys the literature dealing with quantification of certain types of flexibility, analyses the proposed measures, and presents critical views. Approaches to developing flexibility measures are compiled and classified. Discussion on the suitability of specific measures is provided. Requirements of flexibility measures and recommendations for future research in this area are also provided. --- paper_title: A conceptual model of supply chain flexibility paper_content: This paper presents an integrated conceptual model of supply chain flexibility. It examines flexibility classification schemes and the commonalities of flexibility typologies published in the literature to create a theoretical foundation for analyzing the components of supply chain flexibility. Even though there has been a tremendous amount of research on the topic of flexibility, most of it has been confined to intra‐firm flexibility concerns. As supply chain management goes beyond a firm’s boundaries, the flexibility strategies must also extend beyond the firm. This paper identifies the cross‐enterprise nature of supply chain flexibility and the need to improve flexibility measures across firms. Opportunities are identified for future cross‐functional research that builds on this theoretical foundation and leads to more effective formulation of supply chain strategies. --- paper_title: Flexibility in logistic systems--modeling and performance evaluation paper_content: Abstract This paper examines potential benefits of flexibility in logistic systems. It briefly reviews flexibility concepts and existing flexibility frameworks before suggesting a bottom-up framework for flexibility in logistic systems.
Then, one of the proposed system logistic flexibility types, denoted here as trans-routing flexibility, is quantitatively investigated. The research focuses on flexibility's benefits exclusive of cost considerations. Logistics dependability, a customer-oriented logistic performance measure, is introduced. The analysis is framed in a multi-factor design of experiments (DOE) that considers factors representing changes in operational and environmental conditions of a logistic system and design factors, such as trans-routing flexibility, acting as countermeasures to changes. The model stems from a military logistics scenario easily adaptable to an industrial or service environment. Using DOE to examine the important interactions between the factor effects contributes to a better understanding of the logistics decision problems and their eventual solutions. --- paper_title: Business Process Reengineering and Flexibility: A Case for Unification paper_content: Business process reengineering (BPR) offers a radical approach to improving the performance of an organization. However, although there have been successes, BPR is recognized as a high-risk activity, prone to failure. There are a variety of reasons for this, and this paper highlights one which is argued to be the lack of attention that BPR pays to flexibility and its inability to cope with a changing environment. The purpose of this article is to raise the issue of flexibility within BPR and an approach is taken that examines flexibility in other business functional areas, such as manufacturing, architecture, information systems, and organizational strategy, where there is an extensive literature that indicates the importance of flexibility. The lessons from these other areas are identified and some of the implications for BPR are highlighted. A number of proposals are made including the suggestion that a form of “flexibility analysis” be adopted as a stage in BPR projects. It is argued that this would help to move the focus of a BPR project away from the current requirements toward a longer term, more flexible, enduring set of requirements. Flexibility analysis also ensures analysis of the kinds of changes that might be required over time, and how such change could be accommodated in the reengineered processes. --- paper_title: A framework for analyzing flexibility of generic objects paper_content: Flexibility is a loosely defined term used in a number of application areas with different and frequently contradicting views. In this paper, adopting as a starting point the use of the term in the manufacturing and information systems domains, we present a framework for the examination of generic objects utilizing cloud diagrams. We first portray the potential flexibility of a designed object and then its actual flexibility expressing its ability to adapt to changes. --- paper_title: BUSINESS STRATEGY, MANUFACTURING FLEXIBILITY, AND ORGANIZATIONAL PERFORMANCE RELATIONSHIPS: A PATH ANALYSIS APPROACH paper_content: It has been argued in the literature that business strategy and manufacturing flexibility independently affect the performance of an organization. However, no empirical examination of the interrelationship among these three constructs has been performed.
In this paper, based on a field study of 269 firms in the manufacturing industry, the identified constructs have been used to test a theoretical model using path analysis techniques. Our results indicate that business strategy contributes both directly and indirectly to organizational performance. The findings provide evidence of direct effects of (i) business strategy on manufacturing flexibility and (ii) manufacturing flexibility on organizational performance. --- paper_title: The role of flexibility in online business paper_content: Abstract Companies are trying to be more competitive by establishing a presence on the fast-growing Net. At the same time, the Net's business environment is growing more complex and uncertain. Management literature argues that flexibility can enhance corporate responsiveness to such an environment. However, research on e-businesses has focused mainly on the idea of customizing online offerings. Here is a holistic view of flexibility, investigating how online companies can become flexible through the different functional aspects of their business—technology, human resources, operations, marketing, finance, and management. What are the related opportunities and pitfalls? And what guidelines can managers follow? --- paper_title: THE NEED FOR STRATEGIC FLEXIBILITY paper_content: How can today's business cope with increasing uncertainty? The answer lies in opening up avenues of strategic flexibility. The authors tell how. --- paper_title: Manufacturing flexibility: a strategic perspective paper_content: To help meet competitive realities operations managers need to know more about the strategic aspects of manufacturing flexibility. This paper takes steps toward meeting that need by critically reviewing the literature and establishing a research agenda for the area. A conceptual model, which places flexibility within a broad context, helps to identify certain assumptions of theoretical studies which need to be challenged. The model also provides a basis for identifying specific flexibility dimensions. The manner in which these dimensions may limit the effectiveness of a manufacturing process, and the problems in operationalizing them are discussed. Focusing next on the neglected area of applied work, concepts are presented for analyzing whether desired amounts of flexibility are being achieved and whether the potential for flexibility built into a manufacturing process is being tapped. Once more, a procedure is outlined for altering a plant's types and amounts of flexibility over time. The research agenda, which grows out of the appraisal of theoretical and applied work, indicates the value in studying generic flexibility strategies, the flexibility dimensions, methods of delivery, ways of evaluating and changing a process's flexibility, and above all measurement problems. The conclusions indicate principles for strategic research, some of which have relevance for the development of mathematical models. --- paper_title: A framework for analyzing flexibility of generic objects paper_content: Flexibility is a loosely defined term used in a number of application areas with different and frequently contradicting views. In this paper, adopting as a starting point the use of the term in the manufacturing and information systems domains, we present a framework for the examination of generic objects utilizing cloud diagrams. We first portray the potential flexibility of a designed object and then its actual flexibility expressing its ability to adapt to changes. 
Examples of its use illustrate the ideas and their application. We believe that this research will be helpful in offering guidelines to designers of new systems where flexibility is important. --- paper_title: E-Supply Chain: Using the Internet to Revolutionize Your Business - How Market Leaders Focus Their Entire Organization on Driving Value to Customers paper_content: Supply chain has emerged as a major focus of business improvement efforts. Unfortunately, not all firms have taken advantage of the concept, resulting in a large gap between successful e-commerce companies and failures. E-Supply Chain explains how the progress of e-commerce is dovetailing with the final stages of the supply chain evolution, and how to take advantage of it in business. The authors show how the convergence of supply chain and e-commerce can catalyze the forging of advanced-level networks that will dominate future markets. In the first wave, some part of virtually every business will transfer to a cyber channel of distribution. In the second wave, networks targeting specific consumers will create new alliances across the full spectrum of supply. In the third wave, advanced networks will form global "value chain constellations" that will become the norm for most future industries. --- paper_title: Global strategies for SMe‐business: applying the SMALL framework paper_content: The World Wide Web (WWW) offers exciting new opportunities for small and medium‐sized enterprises (SMEs) to extend their customer base into the global marketplace. However, in order to exploit these advantages in a global strategy, the SME needs to adopt an entirely different approach to strategic planning and management which can enable it to deploy an extensive infrastructure network based on shared resources with other firms. This paper presents a framework for the analysis and design of global strategies within the organisational context of SMEs using Internet‐based information technologies. Central to the framework – SMALL – is the transformation of the key attributes of an SME environment through a virtual organising perspective. The framework is supported by a number of case examples of SMEs operating in a global context and a detailed analysis of three Australian SMEs. It provides a new perspective to strategies for e‐business in SMEs and to e‐business research. --- paper_title: On the critical success factors for B2B e-marketplace paper_content: B2B e-marketplaces have a profound influence on the traditional market and on the way business is conducted. All kinds of e-marketplaces are emerging and developing now, and we have witnessed both successes and failures among these e-marketplaces. However, the failures far outnumber the successes. Against this background, the paper discusses the critical success factors for operating e-marketplaces from different perspectives. It is shown that the core of e-marketplaces is to build liquidity and capture value. Based on these analyses, comprehensive critical factors including functional factors, strategic factors and technical factors are discussed for the success of electronic marketplaces; then a tentative framework for the analysis of the critical success factors is proposed. --- paper_title: Implementation of e-procurement and e-fulfillment processes: A comparison of cases in the motorcycle industry paper_content: Abstract Electronic business is the process which uses Internet technology to simplify certain company processes, improve productivity and increase efficiency.
It allows companies to easily communicate with their suppliers, buyers and customers, to integrate “back-office” systems with those used for transactions, to accurately transmit information and to carry out data analysis in order to increase their competitiveness. The aim of this work is to define the parameters which can be used to define the performance of companies which use e-business. Particular attention is given to procurement and fulfillment in order to compare the companies studied and measure their efficiency. Fulfillment means controlling and managing transactions, warehouses, transportation and reverse logistics. This analysis is followed by case studies of two large Italian companies in the field of motorcycles. The market strategy they use and the role of Information and Communication Technologies (ICT) in their procurement and distribution processes are analyzed. This comparison provides useful information regarding the way in which the Internet can be used by two companies which operate in the same market. The paper ends with the presentation of an evolutionary model for e-business strategy. The stages of the model go from the use of ICT simply as instruments of communication to the improvement of coordination processes. --- paper_title: Evolution of e-commerce Web sites: A conceptual framework and a longitudinal study paper_content: Before the 1990s, the digital exchange of information between companies was achieved using electronic data interchange (EDI) and needed agreement between the organizations. The early 1990s saw the commercialization of the Internet and the advent of open computer technology, and connectivity became affordable for individuals as well as businesses. The consequence was the World Wide Web. As e-commerce activities extended across businesses, enterprises, and industries, a genre of Web sites emerged allowing the integrative management of business operations. Here, we provide an evolutionary perspective of e-commerce Web sites. We posited that there have been four eras. To chart the evolution of e-commerce Web sites, a conceptual framework was developed to characterize such sites. Based on the framework, we conducted a longitudinal study between 1993 and 2001. The result showed that the proposed four eras were clearly discernible. --- paper_title: Achieving flexible information systems: the case for improved analysis paper_content: This paper examines the problems of change that continue to thwart the development of successful information systems. The symptom of maintenance is discussed and a variety of techniques and methodologies that seek to provide solutions are discussed. Changing business needs and user requirements are identified as enduring problems and the technique of Flexibility Analysis is proposed. Finally some preliminary results from a research study are discussed. The study looked at a number of information systems in organizations and examined the changes and enhancements that were subsequently made to those systems and the reasons for them. The results indicate the benefits and practicality of the technique of Flexibility Analysis. --- paper_title: Supply management and e-procurement: creating value added in the supply chain paper_content: Abstract The increasing emphasis on supply chain management is creating a greater focus on the supply management link in the supply chain. This focus will become even more intense as firms continue to adopt e-procurement strategies to leverage the competitive advantages of the Internet.
Supply managers need to understand the impact of technology and gain competency in making a business case for e-procurement. The implications are profound for the industrial marketer. --- paper_title: Adapting to changing user requirements paper_content: Abstract Systems which have the capability of adapting to changing user requirements must be founded on accurate and perceptive models of the organisation in which they have to function. Design methods based on active user participation and the use of experimental and prototyping methods help to ensure accurate models. But because systems are expected to survive even when circumstances change, the designers have to have a view of what the future world will look like. A technique which helps to provide such a view is called “future analysis”. However, even designs based on the best prediction technique cannot guarantee a fit between the currently designed system and future needs. Hence it is important to design systems with built-in flexibility. A number of methods have been developed which reduce the disruption caused by the amendment or even replacement of a system or system's component. --- paper_title: The role of flexibility in online business paper_content: Abstract Companies are trying to be more competitive by establishing a presence on the fast-growing Net. At the same time, the Net's business environment is growing more complex and uncertain. Management literature argues that flexibility can enhance corporate responsiveness to such an environment. However, research on e-businesses has focused mainly on the idea of customizing online offerings. Here is a holistic view of flexibility, investigating how online companies can become flexible through the different functional aspects of their business—technology, human resources, operations, marketing, finance, and management. What are the related opportunities and pitfalls? And what guidelines can managers follow? --- paper_title: Flexibility as Process Mobility: The Management of Plant Capabilities for Quick Response Manufacturing paper_content: This paper examines the relationship between one form of manufacturing flexibility - operational mobility (or the ability to change quickly between products) - and structure, infrastructure and managerial policy at the plant level. Using data from a broader study aimed at exploring the general sources of manufacturing flexibility, the paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration and workforce management. There has been little empirical work on this subject, partly as a result of the difficulties of defining and measuring flexibility. The type of flexibility explored in this paper is specifically the capability of a plant to change between process states quickly. I find that most of the variance in flexibility across plants can be explained by a combination of structural factors, such as the scale of the plant; infrastructural factors, such as the length of service of the operators; and measures of managerial emphasis, such as the perceived importance of making quick changeovers. Of these factors, I find that the scale of a plant does not strongly inhibit its flexibility and that computer integration is either insignificant or detrimental to the flexibility of a plant.
Workforce characteristics are also important determinants of flexibility, and the results suggest that less experienced operators may be more flexible in their ability to make certain types of change quickly between products. A strong determinant of the flexibility of a plant is the extent to which operators perceive managers to have emphasized the importance of various performance dimensions. Finally, I show that different forms of flexibility are not necessarily related to each other. This emphasizes the importance of identifying precisely the particular type of flexibility that is to be developed when building manufacturing capabilities. --- paper_title: Technology flexibility: conceptualization, validation, and measurement paper_content: This research investigates technology flexibility, which is the technology characteristic that allows or enables adjustments and other changes to the business process. Technology flexibility has two dimensions, structural and process flexibility, encompassing both the actual technology application and the people and processes that support it. The flexibility of technology that supports business processes can greatly influence the organization's capacity for change. Existing technology can present opportunities for or barriers to business process flexibility through structural characteristics such as language, platform and design. Technology can also indirectly affect flexibility through the relationship between the technology maintenance organization and the business process owners, change request processing, and other response characteristics. These indirect effects reflect a more organizational perspective of flexibility. This paper asks the question, "what makes technology flexible?" This question is addressed by developing and validating a measurement model of technology flexibility. Constructs and definitions of technology flexibility are developed by examining the concept of flexibility in other disciplines, and the demands imposed on technology by business processes. The purpose of building a measurement model is to show validity for the constructs of technology flexibility. This paper discusses the theory of technology flexibility, develops constructs and determinants of this phenomenon, and proposes a methodology for the validation and study of the flexibility of emerging technologies. --- paper_title: Measuring technology flexibility paper_content: This research investigates technology flexibility, which is the technology characteristic that allows or enables adjustments and other changes to the business process. We develop dimensions and determinants of this phenomenon and demonstrate a methodology for the validation and the study of flexibility. The results of a test of software system flexibility are reported. Technology flexibility has two dimensions, structural and process flexibility, encompassing both the actual technology application and the people and processes that support and use it. The flexibility of technology that supports business processes can greatly influence the organization's capacity for change. Existing technology can present opportunities for, or barriers to, business process flexibility through structural characteristics such as language, platform, and design. Technology can also indirectly affect flexibility through the relationship between the technology maintenance organization and the business process owners, change request processing, and other response characteristics. 
These indirect effects reflect a more organizational perspective of flexibility. --- paper_title: Process range in manufacturing: an empirical study of flexibility paper_content: This paper examines the relationship between one form of manufacturing flexibility---process range---and structure, infrastructure, and managerial policy at the plant level. The paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration, and workforce management. Data from 54 plants in the fine-paper industry are presented, and a model of the determinants of short-term flexibility is developed. The plants examined differed by a factor of 20 in their ability to accommodate large process variation. The evidence suggests that flexibility is strongly negatively related to scale and degree of computer integration, yet positively related to newer vintages of mechanical technology and workforce experience. Some results differ significantly from the prevailing view of the industry, in particular, that newer plants are less flexible. The paper shows that newer machine technology is more flexible once other factors are controlled for. In the longer term, the results show that management has a significant impact on the improvement of flexibility in operations, regardless of the technology and infrastructure in place. Plant network managers' views of flexibility are important. The data suggest that inflexible plants may be inflexible partly as a result of their being considered inflexible by network managers, and never being assigned the product range needed to improve the capability. --- paper_title: Flexibility as Process Mobility: The Management of Plant Capabilities for Quick Response Manufacturing paper_content: This paper examines the relationship between one form of manufacturing flexibility - operational mobility (or the ability to change quickly between products) - and structure, infrastructure and managerial policy at the plant level. Using data from a broader study aimed at exploring the general sources of manufacturing flexibility, the paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration and workforce management. There has been little empirical work on this subject, partly as a result of the difficulties of defining and measuring flexibility. The type of flexibility explored in this paper is specifically the capability of a plant to change between process states quickly. I find that most of the variance in flexibility across plants can be explained by a combination of structural factors, such as the scale of the plant; infrastructural factors, such as the length of service of the operators; and measures of managerial emphasis, such as the perceived importance of making quick changeovers. Of these factors, I find that the scale of a plant does not strongly inhibit its flexibility and that computer integration is either insignificant or detrimental to the flexibility of a plant. Workforce characteristics are also important determinants of flexibility, and the results suggest that less experienced operators may be more flexible in their ability to make certain types of change quickly between products. A strong determinant of the flexibility of a plant is the extent to which operators perceive managers to have emphasized the importance of various performance dimensions. 
Finally, I show that different forms of flexibility are not necessarily related to each other. This emphasizes the importance of identifying precisely the particular type of flexibility that is to be developed when building manufacturing capabilities. --- paper_title: Building the Flexible Firm: How to Remain Competitive paper_content: Review of: H.W. Volberda, Building the Flexible Firm: How to Remain Competitive, Oxford: Oxford University Press, 1999, ISBN 019829090X --- paper_title: Process range in manufacturing: an empirical study of flexibility paper_content: This paper examines the relationship between one form of manufacturing flexibility---process range---and structure, infrastructure, and managerial policy at the plant level. The paper provides evidence of the strength of the links between manufacturing flexibility and such factors as scale, technology vintage, computer integration, and workforce management. Data from 54 plants in the fine-paper industry are presented, and a model of the determinants of short-term flexibility is developed. The plants examined differed by a factor of 20 in their ability to accommodate large process variation. The evidence suggests that flexibility is strongly negatively related to scale and degree of computer integration, yet positively related to newer vintages of mechanical technology and workforce experience. Some results differ significantly from the prevailing view of the industry, in particular, that newer plants are less flexible. The paper shows that newer machine technology is more flexible once other factors are controlled for. In the longer term, the results show that management has a significant impact on the improvement of flexibility in operations, regardless of the technology and infrastructure in place. Plant network managers' views of flexibility are important. The data suggest that inflexible plants may be inflexible partly as a result of their being considered inflexible by network managers, and never being assigned the product range needed to improve the capability. --- paper_title: The Decision-Making Paradigm of Organizational Design paper_content: This paper introduces and explicates the decision-making paradigm of organizational design. We argue that the domains of existing design paradigms are declining in scope, and that the nature of current and future organizational environments requires use of a design paradigm that responds to the increasing frequency and criticality of the decision-making process. In particular, we argue that the decision-making paradigm is applicable when the organizational environments are hostile, complex, and turbulent. The focal concept of the decision-making paradigm is that organizations should be designed primarily to facilitate the making of organizational decisions. The paper sets forth the paradigm's six major concepts and discusses the principal domains of its application. The paper also examines the relationships between the decision-making paradigm and the literatures on (1) organizational decision making, (2) the information processing view of organizations, and (3) the need for compatibility between the organization's design and the design of its technologically supported information systems. The paper concludes by identifying ten organizational design guidelines that follow from the decision-making paradigm.
--- paper_title: An Empirical Study of Manufacturing Flexibility in Printed Circuit Board Assembly paper_content: This paper addresses the empirical verification of hypotheses that relate to the strategic use and implementation of manufacturing flexibility. We begin with a literature review and framework for analyzing different types of flexibility in manufacturing. Next, we examine some of the propositions in the framework using data from 31 printed circuit-board plants in Europe, Japan, and the United States. Based on our analysis and findings, we then suggest several new strategic insights related to the management of flexibility and some potentially fruitful areas for further theoretical and empirical research. Our findings include: more automation is associated empirically with less flexibility, as found in other studies; nontechnology factors, such as high involvement of workers in problem-solving activities, close relationships with suppliers, and flexible wage schemes, are associated with greater mix, volume, and new-product flexibility; component reusability is significantly correlated with mix and new-product flexibility; achieving high-mix or new-product flexibility does not seem to involve a cost or quality penalty; mix and new-product flexibility are mutually reinforcing and tend to be supported by similar factors; and mix flexibility may reduce volume fluctuations, which could theoretically reduce the need for volume flexibility. --- paper_title: Building the Flexible Firm: How to Remain Competitive paper_content: Review of: H.W. Volberda, Building the Flexible Firm: How to Remain Competitive, Oxford: Oxford University Press, 1999, ISBN 019829090X --- paper_title: Flexibility in Manufacturing: A Survey paper_content: This article is an attempt to survey the vast literature on flexibility in manufacturing that has accumulated over the last 10 to 20 years. The survey begins with a brief review of the classical literature on flexibility in economics and organization theory, which provides a background for manufacturing flexibility. Several kinds of flexibilities in manufacturing are then defined carefully along with their purposes, the means to obtain them, and some suggested measurements and valuations. Then we examine the interrelationships among the several flexibilities. Various empirical studies and analytical/optimization models dealing with these flexibilities are reported and discussed. The article concludes with suggestions for some possible future research directions. --- paper_title: THE NEED FOR STRATEGIC FLEXIBILITY paper_content: How can today's business cope with increasing uncertainty? The answer lies in opening up avenues of strategic flexibility. The authors tell how. --- paper_title: Strategy and Environment: A Conceptual Integration paper_content: An elaboration of the concepts of strategy and environment can be achieved by categorizing environment into its objective and perceived states, and by subdividing strategy according to content (outcomes) or process. The objective environment can be further categorized into “task” and “general.” An alternative subdivision of strategy is primary (domain selection) and secondary (competitive approach). The concepts of strategy and environment are integrated in that primary strategy concerns opportunities in the general environment and secondary strategy involves navigating within a task environment.
--- paper_title: Business Process Reengineering and Flexibility: A Case for Unification paper_content: Business process reengineering (BPR) offers a radical approach to improving the performance of an organization. However, although there have been successes, BPR is recognized as a high-risk activity, prone to failure. There are a variety of reasons for this, and this paper highlights one which is argued to be the lack of attention that BPR pays to flexibility and its inability to cope with a changing environment. The purpose of this article is to raise the issue of flexibility within BPR and an approach is taken that examines flexibility in other business functional areas, such as manufacturing, architecture, information systems, and organizational strategy, where there is an extensive literature that indicates the importance of flexibility. The lessons from these other areas are identified and some of the implications for BPR are highlighted. A number of proposals are made including the suggestion that a form of “flexibility analysis” be adopted as a stage in BPR projects. It is argued that this would help to move the focus of a BPR project away from the current requirements toward a longer term, more flexible, enduring set of requirements. Flexibility analysis also ensures analysis of the kinds of changes that might be required over time, and how such change could be accommodated in the reengineered processes. --- paper_title: Competitive Strategy Under Uncertainty. paper_content: Competitive strategy under uncertainty involves a trade-off between acting early and acting later after the uncertainty is resolved, and another trade-off between focusing resources on one scenario and spreading resources on several scenarios thus maintaining flexibility. This paper analyzes both these trade-offs taking into consideration the nature of uncertainty, industry economics, intensity of competition, and the position of a firm relative to its competitors. --- paper_title: Flexibility Ratios and Manufacturing Strategy paper_content: In this exploratory, empirical study of modernizing durable goods plants, it was found that typical measures of flexibility (e.g., number of unique parts and part families) are independent. More importantly, plants and firms with greater strategic manufacturing focus, regardless of specific emphasis (e.g., cost or quality), scheduled fewer part numbers on new flexible automation systems. This suggests that product focus and strategic focus are related in plants producing discrete parts. When flexibility is emphasized as a strategic manufacturing focus, new automation systems are significantly more likely to have shorter change-over times per part family. In general, part family-changeover time ratios appear to have the greatest potential of measures evaluated for building a useful theory of flexibility in discrete parts manufacturing. An evaluation of changes made in part types and part families during the implementation period showed that product flexibility is pursued as a way to reduce high labor costs in manufacturing. These plants accomplished this end by increasing the number of parts scheduled on new systems. Implications for strategic management of flexibility and scope are presented. --- paper_title: Implementation and management framework for supply chain flexibility paper_content: Purpose – The purpose of this research is to develop a conceptual framework for implementing and managing supply chain flexibility in supply chain organizations.
The framework suggests that supply chain flexibility should be implemented and managed using a three‐stage approach: required flexibility identification, implementation and shared responsibility, and feedback and control.Design/methodology/approach – The major components of the proposed framework are based on a review of research in the manufacturing flexibility literature as well as the limited research in supply chain flexibility. The strengths and weaknesses of these frameworks, combined with a published empirical study were analyzed to identify the important issues that must be considered when implementing and managing supply chain flexibility, and those components that need to be incorporated into a new integrated framework.Findings – This framework was constructed by synthesizing the strengths of other conceptual frameworks. As a result, th... --- paper_title: A framework for the sustainability of e‐marketplaces paper_content: Electronic marketplaces have promised many benefits to participants, and hence have aroused considerable interest in the business community. However, the failure of some marketplaces and the success of others have led business managers to question which marketplaces will be successful in the future, and even whether the entire idea is viable. This question is particularly pressing for those considering sponsoring or participating in a marketplace. This exploratory study seeks to address these issues by proposing a framework of the factors that help explain the sustainability of e‐marketplaces. The framework proposed is based upon the findings of interviews carried out with 14 managers based in 11 companies active in the field of e‐marketplaces, and findings from the current literature from this domain. The framework proposed identifies seven factors that can be categorised according to three levels of influence, i.e. the macroeconomic/regulatory level, the industry level, and the firm level. Further work to validate the proposed framework would provide practitioners with additional insight to apply to their e‐marketplace strategies. --- paper_title: A review of manufacturing flexibility paper_content: Abstract In the field of operations management, manufacturing flexibility has been the subject of much academic enquiry. Moreover, the need for this fundamental characteristic has never been more urgent. However, a comprehensive understanding of the subject remains elusive. An extensive review of the literature is used to examine the issues surrounding the concept of manufacturing flexibility. Specifically: the use of manufacturing flexibility as a strategic objective, the relationship flexibility has with environmental uncertainty, the use of taxonomies as a vehicle for furthering understanding of the types of flexibility, the nature of flexibility, and its measurement. Through this process of synthesis, the paper attempts to establish the extent to which knowledge of manufacturing flexibility has now progressed. Suggestions for future research topics in flexibility are also presented. --- paper_title: THE NEED FOR STRATEGIC FLEXIBILITY paper_content: How can today's business cope with increasing uncertainty? The answer lies in opening up avenues of strategic flexibility. The authors tell how. --- paper_title: Manufacturing flexibility: a strategic perspective paper_content: To help meet competitive realities operations managers need to know more about the strategic aspects of manufacturing flexibility. 
This paper takes steps toward meeting that need by critically reviewing the literature and establishing a research agenda for the area. A conceptual model, which places flexibility within a broad context, helps to identify certain assumptions of theoretical studies which need to be challenged. The model also provides a basis for identifying specific flexibility dimensions. The manner in which these dimensions may limit the effectiveness of a manufacturing process, and the problems in operationalizing them are discussed. Focusing next on the neglected area of applied work, concepts are presented for analyzing whether desired amounts of flexibility are being achieved and whether the potential for flexibility built into a manufacturing process is being tapped. Once more, a procedure is outlined for altering a plant's types and amounts of flexibility over time. The research agenda, which grows out of the appraisal of theoretical and applied work, indicates the value in studying generic flexibility strategies, the flexibility dimensions, methods of delivery, ways of evaluating and changing a process's flexibility, and above all measurement problems. The conclusions indicate principles for strategic research, some of which have relevance for the development of mathematical models. --- paper_title: STRATEGIC FLEXIBILITY FOR HIGH TECHNOLOGY MANOEUVRES: A CONCEPTUAL FRAMEWORK paper_content: Strategic flexibility is proposed as an expedient capability for managing capricious settings, such as those confronted in technology‐intensive arenas. This article examines the historical evolution of the concept of flexibility and analyses its different senses by relating it to other concepts with a ‘family resemblance’. A conceptual framework is subsequently developed, which integrates the temporal and intentional dimensions of flexibility. Four archetypal manoeuvres, derived from the framework, are proposed as a means of attaining strategic flexibility. The deployment of these manoeuvres is exemplified by means of selected strategic engagements of firms in the computer peripherals arena. The article concludes with a discussion of the theoretical and practical implications of the research. --- paper_title: Business Process Reengineering and Flexibility: A Case for Unification paper_content: Business process reengineering (BPR) offers a radical approach to improving the performance of an organization. However, although there have been successes, BPR is recognized as a high-risk activity, prone to failure. There are a variety of reasons for this, and this paper highlights one which is argued to be the lack of attention that BPR pays to flexibility and its inability to cope with a changing environment. The purpose of this article is to raise the issue of flexibility within BPR and an approach is taken that examines flexibility in other business functional areas, such as manufacturing, architecture, information systems, and organizational strategy, where there is an extensive literature that indicates the importance of flexibility. The lessons from these other areas are identified and some of the implications for BPR are highlighted. A number of proposals are made including the suggestion that a form of “flexibility analysis” be adopted as a stage in BPR projects. It is argued that this would help to move the focus of a BPR project away from the current requirements toward a longer term, more flexible, enduring set of requirements.
Flexibility analysis also ensures analysis of the kinds of changes that might be required over time, and how such change could be accommodated in the reengineered processes. --- paper_title: Engineering supply chains to match customer requirements paper_content: Modern day market places are highly varied and cannot be serviced effectively by a single supply chain paradigm. Consequently products and services must be provided to the end consumer via tailored supply chain strategies. This article categorises consumer products and details the specific supply chain management tools and techniques required to service each. A comparison of lean and agile strategies is provided along with a detailed explanation of the integration of the two within a Leagile supply chain. The application of such a strategy for electronic products is provided via a four stage case study. A route map for engineering supply chains to match customer requirements is developed in order to avoid costly and ineffective mismatches of supply chain strategy to product characteristics. --- paper_title: Systematic Review in Software Engineering paper_content: A kit of assemblable components for implantation into the bone of a mammal for use in distraction osteogenesis. The kit comprises a fixture, a footing and a distracter, the fixture including a longitudinally extending body portion with a proximal end and a distal end, the body portion having an exterior surface adapted for contact and integration with bone tissue, the body portion having a generally longitudinally extending bore extending from a proximal opening adjacent the proximal end to a distal opening adjacent the distal end. The footing includes a proximal surface and a distal surface. The distracter comprises a generally rod-shaped body including a distal end and a proximal end, and the proximal end of the distracter is adapted to bear against the footing. There are first and second engaging means on the fixture and the distracter respectively for adjustably locating the fixture relative to the distracter. --- paper_title: Factors influencing clients in the selection of offshore software outsourcing vendors: An exploratory study using a systematic literature review paper_content: Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation's track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). 
We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990-1999 and 2000-mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in the OSDO business, such as skilled human resource, appropriate infrastructure and quality of products and services. --- paper_title: Systematic Review in Software Engineering paper_content: A kit of assemblable components for implantation into the bone of a mammal for use in distraction osteogenesis. The kit comprises a fixture, a footing and a distracter, the fixture including a longitudinally extending body portion with a proximal end and a distal end, the body portion having an exterior surface adapted for contact and integration with bone tissue, the body portion having a generally longitudinally extending bore extending from a proximal opening adjacent the proximal end to a distal opening adjacent the distal end. The footing includes a proximal surface and a distal surface. The distracter comprises a generally rod-shaped body including a distal end and a proximal end, and the proximal end of the distracter is adapted to bear against the footing. There are first and second engaging means on the fixture and the distracter respectively for adjustably locating the fixture relative to the distracter. --- paper_title: Supply management and e-procurement: creating value added in the supply chain paper_content: Abstract The increasing emphasis on supply chain management is creating a greater focus on the supply management link in the supply chain. This focus will become even more intense as firms continue to adopt e-procurement strategies to leverage the competitive advantages of the Internet. Supply managers need to understand the impact of technology and gain competency in making a business case for e-procurement. The implications are profound for the industrial marketer. --- paper_title: Business Process Reengineering and Flexibility: A Case for Unification paper_content: Business process reengineering (BPR) offers a radical approach to improving the performance of an organization. However, although there have been successes BPR is recognized as a high-risk activity, prone to failure. There are a variety of reasons for this and this paper highlights one which is argued to be the lack of attention that BPR pays to flexibility and its inability to cope with a changing environment. The purpose of this article is to raise the issue of flexibility within BPR and an approach is taken that examines flexibility in other business functional areas, such as manufacturing, architecture, information systems, and organizational strategy, where there is an extensive literature that indicates the importance of flexibility. The lessons from these other areas are identified and some of the implications for BPR are highlighted. A number of proposals are made including the suggestion that a form of “flexibility analysis” be adopted as a stage in BPR projects. It is argued that this would help to move the focus of a BPR project away from the current requirements toward a longer term, more flexible, enduring set of requirements. Flexibility analysis also ensures analysis of the kinds of changes that might be required over time, and how such change could be accommodated in the reengineered processes. 
--- paper_title: The development of e‐procurement within the ICT manufacturing industry in Ireland paper_content: Purpose – The purpose of this paper is to show that e‐procurement provides manufacturing firms with new and efficient solutions to drive significant value into their business, yet generally the use of internet technologies to accommodate e‐procurement systems remains in a formative stage. Previous research tends to focus on larger economies, so this paper provides a new perspective by presenting evidence from the Irish ICT manufacturing industry.Design/methodology/approach – The research locale is justified on the basis that the ICT manufacturing sector has a greater propensity to adopt technologies such as e‐procurement. In addition, by conducting the research in a small peripheral economy, a gap in the knowledge base is being addressed. The exploratory research adopted a quantitative methodology with a questionnaire instrument being employed to investigate various e‐procurement activities within a sample of top performing ICT manufacturing firms.Findings – Findings show that e‐procurement is developing ... ---
Title: A Systematic Literature Review of Flexible E-Procurement Marketplace Section 1: E-Procurement Marketplace Description 1: Discuss the definition and evolution of e-procurement marketplaces (EPM) and their significance in buyer-seller relationships. Section 2: Flexibility Description 2: Define flexibility and its importance, covering various types and aspects such as operational, hierarchical, measurement, strategic and time horizon. Section 3: From Web Evolution to Development of Flexible EPM Description 3: Analyze the phases of web evolution and how they have influenced the flexible nature of EPMs, highlighting key technological advancements and changes in business processes. Section 4: A Synthesis of Flexibility Types Into an EPM Framework Description 4: Synthesize the different flexibility types identified in the literature into a coherent framework applicable to EPMs, detailing each type and its relevance. Section 5: Research Methodology Description 5: Present the systematic literature review methodology applied in this study, including research questions, search process, inclusion and exclusion criteria, and data extraction and analysis procedures. Section 6: Discussion Description 6: Discuss the findings of the systematic literature review, addressing the research questions and providing insights into the temporal aspects and domain-specific applications of EPM flexibility. Section 7: Conclusion Description 7: Summarize the key findings of the study, the importance of flexibility in EPMs, and suggestions for future research directions in the field.
A Survey of Application Layer Techniques for Adaptive Streaming Of Multimedia
22
--- paper_title: Adaptive vector quantization of image sequences using generalized threshold replenishment paper_content: We describe a new adaptive vector quantization (AVQ) algorithm designed for the coding of nonstationary sources. This new algorithm, generalized threshold replenishment (GTR), differs from prior AVQ algorithms in that it features an explicit, online consideration of both rate and distortion. Rate-distortion cost criteria are used in the determination of nearest-neighbor codewords as well as in the decision to update the codebook. Results presented indicate that, for the coding of an image sequence: (1) most AVQ algorithms achieve distortion much lower than that of nonadaptive VQ for the same rate (about 1.5 bits/pixel), and (2) the GTR algorithm achieves rate-distortion performance substantially superior to that of other AVQ algorithms for low-rate coding, being the only algorithm to achieve a rate below 1.0 bits/pixel. --- paper_title: A rate control mechanism for packet video in the Internet paper_content: Datagram networks such as the Internet do not provide guaranteed resources such as bandwidth or guaranteed performance measures such as maximum delay. One way to support packet video in these networks is to use feedback mechanisms that adapt the output rate of video coders based on the state of the network. The authors present one such mechanism. They describe the feedback information, and how it is used by the coder control algorithm. They also examine how the need to operate in a multicast environment impacts the design of the control mechanism. This mechanism has been implemented in the H.261 video coder of IVS. IVS is a videoconference system for the Internet developed at INRIA. Experiments indicate that the control mechanism is well suited to the Internet environment. In particular, it makes it possible to establish and maintain quality videoconferences even across congested connections in the Internet. Furthermore, it prevents video sources from swamping the resources of the Internet, which could lead to unacceptable service to all users of the network. --- paper_title: SAVE: an algorithm for smoothed adaptive video over explicit rate networks paper_content: Supporting compressed video efficiently on networks is a challenge because of its burstiness. Although a large number of applications using compressed video are rate adaptive, it is also important to preserve quality as much as possible. We propose a smoothing and rate adaptation algorithm, called SAVE, that the compressed video source uses in conjunction with explicit rate based control in the network. SAVE smoothes the demand from the source to the network, thus helping achieve good multiplexing gains. SAVE maintains the quality of the video and ensures that the delay at the source buffer does not exceed a bound. We examine the effectiveness of SAVE across 28 different traces (entertainment and teleconferencing videos) using different compression algorithms. --- paper_title: A rate control mechanism for packet video in the Internet paper_content: Datagram networks such as the Internet do not provide guaranteed resources such as bandwidth or guaranteed performance measures such as maximum delay. One way to support packet video in these networks is to use feedback mechanisms that adapt the output rate of video coders based on the state of the network. The authors present one such mechanism. They describe the feedback information, and how it is used by the coder control algorithm. 
They also examine how the need to operate in a multicast environment impacts the design of the control mechanism. This mechanism has been implemented in the H.261 video coder of IVS. IVS is a videoconference system for the Internet developed at INRIA. Experiments indicate that the control mechanism is well suited to the Internet environment. In particular, it makes it possible to establish and maintain quality videoconferences even across congested connections in the Internet. Furthermore, it prevents video sources from swamping the resources of the Internet, which could lead to unacceptable service to all users of the network. --- paper_title: Control mechanisms for packet audio in the Internet paper_content: The Internet provides a single class best effort service. From an application's point of view, this service amounts in practice to providing channels with time-varying characteristics such as delay and loss distributions. One way to support real time applications such as interactive audio given this service is to use control mechanisms that adapt the audio coding and decoding processes based on the characteristics of the channels, the goal being to maximize the quality of the audio delivered to the destinations. In this paper, we describe and analyze a set of such control mechanisms. They include a jitter control mechanism and a combined error and rate control mechanism. These mechanisms have been implemented and evaluated over the Internet and the MBone. Experiments indicate that they make it possible to establish and maintain reasonable quality audioconferences even across fairly congested connections. --- paper_title: Time Constrained Bandwidth Smoothing For Interactive Video-On-Demand Systems paper_content: The use of a client-side buffer in the delivery of compressed prerecorded video can be an effective tool for removing the burstiness required of the underlying server and network by smoothing the bandwidth requirements for continuous delivery. Given a fixed-size smoothing buffer, several bandwidth smoothing algorithms have been introduced in the literature that are provably optimal under certain constraints, typically requiring large buffer residency times to realize their optimal properties. The large buffer residency times, however, make VCR functions harder to support. In this paper, we introduce the notion of time constrained bandwidth smoothing. Specifically, we introduce two new algorithms that, in addition to the size of the client side buffer, use a time constraint as a parameter in the bandwidth smoothing plan creation, making the plans more amenable to supporting VCR interactivity. Our results show that the buffer residency times can be reduced, while still allowing the bandwidth allocation to be smoothed for continuous video delivery. --- paper_title: Supporting stored video: reducing rate variability and end-to-end resource requirements through optimal smoothing paper_content: VBR compressed video is known to exhibit significant, multiple-time-scale bit rate variability. In this paper, we consider the transmission of stored video from a server to a client across a high speed network, and explore how the client buffer space can be used most effectively toward reducing the variability of the transmitted bit rate. We present two basic results. First, we present an optimal smoothing algorithm for achieving the greatest possible reduction in rate variability when transmitting stored video to a client with given buffer size. 
We provide a formal proof of optimality, and demonstrate the performance of the algorithm on a set of long MPEG-1 encoded video traces. Second, we evaluate the impact of optimal smoothing on the network resources needed for video transport, under two network service models: Deterministic Guaranteed service [1, 9] and Renegotiated CBR (RCBR) service [8, 7]. Under both models, we find the impact of optimal smoothing to be dramatic. --- paper_title: Rate-constrained bandwidth smoothing for delivery of stored video paper_content: Bandwidth smoothing techniques for the delivery of compressed prerecorded video have been shown effective in removing the burstiness required for the continuous playback of stored video. Given a fixed client-side buffer, several bandwidth smoothing algorithms have been introduced that are provably optimal under certain constraints. These algorithms, however, may be too aggressive in the amount of data that they prefetch, making it more difficult to support VCR functions that are required for interactive video-on-demand systems. In this paper, we introduce a rate-constrained bandwidth smoothing algorithm for the delivery of stored video that, given a fixed maximum bandwidth rate, minimizes both the smoothing buffer requirements as well as the buffer residency requirements. By minimizing the buffer residency times, the clients and servers can remain more tightly coupled, making VCR functions easier to support. A comparison between the rate-constrained bandwidth smoothing algorithm and other bandwidth smoothing algorithms is presented using a compressed full-length movie. --- paper_title: Critical Bandwidth Allocation for the Delivery of Compressed Video paper_content: The transportation of compressed video data without loss of picture quality requires the network to support large fluctuations in bandwidth requirements. These fluctuations can be smoothed, but straightforward approaches to smoothing can still suffer from excessive buffering requirements, poor buffer utilization and an excessive number of bandwidth changes. This paper introduces critical bandwidth allocation, which reduces the number of bandwidth changes to a very small number, and achieves the maximum effectiveness from client-side buffers. A comparison between critical bandwidth allocation algorithms and other smoothing algorithms is presented, the sensitivity of the algorithm to jitter is examined, and implications for the design of network services are discussed. --- paper_title: An optimal bandwidth allocation strategy for the delivery of compressed prerecorded video paper_content: The transportation of prerecorded, compressed video data without loss of picture quality requires the network and video servers to support large fluctuations in bandwidth requirements. Fully utilizing a client-side buffer for smoothing bandwidth requirements can limit the fluctuations in bandwidth required from the underlying network and the video-on-demand servers. This paper shows that, for a fixed-size buffer constraint, the critical bandwidth allocation technique results in plans for continuous playback of stored video that have (1) the minimum number of bandwidth increases, (2) the smallest peak bandwidth requirements, and (3) the largest minimum bandwidth requirements. 
In addition, this paper introduces an optimal bandwidth allocation algorithm which, in addition to the three critical bandwidth allocation properties, minimizes the total number of bandwidth changes necessary for continuous playback. A comparison between the optimal bandwidth allocation algorithm and other critical bandwidth-based algorithms using 17 full-length movie videos and 3 seminar videos is also presented. ---
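The smoothing and critical bandwidth allocation entries above all build on the same feasibility idea: the cumulative data transmitted to the client must stay at or above the cumulative data consumed by playback (to avoid buffer underflow) and at or below that amount plus the client buffer size (to avoid overflow). The short Python sketch below illustrates only this feasibility envelope and a brute-force check of constant-rate plans against it; it is not the optimal smoothing or critical bandwidth allocation algorithm of the cited papers, and the frame sizes and buffer size are invented for illustration.

def smoothing_envelope(frame_bits, client_buffer_bits):
    # D(t): cumulative bits consumed by playback after t frame intervals.
    # Any transmission schedule S(t) must satisfy D(t) <= S(t) <= D(t) + buffer.
    lower, upper = [0], [client_buffer_bits]
    consumed = 0
    for bits in frame_bits:
        consumed += bits
        lower.append(consumed)
        upper.append(consumed + client_buffer_bits)
    return lower, upper

def constant_rate_is_feasible(rate_per_interval, lower, upper):
    # A constant-rate plan has sent rate_per_interval * t bits after t intervals.
    return all(lower[t] <= rate_per_interval * t <= upper[t] for t in range(len(lower)))

frames = [120, 30, 30, 200, 40, 35, 180, 25]          # hypothetical frame sizes in kbits
lower, upper = smoothing_envelope(frames, client_buffer_bits=400)
feasible = [r for r in range(1, 500) if constant_rate_is_feasible(r, lower, upper)]
print("feasible constant rates (kbits per interval):", feasible[0], "to", feasible[-1])

Enlarging the client buffer widens the set of feasible rates; this is the trade-off between buffer residency time and rate variability that the time-constrained and rate-constrained smoothing algorithms above negotiate.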
Title: A Survey of Application Layer Techniques for Adaptive Streaming Of Multimedia Section 1: Introduction Description 1: Provide an overview of the need for adaptive multimedia streaming and the current challenges in supporting multimedia applications over heterogeneous and variable network conditions. Section 2: Compression Level Methods Description 2: Discuss various video compression algorithms and their features which are useful for adaptation, including discrete cosine transformation (DCT), wavelet transforms, and proprietary methods. Section 3: MPEG Compression Standard Description 3: Describe the MPEG set of standards used for video compression, focusing on MPEG-2, including its encoding techniques, types of frames (I, P, B), and adaptability features. Section 4: Wavelet Encoding Description 4: Explore wavelet compression methods, their approaches for video encoding, and the scalability features they support. Section 5: Proprietary Methods Description 5: Cover commercial applications that use proprietary compression and adaptation methods, such as RealVideo and Intel's Indeo. Section 6: Application Streaming Description 6: Discuss adaptive streaming techniques at the application level, including layered encoding, adaptive error control, adaptive synchronization, and smoothing. Section 7: Layered Encoding Description 7: Explain the methodology and benefits of layered encoding for video streaming, including network feedback mechanisms. Section 8: Receiver Driven Multicast Description 8: Describe receiver-driven layered multicast (RLM) and its approach to adapting the transmission of layered video through network feedback. Section 9: Rate Shaping Description 9: Examine rate shaping techniques which adjust the traffic rate generated by the video encoder according to current network conditions using feedback mechanisms. Section 10: Error Control Description 10: Discuss methods to mitigate errors and packet loss, including Automatic Repeat Request (ARQ) and Forward Error Correction (FEC). Section 11: Adaptive FEC for Internet Audio Description 11: Analyze an adaptive FEC-based error control scheme for interactive audio in the Internet. Section 12: Adaptive FEC for Internet Video Description 12: Discuss an adaptive FEC-based error control scheme specifically designed for Internet video. Section 13: Adaptive Synchronization Description 13: Explain methods for adaptive synchronization to solve both intramedia and intermedia synchronization issues in multimedia applications. Section 14: Smoothing Description 14: Explore techniques for smoothing or shaping the video information transmitted to mitigate rate variations of multimedia applications. Section 15: Smoothing Algorithms Description 15: Discuss various algorithms for smoothing compressed video streams to ensure continuous and optimal playback at the client side. Section 16: Online Smoothing Description 16: Examine online smoothing algorithms for live video applications that reduce the resource variability using window-based approaches. Section 17: Proactive Buffering Description 17: Describe rate-constrained bandwidth smoothing techniques for stored video and how they proactively manage buffers and bandwidth. Section 18: Bridging Bandwidth Smoothing and Adaptation Techniques Description 18: Discuss how combining reactive and passive techniques can optimize video quality over best-effort networks by utilizing a priori information. 
Section 19: Example Adaptive Applications Description 19: Provide examples of commercial adaptive applications such as Real Network Solutions, highlighting the adaptive streaming techniques they employ. Section 20: Operating System Support for Adaptive Multimedia Description 20: Discuss operating system-level support necessary for adaptive multimedia streaming, such as integrated CPU and network-I/O QoS management and adaptive rate-controlled scheduling. Section 21: Related Work Description 21: Present a summary of related research efforts, focusing on techniques that support multimedia streaming and identifying potential for adaptation. Section 22: Summary Description 22: Summarize key points from the survey, emphasizing the importance of adaptive techniques at different layers and the potential benefits of low-level network feedback.
Multivariate and Multiscale Data Assimilation in Terrestrial Systems: A Review
16
--- paper_title: Improvements to the Community Land Model and their impact on the hydrological cycle: COMMUNITY LAND MODEL HYDROLOGY paper_content: [1] The Community Land Model version 3 (CLM3) is the land component of the Community Climate System Model (CCSM). CLM3 has energy and water biases resulting from deficiencies in some of its canopy and soil parameterizations related to hydrological processes. Recent research by the community that utilizes CLM3 and the family of CCSM models has indicated several promising approaches to alleviating these biases. This paper describes the implementation of a selected set of these parameterizations and their effects on the simulated hydrological cycle. The modifications consist of surface data sets based on Moderate Resolution Imaging Spectroradiometer products, new parameterizations for canopy integration, canopy interception, frozen soil, soil water availability, and soil evaporation, a TOPMODEL-based model for surface and subsurface runoff, a groundwater model for determining water table depth, and the introduction of a factor to simulate nitrogen limitation on plant productivity. The results from a set of offline simulations were compared with observed data for runoff, river discharge, soil moisture, and total water storage to assess the performance of the new model (referred to as CLM3.5). CLM3.5 exhibits significant improvements in its partitioning of global evapotranspiration (ET) which result in wetter soils, less plant water stress, increased transpiration and photosynthesis, and an improved annual cycle of total water storage. Phase and amplitude of the runoff annual cycle is generally improved. Dramatic improvements in vegetation biogeography result when CLM3.5 is coupled to a dynamic global vegetation model. Lower than observed soil moisture variability in the rooting zone is noted as a remaining deficiency. --- paper_title: Soil moisture retrieval from space: the Soil Moisture and Ocean Salinity (SMOS) mission paper_content: Microwave radiometry at low frequencies (L-band: 1.4 GHz, 21 cm) is an established technique for estimating surface soil moisture and sea surface salinity with a suitable sensitivity. However, from space, large antennas (several meters) are required to achieve an adequate spatial resolution at L-band. So as to reduce the problem of putting into orbit a large filled antenna, the possibility of using antenna synthesis methods has been investigated. Such a system, relying on a deployable structure, has now proved to be feasible and has led to the Soil Moisture and Ocean Salinity (SMOS) mission, which is described. The main objective of the SMOS mission is to deliver key variables of the land surfaces (soil moisture fields), and of ocean surfaces (sea surface salinity fields). The SMOS mission is based on a dual polarized L-band radiometer using aperture synthesis (two-dimensional [2D] interferometer) so as to achieve a ground resolution of 50 km at the swath edges coupled with multiangular acquisitions. The radiometer will enable frequent and global coverage of the globe and deliver surface soil moisture fields over land and sea surface salinity over the oceans. The SMOS mission was proposed to the European Space Agency (ESA) in the framework of the Earth Explorer Opportunity Missions. It was selected for a tentative launch in 2005. The goal of this paper is to present the main aspects of the baseline mission and describe how soil moisture will be retrieved from SMOS data. 
--- paper_title: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics paper_content: A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The unbounded error growth found in the extended Kalman filter, which is caused by an overly simplified closure in the error covariance equation, is completely eliminated. Open boundaries can be handled as long as the ocean model is well posed. Well-known numerical instabilities associated with the error covariance equation are avoided because storage and evolution of the error covariance matrix itself are not needed. The results are also better than what is provided by the extended Kalman filter since there is no closure problem and the quality of the forecast error statistics therefore improves. The method should be feasible also for more sophisticated primitive equation models. The computational load for reasonable accuracy is only a fraction of what is required for the extended Kalman filter and is given by the storage of, say, 100 model states for an ensemble size of 100 and thus CPU requirements of the order of the cost of 100 model integrations. The proposed method can therefore be used with realistic nonlinear ocean models on large domains on existing computers, and it is also well suited for parallel computers and clusters of workstations where each processor integrates a few members of the ensemble. --- paper_title: Real-time groundwater flow modeling with the Ensemble Kalman Filter: Joint estimation of states and parameters and the filter inbreeding problem paper_content: Real-time groundwater flow modeling with filter methods is interesting for dynamical groundwater flow systems, for which measurement data in real-time are available. The Ensemble Kalman Filter (EnKF) approach is used here to update states together with parameters by adopting an augmented state vector approach. The performance of EnKF is investigated in a synthetic study with a two-dimensional transient groundwater flow model where (1) only the recharge rate is spatiotemporally variable, (2) only transmissivity is spatially variable with σ²lnT = 1.0 or (3) with σ²lnT = 2.7, and (4) both recharge rate and transmissivity are uncertain (a combination of (1) and (3)). The performance of EnKF for simultaneous state and parameter estimation in saturated groundwater flow problems is investigated in dependence of the number of stochastic realizations, the updating frequency and updating intensity of log-transmissivity, the amount of measurements in space and time, and the method (iterative versus noniterative EnKF), among others. Satisfactory results were also obtained if both transmissivity and recharge rate were uncertain. However, it was found that filter inbreeding is much more severe if hydraulic heads and transmissivities are jointly updated than if only hydraulic heads are updated. The filter inbreeding problem was investigated in more detail and could be strongly reduced with help of a damping parameter, which limits the intensity of the perturbation of the log-transmissivity field. 
An additional reduction of filter inbreeding could be achieved by combining two measures: (1) inflating the elements of the predicted state covariance matrix on the basis of a comparison between the model uncertainty and the observed errors at the measurement points and (2) starting the flow simulations with a very large number of realizations and then sampling the desired number of realizations after one simulation time step by minimizing the differences between the local cpdfs (and bivariate cpdfs) of hydraulic head for the large ensemble and the corresponding cpdfs for the reduced ensemble. The two measures, which cause very limited CPU costs, allowed making 100 stochastic realizations for the reproduction of the states as efficient as 200–500 untreated stochastic realizations. --- paper_title: Surface-subsurface flow modeling with path-based runoff routing, boundary condition-based coupling, and assimilation of multisource observation data paper_content: A distributed physically based model incorporating novel approaches for the representation of surface-subsurface processes and interactions is presented. A path-based description of surface flow across the drainage basin is used, with several options for identifying flow directions, for separating channel cells from hillslope cells, and for representing stream channel hydraulic geometry. Lakes and other topographic depressions are identified and specially treated as part of the preprocessing procedures applied to the digital elevation data for the catchment. Threshold-based boundary condition switching is used to partition potential (atmospheric) fluxes into actual fluxes across the land surface and changes in surface storage, thus resolving the exchange fluxes, or coupling, between the surface and subsurface modules. Nested time stepping allows smaller steps to be taken for typically faster and explicitly solved surface runoff routing, while a mesh coarsening option allows larger grid elements to be used for typically slower and more compute-intensive subsurface flow. Sequential data assimilation schemes allow the model predictions to be updated with spatiotemporal observation data of surface and subsurface variables. These approaches are discussed in detail, and the physical and numerical behavior of the model is illustrated over catchment scales ranging from 0.0027 to 356 km², addressing different hydrological processes and highlighting the importance of describing coupled surface-subsurface flow. --- paper_title: A Network of Terrestrial Environmental Observatories in Germany paper_content: Multicompartment and multiscale long-term observation and research are important prerequisites to tackling the scientific challenges resulting from climate and global change. Long-term monitoring programs are cost intensive and require high analytical standards, however, and the gain of knowledge often requires longer observation times. Nevertheless, several environmental research networks have been established in recent years, focusing on the impact of climate and land use change on terrestrial ecosystems. From 2008 onward, a network of Terrestrial Environmental Observatories (TERENO) has been established in Germany as an interdisciplinary research program that aims to observe and explore the long-term ecological, social, and economic impacts of global change at the regional level. 
State-of-the-art methods from the field of environmental monitoring, geophysics, and remote sensing will be used to record and analyze states and fluxes for different environmental compartments from groundwater through the vadose zone, surface water, and biosphere, up to the lower atmosphere. --- paper_title: Radiance data assimilation for operational snow and streamflow forecasting paper_content: Estimation of seasonal snowpack, in mountainous regions, is crucial for accurate streamflow prediction. This paper examines the ability of data assimilation (DA) of remotely sensed microwave radiance data to improve snow water equivalent prediction, and ultimately operational streamflow forecasts. Operational streamflow forecasts in the National Weather Service River Forecast Center (NWSRFC) are produced with a coupled SNOW17 (snow model) and SACramento Soil Moisture Accounting (SAC-SMA) model. A comparison of two assimilation techniques, the ensemble Kalman filter (EnKF) and the particle filter (PF), is made using a coupled SNOW17 and the microwave emission model for layered snow pack (MEMLS) model to assimilate microwave radiance data. Microwave radiance data, in the form of brightness temperature (TB), is gathered from the advanced microwave scanning radiometer-earth observing system (AMSR-E) at the 36.5 GHz channel. SWE prediction is validated in a synthetic experiment. The distribution of snowmelt from an experiment with real data is then used to run the SAC-SMA model. Several scenarios on state or joint state-parameter updating with TB data assimilation to SNOW-17 and SAC-SMA models were analyzed, and the results show potential benefit for operational streamflow forecasting. --- paper_title: An adjoint data assimilation approach for estimating parameters in a three-dimensional ecosystem model paper_content: In this paper an ecosystem model, including phytoplankton, zooplankton, nitrate, ammonium, phosphate and detritus, is described. The model is driven by physical fields derived from a three-dimensional physical transport model. Simulation includes nitrate input from a river. Simulated results are then sampled and the sampled data are used in sequential numerical experiments to assess the ability of using an adjoint data assimilation approach for estimating the poorly known parameters of the ecosystem model, such as growth and death rate, half-saturation constant of nutrients, etc. Data with different spatial and temporal resolution over 1 week are assimilated into the ecosystem model. Assimilation of data at 30 grid stations with a sampling interval of 6 h is proved to be adequate for recovering all the parameters of the ecosystem model. Both the spatial and temporal resolution of the data are mutually complementary in the assimilative model. Thus, improvement of either of them can result in improvement of model parameter recoveries. The assimilation of phytoplankton data is essential to recover the model parameters. Phytoplankton is the core of the food web and without the information on phytoplankton, the structure of the ecosystem cannot be constructed correctly. The adjoint method can work well with the noisy data. In the twin experiments with noisy data, the parameters can be recovered but the error is increased. The results of the model and parameter recovery are sensitive to the initial conditions of state variables, so the determination of the initial condition is as important as that of the model parameter. 
The spatial and temporal resolution and the data type of the observations in Analysis and Modelling Research of the Ecosystem in the Bohai Sea (AMREB) are suitable for the recovery of the model parameters used in this study. --- paper_title: Calibration of the SUCROS emergence and early growth module for sugar beet using optical remote sensing data assimilation paper_content: Crop models are useful for monitoring crop production on a local scale. Their application to a larger area, such as a region, is hampered by the difficulty in determining the value of some of their parameters, which may differ greatly between fields. The use of optical remote sensing helps to overcome this problem. Coupling a radiation transfer model to a crop model makes it possible to simulate reflectance for those times in crop growth for which remote sensing data are available. The inversion of the combined model on these data then makes it possible to estimate new values for certain sensitive parameters of the crop model. This paper describes the use of such a method on a local scale, for sugar beet, focusing on the parameters describing emergence and early crop growth. These processes vary greatly depending on the soil, climate and seedbed preparation, and affect yield significantly. The SUCROS crop model and the SAIL reflectance model were combined. The resulting model was calibrated under standard conditions and then evaluated under test conditions to which the emergence and early growth parameters of the SUCROS model were adjusted. The test conditions seedbed structure was coarser, and the sowing depth was greater than expected. Consequently, emergence occurred later, and the initial leaf area was smaller. The SUCROS simulation using standard values for emergence and early growth parameters did not accurately predict crop growth under these test conditions. The inversion of the combined model using a set of canopy reflectance measurements during crop establishment provided new parameter values that allowed us to accurately estimate crop yield. Application of this method on a regional scale, for yield prediction or agronomic diagnosis, should be of great value. --- paper_title: FLUXNET: A New Tool to Study the Temporal and Spatial Variability of Ecosystem–Scale Carbon Dioxide, Water Vapor, and Energy Flux Densities paper_content: FLUXNET is a global network of micrometeorological flux measurement sites that measure the exchanges of carbon dioxide, water vapor, and energy between the biosphere and atmosphere. At present over 140 sites are operating on a long-term and continuous basis. Vegetation under study includes temperate conifer and broadleaved (deciduous and evergreen) forests, tropical and boreal forests, crops, grasslands, chaparral, wetlands, and tundra. Sites exist on five continents and their latitudinal distribution ranges from 70°N to 30°S. FLUXNET has several primary functions. First, it provides infrastructure for compiling, archiving, and distributing carbon, water, and energy flux measurement, and meteorological, plant, and soil data to the science community. (Data and site information are available online at the FLUXNET Web site, http://www-eosdis.ornl.gov/FLUXNET/.) Second, the project supports calibration and flux intercomparison activities. This activity ensures that data from the regional networks are intercomparable. And third, FLUXNET supports the synthesis, discussion, and communication of ideas and data by supporting project scientists, workshops, and visiting scientists. 
The overarching goal is to provide information for validating computations of net primary productivity, evaporation, and energy absorption that are being generated by sensors mounted on the NASA Terra satellite. Data being compiled by FLUXNET are being used to quantify and compare magnitudes and dynamics of annual ecosystem carbon and water balances, to quantify the response of stand-scale carbon dioxide and water vapor flux densities to controlling biotic and abiotic factors, and to validate a hierarchy of soil–plant–atmosphere trace gas exchange models. Findings so far include 1) net CO 2 exchange of temperate broadleaved forests increases by about 5.7 g C m −2 day −1 for each additional day that the growing season is extended; 2) the sensitivity of net ecosystem CO 2 exchange to sunlight doubles if the sky is cloudy rather than clear; 3) the spectrum of CO 2 flux density exhibits peaks at timescales of days, weeks, and years, and a spectral gap exists at the month timescale; 4) the optimal temperature of net CO 2 exchange varies with mean summer temperature; and 5) stand age affects carbon dioxide and water vapor flux densities. --- paper_title: Automatic state updating for operational streamflow forecasting via variational data assimilation. paper_content: Summary In operational hydrologic forecasting, to account for errors in the initial and boundary conditions, and in parameters and structures of the hydrologic models, the forecasters routinely make adjustments in real-time to the hydrometeorological input, hydrologic model states and, in certain cases, model parameters based on streamflow observations. Though a great deal of effort has been made in recent years to automate such “run-time modifications” (MOD) by human forecasters to a possible extent, automatic state updating of hydrologic models is yet to be widely accepted or routinely practiced in operational hydrology for a range of reasons. In this paper, we describe a state updating procedure intended specifically for operational streamflow forecasting for gauged headwater basins, and compare its performance with human forecaster MOD through a real-time forecasting experiment. The procedure is based on variational assimilation (VAR) of streamflow, precipitation and potential evaporation (PE) data into lumped soil moisture accounting and routing models operating at a 1-h timestep. The procedure has been in experimental operation since 2003 at the National Weather Service’s (NWS) West Gulf River Forecast Center (WGRFC) in Fort Worth, TX. Also described is a novel parameter estimation and optimization tool, the Adjoint-Based OPTimizer (AB_OPT), used for lumped hydrologic modeling at a 1-h timestep necessary for VAR. --- paper_title: Coupled hydrogeophysical parameter estimation using a sequential Bayesian approach paper_content: Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widespread methods do not explicitly recognize uncertainty in parameter estimates. Therefore, we apply a sequential Bayesian framework that provides updates of state, parameters and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. 
For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time domain reflectometry and ERT, and the development of their posterior distributions is shown. --- paper_title: Novel approach to nonlinear/non-Gaussian Bayesian state estimation paper_content: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter. --- paper_title: Data assimilation methods in the Earth sciences paper_content: Abstract Although remote sensing data are often plentiful, they do not usually satisfy the users' needs directly. Data assimilation is required to extract information about geophysical fields of interest from the remote sensing observations and to make the data more accessible to users. Remote sensing may provide, for example, measurements of surface soil moisture, snow water equivalent, snow cover, or land surface (skin) temperature. Data assimilation can then be used to estimate variables that are not directly observed from space but are needed for applications, for instance root zone soil moisture or land surface fluxes. The paper provides a brief introduction to modern data assimilation methods in the Earth sciences, their applications, and pertinent research questions. Our general overview is readily accessible to hydrologic remote sensing scientists. Within the general context of Earth science data assimilation, we point to examples of the assimilation of remotely sensed observations in land surface hydrology. --- paper_title: Provision of snow water equivalent from satellite data and the hydrological model PROMET using data assimilation techniques paper_content: Information on snow cover and snow properties is an important factor for hydrology and runoff modelling. Frequent updates of snow cover information can help to improve water balance and discharge calculations. Within the frame of polar view, snow products from multisensoral satellite data are operationally provided to control and update water balance models for large parts of Southern Germany. Optical AVHRR sensors of the NOAA satellite are used for snow mapping and snow line delineation. Although these acquisitions are available several times per day, cloud cover hinders frequent updates of snow cover maps. As an additional remote sensing data source microwave data from ASAR on ENVISAT is used. Since C-band SAR sensors are only sensitive to snow with a high content of liquid water, the application of ASAR is limited to the melting periods. However, under these conditions the developed procedure allows not only to delineate the snow cover in a comparable way as from optical data, but also provides the additional information of where the snow is melting. 
In order to demonstrate how the remote sensing products can be used for improved water balance modelling, an application example for the watershed of the Upper Danube will be presented. This testsite is the research area of the integrative research project GLOWA-DANUBE that is conducted by the University of Munich. Model results using the PROMET-model of snow distributions with and without data assimilation of the remote sensing products will be given. Developed data assimilation concepts will be presented. Through data assimilation, the modelled snow cover agrees better with the mapped snow cover information from satellite. The optimised model provides maps of snow water equivalent, that can not directed be assessed by remote sensing. The impact of data assimilation on the modelled runoff will thus further be analysed. --- paper_title: Estimating transpiration and the sensitivity of carbon uptake to water availability in a subalpine forest using a simple ecosystem process model informed by measured net CO2 and H2O fluxes paper_content: Modeling how the role of forests in the carbon cycle will respond to predicted changes in water availability hinges on an understanding of the processes controlling water use in ecosystems. Recent studies in forest ecosystem modeling have employed data-assimilation techniques to generate parameter sets that conform to observations, and predict net ecosystem CO2 exchange (NEE) and its component processes. Since the carbon and water cycles are linked, there should be additional process information available from ecosystem H2O exchange. We coupled SIPNET (Simple Photosynthesis EvapoTranspiration), a simplified model of ecosystem function, with a data-assimilation system to estimate parameters leading to model predictions most closely matching the net CO2 and H2O fluxes measured by eddy covariance in a high-elevation, subalpine forest ecosystem. When optimized using measurements of CO2 exchange, the model matched observed NEE (RMSE = 0.49 g C m−2) but underestimated transpiration calculated independently from sap flow measurements by a factor of 4. Consequently, the carbon-only optimization was insensitive to imposed changes in water availability. Including eddy flux data from both CO2 and H2O exchange to the optimization reduced the model fit to the observed NEE fluxes only slightly (RME = 0.53 g C m−2), however this parameterization also reproduced transpiration calculated from independent sap flow measurements (r2 = 0.67, slope = 0.6). A significant amount of information can be extracted from simultaneous analysis of CO2 and H2O exchange, which improved the accuracy of transpiration estimates from measured evapotranspiration. Conversely, failure to include both CO2 and H2O data streams can generate results that mask the responses of ecosystem carbon cycling to variation in the precipitation. In applying the model conditioned on both CO2 and H2O fluxes to the subalpine forest at the Niwot Ridge AmeriFlux site, we observed that the onset of transpiration is coincident with warm soil temperatures. However, after snow has covered the ground in the fall, we observed significant inter-annual variability in the fraction of evapotranspiration composed of transpiration; evapotranspiration was dominated by transpiration in years when late fall air temperatures were high enough to maintain photosynthesis, but by sublimation from the surface of the snowpack in years when late fall air temperatures were colder and forest photosynthetic activity had ceased. 
Data-assimilation techniques and simultaneous measurements of carbon and water exchange can be used to quantify the response of net carbon uptake to changes in water availability by using an ecosystem model where the carbon and water cycles are linked. --- paper_title: Hydraulic parameter estimation by remotely-sensed top soil moisture observations with the particle filter paper_content: Summary In a synthetic study we explore the potential of using surface soil moisture measurements obtained from different satellite platforms to retrieve soil moisture profiles and soil hydraulic properties using a sequential data assimilation procedure and a 1D mechanistic soil water model. Four different homogeneous soil types were investigated including loamy sand, loam, silt, and clayey soils. The forcing data including precipitation and potential evapotranspiration were taken from the meteorological station of Aachen (Germany). With the aid of the forward model run, a synthetic data set was designed and observations were generated. The virtual top soil moisture observations were then assimilated to update the states and hydraulic parameters of the model by means of a particle filtering data assimilation method. Our analyses include the effect of assimilation strategy, measurement frequency, accuracy in surface soil moisture measurements, and soils differing in textural and hydraulic properties. With this approach we were able to assess the value of periodic spaceborne observations of top soil moisture for soil moisture profile estimation and identify the adequate conditions (e.g. temporal resolution and measurement accuracy) for remotely sensed soil moisture data assimilation. Updating of both hydraulic parameters and state variables allowed better predictions of top soil moisture contents as compared with updating of states only. An important conclusion is that the assimilation of remotely-sensed top soil moisture for soil hydraulic parameter estimation generates a bias depending on the soil type. Results indicate that the ability of a data assimilation system to correct the soil moisture state and estimate hydraulic parameters is driven by the nonlinearity between soil moisture and pressure head. --- paper_title: Retrieving soil temperature profile by assimilating MODIS LST products with ensemble Kalman filter paper_content: Abstract Proper estimation of initial state variables and model parameters is of vital importance for determining the accuracy of numerical model prediction. In this work, we develop a one-dimensional land data assimilation scheme based on ensemble Kalman filter and Common Land Model version 3.0 (CoLM). This scheme is used to improve the estimation of soil temperature profile. The leaf area index (LAI) is also updated dynamically by MODIS LAI products, and the MODIS land surface temperature (LST) products are assimilated into CoLM. The scheme was tested and validated by observations from four automatic weather stations (BTS, DRS, MGS, and DGS) in the Mongolian Reference Site of CEOP during the period of October 1, 2002 to September 30, 2003. Results indicate that data assimilation improves the estimation of the soil temperature profile by about 1 K. In comparison with simulation, the assimilated soil heat fluxes also show an improvement of about 13 W m−2 at BTS and DGS and 2 W m−2 at DRS and MGS, respectively. In addition, assimilation of MODIS land products into a land surface model is a practical and effective way to improve the estimation of land surface variables and fluxes. 
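Several of the entries above, in particular the bootstrap filter and the particle filter used for hydraulic parameter estimation, rely on the same sequential importance resampling loop: represent the posterior density by weighted samples, propagate the samples through the forecast model, reweight them with the observation likelihood, and resample when the weights degenerate. The following Python/NumPy sketch is a minimal generic illustration of that loop with a random-walk forecast model, a Gaussian likelihood, and invented numbers; it is not the configuration of any of the cited studies.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, weights, observation, obs_std, model_std):
    # 1. Propagate each particle through a (stochastic) forecast model.
    particles = particles + rng.normal(0.0, model_std, size=particles.shape)
    # 2. Reweight particles by the likelihood of the new observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # 3. Resample when the effective ensemble size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.normal(0.30, 0.10, size=500)            # e.g. prior top soil moisture states
weights = np.full(500, 1.0 / 500)
for obs in [0.32, 0.35, 0.31]:                           # synthetic surface observations
    particles, weights = bootstrap_filter_step(particles, weights, obs, obs_std=0.02, model_std=0.01)
    print("posterior mean state:", np.average(particles, weights=weights))

Joint state and parameter estimation, as in the hydraulic parameter study above, follows the same loop with each particle carrying a parameter set alongside its state.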
--- paper_title: The importance of the spatial patterns of remotely sensed soil moisture in the improvement of discharge predictions for small-scale basins through data assimilation paper_content: Abstract In this paper, we investigate to which degree information concerning the spatial patterns of remotely sensed soil moisture data is needed in order to improve discharge predictions from hydrological models. For this purpose, we use the TOPMODEL-based Land–Atmosphere Transfer Scheme (TOPLATS). The remotely sensed soil moisture values are determined using C-band backscatter data from the European Space Agency (ESA) European Remote Sensing (ERS) Satellites. A baseline run, without soil moisture assimilation, is established for both the distributed and lumped versions of the land–atmosphere scheme. The modeled discharge matches the observations slightly better for the distributed model than for the lumped model. The remotely sensed soil moisture data are assimilated into the distributed version of the model through the ‘nudging to individual observations’ method, and the ‘statistical correction assimilation’ method. The remotely sensed soil moisture data are also assimilated into the lumped version of the model through the ‘statistical correction assimilation’ method. The statistical correction assimilation method leads to similar, and improved, discharge predictions for both the distributed and lumped models. The nudging to individual observations method leads, for the distributed model, to only slightly better results than the statistical correction assimilation method. As a consequence, it is suggested that it is sufficient to assimilate the statistics (spatial mean and variance) of remotely sensed soil moisture data into lumped hydrological models when one wants to improve hydrological model-based discharge predictions. --- paper_title: The representation of soil moisture freezing and its impact on the stable boundary layer paper_content: The 1993 to 1996 version of the European Centre for Medium-Range Weather Forecasts model had a pronounced near-surface cold bias in winter over continental areas. The problem is illustrated in detail with help of tower observations. It is shown that a positive feedback exists in the land surface boundary-layer coupling that has the potential to amplify model biases. If the surface is cooled too much the boundary layer becomes too stable, reducing the downward heat flux and making the surface even colder. This positive feedback is believed to be stronger in the model than in the real atmosphere, resulting in diurnal temperature cycles that are too large and in excessive soil cooling on a seasonal time-scale in winter. An important contributor to the excessive winter cooling turns out to be the lack of soil moisture freezing in the model. The importance of this process is obvious from soil temperature observations. The seasonal soil temperature evolution shows a clear ‘barrier’ at 0°C due to the thermal inertia of freezing and thawing. A more quantitative illustration is the result of a simple calculation. This shows that the amount of energy necessary to freeze/thaw 1 m³ of wet soil would cool/warm this soil by about 50 K if the phase transition was not taken into account. 
To reduce the winter cold bias in the model, three model changes have been tested and are described: (i) the introduction of the process of soil moisture freezing; (ii) revised stability functions to increase the turbulent diffusion of heat in stable situations; and (iii) an increase of the skin-layer conductivity. The effect of these changes on the seasonal evolution of soil and 2 m temperatures is investigated with long runs that have terms that relax towards the operational analysis above the boundary layer. In this way the impact can be studied on the temperature forecasts for the winter of 1995/1996, during which the operational model showed considerable soil temperature drift over Europe. Also, short periods of data assimilation (including 10-day forecasts) have been carried out to study the diurnal time-scales and the impact on model performance. The model changes eliminate to a large extent the systematic 2 m temperature biases for the winter of 1995/1996 over Europe and make the soil temperature evolution much more realistic. The soil moisture freezing, in particular, plays a crucial role by introducing thermal inertia near the freezing point, thereby reducing the annual temperature cycle in the soil. The process of soil moisture freezing leads to a considerable warming of the model's near-surface winter climate over continental areas. --- paper_title: The USDA Natural Resources Conservation Service Soil Climate Analysis Network (SCAN) paper_content: Abstract Surface soil moisture plays an important role in the dynamics of land–atmosphere interactions and many current and upcoming models and satellite sensors. In situ data will be required to provide calibration and validation datasets. Therefore, there is a need for sensor networks at a variety of scales that provide near-real-time soil moisture and temperature data combined with other climate information for use in natural resource planning, drought assessment, water resource management, and resource inventory. The U.S. Department of Agriculture (USDA)–Natural Resources Conservation Service (NRCS)–National Water and Climate Center has established a continental-scale network to address this need, called the Soil Climate Analysis Network (SCAN). This ever-growing network has more than 116 stations located in 39 states, most of which have been installed since 1999. The stations are remotely located and collect hourly atmospheric, soil moisture, and soil temperature data that are available to the public... --- paper_title: Real-Time Data Assimilation for Operational Ensemble Streamflow Forecasting paper_content: Abstract Operational flood forecasting requires that accurate estimates of the uncertainty associated with model-generated streamflow forecasts be provided along with the probable flow levels. This paper demonstrates a stochastic ensemble implementation of the Sacramento model used routinely by the National Weather Service for deterministic streamflow forecasting. The approach, the simultaneous optimization and data assimilation method (SODA), uses an ensemble Kalman filter (EnKF) for recursive state estimation allowing for treatment of streamflow data error, model structural error, and parameter uncertainty, while enabling implementation of the Sacramento model without major modification to its current structural form. Model parameters are estimated in batch using the shuffled complex evolution metropolis stochastic-ensemble optimization approach (SCEM-UA). 
The SODA approach was implemented using parallel computing to handle the increased computational requirements. Studies using data from the Leaf River... --- paper_title: Dual state-parameter estimation of hydrological models using ensemble Kalman filter paper_content: Hydrologic models are twofold: models for understanding physical processes and models for prediction. This study addresses the latter, which modelers use to predict, for example, streamflow at some future time given knowledge of the current state of the system and model parameters. In this respect, good estimates of the parameters and state variables are needed to enable the model to generate accurate forecasts. In this paper, a dual state–parameter estimation approach is presented based on the Ensemble Kalman Filter (EnKF) for sequential estimation of both parameters and state variables of a hydrologic model. A systematic approach for identification of the perturbation factors used for ensemble generation and for selection of ensemble size is discussed. The dual EnKF methodology introduces a number of novel features: (1) both model states and parameters can be estimated simultaneously; (2) the algorithm is recursive and therefore does not require storage of all past information, as is the case in the batch calibration procedures; and (3) the various sources of uncertainties can be properly addressed, including input, output, and parameter uncertainties. The applicability and usefulness of the dual EnKF approach for ensemble streamflow forecasting is demonstrated using a conceptual rainfall-runoff model. 2004 Elsevier Ltd. All rights reserved. --- paper_title: Global Products Of Vegetation Leaf Area And Fraction Absorbed Par From Year One Of Modis Data paper_content: An algorithm based on the physics of radiative transfer in vegetation canopies for the retrieval of vegetation green leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) from surface reflectances was developed and implemented for operational processing prior to the launch of the moderate resolution imaging spectroradiometer (MODIS) aboard the TERRA platform in December of 1999. The performance of the algorithm has been extensively tested in prototyping activities prior to operational production. Considerable attention was paid to characterizing the quality of the product and this information is available to the users as quality assessment (QA) accompanying the product. The MODIS LAI/FPAR product has been operationally produced from day one of science data processing from MODIS and is available free of charge to the users from the Earth Resources Observation System (EROS) Data Center Distributed Active Archive Center. Current and planned validation activities are aimed at evaluating the product at several field sites representative of the six structural biomes. Example results illustrating the physics and performance of the algorithm are presented together with initial QA and validation results. Potential users of the product are advised of the provisional nature of the product in view of changes to calibration, geolocation, cloud screening, atmospheric correction and ongoing validation activities. D 2002 Published by Elsevier Science Inc. --- paper_title: A data assimilation technique applied to a predator-prey model paper_content: A new approach for data assimilation, which is based on the adjoint method, but allows the computer code for the adjoint to be constructed directly from the model computer code, is described. 
This technique is straightforward and reduces the chance of introducing errors in the construction of the adjoint code. Implementation of the technique is illustrated by applying it to a simple predator-prey model in a model fitting mode. A series of identical twin numerical experiments are used to show that this data assimilation approach can successfully recover model parameters as well as initial conditions. However, the ease with which these values are recovered is dependent on the form of the model equations as well as on the type and amount of data that are available. Additional numerical experiments show that sufficient coefficient and parameter recoveries are possible even when the assimilated data contain significant random noise. Thus, for biological systems that can be described by ecosystem models, the adjoint method represents a powerful approach for estimating values for little-known biological parameters, such as initial conditions, growth rates, and mortality rates. --- paper_title: Assimilation of SPOT/VEGETATION NDVI data into a sahelian vegetation dynamics model paper_content: Abstract This paper presents a method to monitor the dynamics of herbaceous vegetation in the Sahel. The approach is based on the assimilation of Normalized Difference Vegetation Index (NDVI) data acquired by the VEGETATION instrument on board SPOT 4/5 into a simple sahelian vegetation dynamics model. The study region is located in the Gourma region of Mali. The vegetation dynamics model is coupled with a radiative transfer model (the SAIL model). First, it is checked that the coupled models allow for a realistic simulation of the seasonal and interannual variability of NDVI over three sampling sites from 1999 to 2004. The data assimilation scheme relies on a parameter identification technique based on an Evolution Strategies algorithm. The simulated above-ground herbage mass resulting from NDVI assimilation is then compared to ground measurements performed over 13 study sites during the period 1999–2004. The assimilation scheme performs well with 404 kg DM/ha of average error ( n = 126 points) and a correlation coefficient of r = 0.80 (to be compared to the 463 kg DM/ha and r = 0.60 of the model performance without data assimilation). Finally, the sensitivity of the herbage mass model estimates to the quality of the meteorological forcing (rainfall and net radiation) is analyzed thanks to a stochastic approach. --- paper_title: Mapping of snow water equivalent and snow depth in boreal and sub-arctic zones by assimilating space-borne microwave radiometer data and ground-based observations paper_content: Abstract The monitoring of snow water equivalent (SWE) and snow depth (SD) in boreal forests is investigated by applying space-borne microwave radiometer data and synoptic snow depth observations. A novel assimilation technique based on (forward) modelling of observed brightness temperatures as a function of snow pack characteristics is introduced. The assimilation technique is a Bayesian approach that weighs the space-borne data and the reference field on SD interpolated from discrete synoptic observations with their estimated statistical accuracy. The results obtained using SSM/I and AMSR-E data for northern Eurasia and Finland indicate that the employment of space-borne data using the assimilation technique improves the SD and SWE retrieval accuracy when compared with the use of values interpolated from synoptic observations. 
Moreover, the assimilation technique is shown to reduce systematic SWE/SD estimation errors evident in the inversion of space-borne radiometer data. --- paper_title: Surface temperature and water vapour retrieval from MODIS data paper_content: This paper gives operational algorithms for retrieving sea (SST), land surface temperature (LST) and total atmospheric water vapour content (W) using Moderate Resolution Imaging Spectroradiometer (MODIS) data. To this end, the MODTRAN 3.5 radiative transfer program was used to predict radiances for MODIS channels 31, 32, 2, 17, 18 and 19. To analyse atmospheric effects, a simulation with a set of radiosonde observations was used to cover the variability of surface temperature and water vapour concentration on a worldwide scale. These simulated data were split into two sets (DB1 and DB2), the first one (DB1) was used to fit the coefficients of the algorithms, while the second one (DB2) was used to test the fitted coefficients. The results show that the algorithms are capable of producing SST and LST with a standard deviation of 0.3 K and 0.7 K if the satellite data are error free. The LST product has been validated with in situ data from a field campaign carried out in the Mississippi (USA), the results sh... --- paper_title: A strategy for operational implementation of 4D‐Var, using an incremental approach paper_content: SUMMARY An order of magnitude reduction in the cost of four-dimensional variational assimilation (4D-Var) is required before operational implementation is possible. Reconditioning is considered and, although it offers a significant reduction in cost, it seems that it is unlikely to provide a reduction as large as an order of magnitude. An approximation to 4D-Var, namely the incremental approach, is then considered and is shown to produce the same result at the end of the assimilation window as an extended Kalman filter in which no approximations are made in the assimilating model but in which instead a simplified evolution of the forecast error is introduced. This approach provides the flexibility for a cost-benefit trade-off of 4D-Var to be made. The development of variational four-dimensional assimilation (4D-Var) from the stage of being a theoretical possibility to being a practical reality is progressing at a rapid pace. The first results of four-dimensional variational assimilation using real observations were provided by Thépaut et al. (1993b) using an adiabatic primitive-equation model at truncations T21 and T42. More recently Andersson et al. (1994) used 4D-Var with a T63 model to assimilate remotely-sensed data such as infrared and microwave TOVS radiance measurements, while Thépaut et al. (1993a) used 4D-Var with the same model to assimilate normalized radar backscatter cross-section measurements from the ERS-1 scatterometer. --- paper_title: A mechanistic modelling and data assimilation approach to estimate the carbon / chlorophyll and carbon / nitrogen ratios in a coupled hydrodynamical-biological model paper_content: The principal objective of hydrodynamical-biological models is to provide estimates of the main carbon fluxes such as total and export oceanic production. These models are nitrogen based, that is to say that the variables are expressed in terms of their nitrogen content. Moreover models are calibrated using chlorophyll data sets. Therefore carbon to chlorophyll (C:Chl) and carbon to nitrogen (C:N) ratios have to be assumed. This paper addresses the problem of the representation of these ratios.
In a 1D framework at the DYFAMED station (NW Mediterranean Sea) we propose a model which enables the estimation of the basic biogeochemical fluxes and in which the spatio-temporal variability of the C:Chl and C:N ratios is fully represented in a mechanistic way. This is achieved through the introduction of new state variables coming from the embedding of a phytoplankton growth model in a more classical Redfieldian NNPZD-DOM model (in which the C:N ratio is assumed to be a constant). Following this modelling step, the parameters of the model are estimated using the adjoint data assimilation method which enables the assimilation of chlorophyll and nitrate data sets collected at DYFAMED in 1997. Comparing the predictions of the new Mechanistic model with those of the classical Redfieldian NNPZD-DOM model which was calibrated with the same data sets, we find that both models reproduce the reference data in a comparable manner. Both fluxes and stocks can be equally well predicted by either model. However, if the models are coinciding on an average basis, they are diverging from a variability prediction point of view. In the Mechanistic model, biology adapts much faster to its environment, giving rise to higher short-term variations. Moreover the seasonal variability in total production differs from the Redfieldian NNPZD-DOM model to the Mechanistic model. In summer the Mechanistic model predicts higher production values in carbon units than the Redfieldian NNPZD-DOM model. In winter the contrary holds. --- paper_title: Comparison of Data Assimilation Techniques for a Coupled Model of Surface and Subsurface Flow paper_content: Data assimilation in the geophysical sciences refers to methodologies to optimally merge model predictions and observations. The ensemble Kalman filter (EnKF) is a statistical sequential data assimilation technique explicitly developed for nonlinear filtering problems. It is based on a Monte Carlo approach that approximates the conditional probability densities of the variables of interest by a finite number of randomly generated model trajectories. In Newtonian relaxation or nudging (NN), which can be viewed as a special case of the classic Kalman filter, model variables are driven toward observations by adding to the model equations a forcing term, or relaxation component, that is proportional to the difference between simulation and observation. The forcing term contains four-dimensional weighting functions that can, ideally, incorporate prior knowledge about the characteristic scales of spatial and temporal variability of the state variable(s) being assimilated. In this study, we examined the EnKF and NN algorithms as implemented for a complex hydrologic model that simulates catchment dynamics, coupling a three-dimensional finite element Richards' equation solver for variably saturated porous media and a finite difference diffusion wave approximation for surface water flow. We report on the retrieval performance of the two assimilation schemes for a small catchment in Belgium. The results of the comparison show that nudging, while more straightforward and less expensive computationally, is not as effective as the ensemble Kalman filter in retrieving the true system state. We discuss some of the strengths and weaknesses, both physical and numerical, of the NN and EnKF schemes.
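Editorial illustration: the entry above contrasts Newtonian nudging with the ensemble Kalman filter. The minimal Python sketch below is an editor's addition, not code from the cited paper; it shows the two generic update rules side by side for a toy state vector. The function names, the observation operator H, the nudging weights G, and the observation error covariance R are hypothetical choices made only for illustration.

import numpy as np

def nudging_update(x, y_obs, H, G, dt):
    # Newtonian relaxation: push the state toward the observations by a term
    # proportional to the observation-minus-model difference, x + dt * G (y - Hx).
    return x + dt * G @ (y_obs - H @ x)

def enkf_analysis(X, y_obs, H, R, rng):
    # Textbook EnKF analysis with perturbed observations: X is n_state x n_ens,
    # the gain is built from the sample forecast covariance of the ensemble.
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n_ens).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0, 0.0]])                      # observe the first state component only
R = np.array([[0.01]])                               # observation error covariance
y = np.array([0.35])                                 # a single synthetic observation

x_nudged = nudging_update(np.array([0.5, 0.4, 0.3]), y, H, G=0.5 * H.T, dt=1.0)
X_analysis = enkf_analysis(0.5 + 0.1 * rng.standard_normal((3, 50)), y, H, R, rng)

The sketch only contrasts the structure of the two updates: nudging applies a fixed relaxation weight, whereas the EnKF derives its weight from the ensemble spread at each analysis time.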
--- paper_title: Three-dimensional soil moisture profile retrieval by assimilation of near-surface measurements: Simplified Kalman filter covariance forecasting and field application: THREE-DIMENSIONAL SOIL MOISTURE ASSIMILATION paper_content: [1] The Kalman filter data assimilation technique is applied to a distributed three-dimensional soil moisture model for retrieval of the soil moisture profile in a 6 ha catchment using near-surface soil moisture measurements. A simplified Kalman filter covariance forecasting methodology is developed based on forecasting of the state correlations and imposed state variances. This covariance forecasting technique, termed the modified Kalman filter, was then used in a 1 month three-dimensional field application. Two updating scenarios were tested: (1) updating every 2 to 3 days and (2) a single update. The data used were from the Nerrigundah field site, near Newcastle, Australia. This study demonstrates the feasibility of data assimilation in a quasi three-dimensional distributed soil moisture model, provided simplified covariance forecasting techniques are used. It also identifies that (1) the soil moisture profile cannot be retrieved from near-surface soil moisture measurements when the near-surface and deep soil layers become decoupled, such as during extreme drying events; (2) if simulation of the soil moisture profile is already good, the assimilation can result in a slight degradation, but if the simulation is poor, assimilation can yield a significant improvement; (3) soil moisture profile retrieval results are independent of initial conditions; and (4) the required update frequency is a function of the errors in model physics and forcing data. --- paper_title: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics paper_content: A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The unbounded error growth found in the extended Kalman filter, which is caused by an overly simplified closure in the error covariance equation, is completely eliminated. Open boundaries can be handled as long as the ocean model is well posed. Well-known numerical instabilities associated with the error covariance equation are avoided because storage and evolution of the error covariance matrix itself are not needed. The results are also better than what is provided by the extended Kalman filter since there is no closure problem and the quality of the forecast error statistics therefore improves. The method should be feasible also for more sophisticated primitive equation models. The computational load for reasonable accuracy is only a fraction of what is required for the extended Kalman filter and is given by the storage of, say, 100 model states for an ensemble size of 100 and thus CPU requirements of the order of the cost of 100 model integrations. The proposed method can therefore be used with realistic nonlinear ocean models on large domains on existing computers, and it is also well suited for parallel computers and clusters of workstations where each processor integrates a few members of the ensemble. 
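Editorial illustration: the last entry above describes forecasting error statistics with Monte Carlo (ensemble) integrations instead of the computationally demanding approximate error covariance equation of the extended Kalman filter. The short Python sketch below is an editor's addition, not code from the cited paper; the Lorenz-63 system, the forward-Euler stepping, and all noise levels are hypothetical stand-ins for the paper's quasi-geostrophic model.

import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic three-variable chaotic system used here only as a toy nonlinear model.
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def ensemble_forecast(X, dt, n_steps, q_std, rng):
    # Propagate every ensemble member (columns of X) through the nonlinear model
    # and add model noise; the forecast error statistics follow from the ensemble,
    # so no linearized covariance equation has to be stored or integrated.
    for _ in range(n_steps):
        X = X + dt * np.apply_along_axis(lorenz63, 0, X)
        X = X + q_std * np.sqrt(dt) * rng.standard_normal(X.shape)
    return X

rng = np.random.default_rng(1)
X0 = np.array([1.0, 1.0, 1.0])[:, None] + 0.5 * rng.standard_normal((3, 100))
Xf = ensemble_forecast(X0, dt=0.01, n_steps=200, q_std=0.05, rng=rng)
Pf = np.cov(Xf)   # sample forecast error covariance estimated directly from the members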
--- paper_title: Adaptive Ensemble Covariance Localization in Ensemble 4D-VAR State Estimation paper_content: An adaptive ensemble covariance localization technique, previously used in ‘local’ forms of the ensemble Kalman filter, is extended to a global ensemble four-dimensional variational data assimilation (4D-VAR) scheme. The purely adaptive part of the localization matrix considered is given by the element-wise square of the correlation matrix of a smoothed ensemble of streamfunction perturbations. It is found that these purely adaptive localization functions have spurious far-field correlations as large as 0.1 with a 128-member ensemble. To attenuate the spurious features of the purely adaptive localization functions, the authors multiply the adaptive localization functions with very broadscale nonadaptive localization functions. Using the Navy’s operational ensemble forecasting system, it is shown that the covariance localization functions obtained by this approach adapt to spatially anisotropic aspects of the flow, move with the flow, and are free of far-field spurious correlations. The scheme is made computationally feasible by (i) a method for inexpensively generating the square root of an adaptively localized global four-dimensional error covariance model in terms of products or modulations of smoothed ensemble perturbations with themselves and with raw ensemble perturbations, and (ii) utilizing algorithms that speed ensemble covariance localization when localization functions are separable, variable-type independent, and/or large scale. In spite of the apparently useful characteristics of adaptive localization, single analysis/forecast experiments assimilating 583 200 observations over both 6- and 12-h data assimilation windows failed to identify any significant difference in the quality of the analyses and forecasts obtained using nonadaptive localization from that obtained using adaptive localization. --- paper_title: Novel approach to nonlinear/non-Gaussian Bayesian state estimation paper_content: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings-only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter.
--- paper_title: Assimilation of Active Microwave Measurements for Soil Moisture Profile Retrieval Under Laboratory Conditions paper_content: The authors discuss the potential of retrieving information on the soil moisture profile from measurements of the surface soil moisture content through active microwave observations. They use active microwave observations of the surface soil moisture content in a data assimilation framework to show that this allows the retrieval of the entire soil moisture profile. The data assimilation procedure demonstrated is based on the Kalman filter technique. Kalman filtering allows reconstruction of the state vector when at least part of the state variables are observed regularly. The dynamic model of the system used is based on the 1D Richards equation. The observation equation is based on the integral equation model of A. K. Fung et al. (1992) and is used to link the radar observations to surface soil moisture content. Recently, M. Mancini et al. (1999) reported about laboratory experiments investigating the use of active microwave observations to estimate surface soil moisture content. The present authors apply the data assimilation scheme to the radar measurements of these experiments to retrieve the entire soil moisture profile in the soil sample used, and compare these results with the soil moisture profile measurements (using TDR). It is shown that with a limited number of radar measurements accurate retrieval of the entire soil moisture profile is possible. --- paper_title: Assessment of local hydraulic properties from electrical resistivity tomography monitoring of a three-dimensional synthetic tracer test experiment paper_content: [1] In recent years geophysical methods have become increasingly popular for hydrological applications. Time-lapse electrical resistivity tomography (ERT) represents a potentially powerful tool for subsurface solute transport characterization since a full picture of the spatiotemporal evolution of the process can be obtained. However, the quantitative interpretation of tracer tests is difficult because of the uncertainty related to the geoelectrical inversion, the constitutive models linking geophysical and hydrological quantities, and the a priori unknown heterogeneous properties of natural formations. Here an approach based on the Lagrangian formulation of transport and the ensemble Kalman filter (EnKF) data assimilation technique is applied to assess the spatial distribution of hydraulic conductivity K by incorporating time-lapse cross-hole ERT data. Electrical data consist of three-dimensional cross-hole ERT images generated for a synthetic tracer test in a heterogeneous aquifer.
Under the assumption that the solute spreads as a passive tracer, for high Peclet numbers the spatial moments of the evolving plume are dominated by the spatial distribution of the hydraulic conductivity. The assimilation of the electrical conductivity 4D images allows updating of the hydrological state as well as the spatial distribution of K. Thus, delineation of the tracer plume and estimation of the local aquifer heterogeneity can be achieved at the same time by means of this interpretation of time-lapse electrical images from tracer tests. We assess the impact on the performance of the hydrological inversion of (i) the uncertainty inherently affecting ERT inversions in terms of tracer concentration and (ii) the choice of the prior statistics of K. Our findings show that realistic ERT images can be integrated into a hydrological model even within an uncoupled inverse modeling framework. The reconstruction of the hydraulic conductivity spatial distribution is satisfactory in the portion of the domain directly covered by the passage of the tracer. Aside from the issues commonly affecting inverse models, the proposed approach is subject to the problem of the filter inbreeding and the retrieval performance is sensitive to the choice of K prior geostatistical parameters. --- paper_title: An Ensemble Kalman Smoother for Nonlinear Dynamics paper_content: It is formally proved that the general smoother for nonlinear dynamics can be formulated as a sequential method, that is, observations can be assimilated sequentially during a forward integration. The general filter can be derived from the smoother and it is shown that the general smoother and filter solutions at the final time become identical, as is expected from linear theory. Then, a new smoother algorithm based on ensemble statistics is presented and examined in an example with the Lorenz equations. The new smoother can be computed as a sequential algorithm using only forward-in-time model integrations. It bears a strong resemblance with the ensemble Kalman filter. The difference is that every time a new dataset is available during the forward integration, an analysis is computed for all previous times up to this time. Thus, the first guess for the smoother is the ensemble Kalman filter solution, and the smoother estimate provides an improvement of this, as one would expect a smoother to do. The method is demonstrated in this paper in an intercomparison with the ensemble Kalman filter and the ensemble smoother introduced by van Leeuwen and Evensen, and it is shown to be superior in an application with the Lorenz equations. Finally, a discussion is given regarding the properties of the analysis schemes when strongly non-Gaussian distributions are used. It is shown that in these cases more sophisticated analysis schemes based on Bayesian statistics must be used. --- paper_title: Improved treatment of uncertainty in hydrologic modeling: Combining the strengths of global optimization and data assimilation paper_content: Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must therefore be estimated using measurements of the system inputs and outputs. 
During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. In practice, however, because of errors in the model structure and the input (forcing) and output data, this has proven to be difficult, leading to considerable uncertainty in the model predictions. This paper surveys the limitations of current model calibration methodologies, which treat the uncertainty in the input-output relationship as being primarily attributable to uncertainty in the parameters and presents a simultaneous optimization and data assimilation (SODA) method, which improves the treatment of uncertainty in hydrologic modeling. The usefulness and applicability of SODA is demonstrated by means of a pilot study using data from the Leaf River watershed in Mississippi and a simple hydrologic model with typical conceptual components. --- paper_title: Surface-subsurface flow modeling with path-based runoff routing, boundary condition-based coupling, and assimilation of multisource observation data paper_content: [1] A distributed physically based model incorporating novel approaches for the representation of surface-subsurface processes and interactions is presented. A path-based description of surface flow across the drainage basin is used, with several options for identifying flow directions, for separating channel cells from hillslope cells, and for representing stream channel hydraulic geometry. Lakes and other topographic depressions are identified and specially treated as part of the preprocessing procedures applied to the digital elevation data for the catchment.
Threshold-based boundary condition switching is used to partition potential (atmospheric) fluxes into actual fluxes across the land surface and changes in surface storage, thus resolving the exchange fluxes, or coupling, between the surface and subsurface modules. Nested time stepping allows smaller steps to be taken for typically faster and explicitly solved surface runoff routing, while a mesh coarsening option allows larger grid elements to be used for typically slower and more compute-intensive subsurface flow. Sequential data assimilation schemes allow the model predictions to be updated with spatiotemporal observation data of surface and subsurface variables. These approaches are discussed in detail, and the physical and numerical behavior of the model is illustrated over catchment scales ranging from 0.0027 to 356 km², addressing different hydrological processes and highlighting the importance of describing coupled surface-subsurface flow. --- paper_title: Hydrologic Data Assimilation with the Ensemble Kalman Filter paper_content: Soil moisture controls the partitioning of moisture and energy fluxes at the land surface and is a key variable in weather and climate prediction. The performance of the ensemble Kalman filter (EnKF) for soil moisture estimation is assessed by assimilating L-band (1.4 GHz) microwave radiobrightness observations into a land surface model. An optimal smoother (a dynamic variational method) is used as a benchmark for evaluating the filter’s performance. In a series of synthetic experiments the effect of ensemble size and non-Gaussian forecast errors on the estimation accuracy of the EnKF is investigated. With a state vector dimension of 4608 and a relatively small ensemble size of 30 (or 100; or 500), the actual errors in surface soil moisture at the final update time are reduced by 55% (or 70%; or 80%) from the value obtained without assimilation (as compared to 84% for the optimal smoother). For robust error variance estimates, an ensemble of at least 500 members is needed. The dynamic evolution of the estimation error variances is dominated by wetting and drying events with high variances during drydown and low variances when the soil is either very wet or very dry. Furthermore, the ensemble distribution of soil moisture is typically symmetric except under very dry or wet conditions when the effects of the nonlinearities in the model become significant. As a result, the actual errors are consistently larger than ensemble-derived forecast and analysis error variances. This suggests that the update is suboptimal. However, the degree of suboptimality is relatively small and results presented here indicate that the EnKF is a flexible and robust data assimilation option that gives satisfactory estimates even for moderate ensemble sizes. --- paper_title: Data assimilation for transient flow in geologic formations via ensemble Kalman filter paper_content: Formation properties are one of the key factors in numerical modeling of flow and transport in geologic formations in spite of the fact that they may not be completely characterized. The incomplete knowledge or uncertainty in the description of the formation properties leads to uncertainty in simulation results.
In this study, the ensemble Kalman filter (EnKF) approach is used for continuously updating model parameters such as hydraulic conductivity and model variables such as pressure head while simultaneously providing an estimate of the uncertainty through assimilating dynamic and static measurements, without resorting to the explicit computation of the covariance or the Jacobian of the state variables. A two-dimensional example is built to demonstrate the capability of EnKF and to analyze its sensitivity with respect to different factors such as the number of realizations, measurement timings, and initial guesses. An additional example is given to illustrate the applicability of EnKF to three-dimensional problems and to examine the model predictability after dynamic data assimilation. It is found from these examples that EnKF provides an efficient approach for obtaining satisfactory estimation of the hydraulic conductivity field with dynamic measurements. After data assimilation the conductivity field matches the reference field very well, and different kinds of incorrect prior knowledge of the formation properties may also be rectified to a certain extent. --- paper_title: An Iterative Ensemble Kalman Filter for Multiphase Fluid Flow Data Assimilation paper_content: Summary The dynamical equations for multiphase flow in porous media are highly non-linear and the number of variables required to characterize the medium is usually large, often two or more variables per simulator gridblock. Neither the extended Kalman filter nor the ensemble Kalman filter is suitable for assimilating data or for characterizing uncertainty for this type of problem. Although the ensemble Kalman filter handles the nonlinear dynamics correctly during the forecast step, it sometimes fails badly in the analysis (or updating) of saturations. This paper focuses on the use of an iterative ensemble Kalman filter for data assimilation in nonlinear problems, especially of the type related to multiphase flow in porous media. Two issues are key: (1) iteration to enforce constraints and (2) ensuring that the resulting ensemble is representative of the conditional pdf (i.e. that the uncertainty quantification is correct). The new algorithm is compared to the ensemble Kalman filter on several highly nonlinear example problems, and shown to be superior in the prediction of uncertainty. --- paper_title: A new data assimilation approach for improving runoff prediction using remotely-sensed soil moisture retrievals paper_content: A number of recent studies have focused on enhancing runoff prediction via the assimilation of remotely-sensed surface soil moisture retrievals into a hydrologic model. The majority of these approaches have viewed the problem from purely a state or parameter estimation perspective in which remotely-sensed soil moisture estimates are assimilated to improve the characterization of pre-storm soil moisture conditions in a hydrologic model, and consequently, its simulation of runoff response to subsequent rainfall. However, recent work has demonstrated that soil moisture retrievals can also be used to filter errors present in satellite-based rainfall accumulation products. This result implies that soil moisture retrievals have potential benefit for characterizing both antecedent moisture conditions (required to estimate sub-surface flow intensities and subsequent surface runoff efficiencies) and storm-scale rainfall totals (required to estimate the total surface runoff volume). 
In response, this work presents a new sequential data assimilation system that exploits remotely-sensed surface soil moisture retrievals to simultaneously improve estimates of both pre-storm soil moisture conditions and storm-scale rainfall accumulations. Preliminary testing of the system, via a synthetic twin data assimilation experiment based on the Sacramento hydrologic model and data collected from the Model Parameterization Experiment, suggests that the new approach is more efficient at improving stream flow predictions than data assimilation techniques focusing solely on the constraint of antecedent soil moisture conditions. --- paper_title: Surface heat flux estimation with the ensemble Kalman smoother: Joint estimation of state and parameters: ESTIMATION OF SURFACE FLUXES WITH ENKS paper_content: [1] The estimation of surface heat fluxes based on the assimilation of land surface temperature (LST) has been achieved within a variational data assimilation (VDA) framework. Variational approaches require the development of an adjoint model, which is difficult to derive and code in the presence of thresholds and discontinuities. Also, it is computationally expensive to obtain the background error covariance for the variational approaches. Moreover, the variational schemes cannot directly provide statistical information on the accuracy of their estimates. To overcome these shortcomings, we develop an alternative data assimilation (DA) procedure based on ensemble Kalman smoother (EnKS) with the state augmentation method. The unknowns of the assimilation scheme are neutral turbulent heat transfer coefficient (that scales the sum of turbulent heat fluxes) and evaporative fraction, EF (that represents partitioning among the turbulent fluxes). The new methodology is illustrated with an application to the First International Satellite Land Surface Climatology Project Field Experiment (FIFE) that includes areal average hydrometeorological forcings and flux observations. The results indicate that the EnKS model not only provides reasonably accurate estimates of EF and turbulent heat fluxes but also enables us to determine the uncertainty of estimations under various land surface hydrological conditions. The results of the EnKS model are also compared with those of an optimal smoother (a dynamic variational model). It is found that the EnKS model estimates are less than optimal. However, the degree of suboptimality is small, and its outcomes are roughly comparable to those of an optimal smoother. Overall, the results from this test indicate that EnKS is an efficient and flexible data assimilation procedure that is able to extract useful information on the partitioning of available surface energy from LST measurements and eventually provides reliable estimates of turbulent heat fluxes. --- paper_title: Advanced Data Assimilation for Strongly Nonlinear Dynamics paper_content: Advanced data assimilation methods become extremely complicated and challenging when used with strongly nonlinear models. Several previous works have reported various problems when applying existing popular data assimilation techniques with strongly nonlinear dynamics. Common for these techniques is that they can all be considered as extensions to methods that have proved to work well with linear dynamics. This paper examines the properties of three advanced data assimilation methods when used with the highly nonlinear Lorenz equations. 
The ensemble Kalman filter is used for sequential data assimilation and the recently proposed ensemble smoother method and a gradient descent method are used to minimize two different weak constraint formulations. The problems associated with the use of an approximate tangent linear model when solving the Euler‐Lagrange equations, or when the extended Kalman filter is used, are eliminated when using these methods. All three methods give reasonably consistent results with the data coverage and quality of measurements that are used here and overcome the traditional problems reported in many of the previous papers involving data assimilation with highly nonlinear dynamics. --- paper_title: The ensemble Kalman filter for combined state and parameter estimation: MONTE CARLO TECHNIQUES FOR DATA ASSIMILATION IN LARGE SYSTEMS paper_content: The ensemble Kalman filter (EnKF) [1] is a sequential Monte Carlo method that provides an alternative to the traditional Kalman filter (KF) [2], [3] and adjoint or four-dimensional variational (4DVAR) methods [4]-[6] to better handle large state spaces and nonlinear error evolution. EnKF provides a simple conceptual formulation and ease of implementation, since. --- paper_title: Impact of Multiresolution Active and Passive Microwave Measurements on Soil Moisture Estimation Using the Ensemble Kalman Smoother paper_content: An observing system simulation experiment is developed to test tradeoffs in resolution and accuracy for soil moisture estimation using active and passive L-band remote sensing. Concepts for combined radar and radiometer missions include designs that will provide multiresolution measurements. In this paper, the scientific impacts of instrument performance are analyzed to determine the measurement requirements for the mission concept. The ensemble Kalman smoother (EnKS) is used to merge these multiresolution observations with modeled soil moisture from a land surface model to estimate surface and subsurface soil moisture at 6-km resolution. The model used for assimilation is different from that used to generate "truth." Consequently, this experiment simulates how data assimilation performs in real applications when the model is not a perfect representation of reality. The EnKS is an extension of the ensemble Kalman filter (EnKF) in which observations are used to update states at previous times. Previous work demonstrated that it provides a computationally inexpensive means to improve the results from the EnKF, and that the limited memory in soil moisture can be exploited by employing it as a fixed lag smoother. Here, it is shown that the EnKS can be used in large problems with spatially distributed state vectors and spatially distributed multiresolution observations. The EnKS-based data assimilation framework is used to study the synergy between passive and active observations that have different resolutions and measurement error distributions. The extent to which the design parameters of the EnKS vary depending on the combination of observations assimilated is investigated. --- paper_title: Land surface state and flux estimation using the ensemble Kalman smoother during the Southern Great Plains 1997 field experiment paper_content: [1] The ensemble Kalman smoother (EnKS) is employed to estimate surface and subsurface soil moisture and surface energy fluxes during the Southern Great Plains 1997 (SGP97) experiment through the assimilation of observed L band radiobrightness temperatures.
Previous work using the ensemble Kalman filter (EnKF) and a simple smoother demonstrated that soil moisture estimation is a reanalysis-type problem. The EnKF uses observations as they become available to update the current state. The EnKS takes the EnKF estimate as its first guess. However, in addition to updating the current state it also updates the best estimate at previous times. The performance of the EnKS is compared to the EnKF and the ensemble open loop in which no measurements are assimilated. Estimated surface soil moisture is compared to gravimetric observations at three locations. Root zone (5–100 cm) soil moisture is evaluated by comparing the resultant latent heat flux to flux tower observations. In a fixed lag smoother, observations are used to update past estimates within a fixed time window. The EnKS can be implemented in a fixed lag formulation in problems with limited memory such as soil moisture estimation. It is shown that there is a trade-off to be made between the improved accuracy with longer lag and the increased computational cost incurred. It is demonstrated that the EnKS is a relatively inexpensive state estimation algorithm suited to operational data assimilation. --- paper_title: Data Assimilation and Inverse Methods in Terms of a Probabilistic Formulation paper_content: Abstract The weak constraint inverse for nonlinear dynamical models is discussed and derived in term of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic ocean model. --- paper_title: A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling paper_content: Abstract During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has been severely deteriorated. Modeling the behavior of agroecosystems is, therefore, of great help since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review where both statistical–empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. 
The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for the retrieval of canopy state variables from reflective remote sensing data as for assimilating the retrieved variables in agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by integrating imagery with different spatial, temporal, spectral, and angular resolutions, and the fusion of optical data with data of different origin, such as LIDAR and radar/microwave. --- paper_title: Assimilating remotely sensed snow observations into a macroscale hydrology model paper_content: Abstract Accurate forecasting of snow properties is important for effective water resources management, especially in mountainous areas like the western United States. Current model-based forecasting approaches are limited by model biases and input data uncertainties. Remote sensing offers an opportunity for observation of snow properties, like areal extent and water equivalent, over larger areas. Data assimilation provides a framework for optimally merging information from remotely sensed observations and hydrologic model predictions. An ensemble Kalman filter (EnKF) was used to assimilate remotely sensed snow observations into the variable infiltration capacity (VIC) macroscale hydrologic model over the Snake River basin. The snow cover extent (SCE) product from the moderate resolution imaging spectroradiometer (MODIS) flown on the NASA Terra satellite was used to update VIC snow water equivalent (SWE), for a period of four consecutive winters (1999–2003). A simple snow depletion curve model was used for the necessary SWE–SCE inversion. The results showed that the EnKF is an effective and operationally feasible solution; the filter successfully updated model SCE predictions to better agree with the MODIS observations and ground surface measurements. Comparisons of the VIC SWE estimates following updating with surface SWE observations (from the NRCS SNOTEL network) indicated that the filter performance was a modest improvement over the open-loop (un-updated) simulations. This improvement was more evident for lower to middle elevations, and during snowmelt, while during accumulation the filter and open-loop estimates were very close on average. Subsequently, a preliminary assessment of the potential for assimilating the SWE product from the advanced microwave scanning radiometer (AMSR-E, flown on board the NASA Aqua satellite) was conducted. The results were not encouraging, and appeared to reflect large errors in the AMSR-E SWE product, which were also apparent in comparisons with SNOTEL data. --- paper_title: Identification of time-variant river bed properties with the ensemble Kalman filter paper_content: [1] An adequate characterization of river bed hydraulic conductivities (L) is crucial for a proper assessment of river-aquifer interactions. 
However, river bed characteristics may change over time due to dynamic morphological processes like scouring or sedimentation, which can lead to erroneous model predictions when static leakage parameters are assumed. Sequential data assimilation with the ensemble Kalman filter (EnKF) allows for an update of model parameters in real-time and may thus be capable of assessing the transient behavior of L. Synthetic experiments with a three-dimensional finite element model of the Limmat aquifer in Zurich were used to assess the performance of data assimilation in capturing time-variant river bed properties. Reference runs were generated where L followed different temporal and/or spatial patterns which should mimic real-world sediment dynamics. Hydraulic head (h) data from these reference runs were then used as input data for EnKF which jointly updated h and L. Results showed that EnKF is able to capture the different spatio-temporal patterns of L in the reference runs well. However, the adaptation time was relatively long, which was attributed to the fast decrease of ensemble variance. To improve the performance of EnKF, an adaptive filtering approach with covariance inflation was also applied, which allowed a faster and more accurate adaptation of model parameters. A sensitivity analysis indicated that even for a low amount of observations a reasonable adaptation of L towards the reference values can be achieved and that EnKF is also able to correct for a biased initial ensemble of L. --- paper_title: Land data assimilation and estimation of soil moisture using measurements from the Southern Great Plains 1997 Field Experiment paper_content: Remotely sensed microwave measurements provide useful but indirect observations of surface soil moisture. Ground-based measurements are more direct but are very localized and limited in coverage. Model predictions provide a more regional perspective but rely on many simplifications and approximations and depend on inputs that are difficult to obtain over extensive areas. The only effective way to achieve soil moisture estimates with the accuracy and coverage required for hydrologic and meteorological applications is to merge information from satellites, ground-based stations, and models. In this paper we describe a convenient data merging (or data assimilation) procedure based on an ensemble Kalman filter. This procedure is illustrated with an application to the Southern Great Plains 1997 (SGP97) field experiment. It uses land surface and radiative transfer models to derive soil moisture estimates from airborne L band microwave observations and ground-based measurements of micrometeorological variables, soil texture, and vegetation type. The ensemble filter approach is appealing because (1) it can accommodate a wide range of models, (2) it can account for input and measurement uncertainties, (3) it provides information on the accuracy of its estimates, and (4) it is relatively efficient, making large-scale applications feasible. Results from our SGP97 application of the ensemble Kalman filter include large-scale maps (∼10,000 km²) of soil moisture estimates and estimation error standard deviations for the entire month-long experiment and comparisons of filter soil moisture and latent heat estimates to ground truth measurements (gravimetric and flux tower observations). The ground truth comparisons show that the filter is able to track soil moisture fluctuations.
The filter estimates are significantly better than those from an "open loop" simulation that includes the same ground-based data but does not incorporate radio brightness measurements. Overall, the results from this field test indicate that the ensemble Kalman filter is an accurate, efficient, and flexible data assimilation procedure that is able to extract useful information from remote sensing measurements. --- paper_title: Assimilation of GRACE Terrestrial Water Storage Data into a Land Surface Model: Results for the Mississippi River Basin paper_content: Abstract Assimilation of data from the Gravity Recovery and Climate Experiment (GRACE) system of satellites yielded improved simulation of water storage and fluxes in the Mississippi River basin, as evaluated against independent measurements. The authors assimilated GRACE-derived monthly terrestrial water storage (TWS) anomalies for each of the four major subbasins of the Mississippi into the Catchment Land Surface Model (CLSM) using an ensemble Kalman smoother from January 2003 to May 2006. Compared with the open-loop CLSM simulation, assimilation estimates of groundwater variability exhibited enhanced skill with respect to measured groundwater in all four subbasins. Assimilation also significantly increased the correlation between simulated TWS and gauged river flow for all four subbasins and for the Mississippi River itself. In addition, model performance was evaluated for eight smaller watersheds within the Mississippi basin, all of which are smaller than the scale of GRACE observations. In seven of e... --- paper_title: Hydrological data assimilation with the ensemble Kalman filter: Use of streamflow observations to update states in a distributed hydrological model paper_content: This paper describes an application of the ensemble Kalman filter (EnKF) in which streamflow observations are used to update states in a distributed hydrological model.
We demonstrate that the standard implementation of the EnKF is inappropriate because of non-linear relationships between model states and observations. Transforming streamflow into log space before computing error covariances improves filter performance. We also demonstrate that model simulations improve when we use a variant of the EnKF that does not require perturbed observations. Our attempt to propagate information to neighbouring basins was unsuccessful, largely due to inadequacies in modelling the spatial variability of hydrological processes. New methods are needed to produce ensemble simulations that both reflect total model error and adequately simulate the spatial variability of hydrological states and fluxes. --- paper_title: The assimilation of remotely sensed soil brightness temperature imagery into a land surface model using Ensemble Kalman filtering: a case study based on ESTAR measurements during SGP97 paper_content: An Ensemble Kalman filter (EnKF) is used to assimilate airborne measurements of 1.4 GHz surface brightness temperature (TB) acquired during the 1997 Southern Great Plains Hydrology Experiment (SGP97) into the TOPMODEL-based Land–Atmosphere Transfer Scheme (TOPLATS). In this way, the potential of using EnKF-assimilated remote measurements of TB to compensate land surface model predictions for errors arising from a climatological description of rainfall is assessed. The use of a real remotely sensed data source allows for a more complete examination of the challenges faced in implementing assimilation strategies than previous studies where observations were synthetically generated. Results demonstrate that the EnKF is an effective and computationally competitive strategy for the assimilation of remotely sensed TB measurements into land surface models. The EnKF is capable of extracting spatial and temporal trends in root-zone (40 cm) soil water content from TB measurements based solely on surface (5 cm) conditions. The accuracy of surface state and flux predictions made with the EnKF, ESTAR TB measurements, and climatological rainfall data within the Central Facility site during SGP97 is shown to be superior to that of predictions derived from open loop modeling driven by sparse temporal sampling of rainfall at frequencies consistent with expectations of future missions designed to measure rainfall from space (6–10 observations per day). Specific assimilation challenges posed by inadequacies in land surface model physics and spatial support contrasts between model predictions and sensor retrievals are discussed. --- paper_title: Beyond Gaussian Statistical Modeling in Geophysical Data Assimilation paper_content: This review discusses recent advances in geophysical data assimilation beyond Gaussian statistical modeling, in the fields of meteorology, oceanography, as well as atmospheric chemistry. The non-Gaussian features are stressed rather than the nonlinearity of the dynamical models, although both aspects are entangled. Ideas recently proposed to deal with these non-Gaussian issues, in order to improve the state or parameter estimation, are emphasized. The general Bayesian solution to the estimation problem and the techniques to solve it are first presented, as well as the obstacles that hinder their use in high-dimensional and complex systems.
Approximations to the Bayesian solution relying on Gaussian, or on second-order moment closure, have been wholly adopted in geophysical data assimilation (e.g., Kalman filters and quadratic variational solutions). Yet, nonlinear and non-Gaussian effects remain. They essentially originate in the nonlinear models and in the non-Gaussian priors. How these effects are handled within algorithms based on Gaussian assumptions is then described. Statistical tools that can diagnose them and measure deviations from Gaussianity are recalled. The following advanced techniques that seek to handle the estimation problem beyond Gaussianity are reviewed: maximum entropy filter, Gaussian anamorphosis, non-Gaussian priors, particle filter with an ensemble Kalman filter as a proposal distribution, maximum entropy on the mean, or strictly Bayesian inferences for large linear models, etc. Several ideas are illustrated with recent or original examples that possess some features of high-dimensional systems. Many of the new approaches are well understood only in special cases and have difficulties that remain to be circumvented. Some of the suggested approaches are quite promising, and sometimes already successful for moderately large though specific geophysical applications. Hints are given as to where progress might come from. --- paper_title: Particle Filters for State Estimation of Jump Markov Linear Systems paper_content: Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our algorithms combine sequential importance sampling, a selection scheme, and Markov chain Monte Carlo methods. They use several variance reduction methods to make the most of the statistical structure of JMLS. Computer simulations are carried out to evaluate the performance of the proposed algorithms. The problems of on-line deconvolution of impulsive processes and of tracking a maneuvering target are considered. It is shown that our algorithms outperform the current methods. --- paper_title: Estimation of regional terrestrial water cycle using multi-sensor remote sensing observations and data assimilation paper_content: Abstract An integrated data assimilation system is implemented over the Red-Arkansas river basin to estimate the regional scale terrestrial water cycle driven by multiple satellite remote sensing data. These satellite products include the Tropical Rainfall Measurement Mission (TRMM), TRMM Microwave Imager (TMI), and Moderate Resolution Imaging Spectroradiometer (MODIS). Also, a number of previously developed assimilation techniques, including the ensemble Kalman filter (EnKF), the particle filter (PF), the water balance constrainer, and the copula error model, and as well as physically based models, including the Variable Infiltration Capacity (VIC), the Land Surface Microwave Emission Model (LSMEM), and the Surface Energy Balance System (SEBS), are tested in the water budget estimation experiments. This remote sensing based water budget estimation study is evaluated using ground observations driven model simulations. 
It is found that the land surface model driven by the bias-corrected TRMM rainfall produces reasonable water cycle states and fluxes, and the estimates are moderately improved by assimilating TMI 10.67 GHz microwave brightness temperature measurements that provide information on the surface soil moisture state, while it remains challenging to improve the results by assimilating evapotranspiration estimated from satellite-based measurements. --- paper_title: The ensemble Kalman filter for combined state and parameter estimation: MONTE CARLO TECHNIQUES FOR DATA ASSIMILATION IN LARGE SYSTEMS paper_content: The ensemble Kalman filter (EnKF) [1] is a sequential Monte Carlo method that provides an alternative to the traditional Kalman filter (KF) [2], [3] and adjoint or four-dimensional variational (4DVAR) methods [4]-[6] to better handle large state spaces and nonlinear error evolution. EnKF provides a simple conceptual formulation and ease of implementation, since it does not require the derivation of a tangent linear operator or adjoint equations and involves no backward-in-time integrations. --- paper_title: FLUXNET: A New Tool to Study the Temporal and Spatial Variability of Ecosystem–Scale Carbon Dioxide, Water Vapor, and Energy Flux Densities paper_content: FLUXNET is a global network of micrometeorological flux measurement sites that measure the exchanges of carbon dioxide, water vapor, and energy between the biosphere and atmosphere. At present over 140 sites are operating on a long-term and continuous basis. Vegetation under study includes temperate conifer and broadleaved (deciduous and evergreen) forests, tropical and boreal forests, crops, grasslands, chaparral, wetlands, and tundra. Sites exist on five continents and their latitudinal distribution ranges from 70°N to 30°S. FLUXNET has several primary functions. First, it provides infrastructure for compiling, archiving, and distributing carbon, water, and energy flux measurement, and meteorological, plant, and soil data to the science community. (Data and site information are available online at the FLUXNET Web site, http://www-eosdis.ornl.gov/FLUXNET/.) Second, the project supports calibration and flux intercomparison activities. This activity ensures that data from the regional networks are intercomparable. And third, FLUXNET supports the synthesis, discussion, and communication of ideas and data by supporting project scientists, workshops, and visiting scientists. The overarching goal is to provide information for validating computations of net primary productivity, evaporation, and energy absorption that are being generated by sensors mounted on the NASA Terra satellite. Data being compiled by FLUXNET are being used to quantify and compare magnitudes and dynamics of annual ecosystem carbon and water balances, to quantify the response of stand-scale carbon dioxide and water vapor flux densities to controlling biotic and abiotic factors, and to validate a hierarchy of soil–plant–atmosphere trace gas exchange models. Findings so far include 1) net CO2 exchange of temperate broadleaved forests increases by about 5.7 g C m⁻² day⁻¹ for each additional day that the growing season is extended; 2) the sensitivity of net ecosystem CO2 exchange to sunlight doubles if the sky is cloudy rather than clear; 3) the spectrum of CO2 flux density exhibits peaks at timescales of days, weeks, and years, and a spectral gap exists at the month timescale; 4) the optimal temperature of net CO2 exchange varies with mean summer temperature; and 5) stand age affects carbon dioxide and water vapor flux densities.
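The EnKF entries above (joint updating of hydraulic heads and leakage coefficients, dual state–parameter estimation, covariance inflation) all rely on the same analysis step. The following Python sketch is not taken from any of the cited papers; it illustrates, under simplifying assumptions, a stochastic (perturbed-observation) EnKF update of an augmented state–parameter ensemble with optional multiplicative inflation. All function and variable names (enkf_analysis, obs_operator, etc.) are illustrative.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_err_std, inflation=1.0, rng=None):
    """Stochastic EnKF analysis step on an augmented state-parameter ensemble.

    ensemble : (n_aug, n_ens) array; each column stacks model states and parameters
    obs      : (n_obs,) observation vector (e.g., hydraulic heads or brightness temperatures)
    obs_operator : maps one augmented-state column to its predicted observations
    obs_err_std  : observation error standard deviation (assumed uncorrelated)
    inflation    : multiplicative covariance inflation factor (>= 1)
    """
    rng = np.random.default_rng() if rng is None else rng
    obs = np.asarray(obs, dtype=float)
    n_aug, n_ens = ensemble.shape

    # Multiplicative inflation: spread members about their mean to slow variance collapse.
    mean = ensemble.mean(axis=1, keepdims=True)
    ensemble = mean + inflation * (ensemble - mean)

    # Predicted observations for each member.
    Hx = np.column_stack([obs_operator(ensemble[:, i]) for i in range(n_ens)])

    # Ensemble anomalies of states/parameters and of predicted observations.
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = Hx - Hx.mean(axis=1, keepdims=True)

    # Sample covariances and Kalman gain.
    P_xy = A @ HA.T / (n_ens - 1)
    P_yy = HA @ HA.T / (n_ens - 1) + (obs_err_std ** 2) * np.eye(len(obs))
    K = P_xy @ np.linalg.solve(P_yy, np.eye(len(obs)))

    # Perturbed observations (one noisy copy per member), then the update.
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, size=(len(obs), n_ens))
    return ensemble + K @ (obs_pert - Hx)
```

Augmenting the state vector with uncertain parameters is one simple way to realize the joint state–parameter estimation described in the dual EnKF entry, and the inflation factor mimics the covariance inflation used above to counter the fast decrease of ensemble variance.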
--- paper_title: Monte Carlo simulations on adsorptions of benzene and xylenes in sodium-Y zeolites paper_content: Abstract The structures of benzene and C8 isomers (o-xylene, m-xylene, p-xylene) in NaY simulated by the canonical Monte Carlo method were in good agreement with the experimental data of the available neutron diffraction analysis. All these sorbed compounds are stacked through the aromatic π-electron interaction on the cation of the SII site in the Sodium-Y. The experimental adsorption order (m-xylene > o-xylene > p-xylene) of xylene isomers in the faujasite was also in good agreement with the order obtained from calculated heats of adsorption. It was shown that the difference in the interaction between the 4-ring sites and the methyl groups of xylene caused the better adsorption of m-xylene compared to those of the others. From these results, our conclusion is that the adsorption of the C8 isomers in the faujasite can be explained by considering only the interaction between the zeolite framework and the adsorbate approximated as a rigid model. Moreover, this type of simulation proved to be very useful to understand the mechanism of the adsorption of aromatic compounds in zeolites like faujasites. --- paper_title: Coupled hydrogeophysical parameter estimation using a sequential Bayesian approach paper_content: Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widespread methods do not explicitly recognize uncertainty in parameter estimates. Therefore, we apply a sequential Bayesian framework that provides updates of state, parameters and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time domain reflectometry and ERT, and the development of their posterior distributions is shown. --- paper_title: Novel approach to nonlinear/non-Gaussian Bayesian state estimation paper_content: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings-only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter. --- paper_title: Data assimilation methods in the Earth sciences paper_content: Abstract Although remote sensing data are often plentiful, they do not usually satisfy the users’ needs directly. Data assimilation is required to extract information about geophysical fields of interest from the remote sensing observations and to make the data more accessible to users.
Remote sensing may provide, for example, measurements of surface soil moisture, snow water equivalent, snow cover, or land surface (skin) temperature. Data assimilation can then be used to estimate variables that are not directly observed from space but are needed for applications, for instance root zone soil moisture or land surface fluxes. The paper provides a brief introduction to modern data assimilation methods in the Earth sciences, their applications, and pertinent research questions. Our general overview is readily accessible to hydrologic remote sensing scientists. Within the general context of Earth science data assimilation, we point to examples of the assimilation of remotely sensed observations in land surface hydrology. --- paper_title: Sequential Imputations and Bayesian Missing Data Problems paper_content: Abstract For missing data problems, Tanner and Wong have described a data augmentation procedure that approximates the actual posterior distribution of the parameter vector by a mixture of complete data posteriors. Their method of constructing the complete data sets is closely related to the Gibbs sampler. Both required iterations, and, similar to the EM algorithm, convergence can be slow. We introduce in this article an alternative procedure that involves imputing the missing data sequentially and computing appropriate importance sampling weights. In many applications this new procedure works very well without the need for iterations. Sensitivity analysis, influence analysis, and updating with new data can be performed cheaply. Bayesian prediction and model selection can also be incorporated. Examples taken from a wide range of applications are used for illustration. --- paper_title: Simultaneous estimation of both soil moisture and model parameters using particle filtering method through the assimilation of microwave signal paper_content: [1] Soil moisture is a very important variable in land surface processes. Both field moisture measurements and estimates from modeling have their limitations when being used to estimate soil moisture on a large spatial scale. Remote sensing is becoming a practical method to estimate soil moisture globally; however, the quality of current soil surface moisture products needs to be improved in order to meet practical requirements. Data assimilation (DA) is a promising approach to merge model dynamics and remote sensing observations, thus having the potential to estimate soil moisture more accurately. In this study, a data assimilation algorithm, which couples the particle filter and the kernel smoothing technique, is presented to estimate soil moisture and soil parameters from microwave signals. A simple hydrological model with a daily time step is utilized to reduce the computational burden in the process of data assimilation. An observation operator based on the ratio of two microwave brightness temperatures at different frequencies is designed to link surface soil moisture with remote sensing measurements, and a sensitivity analysis of this operator is also conducted. Additionally, a variant of particle filtering method is developed for the joint estimation of soil moisture and soil parameters such as texture and porosity. This assimilation scheme is validated against field moisture measurements at the CEOP/Mongolia experiment site and is found to estimate near-surface soil moisture very well. 
The retrieved soil texture still contains large uncertainties as the retrieved values cannot converge to fixed points or narrow ranges when using different initial soil texture values, but the retrieved soil porosity has relatively small uncertainties. --- paper_title: Global Products Of Vegetation Leaf Area And Fraction Absorbed Par From Year One Of Modis Data paper_content: An algorithm based on the physics of radiative transfer in vegetation canopies for the retrieval of vegetation green leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) from surface reflectances was developed and implemented for operational processing prior to the launch of the moderate resolution imaging spectroradiometer (MODIS) aboard the TERRA platform in December of 1999. The performance of the algorithm has been extensively tested in prototyping activities prior to operational production. Considerable attention was paid to characterizing the quality of the product and this information is available to the users as quality assessment (QA) accompanying the product. The MODIS LAI/FPAR product has been operationally produced from day one of science data processing from MODIS and is available free of charge to the users from the Earth Resources Observation System (EROS) Data Center Distributed Active Archive Center. Current and planned validation activities are aimed at evaluating the product at several field sites representative of the six structural biomes. Example results illustrating the physics and performance of the algorithm are presented together with initial QA and validation results. Potential users of the product are advised of the provisional nature of the product in view of changes to calibration, geolocation, cloud screening, atmospheric correction and ongoing validation activities. --- paper_title: Using data assimilation to identify diffuse recharge mechanisms from chemical and physical data in the unsaturated zone: RECHARGE MECHANISMS paper_content: [1] It is difficult to estimate groundwater recharge in semiarid environments, where precipitation and evapotranspiration nearly balance. In such environments, groundwater supplies are sensitive to small changes in the processes that control recharge. Numerical modeling provides the temporal resolution needed to analyze these processes but is highly sensitive to model errors. Natural chloride tracer measurements in the unsaturated zone provide more robust indicators of low recharge rates but yield estimates at coarse time scales that mask most control mechanisms. This study presents a new probabilistic approach for analyzing diffuse recharge in semiarid environments, with an application to study sites in the U.S. southern High Plains. The approach uses data assimilation to combine model predictions and chloride-based recharge estimates. It has the advantage of providing probability distributions rather than point values for uncertain soil and vegetation properties. These can then be used to quantify recharge uncertainty. Estimates of moisture flux time series indicate that percolation (or potential recharge) at the data sites is episodic and exhibits interannual variability. Most percolation occurs during intense rains when crop roots are not fully developed and there is ample antecedent soil moisture. El Nino events can contribute to interannual variability of recharge if they bring rainy winters that provide wet antecedent conditions for spring precipitation.
Data assimilation methods that combine modeling and chloride observations provide the high temporal resolution information needed to identify mechanisms controlling diffuse recharge and offer a way to examine the effects of land use change and climatic variability on groundwater resources. --- paper_title: Sequential Monte Carlo Methods for Dynamic Systems paper_content: Abstract We provide a general framework for using Monte Carlo methods in dynamic systems and discuss its wide applications. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We provide guidelines on how they should be used and under what circumstance each method is most suitable. Through the analysis of differences and connections, we consolidate these methods into a generic algorithm by combining desirable features. In addition, we propose a general use of Rao-Blackwellization to improve performance. Examples from econometrics and engineering are presented to demonstrate the importance of Rao–Blackwellization and to compare different Monte Carlo procedures. --- paper_title: The importance of parameter resampling for soil moisture data assimilation into hydrologic models using the particle filter paper_content: The Sequential Importance Sampling with Resampling (SISR) particle filter and the SISR with parameter resampling particle filter (SISR-PR) are evaluated for their performance in soil moisture assimilation and the consequent effect on baseflow generation. With respect to the resulting soil moisture time series, both filters perform appropriately. However, the SISR filter has a negative effect on the baseflow due to inconsistency between the parameter values and the states after the assimilation. In order to overcome this inconsistency, parameter resampling is applied along with the SISR filter, to obtain consistent parameter values with the analyzed soil moisture state. Extreme parameter replication, which could lead to a particle collapse, is avoided by the perturbation of the parameters with white noise. Both the modeled soil moisture and baseflow are improved if the complementary parameter resampling is applied. The SISR filter with parameter resampling offers an efficient way to deal with biased observations. The robustness of the methodology is evaluated for 3 model parameter sets and 3 assimilation frequencies. Overall, the results in this paper indicate that the particle filter is a promising tool for hydrologic modeling purposes, but that an additional parameter resampling may be necessary to consistently update all state variables and fluxes within the model. --- paper_title: Variational data assimilation of microwave radiobrightness observations for land surface hydrology applications paper_content: Our ability to accurately describe large-scale variations in soil moisture is severely restricted by process uncertainty and the limited availability of appropriate soil moisture data. Remotely sensed microwave radiobrightness observations can cover large scales but have limited resolution and are only indirectly related to the hydrologic variables of interest. The authors describe a four-dimensional (4D) variational assimilation algorithm that makes best use of available information while accounting for both measurement and model uncertainty. 
The representer method used is more efficient than a Kalman filter because it avoids explicit propagation of state error covariances. In a synthetic example, which is based on a field experiment, the authors demonstrate estimation performance by examining data residuals. Such tests provide a convenient way to check the statistical assumptions of the approach and to assess its operational feasibility. Internally computed covariances show that the estimation error decreases with increasing soil moisture. An adjoint analysis reveals that trends in model errors in the soil moisture equation can be estimated from daily L-band brightness measurements, whereas model errors in the soil and canopy temperature equations cannot be adequately retrieved from daily data alone. Nonetheless, state estimates obtained from the assimilation algorithm improve significantly on prior model predictions derived without assimilation of radiobrightness data. --- paper_title: A Microwave Satellite Observational Operator for Variational Data Assimilation of Soil Moisture paper_content: An observational operator and its adjoint have been created that are suitable for use within variational data assimilation using polarized 6- and 10-GHz passive microwave satellite observations. When used within a variational data assimilation system, the operator will facilitate NWP soil moisture initialization using existing and future satellite datasets [e.g., NASA Advanced Microwave Scanning Radiometer‐Earth Observing System (AMSR-E), Advanced Earth Observing Satellite-II (ADEOS-II), Department of Defense (DoD) WindSat, and the National Polar Orbiter Environmental Satellite System (NPOESS) Conical Microwave Imager Sounder (CMIS)]. Five primary control variables are used within the operator, and surface soil moisture is explicitly included as one of the control variables. In future studies, this operator and its adjoint will be used to perform land surface model data assimilation experiments to better determine important NWP surface characteristics. In the current study, the adjoint model development and analysis of the potential information content of passive microwave measurements sensitive to the land surface and soil parameters are focused on. The multivariate results clarify the effects of masking phenomena upon the soil moisture signal. Also included is a useful transformation of the adjoint between real and complex number spaces. The transformations are necessary when higher-level mathematical operators, such as powers, use complex number arguments. This can be a common occurrence within radiative transfer models. Thus, the adjoint of this particular observational operator serves as an example of this behavior. Various observational operator results and sensitivity analyses are presented, demonstrating significant utility of the operator for NWP soil moisture data assimilation studies. --- paper_title: Constraining a physically based Soil-Vegetation-Atmosphere Transfer model with surface water content and thermal infrared brightness temperature measurements using a multiobjective approach: CONSTRAINING A PHYSICALLY BASED SVAT MODEL paper_content: [1] This article reports on a multiobjective approach which is carried out on the physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model. 
This approach is designed for (1) analyzing the model sensitivity to its input parameters under various environmental conditions and (2) assessing input parameters through the combined assimilation of the surface water content and the thermal infrared brightness temperature. To reach these goals, a multiobjective calibration iterative procedure (MCIP) is applied on the Simple Soil Plant Atmosphere Transfer–Remote Sensing (SiSPAT-RS) model. This new multiobjective approach consists of performing successive contractions of the feasible parameter space with the multiobjective generalized sensitivity analysis algorithm. Results show that the MCIP is an original and pertinent approach both for improving model calibration (i.e., reducing the a posteriori preferential ranges) and for driving a detailed SVAT model using various calibration data. The usefulness of the water content of the upper 5 cm and the thermal infrared brightness temperature for retrieving quantitative information about the main input surface parameters is also underlined. This study opens perspectives in the combined assimilation of various multispectral remotely sensed observations, such as passive microwaves and thermal infrared signals. --- paper_title: Variational inverse parameter estimation in a cohesive sediment transport model: An adjoint approach: VARIATIONAL INVERSE METHOD IN SEDIMENT TRANSPORT paper_content: [1] Parameter estimation in the sediment deposition and resuspension process is an important issue in numerical modeling of suspended sediment transport. The sediment settling velocity and resuspension rate are two critical parameters controlling the sediment exchange process between the water column and sediment bed. In this paper a variational inverse data assimilation scheme for estimation of the sediment settling velocity and resuspension rate is developed and tested with a three-dimensional cohesive sediment transport model. The sediment settling velocity and resuspension rate are treated as poorly known parameters in the model and are estimated by the variational inverse scheme using an adjoint approach. A limited-memory quasi-Newtonian conjugate gradient algorithm is used in the minimization process. The variational inverse model is tested in the James River estuary in Virginia by identical twin experiments. Numerical experimental results show that variational inverse data assimilation is a useful tool for retrieving poorly known parameters in a cohesive sediment transport model. --- paper_title: Estimation of Surface Turbulent Fluxes through Assimilation of Radiometric Surface Temperature Sequences paper_content: Abstract A model of land surface energy balance is used as a constraint on the estimation of factors characterizing land surface influences on evaporation and turbulent heat transfer from sequences of radiometric surface temperature measurements. The surface moisture control on evaporation is captured by the dimensionless evaporative fraction (ratio of latent heat flux to the sum of the turbulent fluxes), which is nearly constant for near-peak radiation hours on days without precipitation. The dimensionless parameter capturing the turbulent transfer characteristics (bulk heat transfer coefficient) includes the impacts of both forced and free convection. The mean diurnal pattern and seasonal trends are interpreted in the context of expected surface air layer static stability variations. 
The approach is tested over the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site (Kansas) where verification data on surface fluxes are available. It is shown that sequent... --- paper_title: From Near-Surface to Root-Zone Soil Moisture Using Different Assimilation Techniques paper_content: Abstract Root-zone soil moisture constitutes an important variable for hydrological and weather forecast models. Microwave radiometers like the L-band instrument on board the European Space Agency’s (ESA) future Soil Moisture and Ocean Salinity (SMOS) mission are being designed to provide estimates of near-surface soil moisture (0–5 cm). This quantity is physically related to root-zone soil moisture through diffusion processes, and both surface and root-zone soil layers are commonly simulated by land surface models (LSMs). Observed time series of surface soil moisture may be used to analyze the root-zone soil moisture using data assimilation systems. In this paper, various assimilation techniques derived from Kalman filters (KFs) and variational methods (VAR) are implemented and tested. The objective is to correct the modeled root-zone soil moisture deficiencies of the newest version of the Interaction between Soil, Biosphere, and Atmosphere scheme (ISBA) LSM, using the observations of the surface soil mo... --- paper_title: Estimation of Roughness Profile in Trapezoidal Open Channels paper_content: In the well-known de Saint Venant equations, the bed roughness-coefficient cannot be measured directly and therefore needs to be estimated. The estimation process is referred to as “parameter identification,” which is a mathematical process based on using the difference between the solution of the model equations and the measured system response. This paper introduces an approach for solving the parameter identification problem in the de Saint Venant equations. The method proposed herein is widely used in gas dynamics; however, it has not been used before for unsteady problem identification of open channel flow parameters. Although the proposed solution procedure will be applied herein to the bed roughness-coefficient, it can be used for other parameters, e.g., cross-sectional area, bed width, etc. Starting with an initial guess of the roughness coefficient, the algorithm iteratively improves the guesses in the direction of the gradient of the least square criterion. The gradient is obtained by means of a variational approach, while the conditions of the criterion minimum are identified by the general method of indefinite Lagrangian multipliers. --- paper_title: Auto-calibration system developed to assimilate AMSR-E data into a land surface model for estimating soil moisture and the surface energy budget paper_content: Low-frequency microwave brightness temperature is strongly affected by near-surface soil moisture; therefore, it can be assimilated into a land surface model to improve modeling of soil moisture and the surface energy budget. This study presents a new variational land system used to assimilate AMSR-E brightness temperature of vertical polarization of 6.9 GHz and 18.7 GHz. The system consists of a land surface model (LSM) used to calculate surface fluxes and soil moisture, a radiative transfer model (RTM) to estimate the microwave brightness temperature, and an optimization scheme to search for optimal values of soil moisture by minimizing the difference between modeled and observed brightness temperature. 
The LSM is an improved simple biosphere model for sparse vegetation modeling and the RTM is a Q-h model that can account for the effects of surface roughness and vegetation. Several parameters in the LSM and RTM can significantly affect the outputs of the land data assimilation system but their values are either highly variable or unavailable. To solve this problem, we developed a dual-pass assim --- paper_title: Future directions for advanced evapotranspiration modeling: Assimilation of remote sensing data into crop simulation models and SVAT models paper_content: Soil-Vegetation-Atmosphere Transfer Models (SVAT) and Crop Simulation Models describe physical and physiological processes occurring in crop canopies. Remote sensing data may be used through assimilation procedures for constraining or driving SVAT and crop models. These models provide continuous simulation of processes such as evapotranspiration and, thus, direct means for interpolating evapotranspiration between remote sensing data acquisitions (which is not the case for classical evapotranspiration mapping methods). They also give access to variables other than evapotranspiration, such as soil moisture and crop production. We developed the coupling between crop, SVAT and radiative transfer models in order to implement assimilation procedures in various wavelength domains (solar, thermal and microwave). Such coupling makes it possible to transfer information from one model to another and then to use remote sensing information for retrieving model parameters which are not directly related to remote sensing data (such as soil initial water content, plant growth parameters, physical properties of soil and so on). Simple assimilation tests are presented to illustrate the main techniques that may be used for monitoring crop processes and evapotranspiration. An application to a small agricultural area is also performed showing the potential of such techniques for retrieving evapotranspiration and information on irrigation practices over wheat fields. --- paper_title: Estimation of Aquifer Parameters Under Transient and Steady State Conditions: 1. Maximum Likelihood Method Incorporating Prior Information paper_content: In this series of three papers a method is presented to estimate the parameters of groundwater flow models under steady and nonsteady state conditions. The parameters include values and directions of principal hydraulic conductivities (or transmissivities) in anisotropic media, specific storage (or storativity), interior and boundary recharge or leakage rates, coefficients of head-dependent interior and boundary sources, and boundary heads. In transient situations, the initial head distribution can also be estimated if the system is originally at a steady state. Paper 1 of the series discusses some of the advantage in treating the inverse problem statistically and in regularizing its solution by means of penalty criteria based on prior estimates of the parameters. The inverse problem is posed in the framework of maximum likelihood theory cast in a manner that accounts for prior information about the parameters. Since not all the factors which contribute to the prior errors can be quantified statistically at the outset, the covariance matrices of these errors are expressed in terms of several parameters which, if unknown, can be estimated jointly with the hydraulic parameters by a stagewise optimization process. 
When transient head data are separated by a fixed time interval, the temporal structure of these data is approximated by a lag-one autoregressive model with a correlation coefficient that can be treated as another unknown parameter. Estimation errors are analyzed by examining the lower bound of their covariance matrix in the eigenspace. Paper 1 concludes by suggesting that certain model identification criteria developed in the time series literature, all of which are based on the maximum likelihood concept, might be useful for selecting the best groundwater model (or the best method of parameterizing a particular model) among a number of given alternatives. --- paper_title: Using a multiobjective approach to retrieve information on surface properties used in a SVAT model paper_content: Abstract The reliability of model predictions used in meteorology, agronomy or hydrology is partly linked to an adequate representation of the water and energy balances which are described in so-called SVAT (Soil Vegetation Atmosphere Transfer) models. These models require the specification of many surface properties which can generally be obtained from laboratory or field experiments, using time-consuming techniques, or can be derived from textural information. The required accuracy of the surface properties depends on the model complexity and their misspecification can affect model performance. At various time and spatial resolutions, remote sensing provides information related to surface parameters in SVAT models or state variables simulated by SVAT models. In this context, the Simple Soil-Plant-Atmosphere Transfer-Remote Sensing (SiSPAT-RS) model was developed for remote sensing data assimilation objectives. This new version of the physically based SiSPAT model simulates the main surface processes (energy fluxes, soil water content profiles, temperatures) and remote sensing data in the visible, infrared and thermal infrared spectral domains. As a preliminary step before data assimilation in the model, the objectives of this study were (1) to apply a multiobjective approach for retrieving quantitative information about the surface properties from different surface measurements and (2) to determine the potential of the SiSPAT-RS model to be applied with ‘little’ a priori information about input parameters. To reach these goals, the ability of the Multiobjective Generalized Sensitivity Analysis (MOGSA) algorithm to determine and quantify the most influential input parameters of the SiSPAT-RS model on several simulated output variables was investigated. The results revealed the main influential input parameters according to different contrasted environmental conditions, and contributed to the reduction of their a priori uncertainty range. A procedure for specifying surface properties from MOGSA results was tested on the thermal and hydraulic soil parameters, and evaluated through the SiSPAT-RS model performance. Although slightly lower than a reference simulation, the performance was satisfactory and suggested that complex SVAT models can be driven with little a priori information on soil properties, as in a future context of remote sensing data assimilation. Measurements acquired on a winter wheat field of the ReSeDA (Remote Sensing Data Assimilation) experiment were used in this study.
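The aquifer parameter estimation entry above frames the inverse problem as maximum likelihood estimation regularized by penalty criteria built from prior parameter estimates. As a minimal illustration of that idea (not the stagewise algorithm of the cited paper), the Python sketch below minimizes a weighted data misfit plus a prior-departure penalty for a generic forward model; the names forward_model, map_estimate, the covariance inputs, and the choice of a derivative-free optimizer are all assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(forward_model, obs, obs_cov, p_prior, prior_cov, p0):
    """Penalized maximum likelihood / MAP estimate of model parameters.

    forward_model : callable mapping a parameter vector to predicted observations
                    (e.g., simulated hydraulic heads)
    obs, obs_cov  : observed data and their error covariance
    p_prior, prior_cov : prior parameter estimates and their covariance (the penalty term)
    p0            : initial guess for the optimizer
    """
    obs_cov_inv = np.linalg.inv(obs_cov)
    prior_cov_inv = np.linalg.inv(prior_cov)

    def objective(p):
        r = forward_model(p) - obs          # data misfit
        d = p - p_prior                     # departure from the prior estimates
        return float(r @ obs_cov_inv @ r + d @ prior_cov_inv @ d)

    result = minimize(objective, p0, method="Nelder-Mead")
    return result.x
```

Under Gaussian error assumptions the two quadratic terms correspond to the log-likelihood of the data and the prior-information penalty, which is the connection the cited entry exploits when it estimates covariance weights jointly with the hydraulic parameters.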
--- paper_title: Estimating root zone soil moisture from surface soil moisture data and soil-vegetation-atmosphere transfer modeling paper_content: We studied the possibility of estimating root zone soil moisture through the combined use of a time series of observed surface soil moisture data and soil-vegetation-atmosphere transfer modeling. The analysis was based on the interactions between soil- biosphere-atmosphere surface scheme and two data sets obtained from soybean crops in 1989 and 1990. These data sets included detailed measurements of soil and vegetation characteristics and mass and energy transfer in the soil-plant-atmosphere system. The data measured during the 3-month experiment in 1989 are used to investigate the accuracy of soil reservoir retrievals, as a function of the time period and frequency of measurements of surface soil moisture involved in the retrieval process. This study contributes to better defining the requirements for the use of remotely sensed microwave measurements of surface soil moisture. --- paper_title: A simplified land data assimilation scheme and its application to soil moisture experiments in 2002 (SMEX02) paper_content: [1] The influences of vegetation and its spatial and temporal heterogeneity on the detection of soil moisture can be significant and may limit the applicability of satellite passive microwave sensors. Sensitivity analysis of an applied soil moisture algorithm using ground-based measurements can show where problems can arise and how they may be circumvented. This paper investigates a method of retrieving a one-dimensional soil moisture profile and the surface and canopy temperature, under the influence of different vegetations and dynamics, by integrating numerical models and passive microwave, visible, and near-infrared measurements via a novel application of data assimilation. The land surface scheme (LSS), which is at the heart of the present land data assimilation scheme (LDAS), is a biophysically based model (simplified biosphere model 2: SiB2) of soil, vegetative and atmospheric interactions. The ground-based microwave radiometer (GBMR) measurements, gathered over the soil moisture experiments in 2002 (SMEX02) in Iowa, were assimilated into the LSS via a radiative transfer model (RTM) using LDAS. Compared to open loop simulations, the results of LDAS are in better agreement with observations. --- paper_title: Automatic differentiation : a tool for variational data assimilation and adjoint sensitivity analysis for flood modeling paper_content: Flood modeling involves catchment scale hydrology and river hydraulics. Analysis and reduction of model uncertainties induce sensitivity analysis, reliable initial and boundary conditions, calibration of empirical parameters. A deterministic approach dealing with the aforementioned estimation and sensitivity analysis problems results in the need of computing the derivatives of a function of model output variables with respect to input variables. Modern automatic differentiation (ad) tools such as Tapenade provide an easier and safe way to fulfill this need. Two applications are presented in this paper: variational data assimilation and adjoint sensitivity analysis. --- paper_title: Recent progress of data assimilation methods in meteorology paper_content: Data assimilation is a methodology for estimating accurately the state of a time-evolving complex system like the atmosphere from observational data and a numerical model of the system. 
It has become an indispensable tool for meteorological research as well as for numerical weather prediction, as represented by extensive use of reanalysis datasets for research purposes. New advances in data assimilation methods have emerged since the 1980s. This review paper presents the theoretical background and implementation of two advanced data assimilation methods: four-dimensional variational assimilation (4D-Var) and ensemble Kalman filtering (EnKF), which currently draw much attention in the meteorological community. Recent research results in Japan on those methods are reviewed, especially on mesoscale applications of 4D-Var and tests of the local ensemble transform Kalman filter (LETKF). Comparison of 4D-Var and EnKF is also briefly discussed. An outline of the mesoscale 4D-Var system of the Japan Meteorological Agency, which is the first operational 4D-Var for a mesoscale model, is given in the Appendix. --- paper_title: From Near-Surface to Root-Zone Soil Moisture Using Year-Round Data paper_content: Abstract Passive microwave remote sensing may provide quantitative information about the water content of a shallow near-surface soil layer. However, the variable of interest for applications such as short- and medium-term meteorological modeling and hydrological studies over vegetated areas is the root-zone soil moisture, which controls plant transpiration. Because near-surface soil moisture is related to root-zone soil moisture through diffusion processes, assimilation algorithms may enable one to retrieve the total soil moisture content from observed surface soil moisture. A variational assimilation method is applied to a new dataset obtained over a fallowland in southwestern France in 1997 and 1998. The new database includes continuous automatic measurements of water content within the top 6-cm soil layer during a 17-month period from January 1997 to May 1998. Once calibrated, the Interactions between Soil, Biosphere, and Atmosphere (ISBA) surface scheme is able to simulate properly the measured surfa... --- paper_title: Mantle circulation models with variational data assimilation: inferring past mantle flow and structure from plate motion histories and seismic tomography paper_content: Mantle convection models require an initial condition some time in the past. Because this initial condition is unknown for Earth, we cannot infer the geological evolution of mantle flow from forward mantle convection calculations even for the most recent Mesozoic and Cenozoic geological history of our planet. Here we introduce a fluid dynamic inverse problem to constrain unknown mantle flow back in time from seismic tomographic observations of the mantle and reconstructions of past plate motions using variational data assimilation. We derive the generalized inverse of mantle convection and explore the initial condition problem in high-resolution, 3-D spherical mantle circulation models for a time period of 100 Myr, roughly comparable to half a mantle overturn. We present a synthetic modelling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from fluid dynamic inverse modelling, assuming present-day mantle structure is well-known, even if an initial first guess assumption about the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We also demonstrate that convecting present-day mantle structure back in time by reversing the time-stepping of the energy equation is insufficient to model the mantle structure of the past.
The difficulty arises, because such backward convection calculations ignore thermal diffusion effects, and therefore cannot account for the generation of thermal buoyancy in boundary layers as we go back in time. Inverse mantle convection modelling should make it possible to infer a number of flow parameters from observational constraints of the mantle. --- paper_title: What Is an Adjoint Model? paper_content: Adjoint models are powerful tools for many studies that require an estimate of sensitivity of model output (e.g., a forecast) with respect to input. Actual fields of sensitivity are produced directly and efficiently, which can then be used in a variety of applications, including data assimilation, parameter estimation, stability analysis, and synoptic studies. The use of adjoint models as tools for sensitivity analysis is described here using some simple mathematics. An example of sensitivity fields is presented along with a short description of adjoint applications. Limitations of the applications are discussed and some speculations about the future of adjoint models are offered. --- paper_title: Improvements to the Community Land Model and their impact on the hydrological cycle: COMMUNITY LAND MODEL HYDROLOGY paper_content: [1] The Community Land Model version 3 (CLM3) is the land component of the Community Climate System Model (CCSM). CLM3 has energy and water biases resulting from deficiencies in some of its canopy and soil parameterizations related to hydrological processes. Recent research by the community that utilizes CLM3 and the family of CCSM models has indicated several promising approaches to alleviating these biases. This paper describes the implementation of a selected set of these parameterizations and their effects on the simulated hydrological cycle. The modifications consist of surface data sets based on Moderate Resolution Imaging Spectroradiometer products, new parameterizations for canopy integration, canopy interception, frozen soil, soil water availability, and soil evaporation, a TOPMODEL-based model for surface and subsurface runoff, a groundwater model for determining water table depth, and the introduction of a factor to simulate nitrogen limitation on plant productivity. The results from a set of offline simulations were compared with observed data for runoff, river discharge, soil moisture, and total water storage to assess the performance of the new model (referred to as CLM3.5). CLM3.5 exhibits significant improvements in its partitioning of global evapotranspiration (ET) which result in wetter soils, less plant water stress, increased transpiration and photosynthesis, and an improved annual cycle of total water storage. Phase and amplitude of the runoff annual cycle is generally improved. Dramatic improvements in vegetation biogeography result when CLM3.5 is coupled to a dynamic global vegetation model. Lower than observed soil moisture variability in the rooting zone is noted as a remaining deficiency. --- paper_title: Upscaling Hydraulic Properties and Soil Water Flow Processes in Heterogeneous Soils paper_content: This review covers, in a comprehensive manner, the approaches available in the literature to upscale soil water processes and hydraulic parameters in the vadose zone. 
We distinguish two categories of upscaling methods: forward approaches requiring information about the spatial distribution of hydraulic parameters at a small scale, and inverse modeling approaches requiring information about the spatial and temporal variation of state variables at various scales, including so-called “soft data”. Geostatistical and scaling approaches are crucial to upscale soil water processes and to derive large-scale effective fluxes and parameters from small-scale information. Upscaling approaches include stochastic perturbation methods, the scaleway approach, the stream-tube approach, the aggregation concept, inverse modeling approaches, and data fusion. With all upscaling methods, the estimated effective parameters depend not only on the properties of the heterogeneous flow field but also on boundary conditions. The use of the Richards equation at the field and watershed scale is based more on pragmatism than on a sound physical basis. There are practically no data sets presently available that provide sufficient information to extensively validate existing upscaling approaches. Use of numerical case studies has therefore been most common. More recently and still under development, hydrogeophysical methods combined with ground-based remote sensing techniques promise significant contributions toward providing high-quality data sets. Finally, most of the upscaling literature in vadose zone research has dealt with bare soils or deep vadose zones. There is a need to develop upscaling methods for real world soils, considering root water uptake mechanisms and other soil–plant–atmosphere interactions. --- paper_title: Estimation of Radiative Transfer Parameters from L-Band Passive Microwave Brightness Temperatures Using Advanced Data Assimilation paper_content: ESA's Soil Moisture and Ocean Salinity (SMOS) mission has been designed to extend our knowledge of the Earth's water cycle. Soil Moisture and Ocean Salinity records brightness temperatures at the L-band, which over land are sensitive to soil and vegetation parameters. On the basis of these measurements, soil moisture and vegetation opacity data sets have been derived operationally since 2009 for applications comprising hydrology, numerical weather prediction (NWP), and drought monitoring. We present a method to enhance the knowledge about the temporal evolution of radiative transfer parameters. The radiative transfer model L-Band Microwave Emission of the Biosphere (L-MEB) is used within a data assimilation framework to retrieve vegetation opacity and soil surface roughness. To analyze the ability of the data assimilation approach to track the temporal evolution of these parameters, scenario analyses were performed with increasing complexity. First, the HYDRUS-1D code was used to generate soil moisture and soil temperature time series. On the basis of these data, the L-MEB forward model was run to simulate brightness temperature observations. Finally, the coupled model system HYDRUS-1D and L-MEB were integrated into a data assimilation framework using a particle filter, which is able to update L-MEB states as well as L-MEB parameters. Time invariant and time variable radiative transfer parameters were estimated. Moreover, it was possible to estimate a "bias" term when model simulations show a systematic difference as compared to observations. An application to a USDA-NRCS Soil Climate Analysis Network (SCAN) site showed the good performance of the proposed approach under real conditions. 
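Several entries above (the bootstrap filter, the particle filter with kernel smoothing for joint soil moisture and parameter estimation, SISR with parameter resampling, and the L-MEB radiative transfer parameter retrieval) share the same sequential importance sampling with resampling skeleton. The Python sketch below is a generic illustration of one such step with joint state–parameter tracking; the propagate and likelihood callables, the systematic resampling variant, and the simple Gaussian jitter used in place of a full kernel-smoothing step are assumptions, not the cited implementations.

```python
import numpy as np

def particle_filter_step(states, params, weights, propagate, likelihood, obs,
                         jitter_std=0.01, rng=None):
    """One SIR (bootstrap) particle filter step with joint state-parameter tracking.

    states, params : (n_particles, ...) arrays of model states and uncertain parameters
    weights        : current importance weights (sum to one)
    propagate      : advances one state vector one step given its parameters
    likelihood     : p(obs | state, params) for one particle, e.g. a Gaussian misfit
                     of simulated versus observed brightness temperature
    jitter_std     : std of the small perturbation applied to resampled parameters
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)

    # Prediction: propagate every particle with its own parameter set.
    states = np.array([propagate(s, p) for s, p in zip(states, params)])

    # Update: reweight by the observation likelihood and normalize.
    weights = weights * np.array([likelihood(obs, s, p) for s, p in zip(states, params)])
    weights = weights / weights.sum()

    # Systematic resampling when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.searchsorted(np.cumsum(weights), positions)
        idx = np.minimum(idx, n - 1)
        states, params = states[idx], params[idx]
        # Perturb duplicated parameter values to avoid particle collapse
        # (a crude stand-in for the kernel-smoothing step described above).
        params = params + rng.normal(0.0, jitter_std, size=params.shape)
        weights = np.full(n, 1.0 / n)

    return states, params, weights
```

Because the weights carry the full likelihood rather than a Gaussian approximation, this family of filters tolerates the nonlinear, non-Gaussian observation operators that several of the entries above identify as the limitation of Kalman-type updates.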
--- paper_title: Assimilation of Disaggregated Microwave Soil Moisture into a Hydrologic Model Using Coarse-Scale Meteorological Data paper_content: Abstract Near-surface soil moisture retrieved from Soil Moisture and Ocean Salinity (SMOS)-type data is downscaled and assimilated into a distributed soil–vegetation–atmosphere transfer (SVAT) model with the ensemble Kalman filter. Because satellite-based meteorological data (notably rainfall) are not currently available at finescale, coarse-scale data are used as forcing in both the disaggregation and the assimilation. Synthetic coarse-scale observations are generated from the Monsoon ‘90 data by aggregating the Push Broom Microwave Radiometer (PBMR) pixels covering the eight meteorological and flux (METFLUX) stations and by averaging the meteorological measurements. The performance of the disaggregation/assimilation coupling scheme is then assessed in terms of surface soil moisture and latent heat flux predictions over the 19-day period of METFLUX measurements. It is found that the disaggregation improves the assimilation results, and vice versa, the assimilation of the disaggregated microwave soil mois... --- paper_title: Dual state-parameter estimation of root zone soil moisture by optimal parameter estimation and extended Kalman filter data assimilation paper_content: Abstract With well-determined hydraulic parameters in a hydrologic model, a traditional data assimilation method (such as the Kalman filter and its extensions) can be used to retrieve root zone soil moisture under uncertain initial state variables (e.g., initial soil moisture content) and good simulated results can be achieved. However, when the key soil hydraulic parameters are incorrect, the error is non-Gaussian, as the Kalman filter will produce a persistent bias in its predictions. In this paper, we propose a method coupling optimal parameters and extended Kalman filter data assimilation (OP-EKF) by combining optimal parameter estimation, the extended Kalman filter (EKF) assimilation method, a particle swarm optimization (PSO) algorithm, and Richards’ equation. We examine the accuracy of estimating root zone soil moisture through the optimal parameters and extended Kalman filter data assimilation method by using observed in situ data at the Meiling experimental station, China. Results indicate that merely using EKF for assimilating surface soil moisture content to obtain soil moisture content in the root zone will produce a persistent bias between simulated and observed values. Using the OP-EKF assimilation method, estimates were clearly improved. If the soil profile is heterogeneous, soil moisture retrieval is accurate in the 0–50 cm soil profile and is inaccurate at 100 cm depth. Results indicate that the method is useful for retrieving root zone soil moisture over large areas and long timescales even when available soil moisture data are limited to the surface layer, and soil moisture content are uncertain and soil hydraulic parameters are incorrect. --- paper_title: Multi-scale assimilation of root zone soil water predictions paper_content: When hydrology model parameters are determined, a traditional data assimilation method (such as Kalman filter) and a hydrology model can estimate the root zone soil water with uncertain state variables (such as initial soil water content). The simulated result can be quite good. 
However, when a key soil hydraulic property, such as the saturated hydraulic conductivity, is overestimated or underestimated, the traditional soil water assimilation process will produce a persistent bias in its predictions. In this paper, we present and demonstrate a new multi-scale assimilation method by combining the direct insertion assimilation method, particle swarm optimisation (PSO) algorithm and Richards equation. We study the possibility of estimating root zone soil water with a multi-scale assimilation method by using observed in situ data from the Wudaogou experiment station, Huaihe River Basin, China. The results indicate there is a persistent bias between simulated and observed values when the direct insertion assimilation surface soil water content is used to estimate root zone soil water contents. Using a multi-scale assimilation method (PSO algorithm and direct insertion assimilation) and an assumed bottom boundary condition, the results show some obvious improvement, but the root mean square error is still relatively large. When the bottom boundary condition is similar to the actual situation, the multi-scale assimilation method can well represent the root zone soil water content. The results indicate that the method is useful in estimating root zone soil water when available soil water data are limited to the surface layer and the initial soil water content even when the soil hydraulic conductivities are uncertain. Copyright © 2011 John Wiley & Sons, Ltd. --- paper_title: Multi-scale assimilation of surface soil moisture data for robust root zone moisture predictions paper_content: In the presence of uncertain initial conditions and soil hydraulic properties land surface model performance can be significantly improved by the assimilation of periodic observations of certain state variables, such as the near surface soil moisture as observed from a remote platform. Recently, Montaldo et al. [Water Resour Res 37 (2001) 2889] derived a framework that uses biases between observed and modeled time rates of change of surface soil moisture to quantify biases between modeled and actual root-zone-average soil moisture contents. For very large errors in the saturated conductivity the soil moisture assimilation procedure is continuously working against the drainage errors, resulting in a persistent bias in its predictions. In this paper, we adopt this persistent (directional) bias in soil moisture as evidence of an error in the saturated conductivity. From manipulations of soil water balance equations we derived an expression that quantitatively relates the persistent bias in soil moisture to the estimated error in the saturated hydraulic conductivity. We combined this result with the approach of Montaldo et al. [Water Resour Res 37 (2001) 2889] to form a multi-scale assimilation approach. The multi-scale assimilation system is shown to provide marked improvements in the prediction of root zone soil moisture for a case study using data taken from an experimental catchment near Cork, Ireland. In effect, the root zone moisture is updated to provide a temporal trajectory of the near surface moisture that follows the trajectory of the observed surface moisture, and the hydraulic conductivity is adjusted on the basis of the time averaged corrections applied to the root zone water content. 
It is anticipated that this approach would be useful in operational forecasting models over large domains, where system parameters would be uncertain and occasional distributed observations would be limited to the near surface zone. --- paper_title: How much improvement can precipitation data fusion achieve with a Multiscale Kalman Smoother-based framework? paper_content: [1] With advancements in measuring techniques and modeling approaches, more and more precipitation data products, with different spatial resolutions and accuracies, become available. Therefore, there is an increasing need to produce a fused precipitation product that can take advantage of the strengths of each individual precipitation data product. This study systematically and quantitatively evaluates the improvements of the fused precipitation data as a result of using the Mulitscale Kalman Smoother-based (i.e., MKS-based) framework. Impacts of two types of errors, i.e., white noise and bias that are associated with individual precipitation products, are investigated through hypothetical experiments. Two measures, correlation and root-mean-square error, are used to evaluate the improvements of the fused precipitation data. Our study shows that the MKS-based framework can significantly recover the loss of precipitation's spatial patterns and magnitudes that are associated with the white noise and bias when the erroneous data at different spatial scales are fused together. Although the erroneous data at a finer resolution are generally more effective in improving the spatial patterns and magnitudes of the erroneous data at a coarser resolution, data at a coarser resolution can also provide valuable information in improving the quality of the data at a finer resolution when they are fused. This study provides insights on the values of the MKS-based framework and a guideline for determining a potentially optimal spatial scale over which improvements in both the spatial patterns and the magnitudes can be maximized based on given data with different spatial resolutions. --- paper_title: A Multiscale Ensemble Filtering System for Hydrologic Data Assimilation. Part I: Implementation and Synthetic Experiment paper_content: Abstract The multiscale autoregressive (MAR) framework was introduced in the last decade to process signals that exhibit multiscale features. It provides the method for identifying the multiscale structure in signals and a filtering procedure, and thus is an efficient way to solve the optimal estimation problem for many high-dimensional dynamic systems. Later, an ensemble version of this multiscale filtering procedure, the ensemble multiscale filter (EnMSF), was developed for estimation systems that rely on Monte Carlo samples, making this technique suitable for a range of applications in geosciences. Following the prototype study that introduced EnMSF, a strategy is devised here to implement the multiscale method in a hydrologic data assimilation system, which runs a land surface model. Assimilation experiments are carried out over the Arkansas–Red River basin, located in the central United States (∼645 000 km2), using the Variable Infiltration Capacity (VIC) model with a computing grid of 1062 pixels. A... 
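The Multiscale Kalman Smoother and ensemble multiscale filter entries above both rest on combining information across spatial resolutions. The sketch below is a deliberately simplified coarse-to-fine update, assuming independent fine-pixel errors and a block-averaging observation operator; it conveys the flavor of a scale-recursive fusion step but is not the tree-based MKS or EnMSF algorithm itself.

```python
import numpy as np

def fuse_coarse_obs(fine_prior, coarse_obs, prior_var, obs_var, block=2):
    """Update each coarse block mean with a scalar Kalman gain and spread the
    increment uniformly over the fine pixels inside the block."""
    fused = fine_prior.copy()
    n_blocks = fine_prior.shape[0] // block
    for i in range(n_blocks):
        for j in range(n_blocks):
            sl = (slice(i * block, (i + 1) * block), slice(j * block, (j + 1) * block))
            block_mean = fine_prior[sl].mean()
            mean_var = prior_var / block**2          # variance of the block mean (independent pixels)
            gain = mean_var / (mean_var + obs_var)   # scalar Kalman gain at the coarse scale
            fused[sl] += gain * (coarse_obs[i, j] - block_mean)
    return fused

# Illustrative 4x4 fine-scale precipitation prior and a noisy 2x2 coarse observation
fine_prior = np.array([[1.0, 2.0, 0.5, 0.8],
                       [1.5, 2.5, 0.7, 0.9],
                       [3.0, 3.2, 1.1, 1.0],
                       [2.8, 3.5, 1.2, 1.4]])
coarse_obs = np.array([[2.2, 0.6],
                       [3.4, 1.3]])
print(fuse_coarse_obs(fine_prior, coarse_obs, prior_var=0.25, obs_var=0.1))
```

A full multiscale tree repeats this kind of parent-child exchange over several scales in both directions, which is what the cited frameworks formalize.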
--- paper_title: Effects of uncertainty magnitude and accuracy on assimilation of multiscale measurements for snowpack characterization paper_content: [1] Hydrologists regularly utilize data assimilation (DA) techniques to merge remote sensing measurements with physical models in order to characterize hydrologic reservoirs. DA methods generally require an estimate of the uncertainty of the various inputs to the models; in practice, however, the uncertainty of these quantities is often unknown. This paper explores the effects of the unknown uncertainty on the efficiency of a multifrequency, multiscale hydrologic DA scheme for snowpack characterization. Synthetic passive microwave (PM) measurements at 25 km and near-infrared (NIR) measurements at 1 km were assimilated, and both snow water equivalent (SWE) and grain size were estimated at 1 km resolution. It is found that the uncertainty magnitude had a significant effect on the efficiency of both SWE and grain size estimation, but that the uncertainty magnitude had very different effects on these two variables because of the different PM and NIR measurement scales. Secondly, it was found that the uncertainty accuracy had a very important role in this DA scheme and that the filter may degrade the estimate of SWE and grain size if key model inputs are misspecified. Finally, four metrics were used to assess the difference between the PM and NIR measurement innovations and their expected values. It was shown that these metrics could potentially be used in an adaptive filtering scheme to correct misspecified uncertainty. More investigation will be required before the feasibility of such an adaptive filtering scheme is established. These findings have important ramifications for snowpack estimation since it implies that in the context of DA schemes, better use will be made of remote sensing products when better physical characterization of the uncertainty of modeled estimates of snow states is available. --- paper_title: Multiscale assimilation of Advanced Microwave Scanning Radiometer-EOS snow water equivalent and Moderate Resolution Imaging Spectroradiometer snow cover fraction observations in northern Colorado: SATELLITE-OBSERVED SNOW DATA ASSIMILATION paper_content: [1] Eight years (2002–2010) of Advanced Microwave Scanning Radiometer–EOS (AMSR-E) snow water equivalent (SWE) retrievals and Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) observations are assimilated separately or jointly into the Noah land surface model over a domain in Northern Colorado. A multiscale ensemble Kalman filter (EnKF) is used, supplemented with a rule-based update. The satellite data are either left unscaled or are scaled for anomaly assimilation. The results are validated against in situ observations at 14 high-elevation Snowpack Telemetry (SNOTEL) sites with typically deep snow and at 4 lower-elevation Cooperative Observer Program (COOP) sites. Assimilation of coarse-scale AMSR-E SWE and fine-scale MODIS SCF observations both result in realistic spatial SWE patterns. At COOP sites with shallow snowpacks, AMSR-E SWE and MODIS SCF data assimilation are beneficial separately, and joint SWE and SCF assimilation yields significantly improved root-mean-square error and correlation values for scaled and unscaled data assimilation. In areas of deep snow where the SNOTEL sites are located, however, AMSR-E retrievals are typically biased low and assimilation without prior scaling leads to degraded SWE estimates. 
Anomaly SWE assimilation could not improve the interannual SWE variations in the assimilation results because the AMSR-E retrievals lack realistic interannual variability in deep snowpacks. SCF assimilation has only a marginal impact at the SNOTEL locations because these sites experience extended periods of near-complete snow cover. Across all sites, SCF assimilation improves the timing of the onset of the snow season but without a net improvement of SWE amounts. --- paper_title: Assimilation of Soil Wetness Index and Leaf Area Index into the ISBA-A-gs land surface model: grassland case study paper_content: Abstract. The performance of the joint assimilation in a land surface model of a Soil Wetness Index (SWI) product provided by an exponential filter together with Leaf Area Index (LAI) is investigated. The data assimilation is evaluated with different setups using the SURFEX modeling platform, for a period of seven years (2001–2007), at the SMOSREX grassland site in southwestern France. The results obtained with a Simplified Extended Kalman Filter demonstrate the effectiveness of a joint data assimilation scheme when both SWI and Leaf Area Index are merged into the ISBA-A-gs land surface model. The assimilation of a retrieved Soil Wetness Index product presents several challenges that are investigated in this study. A significant improvement of around 13 % of the root-zone soil water content is obtained by assimilating dimensionless root-zone SWI data. For comparison, the assimilation of in situ surface soil moisture is considered as well. A lower impact on the root zone is noticed. Under specific conditions, the transfer of the information from the surface to the root zone was found not accurate. Also, our results indicate that the assimilation of in situ LAI data may correct a number of deficiencies in the model, such as low LAI values in the senescence phase by using a seasonal-dependent error definition for background and observations. In order to verify the specification of the errors for SWI and LAI products, a posteriori diagnostics are employed. This approach highlights the importance of the assimilation design on the quality of the analysis. The impact of data assimilation scheme on CO2 fluxes is also quantified by using measurements of net CO2 fluxes gathered at the SMOSREX site from 2005 to 2007. An improvement of about 5 % in terms of rms error is obtained. --- paper_title: Multisensor snow data assimilation at the continental scale: The value of Gravity Recovery and Climate Experiment terrestrial water storage information paper_content: [1] This investigation establishes a multisensor snow data assimilation system over North America (from January 2002 to June 2007), toward the goal of better estimation of snowpack (in particular, snow water equivalent and snow depth) via incorporating both Gravity Recovery and Climate Experiment (GRACE) terrestrial water storage (TWS) and Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) information into the Community Land Model. The different properties associated with the SCF and TWS observations are accommodated through a unified approach using the ensemble Kalman filter and smoother. Results show that this multisensor approach can provide significant improvements over a MODIS-only approach, for example, in the Saint Lawrence, Fraser, Mackenzie, Churchill & Nelson, and Yukon river basins and the southwestern rim of Hudson Bay. 
At middle latitudes, for example, the North Central and Missouri river basins, the inclusion of GRACE information preserves the advantages (compared with the open loop) shown in the MODIS-only run. However, in some high-latitude areas and given months the open loop run shows a comparable or even better performance, implying considerable room for refinements of the multisensor algorithm. In addition, ensemble-based metrics are calculated and interpreted domainwide. They indicate the potential importance of accurate representation of snow water equivalent autocovariance in assimilating TWS observations and the regional and/or seasonal dependence of GRACE’s capability to reduce ensemble variance. These analyses contribute to clarifying the effects of GRACE’s special features (e.g., a vertical integral of different land water changes, coarse spatial and temporal resolution) in the snow data assimilation system. --- paper_title: Improving Spatial Soil Moisture Representation Through Integration of AMSR-E and MODIS Products paper_content: The use of microwave observations has been highlighted as a complementary tool for evaluating land surface properties. Microwave observations are less affected by clouds, water vapor, and aerosol and also contain valuable soil moisture information. However, a critical limitation in microwave observations is the coarse spatial resolution attributed to the complex retrieval process. The objective of the current study is to develop an independent (from ground observations) downscaling approach that merges information from higher spatial resolution MODerate-resolution Imaging Spectroradiometer (MODIS) (~1 km) with lower spatial resolution AMSR-E (~25 km) to obtain soil moisture estimates at the MODIS scale (~1 km). We compare the developed (UCLA) method against a range of previous published approaches. Various key factors (i.e., surface temperature, vegetation indexes, and albedo) derived from MODIS provide information on relative variations in surface wetness conditions and contribute weighting parameters for downscaling the larger AMSR-E soil moisture footprints. Evaluation of the various downscaled soil moisture products is undertaken at the SMEX04 site in southern Arizona. Results show that the UCLA downscaling technique, as well as the previously published Merlin method, significantly improves the limited spatial variability of the current AMSR-E product. Spatial correlation (R) values improved from -0.08 to 0.34 and 0.27 for the Merlin and UCLA methods, respectively. The evaluated triangle-based methods show poorer performance over the study domain. Results from the current study yield insight on the integration of multiscale remote sensing data in various downscaling methods and the usefulness of MODIS observations in compensating for low-resolution microwave observations. --- paper_title: A Land Data Assimilation System for Soil Moisture and Temperature: An Information Content Study paper_content: Abstract A Canadian Land Data Assimilation System (CaLDAS) for the analysis of land surface prognostic variables is designed and implemented at the Meteorological Service of Canada for the initialization of numerical weather prediction and climate models. The assimilation of different data sources for the production of daily soil moisture and temperature analyses is investigated in a set of observing system simulation experiments over North America. 
A simplified variational technique is adapted to accommodate different observation types at their appropriate time in a 24-h time window. The screen-level observations of temperature and relative humidity, from conventional synoptic surface observations (SYNOP)/aviation routine weather report (METAR)/surface aviation observation (SA) reports, are considered together with presently available satellite observations provided by the Aqua satellite (microwave C-band), Geostationary Operational Environmental Satellite (GOES) [infrared (IR)], and observations availab... --- paper_title: Novel approach to nonlinear/non-Gaussian Bayesian state estimation paper_content: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter. --- paper_title: Three-dimensional soil moisture profile retrieval by assimilation of near-surface measurements: Simplified Kalman filter covariance forecasting and field application: THREE-DIMENSIONAL SOIL MOISTURE ASSIMILATION paper_content: [1] The Kalman filter data assimilation technique is applied to a distributed three-dimensional soil moisture model for retrieval of the soil moisture profile in a 6 ha catchment using near-surface soil moisture measurements. A simplified Kalman filter covariance forecasting methodology is developed based on forecasting of the state correlations and imposed state variances. This covariance forecasting technique, termed the modified Kalman filter, was then used in a 1 month three-dimensional field application. Two updating scenarios were tested: (1) updating every 2 to 3 days and (2) a single update. The data used were from the Nerrigundah field site, near Newcastle, Australia. This study demonstrates the feasibility of data assimilation in a quasi three-dimensional distributed soil moisture model, provided simplified covariance forecasting techniques are used. It also identifies that (1) the soil moisture profile cannot be retrieved from near-surface soil moisture measurements when the near-surface and deep soil layers become decoupled, such as during extreme drying events; (2) if simulation of the soil moisture profile is already good, the assimilation can result in a slight degradation, but if the simulation is poor, assimilation can yield a significant improvement; (3) soil moisture profile retrieval results are independent of initial conditions; and (4) the required update frequency is a function of the errors in model physics and forcing data. --- paper_title: Earth-Viewing L-Band Radiometer Sensing of Sea Surface Scattered Celestial Sky Radiation—Part II: Application to SMOS paper_content: We examine how the rough sea surface scattering of L-band celestial sky radiation might affect the measurements of the future European Space Agency Soil Moisture and Ocean Salinity (SMOS) mission.
For this purpose, we combined data from several surveys to build a comprehensive all-sky L-band celestial sky brightness temperature map for the SMOS mission that includes the continuum radiation and the hydrogen line emission rescaled for the SMOS bandwidth. We also constructed a separate map of strong and very localized sources that may exhibit L-band brightness temperatures exceeding 1000 K. Scattering by the roughened ocean surface of radiation from even the strongest localized sources is found to reduce the contributions from these localized strong sources to negligible levels, and rough surface scattering solutions may be obtained with a map much coarser than the original continuum maps. In rough ocean surface conditions, the contribution of the scattered celestial noise to the reconstructed brightness temperatures is not significantly modified by the synthetic antenna weighting function, which makes integration over the synthetic beam unnecessary. The contamination of the reconstructed brightness temperatures by celestial noise exhibits a strong annual cycle with the largest contamination occurring in the descending swaths in September and October, when the specular projection of the field of view is aligned with the Galactic equator. Ocean surface roughness may alter the contamination by over 0.1 K in 30% of the SMOS measurements. Given this potentially large impact of surface roughness, an operational method is proposed to account for it in the SMOS level 2 sea surface salinity algorithm. --- paper_title: The Soil Moisture Active Passive (SMAP) Mission paper_content: The Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey. SMAP will make global measurements of the soil moisture present at the Earth's land surface and will distinguish frozen from thawed land surfaces. Direct observations of soil moisture and freeze/thaw state from space will allow significantly improved estimates of water, energy, and carbon transfers between the land and the atmosphere. The accuracy of numerical models of the atmosphere used in weather prediction and climate projections are critically dependent on the correct characterization of these transfers. Soil moisture measurements are also directly applicable to flood assessment and drought monitoring. SMAP observations can help monitor these natural hazards, resulting in potentially great economic and social benefits. SMAP observations of soil moisture and freeze/thaw timing will also reduce a major uncertainty in quantifying the global carbon balance by helping to resolve an apparent missing carbon sink on land over the boreal latitudes. The SMAP mission concept will utilize L-band radar and radiometer instruments sharing a rotating 6-m mesh reflector antenna to provide high-resolution and high-accuracy global maps of soil moisture and freeze/thaw state every two to three days. In addition, the SMAP project will use these observations with advanced modeling and data assimilation to provide deeper root-zone soil moisture and net ecosystem exchange of carbon. SMAP is scheduled for launch in the 2014-2015 time frame. 
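The Kalman-filter-based profile retrieval entry above, and many of the other assimilation systems cited in this section, share essentially the same analysis step. A generic perturbed-observation ensemble Kalman filter update is sketched below for a toy three-layer soil moisture profile in which only the surface layer is observed; the ensemble size, error levels, and observation operator are assumptions for illustration rather than any cited configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, H, rng):
    """Perturbed-observation EnKF analysis step (textbook form).
    ensemble: (n_state, n_members) prior sample; H: (n_obs, n_state) observation operator."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_members - 1)          # sample background covariance
    R = (obs_std ** 2) * np.eye(n_obs)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    perturbed_obs = obs[:, None] + rng.normal(0.0, obs_std, (n_obs, n_members))
    return ensemble + K @ (perturbed_obs - H @ ensemble)

rng = np.random.default_rng(1)
# Toy three-layer profile, 50 members; a shared wetness anomaly couples the layers so the
# surface observation also corrects the deeper layers through the sample covariance.
common = 0.05 * rng.standard_normal(50)
prior = 0.25 + np.vstack([common + 0.02 * rng.standard_normal(50) for _ in range(3)])
H = np.array([[1.0, 0.0, 0.0]])
posterior = enkf_update(prior, np.array([0.32]), obs_std=0.02, H=H, rng=rng)
print("layer means after update:", posterior.mean(axis=1))
```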
--- paper_title: An Algorithm for Merging SMAP Radiometer and Radar Data for High-Resolution Soil-Moisture Retrieval paper_content: A robust and simple algorithm is developed to merge L-band radiometer retrievals and L-band radar observations to obtain high-resolution (9-km) soil-moisture estimates from data of the NASA Soil Moisture Active and Passive (SMAP) mission. The algorithm exploits the established accuracy of coarse-scale radiometer soil-moisture retrievals and blends this with the fine-scale spatial heterogeneity detectable by radar observations to produce a high-resolution optimal soil-moisture estimate at 9 km. The capability of the algorithm is demonstrated by implementing the approach using the airborne Passive and Active L-band System (PALS) instrument data set from Soil Moisture Experiments 2002 (SMEX02) and a four-month synthetic data set in an Observation System Simulation Experiment (OSSE) framework. The results indicate that the algorithm has the potential to obtain better soil-moisture accuracy at a high resolution and show an improvement in root-mean-square error of 0.015-0.02-cm3/cm3 volumetric soil moisture over the minimum performance taken to be retrievals based on radiometer measurements resampled to a finer scale. These results are based on PALS data from SMEX02 and a four-month OSSE data set and need to be further confirmed for different hydroclimatic regions using airborne data sets from prelaunch calibration/validation field campaigns of the SMAP mission. --- paper_title: Model-Based Satellite Image Fusion paper_content: A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions of the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight of what implications the spectral consistency has for an image fusion method. --- paper_title: Image Analysis, Classification, and Change Detection in Remote Sensing paper_content: Images, Arrays, and Vectors. Image Statistics. Transformations. Radiometric Enhancement. Topographic Modeling. Image Registration. Image Sharpening. Change Detection. Unsupervised Classification. Supervised Classification. Hyperspectral Analysis. --- paper_title: Multispectral imagery band sharpening study paper_content: The fusion of multisensor and multiresolution satellite imagery is an effective means of exploiting the complementary nature of various image types. With band sharpening (a type of imagery fusion), higher spatial resolution panchromatic data is fused with lower resolution multispectral imagery (MSI). This fusion creates a product with the spectral characteristics of the MSI and a spatial resolution approaching that of the panchromatic image (effective ground-sample distance, or GSD). Others have fused 10-m SPOT panchromatic with 20-m SPOT MSI or 30-m Landsat TM (sharpening factors of 2:1 and 3:1). In this study, MSI of 10 m to 30 m was sharpened with 5- to 15-m imagery (sharpening factors of 2:1 to 6:1) using four algorithms.
The research goals were to (1) determine the validity of the concept of "effective GSD"; (2) determine the relative utility of band sharpening by different factors; and (3) compare the relative effectiveness of different band sharpening algorithms. --- paper_title: Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresol paper_content: This paper compares two general and formal solutions to the problem of fusion of multispectral images with high-resolution panchromatic observations. The former exploits the undecimated discrete wavelet transform, which is an octave bandpass representation achieved from a conventional discrete wavelet transform by omitting all decimators and upsampling the wavelet filter bank. The latter relies on the generalized Laplacian pyramid, which is another oversampled structure obtained by recursively subtracting from an image an expanded decimated lowpass version. Both the methods selectively perform spatial-fre- quencies spectrum substitution from an image to another. In both schemes, context dependency is exploited by thresholding the local correlation coefficient between the images to be merged, to avoid injection of spatial details that are not likely to occur in the target image. Unlike other multiscale fusion schemes, both the present decompositions are not critically subsampled, thus avoiding possible impairments in the fused images, due to missing cancellation of aliasing terms. Results are presented and discussed on SPOT data. --- paper_title: Variational data assimilation of microwave radiobrightness observations for land surface hydrology applications paper_content: Our ability to accurately describe large-scale variations in soil moisture is severely restricted by process uncertainty and the limited availability of appropriate soil moisture data. Remotely sensed microwave radiobrightness observations can cover large scales but have limited resolution and are only indirectly related to the hydrologic variables of interest. The authors describe a four-dimensional (4D) variational assimilation algorithm that makes best use of available information while accounting for both measurement and model uncertainty. The representer method used is more efficient than a Kalman filter because it avoids explicit propagation of state error covariances. In a synthetic example, which is based on a field experiment, the authors demonstrate estimation performance by examining data residuals. Such tests provide a convenient way to check the statistical assumptions of the approach and to assess its operational feasibility. Internally computed covariances show that the estimation error decreases with increasing soil moisture. An adjoint analysis reveals that trends in model errors in the soil moisture equation can be estimated from daily L-band brightness measurements, whereas model errors in the soil and canopy temperature equations cannot be adequately retrieved from daily data alone. Nonetheless, state estimates obtained from the assimilation algorithm improve significantly on prior model predictions derived without assimilation of radiobrightness data. --- paper_title: Multiscale Recursive Estimation, Data Fusion, and Regularization paper_content: We describe a framework for modeling stochastic phenomena at multiple scales and for their efficient estimation or reconstruction given partial and/or noisy measurements which may also be at several scales. 
In particular multiscale signal representations lead naturally to pyramidal or tree-like data structures in which each level in the tree corresponds to a particular scale of representation. A class of multiscale dynamic models evolving on dyadic trees is introduced. The main focus of this paper is on the description, analysis, and application of an extremely efficient optimal estimation algorithm for this class of models. This algorithm consists of a fine-to-coarse filtering sweep, followed by a coarse-to-fine smoothing step, corresponding to the dyadic tree generalization of Kalman filtering and Rauch-Tung-Striebel smoothing. The Kalman filtering sweep consists of the recursive application of three steps: a measurement update step, a fine-to-coarse prediction step, and a fusion step. We illustrate the use of our methodology for the fusion of multiresolution data and for the efficient solution of "fractal regularizations" of ill-posed signal and image processing problems encountered. --- paper_title: Soil moisture retrieval from space: the Soil Moisture and Ocean Salinity (SMOS) mission paper_content: Microwave radiometry at low frequencies (L-band: 1.4 GHz, 21 cm) is an established technique for estimating surface soil moisture and sea surface salinity with a suitable sensitivity. However, from space, large antennas (several meters) are required to achieve an adequate spatial resolution at L-band. So as to reduce the problem of putting into orbit a large filled antenna, the possibility of using antenna synthesis methods has been investigated. Such a system, relying on a deployable structure, has now proved to be feasible and has led to the Soil Moisture and Ocean Salinity (SMOS) mission, which is described. The main objective of the SMOS mission is to deliver key variables of the land surfaces (soil moisture fields), and of ocean surfaces (sea surface salinity fields). The SMOS mission is based on a dual polarized L-band radiometer using aperture synthesis (two-dimensional [2D] interferometer) so as to achieve a ground resolution of 50 km at the swath edges coupled with multiangular acquisitions. The radiometer will enable frequent and global coverage of the globe and deliver surface soil moisture fields over land and sea surface salinity over the oceans. The SMOS mission was proposed to the European Space Agency (ESA) in the framework of the Earth Explorer Opportunity Missions. It was selected for a tentative launch in 2005. The goal of this paper is to present the main aspects of the baseline mission and describe how soil moisture will be retrieved from SMOS data. --- paper_title: Bayesian statistical data assimilation for ecosystem models using Markov Chain Monte Carlo paper_content: Abstract This study considers advanced statistical approaches for sequential data assimilation. These are explored in the context of nowcasting and forecasting using nonlinear differential equation based marine ecosystem models assimilating sparse and noisy non-Gaussian multivariate observations. The statistical framework uses a state space model with the goal of estimating the time evolving probability distribution of the ecosystem state. Assimilation of observations relies on stochastic dynamic prediction and Bayesian principles. In this study, a new sequential data assimilation approach is introduced based on Markov Chain Monte Carlo (MCMC). The ecosystem state is represented by an ensemble, or sample, from which distributional properties, or summary statistical measures, can be derived.
The Metropolis-Hastings based MCMC approach is compared and contrasted with two other sequential data assimilation approaches: sequential importance resampling, and the (approximate) ensemble Kalman filter (including computational comparisons). A simple illustrative application is provided based on a 0-D nonlinear plankton ecosystem model with multivariate non-Gaussian observations of the ecosystem state from a coastal ocean observatory. The MCMC approach is shown to be straightforward to implement and to effectively characterize the non-Gaussian ecosystem state in both nowcast and forecast experiments. Results are reported which illustrate how non-Gaussian information originates, and how it can be used to characterize ecosystem properties. --- paper_title: Using area-average remotely sensed surface soil moisture in multipatch land data assimilation systems paper_content: In coming years, Land Data Assimilation Systems (LDAS; two-dimensional (2-D) arrays of the relevant land-surface model) are likely to become the routine mechanism by which many predictive weather and climate models will be initiated. If this is so, it will be via assimilation into the LDAS that other data relevant to the land surface, such as remotely sensed estimates of soil moisture, will find value. This paper explores the potential for using low-resolution, remotely sensed observations of microwave brightness temperature to infer soil moisture in an LDAS with a "mosaic-patch" representation of land-surface heterogeneity, by coupling the land-surface model in the LDAS to a physically realistic microwave emission model. The past description of soil water movement by the LDAS is proposed as the most appropriate, LDAS-consistent basis for using remotely sensed estimates of surface soil moisture to infer soil moisture at depth, and the plausibility of this proposal is investigated. Three alternative methods are explored for partitioning soil moisture between modeled patches while altering the area-average soil moisture to correspond to the observed, pixel-average microwave brightness temperature, namely, 1) altering the soil moisture by a factor, which is the same for all the patches in the pixel, 2) altering the soil moisture by adding an amount that is the same for all the patches in the pixel, and 3) altering the change in soil moisture since the last assimilation cycle by a factor which is the same for all the patches in the pixel. --- paper_title: Estimation of Radiative Transfer Parameters from L-Band Passive Microwave Brightness Temperatures Using Advanced Data Assimilation paper_content: ESA's Soil Moisture and Ocean Salinity (SMOS) mission has been designed to extend our knowledge of the Earth's water cycle. Soil Moisture and Ocean Salinity records brightness temperatures at the L-band, which over land are sensitive to soil and vegetation parameters. On the basis of these measurements, soil moisture and vegetation opacity data sets have been derived operationally since 2009 for applications comprising hydrology, numerical weather prediction (NWP), and drought monitoring. We present a method to enhance the knowledge about the temporal evolution of radiative transfer parameters. The radiative transfer model L-Band Microwave Emission of the Biosphere (L-MEB) is used within a data assimilation framework to retrieve vegetation opacity and soil surface roughness.
To analyze the ability of the data assimilation approach to track the temporal evolution of these parameters, scenario analyses were performed with increasing complexity. First, the HYDRUS-1D code was used to generate soil moisture and soil temperature time series. On the basis of these data, the L-MEB forward model was run to simulate brightness temperature observations. Finally, the coupled model system HYDRUS-1D and L-MEB were integrated into a data assimilation framework using a particle filter, which is able to update L-MEB states as well as L-MEB parameters. Time invariant and time variable radiative transfer parameters were estimated. Moreover, it was possible to estimate a "bias" term when model simulations show a systematic difference as compared to observations. An application to a USDA-NRCS Soil Climate Analysis Network (SCAN) site showed the good performance of the proposed approach under real conditions. --- paper_title: Computationally Efficient Stochastic Realization for Internal Multiscale Autoregressive Models paper_content: In this paper we develop a stochastic realization theory for multiscale autoregressive (MAR) processes that leads to computationally efficient realization algorithms. The utility of MAR processes has been limited by the fact that the previously known general purpose realization algorithm, based on canonical correlations, leads to model inconsistencies and has complexity quartic in problem size. Our realization theory and algorithms addresses these issues by focusing on the estimation-theoretic concept of predictive efficiency and by exploiting the scale-recursive structure of so-called internal MAR processes. Our realization algorithm has complexity quadratic in problem size and with an approximation we also obtain an algorithm that has complexity linear in problem size. --- paper_title: Validation of SMOS Brightness Temperatures During the HOBE Airborne Campaign, Western Denmark paper_content: The Soil Moisture and Ocean Salinity (SMOS) mission delivers global surface soil moisture fields at high temporal resolution which is of major relevance for water management and climate predictions. Between April 26 and May 9, 2010, an airborne campaign with the L-band radiometer EMIRAD-2 was carried out within one SMOS pixel (44 × 44 km) in the Skjern River Catchment, Denmark. Concurrently, ground sampling was conducted within three 2 × 2 km patches (EMIRAD footprint size) of differing land cover. By means of this data set, the objective of this study is to present the validation of SMOS L1C brightness temperatures TB of the selected node. Data is stepwise compared from point via EMIRAD to SMOS scale. From ground soil moisture samples, TB's are pointwise estimated through the L-band microwave emission of the biosphere model using land cover specific model settings. These TB's are patchwise averaged and compared with EMIRAD TB's. A simple uncertainty assessment by means of a set of model runs with the most influencing parameters varied within a most likely interval results in a considerable spread of TB's (5-20 K). However, for each land cover class, a combination of parameters could be selected to bring modeled and EMIRAD data in good agreement. Thereby, replacing the Dobson dielectric mixing model with the Mironov model decreases the overall RMSE from 11.5 K to 3.8 K. 
Similarly, EMIRAD data averaged at SMOS scale and corresponding SMOS TB 's show good accordance on the single day where comparison is not prevented by strong radio-frequency interference (RFI) (May 2, avg. RMSE = 9.7 K). While the advantages of solid data sets of high spatial coverage and density throughout spatial scales for SMOS validation could be clearly demonstrated, small temporal variability in soil moisture conditions and RFI contamination throughout the campaign limited the extent of the validation work. Further attempts over longer time frames are planned by means of soil moisture network data as well as studies on the impacts of organic layers under natural vegetation and higher open water fractions at surrounding grid nodes. --- paper_title: Assimilation of passive and active microwave soil moisture retrievals paper_content: [1] Near-surface soil moisture observations from the active microwave ASCAT and the passive microwave AMSR-E satellite instruments are assimilated, both separately and together, into the NASA Catchment land surface model over 3.5 years using an ensemble Kalman filter. The impact of each assimilation is evaluated using in situ soil moisture observations from 85 sites in the US and Australia, in terms of the anomaly time series correlation-coefficient, R. The skill gained by assimilating either ASCAT or AMSR-E was very similar, even when separated by land cover type. Over all sites, the mean root-zone R was significantly increased from 0.45 for an open-loop, to 0.55, 0.54, and 0.56 by the assimilation of ASCAT, AMSR-E, and both, respectively. Each assimilation also had a positive impact over each land cover type sampled. For maximum accuracy and coverage it is recommended that active and passive microwave observations be assimilated together. --- paper_title: Brightness Temperature and Soil Moisture Validation at Different Scales During the SMOS Validation Campaign in the Rur and Erft Catchments, Germany paper_content: The European Space Agency's Soil Moisture and Ocean Salinity (SMOS) satellite was launched in November 2009 and delivers now brightness temperature and soil moisture products over terrestrial areas on a regular three-day basis. In 2010, several airborne campaigns were conducted to validate the SMOS products with microwave emission radiometers at L-band (1.4 GHz). In this paper, we present results from measurements performed in the Rur and Erft catchments in May and June 2010. The measurement sites were situated in the very west of Germany close to the borders to Belgium and The Netherlands. We developed an approach to validate spatial and temporal SMOS brightness temperature products. An area-wide brightness temperature reference was generated by using an area-wide modeling of top soil moisture and soil temperature with the WaSiM-ETH model and radiative transfer calculation based on the L-band Microwave Emission of the Biosphere model. Measurements of the airborne L-band sensors EMIRAD and HUT-2D on-board a Skyvan aircraft as well as ground-based mobile measurements performed with the truck mounted JULBARA L-band radiometer were analyzed for calibration of the simulated brightness temperature reference. Radiative transfer parameters were estimated by a data assimilation approach. By this versatile reference data set, it is possible to validate the spaceborne brightness temperature and soil moisture data obtained from SMOS. 
However, comparisons with SMOS observations for the campaign period indicate severe differences between simulated and observed SMOS data. --- paper_title: An Ensemble Multiscale Filter for Large Nonlinear Data Assimilation Problems paper_content: Abstract Operational data assimilation problems tend to be very large, both in terms of the number of unknowns to be estimated and the number of measurements to be processed. This poses significant computational challenges, especially for ensemble methods, which are critically dependent on the number of replicates used to derive sample covariances and other statistics. Most efforts to deal with the related problems of computational effort and sampling error in ensemble estimation have focused on spatial localization. The ensemble multiscale Kalman filter described here offers an alternative approach that effectively replaces, at each update time, the prior (or background) sample covariance with a multiscale tree. The tree is composed of nodes distributed over a relatively small number of discrete scales. Global correlations between variables at different locations are described in terms of local relationships between nodes at adjacent scales (parents and children). The Kalman updating process can be carri... --- paper_title: Assimilation of Disaggregated Microwave Soil Moisture into a Hydrologic Model Using Coarse-Scale Meteorological Data paper_content: Abstract Near-surface soil moisture retrieved from Soil Moisture and Ocean Salinity (SMOS)-type data is downscaled and assimilated into a distributed soil–vegetation–atmosphere transfer (SVAT) model with the ensemble Kalman filter. Because satellite-based meteorological data (notably rainfall) are not currently available at finescale, coarse-scale data are used as forcing in both the disaggregation and the assimilation. Synthetic coarse-scale observations are generated from the Monsoon ‘90 data by aggregating the Push Broom Microwave Radiometer (PBMR) pixels covering the eight meteorological and flux (METFLUX) stations and by averaging the meteorological measurements. The performance of the disaggregation/assimilation coupling scheme is then assessed in terms of surface soil moisture and latent heat flux predictions over the 19-day period of METFLUX measurements. It is found that the disaggregation improves the assimilation results, and vice versa, the assimilation of the disaggregated microwave soil mois... --- paper_title: Optimal multiscale Kalman filter for assimilation of near-surface soil moisture into land surface models paper_content: [1] We undertake an alternative and novel approach to assimilation of near-surface soil moisture into land surface models by means of an extension of multiscale Kalman filtering (MKF). While most data assimilation studies rely on the assumption of spatially independent near-surface soil moisture observations to attain computational tractability in large-scale problems, MKF allows us to explicitly and very efficiently model the spatial dependence and scaling properties of near-surface soil moisture fields. Furthermore, MKF has the appealing ability to cope with model predictions and observations made at different spatial scales. Yet another essential feature of our approach is that we resort to the use of the expectation maximization (EM) algorithm in conjunction with MKF so that the statistical parameters inherent to MKF may be optimally determined directly from the data at hand and allowed to vary over time. 
This constitutes a significant advantage since these parameters (e.g., observation and model error noise variances) essentially determine the performance of the assimilation approach and have so far been most commonly prescribed heuristically and not allowed to evolve in time. We test our approach by assimilating the near-surface soil moisture fields derived from electronically scanned thinned array radiometer (ESTAR) during the Southern Great Plains Hydrology experiment of 1997 (SGP97) into the three-layer variable infiltration capacity (VIC-3L) land surface model. The results show that assimilation significantly improves the short-term predictions of soil moisture and energy fluxes from VIC-3L, especially with regard to capturing the spatial structure of these state variables. Additionally, we find that allowing the statistical parameters of the assimilation algorithm to evolve in time allows for an adequate representation of the time-varying uncertainties in land surface model predictions. --- paper_title: Thermal microwave emission depth and soil moisture remote sensing paper_content: This paper presents some additional theoretical and experimental results on the problem of estimating the soil depth to which soil moisture can be directly measured by VHF radiometers. The experimental work was implemented in August 1992 at the Research Farm of the Institute of Hydrotechnique and Amelioration, Sofia, Bulgaria. The soil thermal microwave emission was measured by an L-band Dicke-type radiometer at 1.65GHz at off-nadir angle from 20 to 60. It was found that the value obtained for the effective penetration depth depends on the definition used in the analysis: the definition of penetration depth is based on soil thermal emissivity or soil reflectivity. Further it was found that the depth to which the soil moisture can be directly measured exhibits a negligible dependence on polarization (horizontal, vertical) as well as on off-nadir observation angle. For the purpose of soil brightness temperature and soil reflectivity computing, a fast and robust method and algorithm were proposed which are b... --- paper_title: Modeling transient groundwater flow by coupling ensemble Kalman filtering and upscaling: COUPLING ENKF AND UPSCALING paper_content: [1] The ensemble Kalman filter (EnKF) is coupled with upscaling to build an aquifer model at a coarser scale than the scale at which the conditioning data (conductivity and piezometric head) had been taken for the purpose of inverse modeling. Building an aquifer model at the support scale of observations is most often impractical since this would imply numerical models with many millions of cells. If, in addition, an uncertainty analysis is required involving some kind of Monte Carlo approach, the task becomes impossible. For this reason, a methodology has been developed that will use the conductivity data at the scale at which they were collected to build a model at a (much) coarser scale suitable for the inverse modeling of groundwater flow and mass transport. It proceeds as follows: (1) Generate an ensemble of realizations of conductivities conditioned to the conductivity data at the same scale at which conductivities were collected. (2) Upscale each realization onto a coarse discretization; on these coarse realizations, conductivities will become tensorial in nature with arbitrary orientations of their principal components. 
(3) Apply the EnKF to the ensemble of coarse conductivity upscaled realizations in order to condition the realizations to the measured piezometric head data. The proposed approach addresses the problem of how to deal with tensorial parameters, at a coarse scale, in ensemble Kalman filtering while maintaining the conditioning to the fine-scale hydraulic conductivity measurements. We demonstrate our approach in the framework of a synthetic worth-of-data exercise, in which the relevance of conditioning to conductivities, piezometric heads, or both is analyzed. --- paper_title: A data assimilation method for using low-resolution Earth observation data in heterogeneous ecosystems paper_content: [1] We present an approach for dealing with coarse‐resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse‐scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, then the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse‐resolution observations with temporally sparse fine‐resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero‐order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO and improves in spatial resolution if observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality. --- paper_title: ML estimation of a stochastic linear system with the EM algorithm and its application to speech recognition paper_content: A nontraditional approach to the problem of estimating the parameters of a stochastic linear system is presented. The method is based on the expectation-maximization algorithm and can be considered as the continuous analog of the Baum-Welch estimation algorithm for hidden Markov models. The algorithm is used for training the parameters of a dynamical system model that is proposed for better representing the spectral dynamics of speech for recognition. It is assumed that the observed feature vectors of a phone segment are the output of a stochastic linear dynamical system, and it is shown how the evolution of the dynamics as a function of the segment length can be modeled using alternative assumptions. A phoneme classification task using the TIMIT database demonstrates that the approach is the first effective use of an explicit model for statistical dependence between frames of speech. 
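The last entry above estimates the parameters of a stochastic linear state-space model with the EM algorithm, and several other entries in this section (e.g., the optimal multiscale Kalman filter) tune their noise statistics the same way. A minimal scalar version is sketched below: a Kalman filter plus RTS smoother forms the E-step and closed-form variance updates form the M-step. The scalar model, the fixed transition coefficient, and the synthetic data are simplifications for illustration, not the cited speech-recognition system.

```python
import numpy as np

def em_lds(y, a=0.9, q=0.1, r=0.1, n_iter=50):
    """EM for the noise variances of the scalar model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,          v_t ~ N(0, r)
    The transition coefficient a is held fixed for brevity."""
    T = len(y)
    for _ in range(n_iter):
        # E-step, forward pass: Kalman filter
        m_pred, P_pred = np.zeros(T), np.zeros(T)
        m_filt, P_filt = np.zeros(T), np.zeros(T)
        m_prev, P_prev = 0.0, 1.0                       # fixed prior on x_0 (not re-estimated)
        for t in range(T):
            m_pred[t] = a * m_prev
            P_pred[t] = a * a * P_prev + q
            K = P_pred[t] / (P_pred[t] + r)
            m_filt[t] = m_pred[t] + K * (y[t] - m_pred[t])
            P_filt[t] = (1.0 - K) * P_pred[t]
            m_prev, P_prev = m_filt[t], P_filt[t]
        # E-step, backward pass: RTS smoother with lag-one covariances
        m_s, P_s = m_filt.copy(), P_filt.copy()
        C = np.zeros(T)                                 # Cov(x_t, x_{t-1} | all data)
        for t in range(T - 1, 0, -1):
            J = P_filt[t - 1] * a / P_pred[t]           # smoother gain
            m_s[t - 1] = m_filt[t - 1] + J * (m_s[t] - m_pred[t])
            P_s[t - 1] = P_filt[t - 1] + J * J * (P_s[t] - P_pred[t])
            C[t] = J * P_s[t]
        # M-step: closed-form updates of the two variances
        Exx = P_s + m_s ** 2                            # E[x_t^2 | all data]
        q = np.mean(Exx[1:] - 2 * a * (C[1:] + m_s[1:] * m_s[:-1]) + a * a * Exx[:-1])
        r = np.mean((y - m_s) ** 2 + P_s)
    return q, r

# Synthetic check: simulate data with known variances and recover them approximately
rng = np.random.default_rng(2)
T, a_true, q_true, r_true = 500, 0.9, 0.05, 0.2
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(q_true))
y = x + rng.normal(0.0, np.sqrt(r_true), T)
print("estimated (q, r):", em_lds(y, a=a_true))
```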
--- paper_title: Impact of Accuracy, Spatial Availability, and Revisit Time of Satellite-Derived Surface Soil Moisture in a Multiscale Ensemble Data Assimilation System paper_content: This study evaluates the sensitivity of a multiscale ensemble assimilation system to different configurations of satellite soil moisture observations, namely the retrieval accuracy, spatial availability, and revisit time. We perform horizontally coupled assimilation experiments where pixels are updated not only by observations at the same location but also all in the study domain. Carrying out sensitivity studies within a multiscale assimilation system is a significant advancement over previous studies that used a 1-D assimilation framework where all horizontal grids are uncoupled. Twin experiments are performed with synthetic soil moisture retrievals. The hydrologic modeling system is forced with satellite estimated rainfall, and the assimilation performance is evaluated against model simulations using in-situ measured rainfall. The study shows that the assimilation performance is most sensitive to the spatial availability of soil moisture observations, then to revisit time and least sensitive to retrieval accuracy. The horizontally coupled assimilation system performs reasonably well even with large observation errors, and it is less sensitive to retrieval accuracy than the uncoupled system, as reported by previous studies. This suggests that more information may be extracted from satellite soil moisture observations using multiscale assimilation systems resulting in a potentially higher value of such satellite products. --- paper_title: A simple hydrologically based model of land surface water and energy fluxes for general circulation models paper_content: A generalization of the single soil layer variable infiltration capacity (VIC) land surface hydrological model previously implemented in the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation model (GCM) is described. The new model is comprised of a two-layer characterization of the soil column, and uses an aerodynamic representation of the latent and sensible heat fluxes at the land surface. The infiltration algorithm for the upper layer is essentially the same as for the single layer VIC model, while the lower layer drainage formulation is of the form previously implemented in the Max-Planck-Institut GCM. The model partitions the area of interest (e.g., grid cell) into multiple land surface cover types; for each land cover type the fraction of roots in the upper and lower zone is specified. Evapotranspiration consists of three components: canopy evaporation, evaporation from bare soils, and transpiration, which is represented using a canopy and architectural resistance formulation. Once the latent heat flux has been computed, the surface energy balance is iterated to solve for the land surface temperature at each time step. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters, and surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer-fall of 1987 to validate the surface energy fluxes. --- paper_title: A Multiscale Ensemble Filtering System for Hydrologic Data Assimilation.
Part I: Implementation and Synthetic Experiment paper_content: Abstract The multiscale autoregressive (MAR) framework was introduced in the last decade to process signals that exhibit multiscale features. It provides the method for identifying the multiscale structure in signals and a filtering procedure, and thus is an efficient way to solve the optimal estimation problem for many high-dimensional dynamic systems. Later, an ensemble version of this multiscale filtering procedure, the ensemble multiscale filter (EnMSF), was developed for estimation systems that rely on Monte Carlo samples, making this technique suitable for a range of applications in geosciences. Following the prototype study that introduced EnMSF, a strategy is devised here to implement the multiscale method in a hydrologic data assimilation system, which runs a land surface model. Assimilation experiments are carried out over the Arkansas–Red River basin, located in the central United States (∼645 000 km2), using the Variable Infiltration Capacity (VIC) model with a computing grid of 1062 pixels. A... --- paper_title: The Soil Moisture Active Passive (SMAP) Mission paper_content: The Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey. SMAP will make global measurements of the soil moisture present at the Earth's land surface and will distinguish frozen from thawed land surfaces. Direct observations of soil moisture and freeze/thaw state from space will allow significantly improved estimates of water, energy, and carbon transfers between the land and the atmosphere. The accuracy of numerical models of the atmosphere used in weather prediction and climate projections are critically dependent on the correct characterization of these transfers. Soil moisture measurements are also directly applicable to flood assessment and drought monitoring. SMAP observations can help monitor these natural hazards, resulting in potentially great economic and social benefits. SMAP observations of soil moisture and freeze/thaw timing will also reduce a major uncertainty in quantifying the global carbon balance by helping to resolve an apparent missing carbon sink on land over the boreal latitudes. The SMAP mission concept will utilize L-band radar and radiometer instruments sharing a rotating 6-m mesh reflector antenna to provide high-resolution and high-accuracy global maps of soil moisture and freeze/thaw state every two to three days. In addition, the SMAP project will use these observations with advanced modeling and data assimilation to provide deeper root-zone soil moisture and net ecosystem exchange of carbon. SMAP is scheduled for launch in the 2014-2015 time frame. --- paper_title: Impact of Multiresolution Active and Passive Microwave Measurements on Soil Moisture Estimation Using the Ensemble Kalman Smoother paper_content: An observing system simulation experiment is developed to test tradeoffs in resolution and accuracy for soil moisture estimation using active and passive L-band remote sensing. Concepts for combined radar and radiometer missions include designs that will provide multiresolution measurements. In this paper, the scientific impacts of instrument performance are analyzed to determine the measurement requirements for the mission concept. 
The ensemble Kalman smoother (EnKS) is used to merge these multiresolution observations with modeled soil moisture from a land surface model to estimate surface and subsurface soil moisture at 6-km resolution. The model used for assimilation is different from that used to generate "truth." Consequently, this experiment simulates how data assimilation performs in real applications when the model is not a perfect representation of reality. The EnKS is an extension of the ensemble Kalman filter (EnKF) in which observations are used to update states at previous times. Previous work demonstrated that it provides a computationally inexpensive means to improve the results from the EnKF, and that the limited memory in soil moisture can be exploited by employing it as a fixed lag smoother. Here, it is shown that the EnKS can be used in large problems with spatially distributed state vectors and spatially distributed multiresolution observations. The EnKS-based data assimilation framework is used to study the synergy between passive and active observations that have different resolutions and measurement error distributions. The extent to which the design parameters of the EnKS vary depending on the combination of observations assimilated is investigated --- paper_title: Optimal multiscale Kalman filter for assimilation of near-surface soil moisture into land surface models paper_content: [1] We undertake an alternative and novel approach to assimilation of near-surface soil moisture into land surface models by means of an extension of multiscale Kalman filtering (MKF). While most data assimilation studies rely on the assumption of spatially independent near-surface soil moisture observations to attain computational tractability in large-scale problems, MKF allows us to explicitly and very efficiently model the spatial dependence and scaling properties of near-surface soil moisture fields. Furthermore, MKF has the appealing ability to cope with model predictions and observations made at different spatial scales. Yet another essential feature of our approach is that we resort to the use of the expectation maximization (EM) algorithm in conjunction with MKF so that the statistical parameters inherent to MKF may be optimally determined directly from the data at hand and allowed to vary over time. This constitutes a significant advantage since these parameters (e.g., observation and model error noise variances) essentially determine the performance of the assimilation approach and have so far been most commonly prescribed heuristically and not allowed to evolve in time. We test our approach by assimilating the near-surface soil moisture fields derived from electronically scanned thinned array radiometer (ESTAR) during the Southern Great Plains Hydrology experiment of 1997 (SGP97) into the three-layer variable infiltration capacity (VIC-3L) land surface model. The results show that assimilation significantly improves the short-term predictions of soil moisture and energy fluxes from VIC-3L, especially with regard to capturing the spatial structure of these state variables. Additionally, we find that allowing the statistical parameters of the assimilation algorithm to evolve in time allows for an adequate representation of the time-varying uncertainties in land surface model predictions. 
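The ensemble Kalman smoother entry above exploits the fact that a current observation can also update states at earlier times inside a lag window. The following is a minimal fixed-lag sketch of that mechanism, assuming a linear observation operator and uncorrelated observation errors; it is an illustration only, not the multiresolution configuration used in the cited experiment, and all names and dimensions are placeholders.

```python
import numpy as np

def enks_fixed_lag_update(ensembles, y, H, obs_std, rng=None):
    """One fixed-lag ensemble-smoother update.

    ensembles : list of (n_state, n_ens) arrays, oldest to newest, spanning the
                lag window; the last entry is the current forecast ensemble.
    The current observation y updates every state in the window through its
    sampled cross-covariance with the observed quantities.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_ens = ensembles[-1].shape[1]
    n_obs = y.size

    HX = H @ ensembles[-1]
    HA = HX - HX.mean(axis=1, keepdims=True)
    S = HA @ HA.T / (n_ens - 1) + obs_std**2 * np.eye(n_obs)
    Y = y[:, None] + obs_std * rng.standard_normal((n_obs, n_ens))
    weighted_innov = np.linalg.solve(S, Y - HX)

    updated = []
    for X in ensembles:                       # lagged and current states alike
        A = X - X.mean(axis=1, keepdims=True)
        PHt = A @ HA.T / (n_ens - 1)          # cross-covariance with observation space
        updated.append(X + PHt @ weighted_innov)
    return updated

# Toy usage: two lagged ensembles of a 5-dimensional state, one observed component.
rng = np.random.default_rng(0)
ens = [rng.standard_normal((5, 40)) for _ in range(2)]
H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]])
smoothed = enks_fixed_lag_update(ens, y=np.array([0.3]), H=H, obs_std=0.05, rng=rng)
```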
--- paper_title: A soil-vegetation-atmosphere transfer scheme for modeling spatially variable water and energy balance processes paper_content: In support of the eventual goal to integrate remotely sensed observations with coupled land-atmosphere models, a soil-vegetation-atmosphere transfer scheme is presented which can represent spatially variable water and energy balance processes on timescales of minutes to months. This scheme differs from previous schemes developed to address similar objectives in that it: (1) represents horizontal heterogeneity and transport in a TOPMODEL framework, and (2) maintains computational efficiency while representing the processes most important for our applications. The model is based on the original TOPMODEL-based land surface-atmosphere transfer scheme [Famiglietti and Wood, 1994a] with modifications to correct for deficiencies in the representation of ground heat flux, soil column geometry, soil evaporation, transpiration, and the effect of atmospheric stability on energy fluxes. These deficiencies were found to cause errors in the model predictions in quantities such as the sensible heat flux, to which the development of the atmospheric boundary layer is particularly sensitive. Application of the model to the entire First International Satellite Land Surface Climatology Project Field Experiment 1987 experimental period, focusing on Intensive Field Campaigns 3 and 4, shows that it successfully represents the essential processes of interest. --- paper_title: How much improvement can precipitation data fusion achieve with a Multiscale Kalman Smoother-based framework? paper_content: [1] With advancements in measuring techniques and modeling approaches, more and more precipitation data products, with different spatial resolutions and accuracies, become available. Therefore, there is an increasing need to produce a fused precipitation product that can take advantage of the strengths of each individual precipitation data product. This study systematically and quantitatively evaluates the improvements of the fused precipitation data as a result of using the Multiscale Kalman Smoother-based (i.e., MKS-based) framework. Impacts of two types of errors, i.e., white noise and bias that are associated with individual precipitation products, are investigated through hypothetical experiments. Two measures, correlation and root-mean-square error, are used to evaluate the improvements of the fused precipitation data. Our study shows that the MKS-based framework can significantly recover the loss of precipitation's spatial patterns and magnitudes that are associated with the white noise and bias when the erroneous data at different spatial scales are fused together. Although the erroneous data at a finer resolution are generally more effective in improving the spatial patterns and magnitudes of the erroneous data at a coarser resolution, data at a coarser resolution can also provide valuable information in improving the quality of the data at a finer resolution when they are fused. This study provides insights on the values of the MKS-based framework and a guideline for determining a potentially optimal spatial scale over which improvements in both the spatial patterns and the magnitudes can be maximized based on given data with different spatial resolutions.
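The precipitation fusion entry above combines products issued at different spatial resolutions. The toy sketch below is not the multiscale Kalman smoother recursion itself; it only illustrates its basic ingredient, weighting a fine-resolution field and its coarse-resolution parent cell by assumed error variances, with all grids, variances, and the synthetic bias chosen purely for demonstration.

```python
import numpy as np

def two_scale_merge(fine, fine_var, coarse, coarse_var, block):
    """Merge a fine-resolution field with a coarser one by inverse-variance weighting.

    fine       : (n, n) fine-grid precipitation estimate
    coarse     : (n//block, n//block) coarse-grid estimate
    *_var      : assumed error variances of the two products
    block      : number of fine cells per coarse cell side
    """
    coarse_up = np.kron(coarse, np.ones((block, block)))   # replicate each parent cell
    w_f, w_c = 1.0 / fine_var, 1.0 / coarse_var
    merged = (w_f * fine + w_c * coarse_up) / (w_f + w_c)
    merged_var = 1.0 / (w_f + w_c)
    return merged, merged_var

# Toy example: an 8x8 product with white noise and a biased 2x2 product.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 2.0, size=(8, 8))
fine = truth + rng.normal(0.0, 1.0, size=truth.shape)
coarse = truth.reshape(2, 4, 2, 4).mean(axis=(1, 3)) + 0.5
merged, merged_var = two_scale_merge(fine, 1.0, coarse, 0.25, block=4)
```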
--- paper_title: Data assimilation for transient flow in geologic formations via ensemble Kalman filter paper_content: Formation properties are one of the key factors in numerical modeling of flow and transport in geologic formations in spite of the fact that they may not be completely characterized. The incomplete knowledge or uncertainty in the description of the formation properties leads to uncertainty in simulation results. In this study, the ensemble Kalman filter (EnKF) approach is used for continuously updating model parameters such as hydraulic conductivity and model variables such as pressure head while simultaneously providing an estimate of the uncertainty through assimilating dynamic and static measurements, without resorting to the explicit computation of the covariance or the Jacobian of the state variables. A two-dimensional example is built to demonstrate the capability of EnKF and to analyze its sensitivity with respect to different factors such as the number of realizations, measurement timings, and initial guesses. An additional example is given to illustrate the applicability of EnKF to three-dimensional problems and to examine the model predictability after dynamic data assimilation. It is found from these examples that EnKF provides an efficient approach for obtaining satisfactory estimation of the hydraulic conductivity field with dynamic measurements. After data assimilation the conductivity field matches the reference field very well, and different kinds of incorrect prior knowledge of the formation properties may also be rectified to a certain extent. --- paper_title: Groundwater parameter estimation using the ensemble Kalman filter with localization paper_content: The ensemble Kalman filter (EnKF), an efficient data assimilation method showing advantages in many numerical experiments, is deficient when used in approximating covariance from an ensemble of small size. Implicit localization is used to add distance-related weight to covariance and filter spurious correlations which weaken the EnKF’s capability to estimate uncertainty correctly. The effect of this kind of localization is studied in two-dimensional (2D) and three-dimensional (3D) synthetic cases. It is found that EnKF with localization can capture reliably both the mean and variance of the hydraulic conductivity field with higher efficiency; it can also greatly stabilize the assimilation process as a small-size ensemble is used. Sensitivity experiments are conducted to explore the effect of localization function format and filter lengths. It is suggested that too long or too short filter lengths will prevent implicit localization from modifying the covariance appropriately. Steep localization functions will greatly disturb local dynamics like the 0-1 function even if the function is continuous; four relatively gentle localization functions succeed in avoiding obvious disturbance to the system and improve estimation. As the degree of localization of the L function increases, the parameter sensitivity becomes weak, making parameter selection easier, but more information may be lost in the assimilation process. --- paper_title: Investigation of flow and transport processes at the MADE site using ensemble Kalman filter paper_content: In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. 
The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection–dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. --- paper_title: Jointly Mapping Hydraulic Conductivity and Porosity by Assimilating Concentration Data via Ensemble Kalman Filter paper_content: Summary Real-time data from on-line sensors offer the possibility to update environmental simulation models in real-time. Information from on-line sensors concerning contaminant concentrations in groundwater allow for the real-time characterization and control of a contaminant plume. In this paper it is proposed to use the CPU-efficient Ensemble Kalman Filter (EnKF) method, a data assimilation algorithm, for jointly updating the flow and transport parameters (hydraulic conductivity and porosity) and state variables (piezometric head and concentration) of a groundwater flow and contaminant transport problem. A synthetic experiment is used to demonstrate the capability of the EnKF to estimate hydraulic conductivity and porosity by assimilating dynamic head and multiple concentration data in a transient flow and transport model. In this work the worth of hydraulic conductivity, porosity, piezometric head, and concentration data is analyzed in the context of aquifer characterization and prediction uncertainty reduction. The results indicate that the characterization of the hydraulic conductivity and porosity fields is continuously improved as more data are assimilated. Also, groundwater flow and mass transport predictions are improved as more and different types of data are assimilated. The beneficial impact of accounting for multiple concentration data is patent. --- paper_title: Best unbiased ensemble linearization and the quasi-linear Kalman ensemble generator: QUASI-LINEAR KALMAN ENSEMBLE GENERATOR paper_content: [1] Linearized representations of the stochastic groundwater flow and transport equations have been heavily used in hydrogeology, e.g., for geostatistical inversion or generating conditional realizations. 
The respective linearizations are commonly defined via Jacobians (numerical sensitivity matrices). This study will show that Jacobian-based linearizations are biased with nonminimal error variance in the ensemble sense. An alternative linearization approach will be derived from the principles of unbiasedness and minimum error variance. The resulting paradigm prefers empirical cross covariances from Monte Carlo analyses over those from linearized error propagation and points toward methods like ensemble Kalman filters (EnKFs). Unlike conditional simulation in geostatistical applications, EnKFs condition transient state variables rather than geostatistical parameter fields. Recently, modifications toward geostatistical applications have been tested and used. This study completes the transformation of EnKFs to geostatistical conditioning tools on the basis of best unbiased ensemble linearization. To distinguish it from the original EnKF, the new method is called the Kalman ensemble generator (KEG). The new context of best unbiased ensemble linearization provides an additional theoretical foundation to EnKF-like methods (such as the KEG). Like EnKFs and derivates, the KEG is optimal for Gaussian variables. Toward increased robustness and accuracy in non-Gaussian and nonlinear cases, sequential updating, acceptance/rejection sampling, successive linearization, and a Levenberg-Marquardt formalism are added. State variables are updated through simulation with updated parameters, always guaranteeing the physicalness of all state variables. The KEG combines the computational efficiency of linearized methods with the robustness of EnKFs and accuracy of expensive realization-based methods while drawing on the advantages of conditional simulation over conditional estimation (such as adequate representation of solute dispersion). As proof of concept, a large-scale numerical test case with 200 synthetic sets of flow and tracer data is conducted and analyzed. --- paper_title: An approach to handling non-Gaussianity of parameters and state variables in ensemble Kalman filtering paper_content: Abstract The ensemble Kalman filter (EnKF) is a commonly used real-time data assimilation algorithm in various disciplines. Here, the EnKF is applied, in a hydrogeological context, to condition log-conductivity realizations on log-conductivity and transient piezometric head data. In this case, the state vector is made up of log-conductivities and piezometric heads over a discretized aquifer domain, the forecast model is a groundwater flow numerical model, and the transient piezometric head data are sequentially assimilated to update the state vector. It is well known that all Kalman filters perform optimally for linear forecast models and a multiGaussian-distributed state vector. Of the different Kalman filters, the EnKF provides a robust solution to address non–linearities; however, it does not handle well non-Gaussian state-vector distributions. In the standard EnKF, as time passes and more state observations are assimilated, the distributions become closer to Gaussian, even if the initial ones are clearly non-Gaussian. A new method is proposed that transforms the original state vector into a new vector that is univariate Gaussian at all times. Back transforming the vector after the filtering ensures that the initial non-Gaussian univariate distributions of the state-vector components are preserved throughout. 
The proposed method is based in normal-score transforming each variable for all locations and all time steps. This new method, termed the normal-score ensemble Kalman filter (NS-EnKF), is demonstrated in a synthetic bimodal aquifer resembling a fluvial deposit, and it is compared to the standard EnKF. The proposed method performs better than the standard EnKF in all aspects analyzed (log-conductivity characterization and flow and transport predictions). --- paper_title: Diffusion wave modeling of distributed catchment dynamics paper_content: A diffusion wave model of distributed catchment dynamics is presented. The effects of catchment topography and river network structure on storm-flow response are incorporated by routing surface runoff in cascade throughout a digital elevation model (DEM) based conceptual transport network, where the Muskingum-Cunge scheme with variable parameters is used to describe surface runoff dynamics. Dynamic scaling of hydraulic geometry is also incorporated in the model by using the 1953 “at-a-station” and “downstream” relationships by \ILeopold and Maddock\N. Numerical experiments indicate that the model is more than 98% mass conservative for possible slope and roughness configurations, which may occur for hillslopes in a natural catchment. Fluctuations in the simulated discharge may occur in response to discontinuities in rainfall excess representation if Courant number \ICu\N during the simulation exceeds a threshold of about 3. Catchment scale simulations with different temporal resolution show that the model response is independent of structural parameters (model consistency). Also, the overall accuracy is preserved for computationally inexpensive space-time discretizations (for which \ICu\N > 3) because fluctuations that may occur at the local scale are dampened when propagating downstream. Comparison of model results with observed outlet hydrographs of the Rio Missiaga experimental catchment (Eastern Italian Alps) show this approach to be capable of describing both overland and channel phases of surface runoff in mountainous catchments. --- paper_title: A detailed model for simulation of catchment scale subsurface hydrologic processes paper_content: A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment. 
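The normal-score EnKF entry above hinges on transforming each variable to a univariate Gaussian before the update and back-transforming afterwards. Below is a small rank-based sketch of such a transform pair, assuming SciPy is available; the tie handling and tail treatment are simple illustrative choices and are not claimed to match the cited method.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score_transform(values):
    """Map a sample to standard-normal scores via its empirical ranks."""
    n = values.size
    ranks = rankdata(values, method="average")
    scores = norm.ppf(ranks / (n + 1))        # n+1 keeps the tail scores finite
    return scores, np.sort(values)            # sorted originals allow inversion later

def back_transform(scores, sorted_values):
    """Invert the transform by interpolating the empirical quantile function."""
    n = sorted_values.size
    return np.interp(norm.cdf(scores), np.arange(1, n + 1) / (n + 1), sorted_values)

# Bimodal log-conductivity sample -> Gaussian scores -> (filter update would go here) -> back.
rng = np.random.default_rng(2)
logk = np.concatenate([rng.normal(-5.0, 0.3, 500), rng.normal(-2.0, 0.3, 500)])
z, table = normal_score_transform(logk)
logk_back = back_transform(z, table)
```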
--- paper_title: Estimating geostatistical parameters and spatially-variable hydraulic conductivity within a catchment system using an ensemble smoother paper_content: Groundwater flow models are important tools in assessing baseline conditions and investigating management alternatives in groundwater systems. The usefulness of these models, however, is often hindered by insufficient knowledge regarding the magnitude and spatial distribution of the spatially-distributed parameters, such as hydraulic conductivity ( K ), that govern the response of these models. Proposed parameter estimation methods frequently are demonstrated using simplified aquifer representations, when in reality the groundwater regime in a given watershed is influenced by strongly-coupled surface-subsurface processes. Furthermore, parameter estimation methodologies that rely on a geostatistical structure of K often assume the parameter values of the geostatistical model as known or estimate these values from limited data. In this study, we investigate the use of a data assimilation algorithm, the Ensemble Smoother, to provide enhanced estimates of K within a catchment system using the fully-coupled, surface-subsurface flow model CATHY. Both water table elevation and streamflow data are assimilated to condition the spatial distribution of K . An iterative procedure using the ES update routine, in which geostatistical parameter values defining the true spatial structure of K are identified, is also presented. In this procedure, parameter values are inferred from the updated ensemble of K fields and used in the subsequent iteration to generate the K ensemble, with the process proceeding until parameter values are converged upon. The parameter estimation scheme is demonstrated via a synthetic three-dimensional tilted v-shaped catchment system incorporating stream flow and variably-saturated subsurface flow, with spatio-temporal variability in forcing terms. Results indicate that the method is successful in providing improved estimates of the K field, and that the iterative scheme can be used to identify the geostatistical parameter values of the aquifer system. In general, water table data have a much greater ability than streamflow data to condition K . Future research includes applying the methodology to an actual regional study site. --- paper_title: Parameterization of stream channel geometry in the distributed modeling of catchment dynamics paper_content: A simple and efficient procedure for incorporating the effects of stream channel geometry in the distributed modeling of catchment dynamics is developed. At-a- station and downstream fluvial relationships are combined and the obtained laws of variability in space and time for water-surface width and wetted perimeter are incorporated into a diffusion wave routing model based on the Muskingum-Cunge method with variable parameters. The parameterization obtained is applied to the approximately 840-km 2 Sieve catchment (Central Italian Apennines) to test the possibility of estimating channel geometry parameters from cross-section surveys and to assess the impact of dynamic variations in the channel geometry on catchment dynamics. The use of the estimated channel geometry in surface runoff routing produces a significant improvement in the flood hydrograph description at the catchment outlet with respect to less detailed network parameterizations. 
In addition, the results obtained from a "downstream" analysis of the velocity field indicate that the stream characteristics related to the locally varying cross-section shape may have a strong control on flow velocities, and thus they should be monitored and synthesized for a comprehensive description of the distributed catchment dynamics. --- paper_title: Ensemble smoother assimilation of hydraulic head and return flow data to estimate hydraulic conductivity distribution paper_content: [1] Numerical groundwater models, frequently used to enhance understanding of the hydrologic and chemical processes in local or regional aquifers, are often hindered by an incomplete representation of the parameters which characterize these processes. In this study, we present the use of a data assimilation algorithm that incorporates all past model results and data measurements, an ensemble smoother (ES) to provide enhanced estimates of aquifer hydraulic conductivity (K) through assimilation of hydraulic head (H) and groundwater return flow volume (RFV) measurements into groundwater model simulation results. On the basis of the Kalman filter methodology, residuals between forecasted model results and measurements, together with covariances between model results at measurement locations and nonmeasurement locations, are used to correct model results. Parameter estimation is achieved by incorporating model parameters into the algorithm, thus allowing the correlation between H, RFV, and K to correct the K fields. The applicability of the ES is demonstrated using a synthetic two-dimensional transient groundwater modeling simulation. Sensitivity analyses are carried out to show the performance of the ES in regard to measurement error, number of measurements, number of assimilation times, correlation length of the K fields, and the number of stream gage locations. Results show that the departure of the K fields from a reference K field is greatly reduced through data assimilation and demonstrate that the ES scheme is a promising alternative to other inverse modeling techniques because of low computational burden and the ability to run the algorithm entirely independent of the groundwater model simulation. --- paper_title: A comparison of Picard and Newton iteration in the numerical solution of multidimensional variably saturated flow problems paper_content: Picard iteration is a widely used procedure for solving the nonlinear equation governing flow in variably saturated porous media. The method is simple to code and computationally cheap, but has been known to fail or converge slowly. The Newton method is more complex and expensive (on a per-iteration basis) than Picard, and as such has not received very much attention. Its robustness and higher rate of convergence, however, make it an attractive alternative to the Picard method, particularly for strongly nonlinear problems. In this paper the Picard and Newton schemes are implemented and compared in one-, two-, and three-dimensional finite element simulations involving both steady state and transient flow. 
The eight test cases presented highlight different aspects of the performance of the two iterative methods and the different factors that can affect their convergence and efficiency, including problem size, spatial and temporal discretization, initial solution estimates, convergence error norm, mass lumping, time weighting, conductivity and moisture content characteristics, boundary conditions, seepage faces, and the extent of fully saturated zones in the soil. Previous strategies for enhancing the performance of the Picard and Newton schemes are revisited, and new ones are suggested. The strategies include chord slope approximations for the derivatives of the characteristic equations, relaxing convergence requirements along seepage faces, dynamic time step control, nonlinear relaxation, and a mixed Picard-Newton approach. The tests show that the Picard or relaxed Picard schemes are often adequate for solving Richards' equation, but that in cases where these fail to converge or converge slowly, the Newton method should be used. The mixed Picard-Newton approach can effectively overcome the Newton scheme's sensitivity to initial solution estimates, while comparatively poor performance is reported for the various chord slope approximations. Finally, given the reliability and efficiency of current conjugate gradient-like methods for solving linear nonsymmetric systems, the only real drawback of using Newton rather than Picard iteration is the algebraic complexity and computational cost of assembling the derivative terms of the Jacobian matrix, and it is suggested that both methods can be effectively implemented and used in numerical models of Richards' equation. --- paper_title: Ensemble Kalman filter data assimilation for a process-based catchment scale model of surface and subsurface flow: EnKF FOR A MODEL OF SURFACE AND SUBSURFACE FLOW paper_content: [1] A sequential data assimilation procedure based on the ensemble Kalman filter (EnKF) is introduced and tested for a process-based numerical model of coupled surface and subsurface flow. The model is based on the three-dimensional Richards equation for variably saturated porous media and a diffusion wave approximation for overland and channel flow. A one-dimensional soil column experiment and a three-dimensional tilted v-catchment test case are presented. A preliminary analysis of the assimilation scheme is undertaken for the one-dimensional test case in order to validate the implementation by comparison with published results and to assess the influence of various factors on the filter's performance. The numerical results suggest robustness with respect to the ensemble size and provide useful information for the more complex tilted v-catchment test case. The assimilation frequency and the effects induced by data assimilation on the surface and/or subsurface system states are then evaluated for the v-catchment experiment using synthetic observations of pressure head and streamflow. The results suggest that streamflow prediction can be improved by assimilation of pressure head and streamflow, either individually or in tandem, whereas assimilation of streamflow data alone does not improve the subsurface system state. In terms of the global system state, i.e., surface and subsurface variables, frequent updates are especially beneficial when assimilating both pressure head and streamflow. 
Furthermore, it is shown that better evaluation of the subsurface volume resulting from assimilation of head data is crucial for improving subsequent surface response. --- paper_title: Assimilation of streamflow and in situ soil moisture data into operational distributed hydrologic models: Effects of uncertainties in the data and initial model soil moisture states paper_content: Abstract We assess the potential of updating soil moisture states of a distributed hydrologic model by assimilating streamflow and in situ soil moisture data for high-resolution analysis and prediction of streamflow and soil moisture. The model used is the gridded Sacramento (SAC) and kinematic-wave routing models of the National Weather Service (NWS) Hydrology Laboratory’s Research Distributed Hydrologic Model (HL-RDHM) operating at an hourly time step. The data assimilation (DA) technique used is variational assimilation (VAR). Assimilating streamflow and soil moisture data into distributed hydrologic models is new and particularly challenging due to the large degrees of freedom associated with the inverse problem. This paper reports findings from the first phase of the research in which we assume, among others, perfectly known hydrometeorological forcing. The motivation for the simplification is to reduce the complexity of the problem in favour of improved understanding and easier interpretation even if it may compromise the goodness of the results. To assess the potential, two types of experiments, synthetic and real-world, were carried out for Eldon (ELDO2), a 795-km2 headwater catchment located near the Oklahoma (OK) and Arkansas (AR) border in the U.S. The synthetic experiment assesses the upper bound of the performance of the assimilation procedure under the idealized conditions of no structural or parametric errors in the models, a full dynamic range and no microscale variability in the in situ observations of soil moisture, and perfectly known univariate statistics of the observational errors. The results show that assimilating in situ soil moisture data in addition to streamflow data significantly improves analysis and prediction of soil moisture and streamflow, and that assimilating streamflow observations at interior locations in addition to those at the outlet improves analysis and prediction of soil moisture within the drainage areas of the interior stream gauges and of streamflow at downstream cells along the channel network. To assess performance under more realistic conditions, but still under the assumption of perfectly known hydrometeorological forcing to allow comparisons with the synthetic experiment, an exploratory real-world experiment was carried out in which all other assumptions were lifted. The results show that, expectedly, assimilating interior flows in addition to outlet flow improves analysis as well as prediction of streamflow at stream gauge locations, but that assimilating in situ soil moisture data in addition to streamflow data provides little improvement in streamflow analysis and prediction though it reduces systematic biases in soil moisture simulation. --- paper_title: Hydrology laboratory research modeling system (HL-RMS) of the US national weather service paper_content: This study investigates an approach that combines physically-based and conceptual model features in two stages of distributed modeling: model structure development and estimation of spatially variable parameters. 
The approach adds more practicality to the process of model parameterization, and facilitates an easier transition from current lumped model-based operational systems to more powerful distributed systems. This combination of physically-based and conceptual model features is implemented within the Hydrology Laboratory Research Modeling System (HL-RMS). HL-RMS consists of a well-tested conceptual water balance model applied on a regular spatial grid linked to physically-based kinematic hillslope and channel routing models. Parameter estimation procedures that combine spatially distributed and ‘integrated’ basin-outlet properties have been developed for the water balance and routing components. High-resolution radar-based precipitation data over a large region are used in testing HL-RMS. Initial tests show that HL-RMS yields results comparable to well-calibrated lumped model simulations in several headwater basins, and it outperforms a lumped model in basins where spatial rainfall variability effects are significant. It is important to note that simulations for two nested basins (not calibrated directly, but parameters from the calibration of the parent basin were applied instead) outperformed lumped simulations even more consistently, which means that HL-RMS has the potential to improve the accuracy and resolution of river runoff forecasts. Published by Elsevier B.V. --- paper_title: Real-Time Variational Assimilation of Hydrologic and Hydrometeorological Data into Operational Hydrologic Forecasting paper_content: Variational assimilation (VAR) of hydrologic and hydrometeorological data into operational hydrologic forecasting is explored. The data assimilated are the hourly real-time observations of streamflow and precipitation, and climatological estimates of potential evaporation (PE). The hydrologic system considered is a single headwater basin for which soil moisture accounting and routing are carried out in a lumped fashion via the Sacramento model (SAC) and the unit hydrograph (UH), respectively. The control variables in the VAR formulation are the fast-varying SAC soil moisture states at the beginning of the assimilation window and the multiplicative adjustment factors to the estimates of mean areal precipitation (MAP) and mean areal potential evaporation (MAPE) for each hour in the assimilation window. In a separate application of VAR as a parameter estimation tool, the estimation of empirical UH is also explored by treating its ordinates as the control variables. To evaluate the assimilation procedure thus developed, streamflow was forecast with and without the aid of VAR for three basins in the southern plains under the assumption of perfectly forecast future mean areal precipitation (FMAP). The streamflow forecasts were then compared with each other and with those based on persistence and the state space-based state-updating procedure, the state-space Sacramento model (SS-SAC). The results indicate that the VAR procedure significantly improves the accuracy of the basic forecast at short lead times and compares favorably with SS-SAC. --- paper_title: The soil and water assessment tool—Historical development applications, and future research directions paper_content: The Soil and Water Assessment Tool (SWAT) model is a continuation of nearly 30 years of modeling efforts conducted by the USDA Agricultural Research Service (ARS). 
SWAT has gained international acceptance as a robust interdisciplinary watershed modeling tool as evidenced by international SWAT conferences, hundreds of SWAT-related papers presented at numerous other scientific meetings, and dozens of articles published in peer-reviewed journals. The model has also been adopted as part of the U.S. Environmental Protection Agency (USEPA) Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) software package and is being used by many U.S. federal and state agencies, including the USDA within the Conservation Effects Assessment Project (CEAP). At present, over 250 peer-reviewed published articles have been identified that report SWAT applications, reviews of SWAT components, or other research that includes SWAT. Many of these peer-reviewed articles are summarized here according to relevant application categories such as streamflow calibration and related hydrologic analyses, climate change impacts on hydrology, pollutant load assessments, comparisons with other models, and sensitivity analyses and calibration techniques. Strengths and weaknesses of the model are presented, and recommended research needs for SWAT are also provided. --- paper_title: Data assimilation for distributed hydrological catchment modeling via ensemble Kalman filter paper_content: Catchment scale hydrological models are critical decision support tools for water resources management and environment remediation. However, the reliability of hydrological models is inevitably affected by limited measurements and imperfect models. Data assimilation techniques combine complementary information from measurements and models to enhance the model reliability and reduce predictive uncertainties. As a sequential data assimilation technique, the ensemble Kalman filter (EnKF) has been extensively studied in the earth sciences for assimilating in-situ measurements and remote sensing data. Although the EnKF has been demonstrated in land surface data assimilations, there are no systematic studies to investigate its performance in distributed modeling with high dimensional states and parameters. In this paper, we present an assessment on the EnKF with state augmentation for combined state-parameter estimation on the basis of a physical-based hydrological model, Soil and Water Assessment Tool (SWAT). Through synthetic simulation experiments, the capability of the EnKF is demonstrated by assimilating the runoff and other measurements, and its sensitivities are analyzed with respect to the error specification, the initial realization and the ensemble size. It is found that the EnKF provides an efficient approach for obtaining a set of acceptable model parameters and satisfactory runoff, soil water content and evapotranspiration estimations. The EnKF performance could be improved after augmenting with other complementary data, such as soil water content and evapotranspiration from remote sensing retrieval. Sensitivity studies demonstrate the importance of consistent error specification and the potential with small ensemble size in the data assimilation system. --- paper_title: Ensemble Kalman filter data assimilation for a process-based catchment scale model of surface and subsurface flow: EnKF FOR A MODEL OF SURFACE AND SUBSURFACE FLOW paper_content: [1] A sequential data assimilation procedure based on the ensemble Kalman filter (EnKF) is introduced and tested for a process-based numerical model of coupled surface and subsurface flow. 
The model is based on the three-dimensional Richards equation for variably saturated porous media and a diffusion wave approximation for overland and channel flow. A one-dimensional soil column experiment and a three-dimensional tilted v-catchment test case are presented. A preliminary analysis of the assimilation scheme is undertaken for the one-dimensional test case in order to validate the implementation by comparison with published results and to assess the influence of various factors on the filter's performance. The numerical results suggest robustness with respect to the ensemble size and provide useful information for the more complex tilted v-catchment test case. The assimilation frequency and the effects induced by data assimilation on the surface and/or subsurface system states are then evaluated for the v-catchment experiment using synthetic observations of pressure head and streamflow. The results suggest that streamflow prediction can be improved by assimilation of pressure head and streamflow, either individually or in tandem, whereas assimilation of streamflow data alone does not improve the subsurface system state. In terms of the global system state, i.e., surface and subsurface variables, frequent updates are especially beneficial when assimilating both pressure head and streamflow. Furthermore, it is shown that better evaluation of the subsurface volume resulting from assimilation of head data is crucial for improving subsequent surface response. --- paper_title: Real-time forecasting of water table depth and soil moisture profiles paper_content: We present a method for real-time forecasting of water table depth and soil moisture profiles. The method combines a simple form of data-assimilation with a moving window calibration of a deterministic model describing flow in the unsaturated zone and local as well as regional drainage. The local drainage level is calibrated on-line using a moving window calibration. Assigning more weight to the last available measurements then yields a form of model adaptation that is in between on-line calibration and data-assimilation (i.e. a simplified form of Newtonian nudging). Five-day hydrological forecasts are performed based on 5-day weather forecasts, while on-line observations of phreatic level and soil moisture content are assimilated on a daily basis. Advantages of the proposed method are that it improves the real-time forecasts compared to off-line calibration and ordinary moving window calibration and that it yields physically consistent soil moisture profiles. --- paper_title: Modelling forest transpiration and CO2 fluxes: response to soil moisture stress paper_content: The effects of soil drought on the leaf gas exchange variables of a stomatal conductance model are assessed in the case of woody plants, based on a meta-analysis of data sets available from the literature. For the first stage, 32 data sets obtained in well-watered conditions are analysed. For the second stage, we analyse four data sets including a soil drying cycle under present and doubled atmospheric CO2 concentration (350 and 700 μmol mol−1, respectively) for two tree species, maritime pine and sessile oak. Interesting relationships emerge, which suggest that woody plants may adopt one of two strategies when facing drought. A simple parameterisation of the effects of soil drought is proposed to account for the two types of drought responses which are: drought-avoiding (maritime pine) and drought-tolerant (sessile oak). © 2004 Elsevier B.V.
All rights reserved. --- paper_title: Data Assimilation for Estimating the Terrestrial Water Budget Using a Constrained Ensemble Kalman Filter paper_content: Abstract A procedure is developed to incorporate equality constraints in Kalman filters, including the ensemble Kalman filter (EnKF), and is referred to as the constrained ensemble Kalman filter (CEnKF). The constraint is carried out as a two-step filtering approach, with the first step being the standard (ensemble) Kalman filter. The second step is the constraint step carried out by another Kalman filter that optimally redistributes any imbalance from the first step. The CEnKF is implemented over a 75 000 km2 domain in the southern Great Plains region of the United States, using the terrestrial water balance as the constraint. The observations, consisting of gridded fields of the upper two soil moisture layers from the Oklahoma Mesonet system, Atmospheric Radiation Measurement Program Cloud and Radiation Testbed (ARM-CART) energy balance Bowen ratio (EBBR) latent heat estimates, and U.S. Geological Survey (USGS) streamflow from unregulated basins, are assimilated into the Variable Infiltration Capacity (... --- paper_title: Joint assimilation of surface soil moisture and LAI observations into a land surface model paper_content: Land Surface Models (LSM) offer a description of land surface processes and set the lower boundary conditions for meteorological models. In particular, the accurate description of those surface variables which display a slow response in time, like root-zone soil moisture or vegetation biomass, is of great importance. Errors in their estimation yield significant inaccuracies in the estimation of heat and water fluxes in Numerical Weather Prediction (NWP) models. In the present study, the ISBA-A-gs LSM is used decoupled from the atmosphere. In this configuration, the model is able to simulate the vegetation growth, and consequently LAI. A simplified 1D-VAR assimilation method is applied to observed surface soil moisture and LAI observations of the SMOSREX site near Toulouse, in south-western France, from 2001 to 2004. This period includes severe droughts in 2003 and 2004. The data are jointly assimilated into ISBA-A-gs in order to analyse the root-zone soil moisture and the vegetation biomass. It is shown that the 1D-VAR improves the model results. The efficiency score of the model (Nash criterion) is increased from 0.79 to 0.86 for root-zone soil moisture and from 0.17 to 0.23 for vegetation biomass. --- paper_title: Modeling ground heat flux in land surface parameterization schemes paper_content: A new ground heat flux parameterization for land surface schemes, such as those used in climate and numerical weather prediction models, is described. Compared with other approaches that lump the canopy layer and ground surface, or empirically based approaches that consider the effect of radiation attenuation through the canopy layer, the new parameterization has several advantages. First, the reduction of radiation available for conducting soil surface exchange under vegetated areas is represented in a manner that assures that heat is conserved in the long term. Second, problems in representing properly the phase of the ground heat flux are alleviated. Finally, the approach is relatively simple and is computationally efficient, requiring only two soil thermal layers. 
Comparison of the method with analytical solutions for special cases shows that the new method approximates the analytical solution very well for different conditions, and that the new method is superior to the force-restore and the Crank-Nicolson method. Model-derived ground heat fluxes for the French HAPEX-MOBILHY (Hydrology-Atmosphere Pilot Experiment - Modelisation de Bilan Hydrique) site and the Brazilian ABRACOS (Anglo-Brazilian Amazonian Climate Observation Study) cleared ranch land site are shown to be in close agreement with observations. Sensitivity analyses show that if the attenuation of radiation under vegetation and soil heat storage are ignored, the daytime peak and nighttime minima of ground heat flux, latent and sensible heat fluxes, and surface temperature can be significantly in error. In particular, neglecting the radiation attenuation through the canopy layer can result in significant overestimation (underestimation) of daytime (nighttime) ground heat flux, while neglecting soil heat storage can result in significant phase errors. --- paper_title: One-dimensional statistical dynamic representation of subgrid spatial variability of precipitation in the two-layer variable infiltration capacity model paper_content: The two-layer variable infiltration capacity (VIC-2L) model is extended to incorporate a representation of subgrid variability in precipitation, using an analytical one-dimensional statistical dynamic representation for partial area coverage of precipitation. The analytical approach allows the effects of subgrid-scale spatial variability of precipitation on surface fluxes, runoff production, and soil moisture to be represented explicitly. With this method, spatially integrated representations of surface fluxes, runoff, and soil moisture due to subgrid-scale spatial variability in precipitation, infiltration, and vegetation cover are obtained. The results are compared with those obtained using an exhaustive pixel-based approach, and the results obtained by applying uniform precipitation over the precipitation-covered area. The precipitation coverage over a grid cell is shown to play a primary role in estimating the surface fluxes, runoff, and soil moisture. In general, the spatial distribution of precipitation within the precipitation-covered area plays a secondary role, in part because VIC-2L represents the subgrid spatial variability of soil properties. While the analytical approach gives good approximations to the pixel-based approach, and is superior to the uniform precipitation approach in general, the differences are not large. --- paper_title: Root zone soil moisture from the assimilation of screen-level variables and remotely sensed soil moisture paper_content: [1] In most operational NWP models, root zone soil moisture is constrained using observations of screen-level temperature and relative humidity. While this generally improves low-level atmospheric forecasts, it often leads to unrealistic model soil moisture. Consequently, several NWP centers are moving toward also assimilating remotely sensed near-surface soil moisture observations. Within this context, an EKF is used to compare the assimilation of screen-level observations and near-surface soil moisture data from AMSR-E into the ISBA land surface model over July 2006. Several issues regarding the use of each data type are exposed, and the potential to use the AMSR-E data, either in place of or together with the screen-level data, is examined.
When the two data types are assimilated separately, there is little agreement between the root zone soil moisture updates generated by each, indicating that for this experiment the AMSR-E data could not have replaced the screen-level data to obtain similar surface turbulent fluxes. For the screen-level variables, there is a persistent diurnal cycle in the model-observations bias, which is not related to soil moisture. The resulting diurnal cycle in the analysis increments demonstrates how assimilating screen-level observations can lead to unrealistic soil moisture updates, reinforcing the need to assimilate alternative data sets. However, when the two data types are assimilated together, the near-surface soil moisture provides a much weaker constraint of the root zone soil moisture than the screen-level observations do, and the inclusion of the AMSR-E data does not substantially change the results compared to the assimilation of screen-level variables alone. --- paper_title: On the Efficacy of Combining Thermal and Microwave Satellite Data as Observational Constraints for Root-Zone Soil Moisture Estimation paper_content: Abstract Data assimilation applications require the development of appropriate mathematical operators to relate model states to satellite observations. Two such “observation” operators were developed and used to examine the conditions under which satellite microwave and thermal observations provide effective constraints on estimated soil moisture. The first operator uses a two-layer surface energy balance (SEB) model to relate root-zone moisture with top-of-canopy temperature. The second couples SEB and microwave radiative transfer models to yield top-of-atmosphere brightness temperature from surface layer moisture content. Tangent linear models for these operators were developed to examine the sensitivity of modeled observations to variations in soil moisture. Assuming a standard deviation in the observed surface temperature of 0.5 K and maximal model sensitivity, the error in the analysis moisture content decreased by 11% for a background error of 0.025 m3 m−3 and by 29% for a background error of 0.05 m... --- paper_title: Multiscale modeling of spatially variable water and energy balance processes paper_content: This paper presents the model development component of a body of research which addresses aggregation and scaling in multiscale hydrological modeling. Water and energy balance models are developed at the local and catchment scales and at the macroscale by aggregating a simple soil-vegetation-atmosphere transfer scheme (SVATS) across scales in a topographic framework. A spatially distributed approach is followed to aggregate the SVATS to the catchment scale. A statistical-dynamical approach is utilized to simplify the large-scale modeling problem and to aggregate the SVATS to the macroscale. The resulting macroscale hydrological model is proposed for use as a land surface parameterization in atmospheric models. It differs greatly from the current generation of land surface parameterizations owing to its simplified representation of vertical process physics and its statistical representation of horizontally heterogeneous runoff and energy balance processes. The spatially distributed model formulation is explored to understand the role of spatial variability in determining areal-average fluxes and the dynamics of hydrological processes. 
The simpler macroscale formulation is analyzed to determine how it represents these important dynamics, with implications for the parameterization of runoff and energy balance processes in atmospheric models. --- paper_title: Joint Assimilation of Surface Temperature and L-Band Microwave Brightness Temperature in Land Data Assimilation paper_content: Soil moisture and soil temperature are tightly coupled variables in land surface models. The objective of this study was to evaluate the impact of the joint assimilation of soil moisture and land surface temperature data in a land surface model on soil moisture and soil temperature characterization. Three synthetic tests evaluated the joint assimilation of surface temperature (measured by MODIS) and brightness temperature (from L-band) into the Community Land Model using the local ensemble transform Kalman filter (LETKF). The following three tests were performed for dry and wet conditions: (i) assimilating surface temperature observations only --- paper_title: An interactive vegetation SVAT model tested against data from six contrasting sites paper_content: The interactions between soil, biosphere, and atmosphere scheme (ISBA) is modified in order to account for the atmospheric carbon dioxide concentration on the stomatal aperture. The physiological stomatal resistance scheme proposed by Jacobs (1994) is employed to describe photosynthesis and its coupling with stomatal resistance at leaf level. In addition, the plant response to soil water stress is driven by a normalized soil moisture factor applied to the mesophyll conductance. The computed vegetation net assimilation can be used to feed a simple growth submodel, and to predict the density of vegetation cover. Only two parameters are needed to calibrate the growth model: the leaf life expectancy and the effective biomass per unit leaf area. The new soil–vegetation–atmosphere transfer (SVAT) scheme, called ISBA–A–gs, is tested against data from six micrometeorological databases for vegetation ranging from temperate grassland to tropical forest. It is shown that ISBA–A–gs is able to simulate the water budget and the CO2 flux correctly. Also, the leaf area index predicted by the calibrated model agrees well with observations over canopy types ranging from shortcycled crops to evergreen grasslands or forests. Once calibrated, the model is able to adapt the vegetation density in response to changes in the precipitation distribution. --- paper_title: A simple hydrologically based model of land surface water and energy fluxes for general circulation models paper_content: A generalization of the single soil layer variable infiltration capacity (VIC) land surface hydrological model previously implemented in the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation model (GCM) is described. The new model is comprised of a two-layer characterization of the soil column, and uses an aerodynamic representation of the latent and sensible heat fluxes at the land surface. The infiltration algorithm for the upper layer is essentially the same as for the single layer VIC model, while the lower layer drainage formulation is of the form previously implemented in the Max-Planck-Institut GCM. The model partitions the area of interest (e.g., grid cell) into multiple land surface cover types; for each land cover type the fraction of roots in the upper and lower zone is specified. 
Evapotranspiration consists of three components: canopy evaporation, evaporation from bare soils, and transpiration, which is represented using a canopy and architectural resistance formulation. Once the latent heat flux has been computed, the surface energy balance is iterated to solve for the land surface temperature at each time step. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters, and surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer-fall of 1987 to validate the surface energy fluxes. --- paper_title: Ability of the land surface model ISBA-A-gs to simulate leaf area index at the global scale: Comparison with satellites products paper_content: [1] The land surface model (LSM) ISBA-A-gs (Interactions between Soil, Biosphere and Atmosphere, CO2-reactive) is specifically designed to simulate leaf stomatal conductance and leaf area index (LAI) in response to climate, soil properties, and atmospheric carbon dioxide concentration. The model is run at the global scale, forced by the GSWP-2 meteorological data at a resolution of 1° for the period of 1986–1995. We test the model by comparing the simulated LAI values against three satellite-derived data sets (ISLSCP Initiative II data, MODIS data and ECOCLIMAP data) and find that the model reproduces the major patterns of spatial and temporal variability in global vegetation. As a result, the mean of the maximum annual LAI estimates of the model falls within the range of the various satellite data sets. Despite no explicit representation of phenology, the model captures the seasonal cycle in LAI well and shows realistic variations in start of the growing season as a function of latitude. The interannual variability is also well reported for numerous regions of the world, particularly where precipitation controls photosynthesis. The comparison also reveals that some processes need to be improved or introduced in the model, in particular the snow dynamics and the treatment of vegetation in cultivated areas, respectively. The overall comparisons demonstrate the potential of ISBA-A-gs model to simulate LAI in a realistic fashion at the global scale. --- paper_title: A Bayesian spatial assimilation scheme for snow coverage observations in a gridded snow model paper_content: A method for assimilating remotely sensed snow covered area (SCA) into the snow subroutine of a grid distributed precipitation-runoff model (PRM) is presented. The PRM is assumed to simulate the snow state in each grid cell by a snow depletion curve (SDC), which relates that cell's SCA to its snow cover mass balance. The assimilation is based on Bayes' theorem, which requires a joint prior distribution of the SDC variables in all the grid cells. In this paper we propose a spatial model for this prior distribution, and include similarities and dependencies among the grid cells. Used to represent the PRM simulated snow cover state, our joint prior model regards two elevation gradients and a degree-day factor as global variables, rather than describing their effect separately for each cell. This transformation results in smooth normalised surfaces for the two related mass balance variables, supporting a strong inter-cell dependency in their joint prior model.
The global features and spatial interdependency in the prior model cause each SCA observation to provide information for many grid cells. The spatial approach similarly facilitates the utilisation of observed discharge. Assimilation of SCA data using the proposed spatial model is evaluated in a 2400 km² mountainous region in central Norway (61° N, 9° E), based on two Landsat 7 ETM+ images generalized to 1 km² resolution. An image acquired on 11 May, a week before the peak flood, removes 78% of the variance in the remaining snow storage. Even an image from 4 May, less than a week after the melt onset, reduces this variance by 53%. These results are largely improved compared to a cell-by-cell independent assimilation routine previously reported. Including observed discharge in the updating information improves the 4 May results, but has weak --- paper_title: Feasibility Test of Multifrequency Radiometric Data Assimilation to Estimate Snow Water Equivalent paper_content: Abstract A season-long, point-scale radiometric data assimilation experiment is performed in order to test the feasibility of snow water equivalent (SWE) estimation. Synthetic passive microwave observations at Special Sensor Microwave Imager (SSM/I) and Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) frequencies and synthetic broadband albedo observations are assimilated simultaneously in order to update snowpack states in a land surface model using the ensemble Kalman filter (EnKF). The effects of vegetation and atmosphere are included in the radiative transfer model (RTM). The land surface model (LSM) was given biased precipitation to represent typical errors introduced in modeling, yet was still able to recover the true value of SWE with a seasonally integrated rmse of only 2.95 cm, despite a snow depth of around 3 m and the presence of liquid water in the snowpack. This ensemble approach is ideal for investigating the complex theoretical relationships between the snowpack proper... --- paper_title: Mapping of snow water equivalent and snow depth in boreal and sub-arctic zones by assimilating space-borne microwave radiometer data and ground-based observations paper_content: Abstract The monitoring of snow water equivalent (SWE) and snow depth (SD) in boreal forests is investigated by applying space-borne microwave radiometer data and synoptic snow depth observations. A novel assimilation technique based on (forward) modelling of observed brightness temperatures as a function of snow pack characteristics is introduced. The assimilation technique is a Bayesian approach that weighs the space-borne data and the reference field on SD interpolated from discrete synoptic observations with their estimated statistical accuracy. The results obtained using SSM/I and AMSR-E data for northern Eurasia and Finland indicate that the employment of space-borne data using the assimilation technique improves the SD and SWE retrieval accuracy when compared with the use of values interpolated from synoptic observations. Moreover, the assimilation technique is shown to reduce systematic SWE/SD estimation errors evident in the inversion of space-borne radiometer data.
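The snow-assimilation references above all reduce to the same basic operation: weight a model background against an observation according to their respective error variances. The following minimal scalar sketch illustrates that weighting; the numbers, the linear observation operator H, and the SWE framing are invented for illustration (the cited studies use full snow and radiative-transfer models instead).

```python
# Illustrative only: a scalar Bayesian (Kalman-style) analysis step of the kind
# underlying the snow data-assimilation papers above. All values are made up.

def scalar_analysis(x_b, var_b, y, var_o, H=1.0):
    """Combine a model background x_b with an observation y of H*x."""
    K = var_b * H / (H * H * var_b + var_o)   # gain: weight given to the observation
    x_a = x_b + K * (y - H * x_b)             # analysis (updated) state
    var_a = (1.0 - K * H) * var_b             # analysis error variance
    return x_a, var_a

if __name__ == "__main__":
    # Background SWE of 120 mm with generous uncertainty; the observation says 150 mm.
    x_a, var_a = scalar_analysis(x_b=120.0, var_b=400.0, y=150.0, var_o=100.0)
    print(f"analysis SWE = {x_a:.1f} mm, variance = {var_a:.1f} mm^2")
```

The analysis lands between the background and the observation, closer to whichever has the smaller error variance, which is the intuition behind all of the assimilation schemes reviewed here.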
--- paper_title: Impact of Accuracy, Spatial Availability, and Revisit Time of Satellite-Derived Surface Soil Moisture in a Multiscale Ensemble Data Assimilation System paper_content: This study evaluates the sensitivity of a multiscale ensemble assimilation system to different configurations of satellite soil moisture observations, namely the retrieval accuracy, spatial availability, and revisit time. We perform horizontally coupled assimilation experiments where pixels are updated not only by observations at the same location but also by all observations in the study domain. Carrying out sensitivity studies within a multiscale assimilation system is a significant advancement over previous studies that used a 1-D assimilation framework where all horizontal grids are uncoupled. Twin experiments are performed with synthetic soil moisture retrievals. The hydrologic modeling system is forced with satellite estimated rainfall, and the assimilation performance is evaluated against model simulations using in-situ measured rainfall. The study shows that the assimilation performance is most sensitive to the spatial availability of soil moisture observations, then to revisit time, and least sensitive to retrieval accuracy. The horizontally coupled assimilation system performs reasonably well even with large observation errors, and it is less sensitive to retrieval accuracy than the uncoupled system, as reported by previous studies. This suggests that more information may be extracted from satellite soil moisture observations using multiscale assimilation systems resulting in a potentially higher value of such satellite products. --- paper_title: Comparison of Data Assimilation Techniques for a Coupled Model of Surface and Subsurface Flow paper_content: Data assimilation in the geophysical sciences refers to methodologies to optimally merge model predictions and observations. The ensemble Kalman filter (EnKF) is a statistical sequential data assimilation technique explicitly developed for nonlinear filtering problems. It is based on a Monte Carlo approach that approximates the conditional probability densities of the variables of interest by a finite number of randomly generated model trajectories. In Newtonian relaxation or nudging (NN), which can be viewed as a special case of the classic Kalman filter, model variables are driven toward observations by adding to the model equations a forcing term, or relaxation component, that is proportional to the difference between simulation and observation. The forcing term contains four-dimensional weighting functions that can, ideally, incorporate prior knowledge about the characteristic scales of spatial and temporal variability of the state variable(s) being assimilated. In this study, we examined the EnKF and NN algorithms as implemented for a complex hydrologic model that simulates catchment dynamics, coupling a three-dimensional finite element Richards' equation solver for variably saturated porous media and a finite difference diffusion wave approximation for surface water flow. We report on the retrieval performance of the two assimilation schemes for a small catchment in Belgium. The results of the comparison show that nudging, while more straightforward and less expensive computationally, is not as effective as the ensemble Kalman filter in retrieving the true system state. We discuss some of the strengths and weaknesses, both physical and numerical, of the NN and EnKF schemes. ---
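Several of the references above (and the EnKF section of the outline that follows) rely on the ensemble analysis step, in which background covariances are estimated from a finite set of model trajectories. The sketch below shows the stochastic (perturbed-observation) form of that step with numpy; state dimension, ensemble size, observation operator and all values are toy assumptions, not taken from any of the cited studies.

```python
import numpy as np

# Minimal stochastic EnKF analysis step (perturbed observations), for illustration.
rng = np.random.default_rng(0)
n_state, n_ens, n_obs = 5, 40, 2

X = rng.normal(0.3, 0.05, size=(n_state, n_ens))              # ensemble of soil-moisture states
H = np.zeros((n_obs, n_state)); H[0, 0] = 1.0; H[1, 1] = 1.0  # observe the top two layers
R = np.diag([0.02**2, 0.02**2])                               # observation-error covariance
y = np.array([0.25, 0.28])                                    # synthetic observations

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                                    # ensemble anomalies
P_HT = A @ (H @ A).T / (n_ens - 1)                            # P H^T estimated from the ensemble
S = H @ P_HT + R                                              # innovation covariance H P H^T + R
K = P_HT @ np.linalg.inv(S)                                   # Kalman gain

Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (Y_pert - H @ X)                                 # analysis ensemble

print("background mean:", Xm.ravel().round(3))
print("analysis mean:  ", Xa.mean(axis=1).round(3))
```

A nudging scheme, by contrast, would simply add a relaxation term proportional to (y - H x) to the model equations at every time step, which is why it is cheaper but, as the comparison study above reports, generally less effective.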
Title: Multivariate and Multiscale Data Assimilation in Terrestrial Systems: A Review
Section 1: Introduction
Description 1: Introduces the concept of data assimilation (DA) in terrestrial systems, highlights the importance of combining measurements and models, and outlines the objectives of the review.
Section 2: Data Assimilation Theory
Description 2: Discusses the theoretical background of various data assimilation techniques, including the Ensemble Kalman Filter (EnKF), Particle Filter (PF), and variational methods (VAR).
Section 3: Ensemble Kalman Filter (EnKF)
Description 3: Provides an in-depth examination of the Ensemble Kalman Filter methodology, its application, and its variants such as the Ensemble Kalman Smoother (EnKS).
Section 4: Particle Filter (PF)
Description 4: Reviews the Particle Filter method, detailing its procedure, advantages, drawbacks, and specific applications in hydrological modeling.
Section 5: Variational Assimilation (VAR)
Description 5: Explains the variational assimilation technique, focusing on its cost function minimization approach and applications in numerical weather prediction and terrestrial systems.
Section 6: Data Assimilation across States and Scales
Description 6: Analyzes different approaches to DA based on the number of states and scales, detailing univariate single-scale, univariate multiscale, multivariate single-scale, and multivariate multiscale data assimilation.
Section 7: Univariate Single-Scale Data Assimilation (UVSS)
Description 7: Discusses DA methods that assimilate a single type of data without scale mismatch considerations, providing specific examples and applications.
Section 8: Multivariate Single-Scale Data Assimilation (MVSS)
Description 8: Reviews the techniques and benefits of assimilating multiple types of observation data on the same spatial scale into simulation models.
Section 9: Univariate Multiscale Data Assimilation (UVMS)
Description 9: Focuses on DA methods that handle data obtained at different resolutions than the model resolution, employing scaling techniques for proper assimilation.
Section 10: Multivariate Multiscale Data Assimilation (MVMS)
Description 10: Covers the complex integration of various types and scales of data, detailing studies and methods for efficiently combining and assimilating these data sets.
Section 11: Multisource DA
Description 11: Defines the scenario where a single state variable is updated by multiple data sources at the same scale, highlighting its distinct advantages and applications.
Section 12: Methodology
Description 12: Compares different strategies for assimilating data obtained at varying resolutions, including observation operators and prior downscaling techniques.
Section 13: Applications
Description 13: Provides examples of diverse applications of univariate and multivariate data assimilation across different terrestrial systems.
Section 14: Advantages and Disadvantages of Multivariate and Multiscale Data Assimilation
Description 14: Summarizes the benefits and challenges of using multivariate and multiscale DA methods versus univariate single-scale approaches.
Section 15: Other Aspects
Description 15: Discusses additional considerations in DA, such as parameter estimation verification and the computational demands of high-resolution models.
Section 16: Conclusions
Description 16: Summarizes the current state and future potential of DA in terrestrial systems, emphasizing the need for further research and application to fully leverage available environmental data.
Two decades of local binary patterns: A survey
11
--- paper_title: Face Description with Local Binary Patterns: Application to Face Recognition paper_content: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed --- paper_title: Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions paper_content: Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation --- paper_title: Local Binary Patterns: New Variants and Applications paper_content: This book introduces Local Binary Patterns (LBP), arguably one of the most powerful texture descriptors, and LBP variants. This volume provides the latest reviews of the literature and a presentation of some of the best LBP variants by researchers at the forefront of textual analysis research and research on LBP descriptors and variants. The value of LBP variants is illustrated with reported experiments using many databases representing a diversity of computer vision applications in medicine, biometrics, and other areas. There is also a chapter that provides an excellent theoretical foundation for texture analysis and LBP in particular. A special section focuses on LBP and LBP variants in the area of face recognition, including thermal face recognition. This book will be of value to anyone already in the field as well as to those interested in learning more about this powerful family of texture descriptors. --- paper_title: Texture discrimination with multidimensional distributions of signed gray level differences paper_content: The statistics of gray-level differences have been successfully used in a number of texture analysis studies. In this paper we propose to use signed gray-level differences and their multidimensional distributions for texture description. The present approach has important advantages compared to earlier related approaches based on gray level cooccurrence matrices or histograms of absolute gray-level differences.
Experiments with difficult texture classification and supervised texture segmentation problems show that our approach provides a very good and robust performance in comparison with the mainstream paradigms such as cooccurrence matrices, Gaussian Markov random fields, or Gabor filtering. © 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. --- paper_title: Monogenic-LBP: A new approach for rotation invariant texture classification paper_content: Analysis of two-dimensional textures has many potential applications in computer vision. In this paper, we investigate the problem of rotation invariant texture classification, and propose a novel texture feature extractor, namely Monogenic-LBP (M-LBP). M-LBP integrates the traditional Local Binary Pattern (LBP) operator with the other two rotation invariant measures: the local phase and the local surface type computed by the 1st-order and 2nd-order Riesz transforms, respectively. The classification is based on the image's histogram of M-LBP responses. Extensive experiments conducted on the CUReT database demonstrate the overall superiority of M-LBP over the other state-of-the-art methods evaluated. --- paper_title: A comparative study of texture measures with classification based on featured distributions paper_content: This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented --- paper_title: Retrieval of translated, rotated and scaled color textures paper_content: A new method for color texture retrieval using color and edge features is proposed in this study. The proposed method unifies color and edge features rather than simply analyzing only color characteristics. First, the distributions of color and local edge patterns are used to derive a similarity measure for a pair of textures. Then, a retrieval method based on the similarity measure is proposed to retrieve texture images from a database of color textures. Finally, the similarity measure is extended to retrieve texture regions from a database of natural images. Since the proposed feature distributions can resist variations in translation, rotation and scale, our method has the ability to retrieve texture images or regions that change in translation, rotation and/or scale. The effectiveness and practicability of the proposed method have been demonstrated by various experiments.
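The references above all build on the same basic operator: each neighbour in a 3x3 window is thresholded against the centre pixel, the bits are packed into an 8-bit code, and the image is described by the histogram of codes. The compact numpy sketch below restates that basic operator for illustration; it does not reproduce any of the variants (VLBP, LBP-TOP, Monogenic-LBP) described in the abstracts.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP codes for the interior pixels of a grayscale image."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                                   # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),        # neighbours, clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (nb >= c).astype(np.int32) << bit        # threshold against the centre
    return codes

def lbp_histogram(img):
    codes = lbp_3x3(img)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()                              # normalised LBP histogram

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patch = rng.integers(0, 256, size=(32, 32))
    print(lbp_histogram(patch)[:8])
```

Region-based descriptors such as the face representation above are then obtained by computing one such histogram per image block and concatenating them.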
--- paper_title: Constructing Local Binary Pattern Statistics by Soft Voting paper_content: In this paper we propose a novel method for constructing Local Binary Pattern (LBP) statistics for image appearance description. The method is inspired by the kernel density estimation designed for estimating the underlying probability function of a random variable. An essential part of the proposed method is the use of Hamming distance. Compared to the standard LBP histogram statistics where one labeled pixel always contributes to one bin of the histogram, the proposed method exploits a kernel-like similarity function to determine weighted votes contributing several possible pattern types in the statistic. As a result, the method yields a more reliable estimate of the underlying LBP distribution of the given image. In overall, the method is easy to implement and outperforms the standard LBP histogram description in texture classification and in biometrics-related face verification. We demonstrate that the method is extremely potential in problems where the number of pixels is limited. This makes the method very promising, for example, in low-resolution image description and the description of interest regions. Another interesting property of the proposed method is that it can be easily integrated with many existing LBP variants that use label statistics as descriptors. --- paper_title: FLBP: Fuzzy Local Binary Patterns paper_content: In this chapter, a variant of the Local Binary Patterns method that extends the ability of the method to cope with noisy texture by utilizing fuzzy modelling techniques is presented. The presented generalised Fuzzy Binary Patterns model is applied to the classic Local Binary Patterns method as well as to the Local Binary Patterns with Contrast measure method, resulting to the respective fuzzy logic based methods. Supervised classification experiments were conducted on a wide range of natural and medical texture images, degraded by different types and intensities of additive noise, in order to evaluate the efficiency of the Fuzzy Local Binary Patterns method and its fusion with other proposed methods. Fuzzy Binary Patterns based methods outperform the respective methods based on the classic Binary Patterns model for all types of images and noise, indicating the efficiency of fuzzy modelling in coping with the uncertainty introduced to texture due to noise. --- paper_title: Classification with color and texture: jointly or separately? paper_content: Abstract Current approaches to color texture analysis can be roughly divided into two categories: methods that process color and texture information separately, and those that consider color and texture a joint phenomenon. In this paper, both approaches are empirically evaluated with a large set of natural color textures. The classification performance of color indexing methods is compared to gray-scale and color texture methods, and to combined color and texture methods, in static and varying illumination conditions. Based on the results, we argue that color and texture are separate phenomena that can, or even should, be treated individually. --- paper_title: CENTRIST: A Visual Descriptor for Scene Categorization paper_content: CENsus TRansform hISTogram (CENTRIST), a new visual descriptor for recognizing topological places or scene categories, is introduced in this paper. 
We show that place and scene recognition, especially for indoor environments, require its visual descriptor to possess properties that are different from other vision domains (e.g., object recognition). CENTRIST satisfies these properties and suits the place and scene recognition task. It is a holistic representation and has strong generalizability for category recognition. CENTRIST mainly encodes the structural properties within an image and suppresses detailed textural information. Our experiments demonstrate that CENTRIST outperforms the current state of the art in several place and scene recognition data sets, compared with other descriptors such as SIFT and Gist. Besides, it is easy to implement and evaluates extremely fast. --- paper_title: Learning binarized pixel-difference pattern for scene recognition paper_content: Local binary pattern (LBP) and its variants have been used in scene recognition. However, most existing approaches rely on a pre-defined LBP structure to extract features. Those pre-defined structures can be generalized as the patterns constructed from the binarized pixel differences in a local neighborhood. Instead of using a handcraft structure, we propose to learn binarized pixel-difference patterns (BPP). We cast the problem as a feature selection problem and solve it by an incremental search via the criterion of minimum-redundancy-maximum-relevance. Then, BPP features are extracted based on the structures derived. On two challenging scene recognition databases, the proposed approach significantly outperforms the state of the arts. --- paper_title: A comprehensive benchmark of local binary pattern algorithms for texture retrieval paper_content: Image retrieval is a well researched area and often based on integrating various kinds of image features. Apart from colour features, texture features are deemed crucial for successful image retrieval. Local binary pattern (LBP) based texture algorithms have gained significant popularity in recent years and have been shown to be useful for a variety of tasks. In this paper, we provide a comprehensive benchmark of LBP based methods for texture retrieval. In particular, a comparison of 16 LBP variants leading to 38 different texture descriptors, are evaluated on a large dataset of more than 6000 texture images. Interestingly, conventional LBP features are shown to work best, while almost all LBP methods are shown to significantly outperform other texture methods including Tamura, co-occurrence and Gabor features. ---
Title: Two Decades of Local Binary Patterns: A Survey
Section 1: An overview of basic LBP operators
Description 1: Summarize the fundamental concepts, theoretical bases, and initial implementations of the Local Binary Patterns (LBP) operator.
Section 2: Different types of variants
Description 2: Discuss various extensions and modifications of the original LBP to enhance its performance and robustness across different applications.
Section 3: Preprocessing
Description 3: Detail preprocessing techniques used prior to LBP computation, including methods like Gabor filtering and edge detection to improve performance under various conditions.
Section 4: Neighborhood topology and sampling
Description 4: Explain the flexibility of LBP in terms of neighborhood topology and sampling, and how different shapes and configurations can be used based on the application requirements.
Section 5: Thresholding and encoding
Description 5: Explore different thresholding and encoding techniques that address the limitations of the original LBP, enhancing robustness to noise and gray level changes.
Section 6: Considering co-occurrences
Description 6: Describe approaches that incorporate co-occurrences of LBP codes to improve detection performance and feature representation.
Section 7: Feature selection and learning
Description 7: Highlight methods for reducing the dimensionality of LBP features and selecting the most discriminative patterns through feature selection and machine learning techniques.
Section 8: Other methods inspired by LBP
Description 8: Summarize new local image descriptors developed based on LBP principles, such as Local Phase Quantization (LPQ) and Patterns of Oriented Edge Magnitudes (POEM).
Section 9: Variants of spatiotemporal LBP
Description 9: Discuss the adaptations of LBP for spatiotemporal analysis, extending its applicability from static images to dynamic video sequences.
Section 10: Analysis of depth and 4D images
Description 10: Detail the use of LBP variants for analyzing depth (3D) and 4D images, with applications in fields like medical imaging and gesture recognition.
Section 11: Conclusions
Description 11: Summarize the overall advantages of LBP, highlight key variants and their contributions, and discuss future challenges and directions for LBP research.
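The "Thresholding and encoding" part of the outline above covers refinements of the raw 256-bin code. One widely used refinement, stated here from its standard definition rather than from this survey's text, is the "uniform" pattern mapping: codes with at most two 0/1 transitions around the circle keep their own histogram bin, and all remaining codes share a single bin, giving 59 bins for 8 neighbours.

```python
# A small helper that builds the standard uniform-pattern lookup table
# (58 uniform bins + 1 shared non-uniform bin = 59 bins for 8-bit codes).

def uniform_lbp_mapping(bits=8):
    def transitions(code):
        b = [(code >> i) & 1 for i in range(bits)]
        return sum(b[i] != b[(i + 1) % bits] for i in range(bits))

    table, next_bin = {}, 0
    for code in range(1 << bits):
        if transitions(code) <= 2:
            table[code] = next_bin      # each uniform code gets its own bin
            next_bin += 1
        else:
            table[code] = -1            # placeholder for the shared bin
    for code, b in table.items():
        if b == -1:
            table[code] = next_bin      # all non-uniform codes map to the last bin
    return table

if __name__ == "__main__":
    m = uniform_lbp_mapping()
    print(len(set(m.values())))         # 59
```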
A Review on Speckle Noise Reduction Techniques
18
--- paper_title: A hybrid filter for image despeckling with wavelet-based denoising and spatial filtering paper_content: Noise is one of the most common problems found in different imaging applications. Even a high resolution photo is bound to have some noise in it. Image denoising algorithms may be the oldest in image processing. Many methods, regardless of their implementation, share the same basic idea: noise reduction through image blurring. However, a universal “best” approach has yet to be found. Wavelet transform has been studied extensively in recent years and it has been widely used for noise reduction. In this paper, we propose a novel technique for image despeckling that combines wavelet denoising and an enhanced adaptive version of the old Kuan filter, which results in a significant gain with respect to speckle noise filters and the simple wavelet denoising technique. ---
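The hybrid despeckling reference above builds on the classical local-statistics (Lee/Kuan) family of adaptive filters before applying wavelet denoising. The sketch below shows the simple additive-noise Lee form of that idea; the window size and the crude global noise-variance estimate are illustrative choices, not the settings of the cited paper, and the loop is written for clarity rather than speed.

```python
import numpy as np

def lee_filter(img, win=5, noise_var=None):
    """Local-statistics (Lee-type) smoothing: flat areas are averaged, edges kept."""
    img = np.asarray(img, dtype=np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    mean = np.empty_like(img)
    var = np.empty_like(img)
    for i in range(H):                       # sliding-window local statistics
        for j in range(W):
            block = padded[i:i + win, j:j + win]
            mean[i, j] = block.mean()
            var[i, j] = block.var()
    if noise_var is None:
        noise_var = np.median(var)           # rough global noise estimate (assumption)
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)        # gain -> 0 in flat areas, -> 1 near edges
```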
Title: A Review on Speckle Noise Reduction Techniques
Section 1: Introduction
Description 1: This section introduces the problem of noise in imaging systems, particularly focusing on speckle noise encountered in medical imaging.
Section 2: Statistical Model of Speckle in Ultrasound
Description 2: In this section, discuss the statistical models used to understand and represent speckle noise in ultrasound imaging.
Section 3: Speckle Filtering Methods
Description 3: This section outlines various methods for speckle noise reduction, divided into spatial domain techniques and transform domain techniques.
Section 3.1: Spatial Domain Filtering
Description 3.1: Discuss the spatial domain techniques for filtering speckle noise, including explanations of common filters.
Section 3.1.1: Box Filter
Description 3.1.1: Describe the box filter method, its advantages, and its disadvantages.
Section 3.1.2: Median Filter
Description 3.1.2: Explain the median filter technique, focusing on its effectiveness against different types of noise and its computational requirements.
Section 3.1.3: Statistic Lee Filter
Description 3.1.3: Detail the Lee filter, its working principles, and any limitations.
Section 3.1.4: Statistic Kuan Filter
Description 3.1.4: Discuss the Kuan filter, its approach compared to the Lee filter, and its limitations.
Section 3.1.5: Frost Filter
Description 3.1.5: Describe the Frost filter and how it adapts to different variances within an image for effective noise reduction.
Section 3.1.6: Srad Filter
Description 3.1.6: Explore the SRAD filter technique, which uses non-linear anisotropic diffusion for reducing speckle noise.
Section 3.2: Transform Domain Techniques
Description 3.2: Explain the methods that work in the transform domain to reduce speckle noise, including their principles and execution.
Section 3.2.1: Fourier Transform
Description 3.2.1: Describe the Fourier transform approach for speckle noise reduction, including its steps and applications.
Section 3.2.2: Wavelet Transform
Description 3.2.2: Discuss the wavelet transform method, its benefits in multi-resolution analysis, and its applications in noise reduction.
Section 3.2.3: Wavelet Domain Noise Filtering
Description 3.2.3: Explain how noise filtering is achieved in the wavelet domain and the specific processes used.
Section 4: Thresholding
Description 4: Discuss the role of thresholding techniques in speckle noise reduction, including soft and hard thresholding methods.
Section 4.1: Soft Thresholding
Description 4.1: Explain the soft thresholding technique and how it improves image quality by reducing discontinuities.
Section 4.2: Hard Thresholding
Description 4.2: Describe the hard thresholding technique and its application in detail coefficient handling.
Section 5: Conclusion
Description 5: Summarize the findings and discussion on the various filtering methods, highlighting the advantages of wavelet transform techniques over traditional methods.
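Sections 4.1 and 4.2 of the outline above distinguish soft and hard thresholding of wavelet detail coefficients. The two helpers below restate the standard definitions; the choice of the threshold value itself (for example the universal threshold sigma*sqrt(2*ln N)) is left to the denoising scheme and is not shown.

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Zero coefficients whose magnitude is at most t; keep the rest unchanged."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Zero small coefficients and shrink the survivors toward zero by t."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

if __name__ == "__main__":
    d = np.array([-3.0, -0.5, 0.2, 1.4, 4.0])
    print(hard_threshold(d, 1.0))   # small coefficients zeroed, large ones untouched
    print(soft_threshold(d, 1.0))   # small ones zeroed, large ones shrunk by 1.0
```

The shrinkage in the soft rule is what removes the discontinuities mentioned in the outline, at the cost of slightly biasing large coefficients.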
A Survey of Design Pattern Based Web Applications
8
--- paper_title: Design reuse in hypermedia applications development paper_content: In this paper we discuss the use of design patterns for the process of building hypermedia applications. The idea of design patterns has been recently developed, and rapidly spread outside the object-oriented community to a general audience of software developers. By using patterns it is not only possible to document design experience with a very simple and comprehensible format, but also reuse the same experience several times for different applications. We argue that the hypermedia community will take a vital step towards better designs of hypermedia applications and systems by developing a pattern language for that domain. --- paper_title: A formal description of design patterns using OWL paper_content: Design patterns have been used successfully in the last decade to reuse and communicate object-oriented design. However, the documentation of pattern usage is often very poor. This motivates the use of tools which can detect and document design patterns found in software. A couple of approaches have been proposed in recent years. The approach introduced is based on a formal description of design patterns using the Web ontology language OWL. Software artefacts used to define design patterns in a formal and machine processable fashion are represented by uniform resource identifiers (URIs). This yields a description that is open and extensible, and facilitates the sharing of design among software engineers. We discuss the developed software design ontology, and how this approach relates to the meta-modelling architecture of the OMG. In the second part, an effective pattern scanner for the Java language is presented. This scanner is based on the ontology developed in part one and uses reflection and AST analysis to verify constraints. Various applications of this scanner are discussed. --- paper_title: Ontology Design Patterns for Semantic Web Content paper_content: The paper presents a framework for introducing design patterns that facilitate or improve the techniques used during ontology lifecycle. Some distinctions are drawn between kinds of ontology design patterns. Some content-oriented patterns are presented in order to illustrate their utility at different degrees of abstraction, and how they can be specialized or composed. The proposed framework and the initial set of patterns are designed in order to function as a pipeline connecting domain modelling, user requirements, and ontology driven tasks/queries to be executed. --- paper_title: Transforming legacy Web applications to the MVC architecture paper_content: With the rapid changes that occur in the area of Web technologies, the porting and adaptation of existing Web applications into new platforms that take advantage of modern technologies has become an issue of increasing importance. This paper presents a reengineering framework whose target system is an architecture based on the model-view-controller (MVC) design pattern and enabled for the Java™ 2 Platform, Enterprise Edition (J2EE). The proposed framework is mainly concerned with the decomposition of a legacy Web application by identifying software components to be transformed into Java objects such as JavaBeans, JavaServer Pages (JSP), and Java Servlet.
--- paper_title: Legacy Migration to Service-Oriented Computing with Mashups paper_content: Although service-oriented computing holds great promise, it is still not clear when and how existing systems will exploit this new computational model. The problem is particularly severe for software that has been in use for several years. This work provides a roadmap for the migration of legacy software to service-oriented computing by means of the right levels of abstraction. The key idea is having integration even at the presentation layer, not only at backend layers such as application or data. This requires re-inventing the popular MASHUP technology of Web 2.0 at the enterprise level. Domain-specific kits and choreography engine concepts that were originally introduced by the software factory automation approach have been reshaped as another enabling technology towards migrating to the service harmonization platform. The paper also exemplifies the proposed approach on a simple case problem. --- paper_title: A Method for Integration of Web Applications Based on Information Extraction paper_content: Integration of Web services from different Web sites has brought new creativity and functionality to Web applications. These integration technologies, called mashup or mixup, have made a shift in Web service development and created a new generation of widely popular and successful Web services such as Google Maps API and YouTube Data API. However, the integration is limited to the Web sites that provide open Web service APIs, and currently, most existing Web sites do not provide Web services. In this paper, we present a method to integrate general Web applications. For this purpose, we propose a Web information extraction method to generate virtual Web service functions from Web applications at the client side. Our implementation shows that general Web applications can also be integrated easily. --- paper_title: A Database and Web Application Based on MVC Architecture paper_content: MVC architecture has had wide acceptance for corporate software development. It aims to divide the system into three different layers, in charge of the interface, the control logic, and data access; this facilitates the maintenance and evolution of systems thanks to the independence of the classes in each layer. With the purpose of illustrating a successful application built under MVC, in this work we introduce the different phases of analysis, design and implementation of a database and web application using UML. As its central components, the application has a database made up of fifteen relations and a user interface supported by seventeen web pages.
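The MVC-related references above describe the same three-layer split (model = data access, view = presentation, controller = interface logic). The deliberately tiny sketch below illustrates that split in generic Python; it is not tied to J2EE/JSP or to any of the cited systems, and the class and data names are invented for the example.

```python
class UserModel:                      # model: owns and serves the data
    def __init__(self):
        self._users = {1: "Alice", 2: "Bob"}

    def get(self, user_id):
        return self._users.get(user_id)


class UserView:                       # view: only renders, holds no state
    @staticmethod
    def render(name):
        return f"<html><body>User: {name}</body></html>"


class UserController:                 # controller: mediates between model and view
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show_user(self, user_id):
        name = self.model.get(user_id) or "unknown"
        return self.view.render(name)


if __name__ == "__main__":
    print(UserController(UserModel(), UserView()).show_user(1))
```

Because the view never touches the data store and the model never formats HTML, either layer can be replaced (for example, porting the presentation to a new channel) without rewriting the others, which is the main argument the reengineering papers make for migrating legacy web applications to this pattern.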
--- paper_title: Reengineering Web applications based on cloned pattern analysis paper_content: Web applications are subject to continuous and rapid evolution. Often it happens that programmers indiscriminately duplicate Web pages without considering systematic development and maintenance methods. This practice creates code clones that make Web applications hard to maintain and reuse. This paper presents an approach for reengineering Web applications based on clone analysis that aims at identifying and generalizing static and dynamic pages and navigational patterns of a Web application. Clone analysis is also helpful for identifying literals that can be generated from a database. A case study is described which shows how the proposed approach can be used for restructuring the navigational structure of a Web application by removing redundant code. --- paper_title: Reengineering to the Web: a reference architecture paper_content: Reengineering existing (large-scale) applications to the Web is a complex and highly challenging task. This is due to a variety of demanding requirements for interactive Web applications. High performance is usually required, old interfaces still have to be supported, high availability requirements are usual, information has to be provided to multiple channels and in different formats, pages should contain individual layouts across different channels, styles should be imposed over presentation, etc. To achieve these goals a variety of different technologies and concepts have to be well understood, including HTTP protocol handling, persistent stores/databases, various XML standards, authentication, session management, dynamic content creation, presentational abstractions, and flexible legacy system wrapping. In a concrete project, all these components have to be integrated properly and appropriate technologies have to be chosen. On basis of practical and theoretical experience in the problem domain, in this paper, we try to identify the recurring components in reengineering projects to the Web, lay out critical issues and choices, and conceptually integrate the components into a reference architecture. ---
Title: A Survey of Design Pattern Based Web Applications
Section 1: INTRODUCTION
Description 1: This section introduces the challenges faced by web application designers, the role of design patterns in overcoming these challenges, the benefits and threats of pattern-based web applications, and provides an overview of the paper's organization.
Section 2: PATTERN-BASED WEB APPLICATIONS
Description 2: This section discusses the classification of web applications into pattern-based and non-pattern-based, and provides an analysis of cases where patterns are used in web applications, including hypermedia and semantic web applications.
Section 3: Hypermedia
Description 3: This subsection under Section 2 elaborates on the role of hypermedia in pattern-based web applications, using examples such as the CAIBCO System and various navigation patterns.
Section 4: Semantic Web
Description 4: This subsection under Section 2 explains the application of design patterns in the construction of the semantic web, including examples of ontology design patterns and their implications.
Section 5: Architectural Solutions for Web applications
Description 5: This section addresses the use of architectural patterns like PAC and MVC in web applications, discussing their benefits and limitations, as well as strategies for migrating legacy systems to web-based architectures.
Section 6: LIABILITIES DUE TO IMPROPER USAGE OF PATTERNS FOR WEB APPLICATIONS
Description 6: This section explores the negative consequences of improper pattern usage, including issues like tight coupling, inefficiencies, and partitioning problems in pattern-based web applications.
Section 7: RE-ENGINEERING WEB APPLICATIONS
Description 7: This section reviews various re-engineering efforts aimed at addressing the problems caused by the misuse of patterns, including methods for pattern analysis and reference architecture.
Section 8: CONCLUSION AND FUTURE WORK
Description 8: This section summarizes the survey's findings on pattern-based web applications and outlines potential future work, including the development of architectures for porting web applications to the MVC framework without partition issues.
Low frequency noise and disturbance assessment methods: A brief literature overview and a new proposal
8
--- paper_title: Study and Assessment of Low Frequency Noise in Occupational Settings paper_content: Low frequency noise is one of the most harmful factors occurring in human working and living environments. Low frequency noise components from 20 to 250 Hz are often the cause of employee complaints. Noise from power stations is a real problem for large cities, including Cairo. The noise from station equipment can be a serious problem both for the station itself and for the surrounding area. The development of power stations in Cairo has led to the deployment of a wide range of gas turbines, which are a strong source of noise. Two measurement techniques, using the C-weighted scale alongside the A-weighted scale, are explored. C-weighting is far more sensitive for detecting low frequency sound. Spectrum analysis in the low frequency range is done in order to identify a significant tonal component. Field studies were supported by a questionnaire, using mean annoyance ratings to determine whether sociological or other factors might influence the results. Subjects included in the study were 153 (mean = 36.86, SD = 8.49) male employees at the three electrical power stations. The (C-A) level difference is an appropriate metric for indicating a potential low frequency noise problem. A-weighting characteristics seem to be able to predict quite accurately annoyance experienced from LFN at workplaces. The aim of the present study is to find a simple and reliable method for assessing low frequency noise in occupational environments, so as to prevent its effects on workers' performance. The proposed method is compared with European methods. --- paper_title: Options for Assessment and Regulation of Low Frequency Noise paper_content: Dutch legal limits for Lden noise levels are different for road traffic, railways, industry and airports. A justification for these differences might be found in the difference between dose-response relationships for noise annoyance. Nevertheless, frequently situations occur where people severely complain of low frequency noise, even when Lden levels amply comply with limits. So far, an effective regulation specifically for low frequency noise is not available. This paper discusses some options for improving the protection for LFN annoyance from Lden legislation. A potential solution might be to penalize Lden levels based on low versus high frequency ratio and tonality. This paper describes how such an approach is applied for a number of LFN cases in the Netherlands and discusses the results. --- paper_title: The Central Role of Interpersonal Conflict in Low Frequency Noise Annoyance paper_content: This paper considers the relationship between the Environmental Health Officer and the Low Frequency Noise complainant (sufferer). It is suggested that the characteristic psychoacoustic properties of Low Frequency Noise may interact with inappropriate assessment protocols to produce a series of interpersonal pressures that play an active part in shaping the overall noise problem. This interaction may be considered as a legitimate and common impact factor within Low Frequency Noise complaints. The confounding role of misperception and miscommunication, between the parties, is explored and models of conflict resolution are considered as a means for providing counter measures to the behavioural consequences of failed assessment procedures and ineffective personal coping strategies.
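The (C-A) level difference mentioned in the occupational-settings reference above is easy to compute once one-third-octave band levels are available: apply the A- and C-weighting corrections, sum the bands energetically, and compare the two overall levels. The sketch below does exactly that; the band levels are invented example data, while the weighting curves follow the standard IEC 61672 analytic expressions.

```python
import numpy as np

def a_weight(f):
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / ((f**2 + 20.6**2)
         * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
         * (f**2 + 12194.0**2))
    return 20 * np.log10(ra) + 2.00          # A-weighting correction in dB

def c_weight(f):
    f = np.asarray(f, dtype=float)
    rc = (12194.0**2 * f**2) / ((f**2 + 20.6**2) * (f**2 + 12194.0**2))
    return 20 * np.log10(rc) + 0.06          # C-weighting correction in dB

def overall_level(band_spl, weights):
    return 10 * np.log10(np.sum(10 ** ((np.asarray(band_spl) + weights) / 10)))

if __name__ == "__main__":
    freqs = np.array([31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 500, 1000])
    spl = np.array([72, 70, 69, 68, 66, 63, 60, 58, 55, 52, 48, 45])  # example only
    la = overall_level(spl, a_weight(freqs))
    lc = overall_level(spl, c_weight(freqs))
    print(f"LA = {la:.1f} dB(A), LC = {lc:.1f} dB(C), C-A = {lc - la:.1f} dB")
```

A large C-A difference flags a spectrum dominated by low-frequency energy, which is exactly the situation in which an A-weighted level alone underestimates the disturbance.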
--- paper_title: Danish Guidelines on Environmental Low Frequency Noise, Infrasound and Vibration paper_content: In Denmark a set of guidelines for measurement and assessment of environmental low frequency noise, infrasound and vibration was published in 1997 as "Information from the Danish Environmental Protection Agency no. 9/1997" (Miljøstyrelsen). Recommended measurement methods are specified, and recommended limit values for noise and vibration are given. In this paper a brief description of the measurement methods is given, the recommended limit values are shown, and the background for the measurement and assessment methods is discussed. This paper is an extended summary of "Information from the Danish Environmental Protection Agency No. 9/1997" (in Danish) --- paper_title: Annoyance from transportation noise: relationships with exposure metrics DNL and DENL and their confidence intervals. paper_content: We present a model of the distribution of noise annoyance with the mean varying as a function of the noise exposure. Day-night level (DNL) and day-evening-night level (DENL) were used as noise descriptors. Because the entire annoyance distribution has been modeled, any annoyance measure that summarizes this distribution can be calculated from the model. We fitted the model to data from noise annoyance studies for aircraft, road traffic, and railways separately. Polynomial approximations of relationships implied by the model for the combinations of the following exposure and annoyance measures are presented: DNL or DENL, and percentage "highly annoyed" (cutoff at 72 on a scale of 0-100), percentage "annoyed" (cutoff at 50 on a scale of 0-100), or percentage (at least) "a little annoyed" (cutoff at 28 on a scale of 0-100). These approximations are very good, and they are easier to use for practical calculations than the model itself, because the model involves a normal distribution. Our results are based on the same data set that was used earlier to establish relationships between DNL and percentage highly annoyed. In this paper we provide better estimates of the confidence intervals due to the improved model of the relationship between annoyance and noise exposure. Moreover, relationships using descriptors other than DNL and percentage highly annoyed, which are presented here, have not been established earlier on the basis of a large dataset. --- paper_title: Assessment criterion for indoor noise disturbance in the presence of low frequency sources paper_content: Abstract Several studies have presented the effects of environmental noise in and around buildings and communities in which people live and work. In particular, the noise introduced into a building is mostly evaluated using the A weighted sound pressure level (LAeq) as the only parameter to determine the perceived disturbance. Nevertheless, if noise is produced by activities or sources characterised by a low frequency contribution, the measurement of LAeq underestimates the real disturbance, in particular during sleeping time. The international literature suggests methods to evaluate the low-frequency noise contribution to annoyance separately from the A weighted sound pressure level; almost all of the proposed methods are based on exceeding a threshold limit. This paper tests international criteria, by applying them in real-life indoor noise situations, and then analysing, comparing and contrasting results.
Based on the result of the procedure above, a new criterion consisting of a single threshold is proposed, which simplifies the procedures in case of low-frequency components, but could be used for any situation. --- paper_title: Sound power level of speaking people paper_content: In restaurants and cafes many sound sources are present: music, refrigeration and cooling equipment, and people speaking. The smoking prohibition law (in force in Italy since 2005, for instance) has moved people outdoors, creating many outdoor gathering areas both in summer and in winter. As a matter of fact, many cafes open a stallage on their outer side in order to serve beverages to outside customers. In this way, the "people speaking" source became a common problem for the neighborhood to deal with and solve. In order to characterize this particular sound source in terms of the sound power level of a typical stallage situation full of speaking people, sound power measurements according to the ISO 3746 standard [1] were carried out. The results confirm the first investigation achievements provided by Sepulcri et al. [2] with a non-direct method. --- paper_title: Low Frequency Hearing Thresholds in Pressure Field and in Free Field paper_content: Thresholds of hearing were determined in pressure field at frequencies from 4 Hz to 125 Hz. At the frequencies 4-25 Hz hearing thresholds were found that are in the lower middle of the range already reported by other investigators. At frequencies from 25 Hz to 1 kHz thresholds have already been determined in free field by the same method and using the same subjects. The two investigations overlap at frequencies from 25 Hz to 125 Hz, and in this range the results were almost identical. The differences were below 1 dB, except at 63 Hz where the difference was 2.5 dB. None of the differences was significant in a t-test --- paper_title: Cortical Plasticity: Learning from Cortical Reorganisation paper_content: Neocortical circuits can undergo dynamic rearrangements, not only in response to injury, but also when new skills are acquired. But although training can lead to functional rewiring of the cortex, we are far from being able to reprogram an animal by manipulating its cortical circuitry directly.
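The "Sound power level of speaking people" reference above determines sound power per ISO 3746 from sound pressure measured on an enveloping surface. The sketch below is a deliberately simplified version of that survey-method calculation: energetically average the microphone levels and add 10*log10(S/S0) with S0 = 1 m². The environmental and background-noise corrections (K1, K2) required by the standard are omitted, and the microphone levels and measurement radius are made up.

```python
import numpy as np

def sound_power_level(mic_spl_db, surface_area_m2):
    """Simplified ISO 3746-style estimate: surface-averaged SPL + area term."""
    mic_spl_db = np.asarray(mic_spl_db, dtype=float)
    lp_mean = 10 * np.log10(np.mean(10 ** (mic_spl_db / 10)))   # energetic mean level
    return lp_mean + 10 * np.log10(surface_area_m2 / 1.0)       # S0 = 1 m^2

if __name__ == "__main__":
    radius = 2.0                               # hemispherical measurement surface (assumed)
    area = 2 * np.pi * radius**2
    mics = [68.5, 69.2, 67.8, 70.1, 68.9]      # example microphone SPLs in dB
    print(f"Lw ≈ {sound_power_level(mics, area):.1f} dB re 1 pW")
```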
--- paper_title: Cortical Plasticity: Learning from Cortical Reorganisation paper_content: Neocortical circuits can undergo dynamic rearrangements, not only in response to injury, but also when new skills are acquired. But although training can lead to functional rewiring of the cortex, we are far from being able to reprogram an animal by manipulating its cortical circuitry directly. --- paper_title: Experimental Characterisation of the Low Frequency Noise Annoyance Arising from Industrial Plants paper_content: Complaints about low frequency noise annoyance are frequently reported although the sound levels, measured according to usual procedures, do not seem to justify such reactions. In the present paper some experimental data, gathered by the authors, are illustrated. The results are analyzed on the basis of the available methodologies with the aim of giving a useful contribution to the discussion about the necessity of a special methodology for the correct evaluation of the low frequency noise annoyance. --- paper_title: Impact Sound Pressure Level Performances of Basic Beam Floor Structures paper_content: In order to obtain reliable estimations of the impact sound insulation between rooms, it is necessary to know the acoustic performance of each element composing the floors. The contribution of the flanking transmissions, the attenuation of floating floors and the weighted normalized impact sound pressure level of the basic structure need to be determined in order to apply the simplified calculation method according to the EN ISO 12354-2 standard. With the aim of verifying the range of validity of the calculation method proposed by the EN ISO 12354-2 standard for typical basic beam floor structures, a research campaign based on on-site measurements was conducted. This paper provides an analysis in terms of spectrum trend, predicted average weighted normalized impact sound pressure level and reduction of impact sound pressure level obtainable with a generic floating floor typology. The study can represent a starting point for a correct estimation of the impact sound insulation in new buildings and renovation plans. --- paper_title: Impulse response method for defect detection in polymers: Description of the method and preliminary results paper_content: The major problem encountered in the application of polymer industrial products is the difficulty of effectively modelling and predicting material performance and service life according to applied loads and operating environmental conditions. Furthermore, the presence of defects such as voids or inclusions created during manufacturing may affect the final performance. The aim of this study is to present and investigate the development of an innovative acoustic non-destructive technique (patent pending), able to detect defects in composite laminates. The analysis was carried out in two steps: the first aims to verify if distinct phases can be recognized within a material, while the second has the purpose of testing the proposed method on defective materials prepared ad hoc. --- paper_title: Assessment criterion for indoor noise disturbance in the presence of low frequency sources paper_content: Several studies have presented the effects of environmental noise in and around buildings and communities in which people live and work. In particular, the noise introduced into a building is mostly evaluated using the A weighted sound pressure level (LAeq) as the only parameter to determine the perceived disturbance.
Nevertheless, if noise is produced by activities or sources characterised by a low frequency contribution, the measurement of LAeq underestimates the real disturbance, in particular during sleeping time. The international literature suggests methods to evaluate the low-frequency noise contribution to annoyance separately from the A weighted sound pressure level; almost all of the proposed methods are based on exceeding a threshold limit. This paper tests international criteria, by applying them in real-life indoor noise situations, and then analysing, comparing and contrasting results. Based on the result of the procedure above, a new criterion consisting of a single threshold is proposed, which simplifies the procedures in case of low-frequency components, but could be used for any situation. ---
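To make the threshold-based approach described in the assessment-criterion entry above concrete, the sketch below computes the A-weighted level of the 10-160 Hz one-third-octave bands and compares it with a single limit value. It is only an illustration of the general idea, not the procedure of any specific guideline; the 20 dB default limit and the measured band levels are placeholders, while the A-weighting corrections are the standard IEC 61672 values.

```python
import math

# A-weighting corrections (dB) for the 10-160 Hz one-third-octave bands (IEC 61672)
A_WEIGHT = {10: -70.4, 12.5: -63.4, 16: -56.7, 20: -50.5, 25: -44.7, 31.5: -39.4,
            40: -34.6, 50: -30.2, 63: -26.2, 80: -22.5, 100: -19.1, 125: -16.1, 160: -13.4}

def lpa_lf(band_levels_db):
    """A-weighted level of the 10-160 Hz range from unweighted band levels {freq: dB}."""
    energy = sum(10 ** ((level + A_WEIGHT[freq]) / 10.0)
                 for freq, level in band_levels_db.items())
    return 10.0 * math.log10(energy)

def exceeds_limit(band_levels_db, limit_db=20.0):
    """Single-threshold check: True if the low-frequency A-weighted level exceeds the limit.

    The 20 dB default is only a placeholder for whatever limit a guideline prescribes.
    """
    return lpa_lf(band_levels_db) > limit_db

measured = {31.5: 55.0, 40: 52.0, 50: 48.0, 63: 45.0, 80: 40.0, 100: 38.0, 125: 35.0, 160: 33.0}
print(round(lpa_lf(measured), 1), exceeds_limit(measured))
```

Real criteria differ in the frequency range considered, in whether individual band levels or an overall level are compared with a threshold curve, and in the limit values themselves.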
Title: Low Frequency Noise and Disturbance Assessment Methods: A Brief Literature Overview and a New Proposal
Section 1: INTRODUCTION
Description 1: Introduce the topic of low frequency noise (LFN), its impacts, and the limitations of existing assessment methods.
Section 2: FREQUENCY RANGE AND THRESHOLDS
Description 2: Discuss various frequency ranges used in LFN assessment and the importance of thresholds in evaluation criteria.
Section 3: LOW FREQUENCY VS. INFRASOUND
Description 3: Clarify the distinction between low frequency noise and infrasound, and highlight different viewpoints on their classification.
Section 4: HEARING THRESHOLDS
Description 4: Explain how hearing sensitivity varies with age and the implications for LFN perception.
Section 5: HEARING VS. ANNOYANCE
Description 5: Examine the relationship between the perception of LFN and the annoyance it causes, including psychological factors.
Section 6: THE USE OF dB(A)
Description 6: Critique the use of the A-weighted sound pressure level as a measure of LFN disturbance and consider alternative metrics.
Section 7: DISCUSSION AND NEW PROPOSAL
Description 7: Present a discussion on the need for improved LFN assessment methods and introduce a new objective proposal based on robust measurement criteria.
Section 8: FINAL REMARKS
Description 8: Conclude with the significance of addressing LFN in noise assessments and the potential impact of the proposed method on future standards and legislation.
A Survey on the Use of Graphical Passwords in Security
16
--- paper_title: Recognition memory for words, sentences, and pictures paper_content: The Ss (subjects) looked through a series of about 600 stimuli selected at random from an initially larger population. They were then tested for their ability to recognize these “old” stimuli in pairs in which the alternative was always a “new” stimulus selected at random from the stimuli remaining in the original population. Depending upon whether this original population consisted solely of words, sentences, or pictures, median Ss were able correctly to recognize the “old” stimulus in 90, 88, or 98% of the test pairs, respectively. Estimated lower bounds on the informational capacity of human memory considerably exceed previously published estimates. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: A Novel Cued-recall Graphical Password Scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own disadvantages. For example, cued-recall graphical password schemes are vulnerable to shoulder-surfing and cannot prevent intersection analysis attacks. A novel cued-recall graphical password scheme CBFG (Click Buttons according to Figures in Grids) is proposed in this paper. Inheriting the way passwords are set in traditional cued-recall schemes, this scheme also incorporates the idea of image identification. CBFG encourages users to set more complex passwords. Simultaneously, it is resistant to shoulder-surfing and intersection analysis attacks. Experiments illustrate that CBFG has better performance in usability, especially in security. --- paper_title: UNIX Password Security - Ten Years Later paper_content: Passwords in the UNIX operating system are encrypted with the crypt algorithm and kept in the publicly-readable file /etc/passwd. This paper examines the vulnerability of UNIX to attacks on its password system. Over the past 10 years, improvements in hardware and software have increased the crypts/second/dollar ratio by five orders of magnitude. We reexamine the UNIX password system in light of these advances and point out possible solutions to the problem of easily found passwords. The paper discusses how the authors built some high-speed tools for password cracking and what elements were necessary for their success. These elements are examined to determine if any of them can be removed from the hands of a possible system infiltrator, and thus increase the security of the system. We conclude that the single most important step that can be taken to improve password security is to increase password entropy.
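The entropy argument that closes the UNIX password study above is easy to make quantitative. The sketch below is an illustrative back-of-the-envelope calculation rather than code from the cited paper; the 10^10 guesses-per-second rate is an arbitrary assumption chosen only to show the scaling.

```python
import math

def password_entropy_bits(alphabet_size, length):
    """Entropy (bits) of a password drawn uniformly at random."""
    return length * math.log2(alphabet_size)

def years_to_exhaust(entropy_bits, guesses_per_second):
    """Time to enumerate the full password space at a fixed guessing rate."""
    return 2 ** entropy_bits / guesses_per_second / (3600 * 24 * 365)

# Lowercase-only vs full printable ASCII, at an assumed 10^10 guesses per second
for alphabet, length in [(26, 8), (95, 8), (95, 12)]:
    bits = password_entropy_bits(alphabet, length)
    print(f"{alphabet}^{length}: {bits:.1f} bits, "
          f"{years_to_exhaust(bits, 1e10):.2e} years to exhaust")
```

The same arithmetic is what underlies the bit-size comparisons quoted for graphical password spaces later in this reference list.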
--- paper_title: Making Passwords Secure and Usable paper_content: To date, system research has focused on designing security mechanisms to protect systems access although their usability has rarely been investigated. This paper reports a study in which users’ perceptions of password mechanisms were investigated through questionnaires and interviews. Analysis of the questionnaires shows that many users report problems, linked to the number of passwords and frequency of password use. In-depth analysis of the interview data revealed that the degree to which users conform to security mechanisms depends on their perception of security levels, information sensitivity and compatibility with work practices. Security mechanisms incompatible with these perceptions may be circumvented by users and thereby undermine system security overall. --- paper_title: A New Graphical Password Scheme Resistant to Shoulder-Surfing paper_content: Shoulder-surfing is a known risk where an attacker can capture a password by direct observation or by recording the authentication session. Due to the visual interface, this problem has become exacerbated in graphical passwords. There have been some graphical schemes resistant or immune to shoulder-surfing, but they have significant usability drawbacks, usually in the time and effort to log in. In this paper, we propose and evaluate a new shoulder-surfing resistant scheme which has a desirable usability for PDAs. Our inspiration comes from the drawing input method in DAS and the association mnemonics in Story for sequence retrieval. The new scheme requires users to draw a curve across their password images orderly rather than click directly on them. The drawing input trick along with the complementary measures, such as erasing the drawing trace, displaying degraded images, and starting and ending with randomly designated images provide a good resistance to shoulder-surfing. A preliminary user study showed that users were able to enter their passwords accurately and to remember them over time. --- paper_title: Password security: a case history paper_content: This paper describes the history of the design of the password security scheme on a remotely accessed time-sharing system. The present design was the result of countering observed attempts to penetrate the system. The result is a compromise between extreme security and ease of use. --- paper_title: Graphical passwords: a survey paper_content: The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: "Are graphical passwords as secure as text-based passwords?"; "What are the major design and implementation issues for graphical passwords?" 
This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods. --- paper_title: Purely Automated Attacks on PassPoints-Style Graphical Passwords paper_content: We introduce and evaluate various methods for purely automated attacks against PassPoints-style graphical passwords. For generating these attacks, we introduce a graph-based algorithm to efficiently create dictionaries based on heuristics such as click-order patterns (e.g., five points all along a line). Some of our methods combine click-order heuristics with focus-of-attention scan-paths generated from a computational model of visual attention, yielding significantly better automated attacks than previous work. One resulting automated attack finds 7%-16% of passwords for two representative images using dictionaries of approximately 2^26 entries (where the full password space is 2^43). Relaxing click-order patterns substantially increased the attack efficacy albeit with larger dictionaries of approximately 2^35 entries, allowing attacks that guessed 48%-54% of passwords (compared to previous results of 1% and 9% on the same dataset for two images with 2^35 guesses). These latter attacks are independent of focus-of-attention models, and are based on image-independent guessing patterns. Our results show that automated attacks, which are easier to arrange than human-seeded attacks and are more scalable to systems that use multiple images, require serious consideration when deploying basic PassPoints-style graphical passwords. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: On predictive models and user-drawn graphical passwords paper_content: In commonplace text-based password schemes, users typically choose passwords that are easy to recall, exhibit patterns, and are thus vulnerable to brute-force dictionary attacks. This leads us to ask whether other types of passwords (e.g., graphical) are also vulnerable to dictionary attack because of users tending to choose memorable passwords. We suggest a method to predict and model a number of such classes for systems where passwords are created solely from a user's memory. We hypothesize that these classes define weak password subspaces suitable for an attack dictionary. For user-drawn graphical passwords, we apply this method with cognitive studies on visual recall. These cognitive studies motivate us to define a set of password complexity factors (e.g., reflective symmetry and stroke count), which define a set of classes.
To better understand the size of these classes and, thus, how weak the password subspaces they define might be, we use the “Draw-A-Secret” (DAS) graphical password scheme of Jermyn et al. [1999] as an example. We analyze the size of these classes for DAS under convenient parameter choices and show that they can be combined to define apparently popular subspaces that have bit sizes ranging from 31 to 41—a surprisingly small proportion of the full password space (58 bits). Our results quantitatively support suggestions that user-drawn graphical password systems employ measures, such as graphical password rules or guidelines and proactive password checking. --- paper_title: A Novel Cued-recall Graphical Password Scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own disadvantages. For example, cued-recall graphical password schemes are vulnerable to shoulder-surfing and cannot prevent intersection analysis attacks. A novel cued-recall graphical password scheme CBFG (Click Buttons according to Figures in Grids) is proposed in this paper. Inheriting the way passwords are set in traditional cued-recall schemes, this scheme also incorporates the idea of image identification. CBFG encourages users to set more complex passwords. Simultaneously, it is resistant to shoulder-surfing and intersection analysis attacks. Experiments illustrate that CBFG has better performance in usability, especially in security. --- paper_title: Design and evaluation of a shoulder-surfing resistant graphical password scheme paper_content: When users input their passwords in a public place, they may be at risk of attackers stealing their password. An attacker can capture a password by direct observation or by recording the individual's authentication session. This is referred to as shoulder-surfing and is a known risk, of special concern when authenticating in public places. Until recently, the only defense against shoulder-surfing has been vigilance on the part of the user. This paper reports on the design and evaluation of a game-like graphical method of authentication that is resistant to shoulder-surfing. The Convex Hull Click (CHC) scheme allows a user to prove knowledge of the graphical password safely in an insecure location because users never have to click directly on their password images. Usability testing of the CHC scheme showed that novice users were able to enter their graphical password accurately and to remember it over time. However, the protection against shoulder-surfing comes at the price of longer time to carry out the authentication. --- paper_title: Graphical passwords: a survey paper_content: The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area.
We also try to answer two important questions: "Are graphical passwords as secure as text-based passwords?"; "What are the major design and implementation issues for graphical passwords?" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods. --- paper_title: Analyzing User Choice in Graphical Passwords paper_content: In ubiquitous textual password schemes, users choose passwords that contain predictable characteristics that are roughly equated with what users find easy to recall. This motivates us to examine user choice in graphical passwords to determine whether predictable characteristics exist that may reduce the entropy of the password space. We present an informal user study of the scheme proposed by Jermyn et al. (1999), and the results, both in context of the study’s goals and a separate analysis of the results performed at a later date. Our results support that user drawings contain the predictable characteristics relating to symmetry, number of composite strokes, and centering within the grid. Our results also highlight a usability challenge with the DAS scheme. --- paper_title: PassPoints: Design and longitudinal evaluation of a graphical password system paper_content: Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. --- paper_title: Passshapes - utilizing stroke based authentication to increase password memorability paper_content: Authentication today mostly relies on passwords or personal identification numbers (PINs). Therefore the average user has to remember an increasing amount of PINs and passwords. Unfortunately, humans have limited capabilities for remembering abstract alphanumeric sequences. Thus, many people either forget them or use very simple ones, which implies several security risks. In this work, a novel authentication method called PassShapes is presented. In this system users authenticate themselves to a computing system by drawing simple geometric shapes constructed of an arbitrary combination of eight different strokes. We argue that using such shapes will allow more complex and thus more secure authentication tokens with a lower cognitive load and higher memorability. To prove these assumptions, two user studies have been conducted.
The memorability evaluation showed that the PassShapes concept is able to increase the memorability when users can practice the PassShapes several times. This effect is even increasing over time. Additionally, a prototype was implemented to conduct a usability study. The results of both studies indicate that the PassShapes approach is able to provide a usable and memorable authentication method. --- paper_title: Graphical Dictionaries and the Memorable Space of Graphical Passwords paper_content: In commonplace textual password schemes, users choose passwords that are easy to recall. Since memorable passwords typically exhibit patterns, they are exploitable by brute-force password crackers using attack dictionaries. This leads us to ask what classes of graphical passwords users find memorable. We postulate one such class supported by a collection of cognitive studies on visual recall, which can be characterized as mirror symmetric (reflective) passwords. We assume that an attacker would put this class in an attack dictionary for graphical passwords and propose how an attacker might order such a dictionary. We extend the existing analysis of graphical passwords by analyzing the size of the mirror symmetric password space relative to the full password space of the graphical password scheme of Jermyn et al. (1999), and show it to be exponentially smaller (assuming appropriate axes of reflection). This reduction in size can be compensated for by longer passwords: the size of the space of mirror symmetric passwords of length about L + 5 exceeds that of the full password space for corresponding length L ≤ 14 on a 5 × 5 grid. This work could be used to help in formulating password rules for graphical password users and in creating proactive graphical password checkers. --- paper_title: On predictive models and user-drawn graphical passwords paper_content: In commonplace text-based password schemes, users typically choose passwords that are easy to recall, exhibit patterns, and are thus vulnerable to brute-force dictionary attacks. This leads us to ask whether other types of passwords (e.g., graphical) are also vulnerable to dictionary attack because of users tending to choose memorable passwords. We suggest a method to predict and model a number of such classes for systems where passwords are created solely from a user's memory. We hypothesize that these classes define weak password subspaces suitable for an attack dictionary. For user-drawn graphical passwords, we apply this method with cognitive studies on visual recall. These cognitive studies motivate us to define a set of password complexity factors (e.g., reflective symmetry and stroke count), which define a set of classes. To better understand the size of these classes and, thus, how weak the password subspaces they define might be, we use the “Draw-A-Secret” (DAS) graphical password scheme of Jermyn et al. [1999] as an example. We analyze the size of these classes for DAS under convenient parameter choices and show that they can be combined to define apparently popular subspaces that have bit sizes ranging from 31 to 41—a surprisingly small proportion of the full password space (58 bits). Our results quantitatively support suggestions that user-drawn graphical password systems employ measures, such as graphical password rules or guidelines and proactive password checking. 
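Several entries in this reference list analyze the Draw-A-Secret (DAS) representation, in which a drawing is reduced to the ordered sequence of grid cells it crosses, with pen-up markers separating strokes; this is what the stroke counts and password lengths in the preceding abstracts refer to. The sketch below shows one simplified way to build such an encoding from sampled stroke points. The function name and pen-up marker are illustrative choices, and a real DAS implementation would also interpolate the cells crossed between consecutive samples.

```python
def das_encode(strokes, grid=5):
    """Encode free-form strokes as a DAS-style cell sequence on a grid x grid canvas.

    `strokes` is a list of strokes; each stroke is a list of (x, y) points in [0, 1).
    Consecutive samples falling in the same cell are collapsed; (-1, -1) marks a pen-up.
    This is a simplified illustration of the encoding idea, not the exact DAS rules.
    """
    sequence = []
    for stroke in strokes:
        previous = None
        for x, y in stroke:
            cell = (int(y * grid), int(x * grid))  # (row, col)
            if cell != previous:
                sequence.append(cell)
                previous = cell
        sequence.append((-1, -1))  # pen-up between strokes
    return sequence

# Two short strokes: a sampled horizontal line across the top row, then a single dot
password = das_encode([[(0.1, 0.1), (0.5, 0.1), (0.9, 0.1)],
                       [(0.5, 0.5)]])
print(password)
print("length (cells):", sum(1 for c in password if c != (-1, -1)),
      "strokes:", password.count((-1, -1)))
```

Password length (cells crossed) and stroke count are exactly the parameters the studies above vary when estimating how much of the theoretical DAS space users actually occupy.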
--- paper_title: Towards secure design choices for implementing graphical passwords paper_content: We study the impact of selected parameters on the size of the password space for "Draw-A-Secret" (DAS) graphical passwords. We examine the role of and relationships between the number of composite strokes, grid dimensions, and password length in the DAS password space. We show that a very significant proportion of the DAS password space depends on the assumption that users will choose long passwords with many composite strokes. If users choose passwords having 4 or fewer strokes, with passwords of length 12 or less on a 5 × 5 grid, instead of up to the maximum 12 possible strokes, the size of the DAS password space is reduced from 58 to 40 bits. Additionally, we found a similar reduction when users choose no strokes of length 1. To strengthen security, we propose a technique and describe a representative system that may gain up to 16 more bits of security with an expected negligible increase in input time. Our results can be directly applied to determine secure design choices, graphical password parameter guidelines, and in deciding which parameters deserve focus in graphical password user studies. --- paper_title: An enhanced drawing reproduction graphical password strategy paper_content: Passwords are used in the vast majority of computer and communication systems for authentication. The greater security and memorability of graphical passwords make them a possible alternative to traditional textual passwords. In this paper we propose a new graphical password scheme called YAGP, which is an extension of the Draw-A-Secret (DAS) scheme. The main difference between YAGP and DAS is soft matching. The concepts of the stroke-box, image-box, trend quadrant, and similarity are used to describe the image characteristics for soft matching. The reduction in strict user input rules in soft matching improves the usability and therefore creates a great advantage. The denser grid granularity enables users to design a longer password, enlarging the practical password space and enhancing security. Meanwhile, YAGP adopts a triple-register process to create multi-templates, increasing the accuracy and memorability of characteristics extraction. Experiments illustrate the effectiveness of YAGP. --- paper_title: Is a picture really worth a thousand words? Exploring the feasibility of graphical authentication systems paper_content: The weakness of knowledge-based authentication systems, such as passwords and Personal Identification Numbers (PINs), is well known, and reflects an uneasy compromise between security and human memory constraints. Research has been undertaken for some years now into the feasibility of graphical authentication mechanisms in the hope that these will provide a more secure and memorable alternative. The graphical approach substitutes the exact recall of alphanumeric codes with the recognition of previously learnt pictures, a skill at which humans are remarkably proficient. So far, little attention has been devoted to usability, and initial research has failed to conclusively establish significant memory improvement. This paper reports two user studies comparing several implementations of the graphical approach with PINs. Results demonstrate that pictures can be a solution to some problems relating to traditional knowledge-based authentication but that they are not a simple panacea, since a poor design can eliminate the picture superiority effect in memory.
The paper concludes by discussing the potential of the graphical approach and providing guidelines for developers contemplating using these mechanisms. --- paper_title: Smudge Attacks on Smartphone Touch Screens paper_content: Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. ::: ::: In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern. --- paper_title: The Design and Analysis of Graphical Passwords paper_content: In this paper we propose and evaluate new graphical password schemes that exploit features of graphical input displays to achieve better security than text-based passwords. Graphical input devices enable the user to decouple the position of inputs from the temporal order in which those inputs occur, and we show that this decoupling can be used to generate password schemes with substantially larger (memorable) password spaces. In order to evaluate the security of one of our schemes, we devise a novel way to capture a subset of the "memorable" passwords that, we believe, is itself a contribution. In this work we are primarily motivated by devices such as personal digital assistants (PDAs) that offer graphical input capabilities via a stylus, and we describe our prototype implementation of one of our password schemes on such a PDA, namely the Palm PilotTM. --- paper_title: Graphical passwords & qualitative spatial relations paper_content: A potential drawback of graphical password schemes is that they are more vulnerable to shoulder surfing than conventional alphanumeric text passwords. We present a variation of the Draw-a-Secret scheme originally proposed by Jermyn et al [1] that is more resistant to shoulder surfing through the use of a qualitative mapping between user strokes and the password, and the use of dynamic grids to both obfuscate attributes of the user secret and encourage them to use different surface realizations of the secret. The use of qualitative spatial relations relaxes the tight constraints on the reconstruction of a secret; allowing a range of deviations from the original. We describe QDAS (Qualitative Draw-A-Secret), an initial implementation of this graphical password scheme, and the results of an empirical study in which we examined the memorability of secrets, and their susceptibility to shoulder-surfing attacks, for both Draw-A-Secret and QDAS. --- paper_title: The design and implementation of background Pass-Go scheme towards security threats paper_content: Currently, access to computer systems is often based on the use of alpha-numeric. 
Textual or alpha-numeric passwords have been the basis of authentication systems for centuries. Similarly, they have also been a major attraction for crackers and attackers. However, users tend to face difficulty in remembering a password that is considered secure, because this type of password usually consists of a long string of characters that appear random [14]. Hence, most users tend to create simple, short and insecure passwords. As a consequence, most of the time, the usability level of passwords has not reached an optimum for a secure password [14]. In order to solve this problem, a new password scheme was invented, known as the Graphical Password System (GPS). A graphical password is an alternative means of login authentication intended to be used in place of a conventional password; it utilizes images instead of text. In this paper, we discuss the design and intention of our proposed scheme, called Background Pass-Go (BPG). BPG is an improved version of Pass-Go, as it keeps most of the advantages of Pass-Go and achieves better usability. We analyzed the BPG scheme in terms of 1) how BPG is able to improve other GPS schemes, especially Pass-Go, and 2) how BPG acts as a solution to different types of threats to networked computer systems. We verified that BPG manages to overcome the shortcomings of other GPS schemes. Moreover, BPG also manages to address most of the security threats to networked computer systems. --- paper_title: Pass-Go: A Proposal to Improve the Usability of Graphical Passwords paper_content: Inspired by an old Chinese game, Go, we have designed a new graphical password scheme, Pass-Go, in which a user selects intersections on a grid as a way to input a password. While offering an extremely large full password space (256 bits for the most basic scheme), our scheme provides acceptable usability, as empirically demonstrated by, to the best of our knowledge, the largest user study (167 subjects involved) on graphical passwords, conducted in the fall semester of 2005 in two university classes. Our scheme supports most application environments and input devices, rather than being limited to small mobile devices (PDAs), and can be used to derive cryptographic keys. We study the memorable password space and show the potential power of this scheme by exploring further improvements and variation mechanisms. --- paper_title: Multi-grid background Pass-Go paper_content: Computer security depends largely on passwords to authenticate the human user for access to secure systems. Remembering the password they have chosen is a frequent problem for all users. As a result, they tend to choose short and insecure passwords as compared to secure passwords, which usually consist of a long mixture of random alphanumeric and non-alphanumeric characters. Thus, the tendency of choosing insecure passwords has brought up many security problems. Graphical passwords are an alternative to alphanumeric passwords in which users only have to click on images in order to authenticate themselves rather than typing alphanumeric strings. The main objectives of this paper are to present a classification of graphical password systems (GPS) and identify its future research areas.
In this paper, we attempt to identify a number of threats to networked computer systems, focus on research into graphical password systems (GPS) and analyse some aspects of GPS: 1) how each GPS algorithm works, 2) the advantages and disadvantages of each GPS algorithm, and 3) how each GPS algorithm is able to address the threats. Besides, the paper also concentrates on the design and implications of a proposed prototype, namely Multi-Grid Background Pass-Go (MGBPG), which is intended to give it a winning edge over other graphical password systems. Preliminary results and analysis of the proposed prototype are then presented by comparing its role in addressing the drawbacks of existing GPS and several security attacks. Finally, we highlight a few aspects that need to be improved in the future to overcome the deficiencies of previous GPS methods. --- paper_title: YAGP: Yet Another Graphical Password Strategy paper_content: Alphanumeric passwords are widely used in computer and network authentication to protect users' privacy. However, it is well known that long, text-based passwords are hard for people to remember, while shorter ones are susceptible to attack. Graphical passwords are a promising solution to this problem. Draw-A-Secret (DAS) is a typical implementation based on the user drawing on a grid canvas. Currently, too many constraints reduce the user experience and prevent its popularity. A novel graphical password strategy, Yet Another Graphical Password (YAGP), inspired by DAS is proposed in this paper. The proposal has the advantages of free drawing positions, strong shoulder surfing resistance and large password space. Experiments illustrate the effectiveness of YAGP. --- paper_title: Analyzing User Choice in Graphical Passwords paper_content: In ubiquitous textual password schemes, users choose passwords that contain predictable characteristics that are roughly equated with what users find easy to recall. This motivates us to examine user choice in graphical passwords to determine whether predictable characteristics exist that may reduce the entropy of the password space. We present an informal user study of the scheme proposed by Jermyn et al. (1999), and the results, both in context of the study’s goals and a separate analysis of the results performed at a later date. Our results support that user drawings contain the predictable characteristics relating to symmetry, number of composite strokes, and centering within the grid. Our results also highlight a usability challenge with the DAS scheme. --- paper_title: Do background images improve "draw a secret" graphical passwords? paper_content: Draw a secret (DAS) is a representative graphical password scheme. Rigorous theoretical analysis suggests that DAS supports an overall password space larger than that of the ubiquitous textual password scheme. However, recent research suggests that DAS users tend to choose weak passwords, and their choices would render this theoretically sound scheme less secure in real life. In this paper we investigate the novel idea of introducing background images to the DAS scheme, where users were initially supposed to draw passwords on a blank canvas overlaid with a grid. Encouraging results from our two user studies have shown that people aided with background images tended to set significantly more complicated passwords than their counterparts using the original scheme.
The background images also reduced other predictable characteristics in DAS passwords such as symmetry and centering within the drawing grid, further improving the strength of the passwords. We estimate that the average strength of successfully recalled passwords in the enhanced scheme was increased over those created using the original scheme by more than 10 bits. Moreover, a positive effect was observed with respect to the memorability of the more complex passwords encouraged by the background images. --- paper_title: Your Memory: How It Works and How to Improve It paper_content: The word mnemonic (pronounced “ne MON ik”) is briefly defined as “aiding the memory.” It is derived from Mnemosyne, the name of the ancient Greek goddess of memory. “Mnemonics” refers in general to methods for improving memory; a mnemonic technique is any technique that aids the memory. Most researchers, however, define mnemonics more narrowly as being what most people consider to be rather unusual, artificial memory aids. --- paper_title: Persuasive Cued Click-Points: Design, Implementation, and Evaluation of a Knowledge-Based Authentication Mechanism paper_content: This paper presents an integrated evaluation of the Persuasive Cued Click-Points graphical password scheme, including usability and security evaluations, and implementation considerations. An important usability goal for knowledge-based authentication systems is to support users in selecting passwords of higher security, in the sense of being from an expanded effective security space. We use persuasion to influence user choice in click-based graphical passwords, encouraging users to select more random, and hence more difficult to guess, click-points. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: Graphical Password Authentication using Cued Click Points paper_content: With the rapid growth in the use of computers, the security of passwords is a must where privacy is important. For password protection various techniques are available. The main issues of knowledge based authentication, usually text based passwords, are well known. Users tend to choose passwords they can easily remember, and hence they can also be guessed by a hacker. To avoid the limitations of textual passwords, this paper focuses on the implementation of a Graphical Password Authentication System using cued click points based on the integrated evaluation of the cued click points. Cued Click Points (CCP) is a click-based graphical password scheme. Users click on one point per image for a sequence of images. The next image is based on the previous click-point.
Users prefer CCP over PassPoints because selecting and remembering only one point per image is easier, and each image triggers the user's memory of where the corresponding point was located. CCP also provides greater security than PassPoints because the number of images increases the workload for attackers. --- paper_title: A Novel Cued-recall Graphical Password Scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own disadvantages. For example, cued-recall graphical password schemes are vulnerable to shoulder-surfing and cannot prevent intersection analysis attacks. A novel cued-recall graphical password scheme CBFG (Click Buttons according to Figures in Grids) is proposed in this paper. Inheriting the way passwords are set in traditional cued-recall schemes, this scheme also incorporates the idea of image identification. CBFG encourages users to set more complex passwords. Simultaneously, it is resistant to shoulder-surfing and intersection analysis attacks. Experiments illustrate that CBFG has better performance in usability, especially in security. --- paper_title: Influencing users towards better passwords: persuasive cued click-points paper_content: Usable security has unique usability challenges because the need for security often means that standard human-computer-interaction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in selecting better passwords, thus increasing security by expanding the effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots -- portions of the image where users are more likely to select click-points, allowing attackers to mount more successful dictionary attacks. We use persuasion to influence user choice in click-based graphical passwords, encouraging users to select more random, and hence more secure, click-points. Our approach is to introduce persuasion to the Cued Click-Points graphical password scheme (Chiasson, van Oorschot, Biddle, 2007). Our resulting scheme significantly reduces hotspots while still maintaining its usability. --- paper_title: Graphical passwords: a survey paper_content: The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area.
This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods --- paper_title: PassPoints: Design and longitudinal evaluation of a graphical password system paper_content: Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. --- paper_title: Analysis and Evaluation of the ColorLogin Graphical Password Scheme paper_content: It is believed that graphical passwords are more memorable than traditional textual passwords, but usually seen as complex and time-consuming for users. Furthermore, most of the existing graphical password schemes are vulnerable to spyware and shoulder surfing. ColorLogin uses color, a method not previously considered, to decrease login time. Multiple colors are used to confuse the peepers, while not burdening the legitimate users. Meanwhile, the scheme is resistant to shoulder surfing and intersection attack to a certain extent. This paper analyzes and evaluates the ColorLogin scheme using some experiments. --- paper_title: Towards Usable Solutions to Graphical Password Hotspot Problem paper_content: Click based graphical passwords that use background images suffer from hot-spot problem. Previous graphical password schemes based on recognition of images do not have a sufficiently large password space suited for most Internet applications. In this paper, we propose two novel graphical password methods based on recognition of icons to solve the hotspot problem without decreasing the password space. The experiment we have conducted that compares the security and usability of proposed methods with earlier work (i.e. Passpoints) shows that hotspot problem can be eliminated if a small increase in password entrance and confirmation times is tolerable. --- paper_title: Is a picture really worth a thousand words? Exploring the feasibility of graphical authentication systems paper_content: The weakness of knowledge-based authentication systems, such as passwords and Personal Identification Numbers (PINs), is well known, and reflects an uneasy compromise between security and human memory constraints. Research has been undertaken for some years now into the feasibility of graphical authentication mechanisms in the hope that these will provide a more secure and memorable alternative. 
The graphical approach substitutes the exact recall of alphanumeric codes with the recognition of previously learnt pictures, a skill at which humans are remarkably proficient. So far, little attention has been devoted to usability, and initial research has failed to conclusively establish significant memory improvement. This paper reports two user studies comparing several implementations of the graphical approach with PINs. Results demonstrate that pictures can be a solution to some problems relating to traditional knowledge-based authentication but that they are not a simple panacea, since a poor design can eliminate the picture superiority effect in memory. The paper concludes by discussing the potential of the graphical approach and providing guidelines for developers contemplating using these mechanisms. --- paper_title: Design and evaluation of a shoulder-surfing resistant graphical password scheme paper_content: When users input their passwords in a public place, they may be at risk of attackers stealing their password. An attacker can capture a password by direct observation or by recording the individual's authentication session. This is referred to as shoulder-surfing and is a known risk, of special concern when authenticating in public places. Until recently, the only defense against shoulder-surfing has been vigilance on the part of the user. This paper reports on the design and evaluation of a game-like graphical method of authentication that is resistant to shoulder-surfing. The Convex Hull Click (CHC) scheme allows a user to prove knowledge of the graphical password safely in an insecure location because users never have to click directly on their password images. Usability testing of the CHC scheme showed that novice users were able to enter their graphical password accurately and to remember it over time. However, the protection against shoulder-surfing comes at the price of longer time to carry out the authentication. --- paper_title: Use Your Illusion: secure authentication usable anywhere paper_content: In this paper, we propose and evaluate Use Your Illusion, a novel mechanism for user authentication that is secure and usable regardless of the size of the device on which it is used. Our system relies on the human ability to recognize a degraded version of a previously seen image. We illustrate how distorted images can be used to maintain the usability of graphical password schemes while making them more resilient to social engineering or observation attacks. Because it is difficult to mentally "revert" a degraded image, without knowledge of the original image, our scheme provides a strong line of defense against impostor access, while preserving the desirable memorability properties of graphical password schemes. Using low-fidelity tests to aid in the design, we implement prototypes of Use Your Illusion as i) an Ajax-based web service and ii) on Nokia N70 cellular phones. We conduct a between-subjects usability study of the cellular phone prototype with a total of 99 participants in two experiments. We demonstrate that, regardless of their age or gender, users are very skilled at recognizing degraded versions of self-chosen images, even on small displays and after time periods of one month. Our results indicate that graphical passwords with distorted images can achieve equivalent error rates to those using traditional images, but only when the original image is known. --- paper_title: Are Passfaces 1 More Usable Than Passwords? 
A Field Trial Investigation paper_content: The proliferation of technology requiring user authentication has increased the number of passwords which users have to remember, creating a significant usability problem. This paper reports a usability comparison between a new mechanism for user authentication — Passfaces — and passwords, with 34 student participants in a 3-month field trial. Fewer login errors were made with Passfaces, even when periods between logins were long. On the computer facilities regularly chosen by participants to log in, Passfaces took a long time to execute. Participants consequently started their work later when using Passfaces than when using passwords, and logged into the system less often. The results emphasise the importance of evaluating the usability of security mechanisms in field trials. --- paper_title: On User Choice in Graphical Password Schemes paper_content: Graphical password schemes have been proposed as an alternative to text passwords in applications that support graphics and mouse or stylus entry. In this paper we detail what is, to our knowledge, the largest published empirical evaluation of the effects of user choice on the security of graphical password schemes. We show that permitting user selection of passwords in two graphical password schemes, one based directly on an existing commercial product, can yield passwords with entropy far below the theoretical optimum and, in some cases, that are highly correlated with the race or gender of the user. For one scheme, this effect is so dramatic so as to render the scheme insecure. A conclusion of our work is that graphical password schemes of the type we study may generally require a different posture toward password selection than text passwords, where selection by the user remains the norm today. --- paper_title: Cognitive authentication schemes safe against spyware paper_content: Can we secure user authentication against eavesdropping adversaries, relying on human cognitive functions alone, unassisted by any external computational device? To accomplish this goal, we propose challenge response protocols that rely on a shared secret set of pictures. Under the considered brute-force attack the protocols are safe against eavesdropping, in that a modestly powered adversary who fully records a series of successful interactions cannot compute the user's secret. Moreover, the protocols can be tuned to any desired level of security against random guessing, where security can be traded-off with authentication time. The proposed protocols have two drawbacks: First, training is required to familiarize the user with the secret set of pictures. Second, depending on the level of security required, entry time can be significantly longer than with alternative methods. We describe user studies showing that people can use these protocols successfully, and quantify the time it takes for training and for successful authentication. We show evidence that the secret can be maintained for a long time (up to a year) with relatively low loss. --- paper_title: Awase-E: Image-Based Authentication for Mobile Phones Using User’s Favorite Images paper_content: There is a trade-off between security and usability in user authentication for mobile phones. Since such devices have a poor input interfaces, 4-digit number passwords are widely used at present. Therefore, a more secure and user friendly authentication is needed. This paper proposes a novel authentication method called “Awase-E”. The system uses image passwords. 
It, moreover, integrates image registration and notification interfaces. Image registration enables users to use their favorite image instead of a text password. Notification gives users a trigger to take action against a threat when it happens. Awase-E is implemented so that it has a higher usability even when it is used through a mobile phone. --- paper_title: Action-based graphical password: “Click-a-Secret” paper_content: This paper relates to a novel graphical system named “Click-a-Secret” that allows entering a secret through interactions with an image. Seamless modifications of the image will create a new image and derive corresponding secret. In this article, we will focus on access control related applications. However, other usages are possible in various cryptographic systems (e.g.: secret key to encrypt data, etc.). It is well known that humans use long-term memory for storing pictures therefore leading to better recall of images compared to text. This leads to improved memorization of graphical passwords. --- paper_title: S3PAS: A Scalable Shoulder-Surfing Resistant Textual-Graphical Password Authentication Scheme paper_content: The vulnerabilities of the textual password have been well known. Users tend to pick short passwords or passwords that are easy to remember, which makes the passwords vulnerable for attackers to break. Furthermore, textual password is vulnerable to shoulder-surfing, hidden-camera and spyware attacks. Graphical password schemes have been proposed as a possible alternative to text-based scheme. However, they are mostly vulnerable to shoulder-surfing. In this paper, we propose a Scalable Shoulder-Surfing Resistant Textual-Graphical Password Authentication Scheme (S3PAS). S3PAS seamlessly integrates both graphical and textual password schemes and provides nearly perfect resistant to shoulder-surfing, hidden-camera and spyware attacks. It can replace or coexist with conventional textual password systems without changing existing user password profiles. Moreover, it is immune to brute-force attacks through dynamic and volatile session passwords. S3PAS shows significant potential bridging the gap between conventional textual password and graphical password. Further enhancements of S3PAS scheme are proposed and briefly discussed. Theoretical analysis of the security level using S3PAS is also investigated. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: A Novel Cued-recall Graphical Password Scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own disadvantages. 
For example, cued-recall graphical password schemes are vulnerable to shoulder-surfing and cannot prevent intersection analysis attack. A novel cued-recall graphical password scheme CBFG (Click Buttons according to Figures in Grids) is proposed in this paper. Inheriting the way of setting password in traditional cued-recall scheme, this scheme is also added the ideology of image identification. CBFG helps users tend to set their passwords more complex. Simultaneously, it has the capability against shoulder surfing attack and intersection analysis attack. Experiments illustrate that CBFG has better performance in usability, especially in security. --- paper_title: A New Graphical Password Scheme Resistant to Shoulder-Surfing paper_content: Shoulder-surfing is a known risk where an attacker can capture a password by direct observation or by recording the authentication session. Due to the visual interface, this problem has become exacerbated in graphical passwords. There have been some graphical schemes resistant or immune to shoulder-surfing, but they have significant usability drawbacks, usually in the time and effort to log in. In this paper, we propose and evaluate a new shoulder-surfing resistant scheme which has a desirable usability for PDAs. Our inspiration comes from the drawing input method in DAS and the association mnemonics in Story for sequence retrieval. The new scheme requires users to draw a curve across their password images orderly rather than click directly on them. The drawing input trick along with the complementary measures, such as erasing the drawing trace, displaying degraded images, and starting and ending with randomly designated images provide a good resistance to shoulder-surfing. A preliminary user study showed that users were able to enter their passwords accurately and to remember them over time. --- paper_title: Evaluating the usability and security of a graphical one-time PIN system paper_content: Traditional Personal Identification Numbers (PINs) are widely used, but the attacks in which they are captured have been increasing. One-time PINs offer better security, but potentially create greater workload for users. In this paper, we present an independent evaluation of a commercial system that makes PINs more resistant to observation attacks by using graphical passwords on a grid to generate a one-time PIN. 83 participants were asked to register with the system and log in at varying intervals. The successful login rate was approximately 91% after 3--4 days, and 97% after 9--10 days. Twenty five participants were retested after two years, and 27% of those were able to recall their pattern. We recorded 17 instances of failed attempts, and found that even though participants recalled the general shape of the pass-pattern in 13 of these instances, they could not recall its detailed location or sequence of cells. We conclude that GrIDsure is usable if people have one pass-pattern, but the level of security will depend on the context of use (it will work best in scenarios where repeated observations of transactions are unlikely), and the instructions given to users (without guidance, they are likely to chose from a small subset of the possible patterns which are easily guessed). --- paper_title: A Hybrid Password Authentication Scheme Based on Shape and Text paper_content: Text-based password authentication scheme tends to be more vulnerable to attacks such as shoulder surfing or a hidden camera. 
To overcome the vulnerabilities of traditional methods, visual or graphical password schemes have been developed as possible alternative solutions to text-based password schemes. Since it also has some drawbacks to simply adopt graphical password authentication, schemes using graphic and text have been developed. In this paper, a hybrid password authentication scheme based on shapes and texts is proposed. Shapes of strokes are used in the grid as the original passwords and users can log in with textual passwords via traditional input device. The method provides strong resistance to shoulder surfing or a hidden camera. Moreover, the scheme has high scalability and flexibility to enhance the authentication process security. The analysis of the security level of this approach is also discussed. --- paper_title: Exploration of a hand-based graphical password scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own problems about memorability. Biometrics schemes are not widely adopted though they need no remembrance and provide the highest level of security, owing to their great cost both in device and time. In this paper, we explore the feasibility of introducing hand-based biometrics into the realm of graphical passwords with a simplified scheme named PassHands. PassHands has similar authentication form with Passfaces and has the advantages of unnecessary to remember the password and unpredictable of user choice. Some primary experiments are conducted and the results illustrate the performance of PassHands. --- paper_title: Graphical passwords: a survey paper_content: The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: "Are graphical passwords as secure as text-based passwords?"; "What are the major design and implementation issues for graphical passwords?" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods. --- paper_title: Securing passwords against dictionary attacks paper_content: The use of passwords is a major point of vulnerability in computer security, as passwords are often easy to guess by automated programs running dictionary attacks. Passwords remain the most widely used authentication method despite their well-known security weaknesses. User authentication is clearly a practical problem. From the perspective of a service provider this problem needs to be solved within real-world constraints such as the available hardware and software infrastructures.
From a user's perspective user-friendliness is a key requirement. In this paper we suggest a novel authentication scheme that preserves the advantages of conventional password authentication, while simultaneously raising the costs of online dictionary attacks by orders of magnitude. The proposed scheme is easy to implement and overcomes some of the difficulties of previously suggested methods of improving the security of user authentication schemes. Our key idea is to efficiently combine traditional password authentication with a challenge that is very easy to answer by human users, but is (almost) infeasible for automated programs attempting to run dictionary attacks. This is done without affecting the usability of the system. The proposed scheme also provides better protection against denial of service attacks against user accounts. --- paper_title: Purely Automated Attacks on PassPoints-Style Graphical Passwords paper_content: We introduce and evaluate various methods for purely automated attacks against PassPoints-style graphical passwords. For generating these attacks, we introduce a graph-based algorithm to efficiently create dictionaries based on heuristics such as click-order patterns (e.g., five points all along a line). Some of our methods combine click-order heuristics with focus-of-attention scan-paths generated from a computational model of visual attention, yielding significantly better automated attacks than previous work. One resulting automated attack finds 7%-16% of passwords for two representative images using dictionaries of approximately 2^26 entries (where the full password space is 2^43). Relaxing click-order patterns substantially increased the attack efficacy albeit with larger dictionaries of approximately 2^35 entries, allowing attacks that guessed 48%-54% of passwords (compared to previous results of 1% and 9% on the same dataset for two images with 2^35 guesses). These latter attacks are independent of focus-of-attention models, and are based on image-independent guessing patterns. Our results show that automated attacks, which are easier to arrange than human-seeded attacks and are more scalable to systems that use multiple images, require serious consideration when deploying basic PassPoints-style graphical passwords. --- paper_title: Graphical Dictionaries and the Memorable Space of Graphical Passwords paper_content: In commonplace textual password schemes, users choose passwords that are easy to recall. Since memorable passwords typically exhibit patterns, they are exploitable by brute-force password crackers using attack dictionaries. This leads us to ask what classes of graphical passwords users find memorable. We postulate one such class supported by a collection of cognitive studies on visual recall, which can be characterized as mirror symmetric (reflective) passwords. We assume that an attacker would put this class in an attack dictionary for graphical passwords and propose how an attacker might order such a dictionary. We extend the existing analysis of graphical passwords by analyzing the size of the mirror symmetric password space relative to the full password space of the graphical password scheme of Jermyn et al. (1999), and show it to be exponentially smaller (assuming appropriate axes of reflection). This reduction in size can be compensated for by longer passwords: the size of the space of mirror symmetric passwords of length about L + 5 exceeds that of the full password space for corresponding length L ≤ 14 on a 5 × 5 grid.
This work could be used to help in formulating password rules for graphical password users and in creating proactive graphical password checkers. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: Analyzing User Choice in Graphical Passwords paper_content: In ubiquitous textual password schemes, users choose passwords that contain predictable characteristics that are roughly equated with what users find easy to recall. This motivates us to examine user choice in graphical passwords to determine whether predictable characteristics exist that may reduce the entropy of the password space. We present an informal user study of the scheme proposed by Jermyn et al. (1999), and the results, both in context of the study’s goals and a separate analysis of the results performed at a later date. Our results support that user drawings contain the predictable characteristics relating to symmetry, number of composite strokes, and centering within the grid. Our results also highlight a usability challenge with the DAS scheme. --- paper_title: Modeling user choice in the PassPoints graphical password scheme paper_content: We develop a model to identify the most likely regions for users to click in order to create graphical passwords in the PassPoints system. A PassPoints password is a sequence of points, chosen by a user in an image that is displayed on the screen. Our model predicts probabilities of likely click points; this enables us to predict the entropy of a click point in a graphical password for a given image. The model allows us to evaluate automatically whether a given image is well suited for the PassPoints system, and to analyze possible dictionary attacks against the system. We compare the predictions provided by our model to results of experiments involving human users. At this stage, our model and the experiments are small and limited; but they show that user choice can be modeled and that expansions of the model and the experiments are a promising direction of research. --- paper_title: On User Choice in Graphical Password Schemes paper_content: Graphical password schemes have been proposed as an alternative to text passwords in applications that support graphics and mouse or stylus entry. In this paper we detail what is, to our knowledge, the largest published empirical evaluation of the effects of user choice on the security of graphical password schemes. We show that permitting user selection of passwords in two graphical password schemes, one based directly on an existing commercial product, can yield passwords with entropy far below the theoretical optimum and, in some cases, that are highly correlated with the race or gender of the user.
For one scheme, this effect is so dramatic so as to render the scheme insecure. A conclusion of our work is that graphical password schemes of the type we study may generally require a different posture toward password selection than text passwords, where selection by the user remains the norm today. --- paper_title: A Novel Cued-recall Graphical Password Scheme paper_content: Graphical passwords have been proposed as an alternative to alphanumeric passwords with their advantages in usability and security. However, most of these alternate schemes have their own disadvantages. For example, cued-recall graphical password schemes are vulnerable to shoulder-surfing and cannot prevent intersection analysis attack. A novel cued-recall graphical password scheme CBFG (Click Buttons according to Figures in Grids) is proposed in this paper. Inheriting the way of setting password in traditional cued-recall scheme, this scheme is also added the ideology of image identification. CBFG helps users tend to set their passwords more complex. Simultaneously, it has the capability against shoulder surfing attack and intersection analysis attack. Experiments illustrate that CBFG has better performance in usability, especially in security. --- paper_title: A New Graphical Password Scheme Resistant to Shoulder-Surfing paper_content: Shoulder-surfing is a known risk where an attacker can capture a password by direct observation or by recording the authentication session. Due to the visual interface, this problem has become exacerbated in graphical passwords. There have been some graphical schemes resistant or immune to shoulder-surfing, but they have significant usability drawbacks, usually in the time and effort to log in. In this paper, we propose and evaluate a new shoulder-surfing resistant scheme which has a desirable usability for PDAs. Our inspiration comes from the drawing input method in DAS and the association mnemonics in Story for sequence retrieval. The new scheme requires users to draw a curve across their password images orderly rather than click directly on them. The drawing input trick along with the complementary measures, such as erasing the drawing trace, displaying degraded images, and starting and ending with randomly designated images provide a good resistance to shoulder-surfing. A preliminary user study showed that users were able to enter their passwords accurately and to remember them over time. --- paper_title: Evaluating the usability and security of a graphical one-time PIN system paper_content: Traditional Personal Identification Numbers (PINs) are widely used, but the attacks in which they are captured have been increasing. One-time PINs offer better security, but potentially create greater workload for users. In this paper, we present an independent evaluation of a commercial system that makes PINs more resistant to observation attacks by using graphical passwords on a grid to generate a one-time PIN. 83 participants were asked to register with the system and log in at varying intervals. The successful login rate was approximately 91% after 3--4 days, and 97% after 9--10 days. Twenty five participants were retested after two years, and 27% of those were able to recall their pattern. We recorded 17 instances of failed attempts, and found that even though participants recalled the general shape of the pass-pattern in 13 of these instances, they could not recall its detailed location or sequence of cells. 
We conclude that GrIDsure is usable if people have one pass-pattern, but the level of security will depend on the context of use (it will work best in scenarios where repeated observations of transactions are unlikely), and the instructions given to users (without guidance, they are likely to chose from a small subset of the possible patterns which are easily guessed). --- paper_title: A comparison of perceived and real shoulder-surfing risks between alphanumeric and graphical passwords paper_content: Previous research has found graphical passwords to be more memorable than non-dictionary or "strong" alphanumeric passwords. Participants in a prior study expressed concerns that this increase in memorability could also lead to an increased susceptibility of graphical passwords to shoulder-surfing. This appears to be yet another example of the classic trade-off between usability and security for authentication systems. This paper explores whether graphical passwords' increased memorability necessarily leads to risks of shoulder-surfing. To date, there are no studies examining the vulnerability of graphical versus alphanumeric passwords to shoulder-surfing.This paper examines the real and perceived vulnerability to shoulder-surfing of two configurations of a graphical password, Passfaces™[30], compared to non-dictionary and dictionary passwords. A laboratory experiment with 20 participants asked them to try to shoulder surf the two configurations of Passfaces™ (mouse versus keyboard data entry) and strong and weak passwords. Data gathered included the vulnerability of the four authentication system configurations to shoulder-surfing and study participants' perceptions concerning the same vulnerability. An analysis of these data compared the relative vulnerability of each of the four configurations to shoulder-surfing and also compared study participants' real and perceived success in shoulder-surfing each of the configurations. Further analysis examined the relationship between study participants' real and perceived success in shoulder-surfing and determined whether there were significant differences in the vulnerability of the four authentication configurations to shoulder-surfing.Findings indicate that configuring data entry for Passfaces™ through a keyboard is the most effective deterrent to shoulder-surfing in a laboratory setting and the participants' perceptions were consistent with that result. While study participants believed that Passfaces™ with mouse data entry would be most vulnerable to shoulder-surfing attacks, the empirical results found that strong passwords were actually more vulnerable. --- paper_title: Design and evaluation of a shoulder-surfing resistant graphical password scheme paper_content: When users input their passwords in a public place, they may be at risk of attackers stealing their password. An attacker can capture a password by direct observation or by recording the individual's authentication session. This is referred to as shoulder-surfing and is a known risk, of special concern when authenticating in public places. Until recently, the only defense against shoulder-surfing has been vigilance on the part of the user. This paper reports on the design and evaluation of a game-like graphical method of authentication that is resistant to shoulder-surfing. The Convex Hull Click (CHC) scheme allows a user to prove knowledge of the graphical password safely in an insecure location because users never have to click directly on their password images. 
Usability testing of the CHC scheme showed that novice users were able to enter their graphical password accurately and to remember it over time. However, the protection against shoulder-surfing comes at the price of longer time to carry out the authentication. --- paper_title: Graphical passwords & qualitative spatial relations paper_content: A potential drawback of graphical password schemes is that they are more vulnerable to shoulder surfing than conventional alphanumeric text passwords. We present a variation of the Draw-a-Secret scheme originally proposed by Jermyn et al [1] that is more resistant to shoulder surfing through the use of a qualitative mapping between user strokes and the password, and the use of dynamic grids to both obfuscate attributes of the user secret and encourage them to use different surface realizations of the secret. The use of qualitative spatial relations relaxes the tight constraints on the reconstruction of a secret; allowing a range of deviations from the original. We describe QDAS (Qualitative Draw-A-Secret), an initial implementation of this graphical password scheme, and the results of an empirical study in which we examined the memorability of secrets, and their susceptibility to shoulder-surfing attacks, for both Draw-A-Secret and QDAS. --- paper_title: A Hybrid Password Authentication Scheme Based on Shape and Text paper_content: Text-based password authentication scheme tends to be more vulnerable to attacks such as shoulder surfing or a hidden camera. To overcome the vulnerabilities of traditional methods, visual or graphical password schemes have been developed as possible alternative solutions to text-based password schemes. Since it also has some drawbacks to simply adopt graphical password authentication, schemes using graphic and text have been developed. In this paper, a hybrid password authentication scheme based on shapes and texts is proposed. Shapes of strokes are used in the grid as the original passwords and users can log in with textual passwords via traditional input device. The method provides strong resistance to shoulder surfing or a hidden camera. Moreover, the scheme has high scalability and flexibility to enhance the authentication process security. The analysis of the security level of this approach is also discussed. --- paper_title: The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems paper_content: Trusted people can fail to be trustworthy when it comes to protecting their aperture of access to secure computer systems due to inadequate education, negligence, and various social pressures. People are often the weakest link in an otherwise secure computer system and, consequently, are targeted for social engineering attacks. Social Engineering is a technique used by hackers or other attackers to gain access to information technology systems by getting the needed information (for example, a username and password) from a person rather than breaking into the system through electronic or algorithmic hacking methods. Such attacks can occur on both a physical and psychological level. The physical setting for these attacks occurs where a victim feels secure: often the workplace, the phone, the trash, and even on-line. Psychology is often used to create a rushed or officious ambiance that helps the social engineer to cajole information about accessing the system from an employee.
Data privacy legislation in the United States and international countries that imposes privacy standards and fines for negligent or willful non-compliance increases the urgency to measure the trustworthiness of people and systems. One metric for determining compliance is to simulate, by audit, a social engineering attack upon an organization required to follow data privacy standards. Such an organization commits to protect the confidentiality of personal data with which it is entrusted. This paper presents the results of an approved social engineering audit made without notice within an organization where data security is a concern. Areas emphasized include experiences between the Social Engineer and the audited users, techniques used by the Social Engineer, and other findings from the audit. Possible steps to mitigate exposure to the dangers of Social Engineering through improved user education are reviewed. --- paper_title: Graphical passwords: Learning from the first twelve years paper_content: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology. --- paper_title: A Small Subgroup Attack for Recovering Ephemeral Keys in Chang and Chang Password Key Exchange Protocol paper_content: Three-party authenticated key exchange protocol is an important cryptographic technique in the secure communication areas. Recently Chang and Chang proposed a novel three party simple key exchange protocol and claimed the protocol is secure, efficient and practical. Unless their claim, a key recovery attack is proposed on the above protocol by recovering the ephemeral keys. One way of recovering the ephemeral key is to solve the mathematical hard Discrete Logarithm Problem (DLP). The DLP is solved by using a popular Pohlig-Hellman method in the above key recovery attack. In the present study, a new method based on the small subgroup attack to solve the DLP is discussed to recover the ephemeral keys. Computation of DLP is carried out by two stages, such as the prior-computation and DLP computation. The prior-computation is performed on off-line and the DLP computation is performed on on-line. The method is analyzed on a comprehensive set of experiments and the ephemeral keys are recovered in reduced time. Also, the key recovery attack on Chang and Chang password key exchange protocol is implemented by using the new method to recover the ephemeral key. --- paper_title: An Authentication and Key Agreement Scheme with Key Confirmation and Privacy-preservation for Multi-server Environments paper_content: In the internet environment, it is desirable for a user to login different servers by keying the same password and using the same smart card. This paper proposes an authentication and key agreement scheme with key confirmation for multi-server environments. 
Compared with the previous authentication and key agreement schemes for multi-server environments, the new scheme holds many merits. It satisfies the following properties: R1. Single registration; R2. User friendly; R3. Prevention of the replay, the password guessing without smart cards, the impersonation and the stolen-verifier attacks; R4. Resistance against server spoofing; R5. Mutual authentication; R6. Two-factor authentication; R7. Resistance against known-key attacks; R8. Perfect forward secrecy; R9. Scalability of login; R10. Anonymity of users. ---
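Several of the schemes catalogued in the reference list above rest on simple geometric checks that the abstracts only describe in words. As one illustration, the Convex Hull Click (CHC) entry notes that users "never have to click directly on their password images"; the minimal sketch below shows the kind of hull-membership test that idea implies, assuming each challenge supplies the on-screen coordinates of the user's pass-icons and the point that was clicked. The coordinates, acceptance rule, and round handling are illustrative assumptions, not details taken from the cited paper.

# Illustrative sketch (not the published CHC algorithm): accept a round if the
# click lands inside the convex hull of the user's pass-icons, so the icons
# themselves are never clicked directly.

def cross(o, a, b):
    # 2D cross product of vectors OA and OB; positive if o -> a -> b turns left
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def click_accepted(pass_icon_positions, click):
    # Accept if the click falls inside (or on the boundary of) the icon hull
    hull = convex_hull(pass_icon_positions)
    if len(hull) < 3:  # degenerate layout: fewer than three distinct, non-collinear icons
        return False
    return all(cross(hull[i], hull[(i + 1) % len(hull)], click) >= 0
               for i in range(len(hull)))

if __name__ == "__main__":
    icons = [(120, 80), (400, 90), (350, 300), (90, 260)]  # hypothetical screen layout
    print(click_accepted(icons, (250, 180)))  # True: click inside the hull
    print(click_accepted(icons, (500, 400)))  # False: click outside the hull

In a real deployment this check would be repeated over several randomized challenges, so that a single observed click reveals little about which of the displayed icons are the pass-icons.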
Title: A Survey on the Use of Graphical Passwords in Security Section 1: INTRODUCTION Description 1: This section introduces the importance of authentication in information security, the drawbacks of traditional alphanumeric passwords, and the advantages of graphical passwords which use images to enhance memorability and security. Section 2: LITERATURE REVIEW Description 2: This section reviews existing studies on graphical passwords, focusing on specific schemes, security analyses, and prior surveys that laid the foundation for further research. Section 3: CATEGORIZATION OF GRAPHICAL PASSWORDS Description 3: This section categorizes graphical password schemes into four main categories: Drawmetric schemes, Locimetric schemes, Cognometric schemes, and Hybrid schemes, providing an overview and discussion of each category. Section 4: Drawmetric Schemes Description 4: This subsection provides an in-depth analysis of Drawmetric schemes, including various specific schemes, their development, strengths, and weaknesses. Section 5: Locimetric Schemes Description 5: This subsection explores Locimetric schemes, detailing their methodology, notable examples, and associated security concerns. Section 6: Cognometric Schemes Description 6: This subsection discusses Cognometric schemes, their reliance on image recognition for authentication, and the usability and security implications. Section 7: Hybrid Schemes Description 7: This subsection examines Hybrid schemes that combine elements from different types of schemes to enhance security and usability. Section 8: ATTACK TYPES BASED ON PASSWORD SPACE Description 8: This section introduces common attack methods like brute force and dictionary attacks and provides universal password space formulas for graphical password schemes. Section 9: Brute Force Attack Description 9: This subsection explains brute force attacks, their application to graphical passwords, and methods to resist these attacks by increasing password space. Section 10: Dictionary Attack Description 10: This subsection describes dictionary attacks, their implications for graphical passwords, and how user choice patterns can affect the efficacy of such attacks. Section 11: ATTACK TYPES BASED ON PASSWORD CAPTURE Description 11: This section discusses attacks based on capturing passwords, including shoulder surfing, intersection analysis, social engineering, and spyware attacks. Section 12: Shoulder Surfing Description 12: This subsection addresses the susceptibility of graphical passwords to shoulder surfing attacks and the schemes developed to resist such attacks. Section 13: Intersection Analysis Description 13: This subsection details intersection analysis attacks, particularly problematic for schemes using multiple image selections. Section 14: Social Engineering Description 14: This subsection explores social engineering attacks like tricking, phishing, and pharming targeting graphical passwords. Section 15: Spyware Attack Description 15: This subsection discusses the threat of spyware attacks, including keystroke-loggers, mouse-loggers, and screen-scrapers, and their impact on graphical password schemes. Section 16: CONCLUSION AND RECOMMENDATION Description 16: This section provides a summary of the survey, analyzing the strengths and weaknesses of graphical password schemes and offering recommendations for improving security and usability.
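Sections 8 to 10 of the outline above refer to universal password-space formulas and to resisting brute-force and dictionary attacks by enlarging the password space. As a rough illustration of the counting involved, the sketch below compares theoretical space sizes (in bits) for a few scheme families; the formulas are standard counting arguments, and every parameter value (alphabet size, panel size, image resolution, click tolerance) is an assumption chosen for illustration rather than a figure taken from the cited papers.

import math

# Back-of-the-envelope password-space sizes (log2 of the number of possible
# secrets) for several scheme families; all parameter values are assumptions.

def text_password_bits(alphabet_size, length):
    # |A|^L equally likely strings
    return length * math.log2(alphabet_size)

def recognition_bits(panel_size, rounds):
    # Passfaces-style: one pass-image per panel of `panel_size` images,
    # repeated over `rounds` independent challenges -> panel_size ** rounds
    return rounds * math.log2(panel_size)

def ordered_selection_bits(portfolio, chosen):
    # choose and order `chosen` images out of a portfolio of `portfolio` images
    return math.log2(math.perm(portfolio, chosen))

def click_points_bits(width, height, tolerance, clicks):
    # PassPoints-style: the image is discretised into tolerance-sized cells
    # and a password is an ordered sequence of `clicks` cells -> cells ** clicks
    cells = (width // tolerance) * (height // tolerance)
    return clicks * math.log2(cells)

if __name__ == "__main__":
    print(f"4-digit PIN:                   {text_password_bits(10, 4):5.1f} bits")
    print(f"8-char alphanumeric password:  {text_password_bits(62, 8):5.1f} bits")
    print(f"Recognition, 9-image panel x5: {recognition_bits(9, 5):5.1f} bits")
    print(f"Ordered pick of 5 from 50:     {ordered_selection_bits(50, 5):5.1f} bits")
    print(f"5 clicks, 640x480, 20px cells: {click_points_bits(640, 480, 20, 5):5.1f} bits")

Such figures are only upper bounds on attacker effort: as the user-choice studies in the reference list show, the space an attacker actually needs to search can be far smaller once predictable choices (symmetry, popular click regions, frequently chosen patterns) are taken into account.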
Bridging the Chasm: A Survey of Software Engineering Practice in Scientific Programming
18
--- paper_title: The Mythical Man-Month paper_content: The book, The Mythical Man-Month, Addison-Wesley, 1975 (excerpted in Datamation, December 1974), gathers some of the published data about software engineering and mixes it with the assertion of a lot of personal opinions. In this presentation, the author will list some of the assertions and invite dispute or support from the audience. This is intended as a public discussion of the published book, not a regular paper. --- paper_title: Engineering the Software for Understanding Climate Change paper_content: Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques. --- paper_title: The Chimera of Software Quality paper_content: Despite years of computing progress, today's systems experience spectacular and all-too-frequent crashes, while many enormously expensive projects fail to produce anything useful. Of equal importance, and potentially more damaging, are the misleading smaller defects we tend to miss. From time to time, we must remind ourselves that the underlying quality of the software that our results and progress increasingly depend on will likely be flawed and even more dependent on independent corroboration than the science itself. Many scientific results are corrupted, perhaps fatally so, by undiscovered mistakes in the software used to calculate and present those results. --- paper_title: The T-experiments: errors in scientific software paper_content: Extensive tests showed that many software codes widely used in science and engineering are not as accurate as we would like to think. It is argued that better software engineering practices would help solve this problem, but realizing that the problem exists is an important first step. --- paper_title: Uncertainty, Complexity and Concepts of Good Science in Climate Change Modelling: Are GCMs the Best Tools? paper_content: In this paper we explore the dominant position of a particular style of scientific modelling in the provision of policy-relevant scientific knowledge on future climate change. We describe how the apical position of General Circulation Models (GCMs) appears to follow ‘logically’ both from conventional understandings of scientific representation and the use of knowledge, so acquired, in decision-making. We argue, however, that both of these particular understandings are contestable. In addition to questioning their current policy-usefulness, we draw upon existing analyses of GCMs which discuss model trade-offs, errors, and the effects of parameterisations, to raise questions about the validity of the conception of complexity in conventional accounts. An alternative approach to modelling, incorporating concepts of uncertainty, is discussed, and an illustrative example given for the case of the global carbon cycle. In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations. 
The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement. Additionally, a rather technocratic policy orientation to climate change may be supported by such science, even though it involves political choices which deserve to be more widely debated. --- paper_title: Mechanizing Proof: Computing, Risk, and Trust paper_content: Most aspects of our private and social lives---our safety, the integrity of the financial system, the functioning of utilities and other services, and national security---now depend on computing. But how can we know that this computing is trustworthy? In Mechanizing Proof, Donald MacKenzie addresses this key issue by investigating the interrelations of computing, risk, and mathematical proof over the last half century from the perspectives of history and sociology. His discussion draws on the technical literature of computer science and artificial intelligence and on extensive interviews with scientists and engineers. --- paper_title: Software release build process and components in ATLAS offline paper_content: ATLAS is one of the largest collaborations in the physical sciences and involves 3000 scientists and engineers from 174 institutions in 38 countries. The geographically dispersed developer community has produced a large amount of software which is organized in 10 projects. In this presentation we discuss how the software is built on a variety of compiler and operating system combinations every night. File level and package level parallelism together with multi-core build servers are used to perform fast builds of the different platforms in several branches. We discuss the different tools involved during the software release build process and also the various mechanisms implemented to provide performance gains and error detection and retry mechanisms in order to counteract network and other instabilities that would otherwise degrade the robustness of the system. The goal is to provide high quality software built as fast as possible ready for final validation and deployment. --- paper_title: Lessons from the JMCB Archive paper_content: We examine the online archive of the Journal of Money, Credit, and Banking, in which an author is required to deposit the data and code that replicate the results of his paper. We find that most authors do not fulfill this requirement. Of more than 150 empirical articles, fewer than 15 could be replicated. Despite all this, there is no doubt that a data/code archive is more conducive to replicable research than the alternatives. We make recommendations to improve the functioning of the archive. --- paper_title: The Role of Data and Program Code Archives in the Future of Economic Research paper_content: This essay examines the role of data and program-code archives in making economic research “replicable.” Replication of published results is recognized as an essential part of the scientific method. Yet, historically, both the “demand for” and “supply of” replicable results in economics has been minimal. “Respect for the scientific method” is not sufficient to motivate either economists or editors of professional journals to ensure the replicability of published results. We enumerate the costs and benefits of mandatory data and code archives, and argue that the benefits far exceed the costs. 
Progress has been made since the gloomy assessment of Dewald, Thursby and Anderson some twenty years ago in the American Economic Review, but much remains to be done before empirical economics ceases to be a “dismal science” when judged by the replicability of its published results. --- paper_title: A longitudinal study of development and maintenance in Norway: Report from the 2003 investigation paper_content: The amount of work on application systems being taken up by maintenance activities (work done on an IT-system after being put in production) has been one of the arguments of those speaking about a ‘software crisis’. We have earlier investigated the applicability of this notion, and propose to rather look at the percentage of work being done on application portfolio upkeep (work made to keep up the functional coverage of the application system portfolio of the organization. This also includes the development of replacement systems), to assess the efficiency of the application systems support in an organisation. This paper presents the main results of a survey investigation performed in 2003 in 54 Norwegian organisations within this area. The amount of application portfolio upkeep is significantly higher than in a similar investigation conducted in 1993. The level of maintenance is smaller (although not significantly) than in another similar investigation conducted in 1998. There was a significant increase in both maintenance and application portfolio upkeep from 1993 to 1998, which could partly be attributed to be the extra maintenance and replacement-oriented work necessary to deal with the ‘year 2000 problem.’ As for the 2003 investigation, the slow IT-market in general seems to have influenced the results negatively seen from the point of view of application systems support efficiency in organization. --- paper_title: Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals paper_content: Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status.
Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals. --- paper_title: Measuring Reproducibility in Computer Systems Research paper_content: We describe a study into the willingness of Computer Systems researchers to share their code and data. We find that . . . . We also propose a novel sharing specification scheme that will require researchers to specify the level of reproducibility that reviewers and readers can assume from a paper either submitted for publication, or published. --- paper_title: Uncertainty, Complexity and Concepts of Good Science in Climate Change Modelling: Are GCMs the Best Tools? paper_content: In this paper we explore the dominant position of a particular style of scientific modelling in the provision of policy-relevant scientific knowledge on future climate change. We describe how the apical position of General Circulation Models (GCMs) appears to follow ‘logically’ both from conventional understandings of scientific representation and the use of knowledge, so acquired, in decision-making. We argue, however, that both of these particular understandings are contestable. In addition to questioning their current policy-usefulness, we draw upon existing analyses of GCMs which discuss model trade-offs, errors, and the effects of parameterisations, to raise questions about the validity of the conception of complexity in conventional accounts. An alternative approach to modelling, incorporating concepts of uncertainty, is discussed, and an illustrative example given for the case of the global carbon cycle. In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations. The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement. Additionally, a rather technocratic policy orientation to climate change may be supported by such science, even though it involves political choices which deserve to be more widely debated. --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: Growth in a Time of Debt paper_content: We study economic growth and inflation at different levels of government and external debt.
Our analysis is based on new data on forty-four countries spanning about two hundred years. The dataset incorporates over 3,700 annual observations covering a wide range of political systems, institutions, exchange rate arrangements, and historic circumstances. Our main findings are: First, the relationship between government debt and real GDP growth is weak for debt/GDP ratios below a threshold of 90 percent of GDP. Above 90 percent, median growth rates fall by one percent, and average growth falls considerably more. We find that the threshold for public debt is similar in advanced and emerging economies. Second, emerging markets face lower thresholds for external debt (public and private)--which is usually denominated in a foreign currency. When external debt reaches 60 percent of GDP, annual growth declines by about two percent; for higher levels, growth rates are roughly cut in half. Third, there is no apparent contemporaneous link between inflation and public debt levels for the advanced countries as a group (some countries, such as the United States, have experienced higher inflation when debt/GDP is high.) The story is entirely different for emerging markets, where inflation rises sharply as debt increases. --- paper_title: Lessons from the JMCB Archive paper_content: We examine the online archive of the Journal of Money, Credit, and Banking, in which an author is required to deposit the data and code that replicate the results of his paper. We find that most authors do not fulfill this requirement. Of more than 150 empirical articles, fewer than 15 could be replicated. Despite all this, there is no doubt that a data/code archive is more conducive to replicable research than the alternatives. We make recommendations to improve the functioning of the archive. --- paper_title: When Software Engineers Met Research Scientists: A Case Study paper_content: This paper describes a case study of software engineers developing a library of software components for a group of research scientists, using a traditional, staged, document-led methodology. The case study reveals two problems with the use of the methodology. The first is that it demands an upfront articulation of requirements, whereas the scientists had experience, and hence expectations, of emergent requirements; the second is that the project documentation does not suffice to construct a shared understanding. Reflecting on our case study, we discuss whether combining agile elements with a traditional methodology might have alleviated these problems. We then argue that the rich picture painted by the case study, and the reflections on methodology that it inspires, has a relevance that reaches beyond the original context of the study. --- paper_title: An overview of the Trilinos project paper_content: The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries within an object-oriented framework for the solution of large-scale, complex multiphysics engineering and scientific problems. Trilinos addresses two fundamental issues of developing software for these problems: (i) providing a streamlined process and set of tools for development of new algorithmic implementations and (ii) promoting interoperability of independently developed software.Trilinos uses a two-level software structure designed around collections of packages. 
A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking.Here we present the overall Trilinos design, describing our use of abstract interfaces and default concrete implementations. We discuss the services that Trilinos provides to a prospective package and how these services are used by various packages. We also illustrate how packages can be combined to rapidly develop new algorithms. Finally, we discuss how Trilinos facilitates high-quality software engineering practices that are increasingly required from simulation software. --- paper_title: Configuration Management for Large-Scale Scientific Computing at the UK Met Office paper_content: The UK Met Office's flexible configuration management (FCM) system uses existing open source tools, adapted for use with high-performance scientific Fortran code, to help manage evolving code in its large-scale climate simulation and weather forecasting models. FCM has simplified the development process, improved team coordination, and reduced release cycles. --- paper_title: A survey of scientific software development paper_content: Software for scientific research purposes has received increased attention in recent years. Case studies have noted development practices, limitations, and problems in the development of scientific software. However, applicability of the results of these studies to improving the wider scientific software development practices is not known. This paper presents a survey of 60 scientific software developers. The survey was conducted online from August--September 2009, and aims to identify where improvements to scientific software practices can be made. While our results generally confirm previous work, we have found some notable differences. The use of IDEs and version control tools among the surveyed scientific software developers has increased, and trace-ability of scientific software is not as important to scientific software developers as it is to scientific software users. Documentation also appears to be more widely produced than previous studies indicate. However, there remains room for improvement in tool use, documentation, testing, and verification activities for scientific software development. --- paper_title: When Software Engineers Met Research Scientists: A Case Study paper_content: This paper describes a case study of software engineers developing a library of software components for a group of research scientists, using a traditional, staged, document-led methodology. The case study reveals two problems with the use of the methodology. The first is that it demands an upfront articulation of requirements, whereas the scientists had experience, and hence expectations, of emergent requirements; the second is that the project documentation does not suffice to construct a shared understanding. Reflecting on our case study, we discuss whether combining agile elements with a traditional methodology might have alleviated these problems. We then argue that the rich picture painted by the case study, and the reflections on methodology that it inspires, has a relevance that reaches beyond the original context of the study. 
--- paper_title: An overview of the Trilinos project paper_content: The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries within an object-oriented framework for the solution of large-scale, complex multiphysics engineering and scientific problems. Trilinos addresses two fundamental issues of developing software for these problems: (i) providing a streamlined process and set of tools for development of new algorithmic implementations and (ii) promoting interoperability of independently developed software.Trilinos uses a two-level software structure designed around collections of packages. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking.Here we present the overall Trilinos design, describing our use of abstract interfaces and default concrete implementations. We discuss the services that Trilinos provides to a prospective package and how these services are used by various packages. We also illustrate how packages can be combined to rapidly develop new algorithms. Finally, we discuss how Trilinos facilitates high-quality software engineering practices that are increasingly required from simulation software. --- paper_title: Software Development Environments for Scientific and Engineering Software: A Series of Case Studies paper_content: The need for high performance computing applications for computational science and engineering projects is growing rapidly, yet there have been few detailed studies of the software engineering process used for these applications. The DARPA High Productivity Computing Systems Program has sponsored a series of case studies of representative computational science and engineering projects to identify the steps involved in developing such applications (i.e. the life cycle, the workflows, technical challenges, and organizational challenges). Secondary goals were to characterize tool usage and identify enhancements that would increase the programmers' productivity. Finally, these studies were designed to develop a set of lessons learned that can be transferred to the general computational science and engineering community to improve the software engineering process used for their applications. Nine lessons learned from five representative projects are presented, along with their software engineering implications, to provide insight into the software development environments in this domain. --- paper_title: Software Development Cultures and Cooperation Problems: A Field Study of the Early Stages of Development of Software for a Scientific Community paper_content: In earlier work, I identified a particular class of end-user developers, who include scientists and whom I term `professional end-user developers', as being of especial interest. Here, I extend this work by articulating a culture of professional end-user development, and illustrating by means of a field-study how the influence of this culture causes cooperation problems in an inter-disciplinary team developing a software system for a scientific community. My analysis of the field study data is informed by some recent literature on multi-national work cultures. 
Whilst acknowledging that viewing a scientific development through a lens of software development culture does not give a full picture, I argue that it nonetheless provides deep insights. --- paper_title: Configuration Management for Large-Scale Scientific Computing at the UK Met Office paper_content: The UK Met Office's flexible configuration management (FCM) system uses existing open source tools, adapted for use with high-performance scientific Fortran code, to help manage evolving code in its large-scale climate simulation and weather forecasting models. FCM has simplified the development process, improved team coordination, and reduced release cycles. --- paper_title: The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development paper_content: Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. --- paper_title: Supporting scientific SE process improvement paper_content: The increasing complexity of scientific software can result in significant impacts on the research itself. In traditional software development projects, teams adopt historical best practices into their development processes to mitigate the risk of such problems. In contrast, the gap that has formed between the traditional and scientific software communities leaves scientists to rely on only their own experience when facing software process improvement (SPI) decisions. Rather than expect scientists to become software engineering (SE) experts or the SE community to learn all of the intricacies involved in scientific software development projects, we seek a middle ground. The Scientific Software Process Improvement Framework (SciSPIF) will allow scientists to self-drive their own SPI efforts while leveraging the collective experiences of their peers and linking their concerns to established SE best practices. This proposal outlines the known challenges of scientific software development, relevant concepts from traditional SE research, and our planned methodology for collecting the data required to construct SciSPIF while staying grounded in the actual goals and concerns of the scientists. --- paper_title: Spatial Data e-Infrastructure paper_content: This paper examines overlap between e-Science and geospatial communities using work undertaken as part of the JISC funded Secure Access to Geospatial Services (SEE-GEO) project. Working from a position that open standards provide the best means of achieving interoperability, the case studies demonstrate the use of Grid technology and open geospatial standard interfaces to realise classic spatial data infrastructure scenarios. These examples are used to illustrate just how e-infrastructures enable seamless, security driven spatial data access at the national, regional and global scale. --- paper_title: Exploring XP for scientific research paper_content: Can we successfully apply XP (Extreme Programming) in a scientific research context? A pilot project at the NASA Langley Research Center tested XPs applicability in this context. 
Since the cultural environment at a government research center differs from the customer-centric business view, eight of XPs 12 practices seemed incompatible with the existing research culture. Despite initial awkwardness, the authors determined that XP can function in situations for which it appears to be ill suited. --- paper_title: Engineering the Software for Understanding Climate Change paper_content: Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques. --- paper_title: Assuring the Future? A Look at Validating Climate Model Software paper_content: The scientific community studying climate change uses a variety of strategies to assess the correctness of their models. These software systems represent large, sophisticated, fine-grained scientific tools. The validation practices described are thus tailored to a domain in which software and software engineering practices are useful but cannot be allowed to get in the way of the science. In audio interviews, two scientists--Robert Jacob, a computational climate scientist at Argonne National Laboratory, and Gavin Schmidt, a climatologist and climate modeler at the NASA Goddard Institute for Space Studies--discuss what it means to develop and communicate ground-breaking results. --- paper_title: Scientific Software Development at a Research Facility paper_content: Software engineers at Daresbury Laboratory develop experiment control and data acquisition software to support scientific research. Here, they review their experiences and learning over the years. --- paper_title: Agile methods in biomedical software development: a multi-site experience report paper_content: BackgroundAgile is an iterative approach to software development that relies on strong collaboration and automation to keep pace with dynamic environments. We have successfully used agile development approaches to create and maintain biomedical software, including software for bioinformatics. This paper reports on a qualitative study of our experiences using these methods.ResultsWe have found that agile methods are well suited to the exploratory and iterative nature of scientific inquiry. They provide a robust framework for reproducing scientific results and for developing clinical support systems. The agile development approach also provides a model for collaboration between software engineers and researchers. We present our experience using agile methodologies in projects at six different biomedical software development organizations. The organizations include academic, commercial and government development teams, and included both bioinformatics and clinical support applications. We found that agile practices were a match for the needs of our biomedical projects and contributed to the success of our organizations.ConclusionWe found that the agile development approach was a good fit for our organizations, and that these practices should be applicable and valuable to other biomedical software development efforts. Although we found differences in how agile methods were used, we were also able to identify a set of core practices that were common to all of the groups, and that could be a focus for others seeking to adopt these methods. 
--- paper_title: What Do We Know about Scientific Software Development's Agile Practices? paper_content: The development of scientific software has similarities with processes that follow the software engineering "agile manifesto": responsiveness to change and collaboration are of utmost importance. But how well do current scientific software-development processes match the practices found in agile development methods, and what are the effects of using agile practices in such processes? --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: Chaste: using agile programming techniques to develop computational biology software paper_content: Cardiac modelling is the area of physiome modelling where the available simulation software is perhaps most mature, and it therefore provides an excellent starting point for considering the software requirements for the wider physiome community. In this paper, we will begin by introducing some of the most advanced existing software packages for simulating cardiac electrical activity. We consider the software development methods used in producing codes of this type, and discuss their use of numerical algorithms, relative computational efficiency, usability, robustness and extensibility. We then go on to describe a class of software development methodologies known as test-driven agile methods and argue that such methods are more suitable for scientific software development than the traditional academic approaches. A case study is a project of our own, Cancer, Heart and Soft Tissue Environment, which is a library of computational biology software that began as an experiment in the use of agile programming methods. We present our experiences with a review of our progress thus far, focusing on the advantages and disadvantages of this new approach compared with the development methods used in some existing packages. We conclude by considering whether the likely wider needs of the cardiac modelling community are currently being met and suggest that, in order to respond effectively to changing requirements, it is essential that these codes should be more malleable. Such codes will allow for reliable extensions to include both detailed mathematical models--of the heart and other organs--and more efficient numerical techniques that are currently being developed by many research groups worldwide. --- paper_title: Introducing agile development into bioinformatics: an experience report paper_content: This experience report describes our efforts to introduce agile development techniques incrementally into our customer's organization in the National Cancer Institute and develop a partnering relationship in the process. The report addresses the steps we have taken not only to deploy the practices, but also to gain customer support for them. It addresses variations we have used to adapt to our customer's environment, including our approach to involving customer personnel at remote locations. 
We also address challenges we still must face, including how best to manage a product-line with agile development techniques. --- paper_title: Domain-specific languages: an annotated bibliography paper_content: We survey the literature available on the topic of domain-specific languages as used for the construction and maintenance of software systems. We list a selection of 75 key publications in the area, and provide a summary for each of the papers. Moreover, we discuss terminology, risks and benefits, example domain-specific languages, design methodologies, and implementation techniques. --- paper_title: Braincurry: A Domain- Specific Language for Integrative Neuroscience paper_content: This paper describes Braincurry, a domain-specific, declarative language for describing and analysing experiments in neuroscience. Braincurry has three goals: to allow experiments and data analysis to be described in a way that is sufficiently abstract to serve as a definition; to facilitate carrying out experiments by executing such descriptions; and to be directly usable by end users: neuroscientists. We adopted an experimental and incremental approach to the design and implementation of Braincurry, focusing on the neurophysiological response to visual stimuli in locusts as a test case. Braincurry is currently implemented as an embedding in Haskell, which is a highly effective tool for this kind of exploratory language design. The declarative nature of Haskell and its flexible syntax fitted with our goals. We discuss the requirements for a realistic language meeting the above goals, describe the current Braincurry design and how it may be generalised, and explain how some particularly challenging hard real-time requirements were met. --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: Chaste: using agile programming techniques to develop computational biology software paper_content: Cardiac modelling is the area of physiome modelling where the available simulation software is perhaps most mature, and it therefore provides an excellent starting point for considering the software requirements for the wider physiome community. In this paper, we will begin by introducing some of the most advanced existing software packages for simulating cardiac electrical activity. We consider the software development methods used in producing codes of this type, and discuss their use of numerical algorithms, relative computational efficiency, usability, robustness and extensibility. We then go on to describe a class of software development methodologies known as test-driven agile methods and argue that such methods are more suitable for scientific software development than the traditional academic approaches. A case study is a project of our own, Cancer, Heart and Soft Tissue Environment, which is a library of computational biology software that began as an experiment in the use of agile programming methods. 
We present our experiences with a review of our progress thus far, focusing on the advantages and disadvantages of this new approach compared with the development methods used in some existing packages. We conclude by considering whether the likely wider needs of the cardiac modelling community are currently being met and suggest that, in order to respond effectively to changing requirements, it is essential that these codes should be more malleable. Such codes will allow for reliable extensions to include both detailed mathematical models--of the heart and other organs--and more efficient numerical techniques that are currently being developed by many research groups worldwide. --- paper_title: A Software Chasm: Software Engineering and Scientific Computing paper_content: Some time ago, a chasm opened between the scientific-computing community and the software engineering community. Originally, computing meant scientific computing. Today, science and engineering applications are at the heart of software systems such as environmental monitoring systems, rocket guidance systems, safety studies for nuclear stations, and fuel injection systems. Failures of such health-, mission-, or safety-related systems have served as examples to promote the use of software engineering best practices. Yet, the bulk of the software engineering community's research is on anything but scientific-application software. This chasm has many possible causes. In this article, we look at the impact of one particular contributor in industry. --- paper_title: Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research paper_content: Scholarly dissemination and communication standards are changing to reflect the increasingly computational nature of scholarly research, primarily to include the sharing of the data and code associated with published results. This paper presents a formalized set of best practice recommendations for computational scientists wishing to disseminate reproducible research, facilitate innovation by enabling data and code re-use, and enable broader communication of the output of digital scientific research. We distinguish two forms of collaboration to motivate choices of software environment for computational scientific research. We also present these Best Practices as a living, evolving, and changing document on wiki. --- paper_title: A Risk-Based, Practice-Centered Approach to Project Management for HPCMP CREATE paper_content: The Department of Defense's High Performance Computing Modernization Program Computational Research and Engineering Acquisition Tools and Environments (CREATE) program is developing and deploying multiphysics high-performance computing software applications for engineers to design and make accurate performance predictions for military aircraft, ships, ground vehicles, and radio frequency antennas. When CREATE started (2007), no commonly recognized set of successful software project management practices existed for developing multiphysics, HPC engineering software. Based on lessons learned from the HPC and computational engineering communities, CREATE leadership developed and implemented a risk-based, practice-centered strategy to organize and manage the highly distributed program.
This approach led to a good balance between ensuring a sufficiently structured workflow and accountability and providing the flexibility and agility necessary to create new sets of engineering tools with the capabilities needed to design next-generation weapon systems. --- paper_title: A new DoD initiative: the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program paper_content: In FY2008, the U.S. Department of Defense (DoD) initiated the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a $360M program with a two-year planning phase and a ten-year execution phase. CREATE will develop and deploy three computational engineering tool sets for DoD acquisition programs to use to design aircraft, ships and radio-frequency antennas. The planning and execution of CREATE are based on the 'lessons learned' from case studies of large-scale computational science and engineering projects. The case studies stress the importance of a stable, close-knit development team; a focus on customer needs and requirements; verification and validation; flexible and agile planning, management, and development processes; risk management; realistic schedules and resource levels; balanced short- and long-term goals and deliverables; and stable, long-term support by the program sponsor. Since it began in FY2008, the CREATE program has built a team and project structure, developed requirements and begun validating them, identified candidate products, established initial connections with the acquisition programs, begun detailed project planning and development, and generated the initial collaboration infrastructure necessary for success by its multi-institutional, multidisciplinary teams. --- paper_title: MAINTAINING CORRECTNESS IN SCIENTIFIC PROGRAMS paper_content: Combine a high rate of change (which makes correctness hard to maintain) with an increased sensitivity to failure to maintain correctness and you have a big problem. Solving this problem must be the focus of our methodology. In this paper, the author describes the layered approach that he found to be the most successful in maintaining correctness in the face of rapid change. --- paper_title: Five Recommended Practices for Computational Scientists Who Write Software paper_content: Few software engineering techniques and approaches are specifically useful for computational scientists, and despite recent efforts, it could be many years before a consolidated handbook is available. Meanwhile, computational scientists can look to the practices of other scientists who write successful software. --- paper_title: Software Carpentry: Getting Scientists to Write Better Code by Making Them More Productive paper_content: For the past years, my colleagues and I have developed a one-semester course that teaches scientists and engineers the "common core" of modern software development. Our experience shows that an investment of 150 hours-25 of lectures and the rest of practical work-can improve productivity by roughly 20 percent. That's one day a week, one less semester in a master's degree, or one less year for a typical PhD. The course is called software carpentry, rather than software engineering, to emphasize the fact that it focuses on small-scale and immediately practical issues. All of the material is freely available under an open-source license at www.swc.scipy.org and can be used both for self-study and in the classroom. 
This article describes what the course contains, and why --- paper_title: Verification of Computer Simulation Models paper_content: The problem of validating computer simulation models of industrial systems has received only limited attention in the management science literature. The purpose of this paper is to consider the problem of validating computer models in the light of contemporary thought in the fields of philosophy of science, economic theory, and statistics. In order to achieve this goal we have attempted to gather together and present some of the ideas of scientific philosophers, economists, statisticians, and practitioners in the field of simulation which are relevant to the problem of verifying simulation models. We have paid particular attention to the writings of economists who have been concerned with testing the validity of economic models. Among the questions which we shall consider are included: What does it mean to verify a computer model of an industrial system? Are there any differences between the verification of computer models and the verification of other types of models? If so, what are some of these differences? Also considered are a number of measures and techniques for testing the "goodness of fit" of time series generated by computer models to observed historical series. --- paper_title: 'Integronsters', integral and integrated modeling paper_content: In many cases model integration treats models as software components only, ignoring the fluid relationship between models and reality, the evolving nature of models and their constant modification and recalibration. As a result, with integrated models we find increased complexity, where changes that used to impact only relatively contained models of subsystems, now propagate throughout the whole integrated system. This makes it harder to keep the overall complexity under control and, in a way, defeats the purpose of modularity, when efficiency is supposed to be gained from independent development of modules. Treating models only as software in solving the integration challenge may give birth to 'integronsters' - constructs that are perfectly valid as software products but ugly or even useless as models. We argue that one possible remedy is to learn to use data sets as modules and integrate them into the models. Then the data that are available for module calibration can serve as an intermediate linkage tool, sitting between modules and providing a module-independent baseline dynamics, which is then incremented when scenarios are to be run. In this case it is not the model output that is directed into the next model input, but model output is presented as a variation around the baseline trajectory, and it is this variation that is then fed into the next module down the chain. However still with growing overall complexity, calibration can become an important limiting factor, giving more promise to the integral approach, when the system is modeled and simplified as a whole. --- paper_title: Computational Science Demands a New Paradigm paper_content: The field has reached a threshold at which better organization becomes crucial. New methods of verifying and validating complex codes are mandatory if computational science is to fulfill its promise for science and society. --- paper_title: Lessons from Space paper_content: Given the parallels between the complexity of human spaceflight and large software systems, there are many things we developers can learn from successful space programs, such as the Soyuz. 
First, limiting a project's scope and complexity early on can have a dramatic payoff in its success and longevity. In addition, adding generous margins to early estimates (and any subsequent revisions) will ease the pain of development and deployment. Furthermore, gradual evolution with a working program at each step, rather than massive rewrites, benefits from successful architectures and teams, while also retaining the software's customer base and third-party contributors. Finally, a well-defined modular structure can increase the software's versatility, yielding economies of scope and scale over its lifetime. --- paper_title: Engineering the Software for Understanding Climate Change paper_content: Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques. --- paper_title: Predictive Capability Maturity Model for computational modeling and simulation. paper_content: The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements. --- paper_title: Self-Perceptions about Software Engineering: A Survey of Scientists and Engineers paper_content: Scientists and engineers devote considerable effort to developing large, complex codes to solve important problems. However, while they often develop useful code, many scientists and engineers are frequently unaware of how various software engineering practices can help them write better code. This article presents the results of a survey on this topic. --- paper_title: Understanding the High-Performance-Computing Community: A Software Engineer's Perspective paper_content: Computational scientists developing software for HPC systems face unique software engineering issues.
Attempts to transfer SE technologies to this domain must take these issues into account. --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: Managing Chaos: Lessons Learned Developing Software in the Life Sciences paper_content: In the life sciences, the need to balance the costs and benefits of introducing software processes into a research environment presents a distinct set of challenges due to the cultural disconnect between life sciences research and software engineering. The Institute for Systems Biology's research informatics team has studied these challenges and developed a software process to address them. --- paper_title: Hints on Test Data Selection: Help for the Practicing Programmer paper_content: In many cases tests of a program that uncover simple errors are also effective in uncovering much more complex errors. This so-called coupling effect can be used to save work during the testing process. --- paper_title: A Scientific Function Test Framework for Modular Environmental Model Development: Application to the Community Land Model paper_content: As environmental models have become more complicated, we need new tools to analyze and validate these models and to facilitate collaboration among field scientists, observation dataset providers, environmental system modelers, and computer scientists. Modular design and function test of environmental models have gained attention recently within the Biological and Environmental Research Program of the U.S. Department of Energy. In this paper, we will present our methods and software tools 1) to analyze environmental software and 2) to generate modules for scientific function testing of environmental models. We have applied these methods to the Community Land Model with three typical scenarios: 1) benchmark case function validation, 2) observation-constraint function validation, and 3) a virtual root module generation for root function investigation and evaluation. We believe that our strategies and experience in scientific function test framework can be beneficial to many other research programs that adapt integrated environmental modeling methodology. --- paper_title: Testing Scientific Software: A Systematic Literature Review paper_content: Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. 
Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. --- paper_title: Examining random and designed tests to detect code mistakes in scientific software paper_content: Abstract Successfully testing computational software to detect code mistakes is impacted by multiple factors. One factor is the tolerance accepted in test output. Other factors are the nature of the code mistake, the characteristics of the code structure, and the choice of test input. We have found that randomly generated test input is a viable approach to testing for code mistakes and that simple structural metrics have little predictive power in the type of testing required. We provide further evidence that reduction of tolerance in expected test output has a much larger impact than running many more tests to discover code mistakes. --- paper_title: Lessons from Space paper_content: Given the parallels between the complexity of human spaceflight and large software systems, there are many things we developers can learn from successful space programs, such as the Soyuz. First, limiting a project's scope and complexity early on can have a dramatic payoff in its success and longevity. In addition, adding generous margins to early estimates (and any subsequent revisions) will ease the pain of development and deployment. Furthermore, gradual evolution with a working program at each step, rather than massive rewrites, benefits from successful architectures and teams, while also retaining the software's customer base and third-party contributors. Finally, a well-defined modular structure can increase the software's versatility yielding economies of scope and scale over its lifetime. --- paper_title: Building PDE codes to be verifiable and validatable paper_content: For codes that solve nonlinear partial differential equations (PDEs), powerful methodologies already exist for verification of codes, verification of calculations, and validation (V2V). If computational scientists and engineers are serious about these issues, they will take the responsibility and the relatively little extra effort to design (or modify) their codes so that independent users can confirm V2V. --- paper_title: Engineering the Software for Understanding Climate Change paper_content: Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. 
This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques. --- paper_title: Assuring the Future? A Look at Validating Climate Model Software paper_content: The scientific community studying climate change uses a variety of strategies to assess the correctness of their models. These software systems represent large, sophisticated, fine-grained scientific tools. The validation practices described are thus tailored to a domain in which software and software engineering practices are useful but cannot be allowed to get in the way of the science. In audio interviews, two scientists--Robert Jacob, a computational climate scientist at Argonne National Laboratory, and Gavin Schmidt, a climatologist and climate modeler at the NASA Goddard Institute for Space Studies--discuss what it means to develop and communicate ground-breaking results. --- paper_title: Limited discrepancy search revisited paper_content: Harvey and Ginsberg's limited discrepancy search (LDS) is based on the assumption that costly heuristic mistakes are made early in the search process. Consequently, LDS repeatedly probes the state space, going against the heuristic (i.e., taking discrepancies) a specified number of times in all possible ways and attempts to take those discrepancies as early as possible. LDS was improved by Richard Korf, to become improved LDS (ILDS), but in doing so, discrepancies were taken as late as possible, going against the original assumption. Many subsequent algorithms have faithfully inherited Korf's interpretation of LDS, and take discrepancies late. This then raises the question: Should we take our discrepancies late or early? We repeat the original experiments performed by Harvey and Ginsberg and those by Korf in an attempt to answer this question. We also investigate the early stopping condition of the YIELDS algorithm, demonstrating that it is simple, elegant and efficient. --- paper_title: On Testing Non-testable Programs paper_content: It is widely accepted that the fundamental limitation of using program testing techniques to determine the correctness of a program is the inability to extrapolate from the correctness of results for a proper subset of the input domain to the program's correctness for all elements of the domain. In particular, for any proper subset of the domain there are infinitely many programs which produce the correct output on those elements, but produce an incorrect output for some other domain element. None the less we routinely test programs to increase our confidence in their correctness, and a great deal of research is currently being devoted to improving the effectiveness of program testing. These efforts fall into three primary categories: (1) the development of a sound theoretical basis for testing; (2) devising and improving testing methodologies, particularly mechanizable ones; (3) the definition of accurate measures of and criteria for test data adequacy. Almost all of the research on software testing therefore focuses on the development and analysis of input data. In particular there is an underlying assumption that once this phase is complete, the remaining tasks are straightforward. These consist of running the program on the selected data, producing output which is then examined to determine the program's correctness on the test data. 
The mechanism which checks this correctness is known as an oracle, and the belief that the tester is routinely able to determine whether or not the test output is correct is the oracle assumption. --- paper_title: Software Engineering for Scientists paper_content: At two recent workshops, participants discussed the juxtaposition of software engineering with the development of scientific computational software. --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development paper_content: Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. --- paper_title: Assessing climate model software quality: a defect density analysis of three models paper_content: A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model, one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness. --- paper_title: How Well Do Coupled Models Simulate Today's Climate? paper_content: Coupled climate models are sophisticated tools designed to simulate the Earth climate system and the complex interactions between its components. Currently, more than a dozen centers around the world develop climate models to enhance our understanding of climate and climate change and to support the activities of the Intergovernmental Panel on Climate Change (IPCC). However, climate models are not perfect. Our theoretical understanding of climate is still incomplete, and certain simplifying assumptions are unavoidable when building these models. This introduces biases into their simulations, which sometimes are surprisingly difficult to correct.
Model imperfections have attracted criticism, with some arguing that model-based projections of climate --- paper_title: Software Testing and Verification in Climate Model Development paper_content: Over the past 30 years, most climate models have grown from relatively simple representations of a few atmospheric processes to complex multidisciplinary systems. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Verification processes for model implementations rely almost exclusively on some combination of detailed analyses of output from full climate simulations and system-level regression tests. Besides being costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects they can detect, isolate, and diagnose. Mitigating these weaknesses of coarse-grained testing with finer-grained unit tests has been perceived as cumbersome and counterproductive. Recent advances in commercial software tools and methodologies have led to a renaissance of systematic fine-grained testing. This opens new possibilities for testing climate-modeling-software methodologies. --- paper_title: The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development paper_content: Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. --- paper_title: Task-directed software inspection paper_content: Software inspection is recognized as an effective verification technique. Despite this fact, the use of inspection is surprisingly low. This paper describes a new inspection technique, called task-directed inspection (TDI), and a light-weight process, that were used to introduce inspection in a particular industrial environment. This environment had no history of inspections, was resistant to the idea of inspection, but had a situation where confidence in a safety-related legacy suite of software had to be increased. The characteristics of TDI are explored. They give rise to a variety of approaches that may encourage more widespread use of inspections. This paper examines the industrial exercise as a case study, with the intent that it be useful in other situations that share characteristics with the situation described. --- paper_title: Scientific Software Testing: Analysis with Four Dimensions paper_content: By analyzing our testing exercise through the four dimensions of context, goals, techniques, and adequacy, we developed a better understanding of how to effectively test a piece of scientific software. Once we considered the scientist-tester as part of the testing system, the exercise evolved in a way that made use of and increased his knowledge of the software. One result was an approach to software assessment that combines inspection with code execution. Another result was the suppression of process-driven testing in favor of goal-centric approaches. The combination of software engineer working with scientist was successful in this case.
The software engineer brings a toolkit of ideas, and the scientist chooses and fashions the tools into something that works for a specific situation. Unlike many other types of software systems, scientific software includes the scientist as an integral part of the system. The tools that support the scientist must include the scientist's knowledge and goals in their design. This represents a different way of considering the juxtaposition of software engineering with scientific software development. --- paper_title: The Chimera of Software Quality paper_content: Despite years of computing progress, today's systems experience spectacular and all-too-frequent crashes, while many enormously expensive projects fail to produce anything useful. Of equal importance, and potentially more damaging, are the misleading smaller defects we tend to miss. From time to time, we must remind ourselves that the underlying quality of the software that our results and progress increasingly depend on will likely be flawed and even more dependent on independent corroboration than the science itself. Many scientific results are corrupted, perhaps fatally so, by undiscovered mistakes in the software used to calculate and present those results. --- paper_title: The T-experiments: errors in scientific software paper_content: Extensive tests showed that many software codes widely used in science and engineering are not as accurate as we would like to think. It is argued that better software engineering practices would help solve this problem, but realizing that the problem exists is an important first step. --- paper_title: What every computer scientist should know about floating-point arithmetic paper_content: Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point. --- paper_title: Dealing with Risk in Scientific Software Development paper_content: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers. --- paper_title: A Component Architecture for High-Performance Scientific Computing paper_content: The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing.
In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry. --- paper_title: Object construction and destruction design patterns in Fortran 2003 paper_content: This paper presents object-oriented design patterns in the context of object construction and destruction. The examples leverage the newly supported object-oriented features of Fortran 2003. We describe from the client perspective two patterns articulated by Gamma et al. [1]: abstract factory and factory method. We also describe from the implementation perspective one new pattern: the object pattern. We apply the Gamma et al. patterns to solve a partial differential equation, and we discuss applying the new pattern to a quantum vortex dynamics code. Finally, we address consequences and describe the use of the patterns in two open-source software projects: ForTrilinos and Morfeus. --- paper_title: Patterns in scientific software: an introduction paper_content: Patterns are a well-understood methodology for object-oriented software architecture, especially for business applications. Scientific programmers have generally avoided object-oriented approaches because of their heavy computational overhead, but the benefits of using patterns for scientific problems can outweigh their costs. This article introduces the concept of object oriented software patterns and discusses how they can be applied to scientific software problems.
After a brief explanation of what patterns are and why they can be relevant to scientific software, the author explores the application of patterns to dynamic-systems simulation, such as molecular dynamics, and identifies four design patterns that emerge in modeling such systems. To illustrate how to reuse a general pattern for a specific problem, he applies one of the dynamic simulation patterns to the different problem of hydrodynamic chemistry tracers. --- paper_title: Object-oriented design patterns in Fortran 90/95: mazev1, mazev2 and mazev3 paper_content: This paper discusses the concept, application, and usefulness of software design patterns for scientific programming in Fortran 90/95. An example from the discipline of object-oriented design patterns, that of a game based on navigation through a maze, is used to describe how some important patterns can be implemented in Fortran 90/95 and how the progressive introduction of design patterns can usefully restructure Fortran software as it evolves. This example is complemented by a discussion of how design patterns have been used in a real-life simulation of Particle-in-Cell plasma physics. The following patterns are mentioned in this paper: Factory, Strategy, Template, Abstract Factory and Facade. --- paper_title: Design Patterns for e-Science paper_content: This is a book about a code and about coding. The code is a case study which has been used to teach courses in e-Science at the Australian National University since 2001. Students learn advanced programming skills and techniques in the Java language. Above all, they learn to apply useful object-oriented design patterns as they progressively refactor and enhance the software. We think our case study, EScope, is as close to real life as you can get! It is a smaller version of a networked, graphical, waveform browser which is used in the control rooms of fusion energy experiments around the world. It is quintessential e-Science in the sense of e-Science being computer science and information technology in the service of science. It is not, specifically, Grid-enabled, but we develop it in a way that will facilitate its deployment onto the Grid. The standard version of EScope interfaces with a specialised database for waveforms, and related data, known as MDSplus. On the accompanying CD, we have provided you with software which will enable you to install MDSplus, EScope and sample data files onto Windows or Linux computers. There is much additional software including many versions of the case study as it gets built up and progressively refactored using design patterns. There will be a home web-site for this book which will contain up-to-date information about the software and other aspects of the case study. --- paper_title: Design patterns and Fortran 90/95 paper_content: In the literature on object oriented programming (OO), design patterns are a very popular subject. Apart from any hype that may be connected to the concept, they are supposed to help you look at a programming problem and come up with a robust design for its solution. The reason design patterns work is not that they are something new, but instead that they are time-honoured, well-developed solutions. I will not repeat the story about architectural design patterns and Christopher Alexander who recognised their potential.
Instead I will try to explain how these (software) design patterns can be used in setting up Fortran 90/95 programs, despite the "fact" that Fortran 90/95 lacks certain OO features, such as inheritance and polymorphism. It may not be stressed in all OO literature, but design patterns help you find solutions that do not necessarily involve inheritance or polymorphism (cf. Shalloway and Trott, 2002).Design patterns come by fancy names such as the Adapter pattern or the Decorations pattern and explaining what they are and how to use them is best done via a few examples. --- paper_title: Software Engineering for Scientists paper_content: At two recent workshops, participants discussed the juxtaposition of software engineering with the development of scientific computational software. --- paper_title: Refactoring to Patterns paper_content: In 1994, Design Patterns changed the landscape of object-oriented development by introducing classic solutions to recurring design problems. In 1999, Refactoring revolutionized design by introducing an effective process for improving code. With the highly anticipated Refactoring to Patterns, Joshua Kerievsky has changed our approach to design by forever uniting patterns with the evolutionary process of refactoring.This book introduces the theory and practice of pattern-directed refactorings: sequences of low-level refactorings that allow designers to safely move designs to, towards, or away from pattern implementations. Using code from real-world projects, Kerievsky documents the thinking and steps underlying over two dozen pattern-based design transformations. Along the way he offers insights into pattern differences and how to implement patterns in the simplest possible ways.Coverage includes: A catalog of twenty-seven pattern-directed refactorings, featuring real-world code examples Descriptions of twelve design smells that indicate the need for this book's refactorings General information and new insights about patterns and refactoring Detailed implementation mechanics: how low-level refactorings are combined to implement high-level patterns Multiple ways to implement the same pattern-and when to use each Practical ways to get started even if you have little experience with patterns or refactoringRefactoring to Patterns reflects three years of refinement and the insights of more than sixty software engineering thought leaders in the global patterns, refactoring, and agile development communities. Whether you're focused on legacy or “greenfield” development, this book will make you a better software designer by helping you learn how to make important design changes safely and effectively. --- paper_title: Developing scientific applications using Generative Programming paper_content: Scientific applications usually involve large number of distributed and dynamic resources and huge datasets. A mechanism like checkpointing is essential to make these applications resilient to failures. Using checkpointing as an example, this paper presents an approach for integrating the latest software engineering techniques with the development of scientific software. Generative programming is used in this research to achieve the goals of non-intrusive reengineering of existing applications to insert the checkpointing mechanism and to decouple the checkpointing-specifications from its actual implementation. The end-user specifies the checkpointing details at a higher level of abstraction, using which the necessary code is generated and woven into the application. 
The lessons learned and the implementation approach presented in this paper can be applied to the development of scientific applications in general. The paper also demonstrates that the generated code does not introduce any inaccuracies and its performance is comparable to the manually inserted code. --- paper_title: Requirements Analysis for Engineering Computation: A Systematic Approach for Improving Reliability paper_content: This paper argues that the reliability of engineering computation can be significantly improved by adopting software engineering methodologies for requirements analysis and specification. The argument centers around the fact that the only way to judge the reliability of a system is by comparison to a specification of the requirements. This paper also points to methods for documenting the requirements. In particular, a requirements template is proposed for specifying engineering computation software. To make the mathematical specification easily understandable by all stakeholders, the requirements documentation employs the technique of using tabular expressions. To clarify the presentation, this paper includes a case study of the documentation for a system for analyzing statically determinant beams. --- paper_title: Electronic Documents Give Reproducible Research a New Meaning paper_content: A revolution in education and technology transfer follows from the marriage of word processing and software command scripts. In this marriage an author attaches to every figure caption a pushbutton or a name tag usable to recalculate the figure from all its data, parameters, and programs. This provides a concrete definition of reproducibility in computationally oriented research. Experience at the Stanford Exploration Project shows that preparing such electronic documents is little effort beyond our customary report writing; mainly, we need to file everything in a systematic way. --- paper_title: myExperiment: social networking for workflow-using e-scientists paper_content: We present the Taverna workflow workbench and argue that scientific workflow environments need a rich ecosystem of tools that support the scientists' experimental lifecycle. Workflows are scientific objects in their own right, to be exchanged and reused. myExperiment is a new initiative to create a social networking environment for workflow workers. We present the motivation for myExperiment and sketch the proposed capabilities and challenges. We argue that actively engaging with a scientist's needs, fears and reward incentives is crucial for success. --- paper_title: A Universal Identifier for Computational Results paper_content: Abstract We present a discipline for verifiable computational scientific research. Our discipline revolves around three simple new concepts — verifiable computational result (VCR), VCR repository and Verifiable Result Identifier (VRI). These are web- and cloud-computing oriented concepts, which exploit today's web infrastructure to achieve standard, simple and automatic reproducibility in computational scientific research. The VCR discipline requires very slight modifications to the way researchers already conduct their computational research and authoring, and to the way publishers manage their content. In return, the discipline marks a significant step towards delivering on the long-anticipated promises of making scientific computation truly reproducible.
A researcher practicing this discipline in everyday work produces computational scripts and word processor files that look very much like those they already produce today, but in which a few lines change very subtly and naturally. Those scripts produce a stream of verifiable results, which are the same tables, figures, charts and datasets the researcher traditionally would have produced, but which are watermarked for permanent identification by a VRI, and are automatically and permanently stored in a VCR repository. In a scientific community practicing Verifiable Computational Research, exchange of both ideas and data involves exchanging result identifiers—VRIs—rather than exchanging files. These identifiers are controlled, trusted and automatically generated strings that point to the publicly available result as it was originally created by the computational process itself. When a verifiable result is included in a publication, its identifier can be used by any reader with a web browser to locate, browse and, where appropriate, re-execute the computation that produced the result. Journal readers can therefore scrutinize, dispute, understand and eventually trust these computational results, all to an extent impossible through textual explanations that constitute the core of scientific publications to date. In addition, the result identifier can be used by subsequent computations to locate and retrieve both the published result (in graphical or numerical form) and the original datasets used by its generating computation. Colleagues can thus cite and import data into their own computations, just as traditional publications allow them to cite and import ideas. We describe an existing software implementation of the Verifiable Computational Research discipline, and argue that it solves many of the crucial problems commonly facing computer-based and computer-aided research in various scientific fields. Our system is secure, naturally adapted to large-scale and cloud computations and to modern massive data analysis, yet places effectively no additional workload on either the researcher or the publisher. --- paper_title: A literate experimentation manifesto paper_content: This paper proposes a new approach to experimental computer systems research, which we call Literate Experimentation. Conventionally, experimental procedure and writeup are divided into distinct phases: i.e. setup (the method), data collection (the results) and analysis (the evaluation of the results). Our concept of a literate experiment is to have a single, rich, human-generated, text-based description of a particular experiment, from which can be automatically derived: (1) a summary of the experimental setup to include in the paper; (2) a sequence of executable commands to setup a computer platform ready to perform the actual experiment; (3) the experiment itself, executed on this appropriately configured platform; and, (4) a means of generating results tables and graphs from the experimental output, ready for inclusion in the paper. Our Literate Experimentation style has largely been inspired by Knuth's Literate Programming philosophy. Effectively, a literate experiment is a small step towards the executable paper panacea. In this work, we argue that a literate experimentation approach makes it easier to produce rigorous experimental evaluation papers.
We suggest that such papers are more likely to be accepted for publication, due to (a) the imposed uniformity of structure, and (b) the assurance that experimental results are easily reproducible. We present a case study of a prototype literate experiment involving memory management in Jikes RVM. --- paper_title: Taverna , lessons in creating a workflow environment for the life sciences paper_content: Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The myGrid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science. --- paper_title: The Collage Authoring Environment paper_content: Abstract The Collage Authoring Environment is a software infrastructure which enables domain scientists to collaboratively develop and publish their work in the form of executable papers. It corresponds to the recent developments in both e-Science and computational technologies which call for a novel publishing paradigm. As part of this paradigm, static content (such as traditional scientific publications) should be supplemented with elements of interactivity, enabling reviewers and readers to reexamine the reported results by executing parts of the software on which such results are based as well as access primary scientific data. Taking into account the presented rationale we propose an environment which enables authors to seamlessly embed chunks of executable code (called assets) into scientific publications and allow repeated execution of such assets on underlying computing and data storage resources, as required by scientists who wish to build upon the presented results. The Collage Authoring Environment can be deployed on arbitrary resources, including those belonging to high performance computing centers, scientific e-Infrastructures and resources contributed by the scientists themselves. The environment provides access to static content, primary datasets (where exposed by authors) and executable assets. Execution features are provided by a dedicated engine (called the Collage Server) and embedded into an interactive view delivered to readers, resembling a traditional research publication but interactive and collaborative in its scope. Along with a textual description of the Collage environment the authors also present a prototype implementation, which supports the features described in this paper. The functionality of this prototype is discussed along with theoretical assumptions underpinning the proposed system. 
--- paper_title: Automated Capture of Experiment Context for Easier Reproducibility in Computational Research paper_content: Published scientific research that relies on numerical computations is too often not reproducible. For computational research to become consistently and reliably reproducible, the process must become easier to achieve, as part of day-to-day research. A combination of best practices and automated tools can make it easier to create reproducible research. --- paper_title: SHARE: a web portal for creating and sharing executable research papers paper_content: Abstract This paper describes how SHARE (Sharing Hosted Autonomous Research Environments) satisfies the criteria of the Elsevier 2011 Executable Paper Grand Challenge. This challenge aims at disseminating the use of systems that provide reviewers and fellow scientists a convenient way to reproduce computational results of research papers. This can involve among others the calculation of a number, the plotting of a diagram, the automatic proof of a theorem or the interactive transformation of various inputs into a complex output document. Besides reproducing the literate results, readers of an executable paper should also be able to explore the result space by entering different input parameters than the ones reported in the original text. SHARE is a web portal that enables academics to create, share, and access remote virtual machines that can be cited from research papers. By deploying in SHARE a copy of the required operating system as well as all the relevant software and data, authors can make a conventional paper fully reproducible and interactive. Shared virtual machines can also contain the original paper text— when desirable even with embedded computations. This paper shows the concrete potential of SHARE-based articles by means of an example virtual machine that is based on a conventional research article published by Elsevier recently. More generally, it demonstrates how SHARE has supported the publication workflow of a journal special issue and various workshop proceedings. Finally, it clarifies how the SHARE architecture supports among others the Elsevier challenge's licensing and scalability requirements without domain specific restrictions. --- paper_title: Developing open source scientific practice paper_content: 3 Routine practice: 3.1 Version control; 3.2 Execution automation; 3.3 Testing; 3.4 Readability; 3.5 Infrastructure. --- paper_title: Literate programming in quantum chemistry: A simple example paper_content: Literate programming methods have not been widely used in computational molecular structure theory. We argue that significant advantages would result from the use of literate programming methods in the computational molecular sciences and, indeed, in computational science, in general. Our arguments are illustrated by a simple example of literate programming methods in ab initio electronic structure theory. We distinguish text-embedded code, or literate programs, from code-embedded text and suggest that the latter may form a more useful vehicle for publication than literate programs.
--- paper_title: Beyond accuracy: what data quality means to data consumers paper_content: Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers. A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer. Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework. --- paper_title: The Earth System Grid: Supporting the Next Generation of Climate Modeling Research paper_content: Understanding the Earth's climate system and how it might be changing is a preeminent scientific challenge. Global climate models are used to simulate past, present, and future climates, and experiments are executed continuously on an array of distributed supercomputers. The resulting data archive, spread over several sites, currently contains upwards of 100 TB of simulation data and is growing rapidly. Looking toward mid-decade and beyond, we must anticipate and prepare for distributed climate research data holdings of many petabytes. The Earth System Grid (ESG) is a collaborative interdisciplinary project aimed at addressing the challenge of enabling management, discovery, access, and analysis of these critically important datasets in a distributed and heterogeneous computational environment. The problem is fundamentally a Grid problem. Building upon the Globus toolkit and a variety of other technologies, ESG is developing an environment that addresses authentication, authorization for data access, large-scale data transport and management, services and abstractions for high-performance remote data access, mechanisms for scalable data replication, cataloging with rich semantic and syntactic information, data discovery, distributed monitoring, and Web-based portals for using the system. --- paper_title: Data Curation at Scale: The Data Tamer System paper_content: Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite.
There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T., Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. --- paper_title: A software architecture-based framework for highly distributed and data intensive scientific applications paper_content: Modern scientific research is increasingly conducted by virtual communities of scientists distributed around the world. The data volumes created by these communities are extremely large, and growing rapidly. The management of the resulting highly distributed, virtual data systems is a complex task, characterized by a number of formidable technical challenges, many of which are of a software engineering nature. In this paper we describe our experience over the past seven years in constructing and deploying OODT, a software framework that supports large, distributed, virtual scientific communities. We outline the key software engineering challenges that we faced, and addressed, along the way. We argue that a major contributor to the success of OODT was its explicit focus on software architecture. We describe several large-scale, real-world deployments of OODT, and the manner in which OODT helped us to address the domain-specific challenges induced by each deployment. --- paper_title: Verification of Computer Simulation Models paper_content: The problem of validating computer simulation models of industrial systems has received only limited attention in the management science literature. The purpose of this paper is to consider the problem of validating computer models in the light of contemporary thought in the fields of philosophy of science, economic theory, and statistics. In order to achieve this goal we have attempted to gather together and present some of the ideas of scientific philosophers, economists, statisticians, and practitioners in the field of simulation which are relevant to the problem of verifying simulation models. We have paid particular attention to the writings of economists who have been concerned with testing the validity of economic models. Among the questions which we shall consider are included: What does it mean to verify a computer model of an industrial system?
Are there any differences between the verification of computer models and the verification of other types of models? If so, what are some of these differences? Also considered are a number of measures and techniques for testing the "goodness of fit" of time series generated by computer models to observed historical series. --- paper_title: Automated testing of refactoring engines paper_content: Refactorings are behavior-preserving program transformations that improve the design of a program. Refactoring engines are tools that automate the application of refactorings: first the user chooses a refactoring to apply, then the engine checks if the transformation is safe, and if so, transforms the program. Refactoring engines are a key component of modern IDEs, and programmers rely on them to perform refactorings. A bug in the refactoring engine can have severe consequences as it can erroneously change large bodies of source code. We present a technique for automated testing of refactoring engines. Test inputs for refactoring engines are programs. The core of our technique is a framework for iterative generation of structurally complex test inputs. We instantiate the framework to generate abstract syntax trees that represent Java programs. We also create several kinds of oracles to automatically check that the refactoring engine transformed the generated program correctly. We have applied our technique to testing Eclipse and NetBeans, two popular open-source IDEs for Java, and we have exposed 21 new bugs in Eclipse and 24 new bugs in NetBeans. --- paper_title: The Recomputation Manifesto paper_content: Replication of scientific experiments is critical to the advance of science. Unfortunately, the discipline of Computer Science has never treated replication seriously, even though computers are very good at doing the same thing over and over again. Not only are experiments rarely replicated, they are rarely even replicable in a meaningful way. Scientists are being encouraged to make their source code available, but this is only a small step. Even in the happy event that source code can be built and run successfully, running code is a long way away from being able to replicate the experiment that code was used for. I propose that the discipline of Computer Science must embrace replication of experiments as standard practice. I propose that the only credible technique to make experiments truly replicable is to provide copies of virtual machines in which the experiments are validated to run. I propose that tools and repositories should be made available to make this happen. I propose to be one of those who makes it happen. --- paper_title: Managing Chaos: Lessons Learned Developing Software in the Life Sciences paper_content: In the life sciences, the need to balance the costs and benefits of introducing software processes into a research environment presents a distinct set of challenges due to the cultural disconnect between life sciences research and software engineering. The Institute for Systems Biology's research informatics team has studied these challenges and developed a software process to address them. --- paper_title: A Software Chasm: Software Engineering and Scientific Computing paper_content: Some time ago, a chasm opened between the scientific-computing community and the software engineering community. Originally, computing meant scientific computing. 
Today, science and engineering applications are at the heart of software systems such as environmental monitoring systems, rocket guidance systems, safety studies for nuclear stations, and fuel injection systems. Failures of such health-, mission-, or safety-related systems have served as examples to promote the use of software engineering best practices. Yet, the bulk of the software engineering community's research is on anything but scientific-application software. This chasm has many possible causes. In this article, we look at the impact of one particular contributor in industry. --- paper_title: A vision towards Scientific Communication Infrastructures On bridging the realms of Research Digital Libraries and Scientific Data Centers paper_content: The two pillars of the modern scientific communication are Data Centers and Research Digital Libraries (RDLs), whose technologies and admin staff support researchers at storing, curating, sharing, and discovering the data and the publications they produce. Being realized to maintain and give access to the results of complementary phases of the scientific research process, such systems are poorly integrated with one another and generally do not rely on the strengths of the other. Today, such a gap hampers achieving the objectives of the modern scientific communication, that is, publishing, interlinking, and discovery of all outcomes of the research process, from the experimental and observational datasets to the final paper. In this work, we envision that instrumental to bridge the gap is the construction of "Scientific Communication Infrastructures". The main goal of these infrastructures is to facilitate interoperability between Data Centers and RDLs and to provide services that simplify the implementation of the large variety of modern scientific communication patterns. --- paper_title: SHARE: a web portal for creating and sharing executable research papers paper_content: Abstract This paper describes how SHARE (Sharing Hosted Autonomous Research Environments) satisfies the criteria of the Elsevier 2011 Executable Paper Grand Challenge. This challenge aims at disseminating the use of systems that provide reviewers and fellow scientists a convenient way to reproduce computational results of research papers. This can involve among others the calculation of a number, the plotting of a diagram, the automatic proof of a theorem or the interactive transformation of various inputs into a complex output document. Besides reproducing the literate results, readers of an executable paper should also be able to explore the result space by entering different input parameters than the ones reported in the original text. SHARE is a web portal that enables academics to create, share, and access remote virtual machines that can be cited from research papers. By deploying in SHARE a copy of the required operating system as well as all the relevant software and data, authors can make a conventional paper fully reproducible and interactive. Shared virtual machines can also contain the original paper text— when desirable even with embedded computations. This paper shows the concrete potential of SHARE-based articles by means of an example virtual machine that is based on a conventional research article published by Elsevier recently. More generally, it demonstrates how SHARE has supported the publication workflow of a journal special issue and various workshop proceedings. 
Finally, it clarifies how the SHARE architecture supports among others the Elsevier challenge's licensing and scalability requirements without domain specific restrictions. --- paper_title: Reproducible research for scientific computing: Tools and strategies for changing the culture paper_content: This article considers the obstacles involved in creating reproducible computational research as well as some efforts and approaches to overcome them. ---
Title: Bridging the Chasm: A Survey of Software Engineering Practice in Scientific Programming Section 1: INTRODUCTION Description 1: This section introduces the indispensable role of software in scientific research and outlines the benefits and challenges of its use. Section 2: CASE STUDIES OF SOFTWARE PROCESSES Description 2: This section reviews various case studies exploring the relationship between scientific research and software development practices. Section 3: Scientific Programming in Practice Description 3: This section provides an overview of published case studies focusing on general practices in scientific programming. Section 4: Agile Methods Description 4: Exploration of the use and adaptation of agile methods and practices in scientific programming teams is discussed in this section. Section 5: Project Team Evolution and Software Documentation Description 5: This section discusses the evolutionary nature of scientific software development teams and the importance of documentation. Section 6: Best Practices Description 6: Proposals for best practices in scientific programming based on experiences and observations in the domain are summarized in this section. Section 7: QUALITY ASSURANCE PRACTICES Description 7: This section addresses the importance of quality assurance practices in scientific programming and explores various techniques such as testing and inspections. Section 8: Testing Description 8: This section delves into the challenges and methodologies of testing scientific software to ensure reliability and accuracy. Section 9: Inspections Description 9: The use of inspections as a quality assurance technique in scientific programming, including methodologies and case studies, is discussed here. Section 10: Continuous Integration Description 10: Recent practices of continuous integration employed in scientific programming efforts are reviewed in this section. Section 11: Formal Methods Description 11: This section covers the application of formal methods in scientific programming and their benefits in ensuring software correctness. Section 12: DESIGN, EVOLUTION, AND MAINTENANCE Description 12: This section discusses the management and maintenance of long-term scientific software projects, highlighting component architectures, design patterns, and refactoring techniques. Section 13: Component Architectures Description 13: The integration and structuring of heterogeneous component architectures in scientific computing are outlined in this section. Section 14: Design Patterns Description 14: Exploration of applying software design patterns to scientific programming to manage complexity is provided in this section. Section 15: Refactoring and Reengineering Techniques Description 15: Techniques for refactoring and reengineering legacy scientific software to enhance maintainability are discussed in this section. Section 16: Workflow Management and Executable Research Papers Description 16: This section reviews tools and approaches for managing scientific workflows and the role of executable research papers in enhancing reproducibility. Section 17: DATA QUALITY Description 17: The challenges and methods for ensuring data quality in scientific research are discussed in this section. Section 18: CONCLUSIONS Description 18: The concluding section summarizes the challenges and unresolved issues in integrating software engineering practices into scientific programming and proposes future directions for research and practice.
Trust Management Survey
10
--- paper_title: Human Experiments in Trust Dynamics paper_content: In the literature, the validity of theories or models for trust is usually based on intuition and common sense. Theories and models are not often verified experimentally. The research reported here contributes results of experiments on the dynamics of trust over time depending on positive or negative experiences. In previous research a number of dynamic properties for such trust dynamics were identified, but not verified empirically. As a continuation of this work, now these properties have been verified in an experimental setting. The outcomes of the experiment (involving a substantial number of 238 subjects) are discussed and related to the previously formulated dynamic properties. --- paper_title: A computational model of trust and reputation paper_content: Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores. --- paper_title: Managing Internet-Mediated Community Trust Relations paper_content: This paper advances a framework for analysing and managing community trust relations. The framework is based upon an analysis of the evidence for different forms of trust in community relations and of the experiential dimensions of community relations that promote trust levels. It features a community trust cycle, a trust compact and an experience management matrix which collectively support managers in addressing the relational dynamics of community trust relations. We show that this framework can be used to analyse relations that are mediated by ICT and that the framework supports the identification of opportunities to better promote ICT-mediated trust development and promulgation. --- paper_title: Towards Dynamic Security Perimeters for Virtual Collaborative Networks paper_content: Rapid technological advancements capitalising on the convergence of information (middleware) and communication (network) technologies now enable open application-to-application communication and bring about the prospect of ad hoc integration of systems across organisational boundaries to support collaborations that may last for a single transaction or evolve dynamically over a longer period. Architectures for managing networks of collaborating peers in such environments face new security and trust management challenges. In this paper we will introduce the basic elements of such an architecture emphasising trust establishment, secure collaboration, distributed monitoring and performance assessment issues. 
--- paper_title: Engineering Trust Based Collaborations in a Global Computing Environment paper_content: Trust management seems a promising approach for dealing with security concerns in collaborative applications in a global computing environment. However, the characteristics of this environment require a move from reliable identification to mechanisms for the recognition of entities. Furthermore, they require explicit reasoning about the risks of interactions, and a notion of uncertainty in the underlying trust model. From our experience of engineering collaborative applications in such an environment, we found that the relationship between trust and risk is a fundamental issue. In this paper, as an initial step towards an engineering approach for the development of trust based collaborative applications, we focus on the relationship between trust and risk, and explore alternative views of this relationship. We also exemplify how particular views can be exploited in two particular application scenarios. This paper builds upon our previous work in developing a general model for trust based collaborations. --- paper_title: A computational model of trust and reputation paper_content: Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores. --- paper_title: Hardware security appliances for trust paper_content: This paper looks at the trust relationships that exist within an outsourcing scenario finding that whilst some of the trust relationships are clear other implicit trust relationships need exposing. These implicit trust relationships are often a result of information supplied for the main explicit task for which an entity is being trusted. The use of hardware security appliance based services is proposed allowing trust to be dissipated over multiple parties whilst retaining efficient execution. Such an approach helps mitigate these implicit trust relationships by increasing the control and transparency given to the trustor. --- paper_title: Analysing the Relationship between Risk and Trust paper_content: Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface. 
--- paper_title: Trust Development and Management in Virtual Communities paper_content: The web is increasingly used as a platform and an enabler for the existence of virtual communities. However, there is evidence that the growth and adoption of these communities is being held back by many barriers- including that of trust development and management. This paper discusses the potential benefits and barriers to the introduction of trust development and management in virtual communities. Based on the analysis of the barriers and benefits of trust development and management, mechanisms for supporting its development and management is proposed and presented. Ideas for further research are presented and discussed. The paper is based on ongoing research and is part of a research bid towards the introduction of a trust development and management framework to support the creation of trusted virtual communities. --- paper_title: Modeling Controls for Dynamic Value Exchanges in Virtual Organizations paper_content: The e 3 -value modeling tool was developed for the design of a value proposition for virtual organizations. However, it is less suitable for designing the control structure of the virtual organization. We show how e 3 -value can be extended using legal concepts such as ownership, possession, usufruct and license. We also introduce value object transfer diagrams that show the transfers of value objects graphically and that can be used for elicitation of the required control mechanisms in order for the virtual organization to function properly and with a level of risk that is acceptable to all parties in the virtual organization. --- paper_title: Designing and Evaluating E-Business Models paper_content: This article presents an e-business modeling approach that combines the rigorous approach of IT systems analysis with an economic value perspective from business sciences. --- paper_title: Trust Propagation in Small Worlds paper_content: The possibility of a massive, networked infrastructure of diverse entities partaking in collaborative applications with each other increases more and more with the proliferation of mobile devices and the development of ad hoc networking technologies. In this context, traditional security measures do not scale well. We aim to develop trust-based security mechanisms using small world concepts to optimise formation and propagation of trust amongst entities in these vast networks. In this regard, we surmise that in a very large mobile ad hoc network, trust, risk, and recommendations can be propagated through relatively short paths connecting entities. Our work describes the design of trust-formation and risk-assessment systems, as well as that of an entity recognition scheme, within the context of the small world network topology. --- paper_title: KAoS: A Policy and Domain Services Framework for Grid Computing and Semantic Web Services paper_content: In this article we introduce KAoS, a policy and domain services framework based on W3C’s OWL ontology language. KAoS was developed in response to the challenges presented by emerging semantic application requirements for infrastructure, especially in the area of security and trust management. The KAoS architecture, ontologies, policy representation, management and disclosure mechanisms are described. KAoS enables the specification and enforcement of both authorization and obligation policies. The use of ontologies as a source of policy vocabulary enables its extensibility. 
KAoS has been adapted for use in several applications and deployment platforms. We briefly describe its integration with the Globus Grid Computing environment. --- paper_title: Towards Dynamic Security Perimeters for Virtual Collaborative Networks paper_content: Rapid technological advancements capitalising on the convergence of information (middleware) and communication (network) technologies now enable open application-to-application communication and bring about the prospect of ad hoc integration of systems across organisational boundaries to support collaborations that may last for a single transaction or evolve dynamically over a longer period. Architectures for managing networks of collaborating peers in such environments face new security and trust management challenges. In this paper we will introduce the basic elements of such an architecture emphasising trust establishment, secure collaboration, distributed monitoring and performance assessment issues. --- paper_title: Decentralized trust management paper_content: We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application. This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services. --- paper_title: Engineering Trust Based Collaborations in a Global Computing Environment paper_content: Trust management seems a promising approach for dealing with security concerns in collaborative applications in a global computing environment. However, the characteristics of this environment require a move from reliable identification to mechanisms for the recognition of entities. Furthermore, they require explicit reasoning about the risks of interactions, and a notion of uncertainty in the underlying trust model. From our experience of engineering collaborative applications in such an environment, we found that the relationship between trust and risk is a fundamental issue. In this paper, as an initial step towards an engineering approach for the development of trust based collaborative applications, we focus on the relationship between trust and risk, and explore alternative views of this relationship. We also exemplify how particular views can be exploited in two particular application scenarios. This paper builds upon our previous work in developing a general model for trust based collaborations. --- paper_title: Simulating the Effect of Reputation Systems on E-markets paper_content: Studies show that reputation systems have the potential to improve market quality. In this paper we report the results of simulating a market of trading agents that uses the beta reputation system for collecting feedback and computing agents' reputations. The simulation confirms the hypothesis that the presence of the reputation system improves the quality of the market. 
Among other things it also shows that a market with limited duration rather than infinite longevity of transaction feedback provides the best conditions under which agents can adapt to each others change in behaviour. --- paper_title: Automated trust negotiation paper_content: Distributed software subjects face the problem of determining one another's trustworthiness. The problem considered is managing the exchange of credentials between strangers for the purpose of property-based authentication and authorization when credentials are sensitive. An architecture for trust negotiation between client and server is presented. The notion of a trust negotiation strategy is introduced and examined with respect to an abstract model of trust negotiation. Two strategies with very different properties are defined and analyzed. A language of credential expressions is presented, with two example negotiations illustrating the two negotiation strategies. Ongoing work on policies governing credential disclosure and trust negotiation is summarized. --- paper_title: Revocation in the Privilege Calculus paper_content: We have previously presented a framework for updating privileges and creating management structures by means of authority certificates. These are used both to create access-level permissions and to delegate authority to other agents. In this paper we extend the framework to support a richer set of revocation schemes. As in the original, we present an associated calculus of privileges, encoded as a logic program, for reasoning about certificates, revocations, and the privileges they create and destroy. The discussion of revocation schemes follows an existing classification in the literature based on three separate dimensions: resilience, propagation, and dominance. The first does not apply to this framework. The second is specified straightforwardly. The third can be encoded but raises a number of further questions for future investigation. --- paper_title: Server based application level authorisation for Rotor paper_content: Delegent is an authorisation server developed to provide a single centralised policy repository for multiple applications with support for decentralised administration by means of delegation. The author investigates how to integrate Delegent with the Rotor implementation of the .NET framework and compare the features of Delegent with those of the existing application level authorisation models of .NET. He concludes that Delegent offers help for application developers and a decentralised administration model, which are not available in standard .NET, and that the .NET model is well suited to be extended to use an authorisation server. --- paper_title: Specifying and Analysing Trust for Internet Applications paper_content: The Internet is now being used for commercial, social and educational interactions, which previously relied on direct face-to-face contact in order to establish trust relationships. Thus, there is a need to be able to establish and evaluate trust relationships relying only on electronic interactions over the Internet. A trust framework for Internet applications should incorporate concepts such as experience, reputation and trusting propensity in order to specify and evaluate trust. SULTAN (Simple Universal Logic-oriented Trust Analysis Notation) is an abstract, logic-oriented notation designed to facilitate the specification and analysis of trust relationships.
SULTAN seeks to address all the above issues, although this paper focuses on our initial work on trust specification and analysis. --- paper_title: The Ponder Policy Specification Language paper_content: The Ponder language provides a common means of specifying security policies that map onto various access control implementation mechanisms for firewalls, operating systems, databases and Java. It supports obligation policies that are event triggered condition-action rules for policy based management of networks and distributed systems. Ponder can also be used for security management activities such as registration of users or logging and auditing events for dealing with access to critical resources or security violations. Key concepts of the language include roles to group policies relating to a position in an organisation, relationships to define interactions between roles and management structures to define a configuration of roles and relationships pertaining to an organisational unit such as a department. These reusable composite policy specifications cater for the complexity of large enterprise information systems. Ponder is declarative, strongly-typed and object-oriented which makes the language flexible, extensible and adaptable to a wide range of management requirements. --- paper_title: Analysing the Relationship between Risk and Trust paper_content: Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface. --- paper_title: Challenges for trust, fraud and deception research in multi-agent systems paper_content: Discussions at the 5th Workshop on Deception, Fraud and Trust in Agent Societies held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002) centered around many important research issues1. This paper attempts to challenge researchers in the community toward future work concerning three issues inspired by the workshop's roundtable discussion: (1) distinguishing elements of an agent's behavior that influence its trustworthiness, (2) building reputation-based trust models without relying on interaction, and (3) benchmarking trust modeling algorithms. Arguments justifying the validity of each problem are presented, and benefits from their solutions are enumerated. --- paper_title: A policy language for a pervasive computing environment paper_content: We describe a policy language designed for pervasive computing applications that is based on deontic concepts and grounded in a semantic language. The pervasive computing environments under consideration are those in which people and devices are mobile and use various wireless networking technologies to discover and access services and devices in their vicinity. Such pervasive environments lend themselves to policy-based security due to their extremely dynamic nature. Using policies allows the security functionality to be modified without changing the implementation of the entities involved. 
However, along with being extremely dynamic, these environments also tend to span several domains and be made up of entities of varied capabilities. A policy language for environments of this sort needs to be very expressive but lightweight and easily extensible. We demonstrate the feasibility of our policy language in pervasive environments through a prototype used as part of a secure pervasive system. --- paper_title: Supporting trust in virtual communities paper_content: At any given time, the stability of a community depends on the right balance of trust and distrust. Furthermore, we face information overload, increased uncertainty and risk taking as a prominent feature of modern living. As members of society, we cope with these complexities and uncertainties by relying on trust, which is the basis of all social interactions. Although a small number of trust models have been proposed for the virtual medium, we find that they are largely impractical and artificial. In this paper we provide and discuss a trust model that is grounded in real-world social trust characteristics, and based on a reputation mechanism, or word-of-mouth. Our proposed model allows agents to decide which other agents' opinions they trust more and allows agents to progressively tune their understanding of another agent's subjective recommendations. --- paper_title: Patterns of trust and policy paper_content: This paper proposes a new paradigm of trust and policy that provides a unified treatment of organizational and data system policies. Policy is the programming language of organizations and just like any other language must be formally specified or specifiable. This paper attempts to demonstrate that it is specifiable. Trust is a major component of policy. Trust is presented as a function of specific elements - identity, reputation, capability, stake and benefit. These elements are defined and presented in the form of a trust equation. The points at which trust enters into the formal definition of policy are identified. The trust equation provides a useful way to describe trust in general that is not circular (unlike many previous definitions). The resulting constructs can be sufficiently nontechnical that both systems people and those without a technical background can understand them. The availability of a common language to guide analysis of policy requirements, policy formulation and policy execution may provide a way for organizations to break out of a recurring cycle of policy failures. --- paper_title: Using trust for secure collaboration in uncertain environments paper_content: The SECURE project investigates the design of security mechanisms for pervasive computing based on trust. It addresses how entities in unfamiliar pervasive computing environments can overcome initial suspicion to provide secure collaboration. --- paper_title: Decentralized trust management paper_content: We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application.
This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services. --- paper_title: A computational model of trust and reputation paper_content: Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores. --- paper_title: Automated trust negotiation paper_content: Distributed software subjects face the problem of determining one another's trustworthiness. The problem considered is managing the exchange of credentials between strangers for the purpose of property-based authentication and authorization when credentials are sensitive. An architecture for trust negotiation between client and server is presented. The notion of a trust negotiation strategy is introduced and examined with respect to an abstract model of trust negotiation. Two strategies with very different properties are defined and analyzed. A language of credential expressions is presented, with two example negotiations illustrating the two negotiation strategies. Ongoing work on policies governing credential disclosure and trust negotiation is summarized. --- paper_title: Reasoning About Trust: A Formal Logical Framework paper_content: There is no consensus about the definition of the concept of trust. In this paper formal definitions of different kinds of trust are given in the framework of modal logic. This framework also allows to define a logic for deriving consequences from a set of assumptions about trust.Trust is defined as a mental attitude of an agent with respect to some property held by another agent. These properties are systematically analysed and we propose 6 epistemic properties, 4 deontic properties and 1 dynamic property. --- paper_title: Specifying and Analysing Trust for Internet Applications paper_content: The Internet is now being used for commercial, social and educational interactions, which previously relied on direct face-to-face contact in order to establish trust relationships. Thus, there is a need to be able to establish and evaluate trust relationships relying only on electronic interactions over the Internet. A trust framework for Internet applications should incorporate concepts such as experience, reputation and trusting propensity in order to specify and evaluate trust. SULTAN (Simple Universal Logic-oriented Trust Analysis Notation) is an abstract, logic-oriented notation designed to facilitate the specification and analysis of trust relationships. 
SULTAN seeks to address all the above issues, although this paper focuses on our initial work on trust specification and analysis. --- paper_title: Analysing the Relationship between Risk and Trust paper_content: Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface. --- paper_title: "Trust me, I'm an online vendor": towards a model of trust for e-commerce system design paper_content: Consumers' lack of trust has often been cited as a major barrier to the adoption of electronic commerce (e-commerce). To address this problem, a model of trust was developed that describes what design factors affect consumers' assessment of online vendors' trustworthiness. Six components were identified and regrouped into three categories: Prepurchase Knowledge, Interface Properties and Informational Content. This model also informs the Human-Computer Interaction (HCI) design of e-commerce systems in that its components can be taken as trust-specific high-level user requirements. --- paper_title: Automated management of inter-organisational applications paper_content: Inter-organisational applications require improved support from middleware services. This paper analyses the management requirements of multidomain applications, covering both technological domains and organisations (administrative business domains). The analysis leads to a suggested middleware solution for these needs, presenting a group of cooperative, pervasive management services that use a common, abstract management language suitable for expressing a platform-independent model of cooperation between sovereign components and contracts about selected technologies. The approach is compatible with OMG MDA (Model Driven Architecture) work. --- paper_title: Statistical trustability (conceptual work) paper_content: Trust management and trust-aware applications have gained importance in re- cent years (e.g., [1,2]). Conventional security mechanisms are unable to cover all application domains. For example, in ad hoc networks, the frequent lack of an online connection to the Internet makes routine tasks very difficult. Trust seems to be a good way to overcome such shortcomings because it does not necessarily rely on fixed centralized infrastructures [5]. --- paper_title: Simulating the Effect of Reputation Systems on E-markets paper_content: Studies show that reputation systems have the potential to improve market quality. In this paper we report the results of simulating a market of trading agents that uses the beta reputation system for collecting feedback and computing agents' reputations. The simulation confirms the hypothesis that the presence of the reputation system improves the quality of the market. Among other things it also shows that a market with limited duration rather than infinite longevity of transaction feedback provides the best conditions under which agents can adapt to each others change in behaviour. 
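The e-market simulation abstract above relies on the beta reputation system and on limiting the longevity of transaction feedback. As a rough, hedged illustration only (not the authors' implementation), the Python sketch below computes a beta-reputation score with an assumed exponential forgetting factor; the function name, the feedback format, and the forgetting constant are illustrative assumptions.

```python
# Minimal sketch of a beta-reputation score with feedback aging.
# Assumptions (not from the paper): forgetting factor lam, feedback given
# as a list of (positive_amount, negative_amount) tuples, oldest first.

def beta_reputation(feedback, lam=0.9):
    """Return a reputation score in [0, 1] from aged positive/negative feedback.

    The score is the expected value of a Beta(r + 1, s + 1) distribution,
    where r and s accumulate positive and negative feedback with exponential
    forgetting, so old transactions count less than recent ones.
    """
    r, s = 0.0, 0.0
    for pos, neg in feedback:          # oldest feedback first
        r = lam * r + pos              # decay old evidence, add new
        s = lam * s + neg
    return (r + 1.0) / (r + s + 2.0)   # expected value of the Beta posterior


if __name__ == "__main__":
    history = [(1, 0), (1, 0), (0, 1), (1, 0)]   # hypothetical transaction outcomes
    print(round(beta_reputation(history), 3))
```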
--- paper_title: A Case for Evidence-Aware Distributed Reputation Systems: Overcoming the Limitations of Plausibility Considerations paper_content: Reputation systems support trust formation in artificial societies by keeping track of the behavior of autonomous entities. In the absence of any commonly trusted entity, the reputation system has to be distributed to the autonomous entities themselves. They may cooperate by issuing recommendations of other entities’ trustworthiness. At the time being, distributed reputation systems rely on plausibility for assessing the truthfulness and consistency of such recommendations. In this paper, we point out the limitations of such plausibility considerations and present an alternative concept that is based on evidences. The concept combines the strengths of non-repudiability and distributed reputation systems. We analyze the issues that are related to the issuance and gathering of evidences. In this regard, we identify four patterns of how evidence-awareness overcomes the limitations of plausibility considerations. --- paper_title: Implementation of an Agent-Oriented Trust Management Infrastructure Based on a Hybrid PKI Model paper_content: Access control in modern computing environments is different from access control in the traditional setting of operating systems. For distributed computing systems, specification and enforcement of permissions can be based on a public key infrastructure which deals with public keys for asymmetric cryptography. Previous approaches and their implementations for applying a public key infrastructure are classified as based either on trusted authorities with licencing or on owners with delegations. We present the architecture and main features of a trust management infrastructure based on a hybrid model which unifies and extends the previous public key infrastructure approaches. The trust management infrastructure constitutes a flexible framework for experimenting with the applications of different trust models. --- paper_title: A trust matrix model for electronic commerce paper_content: We propose a so-called trust matrix model to build trust for conducting first trade transactions in electronic commerce; i.e. transactions in an electronic commerce environment between two parties that have never conducted trade transactions between them before. The trust matrix model is based on the idea that for business-to-business electronic commerce a balance has to be found between anonymous procedural trust, i.e. procedural solutions for trust building, and personal trust based on positive past experiences within an existing business relation. Procedural trust building solutions are important for first trade situations, because of the lack of experience in these situations. The procedural trust solutions are related to the notion of institution-based trust, because the trust in the procedural solutions depends on the trust one has in the institutions that issued or enforces the procedure. The trust matrix model can be used as a tool to analyze and develop trust-building services to help organizations conduct first-trade electronic commerce transactions. --- paper_title: Challenges for trust, fraud and deception research in multi-agent systems paper_content: Discussions at the 5th Workshop on Deception, Fraud and Trust in Agent Societies held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002) centered around many important research issues1. 
This paper attempts to challenge researchers in the community toward future work concerning three issues inspired by the workshop's roundtable discussion: (1) distinguishing elements of an agent's behavior that influence its trustworthiness, (2) building reputation-based trust models without relying on interaction, and (3) benchmarking trust modeling algorithms. Arguments justifying the validity of each problem are presented, and benefits from their solutions are enumerated. --- paper_title: Pinocchio: Incentives for Honest Participation in Distributed Trust Management paper_content: In this paper, we introduce a framework for providing incentives for honest participation in global-scale distributed trust management infrastructures. Our system can improve the quality of information supplied by these systems by reducing free-riding and encouraging honesty. Our approach is twofold: (1) we provide rewards for participants that advertise their experiences to others, and (2) impose the credible threat of halting the rewards, for a substantial amount of time, for participants who consistently provide suspicious feedback. For this purpose we develop an honesty metric which can indicate the accuracy of feedback. --- paper_title: An Intrusion-Detection Model paper_content: A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system. --- paper_title: Specification-based anomaly detection: a new approach for detecting network intrusions paper_content: Unlike signature or misuse based intrusion detection techniques, anomaly detection is capable of detecting novel attacks. However, the use of anomaly detection in practice is hampered by a high rate of false alarms. Specification-based techniques have been shown to produce a low rate of false alarms, but are not as effective as anomaly detection in detecting novel attacks, especially when it comes to network probing and denial-of-service attacks. This paper presents a new approach that combines specification-based and anomaly-based intrusion detection, mitigating the weaknesses of the two approaches while magnifying their strengths. Our approach begins with state-machine specifications of network protocols, and augments these state machines with information about statistics that need to be maintained to detect anomalies. We present a specification language in which all of this information can be captured in a succinct manner. We demonstrate the effectiveness of the approach on the 1999 Lincoln Labs intrusion detection evaluation data, where we are able to detect all of the probing and denial-of-service attacks with a low rate of false alarms (less than 10 per day). 
Whereas feature selection was a crucial step that required a great deal of expertise and insight in the case of previous anomaly detection approaches, we show that the use of protocol specifications in our approach simplifies this problem. Moreover, the machine learning component of our approach is robust enough to operate without human supervision, and fast enough that no sampling techniques need to be employed. As further evidence of effectiveness, we present results of applying our approach to detect stealthy email viruses in an intranet environment. --- paper_title: Adaptive real-time anomaly detection using inductively generated sequential patterns paper_content: A time-based inductive learning approach to the problem of real-time anomaly detection is described. This approach uses sequential rules that characterize a user's behavior over time. A rulebase is used to store patterns of user activities, and anomalies are reported whenever a user's activity deviates significantly from those specified in the rules. The rules in the rulebase characterize either the sequential relationships between security audit records or the temporal properties of the records. The rules are created in two ways: they are either dynamically generated and modified by a time-based inductive engine in order to adapt to changes in a user's behavior, or they are specified by the security management to implement a site security policy. This approach allows the correlation between adjacent security events to be exploited for the purpose of greater sensitivity in anomaly detection against seemingly intractable (or erratic) activities using statistical approaches. Real-time detection of anomaly activities is possible. > --- paper_title: A sense of self for Unix processes paper_content: A method for anomaly detection is introduced in which ``normal'' is defined by short-range correlations in a process' system calls. Initial experiments suggest that the definition is stable during normal behavior for standard UNIX programs. Further, it is able to detect several common intrusions involving sendmail and lpr. This work is part of a research program aimed at building computer security systems that incorporate the mechanisms and algorithms used by natural immune systems. --- paper_title: Using Internal Sensors For Computer Intrusion Detection paper_content: This dissertation introduces the concept of using internal sensors to perform intrusion detection in computer systems. It shows its practical feasibility and discusses its characteristics and related design and implementation issues. ::: We introduce a classification of data collection mechanisms for intrusion detection systems. At a conceptual level, these mechanisms are classified as direct and indirect monitoring. At a practical level, direct monitoring can be implemented using external or internal sensors. Internal sensors provide advantages with respect to reliability, completeness, timeliness and volume of data, in addition to efficiency and resistance against attacks. ::: We introduce an architecture called ESP as a framework for building intrusion detection systems based on internal sensors. We describe in detail a prototype implementation based on the ESP architecture and introduce the concept of embedded detectors as a mechanism for localized data reduction. ::: We show that it is possible to build both specific (specialized for a certain intrusion) and generic (able to detect different types of intrusions) detectors. 
Furthermore, we provide information about the types of data and places of implementation that are most effective in detecting different types of attacks. ::: Finally, performance testing of the ESP implementation shows the impact that embedded detectors can have on a computer system. Detection testing shows that embedded detectors have the capability of detecting a significant percentage of new attacks. --- paper_title: Trust-adapted enforcement of security policies in distributed component-structured applications paper_content: Software component technology on the one hand supports the cost-effective development of specialized applications. On the other hand, however it introduces special security problems. Some major problems can be solved by the automated run-time enforcement of security policies. Each component is controlled by a wrapper which monitors the component's behavior and checks its compliance with the security behavior constraints of the component's employment contract. Since control functions and wrappers can cause substantial overhead, we introduce trust-adapted control functions where the intensity of monitoring and behavior checks depends on the level of trust, the component, its hosting environment, and its vendor have currently in the eyes of the application administration. We report on wrappers and a trust information service, outline the embedding security model and architecture, and describe a Java Bean based experimental implementation. --- paper_title: Using trust for secure collaboration in uncertain environments paper_content: The SECURE project investigates the design of security mechanisms for pervasive computing based on trust. It addresses how entities in unfamiliar pervasive computing environments can overcome initial suspicion to provide secure collaboration. --- paper_title: A computational model of trust and reputation paper_content: Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores. --- paper_title: Specifying and Analysing Trust for Internet Applications paper_content: The Internet is now being used for commercial, social and educational interactions, which previously relied on direct face-to-face contact in order to establish trust relationships. Thus, there is a need to be able to establish and evaluate trust relationships relying only on electronic interactions over the Internet. A trust framework for Internet applications should incorporate concepts such as experience, reputation and trusting propensity in order to specify and evaluate trust. SULTAN (Simple Universal Logic-oriented Trust Analysis Notation) is an abstract, logic-oriented notation designed to facilitate the specification and analysis of trust relationships. 
SULTAN seeks to address all the above issues, although this paper focuses on our initial work on trust specification and analysis. --- paper_title: Enhanced Reputation Mechanism for Mobile Ad Hoc Networks paper_content: Interactions between entities unknown to each other are inevitable in the ambient intelligence vision of service access anytime, anywhere. Trust management through a reputation mechanism to facilitate such interactions is recognized as a vital part of mobile ad hoc networks, which features lack of infrastructure, autonomy, mobility and resource scarcity of composing light-weight terminals. However, the design of a reputation mechanism is faced by challenges of how to enforce reputation information sharing and honest recommendation elicitation. In this paper, we present a reputation model, which incorporates two essential dimensions, time and context, along with mechanisms supporting reputation formation, evolution and propagation. By introducing the notion of recommendation reputation, our reputation mechanism shows effectiveness in distinguishing truth-telling and lying agents, obtaining true reputation of an agent, and ensuring reliability against attacks of defame and collusion. --- paper_title: SULTAN-A Language for Trust Specification and Analysis paper_content: A precast unit, preferably of cementitious material, of circular outline having stone-like block units on the outer surface separated by mortar joints and top shoulder annular portions to serve as a guide to enable stacking like units above each other. The inner surface of each unit and outer surface of each projection extend in a downwardly and outwardly extending direction to provide a stepped inner wall. --- paper_title: Engineering Trust Based Collaborations in a Global Computing Environment paper_content: Trust management seems a promising approach for dealing with security concerns in collaborative applications in a global computing environment. However, the characteristics of this environment require a move from reliable identification to mechanisms for the recognition of entities. Furthermore, they require explicit reasoning about the risks of interactions, and a notion of uncertainty in the underlying trust model. From our experience of engineering collaborative applications in such an environment, we found that the relationship between trust and risk is a fundamental issue. In this paper, as an initial step towards an engineering approach for the development of trust based collaborative applications, we focus on the relationship between trust and risk, and explore alternative views of this relationship. We also exemplify how particular views can be exploited in two particular application scenarios. This paper builds upon our previous work in developing a general model for trust based collaborations. --- paper_title: A computational model of trust and reputation paper_content: Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. 
This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores. --- paper_title: Enhanced Reputation Mechanism for Mobile Ad Hoc Networks paper_content: Interactions between entities unknown to each other are inevitable in the ambient intelligence vision of service access anytime, anywhere. Trust management through a reputation mechanism to facilitate such interactions is recognized as a vital part of mobile ad hoc networks, which features lack of infrastructure, autonomy, mobility and resource scarcity of composing light-weight terminals. However, the design of a reputation mechanism is faced by challenges of how to enforce reputation information sharing and honest recommendation elicitation. In this paper, we present a reputation model, which incorporates two essential dimensions, time and context, along with mechanisms supporting reputation formation, evolution and propagation. By introducing the notion of recommendation reputation, our reputation mechanism shows effectiveness in distinguishing truth-telling and lying agents, obtaining true reputation of an agent, and ensuring reliability against attacks of defame and collusion. --- paper_title: SULTAN-A Language for Trust Specification and Analysis paper_content: A precast unit, preferably of cementitious material, of circular outline having stone-like block units on the outer surface separated by mortar joints and top shoulder annular portions to serve as a guide to enable stacking like units above each other. The inner surface of each unit and outer surface of each projection extend in a downwardly and outwardly extending direction to provide a stepped inner wall. ---
Title: Trust Management Survey
Section 1: Introduction
Description 1: Introduce the importance and challenges of trust management, and outline the structure of the paper.
Section 2: On the Nature of Trust
Description 2: Provide a brief overview of the trust phenomenon from a sociological perspective.
Section 3: Concepts for Trust Management
Description 3: Discuss trust as it is directed at independent actors and the factors affecting trust decisions.
Section 4: The Trust Management Model
Description 4: Explore the roots of trust management in authentication and authorization and its evolution into dynamic trust models.
Section 5: The Trust Information Model
Description 5: Examine the representation of trust, reputation, and related concepts in computer systems.
Section 6: The Tasks of a Trust Management System
Description 6: Identify the main challenges for trust management systems, including initialization, observation, and action based on trust decisions.
Section 7: Initializing a Trust Relationship
Description 7: Discuss methods for determining initial trust and the role of reputation systems.
Section 8: Observation
Description 8: Explore techniques for monitoring trustee behavior, including the use of intrusion detection systems.
Section 9: Evolving Reputation and Trust
Description 9: Describe how reputation and trust are updated based on observed behavior and gathered experience.
Section 10: Conclusions
Description 10: Summarize the findings, highlight the challenges, and suggest areas for future research in trust management.
Multimodal Machine Learning: A Survey and Taxonomy
7
--- paper_title: Coupled hidden Markov models for complex action recognition paper_content: We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm and a clear Bayesian semantics. However the Markovian framework makes strong restrictive assumptions about the system generating the signal-that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing $$\sim $$~0.25 M images, $$\sim $$~0.76 M questions, and $$\sim $$~10 M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa). --- paper_title: Generative Adversarial Text to Image Synthesis paper_content: Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. --- paper_title: Integration of acoustic and visual speech signals using neural networks paper_content: Results from a series of experiments that use neural networks to process the visual speech signals of a male talker are presented. In these preliminary experiments, the results are limited to static images of vowels. 
It is demonstrated that these networks are able to extract speech information from the visual images and that this information can be used to improve automatic vowel recognition. The structure of speech and its corresponding acoustic and visual signals are reviewed. The specific data that was used in the experiments along with the network architectures and algorithms are described. The results of integrating the visual and auditory signals for vowel recognition in the presence of acoustic noise are presented. --- paper_title: A new ASR approach based on independent processing and recombination of partial frequency bands paper_content: In the framework of hidden Markov models (HMM) or hybrid HMM/artificial neural network (ANN) systems, we present a new approach towards automatic speech recognition (ASR). The general idea is to split the whole frequency band (represented in terms of critical bands) into a few sub-bands on which different recognizers are independently applied and then recombined at a certain speech unit level to yield global scores and a global recognition decision. The preliminary results presented in this paper show that such an approach, even using quite simple recombination strategies, can yield at least comparable performance on clean speech while providing better robustness in the case of noisy speech. --- paper_title: The SEMAINE corpus of emotionally coloured character interactions paper_content: We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts in sequence four roles designed to evoke emotional reactions. The operator and the user are seated in separate rooms; they see each other through teleprompter screens, and hear each other through speakers. To allow high quality recording, they are recorded by five high-resolution, high framerate cameras, and by four microphones. All sensor information is recorded synchronously, with an accuracy of 25 μs. In total, we have recorded 20 participants, for a total of 100 character conversational and 50 non-conversational recordings of approximately 5 minutes each. All recorded conversations have been fully transcribed and annotated for five affective dimensions and partially annotated for 27 other dimensions. The corpus has been made available to the scientific community through a web-accessible database. --- paper_title: Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics paper_content: In [Hodosh et al., 2013], we establish a ranking-based framework for sentence-based image description and retrieval. We introduce a new dataset of images paired with multiple descriptive captions that was specifically designed for these tasks. We also present strong KCCA-based baseline systems for description and search, and perform an in-depth study of evaluation metrics for these two tasks. Our results indicate that automatic evaluation metrics for our ranking-based tasks are more accurate and robust than those proposed for generation-based image description. --- paper_title: Multimodal Video Indexing: A Review of the State-of-the-art paper_content: Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. In this paper we survey several methods aiming at automating this time and resource consuming process. Good reviews on single modality based video indexing have appeared in literature.
Effective indexing, however, requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in collaborative fashion. Therefore, instead of separately treating the different information sources involved, and their specific algorithms, we focus on the similarities and differences between the modalities. To that end we put forward a unifying and multimodal framework, which views a video document from the perspective of its author. This framework forms the guiding principle for identifying index types, for which automatic methods are found in literature. It furthermore forms the basis for categorizing these different methods. --- paper_title: Multimodal Saliency and Fusion for Movie Summarization Based on Aural, Visual, and Textual Attention paper_content: Multimodal streams of sensory information are naturally parsed and integrated by humans using signal-level feature extraction and higher level cognitive processes. Detection of attention-invoking audiovisual segments is formulated in this work on the basis of saliency models for the audio, visual, and textual information conveyed in a video stream. Aural or auditory saliency is assessed by cues that quantify multifrequency waveform modulations, extracted through nonlinear operators and energy tracking. Visual saliency is measured through a spatiotemporal attention model driven by intensity, color, and orientation. Textual or linguistic saliency is extracted from part-of-speech tagging on the subtitles information available with most movie distributions. The individual saliency streams, obtained from modality-depended cues, are integrated in a multimodal saliency curve, modeling the time-varying perceptual importance of the composite video stream and signifying prevailing sensory events. The multimodal saliency representation forms the basis of a generic, bottom-up video summarization algorithm. Different fusion schemes are evaluated on a movie database of multimodal saliency annotations with comparative results provided across modalities. The produced summaries, based on low-level features and content-independent fusion and selection, are of subjectively high aesthetic and informative quality. --- paper_title: AVEC 2011–The first international audio/visual emotion challenge paper_content: The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) is the first competition event aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. This paper first describes the challenge participation conditions. Next follows the data used - the SEMAINE corpus - and its partitioning into train, development, and test partitions for the challenge with labelling in four dimensions, namely activity, expectation, power, and valence. Further, audio and video baseline features are introduced as well as baseline results that use these features for the three sub-challenges of audio, video, and audiovisual emotion recognition. --- paper_title: Hidden Markov Models for Speech Recognition paper_content: The use of hidden Markov models for speech recognition has become predominant in the last several years, as evidenced by the number of published papers and talks at major speech conferences. 
The reasons this method has become so popular are the inherent statistical (mathematically precise) framework; the ease and availability of training algorithms for estimating the parameters of the models from finite training sets of speech data; the flexibility of the resulting recognition system in which one can easily change the size, type, or architecture of the models to suit particular words, sounds, and so forth; and the ease of implementation of the overall recognition system. In this expository article, we address the role of statistical methods in this powerful technology as applied to speech recognition and discuss a range of theoretical and practical issues that are as yet unsolved in terms of their importance and their effect on performance for different system implementations. --- paper_title: VizWiz: nearly real-time answers to visual questions paper_content: Visual information pervades our environment. Vision is used to decide everything from what we want to eat at a restaurant and which bus route to take to whether our clothes match and how long until the milk expires. Individually, the inability to interpret such visual information is a nuisance for blind people who often have effective, if inefficient, work-arounds to overcome them. Collectively, however, they can make blind people less independent. Specialized technology addresses some problems in this space, but automatic approaches cannot yet answer the vast majority of visual questions that blind people may have. VizWiz addresses this shortcoming by using the Internet connections and cameras on existing smartphones to connect blind people and their questions to remote paid workers' answers. VizWiz is designed to have low latency and low cost, making it both competitive with expensive automatic solutions and much more versatile. --- paper_title: Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition paper_content: Merging decisions from different modalities is a crucial problem in Audio-Visual Speech Recognition. To solve this, state synchronous multi-stream HMMs have been proposed for their important advantage of incorporating stream reliability in their fusion scheme. This paper focuses on stream weight adaptation based on modality confidence estimators. We assume different and time-varying environment noise, as can be encountered in realistic applications, and, for this, adaptive methods are best suited. Stream reliability is assessed directly through classifier outputs since they are not specific to either noise type or level. The influence of constraining the weights to sum to one is also discussed. --- paper_title: Multimodal Deep Learning paper_content: Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa.
Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning. --- paper_title: The AMI Meeting Corpus: A Pre-announcement paper_content: The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated. --- paper_title: Comparison of automatic shot boundary detection algorithms paper_content: Various methods of automatic shot boundary detection have been proposed and claimed to perform reliably. Although the detection of edits is fundamental to any kind of video analysis since it segments a video into its basic components, the shots, only few comparative investigations on early shot boundary detection algorithms have been published. These investigations mainly concentrate on measuring the edit detection performance, however, do not consider the algorithms' ability to classify the types and to locate the boundaries of the edits correctly. This paper extends these comparative investigations. More recent algorithms designed explicitly to detect specific complex editing operations such as fades and dissolves are taken into account, and their ability to classify the types and locate the boundaries of such edits are examined. The algorithms' performance is measured in terms of hit rate, number of false hits, and miss rate for hard cuts, fades, and dissolves over a large and diverse set of video sequences. The experiments show that while hard cuts and fades can be detected reliably, dissolves are still an open research issue. The false hit rate for dissolves is usually unacceptably high, ranging from 50% up to over 400%. Moreover, all algorithms seem to fail under roughly the same conditions. --- paper_title: Text to 3D Scene Generation with Rich Lexical Grounding paper_content: The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects.
Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. --- paper_title: Multimodal fusion for multimedia analysis: a survey paper_content: This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks. The existing literature on multimodal fusion research is presented through several classifications based on the fusion methodology and the level of fusion (feature, decision, and hybrid). The fusion methods are described from the perspective of the basic concept, advantages, weaknesses, and their usage in various analysis tasks as reported in the literature. Moreover, several distinctive issues that influence a multimodal fusion process such as, the use of correlation and independence, confidence level, contextual information, synchronization between different modalities, and the optimal modality selection are also highlighted. Finally, we present the open issues for further research in the area of multimodal fusion. --- paper_title: A survey of recent advances in visual feature detection paper_content: Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. 
To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network paper_content: The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of ‘context-aware’ emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database. --- paper_title: Multimodal Learning with Deep Boltzmann Machines paper_content: Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.
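Several of the abstracts above (the multimedia fusion survey and the multi-stream HMM modality-weighting paper in particular) contrast feature-level and decision-level fusion. As a generic, hedged sketch only (not any specific paper's method), the Python/numpy snippet below shows the two strategies side by side; the toy feature dimensions, the stand-in classifier, and the fixed stream weights are illustrative assumptions.

```python
import numpy as np

# Toy per-modality features for one sample (dimensions are illustrative).
audio_feat = np.random.rand(64)
video_feat = np.random.rand(128)

# --- Early (feature-level) fusion: concatenate features, then classify. ---
fused_features = np.concatenate([audio_feat, video_feat])   # shape (192,)

# --- Late (decision-level) fusion: score each stream, then combine decisions. ---
def toy_classifier(x, n_classes=3, seed=0):
    """Stand-in linear scorer with softmax; a real system would use trained models."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_classes, x.shape[0]))
    logits = w @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

audio_scores = toy_classifier(audio_feat, seed=1)
video_scores = toy_classifier(video_feat, seed=2)

# Stream weights could be adapted from reliability estimates; here they are fixed.
w_audio, w_video = 0.4, 0.6
late_fused_scores = w_audio * audio_scores + w_video * video_scores
predicted_class = int(np.argmax(late_fused_scores))
print(fused_features.shape, late_fused_scores, predicted_class)
```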
--- paper_title: Representation Learning: A Review and New Perspectives paper_content: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Deep Canonical Correlation Analysis paper_content: We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks. --- paper_title: Order-Embeddings of Images and Language paper_content: Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. 
In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval. --- paper_title: Learning Grounded Meaning Representations with Autoencoders paper_content: In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models. --- paper_title: Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis paper_content: For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context.
As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing $$\sim $$~0.25 M images, $$\sim $$~0.76 M questions, and $$\sim $$~10 M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa). --- paper_title: Describing Videos by Exploiting Temporal Structure paper_content: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. --- paper_title: Neural Machine Translation by Jointly Learning to Align and Translate paper_content: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition. --- paper_title: Learning Representations for Multimodal Data with Deep Belief Nets paper_content: We propose a Deep Belief Network architecture for learning a joint representation of multimodal data. 
The model defines a probability distribution over the space of multimodal inputs and allows sampling from the conditional distributions over each data modality. This makes it possible for the model to create a multimodal representation even when some data modalities are missing. Our experimental results on bi-modal data consisting of images and text show that the Multimodal DBN can learn a good generative model of the joint space of image and text inputs that is useful for filling in missing data so it can be used both for image annotation and image retrieval. We further demonstrate that using the representation discovered by the Multimodal DBN our model can significantly outperform SVMs and LDA on discriminative tasks. --- paper_title: SoundNet: Learning Sound Representations from Unlabeled Video paper_content: We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels. --- paper_title: Multi-modal Dimensional Emotion Recognition using Recurrent Neural Networks paper_content: Emotion recognition has been an active research area with both wide applications and big challenges. This paper presents our effort for the Audio/Visual Emotion Challenge (AVEC2015), whose goal is to explore utilizing audio, visual and physiological signals to continuously predict the value of the emotion dimensions (arousal and valence). Our system applies the Recurrent Neural Networks (RNN) to model temporal information. We explore various aspects to improve the prediction performance including: the dominant modalities for arousal and valence prediction, duration of features, novel loss functions, directions of Long Short Term Memory (LSTM), multi-task learning, different structures for early feature fusion and late fusion. Best settings are chosen according to the performance on the development set. Competitive experimental results compared with the baseline show the effectiveness of the proposed methods. --- paper_title: Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space paper_content: Past research in analysis of human affect has focused on recognition of prototypic expressions of six basic emotions based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect.
Converging with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for dimensional and continuous prediction of emotions in valence and arousal space, 2) compares the performance of two state-of-the-art machine learning techniques applied to the target problem, the bidirectional Long Short-Term Memory neural networks (BLSTM-NNs), and Support Vector Machines for Regression (SVR), and 3) proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions. Evaluation of the proposed approach has been done using the spontaneous SAL data from four subjects and subject-dependent leave-one-sequence-out cross validation. The experimental results obtained show that: 1) on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context, 2) the proposed output-associative fusion framework outperforms feature-level and model-level fusion by modeling and learning correlations and patterns between the valence and arousal dimensions, and 3) the proposed system is well able to reproduce the valence and arousal ground truth obtained from human coders. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Bimodal recognition experiments with recurrent neural networks paper_content: A bimodal automatic speech recognition system, using simultaneously auditory model and articulatory parameters, is described. Results given for various speaker dependent phonetic recognition experiments, regarding the Italian plosive class, show the usefulness of this approach especially in noisy conditions. > --- paper_title: Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text paper_content: This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of Youtube videos as well as two large movie description datasets showing significant improvements in grammaticality while modestly improving descriptive quality. 
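To make the Adam entry above concrete, here is a minimal NumPy sketch of one Adam parameter update with bias-corrected first and second moment estimates; the hyperparameter values are the commonly quoted defaults and the toy objective is an assumption for illustration only:

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update based on adaptive estimates of lower-order moments.
    m = b1 * m + (1 - b1) * grad            # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2       # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 101):                     # minimize the toy objective f(x) = ||x||^2
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                                # parameters move toward the origin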
--- paper_title: Dropout: a simple way to prevent neural networks from overfitting paper_content: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets. --- paper_title: Deep multimodal hashing with orthogonal regularization paper_content: Hashing is an important method for performing efficient similarity search. With the explosive growth of multimodal data, how to learn hashing-based compact representations for multimodal data becomes highly non-trivial. Compared with shallow-structured models, deep models present superiority in capturing multimodal correlations due to their high nonlinearity. However, in order to make the learned representation more accurate and compact, how to reduce the redundant information lying in the multimodal representations and incorporate different complexities of different modalities in the deep models is still an open problem. In this paper, we propose a novel deep multimodal hashing method, namely Deep Multimodal Hashing with Orthogonal Regularization (DMHOR), which fully exploits intra-modality and inter-modality correlations. In particular, to reduce redundant information, we impose orthogonal regularizer on the weighting matrices of the model, and theoretically prove that the learned representation is guaranteed to be approximately orthogonal. Moreover, we find that a better representation can be attained with different numbers of layers for different modalities, due to their different complexities. Comprehensive experiments on WIKI and NUS-WIDE, demonstrate a substantial gain of DMHOR compared with state-of-the-art methods. --- paper_title: Multimodal Learning with Deep Boltzmann Machines paper_content: Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. 
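The dropout entry above can be illustrated with a few lines of NumPy: during training each unit is kept with probability p and the surviving activations are rescaled ("inverted" dropout), so the layer can be used unchanged at test time. This is a generic sketch under that convention, not code from the cited work:

import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_keep=0.5, train=True):
    # Zero each unit with probability 1 - p_keep during training and rescale
    # survivors by 1/p_keep; act as the identity at test time.
    if not train:
        return x
    mask = rng.random(x.shape) < p_keep
    return x * mask / p_keep

h = np.ones(10)
print(dropout(h, p_keep=0.8))        # some units zeroed, survivors scaled to 1.25
print(dropout(h, train=False))       # unchanged at test time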
The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time. --- paper_title: Understanding the difficulty of training deep feedforward neural networks paper_content: Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, and explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence. Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. They include learning methods for a wide array of deep architectures, including neural networks with many hidden layers and graphical models with many levels of hidden variables, among others (Weston et al., 2008). Much attention has recently been devoted to them (see (Bengio, 2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009), suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Most of the recent experimental results with deep architecture are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion?
Part of the answer may be found in recent analyses of the effect of unsupervised pretraining (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a “better” basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results. So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact). --- paper_title: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift paper_content: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters. --- paper_title: Audio-visual deep learning for noise robust speech recognition paper_content: Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. 
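The two training-stability entries above lend themselves to short sketches: the normalized ("Xavier"/Glorot) initialization draws weights from U[-a, a] with a = sqrt(6 / (fan_in + fan_out)), and batch normalization standardizes each feature over the mini-batch before applying a learned scale and shift. The NumPy code below is an illustrative rendering of those two ideas (layer sizes and data are arbitrary), not the authors' reference implementations:

import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    # Normalized initialization: W ~ U[-a, a], a = sqrt(6 / (fan_in + fan_out)).
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

def batch_norm(X, gamma, beta, eps=1e-5):
    # Standardize each feature over the mini-batch, then scale and shift.
    mu = X.mean(axis=0)
    var = X.var(axis=0)
    X_hat = (X - mu) / np.sqrt(var + eps)
    return gamma * X_hat + beta

W = glorot_uniform(256, 128)
X = rng.normal(loc=3.0, scale=5.0, size=(32, 128))            # a poorly scaled mini-batch
Y = batch_norm(X, gamma=np.ones(128), beta=np.zeros(128))
print(W.std().round(3), Y.mean().round(3), Y.std().round(3))  # BN output has ~0 mean, ~1 std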
We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. --- paper_title: Representation Learning: A Review and New Perspectives paper_content: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning. --- paper_title: Multi-source Deep Learning for Human Pose Estimation paper_content: Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets. --- paper_title: Multimodal Deep Learning paper_content: Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks.
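A compact NumPy sketch of the gating idea in the long short-term memory entry above: multiplicative input, forget and output gates control read and write access to an additive cell state, which is what lets error flow persist over long lags. The parameter shapes and the inclusion of a forget gate follow the now-standard formulation rather than the original 1997 design, and the random weights are placeholders:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM step. W: (4H, D), U: (4H, H), b: (4H,). Gate order: i, f, o, then candidate g.
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])                # candidate cell update
    c_new = f * c + i * g               # additive cell state ("constant error carousel")
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 5, 8
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):      # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.round(3))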
In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning. --- paper_title: Deep learning for robust feature generation in audiovisual emotion recognition paper_content: Automatic emotion recognition systems predict high-level affective content from low-level human-centered signal cues. These systems have seen great improvements in classification accuracy, due in part to advances in feature selection methods. However, many of these feature selection methods capture only linear relationships between features or alternatively require the use of labeled data. In this paper we focus on deep learning techniques, which can overcome these limitations by explicitly capturing complex non-linear feature interactions in multimodal data. We propose and evaluate a suite of Deep Belief Network models, and demonstrate that these models show improvement in emotion classification performance over baselines that do not employ deep learning. This suggests that the learned high-order non-linear relationships are effective for emotion recognition. --- paper_title: A Fast Learning Algorithm for Deep Belief Nets paper_content: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind. --- paper_title: Deep Boltzmann machines paper_content: We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottomup pass. 
We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks. --- paper_title: Multimodal Dynamic Networks for Gesture Recognition paper_content: Multimodal input is a real-world situation in gesture recognition applications such as sign language recognition. In this paper, we propose a novel bi-modal (audio and skeleton joints) dynamic network for gesture recognition. First, state-of-the-art dynamic Deep Belief Networks are deployed to extract high level audio and skeletal joints representations. Then, instead of traditional late fusion, we adopt another layer of perceptron for cross modality learning taking the input from each individual net's penultimate layer. Finally, to account for temporal dynamics, the learned shared representations are used for estimating the emission probability to infer action sequences. In particular, we demonstrate that multimodal feature learning will extract semantically meaningful shared representations, outperforming individual modalities, and the early fusion scheme's efficacy against the traditional method of late fusion. --- paper_title: Deep multimodal learning for Audio-Visual Speech Recognition paper_content: In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of 41% under clean condition on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of 35.83% demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of 34.03%. --- paper_title: Autoencoders, Minimum Description Length and Helmholtz Free Energy paper_content: An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. 
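The fusion strategy recurring in the gesture-recognition and audio-visual speech entries above (concatenating penultimate-layer features of separately trained uni-modal networks and training a joint model on top) reduces, in its simplest form, to the sketch below. The random "features", the toy labels, and the logistic-regression head are stand-ins for the papers' actual networks and data:

import numpy as np

rng = np.random.default_rng(0)
n = 200
audio_feat = rng.normal(size=(n, 16))            # stand-in for penultimate-layer audio features
video_feat = rng.normal(size=(n, 24))            # stand-in for penultimate-layer video features
y = (audio_feat[:, 0] + video_feat[:, 0] > 0).astype(float)

X = np.hstack([audio_feat, video_feat])          # mid-level (feature-level) fusion

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                             # simple logistic-regression "joint model"
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

print(((sigmoid(X @ w + b) > 0.5) == y).mean())  # training accuracy of the fused model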
Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes. --- paper_title: Exploring Inter-feature and Inter-class Relationships with Deep Neural Networks for Video Classification paper_content: Videos contain very rich semantics and are intrinsically multimodal. In this paper, we study the challenging task of classifying videos according to their high-level semantics such as human actions or complex events. Although extensive efforts have been paid to study this problem, most existing works combined multiple features using simple fusion strategies and neglected the exploration of inter-class semantic relationships. In this paper, we propose a novel unified framework that jointly learns feature relationships and exploits the class relationships for improved video classification performance. Specifically, these two types of relationships are learned and utilized by rigorously imposing regularizations in a deep neural network (DNN). Such a regularized DNN can be efficiently launched using a GPU implementation with an affordable training cost. Through arming the DNN with better capability of exploring both the inter-feature and the inter-class relationships, the proposed regularized DNN is more suitable for identifying video semantics. With extensive experimental evaluations, we demonstrate that the proposed framework exhibits superior performance over several state-of-the-art approaches. On the well-known Hollywood2 and Columbia Consumer Video benchmarks, we obtain to-date the best reported results: 65.7% and 70.6% respectively in terms of mean average precision. --- paper_title: Extending Long Short-Term Memory for Multi-View Structured Learning paper_content: Long Short-Term Memory (LSTM) networks have been successfully applied to a number of sequence learning problems but they lack the design flexibility to model multiple view interactions, limiting their ability to exploit multi-view relationships. In this paper, we propose a Multi-View LSTM (MV-LSTM), which explicitly models the view-specific and cross-view interactions over time or structured outputs. We evaluate the MV-LSTM model on four publicly available datasets spanning two very different structured learning problems: multimodal behaviour recognition and image captioning. The experimental results show competitive performance on all four datasets when compared with state-of-the-art models. --- paper_title: FaceSync: A Linear Operator for Measuring Synchronization of Video Facial Images and Audio Tracks paper_content: FaceSync is an optimal linear algorithm that finds the degree of synchronization between the audio and image recordings of a human speaker. Using canonical correlation, it finds the best direction to combine all the audio and image data, projecting them onto a single axis. FaceSync uses Pearson's correlation to measure the degree of synchronization between the audio and image data. We derive the optimal linear transform to combine the audio and visual information and describe an implementation that avoids the numerical problems caused by computing the correlation matrices. --- paper_title: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions paper_content: We propose to use the visual denotations of linguistic expressions (i.e. 
the set of images they describe) to define novel denotational similarity metrics , which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph , i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. --- paper_title: Learning Concept Taxonomies from Multi-modal Data paper_content: We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap. --- paper_title: Cross-modal Retrieval with Correspondence Autoencoder paper_content: The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks. --- paper_title: Canonical Correlation Analysis: An Overview with Application to Learning Methods paper_content: We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model. 
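Since several of the surrounding entries build on canonical correlation analysis, a small NumPy sketch of plain two-view CCA may help: center and whiten each view, take the SVD of the whitened cross-covariance, and read the canonical correlations off the singular values. This is the generic textbook construction with an assumed small ridge term for numerical stability, not any one paper's implementation:

import numpy as np

def linear_cca(X, Y, k=2, reg=1e-6):
    # Return the top-k canonical correlations between views X and Y.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False)[:k]

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                    # shared latent signal between the two views
X = z @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(500, 6))
Y = z @ rng.normal(size=(2, 4)) + 0.5 * rng.normal(size=(500, 4))
print(linear_cca(X, Y).round(3))                 # two canonical correlations well above zero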
--- paper_title: WSABIE: Scaling Up to Large Vocabulary Image Annotation paper_content: Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory. --- paper_title: Relations Between Two Sets of Variates paper_content: Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each --- paper_title: Deep Visual-Semantic Hashing for Cross-Modal Retrieval paper_content: Due to the storage and retrieval efficiency, hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which enables efficient retrieval of images in response to text queries or vice versa, has received increasing attention recently. Most existing work on cross-modal hashing does not capture the spatial dependency of images and temporal dynamics of text sentences for learning powerful feature representations and cross-modal embeddings that mitigate the heterogeneity of different modalities. This paper presents a new Deep Visual-Semantic Hashing (DVSH) model that generates compact hash codes of images and sentences in an end-to-end deep learning architecture, which capture the intrinsic cross-modal correspondences between visual data and natural language. DVSH is a hybrid deep architecture that constitutes a visual-semantic fusion network for learning joint embedding space of images and text sentences, and two modality-specific hashing networks for learning hash functions to generate compact binary codes. 
Our architecture effectively unifies joint multimodal embedding and cross-modal hashing, which is based on a novel combination of Convolutional Neural Networks over images, Recurrent Neural Networks over sentences, and a structured max-margin objective that integrates all things together to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical evidence shows that our DVSH approach yields state of the art results in cross-modal retrieval experiments on image-sentences datasets, i.e. standard IAPR TC-12 and large-scale Microsoft COCO. --- paper_title: Large-scale supervised multimodal hashing with semantic correlation maximization paper_content: Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attentions have been payed to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability. --- paper_title: The classification of multi-modal data with hidden conditional random field paper_content: The classification of multi-modal data has been an active research topic in recent years. It has been used in many applications where the processing of multi-modal data is involved. Motivated by the assumption that different modalities in multi-modal data share latent structure (topics), this paper attempts to learn the shared structure by exploiting the symbiosis of multiple-modality and therefore boost the classification of multi-modal data, we call it Multi-modal Hidden Conditional Random Field (M-HCRF). M-HCRF represents the intrinsical structure shared by different modalities as hidden variables in a undirected general graphical model. When learning the latent shared structure of the multi-modal data, M-HCRF can discover the interactions among the hidden structure and the supervised category information. The experimental results show the effectiveness of our proposed M-HCRF when applied to the classification of multi-modal data. --- paper_title: Deep correspondence restricted Boltzmann machine for cross-modal retrieval paper_content: The task of cross-modal retrieval, i.e., using a text query to search for images or vice versa, has received considerable attention with the rapid growth of multi-modal web data. Modeling the correlations between different modalities is the key to tackle this problem. In this paper, we propose a correspondence restricted Boltzmann machine (Corr-RBM) to map the original features of bimodal data, such as image and text in our setting, into a low-dimensional common space, in which the heterogeneous data are comparable. 
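The cross-modal hashing entries above share the final step of mapping both modalities into a common Hamming space and retrieving by Hamming distance; the sketch below illustrates only that last step by sign-binarizing already-aligned (here randomly generated) image and text embeddings. It is a toy illustration, not the learned hash functions of the cited methods:

import numpy as np

def to_hash(Z):
    # Binarize real-valued codes into {0, 1} hash bits.
    return (Z > 0).astype(np.uint8)

def hamming(a, B):
    # Hamming distance from one code to each row of B.
    return (a ^ B).sum(axis=1)

rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 32))              # pretend joint embedding space
img_codes = to_hash(shared + 0.1 * rng.normal(size=shared.shape))
txt_codes = to_hash(shared + 0.1 * rng.normal(size=shared.shape))

query = txt_codes[7]                             # a text query ...
ranking = np.argsort(hamming(query, img_codes))  # ... retrieves images by Hamming distance
print(ranking[:5])                               # the matching image (index 7) should rank at or near the top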
In our Corr-RBM, two RBMs built for image and text, respectively are connected at their individual hidden representation layers by a correlation loss function. A single objective function is constructed to trade off the correlation loss and likelihoods of both modalities. Through the optimization of this objective function, our Corr-RBM is able to capture the correlations between two modalities and learn the representation of each modality simultaneously. Furthermore, we construct two deep neural structures using Corr-RBM as the main building block for the task of cross-modal retrieval. A number of comparison experiments are performed on three public real-world data sets. All of our models show significantly better results than state-of-the-art models in both searching images via text query and vice versa. --- paper_title: Data fusion through cross-modality metric learning using similarity-sensitive hashing paper_content: Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and the similarity between such data operates on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images. --- paper_title: Grounded Compositional Semantics for Finding and Describing Images with Sentences paper_content: Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image. --- paper_title: Deep Canonical Correlation Analysis paper_content: We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). 
It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks. --- paper_title: Order-Embeddings of Images and Language paper_content: Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval. --- paper_title: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models paper_content: Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison. --- paper_title: Audiovisual Synchronization and Fusion Using Canonical Correlation Analysis paper_content: It is well-known that early integration (also called data fusion) is effective when the modalities are correlated, and late integration (also called decision or opinion fusion) is optimal when modalities are uncorrelated. In this paper, we propose a new multimodal fusion strategy for open-set speaker identification using a combination of early and late integration following canonical correlation analysis (CCA) of speech and lip texture features. We also propose a method for high precision synchronization of the speech and lip features using CCA prior to the proposed fusion. 
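The visual-semantic embedding entries above are trained with variants of a contrastive pairwise ranking objective: the matching image-caption pair should score higher, by some margin, than pairs formed with sampled negatives. A generic NumPy version with cosine similarity and random stand-in embeddings is sketched below; the margin value and similarity choice are assumptions for illustration:

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def ranking_loss(img, cap, neg_imgs, neg_caps, margin=0.2):
    # Hinge ranking loss over contrastive (negative) images and captions.
    pos = cosine(img, cap)
    loss = 0.0
    for c_neg in neg_caps:
        loss += max(0.0, margin - pos + cosine(img, c_neg))
    for i_neg in neg_imgs:
        loss += max(0.0, margin - pos + cosine(i_neg, cap))
    return loss

rng = np.random.default_rng(0)
img = rng.normal(size=64)
cap = img + 0.1 * rng.normal(size=64)            # a caption embedded near its image
neg_imgs = rng.normal(size=(5, 64))
neg_caps = rng.normal(size=(5, 64))
print(round(ranking_loss(img, cap, neg_imgs, neg_caps), 4))   # small: the ranking constraints are mostly satisfied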
Experimental results show that i) the proposed fusion strategy yields the best equal error rates (EER), which are used to quantify the performance of the fusion strategy for open-set speaker identification, and ii) precise synchronization prior to fusion improves the EER; hence, the best EER is obtained when the proposed synchronization scheme is employed together with the proposed fusion strategy. We note that the proposed fusion strategy outperforms others because the features used in the late integration are truly uncorrelated, since they are output of the CCA analysis. --- paper_title: Fisher Vectors Derived from Hybrid Gaussian-Laplacian Mixture Models for Image Annotation paper_content: In the traditional object recognition pipeline, descriptors are densely sampled over an image, pooled into a high dimensional non-linear representation and then passed to a classifier. In recent years, Fisher Vectors have proven empirically to be the leading representation for a large variety of applications. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). Motivated by the assumption that different distributions should be applied for different datasets, we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. An interesting property of the Expectation-Maximization algorithm for the latter is that in the maximization step, each dimension in each component is chosen to be either a Gaussian or a Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks. --- paper_title: Hashing for Similarity Search: A Survey paper_content: Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space. --- paper_title: Kernel and Nonlinear Canonical Correlation Analysis paper_content: We review a neural implementation of the statistical technique of Canonical Correlation Analysis (CCA) and extend it to nonlinear CCA. We then derive the method of kernel-based CCA and compare these two methods on real and artificial data sets before using both on the Blind Separation of Sources. --- paper_title: Jointly modeling deep video and compositional text to bridge vision and language in a unified framework paper_content: Recently, joint video-language modeling has been attracting more and more attention. 
However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks. --- paper_title: Jointly Modeling Embedding and Translation to Bridge Video and Language paper_content: Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets. --- paper_title: Learning Hash Functions for Cross-View Similarity Search paper_content: Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. 
We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems. --- paper_title: Deep Cross-Modal Hashing paper_content: Due to its low storage cost and fast query speed, cross-modal hashing (CMH) has been widely used for similarity search in multimedia retrieval applications. However, most existing CMH methods are based on hand-crafted features which might not be optimally compatible with the hash-code learning procedure. As a result, existing CMH methods with hand-crafted features may not achieve satisfactory performance. In this paper, we propose a novel CMH method, called deep cross-modal hashing (DCMH), by integrating feature learning and hash-code learning intothe same framework. DCMH is an end-to-end learning framework with deep neural networks, one for each modality, to perform feature learning from scratch. Experiments on three real datasets with image-text modalities show that DCMH can outperform other baselines to achieve the state-of-the-art performance in cross-modal retrieval applications. --- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: Pixel Recurrent Neural Networks paper_content: Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent. --- paper_title: Text-to-visual speech synthesis based on parameter generation from HMM paper_content: This paper presents a new technique for synthesizing visual speech from arbitrarily given text. The technique is based on an algorithm for parameter generation from HMM with dynamic features, which has been successfully applied to text-to-speech synthesis. 
In the training phase, syllable HMMs are trained with visual speech parameter sequences that represent lip movements. In the synthesis phase, a sentence HMM is constructed by concatenating syllable HMMs corresponding to the phonetic transcription for the input text. Then an optimum visual speech parameter sequence is generated from the sentence HMM in an ML sense. The proposed technique can generate synchronized lip movements with speech in a unified framework. Furthermore, coarticulation is implicitly incorporated into the generated mouth shapes. As a result, synthetic lip motion becomes smooth and realistic. --- paper_title: Generative Adversarial Text to Image Synthesis paper_content: Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. --- paper_title: Neural Machine Translation by Jointly Learning to Align and Translate paper_content: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition. --- paper_title: Using Descriptive Video Services to Create a Large Data Source for Video Annotation Research paper_content: In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. 
We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing. --- paper_title: Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures paper_content: Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem, viz., models that cast description as either generation problem or as a retrieval problem over a visual or multimodal representational space. We provide a detailed review of existing models, highlighting their advantages and disadvantages. Moreover, we give an overview of the benchmark image datasets and the evaluation measures that have been developed to assess the quality of machine-generated image descriptions. Finally we extrapolate future directions in the area of automatic image description generation. --- paper_title: WaveNet: A Generative Model for Raw Audio paper_content: This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition. --- paper_title: Visually Indicated Sounds paper_content: Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions. --- paper_title: Show and tell: A neural image caption generator paper_content: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. 
In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. --- paper_title: Unit selection in a concatenative speech synthesis system using a large speech database paper_content: One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning. --- paper_title: Microsoft COCO Captions: Data Collection and Evaluation Server paper_content: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. --- paper_title: Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions paper_content: We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. 
We also demonstrate the performance of the proposed method by several experiments. --- paper_title: Im2Text: Describing Images Using 1 Million Captioned Photographs paper_content: We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning. --- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. --- paper_title: Phrase-based Image Captioning paper_content: Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to described them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results in two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO. --- paper_title: Deep Visual-Semantic Hashing for Cross-Modal Retrieval paper_content: Due to the storage and retrieval efficiency, hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which enables efficient retrieval of images in response to text queries or vice versa, has received increasing attention recently. 
Most existing work on cross-modal hashing does not capture the spatial dependency of images and temporal dynamics of text sentences for learning powerful feature representations and cross-modal embeddings that mitigate the heterogeneity of different modalities. This paper presents a new Deep Visual-Semantic Hashing (DVSH) model that generates compact hash codes of images and sentences in an end-to-end deep learning architecture, which capture the intrinsic cross-modal correspondences between visual data and natural language. DVSH is a hybrid deep architecture that constitutes a visual-semantic fusion network for learning joint embedding space of images and text sentences, and two modality-specific hashing networks for learning hash functions to generate compact binary codes. Our architecture effectively unifies joint multimodal embedding and cross-modal hashing, which is based on a novel combination of Convolutional Neural Networks over images, Recurrent Neural Networks over sentences, and a structured max-margin objective that integrates all things together to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical evidence shows that our DVSH approach yields state of the art results in cross-modal retrieval experiments on image-sentences datasets, i.e. standard IAPR TC-12 and large-scale Microsoft COCO. --- paper_title: Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics paper_content: In [Hodosh et al., 2013], we establish a rankingbased framework for sentence-based image description and retrieval. We introduce a new dataset of images paired with multiple descriptive captions that was specifically designed for these tasks. We also present strong KCCA-based baseline systems for description and search, and perform an in-depth study of evaluation metrics for these two tasks. Our results indicate that automatic evaluation metrics for our ranking-based tasks are more accurate and robust than those proposed for generation-based image description. --- paper_title: The classification of multi-modal data with hidden conditional random field paper_content: The classification of multi-modal data has been an active research topic in recent years. It has been used in many applications where the processing of multi-modal data is involved. Motivated by the assumption that different modalities in multi-modal data share latent structure (topics), this paper attempts to learn the shared structure by exploiting the symbiosis of multiple-modality and therefore boost the classification of multi-modal data, we call it Multi-modal Hidden Conditional Random Field (M-HCRF). M-HCRF represents the intrinsical structure shared by different modalities as hidden variables in a undirected general graphical model. When learning the latent shared structure of the multi-modal data, M-HCRF can discover the interactions among the hidden structure and the supervised category information. The experimental results show the effectiveness of our proposed M-HCRF when applied to the classification of multi-modal data. --- paper_title: Dynamic units of visual speech paper_content: We present a new method for generating a dynamic, concatenative, unit of visual speech that can generate realistic visual speech animation. We redefine visemes as temporal units that describe distinctive speech movements of the visual speech articulators. Traditionally visemes have been surmized as the set of static mouth shapes representing clusters of contrastive phonemes (e.g. 
/p, b, m/, and /f, v/). In this work, the motion of the visual speech articulators is used to generate discrete, dynamic visual speech gestures. These gestures are clustered, providing a finite set of movements that describe visual speech, the visemes. Dynamic visemes are applied to speech animation by simply concatenating viseme units. We compare to static visemes using subjective evaluation. We find that dynamic visemes are able to produce more accurate and visually pleasing speech animation given phonetically annotated audio, reducing the amount of time that an animator needs to spend manually refining the animation. --- paper_title: A Distributed Representation Based Query Expansion Approach for Image Captioning paper_content: In this paper, we propose a novel query expansion approach for improving transfer-based automatic image captioning. The core idea of our method is to translate the given visual query into a distributional semantics based form, which is generated by the average of the sentence vectors extracted from the captions of images visually similar to the input image. Using three image captioning benchmark datasets, we show that our approach provides more accurate results compared to the state-of-the-art data-driven methods in terms of both automatic metrics and subjective evaluation. --- paper_title: Language Models for Image Captioning: The Quirks and What Works paper_content: Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments. --- paper_title: Collective Generation of Natural Image Descriptions paper_content: We present a holistic data-driven approach to image description generation, exploiting the vast amount of (noisy) parallel image data and associated natural language descriptions available on the web. More specifically, given a query image, we retrieve existing human-composed phrases used to describe visually similar images, then selectively combine those phrases to generate a novel description for the query image. We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization and discourse structure. Evaluation by human annotators indicates that our final system generates more semantically correct and linguistically appealing descriptions than two nontrivial baselines.
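The retrieval-flavoured entries above (the query-expansion and collective-generation papers) both start from the captions of images visually similar to the query. Below is a minimal sketch of that first retrieval step only; the feature matrices, caption lists, and the centroid-based selection are illustrative assumptions rather than the pipelines those papers describe.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalise vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_similar_captions(query_feat, db_feats, db_captions, k=5):
    """Return the captions attached to the k database images most similar
    to the query image (cosine similarity on visual features)."""
    sims = l2_normalize(db_feats) @ l2_normalize(query_feat)
    top = np.argsort(-sims)[:k]
    return [db_captions[i] for i in top]

def centroid_caption(retrieved_captions, caption_embeds):
    """Query-expansion flavour: pick the retrieved caption whose sentence
    embedding lies closest to the centroid of all retrieved embeddings."""
    centroid = l2_normalize(caption_embeds.mean(axis=0))
    scores = l2_normalize(caption_embeds) @ centroid
    return retrieved_captions[int(np.argmax(scores))]
```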
--- paper_title: Speech-Driven Facial Animation Using A Shared Gaussian Process Latent Variable Model paper_content: In this work, synthesis of facial animation is done by modelling the mapping between facial motion and speech using the shared Gaussian process latent variable model. Both data are processed separately and subsequently coupled together to yield a shared latent space. This method allows coarticulation to be modelled by having a dynamical model on the latent space. Synthesis of novel animation is done by first obtaining intermediate latent points from the audio data and then using a Gaussian Process mapping to predict the corresponding visual data. Statistical evaluation of generated visual features against ground truth data compares favourably with known methods of speech animation. The generated videos are found to show proper synchronisation with audio and exhibit correct facial dynamics. --- paper_title: Grounded Compositional Semantics for Finding and Describing Images with Sentences paper_content: Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image. --- paper_title: Video Rewrite: driving visual speech with audio paper_content: Video Rewrite uses existing footage to create automatically new video of a person mouthing words that she did not speak in the original footage. This technique is useful in movie dubbing, for example, where the movie sequence can be modified to sync the actors’ lip motions to the new soundtrack. Video Rewrite automatically labels the phonemes in the training data and in the new audio track. Video Rewrite reorders the mouth images in the training footage to match the phoneme sequence of the new audio track. When particular phonemes are unavailable in the training footage, Video Rewrite selects the closest approximations. The resulting sequence of mouth images is stitched into the background footage. This stitching process automatically corrects for differences in head position and orientation between the mouth images and the background footage. Video Rewrite uses computer-vision techniques to track points on the speaker’s mouth in the training footage, and morphing techniques to combine these mouth gestures into the final video sequence. The new video combines the dynamics of the original actor’s articulations with the mannerisms and setting dictated by the background footage. Video Rewrite is the first facial-animation system to automate all the labeling and assembly tasks required to resync existing footage to a new soundtrack. 
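As a toy illustration of the concatenative idea in the Video Rewrite entry above (reordering stored mouth clips to follow the phoneme sequence of a new audio track, falling back to a close approximation when a phoneme is missing), here is a hedged sketch; the clip library, phoneme labels, and similarity table are hypothetical placeholders, not the paper's actual machinery.

```python
# Crude visual-similarity fallbacks between phonemes (illustrative only).
PHONE_SIMILARITY = {
    "p": ["b", "m"], "b": ["p", "m"], "m": ["b", "p"],
    "f": ["v"], "v": ["f"],
}

def pick_clip(phone, library):
    """Return a stored mouth clip for `phone`, falling back to a visually
    similar phoneme when the exact one is missing from the footage."""
    if library.get(phone):
        return library[phone][0]
    for alt in PHONE_SIMILARITY.get(phone, []):
        if library.get(alt):
            return library[alt][0]
    raise KeyError(f"no clip (or close approximation) for phoneme {phone!r}")

def resequence(phones, library):
    """Concatenate clips so they follow the new phoneme sequence."""
    return [pick_clip(p, library) for p in phones]
```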
--- paper_title: Every picture tells a story: generating sentences from images paper_content: Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.
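The entry above scores an image-sentence pair by comparing meaning estimates from each side, and the joint-embedding work that recurs in this section trains such shared spaces with a max-margin ranking objective. The numpy sketch below shows that scoring-and-ranking idea under the assumption that images and sentences have already been mapped into a common space; it is illustrative, not any specific paper's training code.

```python
import numpy as np

def match_score(img_vec, sent_vec, eps=1e-8):
    """Cosine similarity between the two meaning estimates."""
    a = img_vec / (np.linalg.norm(img_vec) + eps)
    b = sent_vec / (np.linalg.norm(sent_vec) + eps)
    return float(a @ b)

def ranking_hinge_loss(img_vecs, sent_vecs, margin=0.2):
    """Bidirectional max-margin ranking loss over matched (image, sentence)
    pairs: a matched pair should outscore mismatched ones by `margin`."""
    n = len(img_vecs)
    S = np.array([[match_score(img_vecs[i], sent_vecs[j]) for j in range(n)]
                  for i in range(n)])
    pos = np.diag(S)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            loss += max(0.0, margin - pos[i] + S[i, j])  # rank sentences for image i
            loss += max(0.0, margin - pos[j] + S[i, j])  # rank images for sentence j
    return loss / (n * (n - 1))
```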
--- paper_title: Choosing linguistics over vision to describe images paper_content: In this paper, we address the problem of automatically generating human-like descriptions for unseen images, given a collection of images and their corresponding human-generated descriptions. Previous attempts for this task mostly rely on visual clues and corpus statistics, but do not take much advantage of the semantic information inherent in the available image descriptions. Here, we present a generic method which benefits from all these three sources (i. e. visual clues, corpus statistics and available descriptions) simultaneously, and is capable of constructing novel descriptions. Our approach works on syntactically and linguistically motivated phrases extracted from the human descriptions. Experimental evaluations demonstrate that our formulation mostly generates lucid and semantically correct descriptions, and significantly outperforms the previous methods on automatic evaluation metrics. One of the significant advantages of our approach is that we can generate multiple interesting descriptions for an image. Unlike any previous work, we also test the applicability of our method on a large dataset containing complex images with rich descriptions. --- paper_title: TTS Synthesis with Bidirectional LSTM based Recurrent Neural Networks paper_content: Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed. --- paper_title: Listen, attend and spell: A neural network for large vocabulary conversational speech recognition paper_content: We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. 
The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set. --- paper_title: Video In Sentences Out paper_content: We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the track-to-role assignments, and changing body posture. --- paper_title: Sequence to Sequence Learning with Neural Networks paper_content: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
--- paper_title: Expressive Visual Text-to-Speech Using Active Appearance Models paper_content: This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems. --- paper_title: Statistical Parametric Speech Synthesis paper_content: This paper gives a general overview of techniques in statistical parametric speech synthesis. One of the instances of these techniques, called HMM-based generation synthesis (or simply HMM-based synthesis), has recently been shown to be very effective in generating acceptable speech synthesis. This paper also contrasts these techniques with the more conventional unit selection technology that has dominated speech synthesis over the last ten years. Advantages and disadvantages of statistical parametric synthesis are highlighted as well as identifying where we expect the key developments to appear in the immediate future. --- paper_title: Composing Simple Image Descriptions using Web-scale N-grams paper_content: Studying natural language, and especially how people describe the world around them can help us better understand the visual world. In turn, it can also help us in the quest to generate natural language that describes this world in a human manner. We present a simple yet effective approach to automatically compose image descriptions given computer vision based inputs and using web-scale n-grams. Unlike most previous work that summarizes or retrieves pre-existing text relevant to an image, our method composes sentences entirely from scratch. Experimental results indicate that it is viable to generate simple textual descriptions that are pertinent to the specific content of an image, while permitting creativity in the description -- making for more human-like annotations than previous approaches. --- paper_title: Describing Videos by Exploiting Temporal Structure paper_content: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics.
The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second, we propose a temporal attention mechanism that allows the model to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-the-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: Guiding the Long-Short Term Memory Model for Image Caption Generation paper_content: In this work we focus on the problem of image caption generation. We propose an extension of the long short term memory (LSTM) model, which we coin gLSTM for short. In particular, we add semantic information extracted from the image as extra input to each unit of the LSTM block, with the aim of guiding the model towards solutions that are more tightly coupled to the image content.
Additionally, we explore different length normalization strategies for beam search to avoid bias towards short sentences. On various benchmark datasets such as Flickr8K, Flickr30K and MS COCO, we obtain results that are on par with or better than the current state-of-the-art. --- paper_title: Two-Stream Convolutional Networks for Action Recognition in Videos paper_content: We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. ::: ::: Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification. --- paper_title: Bringing Semantics into Focus Using Visual Abstraction paper_content: Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of semantically similar real images would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract scenes with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. --- paper_title: BabyTalk: Understanding and Generating Simple Image Descriptions paper_content: We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. 
It also generates descriptions that are notably more true to the specific image content than previous work. --- paper_title: Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild paper_content: This paper integrates techniques in natural language processing and computer vision to improve recognition and description of entities and activities in real-world videos. We propose a strategy for generating textual descriptions of videos by using a factor graph to combine visual detections with language statistics. We use state-of-the-art visual recognition systems to obtain confidences on entities, activities, and scenes present in the video. Our factor graph model combines these detection confidences with probabilistic knowledge mined from text corpora to estimate the most likely subject, verb, object, and place. Results on YouTube videos show that our approach improves both the joint detection of these latent, diverse sentence components and the detection of some individual components when compared to using the vision system alone, as well as over a previous n-gram language-modeling approach. The joint detection allows us to automatically generate more accurate, richer sentential descriptions of videos with a wide array of possible content. --- paper_title: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) paper_content: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions.
It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html. --- paper_title: Speech recognition with deep recurrent neural networks paper_content: Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score. --- paper_title: I2t: Image parsing to text description paper_content: In this paper, we present an image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding. The proposed I2T framework follows three steps: 1) input images (or video frames) are decomposed into their constituent visual patterns by an image parsing engine, in a spirit similar to parsing sentences in natural language; 2) the image parsing results are converted into semantic representation in the form of Web ontology language (OWL), which enables seamless integration with general knowledge bases; and 3) a text generation engine converts the results from previous steps into semantically meaningful, human readable, and query-able text reports. The centerpiece of the I2T framework is an and-or graph (AoG) visual knowledge representation, which provides a graphical representation serving as prior knowledge for representing diverse visual patterns and provides top-down hypotheses during the image parsing. The AoG embodies vocabularies of visual elements including primitives, parts, objects, scenes as well as a stochastic image grammar that specifies syntactic relations (i.e., compositional) and semantic relations (e.g., categorical, spatial, temporal, and functional) between these visual elements. Therefore, the AoG is a unified model of both categorical and symbolic representations of visual knowledge. The proposed I2T framework has two objectives. First, we use a semiautomatic method to parse images from the Internet in order to build an AoG for visual knowledge representation. Our goal is to make the parsing process more and more automatic using the learned AoG model. Second, we use automatic methods to parse image/video in specific domains and generate text reports that are useful for real-world applications.
In the case studies at the end of this paper, we demonstrate two automatic I2T systems: a maritime and urban scene video surveillance system and a real-time automatic driving scene understanding system. --- paper_title: Statistical Parametric Speech Synthesis Based on Speaker and Language Factorization paper_content: An increasingly common scenario in building speech synthesis and recognition systems is training on inhomogeneous data. This paper proposes a new framework for estimating hidden Markov models on data containing both multiple speakers and multiple languages. The proposed framework, speaker and language factorization, attempts to factorize speaker-/language-specific characteristics in the data and then model them using separate transforms. Language-specific factors in the data are represented by transforms based on cluster mean interpolation with cluster-dependent decision trees. Acoustic variations caused by speaker characteristics are handled by transforms based on constrained maximum-likelihood linear regression. Experimental results on statistical parametric speech synthesis show that the proposed framework enables data from multiple speakers in different languages to be used to: train a synthesis system; synthesize speech in a language using speaker characteristics estimated in a different language; and adapt to a new language. --- paper_title: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention paper_content: Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO. --- paper_title: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models paper_content: Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a) a multimodal joint embedding space with images and text and (b) a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models.
We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison. --- paper_title: Generating Images from Captions with Attention paper_content: Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset. --- paper_title: Midge: Generating Image Descriptions From Computer Vision Detections paper_content: This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date. --- paper_title: Show and tell: A neural image caption generator paper_content: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. 
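The "Show and tell" entry above describes what has become the standard captioning recipe: an image embedding produced by a convolutional network conditions a recurrent (LSTM) language model that is trained to maximise the likelihood of the reference caption. The sketch below illustrates only that recipe; the layer sizes, vocabulary handling and greedy decoding loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShowTellSketch(nn.Module):
    """Minimal CNN-feature -> LSTM caption decoder (illustrative only)."""
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)   # image feature fed as the first "word"
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feat, captions):
        # img_feat: (B, feat_dim); captions: (B, T) token ids starting with <bos>
        img_tok = self.img_proj(img_feat).unsqueeze(1)          # (B, 1, E)
        word_tok = self.embed(captions)                         # (B, T, E)
        inputs = torch.cat([img_tok, word_tok], dim=1)          # image first, then the words
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                                 # (B, T+1, V) logits

    @torch.no_grad()
    def greedy_decode(self, img_feat, bos_id=1, eos_id=2, max_len=20):
        out, state = self.lstm(self.img_proj(img_feat).unsqueeze(1))   # feed image once
        cur = torch.full((img_feat.size(0),), bos_id, dtype=torch.long)
        tokens = []
        for _ in range(max_len):
            out, state = self.lstm(self.embed(cur).unsqueeze(1), state)
            cur = self.out(out[:, -1]).argmax(dim=-1)            # most likely next word
            tokens.append(cur)
            if (cur == eos_id).all():
                break
        return torch.stack(tokens, dim=1)

# Training would minimise cross-entropy between the logits and the time-shifted caption.
```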
--- paper_title: Recurrent Continuous Translation Models paper_content: We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations. --- paper_title: Wav2Letter: an End-to-End ConvNet-based Speech Recognition System paper_content: This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC (Graves et al., 2006) while being simpler. We show competitive results in word error rate on the Librispeech corpus (Panayotov et al., 2015) with MFCC features, and promising results from raw waveform. --- paper_title: Unit selection in a concatenative speech synthesis system using a large speech database paper_content: One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning. --- paper_title: WordsEye: an automatic text-to-scene conversion system paper_content: Natural language is an easy and effective medium for describing visual ideas and mental images. Thus, we foresee the emergence of language-based 3D scene generation systems to let ordinary users quickly create 3D scenes without having to learn special software, acquire artistic skills, or even touch a desktop window-oriented interface. WordsEye is such a system for automatically converting text into representative 3D scenes. WordsEye relies on a large database of 3D models and poses to depict entities and actions. 
Every 3D model can have associated shape displacements, spatial tags, and functional properties to be used in the depiction process. We describe the linguistic analysis and depiction techniques used by WordsEye along with some general strategies by which more abstract concepts are made depictable. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Image Description using Visual Dependency Representations paper_content: Describing the main event of an image involves identifying the objects depicted and predicting the relationships between them. Previous approaches have represented images as unstructured bags of regions, which makes it difficult to accurately predict meaningful relationships between regions. In this paper, we introduce visual dependency representations to capture the relationships between the objects in an image, and hypothesize that this representation can improve image description. We test this hypothesis using a new data set of region-annotated images, associated with visual dependency representations and gold-standard descriptions. We describe two template-based description generation models that operate over visual dependency representations. In an image description task, we find that these models outperform approaches that rely on object proximity or corpus information to generate descriptions on both automatic measures and on human judgements. --- paper_title: The Long-Short Story of Movie Description paper_content: Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] and M-VAD [31] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these classifiers we generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD and M-VAD datasets.
We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task. --- paper_title: Very Deep Convolutional Networks for Large-Scale Image Recognition paper_content: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. --- paper_title: Generative Adversarial Nets paper_content: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. --- paper_title: Convolutional Two-Stream Network Fusion for Video Action Recognition paper_content: Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results. 
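The two-stream fusion entry directly above argues for combining the appearance and motion towers at a convolutional layer, by stacking their feature maps and learning a fusion convolution, rather than fusing only at the softmax. The sketch below shows that single idea; the tiny towers and channel counts are placeholders, not the paper's networks.

```python
import torch
import torch.nn as nn

def tiny_tower(in_ch):
    """Stand-in for a truncated spatial or temporal ConvNet tower (illustrative)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    )

class ConvFusionSketch(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        self.spatial = tiny_tower(in_ch=3)       # RGB frame
        self.temporal = tiny_tower(in_ch=10)     # stacked optical-flow channels
        # "conv fusion": stack the two feature maps and learn how to combine them
        self.fuse = nn.Conv2d(128 + 128, 128, kernel_size=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, rgb, flow):
        fa, fm = self.spatial(rgb), self.temporal(flow)   # same spatial size assumed
        fused = self.fuse(torch.cat([fa, fm], dim=1))     # channel stacking + 1x1 fusion conv
        return self.head(fused)

logits = ConvFusionSketch()(torch.randn(2, 3, 112, 112), torch.randn(2, 10, 112, 112))
print(logits.shape)   # torch.Size([2, 101])
```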
--- paper_title: Jointly Modeling Embedding and Translation to Bridge Video and Language paper_content: Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets. --- paper_title: Trainable Methods For Surface Natural Language Generation paper_content: We present three systems for surface natural language generation that are trainable from annotated corpora. The first two systems, called NLG1 and NLG2, require a corpus marked only with domainspecific semantic attributes, while the last system, called NLG3, requires a corpus marked with both semantic attributes and syntactic dependency information. All systems attempt to produce a grammatical natural language phrase from a domain-specific semantic representation. NLG1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step, while NLG2 and NLG3 use maximum entropy probability models to individually generate each word in the phrase. The systems NLG2 and NLG3 learn to determine both the word choice and the word order of the phrase. We present experiments in which we generate phrases to describe flights in the air travel domain. --- paper_title: Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions paper_content: We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments. 
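The last entry above maps recognised semantic primitives onto action concepts in a hierarchy and then realises them as sentences. The toy sketch below shows only the general shape of such a pipeline; the hierarchy, the feature names and the sentence templates are invented for illustration and are not taken from the paper.

```python
# Toy concept-hierarchy-to-sentence pipeline (all rules below are invented examples).
ACTION_HIERARCHY = {
    "move":    {"walk": {"speed": "slow"}, "run": {"speed": "fast"}},
    "contact": {"pick_up": {"hand_near_object": True}},
}

TEMPLATES = {
    "walk":    "{agent} walks towards the {target}.",
    "run":     "{agent} runs towards the {target}.",
    "pick_up": "{agent} picks up the {target}.",
}

def classify_action(features):
    """Descend the hierarchy and return the first action concept whose constraints match."""
    for _, subconcepts in ACTION_HIERARCHY.items():
        for action, constraints in subconcepts.items():
            if all(features.get(k) == v for k, v in constraints.items()):
                return action
    return None

def describe(features, agent="the person", target="table"):
    action = classify_action(features)
    if action is None:
        return f"{agent} is doing something."        # fallback when no concept matches
    return TEMPLATES[action].format(agent=agent, target=target)

print(describe({"speed": "fast"}))
print(describe({"hand_near_object": True}, target="cup"))
```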
--- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Expressive Visual Text-to-Speech Using Active Appearance Models paper_content: This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems. --- paper_title: Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics paper_content: In [Hodosh et al., 2013], we establish a rankingbased framework for sentence-based image description and retrieval. We introduce a new dataset of images paired with multiple descriptive captions that was specifically designed for these tasks. We also present strong KCCA-based baseline systems for description and search, and perform an in-depth study of evaluation metrics for these two tasks. Our results indicate that automatic evaluation metrics for our ranking-based tasks are more accurate and robust than those proposed for generation-based image description. 
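The ranking-based framing in the last entry above evaluates a system by how highly it ranks the correct caption for each image in a shared embedding space. Below is a small NumPy sketch of a typical retrieval evaluation (Recall@K), assuming images and captions have already been projected into a common space (e.g. by KCCA); the data here is random and purely illustrative.

```python
import numpy as np

def recall_at_k(image_emb, caption_emb, k=5):
    """Fraction of images whose matching caption (same row index) ranks in the top k
    by cosine similarity. Both inputs: (N, d) arrays in a shared embedding space."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    cap = caption_emb / np.linalg.norm(caption_emb, axis=1, keepdims=True)
    sims = img @ cap.T                                    # (N, N) cosine similarities
    diag = sims[np.arange(len(sims)), np.arange(len(sims))][:, None]
    ranks = (sims > diag).sum(axis=1)                     # 0 = true caption ranked first
    return float((ranks < k).mean())

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64))
captions = images + 0.3 * rng.normal(size=(100, 64))      # noisy "paired" captions
print(f"Recall@5: {recall_at_k(images, captions, k=5):.2f}")
```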
--- paper_title: BabyTalk: Understanding and Generating Simple Image Descriptions paper_content: We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work. --- paper_title: Dynamic units of visual speech paper_content: We present a new method for generating a dynamic, concatenative, unit of visual speech that can generate realistic visual speech animation. We redefine visemes as temporal units that describe distinctive speech movements of the visual speech articulators. Traditionally visemes have been surmized as the set of static mouth shapes representing clusters of contrastive phonemes (e.g. /p, b, m/, and /f, v/). In this work, the motion of the visual speech articulators are used to generate discrete, dynamic visual speech gestures. These gestures are clustered, providing a finite set of movements that describe visual speech, the visemes. Dynamic visemes are applied to speech animation by simply concatenating viseme units. We compare to static visemes using subjective evaluation. We find that dynamic visemes are able to produce more accurate and visually pleasing speech animation given phonetically annotated audio, reducing the amount of time that an animator needs to spend manually refining the animation. --- paper_title: Automatic Evaluation Of Summaries Using N-Gram Co-Occurrence Statistics paper_content: Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprising well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results. --- paper_title: WaveNet: A Generative Model for Raw Audio paper_content: This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition. 
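The WaveNet entry above rests on one structural trick: its autoregressive, sample-by-sample model is built from stacks of causal convolutions with exponentially growing dilation, so the receptive field grows exponentially with depth while each output only sees past samples. The sketch below shows just that trick (causal left-padding plus dilation); the gated activations, skip connections and conditioning of the real model are omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """1-D convolution that is causal: output at time t depends only on inputs <= t."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left-pad so no future leakage
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (B, C, T)
        return self.conv(F.pad(x, (self.pad, 0)))

class DilatedStackSketch(nn.Module):
    def __init__(self, channels=32, num_layers=8):
        super().__init__()
        # dilations 1, 2, 4, ... roughly double the receptive field at every layer
        self.layers = nn.ModuleList(
            [CausalDilatedConv1d(channels, dilation=2 ** i) for i in range(num_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x)) + x                 # crude residual connection
        return x

x = torch.randn(1, 32, 16000)                            # one second of 16 kHz features
print(DilatedStackSketch()(x).shape)                     # torch.Size([1, 32, 16000])
```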
--- paper_title: Comparing Automatic Evaluation Measures for Image Description paper_content: Image description is a new natural language generation task, where the aim is to generate a human-like description of an image. The evaluation of computer-generated text is a notoriously difficult problem, however, the quality of image descriptions has typically been measured using unigram BLEU and human judgements. The focus of this paper is to determine the correlation of automatic measures with human judgements for this task. We estimate the correlation of unigram and Smoothed BLEU, TER, ROUGE-SU4, and Meteor against human judgements on two data sets. The main finding is that unigram BLEU has a weak correlation, and Meteor has the strongest correlation with human judgements. --- paper_title: Movie Description paper_content: Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC) which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016. --- paper_title: Statistical Parametric Speech Synthesis Based on Speaker and Language Factorization paper_content: An increasingly common scenario in building speech synthesis and recognition systems is training on inhomogeneous data. This paper proposes a new framework for estimating hidden Markov models on data containing both multiple speakers and multiple languages. The proposed framework, speaker and language factorization, attempts to factorize speaker-/language-specific characteristics in the data and then model them using separate transforms. Language-specific factors in the data are represented by transforms based on cluster mean interpolation with cluster-dependent decision trees. Acoustic variations caused by speaker characteristics are handled by transforms based on constrained maximum-likelihood linear regression. Experimental results on statistical parametric speech synthesis show that the proposed framework enables data from multiple speakers in different languages to be used to: train a synthesis system; synthesize speech in a language using speaker characteristics estimated in a different language; and adapt to a new language. 
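The "Comparing Automatic Evaluation Measures for Image Description" entry above reduces to one computation: the rank correlation between an automatic metric and human judgements over the same outputs. The self-contained NumPy sketch below computes Spearman's rho, one common choice of rank correlation, on made-up scores; it is only meant to make that evaluation step concrete and does not reproduce the paper's data or exact statistic.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction) between two score lists."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    rx = np.argsort(np.argsort(x))        # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    rx = (rx - rx.mean()) / rx.std()
    ry = (ry - ry.mean()) / ry.std()
    return float((rx * ry).mean())        # Pearson correlation of the ranks

# Made-up example: per-caption human judgements vs. two hypothetical automatic metrics.
human    = [4.5, 3.0, 2.0, 4.0, 1.5, 3.5]
metric_a = [0.61, 0.40, 0.35, 0.50, 0.20, 0.55]   # tracks the human ordering closely
metric_b = [0.30, 0.62, 0.15, 0.33, 0.58, 0.21]   # tracks the human ordering poorly

print("metric A vs. human:", round(spearman_rho(metric_a, human), 2))
print("metric B vs. human:", round(spearman_rho(metric_b, human), 2))
```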
--- paper_title: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models paper_content: Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison. --- paper_title: Generation and Comprehension of Unambiguous Object Descriptions paper_content: We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/ mjhucla/Google_Refexp_toolbox. --- paper_title: Midge: Generating Image Descriptions From Computer Vision Detections paper_content: This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date. --- paper_title: Meteor Universal: Language Specific Translation Evaluation for Any Target Language paper_content: This paper describes Meteor Universal, released for the 2014 ACL Workshop on Statistical Machine Translation. Meteor Universal brings language specific evaluation to previously unsupported target languages by (1) automatically extracting linguistic resources (paraphrase tables and function word lists) from the bitext used to train MT systems and (2) using a universal parameter set learned from pooling human judgments of translation quality from several language directions. Meteor Universal is shown to significantly outperform baseline BLEU on two new languages, Russian (WMT13) and Hindi (WMT14). 
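Meteor Universal, the last entry above, builds on the METEOR metric, which aligns unigrams between candidate and reference, combines precision and recall into a recall-weighted harmonic mean, and discounts fragmented alignments. The sketch below is an exact-match-only simplification using one commonly cited parameterisation of the original metric; the stemming, synonym and paraphrase matching, and the language-specific resources that Meteor Universal actually contributes, are left out.

```python
def simple_meteor(candidate, reference, alpha=0.9, beta=3.0, gamma=0.5):
    """Exact-match unigram METEOR-style score (a simplification for illustration)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    used = [False] * len(ref)
    matches = []                                    # (cand_pos, ref_pos), greedy exact matching
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if not used[j] and tok == rtok:
                used[j] = True
                matches.append((i, j))
                break
    m = len(matches)
    if m == 0:
        return 0.0
    precision, recall = m / len(cand), m / len(ref)
    fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)
    # chunks = maximal runs of matches that are contiguous in both strings
    chunks = 1
    for (i1, j1), (i2, j2) in zip(matches, matches[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1
    penalty = gamma * (chunks / m) ** beta
    return fmean * (1.0 - penalty)

print(round(simple_meteor("the cat sat on the mat", "the cat was sitting on the mat"), 3))
```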
--- paper_title: Bleu: A Method For Automatic Evaluation Of Machine Translation paper_content: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. --- paper_title: Microsoft COCO Captions: Data Collection and Evaluation Server paper_content: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. --- paper_title: What Are You Talking About? Text-to-Image Coreference paper_content: In this paper we exploit natural sentential descriptions of RGB-D scenes in order to improve 3D semantic parsing. Importantly, in doing so, we reason about which particular object each noun/pronoun is referring to in the image. This allows us to utilize visual information in order to disambiguate the so-called coreference resolution problem that arises in text. Towards this goal, we propose a structure prediction model that exploits potentials computed from text and RGB-D imagery to reason about the class of the 3D objects, the scene type, as well as to align the nouns/pronouns with the referred visual objects. We demonstrate the effectiveness of our approach on the challenging NYU-RGBD v2 dataset, which we enrich with natural lingual descriptions. We show that our approach significantly improves 3D detection and scene classification accuracy, and is able to reliably estimate the text-to-image alignment. Furthermore, by using textual and visual information, we are also able to successfully deal with coreference in text, improving upon the state-of-the-art Stanford coreference system [15]. --- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. 
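The BLEU entry above is the metric most of the captioning papers in this section report. It scores a candidate by modified (clipped) n-gram precision, combined geometrically over n = 1..4 and multiplied by a brevity penalty. The sketch below is a sentence-level, unsmoothed version for illustration; real evaluations are corpus-level and usually smoothed.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU with clipped n-gram precision and brevity penalty
    (no smoothing; returns zero if any n-gram order has no match)."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        max_ref_counts = Counter()
        for ref in refs:                             # clip by the max count over references
            for gram, cnt in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], cnt)
        clipped = sum(min(cnt, max_ref_counts[gram]) for gram, cnt in cand_counts.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / max(sum(cand_counts.values()), 1)))
    # brevity penalty against the reference closest in length
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(round(bleu("a man is riding a horse on a beach",
                 ["a man is riding a horse on the beach"]), 3))
```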
--- paper_title: Deep visual-semantic alignments for generating image descriptions paper_content: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations. --- paper_title: Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books paper_content: Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets. To align movies and books we propose a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for. --- paper_title: What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision paper_content: We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest. --- paper_title: Unsupervised alignment of actions in video with text descriptions paper_content: Advances in video technology and data storage have made large scale video data collections of complex activities readily accessible. An increasingly popular approach for automatically inferring the details of a video is to associate the spatio-temporal segments in a video with its natural language descriptions. Most algorithms for connecting natural language with video rely on pre-aligned supervised training data. Recently, several models have been shown to be effective for unsupervised alignment of objects in video with language. 
However, it remains difficult to generate good spatio-temporal video segments for actions that align well with language. This paper presents a framework that extracts higher level representations of low-level action features through hyperfeature coding from video and aligns them with language. We propose a two-step process that creates a high-level action feature codebook with temporally consistent motions, and then applies an unsupervised alignment algorithm over the action codewords and verbs in the language to identify individual activities. We show an improvement over previous alignment models of objects and nouns on videos of biological experiments, and also evaluate our system on a larger scale collection of videos involving kitchen activities. --- paper_title: Movie/Script: Alignment and Parsing of Video and Text Transcription paper_content: Movies and TV are a rich source of diverse and complex video of people, objects, actions and locales "in the wild". Harvesting automatically labeled sequences of actions from video would enable creation of large-scale and highly-varied datasets. To enable such collection, we focus on the task of recovering scene structure in movies and TV series for object tracking and action retrieval. We present a weakly supervised algorithm that uses the screenplay and closed captions to parse a movie into a hierarchy of shots and scenes. Scene boundaries in the movie are aligned with screenplay scene labels and shots are reordered into a sequence of long continuous tracks or threads which allow for more accurate tracking of people, actions and objects. Scene segmentation, alignment, and shot threading are formulated as inference in a unified generative model and a novel hierarchical dynamic programming algorithm that can handle alignment and jump-limited reorderings in linear time is presented. We present quantitative and qualitative results on movie alignment and parsing, and use the recovered structure to improve character naming and retrieval of common actions in several episodes of popular TV series. --- paper_title: Alignment of Speech to Highly Imperfect Text Transcriptions paper_content: We introduce a novel and inexpensive approach for the temporal alignment of speech to highly imperfect transcripts from automatic speech recognition (ASR). Transcripts are generated for extended lecture and presentation videos, which in some cases feature more than 30 speakers with different accents, resulting in highly varying transcription qualities. In our approach we detect a subset of phonemes in the speech track, and align them to the sequence of phonemes extracted from the transcript. We report on the results for 4 speech-transcript sets ranging from 22 to 108 minutes. The alignment performance is promising, showing a correct matching of phonemes within 10, 20, 30 second error margins for more than 60 %, 75 %, 90 % of text, respectively, on average. For perfect manually generated transcripts, more than 75 % of text is correctly aligned within 5 seconds. --- paper_title: Deep Canonical Time Warping paper_content: Machine learning algorithms for the analysis of timeseries often depend on the assumption that the utilised data are temporally aligned. Any temporal discrepancies arising in the data is certain to lead to ill-generalisable models, which in turn fail to correctly capture the properties of the task at hand. The temporal alignment of time-series is thus a crucial challenge manifesting in a multitude of applications. 
Nevertheless, the vast majority of algorithms oriented towards the temporal alignment of time-series are applied directly on the observation space, or utilise simple linear projections. Thus, they fail to capture complex, hierarchical non-linear representations which may prove to be beneficial towards the task of temporal alignment, particularly when dealing with multi-modal data (e.g., aligning visual and acoustic information). To this end, we present the Deep Canonical Time Warping (DCTW), a method which automatically learns complex non-linear representations of multiple time-series, generated such that (i) they are highly correlated, and (ii) temporally in alignment. By means of experiments on four real datasets, we show that the representations learnt via the proposed DCTW significantly outperform state-of-the-art methods in temporal alignment, elegantly handling scenarios with highly heterogeneous features, such as the temporal alignment of acoustic and visual features. --- paper_title: Canonical Time Warping for Alignment of Human Behavior paper_content: Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW's effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW. --- paper_title: Weakly Supervised Action Labeling in Videos Under Ordering Constraints paper_content: We are given a set of video clips, each one annotated with an {\em ordered} list of actions, such as "walk" then "sit" then "answer phone" extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies. --- paper_title: Generalized time warping for multi-modal alignment of human motion paper_content: Temporal alignment of human motion has been a topic of recent interest due to its applications in animation, telerehabilitation and activity recognition among others. 
This paper presents generalized time warping (GTW), an extension of dynamic time warping (DTW) for temporally aligning multi-modal sequences from multiple subjects performing similar activities. GTW solves three major drawbacks of existing approaches based on DTW: (1) GTW provides a feature weighting layer to adapt different modalities (e.g., video and motion capture data), (2) GTW extends DTW by allowing a more flexible time warping as combination of monotonic functions, (3) unlike DTW that typically incurs in quadratic cost, GTW has linear complexity. Experimental results demonstrate that GTW can efficiently solve the multi-modal temporal alignment problem and outperforms state-of-the-art DTW methods for temporal alignment of time series within the same modality. --- paper_title: Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books paper_content: Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets. To align movies and books we propose a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for. --- paper_title: Natural Language Object Retrieval paper_content: In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer. --- paper_title: Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models paper_content: The Flickr30k dataset has become a standard benchmark for sentence-based image description. 
This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research. --- paper_title: Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion paper_content: Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms. --- paper_title: Generation and Comprehension of Unambiguous Object Descriptions paper_content: We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. 
We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/ mjhucla/Google_Refexp_toolbox. --- paper_title: Audio-to-text alignment for speech recognition with very limited resources. paper_content: In this paper we present our efforts in building a speech recognizer constrained by the availability of very limited resources. We consider that neither proper training databases nor initial acoustic models are available for the target language. Moreover, for the experiments shown here, we use grapheme-based speech recognizers. Most prior work in the area use initial acoustic models, trained on the target or a similar language, to force-align new data and then retrain the models with it. In the proposed approach a speech recognizer is trained from scratch by using audio recordings aligned with (sometimes approximate) text transcripts. All training data has been harvested online (e.g. audiobooks, parliamentary speeches). First, the audio is decoded into a phoneme sequence by an off-theshelf phonetic recognizer in Hungarian. Phoneme sequences are then aligned to the normalized text transcripts through dynamic programming. Correspondence between phonemes and graphemes is done through a matrix of approximate sound-tographeme matching. Finally, the aligned data is split into short audio/text segments and the speech recognizer is trained using Kaldi toolkit. Alignment experiments performed for Catalan and Spanish show the feasibility to obtain accurate alignments that can be used to successfully train a speech recognizer. --- paper_title: Book2Movie: Aligning video scenes with book chapters paper_content: Film adaptations of novels often visually display in a few shots what is described in many pages of the source novel. In this paper we present a new problem: to align book chapters with video scenes. Such an alignment facilitates finding differences between the adaptation and the original source, and also acts as a basis for deriving rich descriptions from the novel for the video clips. We propose an efficient method to compute an alignment between book chapters and video scenes using matching dialogs and character identities as cues. A major consideration is to allow the alignment to be non-sequential. Our suggested shortest path based approach deals with the non-sequential alignments and can be used to determine whether a video scene was part of the original book. We create a new data set involving two popular novel-to-film adaptations with widely varying properties and compare our method against other text-to-video alignment baselines. Using the alignment, we present a qualitative analysis of describing the video through rich narratives obtained from the novel. --- paper_title: Multimodal Speaker Diarization paper_content: We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. 
We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization. --- paper_title: What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision paper_content: We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest. --- paper_title: Unsupervised alignment of natural language instructions with video segments paper_content: We propose an unsupervised learning algorithm for automatically inferring the mappings between English nouns and corresponding video objects. Given a sequence of natural language instructions and an unaligned video recording, we simultaneously align each instruction to its corresponding video segment, and also align nouns in each instruction to their corresponding objects in video. While existing grounded language acquisition algorithms rely on pre-aligned supervised data (each sentence paired with corresponding image frame or video segment), our algorithm aims to automatically infer the alignment from the temporal structure of the video and parallel text instructions. We propose two generative models that are closely related to the HMM and IBM 1 word alignment models used in statistical machine translation. We evaluate our algorithm on videos of biological experiments performed in wetlabs, and demonstrate its capability of aligning video segments to text instructions and matching video objects to nouns in the absence of any direct supervision. --- paper_title: Modeling Context in Referring Expressions paper_content: Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg (Datasets and toolbox can be downloaded from https://github.com/lichengunc/refer), shows the advantages of our methods for both referring expression generation and comprehension. --- paper_title: Aligning plot synopses to videos for story-based retrieval paper_content: We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. 
We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events. --- paper_title: Discriminative Unsupervised Alignment of Natural Language Instructions with Corresponding Video Segments paper_content: We address the problem of automatically aligning natural language sentences with corresponding video segments without any direct supervision. Most existing algorithms for integrating language with videos rely on handaligned parallel data, where each natural language sentence is manually aligned with its corresponding image or video segment. Recently, fully unsupervised alignment of text with video has been shown to be feasible using hierarchical generative models. In contrast to the previous generative models, we propose three latent-variable discriminative models for the unsupervised alignment task. The proposed discriminative models are capable of incorporating domain knowledge, by adding diverse and overlapping features. The results show that discriminative models outperform the generative models in terms of alignment accuracy. --- paper_title: Weakly-Supervised Alignment of Video with Text paper_content: Suppose that we are given a set of videos, along with natural language descriptions in the form of multiple sentences (e.g., manual annotations, movie scripts, sport summaries etc.), and that these sentences appear in the same temporal order as their visual counterparts. We propose in this paper a method for aligning the two modalities, i.e., automatically providing a time stamp for every sentence. Given vectorial features for both video and text, we propose to cast this task as a temporal assignment problem, with an implicit linear mapping between the two feature modalities. We formulate this problem as an integer quadratic program, and solve its continuous convex relaxation using an efficient conditional gradient algorithm. Several rounding procedures are proposed to construct the final integer solution. After demonstrating significant improvements over the state of the art on the related task of aligning video with symbolic labels [7], we evaluate our method on a challenging dataset of videos with associated textual descriptions [36], using both bag-of-words and continuous representations for text. --- paper_title: What Are You Talking About? Text-to-Image Coreference paper_content: In this paper we exploit natural sentential descriptions of RGB-D scenes in order to improve 3D semantic parsing. Importantly, in doing so, we reason about which particular object each noun/pronoun is referring to in the image. This allows us to utilize visual information in order to disambiguate the so-called coreference resolution problem that arises in text. 
Towards this goal, we propose a structure prediction model that exploits potentials computed from text and RGB-D imagery to reason about the class of the 3D objects, the scene type, as well as to align the nouns/pronouns with the referred visual objects. We demonstrate the effectiveness of our approach on the challenging NYU-RGBD v2 dataset, which we enrich with natural lingual descriptions. We show that our approach significantly improves 3D detection and scene classification accuracy, and is able to reliably estimate the text-to-image alignment. Furthermore, by using textual and visual information, we are also able to successfully deal with coreference in text, improving upon the state-of-the-art Stanford coreference system [15]. --- paper_title: On the Integration of Grounding Language and Learning Objects paper_content: This paper presents a multimodal learning system that can ground spoken names of objects in their physical referents and learn to recognize those objects simultaneously from naturally co-occurring multisensory input. There are two technical problems involved: (1) the correspondence problem in symbol grounding - how to associate words (symbols) with their perceptually grounded meanings from multiple cooccurrences between words and objects in the physical environment. (2) object learning - how to recognize and categorize visual objects. We argue that those two problems can be fundamentally simplified by considering them in a general system and incorporating the spatio-temporal and cross-modal constraints of multimodal data. The system collects egocentric data including image sequences as well as speech while users perform natural tasks. It is able to automatically infer the meanings of object names from vision, and categorize objects based on teaching signals potentially encoded in speech. The experimental results reported in this paper reveal the effectiveness of using multimodal data and integrating heterogeneous techniques in machine learning, natural language processing and computer vision. --- paper_title: Isotonic CCA for sequence alignment and activity recognition paper_content: This paper presents an approach for sequence alignment based on canonical correlation analysis(CCA). We show that a novel set of constraints imposed on traditional CCA leads to canonical solutions with the time warping property, i.e., non-decreasing monotonicity in time. This formulation generalizes the more traditional dynamic time warping (DTW) solutions to cases where the alignment is accomplished on arbitrary subsequence segments, optimally determined from data, instead on individual sequence samples. We then introduce a robust and efficient algorithm to find such alignments using non-negative least squares reductions. Experimental results show that this new method, when applied to MOCAP activity recognition problems, can yield improved recognition accuracy. --- paper_title: Listen, attend and spell: A neural network for large vocabulary conversational speech recognition paper_content: We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. 
In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set. --- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. --- paper_title: Describing Videos by Exploiting Temporal Structure paper_content: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. --- paper_title: Neural Machine Translation by Jointly Learning to Align and Translate paper_content: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. 
The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition. --- paper_title: Deep visual-semantic alignments for generating image descriptions paper_content: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations. --- paper_title: Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences paper_content: We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents. Our alignment-based encoder-decoder model with long short-term memory recurrent neural networks (LSTM-RNN) translates natural language instructions to action sequences based upon a representation of the observable world state. We introduce a multi-level aligner that empowers our model to focus on sentence "regions" salient to the current world state by using multiple abstractions of the input sentence. In contrast to existing methods, our model uses no specialized linguistic resources (e.g., parsers) or task-specific annotations (e.g., seed lexicons). It is therefore generalizable, yet still achieves the best results reported to-date on a benchmark single-sentence dataset and competitive results for the limited-training multi-sentence setting. We analyze our model through a series of ablations that elucidate the contributions of the primary components of our model. --- paper_title: Attention-Based Models for Speech Recognition paper_content: Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1,2] and image caption generation [3]. We extend the attention-mechanism with features needed for speech recognition. 
We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER in single utterances and 20% in 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6% level. --- paper_title: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention paper_content: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO. --- paper_title: Stacked Attention Networks for Image Question Answering paper_content: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer. --- paper_title: Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks paper_content: We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.
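To make the mechanism shared by the attention-based captioning, question-answering, and speech-recognition models cited above concrete, the following is a minimal additive (Bahdanau-style) soft-attention sketch in NumPy. It is illustrative only and is not code from any of the cited papers; the array shapes, parameter names, and random initialization are assumptions chosen for readability.

```python
# Minimal additive soft attention over a sequence of encoder states, as used
# (in learned, end-to-end form) by the attention-based models cited above.
# Illustrative sketch only: shapes and parameter names are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(encoder_states, decoder_state, W_enc, W_dec, v):
    """Return a context vector and attention weights.

    encoder_states: (T, d_enc) annotations (e.g., image regions or audio frames)
    decoder_state:  (d_dec,)   current decoder hidden state
    W_enc, W_dec, v: projection parameters (random placeholders here)
    """
    # Score each encoder state against the current decoder state.
    scores = np.array([
        v @ np.tanh(W_enc @ h + W_dec @ decoder_state)
        for h in encoder_states
    ])
    weights = softmax(scores)            # soft alignment over time/space
    context = weights @ encoder_states   # weighted sum of annotations
    return context, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d_enc, d_dec, d_att = 6, 8, 4, 5
    encoder_states = rng.normal(size=(T, d_enc))
    decoder_state = rng.normal(size=d_dec)
    W_enc = rng.normal(size=(d_att, d_enc))
    W_dec = rng.normal(size=(d_att, d_dec))
    v = rng.normal(size=d_att)
    context, weights = additive_attention(encoder_states, decoder_state,
                                          W_enc, W_dec, v)
    print("attention weights:", np.round(weights, 3))
    print("context vector shape:", context.shape)
```

In the cited systems this scoring-and-weighting step is trained end-to-end inside a recurrent or convolutional decoder; only the score function and the annotated inputs (image regions, video frames, audio frames, question words) change from model to model.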
--- paper_title: Hierarchical Co-Attention for Visual Question Answering paper_content: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question and consequently the image via the co-attention mechanism in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN) model. Our final model outperforms all reported methods, improving the state-of-the-art on the VQA dataset from 60.4% to 62.1%, and from 61.6% to 65.4% on the COCO-QA dataset. --- paper_title: Dynamic Memory Networks for Visual and Textual Question Answering paper_content: Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision. --- paper_title: Leveraging Video Descriptions to Learn Video Question Answering paper_content: We propose a scalable approach to learn video-based question answering (QA): answer a "free-form natural language question" about a video content. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated. Next, we use these candidate QA pairs to train a number of video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), SS (Venugopalan et al. 2015). In order to handle non-perfect candidate QA pairs, we propose a self-paced learning procedure to iteratively identify them and mitigate their effects in training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines. --- paper_title: An HMM-based system for automatic segmentation and alignment of speech paper_content: A system for automatic time-aligned phone transcription of spoken Swedish has been developed. Using a speech recording and an orthographic transcription of the words spoken in the recording the system is able to generate a phone-level segmentation without manual intervention. The system uses a technique based on Hidden Markov Models to position 85.5% of all boundary positions within 20 ms of manually segmented reference boundaries on a set of test recordings.
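Several of the alignment papers cited above reduce audio-to-text or video-to-text alignment to a monotonic dynamic-programming problem once per-segment similarity scores are available. The sketch below shows the basic edit-distance-style recursion with backtrace on symbol sequences; it is a simplified illustration under unit costs, not the exact procedure of any cited system, and the example sequences are hypothetical.

```python
# Minimal dynamic-programming alignment between a decoded symbol sequence and
# a reference sequence (e.g., phoneme hypotheses vs. a normalized transcript).
# Illustrative sketch under simple edit-distance costs.
def align(hyp, ref, sub_cost=1, ins_cost=1, del_cost=1):
    """Return the minimum edit cost and a list of (hyp_symbol, ref_symbol) pairs."""
    n, m = len(hyp), len(ref)
    # D[i][j] = cost of aligning hyp[:i] with ref[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * del_cost
    for j in range(1, m + 1):
        D[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1][j - 1] + (0 if hyp[i - 1] == ref[j - 1] else sub_cost)
            D[i][j] = min(match, D[i - 1][j] + del_cost, D[i][j - 1] + ins_cost)
    # Backtrace to recover the symbol-level alignment.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                D[i][j] == D[i - 1][j - 1] + (0 if hyp[i - 1] == ref[j - 1] else sub_cost)):
            pairs.append((hyp[i - 1], ref[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + del_cost:
            pairs.append((hyp[i - 1], None)); i -= 1
        else:
            pairs.append((None, ref[j - 1])); j -= 1
    return D[n][m], list(reversed(pairs))

if __name__ == "__main__":
    hyp = ["h", "e", "l", "o", "w"]   # hypothetical decoded symbols
    ref = ["h", "e", "l", "l", "o"]   # hypothetical reference symbols
    cost, pairs = align(hyp, ref)
    print("edit cost:", cost)
    for h, r in pairs:
        print(f"{h or '-':>3} <-> {r or '-'}")
```

HMM-based forced alignment replaces the symbol-match cost with acoustic likelihoods, but the backtrace over monotonic moves is analogous.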
--- paper_title: Analyzing the Behavior of Visual Question Answering Models paper_content: Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from two major classes of VQA models -- with-attention and without-attention and show the similarities and differences in the behavior of these models. We also analyze the winning entry of the VQA Challenge 2016. Our behavior analysis reveals that despite recent progress, today's VQA models are "myopic" (tend to fail on sufficiently novel instances), often "jump to conclusions" (converge on a predicted answer after 'listening' to just half the question), and are "stubborn" (do not change their answers across images). --- paper_title: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper_content: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge. --- paper_title: Integration of acoustic and visual speech signals using neural networks paper_content: Results from a series of experiments that use neural networks to process the visual speech signals of a male talker are presented. In these preliminary experiments, the results are limited to static images of vowels. It is demonstrated that these networks are able to extract speech information from the visual images and that this information can be used to improve automatic vowel recognition. The structure of speech and its corresponding acoustic and visual signals are reviewed. The specific data that was used in the experiments along with the network architectures and algorithms are described. The results of integrating the visual and auditory signals for vowel recognition in the presence of acoustic noise are presented. --- paper_title: Multimodal Video Indexing: A Review of the State-of-the-art paper_content: Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. In this paper we survey several methods aiming at automating this time and resource consuming process.
Good reviews on single modality based video indexing have appeared in literature. Effective indexing, however, requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in collaborative fashion. Therefore, instead of separately treating the different information sources involved, and their specific algorithms, we focus on the similarities and differences between the modalities. To that end we put forward a unifying and multimodal framework, which views a video document from the perspective of its author. This framework forms the guiding principle for identifying index types, for which automatic methods are found in literature. It furthermore forms the basis for categorizing these different methods. --- paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions paper_content: Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology. --- paper_title: Medical Image Fusion: A survey of the state of the art paper_content: Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. This review concludes that even though there exists several open ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years. --- paper_title: Multimodal Emotion Recognition in Response to Videos paper_content: This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. 
Then, EEG responses and eye gaze data were recorded from 24 participants while watching emotional video clips. Ground truth was defined based on the median arousal and valence scores given to clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes for each dimension were defined. The arousal classes were calm, medium aroused, and activated and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels of either valence or arousal was determined by classification of bodily responses. A one-participant-out cross validation was employed to investigate the classification performance in a user-independent approach. The best classification accuracies of 68.5 percent for three labels of valence and 76.4 percent for three labels of arousal were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and do not underperform for valence assessments. --- paper_title: Multimodal fusion for multimedia analysis: a survey paper_content: This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks. The existing literature on multimodal fusion research is presented through several classifications based on the fusion methodology and the level of fusion (feature, decision, and hybrid). The fusion methods are described from the perspective of the basic concept, advantages, weaknesses, and their usage in various analysis tasks as reported in the literature. Moreover, several distinctive issues that influence a multimodal fusion process such as, the use of correlation and independence, confidence level, contextual information, synchronization between different modalities, and the optimal modality selection are also highlighted. Finally, we present the open issues for further research in the area of multimodal fusion. --- paper_title: Multimedia classification and event detection using double fusion paper_content: Multimedia Event Detection(MED) is a multimedia retrieval task with the goal of finding videos of a particular event in video archives, given example videos and event descriptions; different from MED, multimedia classification is a task that classifies given videos into specified classes. Both tasks require mining features of example videos to learn the most discriminative features, with best performance resulting from a combination of multiple complementary features. How to combine different features is the focus of this paper. Generally, early fusion and late fusion are two popular combination strategies. The former one fuses features before performing classification and the latter one combines output of classifiers from different features. Early fusion can better capture the relationship among features yet is prone to over-fit the training data. Late fusion deals with the over-fitting problem better but does not allow classifiers to train on all the data at the same time. In this paper, we introduce a fusion scheme named double fusion, which simply combines early fusion and late fusion together to incorporate their advantages. Results are reported on the TRECVID MED 2010, MED 2011, UCF50 and HMDB51 datasets. 
For the MED 2010 dataset, we get a mean minimal normalized detection cost (MMNDC) of 0.49, which exceeds the state-of-the-art performance by more than 12 percent. On the TRECVID MED 2011 test dataset, we achieve a MMNDC of 0.51, which is the second best among all 19 participants. On UCF50 and HMDB51, we obtain classification accuracy of 88.1 % and 48.7 % respectively, which are the best reported results to date. --- paper_title: Recent advances in the automatic recognition of audio-visual speech paper_content: Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks. --- paper_title: Multimodal Saliency and Fusion for Movie Summarization Based on Aural, Visual, and Textual Attention paper_content: Multimodal streams of sensory information are naturally parsed and integrated by humans using signal-level feature extraction and higher level cognitive processes. Detection of attention-invoking audiovisual segments is formulated in this work on the basis of saliency models for the audio, visual, and textual information conveyed in a video stream. Aural or auditory saliency is assessed by cues that quantify multifrequency waveform modulations, extracted through nonlinear operators and energy tracking. Visual saliency is measured through a spatiotemporal attention model driven by intensity, color, and orientation. Textual or linguistic saliency is extracted from part-of-speech tagging on the subtitles information available with most movie distributions. The individual saliency streams, obtained from modality-depended cues, are integrated in a multimodal saliency curve, modeling the time-varying perceptual importance of the composite video stream and signifying prevailing sensory events. The multimodal saliency representation forms the basis of a generic, bottom-up video summarization algorithm. Different fusion schemes are evaluated on a movie database of multimodal saliency annotations with comparative results provided across modalities. The produced summaries, based on low-level features and content-independent fusion and selection, are of subjectively high aesthetic and informative quality. 
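The fusion strategies discussed in the surrounding entries reduce, in their simplest form, to feature-level (early) and decision-level (late) combination; double fusion runs both. The following sketch contrasts the two on synthetic two-modality data. It assumes NumPy and scikit-learn are available; the data generation, feature dimensions, and choice of logistic-regression classifiers are illustrative assumptions rather than the setup of any cited system.

```python
# Minimal contrast of early (feature-level) vs. late (decision-level) fusion
# on synthetic two-modality data. Illustrative sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)
# Two synthetic "modalities" (e.g., audio and visual features) correlated with the label.
audio = labels[:, None] + rng.normal(scale=1.0, size=(n, 10))
visual = labels[:, None] + rng.normal(scale=1.5, size=(n, 20))

train, test = slice(0, 150), slice(150, n)

# Early fusion: concatenate modality features, train a single classifier.
early_X = np.hstack([audio, visual])
early_clf = LogisticRegression(max_iter=1000).fit(early_X[train], labels[train])
early_acc = early_clf.score(early_X[test], labels[test])

# Late fusion: train one classifier per modality, then average their scores.
audio_clf = LogisticRegression(max_iter=1000).fit(audio[train], labels[train])
visual_clf = LogisticRegression(max_iter=1000).fit(visual[train], labels[train])
avg_prob = 0.5 * (audio_clf.predict_proba(audio[test])[:, 1] +
                  visual_clf.predict_proba(visual[test])[:, 1])
late_acc = np.mean((avg_prob > 0.5) == labels[test])

print(f"early-fusion accuracy: {early_acc:.2f}")
print(f"late-fusion accuracy:  {late_acc:.2f}")
```

As the double-fusion paper above notes, early fusion lets a single classifier capture cross-modal relationships but is prone to overfitting the concatenated features, while late fusion is more robust to that at the cost of never training on all modalities jointly; combining both is one way to get the advantages of each.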
--- paper_title: Multi-level Fusion of Audio and Visual Features for Speaker Identification paper_content: This paper explores the fusion of audio and visual evidences through a multi-level hybrid fusion architecture based on dynamic Bayesian network (DBN), which combines model level and decision level fusion to achieve higher performance. In model level fusion, a new audio-visual correlative model (AVCM) based on DBN is proposed, which describes both the inter-correlations and loose timing synchronicity between the audio and video streams. The experiments on the CMU database and our own homegrown database both demonstrate that the methods can improve the accuracies of audio-visual bimodal speaker identification at all levels of acoustic signal-to-noise-ratios (SNR) from 0dB to 30dB with varying acoustic conditions. --- paper_title: Multiple classifier systems for the classification of audio-visual emotional states paper_content: Research activities in the field of human-computer interaction increasingly addressed the aspect of integrating some type of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, hand or body gestures, and therefore the classification of human emotions should be considered as a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems utilizing audio and visual features to classify human emotional states. For that a variety of features have been derived. From the audio signal the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP have been used. In addition to that two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power, Valence as defined in the AVEC data set. As classifier architectures multiple classifier systems are applied; these have been proven to be accurate and robust against missing and noisy data.
--- paper_title: Modeling latent discriminative dynamic of multi-dimensional affective signals paper_content: During face-to-face communication, people continuously exchange para-linguistic information such as their emotional state through facial expressions, posture shifts, gaze patterns and prosody. These affective signals are subtle and complex. In this paper, we propose to explicitly model the interaction between the high level perceptual features using Latent-Dynamic Conditional Random Fields. This approach has the advantage of explicitly learning the sub-structure of the affective signals as well as the extrinsic dynamic between emotional labels. We evaluate our approach on the Audio-Visual Emotion Challenge (AVEC 2011) dataset. By using visual features easily computable using off-the-shelf sensing software (vertical and horizontal eye gaze, head tilt and smile intensity), we show that our approach based on LDCRF model outperforms previously published baselines for all four affective dimensions. By integrating audio features, our approach also outperforms the audio-visual baseline.
--- paper_title: Multi-view latent variable discriminative models for action recognition paper_content: Many human action recognition tasks involve data that can be factorized into multiple views such as body postures and hand shapes. These views often interact with each other over time, providing important cues to understanding the action. We present multi-view latent variable discriminative models that jointly learn both view-shared and view-specific sub-structures to capture the interaction between views. Knowledge about the underlying structure of the data is formulated as a multi-chain structured latent conditional model, explicitly learning the interaction between multiple views using disjoint sets of hidden variables in a discriminative manner. The chains are tied using a predetermined topology that repeats over time. We present three topologies — linked, coupled, and linked-coupled — that differ in the type of interaction between views that they model. We evaluate our approach on both segmented and unsegmented human action recognition tasks, using the ArmGesture, the NATOPS, and the ArmGesture-Continuous data. Experimental results show that our approach outperforms previous state-of-the-art action recognition models. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Video Description Generation using Audio and Visual Cues paper_content: The recent advances in image captioning stimulate the research in generating natural language description for visual content, which can be widely applied in many applications such as assisting blind people. Video description generation is a more complex task than image caption. Most works of video description generation focus on visual information in the video. However, audio provides rich information for describing video contents as well. In this paper, we propose to generate video descriptions in natural sentences using both audio and visual cues. We use unified deep neural networks with both convolutional and recurrent structure.
Experimental results on the Microsoft Research Video Description (MSVD) corpus prove that fusing audio information greatly improves the video description performance. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: Guiding the Long-Short Term Memory Model for Image Caption Generation paper_content: In this work we focus on the problem of image caption generation. We propose an extension of the long short term memory (LSTM) model, which we coin gLSTM for short. In particular, we add semantic information extracted from the image as extra input to each unit of the LSTM block, with the aim of guiding the model towards solutions that are more tightly coupled to the image content. Additionally, we explore different length normalization strategies for beam search to avoid bias towards short sentences. On various benchmark datasets such as Flickr8K, Flickr30K and MS COCO, we obtain results that are on par with or better than the current state-of-the-art. --- paper_title: Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space paper_content: Past research in analysis of human affect has focused on recognition of prototypic expressions of six basic emotions based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect. Converging with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for dimensional and continuous prediction of emotions in valence and arousal space, 2) compares the performance of two state-of-the-art machine learning techniques applied to the target problem, the bidirectional Long Short-Term Memory neural networks (BLSTM-NNs), and Support Vector Machines for Regression (SVR), and 3) proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions. Evaluation of the proposed approach has been done using the spontaneous SAL data from four subjects and subject-dependent leave-one-sequence-out cross validation. 
The experimental results obtained show that: 1) on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context, 2) the proposed output-associative fusion framework outperforms feature-level and model-level fusion by modeling and learning correlations and patterns between the valence and arousal dimensions, and 3) the proposed system is well able to reproduce the valence and arousal ground truth obtained from human coders. --- paper_title: The classification of multi-modal data with hidden conditional random field paper_content: The classification of multi-modal data has been an active research topic in recent years. It has been used in many applications where the processing of multi-modal data is involved. Motivated by the assumption that different modalities in multi-modal data share latent structure (topics), this paper attempts to learn the shared structure by exploiting the symbiosis of multiple-modality and therefore boost the classification of multi-modal data, we call it Multi-modal Hidden Conditional Random Field (M-HCRF). M-HCRF represents the intrinsical structure shared by different modalities as hidden variables in a undirected general graphical model. When learning the latent shared structure of the multi-modal data, M-HCRF can discover the interactions among the hidden structure and the supervised category information. The experimental results show the effectiveness of our proposed M-HCRF when applied to the classification of multi-modal data. --- paper_title: Dimensional affect recognition using Continuous Conditional Random Fields paper_content: During everyday interaction people display various non-verbal signals that convey emotions. These signals are multi-modal and range from facial expressions, shifts in posture, head pose, and non-verbal speech. They are subtle, continuous and complex. Our work concentrates on the problem of automatic recognition of emotions from such multimodal signals. Most of the previous work has concentrated on classifying emotions as belonging to a set of categories, or by discretising the continuous dimensional space. We propose the use of Continuous Conditional Random Fields (CCRF) in combination with Support Vector Machines for Regression (SVR) for modeling continuous emotion in dimensional space. Our Correlation Aware Continuous Conditional Random Field (CA-CCRF) exploits the non-orthogonality of emotion dimensions. By using visual features based on geometric shape and appearance, and a carefully selected subset of audio features we show that our CCRF and CA-CCRF approaches outperform previously published baselines for all four affective dimensions of valence, arousal, power and expectancy. --- paper_title: Deep multimodal fusion for persuasiveness prediction paper_content: Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. 
Our methods show significant improvement in performance over previous approaches. --- paper_title: Hidden Conditional Random Fields paper_content: We present a discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations. A hidden-state conditional random field framework learns a set of latent variables conditioned on local features. Observations need not be independent and may overlap in space and time. --- paper_title: Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis paper_content: We present a novel way of extracting features from short texts, based on the activation values of an inner layer of a deep convolutional neural network. We use the extracted features in multimodal sentiment analysis of short video clips representing one sentence each. We use the combined feature vectors of textual, visual, and audio modalities to train a classifier based on multiple kernel learning, which is known to be good at heterogeneous data. We obtain 14% performance improvement over the state of the art and present a parallelizable decision-level data fusion method, which is much faster, though slightly less accurate. --- paper_title: Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text paper_content: This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of Youtube videos as well as two large movie description datasets showing significant improvements in grammaticality while modestly improving descriptive quality. --- paper_title: Multiple Kernel Learning in the Primal for Multimodal Alzheimer’s Disease Classification paper_content: To achieve effective and efficient detection of Alzheimer's disease (AD), many machine learning methods have been introduced into this realm. However, the general case of limited training samples, as well as different feature representations typically makes this problem challenging. In this paper, we propose a novel multiple kernel-learning framework to combine multimodal features for AD classification, which is scalable and easy to implement. Contrary to the usual way of solving the problem in the dual, we look at the optimization from a new perspective. By conducting Fourier transform on the Gaussian kernel, we explicitly compute the mapping function, which leads to a more straightforward solution of the problem in the primal. Furthermore, we impose the mixed L21 norm constraint on the kernel weights, known as the group lasso regularization, to enforce group sparsity among different feature modalities. This actually acts as a role of feature modality selection, while at the same time exploiting complementary information among different kernels. Therefore, it is able to extract the most discriminative features for classification. Experiments on the ADNI dataset demonstrate the effectiveness of the proposed method. --- paper_title: Factorial Hidden Markov Models paper_content: Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. 
In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach‘s chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot. --- paper_title: A coupled HMM for audio-visual speech recognition paper_content: In recent years several speech recognition systems that use visual together with audio information showed significant increase in performance over the standard speech recognition systems. The use of visual features is justified by both the bimodality of the speech generation and by the need of features that are invariant to acoustic noise perturbation. The audio-visual speech recognition system presented in this paper introduces a novel audio-visual fusion technique that uses a coupled hidden Markov model (HMM). The statistical properties of the coupled-HMM allow us to model the state asynchrony of the audio and visual observations sequences while still preserving their natural correlation over time. The experimental results show that the coupled HMM outperforms the multistream HMM in audio visual speech recognition. --- paper_title: A Novel Multiple Kernel Learning Framework for Heterogeneous Feature Fusion and Variable Selection paper_content: We propose a novel multiple kernel learning (MKL) algorithm with a group lasso regularizer, called group lasso regularized MKL (GL-MKL), for heterogeneous feature fusion and variable selection. For problems of feature fusion, assigning a group of base kernels for each feature type in an MKL framework provides a robust way in fitting data extracted from different feature domains. Adding a mixed norm constraint (i.e., group lasso) as the regularizer, we can enforce the sparsity at the group/feature level and automatically learn a compact feature set for recognition purposes. More precisely, our GL-MKL determines the optimal base kernels, including the associated weights and kernel parameters, and results in improved recognition performance. Besides, our GL-MKL can also be extended to address heterogeneous variable selection problems. For such problems, we aim to select a compact set of variables (i.e., feature attributes) for comparable or improved performance. Our proposed method does not need to exhaustively search for the entire variable space like prior sequential-based variable selection methods did, and we do not require any prior knowledge on the optimal size of the variable subset either. 
To verify the effectiveness and robustness of our GL-MKL, we conduct experiments on video and image datasets for heterogeneous feature fusion, and perform variable selection on various UCI datasets. --- paper_title: Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional LSTM modeling paper_content: In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long Short-Term Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and non-prototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72%, 65%, and 55% for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively. --- paper_title: Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering paper_content: We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process which constitutes a single "hop" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3]. --- paper_title: Boosted Learning in Dynamic Bayesian Networks for Multimodal Speaker Detection paper_content: Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks.
We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach. --- paper_title: Multimodal human behavior analysis: learning correlation and interaction across modalities paper_content: Multimodal human behavior analysis is a challenging task due to the presence of complex nonlinear correlations and interactions across modalities. We present a novel approach to this problem based on Kernel Canonical Correlation Analysis (KCCA) and Multi-view Hidden Conditional Random Fields (MV-HCRF). Our approach uses a nonlinear kernel to map multimodal data to a high-dimensional feature space and finds a new projection of the data that maximizes the correlation across modalities. We use a multi-chain structured graphical model with disjoint sets of latent variables, one set per modality, to jointly learn both view-shared and view-specific sub-structures of the projected data, capturing interaction across modalities explicitly. We evaluate our approach on a task of agreement and disagreement recognition from nonverbal audio-visual cues using the Canal 9 dataset. Experimental results show that KCCA makes capturing nonlinear hidden dynamics easier and that MV-HCRF helps learn interaction across modalities. --- paper_title: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data paper_content: We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data. --- paper_title: Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition paper_content: Merging decisions from different modalities is a crucial problem in audio-visual speech recognition. To solve this, state-synchronous multi-stream HMMs have been proposed for their important advantage of incorporating stream reliability in their fusion scheme. This paper focuses on stream weight adaptation based on modality confidence estimators. We assume different and time-varying environment noise, as can be encountered in realistic applications, and, for this, adaptive methods are best suited. Stream reliability is assessed directly through classifier outputs since they are not specific to either noise type or level. The influence of constraining the weights to sum to one is also discussed. --- paper_title: Show and tell: A neural image caption generator paper_content: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing.
In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: http://idl.baidu.com/FM-IQA.html. --- paper_title: Multimodal Deep Learning paper_content: Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa.
Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning. --- paper_title: On feature combination for multiclass object classification paper_content: A key ingredient in the design of visual object classification systems is the identification of relevant class-specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In recent years, substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass settings. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, which are orders of magnitude faster than learning techniques, are highly competitive with multiple kernel learning. Furthermore the Boosting-type methods are found to produce consistently better results in all experiments. We provide insight into when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently. --- paper_title: Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning paper_content: This paper presents our proposed approach for the second Emotion Recognition in The Wild Challenge. We propose a new feature descriptor called Histogram of Oriented Gradients from Three Orthogonal Planes (HOG_TOP) to represent facial expressions. We also explore the properties of visual features and audio features, and adopt Multiple Kernel Learning (MKL) to find an optimal feature fusion. An SVM with multiple kernels is trained for the facial expression classification. Experimental results demonstrate that our method achieves promising performance. The overall classification accuracies on the validation set and test set are 40.21% and 45.21%, respectively. --- paper_title: Multiple kernel learning for emotion recognition in the wild paper_content: We propose a method to automatically detect emotions in unconstrained settings as part of the 2013 Emotion Recognition in the Wild Challenge [16], organized in conjunction with the ACM International Conference on Multimodal Interaction (ICMI 2013). Our method combines multiple visual descriptors with paralinguistic audio features for multimodal classification of video clips. Extracted features are combined using Multiple Kernel Learning and the clips are classified using an SVM into one of the seven emotion categories: Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise. The proposed method achieves competitive results, with an accuracy gain of approximately 10% above the challenge baseline.
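To make the kernel-level fusion idea behind the MKL-based entries above concrete, the following minimal sketch (not taken from any of the cited papers) combines one RBF kernel per modality into a single precomputed kernel for an SVM. It assumes scikit-learn and synthetic audio/visual features, and the kernel weights are fixed by hand rather than learned as in true multiple kernel learning.

```python
# Illustrative sketch (not from the cited papers): fusing audio and visual
# features by combining one RBF kernel per modality into a single SVM.
# Full MKL would learn the kernel weights; here they are fixed by hand.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 120, 30
X_audio_tr, X_audio_te = rng.normal(size=(n_train, 40)), rng.normal(size=(n_test, 40))
X_vis_tr, X_vis_te = rng.normal(size=(n_train, 300)), rng.normal(size=(n_test, 300))
y_tr = rng.integers(0, 7, size=n_train)  # e.g., 7 emotion classes

def combined_kernel(Xa, Xa_ref, Xv, Xv_ref, w_audio=0.4, w_visual=0.6):
    # Weighted sum of per-modality base kernels (weights are assumptions).
    return (w_audio * rbf_kernel(Xa, Xa_ref, gamma=0.01)
            + w_visual * rbf_kernel(Xv, Xv_ref, gamma=0.001))

K_train = combined_kernel(X_audio_tr, X_audio_tr, X_vis_tr, X_vis_tr)
K_test = combined_kernel(X_audio_te, X_audio_tr, X_vis_te, X_vis_tr)

clf = SVC(kernel="precomputed").fit(K_train, y_tr)
print(clf.predict(K_test)[:5])
```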
--- paper_title: Hidden Conditional Random Fields for Meeting Segmentation paper_content: Automatic segmentation and classification of recorded meetings provides a basis towards understanding the content of a meeting. It enables effective browsing and querying in a meeting archive. However, the robustness of existing approaches is often not sufficient. We therefore strive to improve on this task by applying conditional random fields augmented by hidden states. These hidden conditional random fields have been proven to be efficient in low-level pattern recognition tasks. We propose to use these novel models to segment a pre-recorded meeting into meeting events. Since they can also be seen as an extension of hidden Markov models, an elaborate comparison of the two approaches is provided. Extensive test runs on the public M4 Scripted Meeting Corpus demonstrate the strong performance of our approach compared to other similar methods. --- paper_title: EmoNets: Multimodal deep learning approaches for emotion recognition in video paper_content: The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based "bag-of-mouths" model, which extracts visual features around the mouth region, and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67% on the 2014 dataset. --- paper_title: Learning Multi-modal Similarity paper_content: In many applications involving multi-media data, the definition of similarity between items is integral to several key tasks, e.g., nearest-neighbor retrieval, classification, and recommendation. Data in such regimes typically exhibits multiple modalities, such as acoustic and visual content of video. Integrating such heterogeneous data to form a holistic similarity space is therefore a key challenge to be overcome in many real-world applications. We present a novel multiple kernel learning technique for integrating heterogeneous data into a single, unified similarity space. Our algorithm learns an optimal ensemble of kernel transformations which conform to measurements of human perceptual similarity, as expressed by relative comparisons. To cope with the ubiquitous problems of subjectivity and inconsistency in multi-media similarity, we develop graph-based techniques to filter similarity measurements, resulting in a simplified and robust training procedure.
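A complementary strategy explored by several of the systems above (e.g., EmoNets) is decision-level fusion, where each modality is classified separately and the per-modality posteriors are then merged. The sketch below is a hypothetical illustration of that general idea with placeholder classifiers, synthetic data, and hand-picked weights; it is not a reproduction of any cited system.

```python
# Illustrative sketch of decision-level (late) fusion: each modality gets its
# own classifier, and their predicted class probabilities are averaged with
# hand-picked weights. The cited systems use far richer per-modality models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
Xa, Xv = rng.normal(size=(200, 30)), rng.normal(size=(200, 100))  # audio / visual
y = rng.integers(0, 3, size=200)

audio_clf = LogisticRegression(max_iter=1000).fit(Xa[:150], y[:150])
visual_clf = LogisticRegression(max_iter=1000).fit(Xv[:150], y[:150])

def late_fusion(pa, pv, w_audio=0.5, w_visual=0.5):
    # Weighted average of per-modality posteriors; weights are assumptions.
    return w_audio * pa + w_visual * pv

proba = late_fusion(audio_clf.predict_proba(Xa[150:]),
                    visual_clf.predict_proba(Xv[150:]))
print(proba.argmax(axis=1)[:10])  # fused class predictions
```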
--- paper_title: Extending Long Short-Term Memory for Multi-View Structured Learning paper_content: Long Short-Term Memory (LSTM) networks have been successfully applied to a number of sequence learning problems but they lack the design flexibility to model multiple view interactions, limiting their ability to exploit multi-view relationships. In this paper, we propose a Multi-View LSTM (MV-LSTM), which explicitly models the view-specific and cross-view interactions over time or structured outputs. We evaluate the MV-LSTM model on four publicly available datasets spanning two very different structured learning problems: multimodal behaviour recognition and image captioning. The experimental results show competitive performance on all four datasets when compared with state-of-the-art models. --- paper_title: Multiple Kernel Learning for Visual Object Recognition: A Review paper_content: Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient. --- paper_title: Recent advances in the automatic recognition of audio-visual speech paper_content: Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. 
Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks. --- paper_title: Multi-View Learning in the Presence of View Disagreement paper_content: Traditional multi-view learning approaches suffer in the presence of view disagreement, i.e., when samples in each view do not belong to the same class due to view corruption, occlusion or other noise processes. In this paper we present a multi-view learning approach that uses a conditional entropy criterion to detect view disagreement. Once detected, samples with view disagreement are filtered and standard multi-view learning methods can be successfully applied to the remaining samples. Experimental evaluation on synthetic and audio-visual databases demonstrates that the detection and filtering of view disagreement considerably increases the performance of traditional multi-view learning approaches. --- paper_title: Co-Adaptation of audio-visual speech and gesture classifiers paper_content: The construction of robust multimodal interfaces often requires large amounts of labeled training data to account for cross-user differences and variation in the environment. In this work, we investigate whether unlabeled training data can be leveraged to build more reliable audio-visual classifiers through co-training, a multi-view learning algorithm. Multimodal tasks are good candidates for multi-view learning, since each modality provides a potentially redundant view to the learning algorithm. We apply co-training to two problems: audio-visual speech unit classification, and user agreement recognition using spoken utterances and head gestures. We demonstrate that multimodal co-training can be used to learn from only a few labeled examples in one or both of the audio-visual modalities. We also propose a co-adaptation algorithm, which adapts existing audio-visual classifiers to a particular user or noise condition by leveraging the redundancy in the unlabeled data. --- paper_title: Applying Co-Training Methods To Statistical Parsing paper_content: We propose a novel Co-Training method for statistical parsing. The algorithm takes as input a small corpus (9695 sentences) annotated with parse trees, a dictionary of possible lexicalized structures for each word in the training set and a large pool of unlabeled text. The algorithm iteratively labels the entire data set with parse trees. Using empirical results based on parsing the Wall Street Journal corpus we show that training a statistical parser on the combined labeled and unlabeled data strongly out-performs training only on the labeled data. --- paper_title: Multi-view CCA-based acoustic features for phonetic recognition across speakers and domains paper_content: Canonical correlation analysis (CCA) and kernel CCA can be used for unsupervised learning of acoustic features when a second view (e.g., articulatory measurements) is available for some training data, and such projections have been used to improve phonetic frame classification. Here we study the behavior of CCA-based acoustic features on the task of phonetic recognition, and investigate to what extent they are speaker-independent or domain-independent. The acoustic features are learned using data drawn from the University of Wisconsin X-ray Microbeam Database (XRMB). The features are evaluated within and across speakers on XRMB data, as well as on out-of-domain TIMIT and MOCHA-TIMIT data. 
Experimental results show consistent improvement with the learned acoustic features over baseline MFCCs and PCA projections. In both speaker-dependent and cross-speaker experiments, phonetic error rates are improved by 4-9% absolute (10-23% relative) using CCA-based features over baseline MFCCs. In cross-domain phonetic recognition (training on XRMB and testing on MOCHA or TIMIT), the learned projections provide smaller improvements. --- paper_title: Multimodal Learning with Deep Boltzmann Machines paper_content: Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time. --- paper_title: Unsupervised improvement of visual detectors using cotraining paper_content: One significant challenge in the construction of visual detection systems is the acquisition of sufficient labeled data. We describe a new technique for training visual detectors which requires only a small quantity of labeled data, and then uses unlabeled data to improve performance over time. Unsupervised improvement is based on the cotraining framework of Blum and Mitchell, in which two disparate classifiers are trained simultaneously. Unlabeled examples which are confidently labeled by one classifier are added, with labels, to the training set of the other classifier. Experiments are presented on the realistic task of automobile detection in roadway surveillance video. In this application, cotraining reduces the false positive rate by a factor of 2 to 11 from the classifier trained with labeled data alone. --- paper_title: Multimodal Deep Learning paper_content: Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. 
Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning. --- paper_title: Combining labeled and unlabeled data with co-training paper_content: We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. --- paper_title: Multimodal Transfer Deep Learning for Audio Visual Recognition paper_content: We propose a multimodal deep learning framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality. For instance, we show that we can leverage the speech data to fine-tune the network trained for video recognition, given an initial set of audio-video parallel dataset within the same semantics. Our approach learns the analogy-preserving embeddings between the abstract representations learned from each network, allowing for semantics-level transfer or reconstruction of the data among different modalities. Our method is thus specifically useful when one of the modalities is more scarce in labeled data than other modalities. While we mainly focus on applying transfer learning on the audio-visual recognition task as an application of our approach, our framework is flexible and thus can work with any multimodal datasets. In this work-in-progress report, we show our preliminary results on the AV-Letters dataset.
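The two-view co-training strategy described in the "Combining labeled and unlabeled data with co-training" abstract above can be sketched in a few lines. The code below is a deliberately simplified illustration, assuming scikit-learn, synthetic data, and a naive most-confident selection rule; it does not reproduce the original algorithm's exact selection scheme.

```python
# Minimal co-training sketch (simplified from the two-view strategy described
# above): two classifiers, one per view, each adds its most confident
# prediction on unlabeled data to the other's training set.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X1, X2 = rng.normal(size=(500, 20)), rng.normal(size=(500, 20))  # two views
y = rng.integers(0, 2, size=500)
labeled = np.arange(30)                      # few labeled examples
unlabeled = set(range(30, 500))

L1, L2 = list(labeled), list(labeled)        # per-view (pseudo-)labeled pools
y1, y2 = list(y[labeled]), list(y[labeled])

for _ in range(10):                          # co-training rounds
    c1 = GaussianNB().fit(X1[L1], y1)
    c2 = GaussianNB().fit(X2[L2], y2)
    pool = list(unlabeled)
    if not pool:
        break
    p1, p2 = c1.predict_proba(X1[pool]), c2.predict_proba(X2[pool])
    # Each classifier labels its single most confident unlabeled example
    # for the *other* classifier (a real system would add several per class).
    i1 = pool[int(p1.max(axis=1).argmax())]
    i2 = pool[int(p2.max(axis=1).argmax())]
    L2.append(i1); y2.append(int(c1.predict(X1[[i1]])[0]))
    L1.append(i2); y1.append(int(c2.predict(X2[[i2]])[0]))
    unlabeled -= {i1, i2}

print(len(L1), len(L2))  # labeled pools grow as pseudo-labels are exchanged
```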
--- paper_title: Grounding Action Descriptions in Videos paper_content: Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos . We present a general purpose corpus that aligns high quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions. --- paper_title: Multimodal distributional semantics paper_content: Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete "visual words" in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter. --- paper_title: Visual Information in Semantic Representation paper_content: The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal based on information provided by the linguistic input despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account. --- paper_title: Zero-Shot Learning with Semantic Output Codes paper_content: We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. 
As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words. --- paper_title: Grounded Language Learning from Video Described with Sentences paper_content: We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video. --- paper_title: Grounding Semantics in Olfactory Perception paper_content: Multi-modal semantics has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in olfactory (smell) data, through the construction of a novel bag of chemical compounds model. We use standard evaluations for multi-modal semantics, including measuring conceptual similarity and cross-modal zero-shot learning. To our knowledge, this is the first work to evaluate semantic similarity on representations grounded in olfactory data. --- paper_title: Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models paper_content: The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research. --- paper_title: Regularizing Long Short Term Memory with 3D Human-Skeleton Sequences for Action Recognition paper_content: This paper argues that large-scale action recognition in video can be greatly improved by providing an additional modality in training data – namely, 3D human-skeleton sequences – aimed at complementing poorly represented or missing features of human actions in the training videos. For recognition, we use Long Short Term Memory (LSTM) grounded via a deep Convolutional Neural Network (CNN) onto the video. 
Training of LSTM is regularized using the output of another encoder LSTM (eLSTM) grounded on 3D human-skeleton training data. For such regularized training of LSTM, we modify the standard backpropagation through time (BPTT) in order to address the well-known issues with gradient descent in constraint optimization. Our evaluation on three benchmark datasets – Sports-1M, HMDB-51, and UCF101 – shows accuracy improvements from 1.7% up to 14.8% relative to the state of the art. --- paper_title: Multi- and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception paper_content: Multi-modal semantics has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in raw auditory data, using standard evaluations for multi-modal semantics, including measuring conceptual similarity and relatedness. We also evaluate cross-modal mappings, through a zero-shot learning task mapping between linguistic and auditory modalities. In addition, we evaluate multimodal representations on an unsupervised musical instrument clustering task. To our knowledge, this is the first work to combine linguistic and auditory information into multi-modal representations. --- paper_title: Zero-Shot Learning Through Cross-Modal Transfer paper_content: This work introduces a model that can recognize objects in images even if no training data is available for the object class. The only necessary knowledge about unseen visual categories comes from unsupervised text corpora. Unlike previous zero-shot learning models, which can only differentiate between unseen classes, our model can operate on a mixture of seen and unseen classes, simultaneously obtaining state of the art performance on classes with thousands of training images and reasonable performance on unseen classes. This is achieved by seeing the distributions of words in texts as a semantic space for understanding what objects look like. Our deep learning model does not require any manually defined semantic or visual features for either words or images. Images are mapped to be close to semantic word vectors corresponding to their classes, and the resulting image embeddings can be used to distinguish whether an image is of a seen or unseen class. We then use novelty detection methods to differentiate unseen classes from seen classes. We demonstrate two novelty detection strategies; the first gives high accuracy on unseen classes, while the second is conservative in its prediction of novelty and keeps the seen classes' accuracy high. --- paper_title: Describing objects by their attributes paper_content: We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object ("spotty dog", not just "dog"); to say something about unfamiliar objects ("hairy and four-legged", not just "unknown"); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic ("spotty") or discriminative ("dogs have it but sheep do not"). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories.
We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework. --- paper_title: Distributional Semantics in Technicolor paper_content: Our research aims at building computational models of word meaning that are perceptually grounded. Using computer vision techniques, we build visual and multimodal distributional models and compare them to standard textual models. Our results show that, while visual models with state-of-the-art computer vision techniques perform worse than textual models in general tasks (accounting for semantic relatedness), they are as good or better models of the meaning of words with visual correlates such as color terms, even in a nontrivial task that involves nonliteral uses of such words. Moreover, we show that visual and textual information are tapping on different aspects of meaning, and indeed combining them in multimodal models often improves performance. --- paper_title: Symbol interdependency in symbolic and embodied cognition paper_content: Whether computational algorithms such as latent semantic analysis (LSA) can both extract meaning from language and advance theories of human cognition has become a topic of debate in cognitive science, whereby accounts of symbolic cognition and embodied cognition are often contrasted. Albeit for different reasons, in both accounts the importance of statistical regularities in linguistic surface structure tends to be underestimated. The current article gives an overview of the symbolic and embodied cognition accounts and shows how meaning induction attributed to a specific statistical process or to activation of embodied representations should be attributed to language itself. Specifically, the performance of LSA can be attributed to the linguistic surface structure, more than special characteristics of the algorithm, and embodiment findings attributed to perceptual simulations can be explained by distributional linguistic information. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Grounded Cognition paper_content: Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. 
Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition. --- paper_title: Grounded Models of Semantic Representation paper_content: A popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment, despite ample evidence suggesting that language is grounded in perception and action. In this paper we present a comparative study of models that represent word meaning based on linguistic and perceptual data. Linguistic information is approximated by naturally occurring corpora and sensorimotor experience by feature norms (i.e., attributes native speakers consider important in describing the meaning of a word). The models differ in terms of the mechanisms by which they integrate the two modalities. Experimental results show that a closer correspondence to human data can be obtained by uncovering latent information shared among the textual and perceptual modalities rather than arriving at semantic knowledge by concatenating the two. --- paper_title: Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world paper_content: Following up on recent work on establishing a mapping between vector-based semantic embeddings of words and the visual representations of the corresponding objects from natural images, we first present a simple approach to cross-modal vector-based semantics for the task of zero-shot learning, in which an image of a previously unseen object is mapped to a linguistic representation denoting its word. We then introduce fast mapping, a challenging and more cognitively plausible variant of the zero-shot task, in which the learner is exposed to new objects and the corresponding words in very limited linguistic contexts. By combining prior linguistic and visual knowledge acquired about words and their objects, as well as exploiting the limited new evidence available, the learner must learn to associate new objects with words. Our results on this task pave the way to realistic simulations of how children or robots could use existing knowledge to bootstrap grounded semantic knowledge about new concepts. --- paper_title: Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics paper_content: We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images. --- paper_title: What Are You Talking About? Text-to-Image Coreference paper_content: In this paper we exploit natural sentential descriptions of RGB-D scenes in order to improve 3D semantic parsing. 
Importantly, in doing so, we reason about which particular object each noun/pronoun is referring to in the image. This allows us to utilize visual information in order to disambiguate the so-called coreference resolution problem that arises in text. Towards this goal, we propose a structure prediction model that exploits potentials computed from text and RGB-D imagery to reason about the class of the 3D objects, the scene type, as well as to align the nouns/pronouns with the referred visual objects. We demonstrate the effectiveness of our approach on the challenging NYU-RGBD v2 dataset, which we enrich with natural lingual descriptions. We show that our approach significantly improves 3D detection and scene classification accuracy, and is able to reliably estimate the text-to-image alignment. Furthermore, by using textual and visual information, we are also able to successfully deal with coreference in text, improving upon the state-of-the-art Stanford coreference system [15]. --- paper_title: Grounding Distributional Semantics in the Visual World paper_content: Distributional semantic models build vector-based word meaning representations on top of contextual information extracted from large collections of text. Object recognition methods from computer vision derive vector-based representations of visual content from natural images. This article reviews how methods from computer vision are exploited to tackle the fundamental problem of grounding distributional semantic models, bringing them closer to providing a full-fledged computational account of meaning. --- paper_title: Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning paper_content: Recently there has been a lot of interest in learning common representations for multiple views of data. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, $V_1$ and $V_2$) but parallel data is available between each of these views and a pivot view ($V_3$). We propose a model for learning a common representation for $V_1$, $V_2$ and $V_3$ using only the parallel data available between $V_1V_3$ and $V_2V_3$. The proposed model is generic and even works when there are $n$ views of interest and only one pivot view which acts as a bridge between them. There are two specific downstream applications that we focus on (i) transfer learning between languages $L_1$,$L_2$,...,$L_n$ using a pivot language $L$ and (ii) cross modal access between images and a language $L_1$ using a pivot language $L_2$. Our model achieves state-of-the-art performance in multilingual document classification on the publicly available multilingual TED corpus and promising results in multilingual multimodal retrieval on a new dataset created and released as a part of this work. --- paper_title: Everybody loves a rich cousin: An empirical study of transliteration through bridge languages paper_content: Most state of the art approaches for machine transliteration are data driven and require significant parallel names corpora between languages. As a result, developing transliteration functionality among n languages could be a resource intensive task requiring parallel names corpora in the order of nC2. 
In this paper, we explore ways of reducing this high resource requirement by leveraging the available parallel data between subsets of the n languages, transitively. We propose, and show empirically, that reasonable quality transliteration engines may be developed between two languages, X and Y, even when no direct parallel names data exists between them, but only transitively through language Z. Such systems alleviate the need for O(nC2) corpora, significantly. In addition we show that the performance of such transitive transliteration systems is in par with direct transliteration systems, in practical applications, such as CLIR systems. --- paper_title: Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora paper_content: We propose a semi-supervised model which segments and annotates images using very few labeled images and a large unaligned text corpus to relate image regions to text labels. Given photos of a sports event, all that is necessary to provide a pixel-level labeling of objects and background is a set of newspaper articles about this sport and one to five labeled images. Our model is motivated by the observation that words in text corpora share certain context and feature similarities with visual objects. We describe images using visual words, a new region-based representation. The proposed model is based on kernelized canonical correlation analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. Kernels are derived from context and adjective features inside the respective visual and textual domains. We apply our method to a challenging dataset and rely on articles of the New York Times for textual features. Our model outperforms the state-of-the-art in annotation. In segmentation it compares favorably with other methods that use significantly more labeled training data. --- paper_title: Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data paper_content: While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired imagesentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-sentence data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context. 
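Several of the works above coordinate modalities through (kernel) canonical correlation analysis. The following sketch shows the plain linear CCA variant with scikit-learn on synthetic paired visual/textual features; it illustrates the general idea of a shared correlated space rather than the kernelized, semi-supervised methods cited.

```python
# Illustrative sketch of CCA-based coordination of two modalities (much
# simplified relative to the kernel CCA approaches above): project paired
# visual and textual features into a shared space, then retrieve across
# modalities by cosine similarity in that space.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(3)
V = rng.normal(size=(300, 128))   # visual features (e.g., CNN activations)
T = rng.normal(size=(300, 50))    # textual features (e.g., word embeddings)

cca = CCA(n_components=5).fit(V, T)
V_c, T_c = cca.transform(V, T)    # both modalities in the shared CCA space

# Cross-modal retrieval: for the first image, rank all texts in shared space.
scores = cosine_similarity(V_c[:1], T_c)[0]
print(scores.argsort()[::-1][:5])  # indices of the top-5 matching texts
```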
--- paper_title: Improving Statistical Machine Translation for a Resource-Poor Language Using Related Resource-Rich Languages paper_content: We propose a novel language-independent approach for improving machine translation for resource-poor languages by exploiting their similarity to resource-rich ones. More precisely, we improve the translation from a resource-poor source language X1 into a resourcerich language Y given a bi-text containing a limited number of parallel sentences for X1-Y and a larger bi-text for X2-Y for some resource-rich language X2 that is closely related to X1. This is achieved by taking advantage of the opportunities that vocabulary overlap and similarities between the languages X1 and X2 in spelling, word order, and syntax offer: (1) we improve the word alignments for the resource-poor language, (2) we further augment it with additional translation options, and (3) we take care of potential spelling differences through appropriate transliteration. The evaluation for Indonesian → English using Malay and for Spanish → English using Portuguese and pretending Spanish is resource-poor shows an absolute gain of up to 1.35 and 3.37 BLEU points, respectively, which is an improvement over the best rivaling approaches, while using much less additional data. Overall, our method cuts the amount of necessary "real" training data by a factor of 2-5. ---
Title: Multimodal Machine Learning: A Survey and Taxonomy Section 1: INTRODUCTION Description 1: Provide an introduction to the concept of multimodal machine learning, its importance, and the core challenges faced by researchers in this field. Section 2: APPLICATIONS: A HISTORICAL PERSPECTIVE Description 2: Discuss the historical advancements and applications of multimodal machine learning, from early research to current trends in language and vision applications. Section 3: MULTIMODAL REPRESENTATIONS Description 3: Explain the concept of multimodal representations, distinguishing between joint and coordinated representations, and discuss various techniques for creating these representations. Section 4: TRANSLATION Description 4: Describe the process of translating information from one modality to another, including both example-based and generative approaches. Discuss the challenges and methods associated with these tasks. Section 5: ALIGNMENT Description 5: Define multimodal alignment and its significance. Categorize approaches into explicit and implicit alignment, and discuss the methods used for each type. Section 6: FUSION Description 6: Examine the methods used to combine information from multiple modalities for making predictions. Discuss model-agnostic and model-based approaches to multimodal fusion. Section 7: CO-LEARNING Description 7: Explore the concept of co-learning, where knowledge from one modality aids the modeling of another modality. Discuss approaches based on parallel, non-parallel, and hybrid data. Section 8: CONCLUSION Description 8: Summarize the key points of the survey, discuss the importance of understanding past achievements for future advancements, and highlight potential future research directions, especially in the area of co-learning.
An overview on analysis tools for software product lines
7
--- paper_title: Clafer tools for product line engineering paper_content: Clafer is a lightweight yet expressive language for structural modeling: feature modeling and configuration, class and object modeling, and metamodeling. Clafer Tools is an integrated set of tools based on Clafer. In this paper, we describe some product-line variability modeling scenarios of Clafer Tools from the viewpoints of product-line owner, product-line engineer, and product engineer. --- paper_title: Feature-Oriented Domain Analysis (FODA) Feasibility Study paper_content: Abstract : Successful Software reuse requires the systematic discovery and exploitation of commonality across related software systems. By examining related software systems and the underlying theory of the class of systems they represent, domain analysis can provide a generic description of the requirements of that class of systems and a set of approaches for their implementation. This report will establish methods for performing a domain analysis and describe the products of the domain analysis process. To illustrate the application of domain analysis to a representative class of software systems, this report will provide a domain analysis of window management system software. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. --- paper_title: Feature-Oriented Software Product Lines: Concepts and Implementation paper_content: While standardization has empowered the software industry to substantially scale software development and to provide affordable software to a broad market, it often does not address smaller market segments, nor the needs and wishes of individual customers. Software product lines reconcile mass production and standardization with mass customization in software engineering. Ideally, based on a set of reusable parts, a software manufacturer can generate a software product based on the requirements of its customer. The concept of features is central to achieving this level of automation, because features bridge the gap between the requirements the customer has and the functionality a product provides. Thus features are a central concept in all phases of product-line development. 
The authors take a developers viewpoint, focus on the development, maintenance, and implementation of product-line variability, and especially concentrate on automated product derivation based on a users feature selection. The book consists of three parts. Part I provides a general introduction to feature-oriented software product lines, describing the product-line approach and introducing the product-line development process with its two elements of domain and application engineering. The pivotal part II covers a wide variety of implementation techniques including design patterns, frameworks, components, feature-oriented programming, and aspect-oriented programming, as well as tool-based approaches including preprocessors, build systems, version-control systems, and virtual separation of concerns. Finally, part III is devoted to advanced topics related to feature-oriented product lines like refactoring, feature interaction, and analysis tools specific to product lines. In addition, an appendix lists various helpful tools for software product-line development, along with a description of how they relate to the topics covered in this book. To tie the book together, the authors use two running examples that are well documented in the product-line literature: data management for embedded systems, and variations of graph data structures. They start every chapter by explicitly stating the respective learning goals and finish it with a set of exercises; additional teaching material is also available online. All these features make the book ideally suited for teaching both for academic classes and for professionals interested in self-study. --- paper_title: Software Product Lines: Practices and Patterns paper_content: Foreword. Preface. Acknowledgements. Dedication. Reader's Guide. I. SOFTWARE PRODUCT LINE FUNDAMENTALS. 1. Basic Ideas and Terms. What Is a Software Product Line? What Software Product Lines Are Not. Fortuitous Small-Grained Reuse. Single-System Development with Reuse. Just Component-Based Development. Just a Reconfigurable Architecture. Releases and Versions of Single Products. Just a Set of Technical Standards. A Note on Terminology. For Further Reading. Discussion Questions. 2. Benefits. Organizational Benefits. Individual Benefits. Benefits versus Costs. For Further Reading. Discussion Questions. 3. The Three Essential Activities. What Are the Essential Activities? Core Asset Development. Product Development. Management. All Three Together. For Further Reading. Discussion Questions. II. SOFTWARE PRODUCT LINE PRACTICE AREAS. Describing the Practice Areas. Starting versus Running a Product Line. Organizing the Practice Areas. 4. Software Engineering Practice Areas. Architecture Definition. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Architecture Evaluation. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Component Development. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. COTS Utilization. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. 
Discussion Questions. Mining Existing Assets. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Requirements Engineering. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Software System Integration. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Testing. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Understanding Relevant Domains. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. 5. Technical Management Practice Areas. Configuration Management. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Data Collection, Metrics, and Tracking. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Make/Buy/Mine/Commission Analysis. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Process Definition. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Scoping. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Technical Planning. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Technical Risk Management. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Tool Support. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. 6. Organizational Management Practice Areas. Building a Business Case. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Customer Interface Management. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Developing an Acquisition Strategy. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Funding. Aspects Peculiar to Product Lines. Application to Core Asset Development. 
Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Launching and Institutionalizing. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Market Analysis. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Operations. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Organizational Planning. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Organizational Risk Management. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Structuring the Organization. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. Discussion Questions. Technology Forecasting. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. Training. Aspects Peculiar to Product Lines. Application to Core Asset Development. Application to Product Development. Specific Practices. Practice Risks. For Further Reading. Discussion Questions. III. PUTTING THE PRACTICE AREAS INTO ACTION. 7. Software Product Line Practice Patterns. The Value of Patterns. Software Product Line Practice Pattern Descriptions. The Curriculum Pattern. The Essentials Coverage Pattern. Each Asset Pattern. What to Build Pattern. Product Parts Pattern. Assembly Line Pattern. Monitor Pattern. Product Builder Pattern. Cold Start Pattern. In Motion Pattern. Process Pattern. Factory Pattern. Other Patterns. Practice Area Coverage. Discussion Questions. 8. Product Line Technical Probe. What Is the Product Line Technical Probe? Probe Interview Questions. Probe Participants. Probe Process. Using the Probe Results. Conducting a Mini Self-Probe. Discussion Questions. 9. Cummins Engine Company: Embracing the Future. Prologue. Company History. A Product Line of Engine Software. Getting off the Ground. An Organization Structured for Cooperation. Running the Product Line. Results. Lessons Learned. Epilogue. Practice Area Compendium. For Further Reading. Discussion Questions. 10. Control Channel Toolkit: A Software Product Line that Controls Satellites. Contextual Background. Organizational Profiles. Project History. Control Channels. Launching CCT. Developing a Business Case for CCT. Developing the Acquisition Strategy and Funding CCT. Structuring the CCT Organization. Organizational and Technical Planning. Operations. Engineering the CCT Core Assets. Domain Analysis. Architecture. Component Engineering. Testing: Application and Test Engineering. Sustainment Engineering: Product Line Evolution. Documentation. Managing the CCT Effort. Early Benefits from CCT. First CCT Product. Benefits beyond CCT Products. Lessons and Issues. Tool Support Is Inadequate. Domain Analysis Documentation Is Important. An Early Architecture Focus Is Best. Product Builders Need More Support. 
CCT Users Need Reuse Metrics. It Pays to Be Flexible, and Cross-Unit Teams Work. A Real Product Is a Benefit. Summary. For Further Reading. Discussion Questions. 11. Successful Software product Line Development in Small Organization. Introduction. The Early Years. The MERGER Software Product Line. Market Maker Software Product Line Practices. Architecture Definition. Component Development. Structuring (and Staffing) the Organization. Testing. Data Collection and Metrics. Launching and Institutionalizing the Product Line. Understanding the Market. Technology Forecasting. A Few Observations. Effects of Company Culture. Cost Issues. The Customer Paradox. Tool Support. Lessons Learned. Drawbacks. Conclusions: Software Product Lines in Small Organizations. For Further Reading. Discussion Questions. 12. Conclusions: Practices, Patterns and Payoffs. The Practices. The Patterns. The Success Factors. The Payoff. Finale. Glossary. Bibliography. Index. --- paper_title: Automated analysis of feature models 20 years later: A literature review paper_content: Software product line engineering is about producing a set of related products that share more commonalities than variabilities. Feature models are widely used for variability and commonality management in software product lines. Feature models are information models where a set of products are represented as a set of features in a single model. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. The literature on this topic has contributed with a set of operations, techniques, tools and empirical results which have not been surveyed until now. This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after of their invention. This paper contributes by bringing together previously disparate streams of work to help shed light on this thriving area. We also present a conceptual framework to understand the different proposals as well as categorise future contributions. We finally discuss the different studies and propose some challenges to be faced in the future. --- paper_title: Delta-oriented programming of software product lines paper_content: Feature-oriented programming (FOP) implements software product lines by composition of feature modules. It relies on the principles of stepwise development. Feature modules are intended to refer to exactly one product feature and can only extend existing implementations. To provide more flexibility for implementing software product lines, we propose delta-oriented programming (DOP) as a novel programming language approach. A product line is represented by a core module and a set of delta modules. The core module provides an implementation of a valid product that can be developed with well-established single application engineering techniques. Delta modules specify changes to be applied to the core module to implement further products by adding, modifying and removing code. Application conditions attached to delta modules allow handling combinations of features explicitly. A product implementation for a particular feature configuration is generated by applying incrementally all delta modules with valid application condition to the core module. In order to evaluate the potential of DOP, we compare it to FOP, both conceptually and empirically. 
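The automated feature-model analyses discussed in the entries above (such as counting the valid products of a model or detecting dead features) can be illustrated with a small, self-contained sketch. The feature model below (Base, Cache, Encrypt, Log, with invented requires/excludes constraints) is purely hypothetical, and the brute-force enumeration stands in for the SAT- and CSP-based reasoning that real analyzers use; this is a conceptual sketch, not any tool's actual implementation.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical feature model used only for illustration. */
enum { BASE, CACHE, ENCRYPT, LOG, NUM_FEATURES };
static const char *names[NUM_FEATURES] = { "Base", "Cache", "Encrypt", "Log" };

/* Toy constraints: Base is mandatory, Encrypt requires Cache and Log,
 * and Log excludes Cache (which makes Encrypt a dead feature). */
static bool is_valid(unsigned cfg)
{
    bool base    = cfg & (1u << BASE);
    bool cache   = cfg & (1u << CACHE);
    bool encrypt = cfg & (1u << ENCRYPT);
    bool logging = cfg & (1u << LOG);

    if (!base) return false;                        /* mandatory root feature */
    if (encrypt && !(cache && logging)) return false; /* requires-constraints */
    if (logging && cache) return false;             /* excludes-constraint    */
    return true;
}

int main(void)
{
    unsigned valid = 0, seen[NUM_FEATURES] = { 0 };

    for (unsigned cfg = 0; cfg < (1u << NUM_FEATURES); cfg++) {
        if (!is_valid(cfg)) continue;
        valid++;
        for (int f = 0; f < NUM_FEATURES; f++)
            if (cfg & (1u << f)) seen[f]++;
    }

    printf("valid products: %u of %u\n", valid, 1u << NUM_FEATURES);
    for (int f = 0; f < NUM_FEATURES; f++)
        if (seen[f] == 0)
            printf("dead feature: %s (selectable in no valid product)\n", names[f]);
    return 0;
}
```

For a handful of features, exhaustive enumeration like this is fine; the cited work exists precisely because realistic models make enumeration infeasible and require solver-backed analyses.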
--- paper_title: Feature-Oriented Domain Analysis (FODA) Feasibility Study paper_content: Abstract : Successful Software reuse requires the systematic discovery and exploitation of commonality across related software systems. By examining related software systems and the underlying theory of the class of systems they represent, domain analysis can provide a generic description of the requirements of that class of systems and a set of approaches for their implementation. This report will establish methods for performing a domain analysis and describe the products of the domain analysis process. To illustrate the application of domain analysis to a representative class of software systems, this report will provide a domain analysis of window management system software. --- paper_title: Feature-Oriented Programming: A Fresh Look at Objects paper_content: We propose a new model for flexible composition of objects from a set of features. Features are similar to (abstract) subclasses, but only provide the core functionality of a (sub)class. Overwriting other methods is viewed as resolving feature interactions and is specified separately for two features at a time. This programming model allows to compose features (almost) freely in a way which generalizes inheritance and aggregation. For a set of n features, an exponential number of different feature combinations is possible, assuming a quadratic number of interaction resolutions. We present the feature model as an extension of Java and give two translations to Java, one via inheritance and the other via aggregation. We further discuss parameterized features, which work nicely with our feature model and can be translated into Pizza, an extension of Java. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. --- paper_title: Feature-Oriented Software Product Lines: Concepts and Implementation paper_content: While standardization has empowered the software industry to substantially scale software development and to provide affordable software to a broad market, it often does not address smaller market segments, nor the needs and wishes of individual customers. 
Software product lines reconcile mass production and standardization with mass customization in software engineering. Ideally, based on a set of reusable parts, a software manufacturer can generate a software product based on the requirements of its customer. The concept of features is central to achieving this level of automation, because features bridge the gap between the requirements the customer has and the functionality a product provides. Thus features are a central concept in all phases of product-line development. The authors take a developer's viewpoint, focus on the development, maintenance, and implementation of product-line variability, and especially concentrate on automated product derivation based on a user's feature selection. The book consists of three parts. Part I provides a general introduction to feature-oriented software product lines, describing the product-line approach and introducing the product-line development process with its two elements of domain and application engineering. The pivotal part II covers a wide variety of implementation techniques including design patterns, frameworks, components, feature-oriented programming, and aspect-oriented programming, as well as tool-based approaches including preprocessors, build systems, version-control systems, and virtual separation of concerns. Finally, part III is devoted to advanced topics related to feature-oriented product lines like refactoring, feature interaction, and analysis tools specific to product lines. In addition, an appendix lists various helpful tools for software product-line development, along with a description of how they relate to the topics covered in this book. To tie the book together, the authors use two running examples that are well documented in the product-line literature: data management for embedded systems, and variations of graph data structures. They start every chapter by explicitly stating the respective learning goals and finish it with a set of exercises; additional teaching material is also available online. All these features make the book ideally suited for teaching both for academic classes and for professionals interested in self-study. --- paper_title: Aspect-oriented programming paper_content: We have found many programming problems for which neither procedural nor object-oriented programming techniques are sufficient to clearly capture some of the important design decisions the program must implement. This forces the implementation of those design decisions to be scattered throughout the code, resulting in “tangled” code that is excessively difficult to develop and maintain. We present an analysis of why certain design decisions have been so difficult to clearly capture in actual code. We call the properties these decisions address aspects, and show that the reason they have been hard to capture is that they cross-cut the system's basic functionality. We present the basis for a new programming technique, called aspect-oriented programming, that makes it possible to clearly express programs involving such aspects, including appropriate isolation, composition and reuse of the aspect code. The discussion is rooted in systems we have built using aspect-oriented programming. --- paper_title: Virtual Separation of Concerns -- A Second Chance for Preprocessors paper_content: Conditional compilation with preprocessors like cpp is a simple but effective means to implement variability.
By annotating code fragments with #ifdef and #endif directives, different program variants with or without these fragments can be created, which can be used (among others) to implement software product lines. Although preprocessors are frequently used in practice, they are often criticized for their negative effect on code quality and maintainability. In contrast to modularized implementations, for example using components or aspects, preprocessors neglect separation of concerns, are prone to introduce subtle errors, can entirely obfuscate the source code, and limit reuse. Our aim is to rehabilitate the preprocessor by showing how simple tool support can address these problems and emulate some benefits of modularized implementations. At the same time we emphasize unique benefits of preprocessors, like simplicity and language independence. Although we do not have a definitive answer on how to implement variability, we want to highlight opportunities to improve preprocessors and encourage research toward novel preprocessor-based approaches. --- paper_title: Reducing Configurations to Monitor in a Software Product Line paper_content: A software product line is a family of programs where each program is defined by a unique combination of features. Product lines, like conventional programs, can be checked for safety properties through execution monitoring. However, because a product line induces a number of programs that is potentially exponential in the number of features, it would be very expensive to use existing monitoring techniques: one would have to apply those techniques to every single program. Doing so would also be wasteful because many programs can provably never violate the stated property. We introduce a monitoring technique dedicated to product lines that, given a safety property, statically determines the feature combinations that cannot possibly violate the property, thus reducing the number of programs to monitor. Experiments show that our technique is effective, particularly for safety properties that crosscut many optional features. --- paper_title: Model composition in product lines and feature interaction detection using critical pair analysis paper_content: Software product lines (SPL) are an established technology for developing families of systems. In particular, they focus on modeling commonality and variability, that is, they are based on identifying features common to all members of the family and variable features that appear only in some members. Model-based development methods for product lines advocate the construction of SPL requirements, analysis and design models for features. This paper describes an approach for maintaining feature separation during modeling using a UML composition language based on graph transformations. This allows models of features to be reused more easily. The language can be used to compose the SPL models for a given set of features. Furthermore, critical pair analysis is used to detect dependencies and conflicts between features during analysis and design modeling. The approach is supported by a tool that allows automated composition of UML models of features and detection of some kinds of feature interactions. --- paper_title: ASADAL: a tool system for co-development of software and test environment based on product line engineering paper_content: Recently, product line software engineering (PLSE) is gaining popularity.
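The cpp-based variability described in the Virtual Separation of Concerns entry above is easiest to see in a miniature example. The product line below (a toy stack with optional bounds checking and logging) and its feature macros are hypothetical; the sketch only illustrates how #ifdef annotations turn one code base into several compile-time variants.

```c
#include <stdio.h>

/* Hypothetical product line: a tiny stack with two optional features.
 * Variants are derived at compile time, e.g.:
 *   gcc -DFEATURE_SAFE -DFEATURE_LOG stack.c   (all features)
 *   gcc stack.c                                (base variant)      */
#define MAX 16
static int data[MAX];
static int top = 0;

void push(int v)
{
#ifdef FEATURE_SAFE            /* optional bounds checking */
    if (top == MAX) {
        fprintf(stderr, "stack overflow\n");
        return;
    }
#endif
#ifdef FEATURE_LOG             /* optional logging */
    printf("push(%d)\n", v);
#endif
    data[top++] = v;
}

int main(void)
{
    push(1);
    push(2);
    printf("top element: %d\n", data[top - 1]);
    return 0;
}
```

The tool support discussed in that entry (views, consistency checks, coloring) operates on exactly this kind of annotated code rather than on physically separated modules.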
To employ PLSE methods, many organizations are looking for a tool system that supports PLSE methods so that core assets and target software can be developed and tested in an effective and systematic way. ASADAL (A System Analysis and Design Aid tooL) supports the entire lifecycle of software development process based on a PLSE method called FORM (Feature-Oriented Reuse Method) [6]. It supports domain analysis, architecture and component design, code generation, and simulation-based verification and validation (V&V). Using the tool, users may co-develop target software and its test environment and verify software in a continuous and incremental way. --- paper_title: Integration Testing of Software Product Lines Using Compositional Symbolic Execution paper_content: Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse shared assets and reduce test effort. The use of feature dependence graphs has also been employed to reduce testing effort, but little work has focused on code level analysis of dataflow between features. In this paper we present a compositional symbolic execution technique that works in concert with a feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically analyze feature interactions. We experiment with two product lines and determine that our technique can reduce the overall number of interactions that must be considered during testing, and requires less time to run than a traditional symbolic execution technique. --- paper_title: PLEDGE: a product line editor and test generation tool paper_content: Specific requirements of clients lead to the development of variants of the same software. These variants form a Software Product Line (SPL). Ideally, testing an SPL involves testing all the software products that can be configured through the combination of features. This, however, is intractable in practice since a) large SPLs can lead to millions of possible software variants and b) the testing process is usually limited by budget and time constraints. To overcome this problem, this paper introduces PLEDGE, an open source tool that selects and prioritizes the product configurations maximizing the feature interactions covered. The uniqueness of PLEDGE is that it bypasses the computation of the feature interactions, allowing it to scale to large SPLs. --- paper_title: A systematic mapping study of software product lines testing paper_content: Context: In software development, Testing is an important mechanism both to identify defects and assure that completed products work as specified. This is a common practice in single-system development, and continues to hold in Software Product Lines (SPL). Even though extensive research has been done in the SPL Testing field, it is necessary to assess the current state of research and practice, in order to provide practitioners with evidence that enable fostering its further development. Objective: This paper focuses on Testing in SPL and has the following goals: investigate state-of-the-art testing practices, synthesize available evidence, and identify gaps between required techniques and existing approaches, available in the literature.
Method: A systematic mapping study was conducted with a set of nine research questions, in which 120 studies, dated from 1993 to 2009, were evaluated. Results: Although several aspects regarding testing have been covered by single-system development approaches, many cannot be directly applied in the SPL context due to specific issues. In addition, particular aspects regarding SPL are not covered by the existing SPL approaches, and when the aspects are covered, the literature just gives brief overviews. This scenario indicates that additional investigation, empirical and practical, should be performed. Conclusion: The results can help to understand the needs in SPL Testing, by identifying points that still require additional investigation, since important aspects regarding particular points of software product lines have not been addressed yet. --- paper_title: Exploring variability-aware execution for testing plugin-based web applications paper_content: In plugin-based systems, plugin conflicts may occur when two or more plugins interfere with one another, changing their expected behaviors. It is highly challenging to detect plugin conflicts due to the exponential explosion of the combinations of plugins (i.e., configurations). In this paper, we address the challenge of executing a test case over many configurations. Leveraging the fact that many executions of a test are similar, our variability-aware execution runs common code once. Only when encountering values that are different depending on specific configurations will the execution split to run for each of them. To evaluate the scalability of variability-aware execution on a large real-world setting, we built a prototype PHP interpreter called Varex and ran it on the popular WordPress blogging Web application. The results show that while plugin interactions exist, there is a significant amount of sharing that allows variability-aware execution to scale to 2^50 configurations within seven minutes of running time. During our study, with Varex, we were able to detect two plugin conflicts: one was recently reported on WordPress forum and another one was not previously discovered. --- paper_title: Feature integration using a feature construct paper_content: A feature is a unit of functionality that may be added to (or omitted from) a system. Examples of features are plug-ins for software packages or additional services offered by telecommunications providers. Many features override the default behaviour of the system, which may lead to unforeseen behaviour of the system; this is known as feature interaction. We propose a feature construct for defining features, and use it to provide a plug-and-play framework for exploring feature interactions. Our approach to the feature interaction problem has the following characteristics: • Features are treated as first-class objects during the development phase. • A method is given for integrating a feature into a system description. It allows features to override existing behaviour of the system being developed. • A prototype tool has been developed for performing the integration. • Interactions between features may be witnessed. In principle, our approach is quite general and need not be tied to any particular system description language. In this paper, however, we develop the approach in the context of the SMV model checking system. We describe two case studies in detail: a lift system and a telephone system.
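The feature-interaction problem described in the feature-construct entry directly above can be made concrete with a hypothetical two-feature example: each feature behaves as intended in isolation, but their naive combination is undesirable, so the combined case has to be resolved explicitly. The feature names and "encryption" are invented for illustration only.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical interaction: FEATURE_LOG alone prints the message,
 * FEATURE_ENCRYPT alone scrambles it, but naively combining them would
 * log the plaintext and defeat encryption.  The combined case is
 * therefore resolved explicitly (log only the ciphertext length).     */
static void send(char *msg)
{
    (void)msg;  /* msg may be unused in the base variant */

#ifdef FEATURE_ENCRYPT
    for (size_t i = 0; i < strlen(msg); i++)   /* toy "encryption" */
        msg[i] ^= 0x5A;
#endif

#if defined(FEATURE_LOG) && !defined(FEATURE_ENCRYPT)
    printf("log: sending '%s'\n", msg);        /* plaintext is fine here */
#elif defined(FEATURE_LOG) && defined(FEATURE_ENCRYPT)
    printf("log: sending %zu encrypted bytes\n", strlen(msg)); /* resolution */
#endif

    /* ... actual transmission omitted ... */
}

int main(void)
{
    char msg[] = "hello";
    send(msg);
    return 0;
}
```

Detecting such cases automatically, rather than hand-writing the combined branch, is what the interaction-detection and model-checking approaches cited here aim for.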
--- paper_title: Incremental Test Generation for Software Product Lines paper_content: Recent advances in mechanical techniques for systematic testing have increased our ability to automatically find subtle bugs, and hence, to deploy more dependable software. This paper builds on one such systematic technique, scope-bounded testing, to develop a novel specification-based approach for efficiently generating tests for products in a software product line. Given properties of features as first-order logic formulas in Alloy, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that an incremental approach can provide an order of magnitude speedup over conventional techniques. We also present a further optimization using dedicated integer constraint solvers for feature properties that introduce integer constraints, and show how to use a combination of solvers in tandem for solving Alloy formulas. --- paper_title: Strategies for product-line verification: Case studies and experiments paper_content: Product-line technology is increasingly used in mission-critical and safety-critical applications. Hence, researchers are developing verification approaches that follow different strategies to cope with the specific properties of product lines. While the research community is discussing the mutual strengths and weaknesses of the different strategies - mostly at a conceptual level - there is a lack of evidence in terms of case studies, tool implementations, and experiments. We have collected and prepared six product lines as subject systems for experimentation. Furthermore, we have developed a model-checking tool chain for C-based and Java-based product lines, called SPLverifier, which we use to compare sample-based and family-based strategies with regard to verification performance and the ability to find defects. Based on the experimental results and an analytical model, we revisit the discussion of the strengths and weaknesses of product-line-verification strategies. --- paper_title: A product line based aspect-oriented generative unit testing approach to building quality components paper_content: The quality of component-based systems highly depends on how effectively testing is carried out. To achieve the maximal testing effectiveness, this paper presents a product line based aspect oriented approach to unit testing. The aspect product line facilitates the automatic creation of aspect test cases that deal with specific quality requirements. An expandable repository of reusable aspect test cases has been developed. A prototype tool is built to verify and lever up the approach. --- paper_title: Avoiding redundant testing in application engineering paper_content: Many software product line testing techniques have been presented in the literature. The majority of those techniques address how to define reusable test assets (such as test models or test scenarios) in domain engineering and how to exploit those assets during application engineering. In addition to test case reuse however, the execution of test cases constitutes one important activity during application testing. 
Without systematic support for the test execution in application engineering, while considering the specifics of product lines, product line artifacts might be tested redundantly. Redundant testing in application engineering, however, can lead to an increased testing effort without increasing the chance of uncovering failures. In this paper, we propose the model-based ScenTED-DF technique to avoid redundant testing in application engineering. Our technique builds on data flow-based testing techniques for single systems and adapts and extends those techniques to consider product line variability. The paper sketches the prototypical implementation of our technique to show its general feasibility and automation potential, and it describes the results of experiments using an academic product line to demonstrate that ScenTED-DF is capable of avoiding redundant tests in application engineering. --- paper_title: Evolutionary Search-based Test Generation for Software Product Line Feature Models paper_content: Product line-based software engineering is a paradigm that models the commonalities and variabilities of different applications of a given domain of interest within a unique framework and enhances rapid and low cost development of new applications based on reuse engineering principles. Despite the numerous advantages of software product lines, it is quite challenging to comprehensively test them. This is due to the fact that a product line can potentially represent many different applications; therefore, testing a single product line requires the test of its various applications. Theoretically, a product line with n software features can be a source for the development of 2^n applications. This requires the test of 2^n applications if a brute-force comprehensive testing strategy is adopted. In this paper, we propose an evolutionary testing approach based on Genetic Algorithms to explore the configuration space of a software product line feature model in order to automatically generate test suites. We will show through the use of several publicly-available product line feature models that the proposed approach is able to generate test suites of O(n) size complexity as opposed to O(2^n) while at the same time forming a suitable tradeoff balance between error coverage and feature coverage in its generated test suites. --- paper_title: A survey of combinatorial testing paper_content: Combinatorial Testing (CT) can detect failures triggered by interactions of parameters in the Software Under Test (SUT) with a covering array test suite generated by some sampling mechanisms. It has been an active field of research in the last twenty years. This article aims to review previous work on CT, highlight the evolution of CT, and identify important issues, methods, and applications of CT, with the goal of supporting and directing future practice and research in this area. First, we present the basic concepts and notations of CT. Second, we classify the research on CT into the following categories: modeling for CT, test suite generation, constraints, failure diagnosis, prioritization, metric, evaluation, testing procedure and the application of CT. For each of the categories, we survey the motivation, key issues, solutions, and the current state of research. Then, we review the contribution from different research groups, and present the growing trend of CT research.
Finally, we recommend directions for future CT research, including: (1) modeling for CT, (2) improving the existing test suite generation algorithm, (3) improving analysis of testing result, (4) exploring the application of CT to different levels of testing and additional types of systems, (5) conducting more empirical studies to fully understand limitations and strengths of CT, and (6) combining CT with other testing techniques. --- paper_title: On strategies for testing software product lines: A systematic literature review paper_content: Context: Testing plays an important role in the quality assurance process for software product line engineering. There are many opportunities for economies of scope and scale in the testing activities, but techniques that can take advantage of these opportunities are still needed. Objective: The objective of this study is to identify testing strategies that have the potential to achieve these economies, and to provide a synthesis of available research on SPL testing strategies, to be applied towards reaching higher defect detection rates and reduced quality assurance effort. Method: We performed a literature review of two hundred seventy-six studies published from the year 1998 up to the 1st semester of 2013. We used several filters to focus the review on the most relevant studies and we give detailed analyses of the core set of studies. Results: The analysis of the reported strategies comprised two fundamental aspects for software product line testing: the selection of products for testing, and the actual test of products. Our findings indicate that the literature offers a large number of techniques to cope with such aspects. However, there is a lack of reports on realistic industrial experiences, which limits the inferences that can be drawn. Conclusion: This study showed a number of leveraged strategies that can support both the selection of products, and the actual testing of products. Future research should also benefit from the problems and advantages identified in this study. --- paper_title: Automated and Scalable T-wise Test Case Generation Strategies for Software Product Lines paper_content: Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability across their features. This leads to combinatorial explosion of the number of derivable products. Exhaustive testing in such a large space of products is infeasible. One possible option is to test SPLs by generating test cases that cover all possible T feature interactions (T-wise). T-wise dramatically reduces the number of test products while ensuring reasonable SPL coverage. However, automatic generation of test cases satisfying T-wise using SAT solvers raises two issues. The encoding of SPL models and T-wise criteria into a set of formulas acceptable by the solver and their satisfaction which fails when processed ``all-at-once''. We propose a scalable toolset using Alloy to automatically generate test cases satisfying T-wise from SPL models. We define strategies to split T-wise combinations into solvable subsets. We design and compute metrics to evaluate strategies on Aspect OPTIMA, a concrete transactional SPL. --- paper_title: MoSo-PoLiTe: tool support for pairwise and model-based software product line testing paper_content: Testing Software Product Lines is a very challenging task and approaches like combinatorial testing and model-based testing are frequently used to reduce the effort of testing Software Product Lines and to reuse test artifacts. 
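The pairwise (2-wise) sampling underlying several of the approaches above can be sketched with a naive greedy generator. The example assumes a handful of independent Boolean features and ignores feature-model constraints, which the cited tools do handle (typically with SAT or constraint solvers); it is only meant to show why far fewer than 2^N configurations suffice to cover every pair of feature values.

```c
#include <stdio.h>
#include <stdbool.h>

#define N 5                       /* number of Boolean features      */
#define CONFIGS (1u << N)         /* 2^N possible configurations     */

/* covered[i][j][a][b] == true once some selected configuration assigns
 * value a to feature i and value b to feature j (for i < j).          */
static bool covered[N][N][2][2];

static int uncovered_pairs(unsigned cfg)
{
    int count = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            int a = (cfg >> i) & 1, b = (cfg >> j) & 1;
            if (!covered[i][j][a][b]) count++;
        }
    return count;
}

static void mark(unsigned cfg)
{
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            covered[i][j][(cfg >> i) & 1][(cfg >> j) & 1] = true;
}

int main(void)
{
    int selected = 0;
    for (;;) {
        unsigned best = 0;
        int gain = 0;
        /* Greedily pick the configuration covering most uncovered pairs. */
        for (unsigned cfg = 0; cfg < CONFIGS; cfg++) {
            int g = uncovered_pairs(cfg);
            if (g > gain) { gain = g; best = cfg; }
        }
        if (gain == 0) break;     /* every feature-value pair is covered */
        mark(best);
        selected++;
        printf("config %d: 0x%02x (covers %d new pairs)\n", selected, best, gain);
    }
    printf("%d of %u configurations suffice for pairwise coverage\n",
           selected, CONFIGS);
    return 0;
}
```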
In this contribution we present a tool chain realizing our MoSo-PoLiTe concept combining combinatorial and model-based testing. Our tool chain contains a pairwise configuration selection component on the basis of a feature model. This component implements an heuristic finding a minimal subset of configurations covering 100% pairwise interaction. Additionally, our tool chain allows the model-based test case generation for each configuration within this generated subset. This tool chain is based on commercial tools since it was developed within industrial cooperations. A non-commercial implementation of pairwise configuration selection is available and an integration with an Open Source model-based testing tool is under development. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. --- paper_title: Coevolution of variability models and related artifacts: a case study from the Linux kernel paper_content: Variability-aware systems are subject to the coevolution of variability models and related artifacts. Surprisingly, little knowledge exists to understand such coevolution in practice. This shortage is directly reflected in existing approaches and tools for variability management, as they fail to provide effective support for such a coevolution. To understand how variability models and related artifacts coevolve in a large and complex real-world variability-aware system, we inspect over 500 Linux kernel commits spanning almost four years of development. We collect a catalog of evolution patterns, capturing the coevolution of the Linux kernel variability model, Makefiles, and C source code. Further, we extract general findings to guide further research and tool development. --- paper_title: Feature consistency in compile-time-configurable system software: facing the linux 10,000 feature problem paper_content: Much system software can be configured at compile time to tailor it with respect to a broad range of supported hardware architectures and application domains. A good example is the Linux kernel, which provides more than 10,000 configurable features, growing rapidly. From the maintenance point of view, compile-time configurability imposes big challenges. 
The configuration model (the selectable features and their constraints as presented to the user) and the configurability that is actually implemented in the code have to be kept in sync, which, if performed manually, is a tedious and error-prone task. In the case of Linux, this has led to numerous defects in the source code, many of which are actual bugs. We suggest an approach to automatically check for configurability-related implementation defects in large-scale configurable system software. The configurability is extracted from its various implementation sources and examined for inconsistencies, which manifest in seemingly conditional code that is in fact unconditional. We evaluate our approach with the latest version of Linux, for which our tool detects 1,776 configurability defects, which manifest as dead/superfluous source code and bugs. Our findings have led to numerous source-code improvements and bug fixes in Linux: 123 patches (49 merged) fix 364 defects, 147 of which have been confirmed by the corresponding Linux developers and 20 as fixing a new bug. --- paper_title: Reducing combinatorics in testing product lines paper_content: A Software Product Line (SPL) is a family of programs where each program is defined by a unique combination of features. Testing or checking properties of an SPL is hard as it may require the examination of a combinatorial number of programs. In reality, however, features are often irrelevant for a given test - they augment, but do not change, existing behavior, making many feature combinations unnecessary as far as testing is concerned. In this paper we show how to reduce the amount of effort in testing an SPL. We represent an SPL in a form where conventional static program analysis techniques can be applied to find irrelevant features for a test. We use this information to reduce the combinatorial number of SPL programs to examine. --- paper_title: Configuration coverage in the analysis of large-scale system software paper_content: System software, especially operating systems, tends to be highly configurable. Like every complex piece of software, a considerable amount of bugs in the implementation has to be expected. In order to improve the general code quality, tools for static analysis provide means to check for source code defects without having to run actual test cases on real hardware. Still, for proper type checking a specific configuration is required so that all header include paths are available and all types are properly resolved. In order to find as many bugs as possible, usually a "full configuration" is used for the check. However, mainly because of alternative blocks in form of #else-blocks, a single configuration is insufficient to achieve full coverage. In this paper, we present a metric for configuration coverage (CC) and explain the challenges for (properly) calculating it. Furthermore, we present an efficient approach for determining a sufficiently small set of configurations that achieve (nearly) full coverage and evaluate it on a recent Linux kernel version. --- paper_title: PACOGEN: Automatic Generation of Pairwise Test Configurations from Feature Models paper_content: Feature models are commonly used to specify variability in software product lines. Several tools support feature models for variability management at different steps in the development process. However, tool support for test configuration generation is currently limited. 
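The configurability defects targeted by the feature-consistency and configuration-coverage work above come down to code whose presence condition can never be satisfied by any valid configuration, or to branches that a single "maximal" configuration never exercises. The CONFIG_* symbols below are invented, not real Linux options; the snippet is only a miniature of the problem.

```c
#include <stdio.h>

/* Hypothetical Kconfig-style model (not real Linux options):
 *   CONFIG_NET and CONFIG_TINY are declared mutually exclusive,
 *   and CONFIG_OLD_API was removed from the model long ago.       */

void init(void)
{
#if defined(CONFIG_NET) && defined(CONFIG_TINY)
    /* Dead: the model never allows both options together, so no
     * configuration can ever compile this block ("zombie" code).  */
    printf("network stack in tiny mode\n");
#endif

#ifdef CONFIG_OLD_API
    /* Also dead: the option no longer exists in the model, so the
     * symbol is never defined by any generated configuration.     */
    printf("legacy initialisation\n");
#endif

#ifdef CONFIG_NET
    printf("network stack enabled\n");
#else
    /* A single "all-yes" configuration never exercises this #else
     * branch, which is why full coverage needs several configs.   */
    printf("network stack disabled\n");
#endif
}

int main(void) { init(); return 0; }
```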
This test generation task consists of systematically selecting a set of configurations that represent a relevant sample of the variability space and that can be used to test the product line. In this paper we propose the PACOGEN tool to analyze feature models and automatically generate a set of configurations that cover all pairwise interactions between features. PACOGEN relies on constraint programming to generate configurations that satisfy all constraints imposed by the feature model and to minimize the set of test configurations. This work also proposes an extensive experiment, based on the state-of-the-art SPLOT feature models repository, showing that PACOGEN scales over variability spaces with millions of configurations and covers pairwise interactions with fewer configurations than other available tools. --- paper_title: Feature-Oriented Software Product Lines: Concepts and Implementation paper_content: While standardization has empowered the software industry to substantially scale software development and to provide affordable software to a broad market, it often does not address smaller market segments, nor the needs and wishes of individual customers. Software product lines reconcile mass production and standardization with mass customization in software engineering. Ideally, based on a set of reusable parts, a software manufacturer can generate a software product based on the requirements of its customer. The concept of features is central to achieving this level of automation, because features bridge the gap between the requirements the customer has and the functionality a product provides. Thus features are a central concept in all phases of product-line development. The authors take a developer's viewpoint, focus on the development, maintenance, and implementation of product-line variability, and especially concentrate on automated product derivation based on a user's feature selection. The book consists of three parts. Part I provides a general introduction to feature-oriented software product lines, describing the product-line approach and introducing the product-line development process with its two elements of domain and application engineering. The pivotal part II covers a wide variety of implementation techniques including design patterns, frameworks, components, feature-oriented programming, and aspect-oriented programming, as well as tool-based approaches including preprocessors, build systems, version-control systems, and virtual separation of concerns. Finally, part III is devoted to advanced topics related to feature-oriented product lines like refactoring, feature interaction, and analysis tools specific to product lines. In addition, an appendix lists various helpful tools for software product-line development, along with a description of how they relate to the topics covered in this book. To tie the book together, the authors use two running examples that are well documented in the product-line literature: data management for embedded systems, and variations of graph data structures. They start every chapter by explicitly stating the respective learning goals and finish it with a set of exercises; additional teaching material is also available online. All these features make the book ideally suited for teaching both for academic classes and for professionals interested in self-study.
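A simple way to see the idea behind the "Reducing combinatorics in testing product lines" entry above is that a test whose execution never consults a feature cannot behave differently for that feature's other value, so re-running it for those configurations is redundant. The sketch below records consulted features dynamically at run time, whereas the cited paper determines irrelevant features statically; the features and code under test are hypothetical.

```c
#include <stdio.h>
#include <stdbool.h>

/* Features are runtime flags here so one binary can emulate several
 * configurations; 'touched' records which flags a test actually read. */
enum { F_CACHE, F_LOG, NUM_F };
static bool flag[NUM_F];
static bool touched[NUM_F];

static bool feature(int f) { touched[f] = true; return flag[f]; }

/* Code under test: only consults F_CACHE, never F_LOG. */
static int lookup(int key)
{
    if (feature(F_CACHE))
        return key;          /* pretend cache hit */
    return key * 2;          /* pretend recomputation */
}

static void run_test(void)
{
    int r = lookup(21);
    if (!(r == 21 || r == 42))
        printf("test failed\n");
}

int main(void)
{
    /* Naively the test runs for all 2^2 configurations ...            */
    for (unsigned cfg = 0; cfg < (1u << NUM_F); cfg++) {
        for (int f = 0; f < NUM_F; f++) {
            flag[f] = (cfg >> f) & 1;
            touched[f] = false;
        }
        run_test();
        printf("config %u: test touched", cfg);
        for (int f = 0; f < NUM_F; f++)
            if (touched[f]) printf(" F%d", f);
        printf("\n");
        /* ... but configurations differing only in untouched features
         * (here F_LOG) are guaranteed to behave identically and could
         * be skipped, halving the effort in this toy example.          */
    }
    return 0;
}
```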
--- paper_title: Testing Software Product Lines Using Incremental Test Generation paper_content: We present a novel specification-based approach for generating tests for products in a software product line. Given properties of features as first-order logic formulas, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that incremental approach can provide an order of magnitude speed-up over conventional techniques. --- paper_title: Types and Programming Languages paper_content: A type system is a syntactic method for automatically checking the absence of certain erroneous behaviors by classifying program phrases according to the kinds of values they compute. The study of type systems -- and of programming languages from a type-theoretic perspective -- has important applications in software engineering, language design, high-performance compilers, and security.This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages. The approach is pragmatic and operational; each new concept is motivated by programming examples and the more theoretical sections are driven by the needs of implementations. Each chapter is accompanied by numerous exercises and solutions, as well as a running implementation, available via the Web. Dependencies between chapters are explicitly identified, allowing readers to choose a variety of paths through the material.The core topics include the untyped lambda-calculus, simple type systems, type reconstruction, universal and existential polymorphism, subtyping, bounded quantification, recursive types, kinds, and type operators. Extended case studies develop a variety of approaches to modeling the features of object-oriented languages. --- paper_title: Modeling and Model Checking Software Product Lines paper_content: Software product line engineering combines the individual developments of systems to the development of a family of systems consisting of common and variable assets.In this paper we introduce the process algebra PL-CCS as a product line extension of CCS and show how to model the overall behavior of an entire family within PL-CCS. PL-CCS models incorporate behavioral variability and allow the derivation of individual systems in a systematic way due to a semantics given in terms of multi-valued modal Kripke structures. Furthermore, we introduce multi-valued modal μ-calculus as a property specification language for system families specified in PL-CCS and show how model checking techniques operate on such structures. In our setting the result of model checking is no longer a simple yesor noanswer but the set of systems of the product line that do meet the specified properties. --- paper_title: Dead or Alive: finding zombie features in the Linux kernel paper_content: When an interference signals is detected by a base station, the base station issues a report to a switching apparatus. The switching apparatus checks self to see whether it is in a standby mode waiting for a frequency change completion report from another base station which is in the process of changing the control channel frequency. 
If the switching apparatus is satisfied that the station should be given a new control channel frequency, it issues a frequency change command to the requesting base station, and the base station receiving the command revises the current control channel frequency. In the meantime, all other requests from other stations are denied by the switching apparatus until the requesting base station has successfully completed the process of changing the control channel frequency. The base station acknowledges the completion of frequency change process by sending a frequency change completion report to the switching apparatus. --- paper_title: Model checking JAVA programs using JAVA PathFinder paper_content: Abstract.This paper describes a translator called Java PathFinder (Jpf), which translates from Java to Promela, the modeling language of the Spin model checker. Jpf translates a given Java program into a Promela model, which then can be model checked using Spin. The Java program may contain assertions, which are translated into similar assertions in the Promela model. The Spin model checker will then look for deadlocks and violations of any stated assertions. Jpf generates a Promela model with the same state space characteristics as the Java program. Hence, the Java program must have a finite and tractable state space. This work should be seen in a broader attempt to make formal methods applicable within NASA’s areas such as space, aviation, and robotics. The work is a continuation of an effort to formally analyze, using Spin, a multi-threaded operating system for the Deep-Space 1 space craft, and of previous work in applying existing model checkers and theorem provers to real applications. --- paper_title: Language-independent reference checking in software product lines paper_content: Feature-Oriented Software Development (FOSD) is a paradigm for the development of software product lines. A challenge in FOSD is to guarantee that all software systems of a software product line are correct. Recent work on type checking product lines can provide a guarantee of type correctness without generating all possible systems. We generalize previous results by abstracting from the specifics of particular programming languages. In a first attempt, we present a reference-checking algorithm that performs key tasks of product-line type checking independently of the target programming language. Experiments with two sample product lines written in Java and C are encouraging and give us confidence that this approach is promising. --- paper_title: Language-Independent and Automated Software Composition: The FeatureHouse Experience paper_content: Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages. In particular, we have integrated Java, C#, C, Haskell, Alloy, and JavaCC. 
A substantial number of case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties that a language must have in order to be ready for superimposition. We discuss perspectives of our approach and demonstrate how we extended FEATUREHOUSE with support for XML languages (in particular, XHTML, XMI/UML, and Ant) and alternative composition approaches (in particular, aspect weaving). Rounding off our previous work, we provide here a holistic view of the FEATUREHOUSE approach based on rich experience with numerous languages and case studies and reflections on several years of research. --- paper_title: Access Control in Feature-Oriented Programming paper_content: In feature-oriented programming (FOP) a programmer decomposes a program in terms of features. Ideally, features are implemented modularly so that they can be developed in isolation. Access control mechanisms in the form of access or visibility modifiers are an important ingredient to attain feature modularity as they allow programmers to hide and expose internal details of a module's implementation. But developers of contemporary feature-oriented languages have not considered access control mechanisms so far. The absence of a well-defined access control model for FOP breaks encapsulation of feature code and leads to unexpected program behaviors and inadvertent type errors. We raise awareness of this problem, propose three feature-oriented access modifiers, and present a corresponding access modifier model. We offer an implementation of the model on the basis of a fully-fledged feature-oriented compiler. Finally, by analyzing ten feature-oriented programs, we explore the potential of feature-oriented modifiers in FOP. --- paper_title: Proof Composition for Deductive Verification of Software Product Lines paper_content: Software product line engineering aims at the efficient development of program variants that share a common set of features and that differ in other features. Product lines can be efficiently developed using feature-oriented programming. Given a feature selection and the code artifacts for each feature, program variants can be generated automatically. The quality of the program variants can be rigorously ensured by formal verification. However, verification of all program variants can be expensive and include redundant verification tasks. We introduce a classification of existing software product line verification approaches and propose proof composition as a novel approach. Proof composition generates correctness proofs of each program variant based on partial proofs of each feature. We present a case study to evaluate proof composition and demonstrate that it reduces the effort for verification. --- paper_title: Principles of Program Analysis paper_content: Program analysis utilizes static techniques for computing reliable information about the dynamic behavior of programs. Applications include compilers (for code improvement), software validation (for detecting errors) and transformations between data representation (for solving problems such as Y2K). This book is unique in providing an overview of the four major approaches to program analysis: data flow analysis, constraint-based analysis, abstract interpretation, and type and effect systems. The presentation illustrates the extensive similarities between the approaches, helping readers to choose the best one to utilize. 
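Superimposition as used by FeatureHouse above can be pictured as a recursive merge of feature structure trees: inner nodes with matching name and type are merged, and leaves are composed by a language-specific rule. The node class and the simple leaf-overriding rule below are simplifying assumptions for illustration, not FeatureHouse's actual composition engine.

```python
from dataclasses import dataclass, field

@dataclass
class FSTNode:
    name: str
    kind: str                      # e.g., "class", "method", "field"
    body: str = ""                 # leaf content (terminal nodes)
    children: list = field(default_factory=list)

def superimpose(base, feature):
    """Recursively merge two feature structure trees (assumed semantics:
    matching inner nodes are merged; a feature leaf overrides a base leaf)."""
    if base.name != feature.name or base.kind != feature.kind:
        raise ValueError("only matching nodes can be superimposed")
    if not base.children and not feature.children:
        # Leaf composition rule (assumption): the feature's body wins,
        # mimicking method overriding without an original() call.
        return FSTNode(base.name, base.kind, feature.body or base.body)
    merged = {c.name: c for c in base.children}
    for child in feature.children:
        if child.name in merged:
            merged[child.name] = superimpose(merged[child.name], child)
        else:
            merged[child.name] = child
    return FSTNode(base.name, base.kind, children=list(merged.values()))

# Base feature: a List class with insert(); an extension feature adds logging.
base = FSTNode("List", "class", children=[
    FSTNode("insert", "method", body="def insert(self, x): self.items.append(x)"),
])
logging = FSTNode("List", "class", children=[
    FSTNode("insert", "method",
            body="def insert(self, x): print('insert', x); self.items.append(x)"),
    FSTNode("log", "field", body="log = []"),
])

composed = superimpose(base, logging)
print([(c.kind, c.name) for c in composed.children])
```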
--- paper_title: Aspect Composition Applying the Design by Contract Principle paper_content: The composition of software units has been one of the main research topics in computer science. This paper addresses the composition validation problem evolving in this context. It focuses on the composition of a certain kind of units called aspects. Aspects are a new concept which is introduced by aspect-oriented programming aiming at a better separation of concerns. Cross-cutting code is captured and localised in these aspects. Some of the cross-cutting features which are expressed in aspects cannot be woven with other features into the same application since two features could be mutually exclusive. With a growing number of aspects, manual control of these dependencies becomes error-prone or even impossible. We show how assertions can be useful in this respect to support the software developer. --- paper_title: Emergo: a tool for improving maintainability of preprocessor-based product lines paper_content: When maintaining a feature in preprocessor-based Software Product Lines (SPLs), developers are susceptible to introducing problems into other features. This is possible because features eventually share elements (like variables and methods) with the maintained one. This scenario might be even worse when hiding features by using techniques like Virtual Separation of Concerns (VSoC), since developers cannot see the feature dependencies and, consequently, they become unaware of them. Emergent Interfaces was proposed to minimize this problem by capturing feature dependencies and then providing information about other features that can be impacted during a maintenance task. In this paper, we present Emergo, a tool capable of computing emergent interfaces between the feature we are maintaining and the others. Emergo relies on feature-sensitive dataflow analyses in the sense that it takes features and the SPL feature model into consideration when computing the interfaces. --- paper_title: Abstract Features in Feature Modeling paper_content: A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results, because abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables reasoning about program variants rather than feature combinations.
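The distinction between feature combinations and program variants drawn in the last abstract can be made concrete with a tiny propositional encoding. The example model, its single abstract feature, and the brute-force enumeration are illustrative assumptions; real reasoners would hand the same formula to a SAT solver or BDD library.

```python
from itertools import product

# Hypothetical model: BASE is mandatory and concrete, STATISTICS is optional
# and concrete, and DOCUMENTED is an optional *abstract* feature used only to
# structure the model -- no code is mapped to it.
FEATURES = ["BASE", "STATISTICS", "DOCUMENTED"]
ABSTRACT = {"DOCUMENTED"}

def valid(sel):
    """Propositional encoding of the model (assumed): BASE must be selected."""
    return sel["BASE"]

def feature_combinations():
    for values in product([False, True], repeat=len(FEATURES)):
        sel = dict(zip(FEATURES, values))
        if valid(sel):
            yield sel

# A reasoner that counts feature combinations sees 4 of them, but projecting
# each combination onto the concrete features yields only 2 program variants.
combos = list(feature_combinations())
variants = {frozenset(f for f, v in sel.items() if v and f not in ABSTRACT)
            for sel in combos}
print(len(combos), len(variants))   # prints: 4 2
```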
In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing. --- paper_title: Delta-oriented programming of software product lines paper_content: Feature-oriented programming (FOP) implements software product lines by composition of feature modules. It relies on the principles of stepwise development. Feature modules are intended to refer to exactly one product feature and can only extend existing implementations. To provide more flexibility for implementing software product lines, we propose delta-oriented programming (DOP) as a novel programming language approach. A product line is represented by a core module and a set of delta modules. The core module provides an implementation of a valid product that can be developed with well-established single application engineering techniques. Delta modules specify changes to be applied to the core module to implement further products by adding, modifying and removing code. Application conditions attached to delta modules allow handling combinations of features explicitly. A product implementation for a particular feature configuration is generated by applying incrementally all delta modules with valid application condition to the core module. In order to evaluate the potential of DOP, we compare it to FOP, both conceptually and empirically. --- paper_title: CPAchecker: A Tool for Configurable Software Verification paper_content: Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, implements the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. In software verification, it takes a considerable amount of effort to convert a verification idea into actual experimental results -- we aim at accelerating this process. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this flexible and easy-to-extend platform, and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. CPAchecker implements CPAs for several abstract domains. We evaluate the efficiency of the current version of our tool on software-verification benchmarks from the literature, and compare it with other state-of-the-art model checkers. CPAchecker is an open-source toolkit and publicly available. --- paper_title: Detection of feature interactions using feature-aware verification paper_content: A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions-- situations in which the combination of features leads to emergent and possibly critical behavior --are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line-verification techniques and supports the specification of feature properties along with the features in separate and composable units. 
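Relating to the delta-oriented programming abstract above, product generation can be pictured as applying delta modules to a core method table whenever their application condition holds for the selected features. The account example, the deltas, and their conditions below are hypothetical and serve only to make the generation step concrete.

```python
# Core module: a minimal, valid product represented as a method table.
CORE = {
    "Account.deposit":  "def deposit(self, x): self.balance += x",
    "Account.withdraw": "def withdraw(self, x): self.balance -= x",
}

# Delta modules (assumed): each has an application condition over features
# and a set of additions, modifications, and removals.
DELTAS = [
    {   # adds interest handling when INTEREST is selected
        "when": lambda fs: "INTEREST" in fs,
        "add": {"Account.add_interest": "def add_interest(self): ..."},
        "modify": {}, "remove": set(),
    },
    {   # replaces withdraw with an overdraft-checking version
        "when": lambda fs: "NO_OVERDRAFT" in fs,
        "add": {},
        "modify": {"Account.withdraw":
                   "def withdraw(self, x): assert self.balance >= x; self.balance -= x"},
        "remove": set(),
    },
]

def generate(features):
    """Apply all deltas whose application condition holds, in order."""
    table = dict(CORE)
    for delta in DELTAS:
        if delta["when"](features):
            for name in delta["remove"]:
                table.pop(name, None)
            table.update(delta["modify"])
            table.update(delta["add"])
    return table

print(sorted(generate({"INTEREST", "NO_OVERDRAFT"})))
```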
It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLVERIFIER for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge. --- paper_title: Toward variability-aware testing paper_content: We investigate how to execute a unit test for all products of a product line without generating each product in isolation in a brute-force fashion. Learning from variability-aware analyses, we (a) design and implement a variability-aware interpreter and, alternatively, (b) reencode variability of the product line to simulate the test cases with a model checker. The interpreter internally reasons about variability, executing paths not affected by variability only once for the whole product line. The model checker achieves similar results by reusing powerful off-the-shelf analyses. We experimented with a prototype implementation for each strategy. We compare both strategies and discuss trade-offs and future directions. In the long run, we aim at finding an efficient testing approach that can be applied to entire product lines with millions of products. --- paper_title: Model checking lots of systems: efficient verification of temporal properties in software product lines paper_content: In product line engineering, systems are developed in families and differences between family members are expressed in terms of features. Formal modelling and verification is an important issue in this context as more and more critical systems are developed this way. Since the number of systems in a family can be exponential in the number of features, two major challenges are the scalable modelling and the efficient verification of system behaviour. Currently, the few attempts to address them fail to recognise the importance of features as a unit of difference, or do not offer means for automated verification. In this paper, we tackle those challenges at a fundamental level. We first extend transition systems with features in order to describe the combined behaviour of an entire system family. We then define and implement a model checking technique that allows to verify such transition systems against temporal properties. An empirical evaluation shows substantial gains over classical approaches. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. 
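A recurring idea in the abstracts above is variability encoding: compile-time variability is translated into run-time variability so that a single encoded program covers all variants in one analysis or test run. The toy feature flags and properties below are assumptions for illustration; real tools derive such a variant simulator automatically from feature modules or #ifdef-annotated code.

```python
from itertools import product

# Run-time feature flags replacing compile-time variability (assumption: in
# the original code these would be #ifdef blocks or feature modules).
class Config:
    def __init__(self, logging, encryption):
        self.LOGGING = logging
        self.ENCRYPTION = encryption

def send(msg, cfg, log):
    if cfg.ENCRYPTION:
        msg = msg[::-1]            # stand-in for real encryption
    if cfg.LOGGING:
        log.append(msg)
    return msg

def check_all_variants():
    """A single harness exercises every variant of the encoded family and
    checks properties that must hold in all of them."""
    for logging, encryption in product([False, True], repeat=2):
        cfg, log = Config(logging, encryption), []
        out = send("hello", cfg, log)
        assert (out == "hello") != encryption   # encryption changes the payload
        assert (len(log) == 1) == logging       # logging records exactly one entry

check_all_variants()
```

A model checker applied to the encoded program would treat the flags as nondeterministic inputs instead of enumerating them explicitly, which is the essence of the family-based strategies discussed here.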
The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. --- paper_title: Feature consistency in compile-time-configurable system software: facing the linux 10,000 feature problem paper_content: Much system software can be configured at compile time to tailor it with respect to a broad range of supported hardware architectures and application domains. A good example is the Linux kernel, which provides more than 10,000 configurable features, growing rapidly. From the maintenance point of view, compile-time configurability imposes big challenges. The configuration model (the selectable features and their constraints as presented to the user) and the configurability that is actually implemented in the code have to be kept in sync, which, if performed manually, is a tedious and error-prone task. In the case of Linux, this has led to numerous defects in the source code, many of which are actual bugs. We suggest an approach to automatically check for configurability-related implementation defects in large-scale configurable system software. The configurability is extracted from its various implementation sources and examined for inconsistencies, which manifest in seemingly conditional code that is in fact unconditional. We evaluate our approach with the latest version of Linux, for which our tool detects 1,776 configurability defects, which manifest as dead/superfluous source code and bugs. Our findings have led to numerous source-code improvements and bug fixes in Linux: 123 patches (49 merged) fix 364 defects, 147 of which have been confirmed by the corresponding Linux developers, 20 of them as fixing a new bug. --- paper_title: Precise interprocedural dataflow analysis via graph reachability paper_content: The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables). --- paper_title: FeatureIDE: An Extensible Framework for Feature-Oriented Software Development paper_content: FeatureIDE is an open-source framework for feature-oriented software development (FOSD) based on Eclipse. FOSD is a paradigm for the construction, customization, and synthesis of software systems. Code artifacts are mapped to features, and a customized software system can be generated given a selection of features.
The set of software systems that can be generated is called a software product line (SPL). FeatureIDE supports several FOSD implementation techniques such as feature-oriented programming, aspect-oriented programming, delta-oriented programming, and preprocessors. All phases of FOSD are supported in FeatureIDE, namely domain analysis, requirements analysis, domain implementation, and software generation. --- paper_title: Automatic detection of feature interactions using the Java modeling language: an experience report paper_content: In the development of complex software systems, interactions between different program features increase the design complexity. Feature-oriented software development focuses on the representation and compositions of features. The implementation of features often cuts across object-oriented module boundaries and hence comprises interactions. The manual detection and treatment of feature interactions requires a deep knowledge of the implementation details of the features involved. Our goal is to detect interactions automatically using specifications by means of design by contract and automated theorem proving. We provide a software tool that operates on programs in Java and the Java Modeling Language (JML). We discuss which kinds of feature interactions can be detected automatically with our tool and how to detect other kinds of interactions. --- paper_title: Applying 'design by contract' paper_content: Methodological guidelines for object-oriented software construction that improve the reliability of the resulting software systems are presented. It is shown that the object-oriented techniques rely on the theory of design by contract, which underlies the design of the Eiffel analysis, design, and programming language and of the supporting libraries, from which a number of examples are drawn. The theory of contract design and the role of assertions in that theory are discussed. > --- paper_title: Potential synergies of theorem proving and model checking for software product lines paper_content: The verification of software product lines is an active research area. A challenge is to efficiently verify similar products without the need to generate and verify them individually. As solution, researchers suggest family-based verification approaches, which either transform compile-time into runtime variability or make verification tools variability-aware. Existing approaches either focus on theorem proving, model checking, or other verification techniques. For the first time, we combine theorem proving and model checking to evaluate their synergies for product-line verification. We provide tool support by connecting five existing tools, namely FeatureIDE and FeatureHouse for product-line development, as well as KeY, JPF, and OpenJML for verification of Java programs. In an experiment, we found the synergy of improved effectiveness and efficiency, especially for product lines with few defects. Further, we experienced that model checking and theorem proving are more efficient and effective if the product line contains more defects. --- paper_title: Virtual Separation of Concerns -- A Second Chance for Preprocessors paper_content: Conditional compilation with preprocessors like cpp is a simple but eective means to implement variability. By annotating code fragments with #ifdef and #endif directives, dierent program variants with or without these fragments can be created, which can be used (among others) to implement software product lines. 
Although preprocessors are frequently used in practice, they are often criticized for their negative effect on code quality and maintainability. In contrast to modularized implementations, for example using components or aspects, preprocessors neglect separation of concerns, are prone to introduce subtle errors, can entirely obfuscate the source code, and limit reuse. Our aim is to rehabilitate the preprocessor by showing how simple tool support can address these problems and emulate some benefits of modularized implementations. At the same time we emphasize unique benefits of preprocessors, like simplicity and language independence. Although we do not have a definitive answer on how to implement variability, we want to highlight opportunities to improve preprocessors and encourage research toward novel preprocessor-based approaches. --- paper_title: SPL Conqueror: Toward optimization of non-functional properties in software product lines paper_content: A software product line (SPL) is a family of related programs of a domain. The programs of an SPL are distinguished in terms of features, which are end-user visible characteristics of programs. Based on a selection of features, stakeholders can derive tailor-made programs that satisfy functional requirements. Besides functional requirements, different application scenarios raise the need for optimizing non-functional properties of a variant. The diversity of application scenarios leads to heterogeneous optimization goals with respect to non-functional properties (e.g., performance vs. footprint vs. energy optimized variants). Hence, an SPL has to satisfy different and sometimes contradicting requirements regarding non-functional properties. Usually, the actually required non-functional properties are not known before product derivation and can vary for each application scenario and customer. Allowing stakeholders to derive optimized variants requires us to measure non-functional properties after the SPL is developed. Unfortunately, the high variability provided by SPLs complicates measurement and optimization of non-functional properties due to a large variant space. With SPL Conqueror, we provide a holistic approach to optimize non-functional properties in SPL engineering. We show how non-functional properties can be qualitatively specified and quantitatively measured in the context of SPLs. Furthermore, we discuss the variant-derivation process in SPL Conqueror that reduces the effort of computing an optimal variant. We demonstrate the applicability of our approach by means of nine case studies of a broad range of application domains (e.g., database management and operating systems). Moreover, we show that SPL Conqueror is implementation and language independent by using SPLs that are implemented with different mechanisms, such as conditional compilation and feature-oriented programming. --- paper_title: Reducing Configurations to Monitor in a Software Product Line paper_content: A software product line is a family of programs where each program is defined by a unique combination of features. Product lines, like conventional programs, can be checked for safety properties through execution monitoring. However, because a product line induces a number of programs that is potentially exponential in the number of features, it would be very expensive to use existing monitoring techniques: one would have to apply those techniques to every single program. Doing so would also be wasteful because many programs can provably never violate the stated property.
We introduce a monitoring technique dedicated to product lines that, given a safety property, statically determines the feature combinations that cannot possibly violate the property, thus reducing the number of programs to monitor. Experiments show that our technique is effective, particularly for safety properties that crosscut many optional features. --- paper_title: ASADAL: a tool system for co-development of software and test environment based on product line engineering paper_content: Recently, product line software engineering (PLSE) is gaining popularity. To employ PLSE methods, many organizations are looking for a tool system that supports PLSE methods so that core assets and target software can be developed and tested in an effective and systematic way.ASADAL (A System Analysis and Design Aid tooL) supports the entire lifecycle of software development process based on a PLSE method called FORM (Feature-Oriented Reuse Method) [6]. It supports domain analysis, architecture and component design, code generation, and simulation-based verification and validation (V&V). Using the tool, users may co-develop target software and its test environment and verify software in a continuous and incremental way. --- paper_title: Dead or Alive: finding zombie features in the Linux kernel paper_content: When an interference signals is detected by a base station, the base station issues a report to a switching apparatus. The switching apparatus checks self to see whether it is in a standby mode waiting for a frequency change completion report from another base station which is in the process of changing the control channel frequency. If the switching apparatus is satisfied that the station should be given a new control channel frequency, it issues a frequency change command to the requesting base station, and the base station receiving the command revises the current control channel frequency. In the meantime, all other requests from other stations are denied by the switching apparatus until the requesting base station has successfully completed the process of changing the control channel frequency. The base station acknowledges the completion of frequency change process by sending a frequency change completion report to the switching apparatus. --- paper_title: Integration Testing of Software Product Lines Using Compositional Symbolic Execution paper_content: Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse shared assets and reduce test effort. The use of feature dependence graphs has also been employed to reduce testing effort, but little work has focused on code level analysis of dataflow between features. In this paper we present a compositional symbolic execution technique that works in concert with a feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically analyze feature interactions. We experiment with two product lines and determine that our technique can reduce the overall number of interactions that must be considered during testing, and requires less time to run than a traditional symbolic execution technique. --- paper_title: Language-independent reference checking in software product lines paper_content: Feature-Oriented Software Development (FOSD) is a paradigm for the development of software product lines. 
A challenge in FOSD is to guarantee that all software systems of a software product line are correct. Recent work on type checking product lines can provide a guarantee of type correctness without generating all possible systems. We generalize previous results by abstracting from the specifics of particular programming languages. In a first attempt, we present a reference-checking algorithm that performs key tasks of product-line type checking independently of the target programming language. Experiments with two sample product lines written in Java and C are encouraging and give us confidence that this approach is promising. --- paper_title: Language-Independent and Automated Software Composition: The FeatureHouse Experience paper_content: Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages. In particular, we have integrated Java, C#, C, Haskell, Alloy, and JavaCC. A substantial number of case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties that a language must have in order to be ready for superimposition. We discuss perspectives of our approach and demonstrate how we extended FEATUREHOUSE with support for XML languages (in particular, XHTML, XMI/UML, and Ant) and alternative composition approaches (in particular, aspect weaving). Rounding off our previous work, we provide here a holistic view of the FEATUREHOUSE approach based on rich experience with numerous languages and case studies and reflections on several years of research. --- paper_title: Access Control in Feature-Oriented Programming paper_content: In feature-oriented programming (FOP) a programmer decomposes a program in terms of features. Ideally, features are implemented modularly so that they can be developed in isolation. Access control mechanisms in the form of access or visibility modifiers are an important ingredient to attain feature modularity as they allow programmers to hide and expose internal details of a module's implementation. But developers of contemporary feature-oriented languages have not considered access control mechanisms so far. The absence of a well-defined access control model for FOP breaks encapsulation of feature code and leads to unexpected program behaviors and inadvertent type errors. We raise awareness of this problem, propose three feature-oriented access modifiers, and present a corresponding access modifier model. We offer an implementation of the model on the basis of a fully-fledged feature-oriented compiler. Finally, by analyzing ten feature-oriented programs, we explore the potential of feature-oriented modifiers in FOP. --- paper_title: Clafer tools for product line engineering paper_content: Clafer is a lightweight yet expressive language for structural modeling: feature modeling and configuration, class and object modeling, and metamodeling. 
Clafer Tools is an integrated set of tools based on Clafer. In this paper, we describe some product-line variability modeling scenarios of Clafer Tools from the viewpoints of product-line owner, product-line engineer, and product engineer. --- paper_title: PLEDGE: a product line editor and test generation tool paper_content: Specific requirements of clients lead to the development of variants of the same software. These variants form a Software Product Line (SPL). Ideally, testing a SPL involves testing all the software products that can be configured through the combination of features. This, however, is intractable in practice since a) large SPLs can lead to millions of possible software variants and b) the testing process is usually limited by budget and time constraints. To overcome this problem, this paper introduces PLEDGE, an open source tool that selects and prioritizes the product configurations maximizing the feature interactions covered. The uniqueness of PLEDGE is that it bypasses the computation of the feature interactions, allowing to scale to large SPLs. --- paper_title: Aspect Composition Applying the Design by Contract Principle paper_content: The composition of software units has been one of the main research topics in computer science. This paper addresses the composition validation problem evolving in this context. It focuses on the composition for a certain kind of units called aspects. Aspects are a new concept which is introduced by aspect-oriented programming aiming at a better separation of concerns. Cross-cutting code is captured and localised in these aspects. Some of the cross-cutting features which are expressed in aspects cannot be woven with other features into the same application since two features could be mutually exclusive. With a growing number of aspects, manual control of these dependencies becomes error-prone or even impossible. We show how assertions can be useful in this respect to support the software developer. --- paper_title: Emergo: a tool for improving maintainability of preprocessor-based product lines paper_content: When maintaining a feature in preprocessor-based Software Product Lines (SPLs), developers are susceptible to introduce problems into other features. This is possible because features eventually share elements (like variables and methods) with the maintained one. This scenario might be even worse when hiding features by using techniques like Virtual Separation of Concerns (VSoC), since developers cannot see the feature dependencies and, consequently, they become unaware of them. Emergent Interfaces was proposed to minimize this problem by capturing feature dependencies and then providing information about other features that can be impacted during a maintenance task. In this paper, we present Emergo, a tool capable of computing emergent interfaces between the feature we are maintaining and the others. Emergo relies on feature-sensitive dataflow analyses in the sense it takes features and the SPL feature model into consideration when computing the interfaces. --- paper_title: Variability-aware parsing in the presence of lexical macros and conditional compilation paper_content: In many projects, lexical preprocessors are used to manage different variants of the project (using conditional compilation) and to define compile-time code transformations (using macros). 
Unfortunately, while being a simple way to implement variability, conditional compilation and lexical macros hinder automatic analysis, even though such analysis is urgently needed to combat variability-induced complexity. To analyze code with its variability, we need to parse it without preprocessing it. However, current parsing solutions use unsound heuristics, support only a subset of the language, or suffer from exponential explosion. As part of the TypeChef project, we contribute a novel variability-aware parser that can parse almost all unpreprocessed code without heuristics in practicable time. Beyond the obvious task of detecting syntax errors, our parser paves the road for further analysis, such as variability-aware type checking. We implement variability-aware parsers for Java and GNU C and demonstrate practicability by parsing the product line MobileMedia and the entire X86 architecture of the Linux kernel with 6065 variable features. --- paper_title: Delta-oriented programming of software product lines paper_content: Feature-oriented programming (FOP) implements software product lines by composition of feature modules. It relies on the principles of stepwise development. Feature modules are intended to refer to exactly one product feature and can only extend existing implementations. To provide more flexibility for implementing software product lines, we propose delta-oriented programming (DOP) as a novel programming language approach. A product line is represented by a core module and a set of delta modules. The core module provides an implementation of a valid product that can be developed with well-established single application engineering techniques. Delta modules specify changes to be applied to the core module to implement further products by adding, modifying and removing code. Application conditions attached to delta modules allow handling combinations of features explicitly. A product implementation for a particular feature configuration is generated by applying incrementally all delta modules with valid application condition to the core module. In order to evaluate the potential of DOP, we compare it to FOP, both conceptually and empirically. --- paper_title: Exploring variability-aware execution for testing plugin-based web applications paper_content: In plugin-based systems, plugin conflicts may occur when two or more plugins interfere with one another, changing their expected behaviors. It is highly challenging to detect plugin conflicts due to the exponential explosion of the combinations of plugins (i.e., configurations). In this paper, we address the challenge of executing a test case over many configurations. Leveraging the fact that many executions of a test are similar, our variability-aware execution runs common code once. Only when encountering values that are different depending on specific configurations will the execution split to run for each of them. To evaluate the scalability of variability-aware execution on a large real-world setting, we built a prototype PHP interpreter called Varex and ran it on the popular WordPress blogging Web application. The results show that while plugin interactions exist, there is a significant amount of sharing that allows variability-aware execution to scale to 2^50 configurations within seven minutes of running time. During our study, with Varex, we were able to detect two plugin conflicts: one was recently reported on WordPress forum and another one was not previously discovered. 
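The sharing idea behind variability-aware execution, as in the interpreter and the Varex prototype described above, can be sketched with a "variational" value that stays a single plain value as long as all configurations agree and only splits into per-configuration alternatives when a feature-dependent choice is made. The VValue class and the example feature flag are assumptions for illustration; Varex and similar interpreters implement this inside the language runtime.

```python
class VValue:
    """A value that may differ between configurations.

    alternatives maps a frozenset of feature literals (a partial configuration
    context) to the concrete value holding under that context."""
    def __init__(self, alternatives):
        self.alternatives = alternatives

    @staticmethod
    def one(value):
        return VValue({frozenset(): value})   # same value in every configuration

    def map(self, fn):
        # Shared computation: fn is applied once per *distinct* alternative,
        # not once per configuration.
        return VValue({ctx: fn(v) for ctx, v in self.alternatives.items()})

    def choice(self, feature, then_fn, else_fn):
        # Execution splits only here, where behaviour depends on a feature.
        split = {}
        for ctx, v in self.alternatives.items():
            split[ctx | {feature}] = then_fn(v)
            split[ctx | {"!" + feature}] = else_fn(v)
        return VValue(split)

# The plugin-independent part of the computation runs once...
title = VValue.one("hello world").map(str.title)
# ...and splits only for the part affected by a hypothetical SEO plugin.
rendered = title.choice("SEO", lambda t: t + " | MySite", lambda t: t)
for ctx, value in rendered.alternatives.items():
    print(sorted(ctx), value)
```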
--- paper_title: Incremental Test Generation for Software Product Lines paper_content: Recent advances in mechanical techniques for systematic testing have increased our ability to automatically find subtle bugs, and hence, to deploy more dependable software. This paper builds on one such systematic technique, scope-bounded testing, to develop a novel specification-based approach for efficiently generating tests for products in a software product line. Given properties of features as first-order logic formulas in Alloy, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that an incremental approach can provide an order of magnitude speedup over conventional techniques. We also present a further optimization using dedicated integer constraint solvers for feature properties that introduce integer constraints, and show how to use a combination of solvers in tandem for solving Alloy formulas. --- paper_title: A product line based aspect-oriented generative unit testing approach to building quality components paper_content: The quality of component-based systems highly depends on how effectively testing is carried out. To achieve the maximal testing effectiveness, this paper presents a product line based aspect oriented approach to unit testing. The aspect product line facilitates the automatic creation of aspect test cases that deal with specific quality requirements. An expandable repository of reusable aspect test cases has been developed. A prototype tool is built to verify and lever up the approach. --- paper_title: Avoiding redundant testing in application engineering paper_content: Many software product line testing techniques have been presented in the literature. The majority of those techniques address how to define reusable test assets (such as test models or test scenarios) in domain engineering and how to exploit those assets during application engineering. In addition to test case reuse however, the execution of test cases constitutes one important activity during application testing. Without a systematic support for the test execution in application engineering, while considering the specifics of product lines, product line artifacts might be tested redundantly. Redundant testing in application engineering, however, can lead to an increased testing effort without increasing the chance of uncovering failures. In this paper, we propose the model-based ScenTED-DF technique to avoid redundant testing in application engineering. Our technique builds on data flow-based testing techniques for single systems and adapts and extends those techniques to consider product line variability. The paper sketches the prototypical implementation of our technique to show its general feasibility and automation potential, and it describes the results of experiments using an academic product line to demonstrate that ScenTED-DF is capable of avoiding redundant tests in application engineering. --- paper_title: Detection of feature interactions using feature-aware verification paper_content: A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). 
Feature interactions-- situations in which the combination of features leads to emergent and possibly critical behavior --are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line-verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLVERIFIER for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge. --- paper_title: Toward variability-aware testing paper_content: We investigate how to execute a unit test for all products of a product line without generating each product in isolation in a brute-force fashion. Learning from variability-aware analyses, we (a) design and implement a variability-aware interpreter and, alternatively, (b) reencode variability of the product line to simulate the test cases with a model checker. The interpreter internally reasons about variability, executing paths not affected by variability only once for the whole product line. The model checker achieves similar results by reusing powerful off-the-shelf analyses. We experimented with a prototype implementation for each strategy. We compare both strategies and discuss trade-offs and future directions. In the long run, we aim at finding an efficient testing approach that can be applied to entire product lines with millions of products. --- paper_title: Family-based performance measurement paper_content: Most contemporary programs are customizable. They provide many features that give rise to millions of program variants. Determining which feature selection yields an optimal performance is challenging, because of the exponential number of variants. Predicting the performance of a variant based on previous measurements proved successful, but induces a trade-off between the measurement effort and prediction accuracy. We propose the alternative approach of family-based performance measurement, to reduce the number of measurements required for identifying feature interactions and for obtaining accurate predictions. The key idea is to create a variant simulator (by translating compile-time variability to run-time variability) that can simulate the behavior of all program variants. We use it to measure performance of individual methods, trace methods to features, and infer feature interactions based on the call graph. We evaluate our approach by means of five feature-oriented programs. On average, we achieve accuracy of 98%, with only a single measurement per customizable program. Observations show that our approach opens avenues of future research in different domains, such as feature-interaction detection and testing. --- paper_title: Automated and Scalable T-wise Test Case Generation Strategies for Software Product Lines paper_content: Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability across their features. This leads to combinatorial explosion of the number of derivable products.
Exhaustive testing in such a large space of products is infeasible. One possible option is to test SPLs by generating test cases that cover all possible T feature interactions (T-wise). T-wise dramatically reduces the number of test products while ensuring reasonable SPL coverage. However, automatic generation of test cases satisfying T-wise using SAT solvers raises two issues. The encoding of SPL models and T-wise criteria into a set of formulas acceptable by the solver and their satisfaction which fails when processed ``all-at-once''. We propose a scalable toolset using Alloy to automatically generate test cases satisfying T-wise from SPL models. We define strategies to split T-wise combinations into solvable subsets. We design and compute metrics to evaluate strategies on Aspect OPTIMA, a concrete transactional SPL. --- paper_title: Feature consistency in compile-time-configurable system software: facing the linux 10,000 feature problem paper_content: Much system software can be configured at compile time to tailor it with respect to a broad range of supported hardware architectures and application domains. A good example is the Linux kernel, which provides more than 10,000 configurable features, growing rapidly. From the maintenance point of view, compile-time configurability imposes big challenges. The configuration model (the selectable features and their constraints as presented to the user) and the configurability that is actually implemented in the code have to be kept in sync, which, if performed manually, is a tedious and error-prone task. In the case of Linux, this has led to numerous defects in the source code, many of which are actual bugs. We suggest an approach to automatically check for configurability-related implementation defects in large-scale configurable system software. The configurability is extracted from its various implementation sources and examined for inconsistencies, which manifest in seemingly conditional code that is in fact unconditional. We evaluate our approach with the latest version of Linux, for which our tool detects 1,776 configurability defects, which manifest as dead/superfluous source code and bugs. Our findings have led to numerous source-code improvements and bug fixes in Linux: 123 patches (49 merged) fix 364 defects, 147 of which have been confirmed by the corresponding Linux developers and 20 as fixing a new bug. --- paper_title: Reducing combinatorics in testing product lines paper_content: A Software Product Line (SPL) is a family of programs where each program is defined by a unique combination of features. Testing or checking properties of an SPL is hard as it may require the examination of a combinatorial number of programs. In reality, however, features are often irrelevant for a given test - they augment, but do not change, existing behavior, making many feature combinations unnecessary as far as testing is concerned. In this paper we show how to reduce the amount of effort in testing an SPL. We represent an SPL in a form where conventional static program analysis techniques can be applied to find irrelevant features for a test. We use this information to reduce the combinatorial number of SPL programs to examine. --- paper_title: FeatureIDE: An Extensible Framework for Feature-Oriented Software Development paper_content: FeatureIDE is an open-source framework for feature-oriented software development (FOSD) based on Eclipse. FOSD is a paradigm for the construction, customization, and synthesis of software systems. 
Code artifacts are mapped to features, and a customized software system can be generated given a selection of features. The set of software systems that can be generated is called a software product line (SPL). FeatureIDE supports several FOSD implementation techniques such as feature-oriented programming, aspect-oriented programming, delta-oriented programming, and preprocessors. All phases of FOSD are supported in FeatureIDE, namely domain analysis, requirements analysis, domain implementation, and software generation. --- paper_title: A Case Study Implementing Features Using AspectJ paper_content: Software product lines aim to create highly configurable programs from a set of features. Common belief and recent studies suggest that aspects are well-suited for implementing features. We evaluate the suitability of AspectJ with respect to this task by a case study that refactors the embedded database system Berkeley DB into 38 features. Contrary to our initial expectations, the results were not encouraging. As the number of aspects in a feature grows, there is a noticeable decrease in code readability and maintainability. Most of the unique and powerful features of AspectJ were not needed. We document where AspectJ is unsuitable for implementing features of refactored legacy applications and explain why. --- paper_title: Analyzing the discipline of preprocessor annotations in 30 million lines of C code paper_content: The C preprocessor cpp is a widely used tool for implementing variable software. It enables programmers to express variable code (which may even crosscut the entire implementation) with conditional compilation. The C preprocessor relies on simple text processing and is independent of the host language (C, C++, Java, and so on). Language-independent text processing is powerful and expressive - programmers can make all kinds of annotations in the form of #ifdefs - but can render unpreprocessed code difficult to process automatically by tools, such as refactoring, concern management, and variability-aware type checking. We distinguish between disciplined annotations, which align with the underlying source-code structure, and undisciplined annotations, which do not align with the structure and hence complicate tool development. This distinction raises the question of how frequently programmers use undisciplined annotations and whether it is feasible to change them to disciplined annotations to simplify tool development and to enable programmers to use a wide variety of tools in the first place. By means of an analysis of 40 medium-sized to large-sized C programs, we show empirically that programmers use cpp mostly in a disciplined way: about 84% of all annotations respect the underlying source-code structure. Furthermore, we analyze the remaining undisciplined annotations, identify patterns, and discuss how to transform them into a disciplined form. --- paper_title: Automatic detection of feature interactions using the Java modeling language: an experience report paper_content: In the development of complex software systems, interactions between different program features increase the design complexity. Feature-oriented software development focuses on the representation and compositions of features. The implementation of features often cuts across object-oriented module boundaries and hence comprises interactions. The manual detection and treatment of feature interactions requires a deep knowledge of the implementation details of the features involved. 
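Several of the works above apply design by contract to feature composition. A minimal sketch, assuming plain Python assertions in place of JML contracts and subclassing in place of feature composition, shows how a contract stated for a base feature can surface an interaction introduced by another feature.

```python
def contract(post):
    """Attach a postcondition (assumed stand-in for a JML 'ensures' clause)."""
    def wrap(method):
        def checked(self, *args):
            old_balance = self.balance
            result = method(self, *args)
            assert post(self, old_balance), "postcondition violated"
            return result
        return checked
    return wrap

class Account:
    def __init__(self):
        self.balance = 0

    @contract(lambda self, old: self.balance >= old)   # deposits never shrink the balance
    def deposit(self, amount):
        self.balance += amount

# Feature DAILY_FEE refines deposit (composition emulated by subclassing) and
# restates the base contract for the composed behaviour.
class AccountWithFee(Account):
    FEE = 5
    @contract(lambda self, old: self.balance >= old)
    def deposit(self, amount):
        super().deposit(amount)
        self.balance -= self.FEE   # the refinement charges a fee afterwards

acc = AccountWithFee()
try:
    acc.deposit(3)   # the fee makes the composed behaviour violate the base contract
except AssertionError as e:
    print("interaction detected:", e)
```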
Our goal is to detect interactions automatically using specifications by means of design by contract and automated theorem proving. We provide a software tool that operates on programs in Java and the Java Modeling Language (JML). We discuss which kinds of feature interactions can be detected automatically with our tool and how to detect other kinds of interactions. --- paper_title: Testing Software Product Lines Using Incremental Test Generation paper_content: We present a novel specification-based approach for generating tests for products in a software product line. Given properties of features as first-order logic formulas, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that the incremental approach can provide an order of magnitude speed-up over conventional techniques. --- paper_title: Virtual Separation of Concerns -- A Second Chance for Preprocessors paper_content: Conditional compilation with preprocessors like cpp is a simple but effective means to implement variability. By annotating code fragments with #ifdef and #endif directives, different program variants with or without these fragments can be created, which can be used (among others) to implement software product lines. Although preprocessors are frequently used in practice, they are often criticized for their negative effect on code quality and maintainability. In contrast to modularized implementations, for example using components or aspects, preprocessors neglect separation of concerns, are prone to introduce subtle errors, can entirely obfuscate the source code, and limit reuse. Our aim is to rehabilitate the preprocessor by showing how simple tool support can address these problems and emulate some benefits of modularized implementations. At the same time we emphasize unique benefits of preprocessors, like simplicity and language independence. Although we do not have a definitive answer on how to implement variability, we want to highlight opportunities to improve preprocessors and encourage research toward novel preprocessor-based approaches. --- paper_title: Language-Independent and Automated Software Composition: The FeatureHouse Experience paper_content: Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages. In particular, we have integrated Java, C#, C, Haskell, Alloy, and JavaCC. A substantial number of case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties that a language must have in order to be ready for superimposition.
We discuss perspectives of our approach and demonstrate how we extended FEATUREHOUSE with support for XML languages (in particular, XHTML, XMI/UML, and Ant) and alternative composition approaches (in particular, aspect weaving). Rounding off our previous work, we provide here a holistic view of the FEATUREHOUSE approach based on rich experience with numerous languages and case studies and reflections on several years of research. --- paper_title: Access Control in Feature-Oriented Programming paper_content: In feature-oriented programming (FOP) a programmer decomposes a program in terms of features. Ideally, features are implemented modularly so that they can be developed in isolation. Access control mechanisms in the form of access or visibility modifiers are an important ingredient to attain feature modularity as they allow programmers to hide and expose internal details of a module's implementation. But developers of contemporary feature-oriented languages have not considered access control mechanisms so far. The absence of a well-defined access control model for FOP breaks encapsulation of feature code and leads to unexpected program behaviors and inadvertent type errors. We raise awareness of this problem, propose three feature-oriented access modifiers, and present a corresponding access modifier model. We offer an implementation of the model on the basis of a fully-fledged feature-oriented compiler. Finally, by analyzing ten feature-oriented programs, we explore the potential of feature-oriented modifiers in FOP. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. --- paper_title: FeatureIDE: An Extensible Framework for Feature-Oriented Software Development paper_content: FeatureIDE is an open-source framework for feature-oriented software development (FOSD) based on Eclipse. FOSD is a paradigm for the construction, customization, and synthesis of software systems. Code artifacts are mapped to features, and a customized software system can be generated given a selection of features. The set of software systems that can be generated is called a software product line (SPL). 
FeatureIDE supports several FOSD implementation techniques such as feature-oriented programming, aspect-oriented programming, delta-oriented programming, and preprocessors. All phases of FOSD are supported in FeatureIDE, namely domain analysis, requirements analysis, domain implementation, and software generation. --- paper_title: SPL Conqueror: Toward optimization of non-functional properties in software product lines paper_content: A software product line (SPL) is a family of related programs of a domain. The programs of an SPL are distinguished in terms of features, which are end-user visible characteristics of programs. Based on a selection of features, stakeholders can derive tailor-made programs that satisfy functional requirements. Besides functional requirements, different application scenarios raise the need for optimizing non-functional properties of a variant. The diversity of application scenarios leads to heterogeneous optimization goals with respect to non-functional properties (e.g., performance vs. footprint vs. energy optimized variants). Hence, an SPL has to satisfy different and sometimes contradicting requirements regarding non-functional properties. Usually, the actually required non-functional properties are not known before product derivation and can vary for each application scenario and customer. Allowing stakeholders to derive optimized variants requires us to measure non-functional properties after the SPL is developed. Unfortunately, the high variability provided by SPLs complicates measurement and optimization of non-functional properties due to a large variant space. With SPL Conqueror, we provide a holistic approach to optimize non-functional properties in SPL engineering. We show how non-functional properties can be qualitatively specified and quantitatively measured in the context of SPLs. Furthermore, we discuss the variant-derivation process in SPL Conqueror that reduces the effort of computing an optimal variant. We demonstrate the applicability of our approach by means of nine case studies of a broad range of application domains (e.g., database management and operating systems). Moreover, we show that SPL Conqueror is implementation and language independent by using SPLs that are implemented with different mechanisms, such as conditional compilation and feature-oriented programming. --- paper_title: Reducing Configurations to Monitor in a Software Product Line paper_content: A software product line is a family of programs where each program is defined by a unique combination of features. Product lines, like conventional programs, can be checked for safety properties through execution monitoring. However, because a product line induces a number of programs that is potentially exponential in the number of features, it would be very expensive to use existing monitoring techniques: one would have to apply those techniques to every single program. Doing so would also be wasteful because many programs can provably never violate the stated property. We introduce a monitoring technique dedicated to product lines that, given a safety property, statically determines the feature combinations that cannot possibly violate the property, thus reducing the number of programs to monitor. Experiments show that our technique is effective, particularly for safety properties that crosscut many optional features. 
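To make the configuration-space explosion mentioned in the FeatureIDE, SPL Conqueror, and monitoring abstracts above concrete, the following self-contained C sketch enumerates the valid products of a toy feature model by brute force. The feature names (BASE, CACHE, LOGGING, ENCRYPTION) and the constraints are invented for illustration and are not taken from any cited tool; real tools delegate this reasoning to SAT or CSP solvers precisely because brute force grows as 2^n in the number of features.

/* Minimal sketch (not taken from any cited tool): enumerate the valid products
 * of a toy feature model by brute force.  Features: BASE is mandatory, CACHE
 * and LOGGING are optional, ENCRYPTION is optional but requires CACHE. */
#include <stdio.h>

enum { BASE = 1 << 0, CACHE = 1 << 1, LOGGING = 1 << 2, ENCRYPTION = 1 << 3 };
#define NUM_FEATURES 4

/* Constraints of the hypothetical feature model. */
static int is_valid(unsigned cfg)
{
    if (!(cfg & BASE))
        return 0;                              /* BASE is mandatory         */
    if ((cfg & ENCRYPTION) && !(cfg & CACHE))
        return 0;                              /* ENCRYPTION requires CACHE */
    return 1;
}

int main(void)
{
    unsigned cfg, valid = 0;
    for (cfg = 0; cfg < (1u << NUM_FEATURES); cfg++) {
        if (is_valid(cfg)) {
            valid++;
            printf("product %2u: BASE%s%s%s\n", valid,
                   (cfg & CACHE) ? " CACHE" : "",
                   (cfg & LOGGING) ? " LOGGING" : "",
                   (cfg & ENCRYPTION) ? " ENCRYPTION" : "");
        }
    }
    printf("%u valid products out of %u feature combinations\n",
           valid, 1u << NUM_FEATURES);
    return 0;
}

For these hypothetical constraints the program lists 6 valid products out of the 16 raw feature combinations.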
--- paper_title: ASADAL: a tool system for co-development of software and test environment based on product line engineering paper_content: Recently, product line software engineering (PLSE) is gaining popularity. To employ PLSE methods, many organizations are looking for a tool system that supports PLSE methods so that core assets and target software can be developed and tested in an effective and systematic way.ASADAL (A System Analysis and Design Aid tooL) supports the entire lifecycle of software development process based on a PLSE method called FORM (Feature-Oriented Reuse Method) [6]. It supports domain analysis, architecture and component design, code generation, and simulation-based verification and validation (V&V). Using the tool, users may co-develop target software and its test environment and verify software in a continuous and incremental way. --- paper_title: Dead or Alive: finding zombie features in the Linux kernel paper_content: When an interference signals is detected by a base station, the base station issues a report to a switching apparatus. The switching apparatus checks self to see whether it is in a standby mode waiting for a frequency change completion report from another base station which is in the process of changing the control channel frequency. If the switching apparatus is satisfied that the station should be given a new control channel frequency, it issues a frequency change command to the requesting base station, and the base station receiving the command revises the current control channel frequency. In the meantime, all other requests from other stations are denied by the switching apparatus until the requesting base station has successfully completed the process of changing the control channel frequency. The base station acknowledges the completion of frequency change process by sending a frequency change completion report to the switching apparatus. --- paper_title: Integration Testing of Software Product Lines Using Compositional Symbolic Execution paper_content: Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse shared assets and reduce test effort. The use of feature dependence graphs has also been employed to reduce testing effort, but little work has focused on code level analysis of dataflow between features. In this paper we present a compositional symbolic execution technique that works in concert with a feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically analyze feature interactions. We experiment with two product lines and determine that our technique can reduce the overall number of interactions that must be considered during testing, and requires less time to run than a traditional symbolic execution technique. --- paper_title: Language-independent reference checking in software product lines paper_content: Feature-Oriented Software Development (FOSD) is a paradigm for the development of software product lines. A challenge in FOSD is to guarantee that all software systems of a software product line are correct. Recent work on type checking product lines can provide a guarantee of type correctness without generating all possible systems. We generalize previous results by abstracting from the specifics of particular programming languages. 
In a first attempt, we present a reference-checking algorithm that performs key tasks of product-line type checking independently of the target programming language. Experiments with two sample product lines written in Java and C are encouraging and give us confidence that this approach is promising. --- paper_title: Access Control in Feature-Oriented Programming paper_content: In feature-oriented programming (FOP) a programmer decomposes a program in terms of features. Ideally, features are implemented modularly so that they can be developed in isolation. Access control mechanisms in the form of access or visibility modifiers are an important ingredient to attain feature modularity as they allow programmers to hide and expose internal details of a module's implementation. But developers of contemporary feature-oriented languages have not considered access control mechanisms so far. The absence of a well-defined access control model for FOP breaks encapsulation of feature code and leads to unexpected program behaviors and inadvertent type errors. We raise awareness of this problem, propose three feature-oriented access modifiers, and present a corresponding access modifier model. We offer an implementation of the model on the basis of a fully-fledged feature-oriented compiler. Finally, by analyzing ten feature-oriented programs, we explore the potential of feature-oriented modifiers in FOP. --- paper_title: Clafer tools for product line engineering paper_content: Clafer is a lightweight yet expressive language for structural modeling: feature modeling and configuration, class and object modeling, and metamodeling. Clafer Tools is an integrated set of tools based on Clafer. In this paper, we describe some product-line variability modeling scenarios of Clafer Tools from the viewpoints of product-line owner, product-line engineer, and product engineer. --- paper_title: PLEDGE: a product line editor and test generation tool paper_content: Specific requirements of clients lead to the development of variants of the same software. These variants form a Software Product Line (SPL). Ideally, testing a SPL involves testing all the software products that can be configured through the combination of features. This, however, is intractable in practice since a) large SPLs can lead to millions of possible software variants and b) the testing process is usually limited by budget and time constraints. To overcome this problem, this paper introduces PLEDGE, an open source tool that selects and prioritizes the product configurations maximizing the feature interactions covered. The uniqueness of PLEDGE is that it bypasses the computation of the feature interactions, allowing to scale to large SPLs. --- paper_title: Aspect Composition Applying the Design by Contract Principle paper_content: The composition of software units has been one of the main research topics in computer science. This paper addresses the composition validation problem evolving in this context. It focuses on the composition for a certain kind of units called aspects. Aspects are a new concept which is introduced by aspect-oriented programming aiming at a better separation of concerns. Cross-cutting code is captured and localised in these aspects. Some of the cross-cutting features which are expressed in aspects cannot be woven with other features into the same application since two features could be mutually exclusive. With a growing number of aspects, manual control of these dependencies becomes error-prone or even impossible. 
We show how assertions can be useful in this respect to support the software developer. --- paper_title: Emergo: a tool for improving maintainability of preprocessor-based product lines paper_content: When maintaining a feature in preprocessor-based Software Product Lines (SPLs), developers are susceptible to introduce problems into other features. This is possible because features eventually share elements (like variables and methods) with the maintained one. This scenario might be even worse when hiding features by using techniques like Virtual Separation of Concerns (VSoC), since developers cannot see the feature dependencies and, consequently, they become unaware of them. Emergent Interfaces was proposed to minimize this problem by capturing feature dependencies and then providing information about other features that can be impacted during a maintenance task. In this paper, we present Emergo, a tool capable of computing emergent interfaces between the feature we are maintaining and the others. Emergo relies on feature-sensitive dataflow analyses in the sense it takes features and the SPL feature model into consideration when computing the interfaces. --- paper_title: Variability-aware parsing in the presence of lexical macros and conditional compilation paper_content: In many projects, lexical preprocessors are used to manage different variants of the project (using conditional compilation) and to define compile-time code transformations (using macros). Unfortunately, while being a simple way to implement variability, conditional compilation and lexical macros hinder automatic analysis, even though such analysis is urgently needed to combat variability-induced complexity. To analyze code with its variability, we need to parse it without preprocessing it. However, current parsing solutions use unsound heuristics, support only a subset of the language, or suffer from exponential explosion. As part of the TypeChef project, we contribute a novel variability-aware parser that can parse almost all unpreprocessed code without heuristics in practicable time. Beyond the obvious task of detecting syntax errors, our parser paves the road for further analysis, such as variability-aware type checking. We implement variability-aware parsers for Java and GNU C and demonstrate practicability by parsing the product line MobileMedia and the entire X86 architecture of the Linux kernel with 6065 variable features. --- paper_title: Delta-oriented programming of software product lines paper_content: Feature-oriented programming (FOP) implements software product lines by composition of feature modules. It relies on the principles of stepwise development. Feature modules are intended to refer to exactly one product feature and can only extend existing implementations. To provide more flexibility for implementing software product lines, we propose delta-oriented programming (DOP) as a novel programming language approach. A product line is represented by a core module and a set of delta modules. The core module provides an implementation of a valid product that can be developed with well-established single application engineering techniques. Delta modules specify changes to be applied to the core module to implement further products by adding, modifying and removing code. Application conditions attached to delta modules allow handling combinations of features explicitly. 
A product implementation for a particular feature configuration is generated by applying incrementally all delta modules with valid application condition to the core module. In order to evaluate the potential of DOP, we compare it to FOP, both conceptually and empirically. --- paper_title: Exploring variability-aware execution for testing plugin-based web applications paper_content: In plugin-based systems, plugin conflicts may occur when two or more plugins interfere with one another, changing their expected behaviors. It is highly challenging to detect plugin conflicts due to the exponential explosion of the combinations of plugins (i.e., configurations). In this paper, we address the challenge of executing a test case over many configurations. Leveraging the fact that many executions of a test are similar, our variability-aware execution runs common code once. Only when encountering values that are different depending on specific configurations will the execution split to run for each of them. To evaluate the scalability of variability-aware execution on a large real-world setting, we built a prototype PHP interpreter called Varex and ran it on the popular WordPress blogging Web application. The results show that while plugin interactions exist, there is a significant amount of sharing that allows variability-aware execution to scale to 2^50 configurations within seven minutes of running time. During our study, with Varex, we were able to detect two plugin conflicts: one was recently reported on WordPress forum and another one was not previously discovered. --- paper_title: Incremental Test Generation for Software Product Lines paper_content: Recent advances in mechanical techniques for systematic testing have increased our ability to automatically find subtle bugs, and hence, to deploy more dependable software. This paper builds on one such systematic technique, scope-bounded testing, to develop a novel specification-based approach for efficiently generating tests for products in a software product line. Given properties of features as first-order logic formulas in Alloy, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that an incremental approach can provide an order of magnitude speedup over conventional techniques. We also present a further optimization using dedicated integer constraint solvers for feature properties that introduce integer constraints, and show how to use a combination of solvers in tandem for solving Alloy formulas. --- paper_title: A product line based aspect-oriented generative unit testing approach to building quality components paper_content: The quality of component-based systems highly depends on how effectively testing is carried out. To achieve the maximal testing effectiveness, this paper presents a product line based aspect oriented approach to unit testing. The aspect product line facilitates the automatic creation of aspect test cases that deal with specific quality requirements. An expandable repository of reusable aspect test cases has been developed. A prototype tool is built to verify and lever up the approach. 
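The delta-oriented programming abstract above describes a product as a core module plus delta modules applied under feature-dependent application conditions. The following C sketch only imitates that idea at run time with function pointers; the greet() operation, the POLITE feature flag, and the single delta are invented for illustration, whereas real DOP languages apply deltas to source code when a product is generated.

/* Run-time imitation of delta-oriented product derivation; all names are
 * hypothetical and serve only to illustrate the core-plus-deltas idea. */
#include <stdio.h>

struct product {
    void (*greet)(void);            /* the only operation of the toy core */
};

static void core_greet(void)   { printf("hello\n"); }
static void polite_greet(void) { printf("good day to you\n"); }

/* Delta: replaces the greeting (application condition: feature POLITE). */
static void delta_polite(struct product *p) { p->greet = polite_greet; }

/* Derive a product by applying every delta whose condition holds. */
static struct product derive(int polite)
{
    struct product p = { core_greet };   /* start from the core module */
    if (polite)
        delta_polite(&p);
    return p;
}

int main(void)
{
    struct product plain  = derive(0);
    struct product polite = derive(1);
    plain.greet();    /* prints "hello"           */
    polite.greet();   /* prints "good day to you" */
    return 0;
}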
--- paper_title: Avoiding redundant testing in application engineering paper_content: Many software product line testing techniques have been presented in the literature. The majority of those techniques address how to define reusable test assets (such as test models or test scenarios) in domain engineering and how to exploit those assets during application engineering. In addition to test case reuse however, the execution of test cases constitutes one important activity during application testing. Without a systematic support for the test execution in application engineering, while considering the specifics of product lines, product line artifacts might be tested redundantly. Redundant testing in application engineering, however, can lead to an increased testing effort without increasing the chance of uncovering failures. In this paper, we propose the model-based ScenTED-DF technique to avoid redundant testing in application engineering. Our technique builds on data flow-based testing techniques for single systems and adapts and extends those techniques to consider product line variability. The paper sketches the prototypical implementation of our technique to show its general feasibility and automation potential, and it describes the results of experiments using an academic product line to demonstrate that ScenTED-DF is capable of avoiding redundant tests in application engineering. --- paper_title: Evolutionary Search-based Test Generation for Software Product Line Feature Models paper_content: Product line-based software engineering is a paradigm that models the commonalities and variabilities of different applications of a given domain of interest within a unique framework and enhances rapid and low cost development of new applications based on reuse engineering principles. Despite the numerous advantages of software product lines, it is quite challenging to comprehensively test them. This is due to the fact that a product line can potentially represent many different applications; therefore, testing a single product line requires the test of its various applications. Theoretically, a product line with n software features can be a source for the development of 2^n applications. This requires the test of 2^n applications if a brute-force comprehensive testing strategy is adopted. In this paper, we propose an evolutionary testing approach based on Genetic Algorithms to explore the configuration space of a software product line feature model in order to automatically generate test suites. We will show through the use of several publicly-available product line feature models that the proposed approach is able to generate test suites of O(n) size complexity as opposed to O(2^n) while at the same time form a suitable tradeoff balance between error coverage and feature coverage in its generated test suites. --- paper_title: Detection of feature interactions using feature-aware verification paper_content: A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions -- situations in which the combination of features leads to emergent and possibly critical behavior -- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines.
Feature-aware verification uses product-line-verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLVERIFIER for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge. --- paper_title: Toward variability-aware testing paper_content: We investigate how to execute a unit test for all products of a product line without generating each product in isolation in a brute-force fashion. Learning from variability-aware analyses, we (a) design and implement a variability-aware interpreter and, alternatively, (b) reencode variability of the product line to simulate the test cases with a model checker. The interpreter internally reasons about variability, executing paths not affected by variability only once for the whole product line. The model checker achieves similar results by reusing powerful off-the-shelf analyses. We experimented with a prototype implementation for each strategy. We compare both strategies and discuss trade-offs and future directions. In the long run, we aim at finding an efficient testing approach that can be applied to entire product lines with millions of products. --- paper_title: Family-based performance measurement paper_content: Most contemporary programs are customizable. They provide many features that give rise to millions of program variants. Determining which feature selection yields an optimal performance is challenging, because of the exponential number of variants. Predicting the performance of a variant based on previous measurements proved successful, but induces a trade-off between the measurement effort and prediction accuracy. We propose the alternative approach of family-based performance measurement, to reduce the number of measurements required for identifying feature interactions and for obtaining accurate predictions. The key idea is to create a variant simulator (by translating compile-time variability to run-time variability) that can simulate the behavior of all program variants. We use it to measure performance of individual methods, trace methods to features, and infer feature interactions based on the call graph. We evaluate our approach by means of five feature-oriented programs. On average, we achieve accuracy of 98%, with only a single measurement per customizable program. Observations show that our approach opens avenues of future research in different domains, such as feature-interaction detection and testing. --- paper_title: Automated and Scalable T-wise Test Case Generation Strategies for Software Product Lines paper_content: Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability across their features. This leads to combinatorial explosion of the number of derivable products. Exhaustive testing in such a large space of products is infeasible. One possible option is to test SPLs by generating test cases that cover all possible T feature interactions (T-wise). T-wise dramatically reduces the number of test products while ensuring reasonable SPL coverage.
However, automatic generation of test cases satisfying T-wise using SAT solvers raises two issues. The encoding of SPL models and T-wise criteria into a set of formulas acceptable by the solver and their satisfaction which fails when processed ``all-at-once''. We propose a scalable toolset using Alloy to automatically generate test cases satisfying T-wise from SPL models. We define strategies to split T-wise combinations into solvable subsets. We design and compute metrics to evaluate strategies on Aspect OPTIMA, a concrete transactional SPL. --- paper_title: MoSo-PoLiTe: tool support for pairwise and model-based software product line testing paper_content: Testing Software Product Lines is a very challenging task and approaches like combinatorial testing and model-based testing are frequently used to reduce the effort of testing Software Product Lines and to reuse test artifacts. In this contribution we present a tool chain realizing our MoSo-PoLiTe concept combining combinatorial and model-based testing. Our tool chain contains a pairwise configuration selection component on the basis of a feature model. This component implements an heuristic finding a minimal subset of configurations covering 100% pairwise interaction. Additionally, our tool chain allows the model-based test case generation for each configuration within this generated subset. This tool chain is based on commercial tools since it was developed within industrial cooperations. A non-commercial implementation of pairwise configuration selection is available and an integration with an Open Source model-based testing tool is under development. --- paper_title: Feature consistency in compile-time-configurable system software: facing the linux 10,000 feature problem paper_content: Much system software can be configured at compile time to tailor it with respect to a broad range of supported hardware architectures and application domains. A good example is the Linux kernel, which provides more than 10,000 configurable features, growing rapidly. From the maintenance point of view, compile-time configurability imposes big challenges. The configuration model (the selectable features and their constraints as presented to the user) and the configurability that is actually implemented in the code have to be kept in sync, which, if performed manually, is a tedious and error-prone task. In the case of Linux, this has led to numerous defects in the source code, many of which are actual bugs. We suggest an approach to automatically check for configurability-related implementation defects in large-scale configurable system software. The configurability is extracted from its various implementation sources and examined for inconsistencies, which manifest in seemingly conditional code that is in fact unconditional. We evaluate our approach with the latest version of Linux, for which our tool detects 1,776 configurability defects, which manifest as dead/superfluous source code and bugs. Our findings have led to numerous source-code improvements and bug fixes in Linux: 123 patches (49 merged) fix 364 defects, 147 of which have been confirmed by the corresponding Linux developers and 20 as fixing a new bug. --- paper_title: Reducing combinatorics in testing product lines paper_content: A Software Product Line (SPL) is a family of programs where each program is defined by a unique combination of features. Testing or checking properties of an SPL is hard as it may require the examination of a combinatorial number of programs. 
In reality, however, features are often irrelevant for a given test - they augment, but do not change, existing behavior, making many feature combinations unnecessary as far as testing is concerned. In this paper we show how to reduce the amount of effort in testing an SPL. We represent an SPL in a form where conventional static program analysis techniques can be applied to find irrelevant features for a test. We use this information to reduce the combinatorial number of SPL programs to examine. --- paper_title: FeatureIDE: An Extensible Framework for Feature-Oriented Software Development paper_content: FeatureIDE is an open-source framework for feature-oriented software development (FOSD) based on Eclipse. FOSD is a paradigm for the construction, customization, and synthesis of software systems. Code artifacts are mapped to features, and a customized software system can be generated given a selection of features. The set of software systems that can be generated is called a software product line (SPL). FeatureIDE supports several FOSD implementation techniques such as feature-oriented programming, aspect-oriented programming, delta-oriented programming, and preprocessors. All phases of FOSD are supported in FeatureIDE, namely domain analysis, requirements analysis, domain implementation, and software generation. --- paper_title: A Case Study Implementing Features Using AspectJ paper_content: Software product lines aim to create highly configurable programs from a set of features. Common belief and recent studies suggest that aspects are well-suited for implementing features. We evaluate the suitability of AspectJ with respect to this task by a case study that refactors the embedded database system Berkeley DB into 38 features. Contrary to our initial expectations, the results were not encouraging. As the number of aspects in a feature grows, there is a noticeable decrease in code readability and maintainability. Most of the unique and powerful features of AspectJ were not needed. We document where AspectJ is unsuitable for implementing features of refactored legacy applications and explain why. --- paper_title: Analyzing the discipline of preprocessor annotations in 30 million lines of C code paper_content: The C preprocessor cpp is a widely used tool for implementing variable software. It enables programmers to express variable code (which may even crosscut the entire implementation) with conditional compilation. The C preprocessor relies on simple text processing and is independent of the host language (C, C++, Java, and so on). Language-independent text processing is powerful and expressive - programmers can make all kinds of annotations in the form of #ifdefs - but can render unpreprocessed code difficult to process automatically by tools, such as refactoring, concern management, and variability-aware type checking. We distinguish between disciplined annotations, which align with the underlying source-code structure, and undisciplined annotations, which do not align with the structure and hence complicate tool development. This distinction raises the question of how frequently programmers use undisciplined annotations and whether it is feasible to change them to disciplined annotations to simplify tool development and to enable programmers to use a wide variety of tools in the first place. 
By means of an analysis of 40 medium-sized to large-sized C programs, we show empirically that programmers use cpp mostly in a disciplined way: about 84% of all annotations respect the underlying source-code structure. Furthermore, we analyze the remaining undisciplined annotations, identify patterns, and discuss how to transform them into a disciplined form. --- paper_title: PACOGEN: Automatic Generation of Pairwise Test Configurations from Feature Models paper_content: Feature models are commonly used to specify variability in software product lines. Several tools support feature models for variability management at different steps in the development process. However, tool support for test configuration generation is currently limited. This test generation task consists in systematically selecting a set of configurations that represent a relevant sample of the variability space and that can be used to test the product line. In this paper we propose the PACOGEN tool to analyze feature models and automatically generate a set of configurations that cover all pairwise interactions between features. The PACOGEN tool relies on constraint programming to generate configurations that satisfy all constraints imposed by the feature model and to minimize the set of test configurations. This work also proposes an extensive experiment, based on the state-of-the-art SPLOT feature models repository, showing that the PACOGEN tool scales over variability spaces with millions of configurations and covers pairwise interactions with fewer configurations than other available tools. --- paper_title: Automatic detection of feature interactions using the Java modeling language: an experience report paper_content: In the development of complex software systems, interactions between different program features increase the design complexity. Feature-oriented software development focuses on the representation and compositions of features. The implementation of features often cuts across object-oriented module boundaries and hence comprises interactions. The manual detection and treatment of feature interactions requires a deep knowledge of the implementation details of the features involved. Our goal is to detect interactions automatically using specifications by means of design by contract and automated theorem proving. We provide a software tool that operates on programs in Java and the Java Modeling Language (JML). We discuss which kinds of feature interactions can be detected automatically with our tool and how to detect other kinds of interactions. --- paper_title: Testing Software Product Lines Using Incremental Test Generation paper_content: We present a novel specification-based approach for generating tests for products in a software product line. Given properties of features as first-order logic formulas, our approach uses SAT-based analysis to automatically generate test inputs for each product in a product line. To ensure soundness of generation, we introduce an automatic technique for mapping a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experimental results using different data structure product lines show that an incremental approach can provide an order of magnitude speed-up over conventional techniques. --- paper_title: Virtual Separation of Concerns -- A Second Chance for Preprocessors paper_content: Conditional compilation with preprocessors like cpp is a simple but effective means to implement variability.
By annotating code fragments with #ifdef and #endif directives, different program variants with or without these fragments can be created, which can be used (among others) to implement software product lines. Although preprocessors are frequently used in practice, they are often criticized for their negative effect on code quality and maintainability. In contrast to modularized implementations, for example using components or aspects, preprocessors neglect separation of concerns, are prone to introduce subtle errors, can entirely obfuscate the source code, and limit reuse. Our aim is to rehabilitate the preprocessor by showing how simple tool support can address these problems and emulate some benefits of modularized implementations. At the same time we emphasize unique benefits of preprocessors, like simplicity and language independence. Although we do not have a definitive answer on how to implement variability, we want to highlight opportunities to improve preprocessors and encourage research toward novel preprocessor-based approaches. --- paper_title: A Classification and Survey of Analysis Strategies for Software Product Lines paper_content: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses. ---
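The two preprocessor abstracts above distinguish disciplined from undisciplined #ifdef annotations. The short C sketch below shows one hypothetical example of each; the FEATURE_LOG macro and both functions are invented and only illustrate why annotations that wrap complete statements are easier for parsers and analysis tools to handle than annotations that wrap arbitrary token sequences.

/* Hypothetical example contrasting disciplined and undisciplined #ifdefs. */
#include <stdio.h>

/* Disciplined: the #ifdef encloses a whole statement, so the annotated
 * fragment aligns with the underlying syntax tree. */
void save_disciplined(int value)
{
#ifdef FEATURE_LOG
    printf("saving %d\n", value);
#endif
    /* ... actual save logic ... */
}

/* Undisciplined: the #ifdef encloses only part of an expression (one operand
 * of a condition), so the fragment is not a syntactic unit of its own. */
void save_undisciplined(int value, int dry_run)
{
    if (value > 0
#ifdef FEATURE_LOG
        && !dry_run
#endif
       ) {
        /* ... actual save logic ... */
    }
}

int main(void)
{
    save_disciplined(1);
    save_undisciplined(1, 0);
    return 0;
}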
Title: An overview on analysis tools for software product lines
Section 1: INTRODUCTION
Description 1: Introduce the concept of software product lines, the importance of feature-oriented software development (FOSD), and the need for efficient analysis tools.
Section 2: CHARACTERISTICS OF PRODUCT-LINE TOOLS
Description 2: Describe the different aspects of FOSD, including domain analysis, domain design and specification, domain implementation, and product configuration and generation, and introduce the categorization of tools.
Section 3: PRODUCT-LINE TESTING
Description 3: Discuss the tools available for testing product lines, including those for sampling and beyond sampling, highlighting different strategies and examples of tools.
Section 4: PRODUCT-LINE VERIFICATION
Description 4: Outline the tools used for the verification of product lines, including type checking, static analysis, software model checking, theorem proving, and consistency checking, with examples of each.
Section 5: FURTHER PRODUCT-LINE ANALYSES
Description 5: Cover additional analysis tools for non-functional properties and code metrics, providing examples and explaining their relevance to software product lines.
Section 6: OVERVIEW ON TOOL SUPPORT
Description 6: Summarize the tools discussed in previous sections, categorizing them by analysis technique and product-line implementation technique, and discuss the current state of tool support.
Section 7: CONCLUSION
Description 7: Conclude the paper by emphasizing the importance of efficient analysis for software product lines, summarizing the contributions, and referring to additional resources such as the website maintained for product-line tools.
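To make the pairwise (2-wise) coverage criterion used by the sampling tools referenced above (e.g., PACOGEN, MoSo-PoLiTe, the T-wise strategies) concrete, the following C sketch checks whether a small hand-made sample of configurations covers every value combination of every feature pair. The three features and four configurations are invented and feature-model constraints are ignored; the point is only that four configurations can cover all pairs that exhaustive testing would otherwise spread over 2^3 = 8 products.

/* Check pairwise coverage of an invented sample of configurations. */
#include <stdio.h>

#define NUM_FEATURES 3
#define NUM_CONFIGS  4

/* Each row is one sampled configuration (1 = feature selected). */
static const int sample[NUM_CONFIGS][NUM_FEATURES] = {
    {0, 0, 0},
    {1, 1, 0},
    {1, 0, 1},
    {0, 1, 1},
};

int main(void)
{
    int covered = 1;
    for (int a = 0; a < NUM_FEATURES; a++)
        for (int b = a + 1; b < NUM_FEATURES; b++)
            for (int va = 0; va <= 1; va++)
                for (int vb = 0; vb <= 1; vb++) {
                    int hit = 0;
                    for (int c = 0; c < NUM_CONFIGS; c++)
                        if (sample[c][a] == va && sample[c][b] == vb)
                            hit = 1;
                    if (!hit) {
                        printf("pair (%d=%d, %d=%d) is not covered\n",
                               a, va, b, vb);
                        covered = 0;
                    }
                }
    printf(covered ? "sample achieves pairwise coverage\n"
                   : "sample misses some pairs\n");
    return 0;
}

With this particular sample the program reports full pairwise coverage, illustrating that 4 configurations can suffice where exhaustive testing would need 8.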
A Survey on Cooperative Jamming Applied to Physical Layer Security
9
--- paper_title: Security, privacy, and accountability in wireless access networks paper_content: The presence of ubiquitous connectivity provided by wireless communications and mobile computing has changed the way humans interact with information. At the same time, it has made communication security and privacy a hot-button issue. In this article we address the security and privacy concerns in wireless access networks. We first discuss the general cryptographic means to design privacy-preserving security protocols, where the dilemma of attaining both security and privacy goals, especially user accountability vs. user privacy, is highlighted. We then present a novel authentication framework that integrates a new key management scheme based on the principle of separation of powers and an adapted construction of Boneh and Shacham's group signature scheme, as an enhanced resort to simultaneously achieve security, privacy, and accountability in wireless access networks. --- paper_title: Physical-Layer Security: From Information Theory to Security Engineering paper_content: This complete guide to physical-layer security presents the theoretical foundations, practical implementation, challenges and benefits of a groundbreaking new model for secure communication. Using a bottom-up approach from the link level all the way to end-to-end architectures, it provides essential practical tools that enable graduate students, industry professionals and researchers to build more secure systems by exploiting the noise inherent to communications channels. The book begins with a self-contained explanation of the information-theoretic limits of secure communications at the physical layer. It then goes on to develop practical coding schemes, building on the theoretical insights and enabling readers to understand the challenges and opportunities related to the design of physical layer security schemes. Finally, applications to multi-user communications and network coding are also included. --- paper_title: On the Jamming Power Allocation for Secure Amplify-and-Forward Relaying via Cooperative Jamming paper_content: In this paper, we investigate secure communications in two-hop wireless relaying networks with one eavesdropper. To prevent the eavesdropper from intercepting the source message, the destination sends an intended jamming noise to the relay, which is referred to as cooperative jamming. This jamming noise helps protecting the source message from being captured reliably at the eavesdropper, while the destination cancels its self-intended noise. According to the channel information available at the destination, we derive three jamming power allocation strategies to minimize the outage probability of the secrecy rate. In addition, we derive analytic results quantifying the jamming power consumption of the proposed allocation methods. --- paper_title: Cryptographic design vulnerabilities paper_content: Strong cryptography is very powerful when it is done right, but it is not a panacea. Focusing on cryptographic algorithms while ignoring other aspects of security is like defending your house not by building a fence around it, but by putting an immense stake in the ground and hoping that your adversary runs right into it. Counterpane Systems has spent years designing, analyzing, and breaking cryptographic systems. While they do research on published algorithms and protocols, most of their work examines actual products. 
They've designed and analyzed systems that protect privacy, ensure confidentiality, provide fairness, and facilitate commerce. They've worked with software, stand-alone hardware, and everything in between. They've broken their share of algorithms, but they can almost always find attacks that bypass the algorithms altogether. Counterpane Systems don't have to try every possible key or even find flaws in the algorithms. They exploit errors in design, errors in implementation, and errors in installation. Sometimes they invent a new trick to break a system, but most of the time they exploit the same old mistakes that designers make over and over again. The article conveys some of the lessons this company has learned. --- paper_title: Guaranteeing Secrecy using Artificial Noise paper_content: The broadcast nature of the wireless medium makes the communication over this medium vulnerable to eavesdropping. This paper considers the problem of secret communication between two nodes, over a fading wireless medium, in the presence of a passive eavesdropper. The assumption used is that the transmitter and its helpers (amplifying relays) have more antennas than the eavesdropper. The transmitter ensures secrecy of communication by utilizing some of the available power to produce 'artificial noise', such that only the eavesdropper's channel is degraded. Two scenarios are considered, one where the transmitter has multiple transmit antennas, and the other where amplifying relays simulate the effect of multiple antennas. The channel state information (CSI) is assumed to be publicly known, and hence, the secrecy of communication is independent of the secrecy of CSI. --- paper_title: Signal Processing Approaches to Secure Physical Layer Communications in Multi-Antenna Wireless Systems paper_content: This book introduces various signal processing approaches to enhance physical layer secrecy in multi-antenna wireless systems. Wireless physical layer secrecy has attracted much attention in recent years due to the broadcast nature of the wireless medium and its inherent vulnerability to eavesdropping. While most articles on physical layer secrecy focus on the information-theoretic aspect, we focus specifically on the signal processing aspects, including beamforming and precoding techniques for data transmission and discriminatory training schemes for channel estimation. The discussions will cover cases with collocated and with distributed antennas, i.e., relays. The topics covered will be of interest to researchers in the signal processing community as well to practitioners and engineers working in this area. This book will also review recent works that apply these signal processing approaches to more advanced wireless systems, such as OFDM systems, multicell systems, cognitive radio, multihop networks etc. This will draw interest from researchers that wish to pursue the topic further in these new directions. This book is divided into three parts: (i) data transmission, (ii) channel estimation and (iii) advanced applications. Even though many works exist in the literature on these topics, the approaches and perspectives taken were largely diverse. This book provides a more organized and systematic view of these designs and to lay a solid foundation for future work in these areas. 
Moreover, by presenting the work from a signal processing perspective, this book will also trigger more research interest from the signal processing community and further advance the field of physical layer secrecy along the described directions. This book allows readers to gain basic understanding of works on physical layer secrecy, knowledge of how signal processing techniques can be applied to this area, and the application of these techniques in advanced wireless applications. --- paper_title: Physical layer security in wireless networks: a tutorial paper_content: Wireless networking plays an extremely important role in civil and military applications. However, security of information transfer via wireless networks remains a challenging issue. It is critical to ensure that confidential data are accessible only to the intended users rather than intruders. Jamming and eavesdropping are two primary attacks at the physical layer of a wireless network. This article offers a tutorial on several prevalent methods to enhance security at the physical layer in wireless networks. We classify these methods based on their characteristic features into five categories, each of which is discussed in terms of two metrics. First, we compare their secret channel capacities, and then we show their computational complexities in exhaustive key search. Finally, we illustrate their security requirements via some examples with respect to these two metrics. --- paper_title: Relay and jammer selection schemes for improving physical layer security in two-way cooperative networks paper_content: This paper is concerned with the relay and jammers selection in two-way cooperative networks to improve their physical layer security. Three different categories of selection schemes are proposed which are; selection schemes without jamming, selection schemes with conventional jamming and selection schemes with controlled jamming. The selection process is analyzed for two different network models; single eavesdropper model and multiple cooperating and non-cooperating eavesdroppers' model. The proposed schemes select three intermediate nodes during two communication phases and use the Decode-and-Forward (DF) strategy to assist the sources to deliver their data to the corresponding destinations. The performance of the proposed schemes is analyzed in terms of ergodic secrecy rate and secrecy outage probability metrics. The obtained results show that the selection schemes with jamming outperform the schemes without jamming when the intermediate nodes are distributed dispersedly between sources and eavesdropper nodes. However, when the intermediate nodes cluster gets close to one of the sources, they are not superior any more due to the strong interference on the destination nodes. Therefore, a hybrid scheme which switches between selection schemes with jamming and schemes without jamming is introduced to overcome the negative effects of interference. Finally, a comparison between relay and jammers selection schemes in both one-way and two-way cooperative networks is given in terms of both secrecy metrics. The obtained results reveal that, despite the presence of cooperating eavesdroppers, the proposed selection schemes are still able to improve both the secrecy rate and the secrecy outage probability of the two-way cooperative networks. 
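The jamming references above optimize variants of the same quantity: the secrecy rate, i.e., the positive part of the difference between the rate of the legitimate link and the rate of the eavesdropper link. The following C sketch evaluates that expression with and without a cooperative jammer whose noise is assumed to be cancellable at the destination but not at the eavesdropper, as in the destination-aided jamming scheme described in one of the references above; all channel gains and powers are made-up numbers, not values from any cited paper.

/* Numerical sketch of the secrecy rate with and without cooperative jamming;
 * all parameter values are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

static double pos(double x) { return x > 0.0 ? x : 0.0; }

int main(void)
{
    double Ps = 1.0;      /* source transmit power            */
    double Pj = 0.5;      /* jamming power                    */
    double hd = 0.9;      /* |h|^2 source -> destination      */
    double he = 0.6;      /* |h|^2 source -> eavesdropper     */
    double ge = 0.8;      /* |g|^2 jammer -> eavesdropper     */
    double N0 = 0.1;      /* noise power                      */

    /* Secrecy rate without jamming: Rs = [log2(1+SNR_D) - log2(1+SNR_E)]^+ */
    double rs_plain = pos(log2(1.0 + Ps * hd / N0) - log2(1.0 + Ps * he / N0));

    /* With cooperative jamming the eavesdropper sees extra interference. */
    double rs_jam = pos(log2(1.0 + Ps * hd / N0)
                        - log2(1.0 + Ps * he / (N0 + Pj * ge)));

    printf("secrecy rate without jamming: %.3f bits/s/Hz\n", rs_plain);
    printf("secrecy rate with jamming   : %.3f bits/s/Hz\n", rs_jam);
    return 0;
}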
--- paper_title: Harvest-and-jam: Improving security for wireless energy harvesting cooperative networks paper_content: The emerging radio signal enabled simultaneous wireless information and power transfer (SWIPT), has drawn significant attention. To achieve secrecy transmission by cooperative jamming, especially in the upcoming 5G networks with self-sustainable mobile base stations (BSs) and yet not to add extra power consumption, we propose in this paper a new relay protocol, i.e., harvest-and-jam (HJ), in a relay wiretap channel with an additional set of spare helpers. Specifically, in the first transmission phase, a single-antenna transmitter (Tx) transfers signals carrying both information and energy to a multi-antenna amplify-and-forward (AF) relay and a group of multi-antenna helpers; in the second transmission phase, the AF relay processes the information and forwards it to the receiver while each of the helpers generates an artificial noise (AN), the power of which is constrained by its previously harvested energy, to interfere with the eavesdropper. By optimizing the transmit beamforming matrix for the AF relay and the covariance matrix for the AN, we maximize the secrecy rate for the receiver subject to transmit power constraints for the AF relay and all helpers. The formulated problem is shown to be non-convex, for which we propose an iterative algorithm based on alternating optimization. Finally, the performance of the proposed scheme is evaluated by simulations as compared to other heuristic schemes. --- paper_title: Wireless Information-Theoretic Security paper_content: This paper considers the transmission of confidential data over wireless channels. Based on an information-theoretic formulation of the problem, in which two legitimates partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through a second independent quasi-static fading channel, the important role of fading is characterized in terms of average secure communication rates and outage probability. Based on the insights from this analysis, a practical secure communication protocol is developed, which uses a four-step procedure to ensure wireless information-theoretic security: (i) common randomness via opportunistic transmission, (ii) message reconciliation, (iii) common key generation via privacy amplification, and (iv) message protection with a secret key. A reconciliation procedure based on multilevel coding and optimized low-density parity-check (LDPC) codes is introduced, which allows to achieve communication rates close to the fundamental security limits in several relevant instances. Finally, a set of metrics for assessing average secure key generation rates is established, and it is shown that the protocol is effective in secure key renewal-even in the presence of imperfect channel state information. --- paper_title: On Cooperative and Malicious Behaviors in Multirelay Fading Channels paper_content: Multirelay networks exploit spatial diversity by transmitting user's messages through multiple relay paths. Most works in the literature on cooperative or relay networks assume that all terminals are fully cooperative and neglect the effect of possibly existing malicious relay behaviors. In this work, we consider a multirelay network that consists of both cooperative and malicious relays, and aims to obtain an improved understanding on the optimal behaviors of these two groups of relays via information-theoretic mutual information games. 
By modeling the set of cooperative relays and the set of malicious relays as two players in a zero-sum game with the maximum achievable rate as the utility, the optimal transmission strategies of both types of relays are derived by identifying the Nash equilibrium of the proposed game. Our main contributions are twofold. First, a generalization to previous works is obtained by allowing malicious relays to either listen or attack in Phase 1 (source-relay transmission phase). This is in contrast to previous works that only allow the malicious relays to listen in Phase 1 and to attack in Phase 2 (relay-destination transmission phase). The latter is shown to be suboptimal in our problem. Second, the impact of CSI knowledge at the destination on the optimal attack strategy that can be adopted by the malicious relays is identified. In particular, for the more practical scenario where the interrelay CSI is unknown at the destination, the constant attack is shown to be optimal as opposed to the commonly considered Gaussian attack. --- paper_title: Secrecy Capacity Optimization via Cooperative Relaying and Jamming for WANETs paper_content: Cooperative wireless networking, which is promising in improving the system operation efficiency and reliability by acquiring more accurate and timely information, has attracted considerable attentions to support many services in practice. However, the problem of secure cooperative communication has not been well investigated yet. In this paper, we exploit physical layer security to provide secure cooperative communication for wireless ad hoc networks (WANETs) where involve multiple source-destination pairs and malicious eavesdroppers. By characterizing the security performance of the system by secrecy capacity, we study the secrecy capacity optimization problem in which security enhancement is achieved via cooperative relaying and cooperative jamming. Specifically, we propose a system model where a set of relay nodes can be exploited by multiple source-destination pairs to achieve physical layer security. We theoretically present a corresponding formulation for the relay assignment problem and develop an optimal algorithm to solve it in polynomial time. To further increase the system secrecy capacity, we exploit the cooperative jamming technique and propose a smart jamming algorithm to interfere the eavesdropping channels. Through extensive experiments, we validate that our proposed algorithms significantly increase the system secrecy capacity under various network settings. --- paper_title: Physical layer security in wireless networks: a tutorial paper_content: Wireless networking plays an extremely important role in civil and military applications. However, security of information transfer via wireless networks remains a challenging issue. It is critical to ensure that confidential data are accessible only to the intended users rather than intruders. Jamming and eavesdropping are two primary attacks at the physical layer of a wireless network. This article offers a tutorial on several prevalent methods to enhance security at the physical layer in wireless networks. We classify these methods based on their characteristic features into five categories, each of which is discussed in terms of two metrics. First, we compare their secret channel capacities, and then we show their computational complexities in exhaustive key search. Finally, we illustrate their security requirements via some examples with respect to these two metrics. 
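The information-theoretic security references above characterize secrecy over fading channels through the secrecy outage probability. The C sketch below estimates that probability by Monte Carlo simulation under quasi-static Rayleigh fading, where the squared channel gains are exponentially distributed; the average SNRs and the target secrecy rate are illustrative assumptions only, not taken from any cited paper.

/* Monte Carlo estimate of the secrecy outage probability under Rayleigh
 * fading; all parameter values are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Draw an exponential random variable with the given mean. */
static double exp_rand(double mean)
{
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* u in (0,1) */
    return -mean * log(u);
}

int main(void)
{
    const long   trials = 1000000;
    const double snr_d  = 10.0;   /* average SNR at the destination  */
    const double snr_e  = 3.0;    /* average SNR at the eavesdropper */
    const double rs     = 0.5;    /* target secrecy rate (bits/s/Hz) */
    long outages = 0;

    srand(1);
    for (long i = 0; i < trials; i++) {
        double gd = exp_rand(snr_d);          /* instantaneous SNRs */
        double ge = exp_rand(snr_e);
        double cs = log2(1.0 + gd) - log2(1.0 + ge);
        if (cs < rs)
            outages++;
    }
    printf("estimated secrecy outage probability: %.4f\n",
           (double)outages / trials);
    return 0;
}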
--- paper_title: Wireless Secrecy in Cellular Systems With Infrastructure-Aided Cooperation paper_content: In cellular systems, confidentiality of uplink transmission with respect to eavesdropping terminals can be ensured by creating intentional interference via scheduling of concurrent downlink transmissions. In this paper, this basic idea is explored from an information-theoretic standpoint by focusing on a two-cell scenario where the involved base stations (BSs) are connected via a finite-capacity backbone link. A number of transmission strategies are considered that aim at improving uplink confidentiality under constraints on the downlink rate that acts as an interfering signal. The strategies differ mainly in the way the backbone link is exploited by the cooperating downlink to the uplink-operated BSs. Achievable rates are derived for both the Gaussian (unfaded) and the fading cases, under different assumptions on the channel state information available at different nodes. Numerical results are also provided to corroborate the analysis. Extensions to scenarios with more than two cells are briefly discussed as well. Overall, the analysis reveals that a combination of scheduling and base-station cooperation is a promising means to improve transmission confidentiality in cellular systems. --- paper_title: Cooperative Jamming and Power Allocation for Wireless Relay Networks in Presence of Eavesdropper paper_content: Relying on physical layer security is an attractive alternative of utilizing cryptographic algorithms at upper layers of protocol stack for secure communications. In this paper, we consider a two-hop wireless relay network in the presence of an eavesdropper. Our scenario of interest spans over a four-node network model including a source, a destination, a trusted relay, and an untrusted eavesdropper in which the relay forwards the source message in a decode-and-forward (DF) fashion. The source and relay are allowed to use some of their available power to transmit jamming signals in order to create interference at the eavesdropper. The relay and destination are assumed to have the knowledge of the jamming signals. An important question is how to allocate the transmission power of the message signal and that of the jamming signal. First, we propose an optimal power allocation solution in which the knowledge of global channel state information (CSI) is required. To facilitate practical system design, two simple yet sub-optimal power allocation solutions are proposed which do not rely on eavesdropper's channels. For the purpose of performance comparisons, power allocation problems for two benchmark schemes without jamming are also analyzed. --- paper_title: Cooperative jamming via spectrum leasing paper_content: Secure communication rates can be facilitated or enhanced via deployment of cooperative jammers in a multi-terminal environment. Such an approach typically assumes dedicated and/or altruistic jamming nodes, investing their resources for the good of the whole system. In this paper, we demonstrate that jammers can be recruited to provide significant improvements of secrecy rates even when this assumption is alleviated. A game-theoretic framework is proposed where a source node, towards the maximization of its secrecy rate, utilizes the jamming services from a set of non-altruistic nodes, compensating them with a fraction of its bandwidth for transmission of their user data. 
With the goal of maximizing their user-data transmission rate priced by the invested power, potential cooperative jammers will provide the jamming/transmitting power that is generally proportional to the amount of leased bandwidth. Elaborating initially on a single-jammer scenario, interaction between the source and a cooperative jammer is modeled as the Stackelberg leader-follower game. The scheme is further extended to involve multiple potential jammers, applying competition mechanisms such as the auctioning and power control game, while maintaining the Stackelberg framework. --- paper_title: Recruiting multi-antenna transmitters as cooperative jammers: An auction-theoretic approach paper_content: This paper proposes and investigates a distributed mechanism that motivates otherwise non-cooperative terminals to participate as cooperative jammers assisting a source-destination pair that communicates over a wireless medium, in the presence of an eavesdropper from whom the communicated messages need to be kept secret. The cooperation incentive is provided by an opportunity for potential jammers, possibly equipped with multiple antennas, to utilize the spectrum belonging to the ongoing secure transmission for their own data traffic. A fully decentralized framework is put forth through a competition of potential cooperative jammers for spectrum access by trying to make the jamming offer that most improves the secrecy rate of the source-destination pair. Effective arbitration of cooperative jamming is performed using auction theory, with the source in the role of the auctioneer, and the jammers acting as bidders. The proposed scheme can be alternatively seen as a practical basis for the implementation of cognitive radio networks operating according to the property-rights model, i.e., spectrum leasing. --- paper_title: Deaf Cooperation and Relay Selection Strategies for Secure Communication in Multiple Relay Networks paper_content: In this paper, we investigate the roles of cooperative jamming (CJ) and noise forwarding (NF) in improving the achievable secrecy rates of a Gaussian wiretap channel (GWT). In particular, we study the role of a deaf helper in confusing the eavesdropper in a GWT channel by either transmitting white Gaussian noise (cooperative jamming) or by transmitting a dummy codeword of no context yet drawn from a codebook known to both the destination and the eavesdropper (noise forwarding). We first derive the conditions under which each mode of deaf cooperation improves over the secrecy capacity of the original wiretap channel and show that a helping node can be either a useful cooperative jammer or a useful noise forwarder but not both at the same time. Secondly, we derive the optimal power allocation for both the source and the helping node to be used in each of the two modes of deaf helping. Thirdly, we consider the deaf helper selection problem where there are N relays present in the system and it is required to select the best K deaf helpers, K ≥ 1, that yield the maximum possible achievable secrecy rate. For the case of K=1, we give the optimal selection strategy with optimal power allocation. The computational complexity of the optimal selection strategy when K > 1 is relatively large, especially for large values of K and N. Thus, we propose a suboptimal strategy for the selection problem when K > 1. 
We derive the complexity of the proposed selection strategies and show that, for K > 1, our suboptimal strategy, which works in a greedy fashion, enjoys a significantly less computational complexity than the optimal strategy. Nevertheless, as demonstrated by numerical examples, our suboptimal strategy gives rise to reasonable performance gains in terms of the achievable secrecy rate with respect to the case of K=1. --- paper_title: Cooperative jamming and power allocation in three-phase two-way relaying wiretap systems paper_content: The security of the three-phase two-way relaying system with an eavesdropper is investigated in this paper. A cooperative jamming and power allocation scheme is proposed to enhance the system secrecy capacity. When a source node transmits user signals to the relay node, the other source node interferes the relay node with pre-defined jamming signals simultaneously. Optimum power allocation between the user and jamming signals at each source node is analyzed. Our analytical results suggest that the proposed cooperative jamming scheme improves on the system secrecy capacity, especially when the channel gains of the two source-relay links are of large difference. Simulation results in close agreement with analytical results clearly demonstrate the advantage of the proposed cooperative jamming scheme. --- paper_title: Physical layer security from inter-session interference in large wireless networks paper_content: Physical layer secrecy in wireless networks in the presence of eavesdroppers of unknown location is considered. In contrast to prior schemes, which have expended energy in the form of cooperative jamming to enable secrecy, we develop schemes where multiple transmitters send their signals in a cooperative fashion to confuse the eavesdroppers. Hence, power is not expended on “artificial noise”; rather, the signal of a given transmitter is protected by the aggregate interference produced by the other transmitters. We introduce a two-hop strategy for the case of equal path-loss between all pairs of nodes, and then consider its embedding within a multi-hop approach for the general case of an extended network. In each case, we derive an achievable number of eavesdroppers that can be present in the region while secure communication between all sources and intended destinations is ensured. --- paper_title: Distributed Coalition Formation of Relay and Friendly Jammers for Secure Cooperative Networks paper_content: In this paper, we investigate cooperation of conventional relays and friendly jammers subject to secrecy constraints for cooperative networks consisting of one source node, one corresponding destination node, one malicious eavesdropper node, and several intermediate nodes. In order to obtain a higher secrecy rate, the source selects one conventional relay and several friendly jammers from the intermediate nodes to assist message transmission, and in return, it needs to make a payment. Each intermediate node here has two possible identities to choose, i.e., to be a conventional relay or a friendly jammer, which results in a direct impact on the final utility of the intermediate node. After the intermediate nodes determine their identities, they seek to find optimal partners forming coalitions, which improves their chances to be selected by the source and thus to obtain the payoffs in the end. We formulate this cooperation as a coalitional game with transferable utility and also study its properties. 
Furthermore, we define a Max-Pareto order for comparison of the coalition value, based on which we employ the merge-and-split rules. We also construct a distributed merge-and-split coalition formation algorithm for the defined coalition formation game. The simulation results verify the efficiency of the proposed coalition formation algorithm. --- paper_title: Strongly Secure Communications Over the Two-Way Wiretap Channel paper_content: We consider the problem of secure communications over the two-way wiretap channel under a strong secrecy criterion. We improve existing results by developing an achievable region based on strategies that exploit both the interference at the eavesdropper's terminal and cooperation between legitimate users. We leverage the notion of channel resolvability for the multiple-access channel to analyze cooperative jamming and we show that the artificial noise created by cooperative jamming induces a source of common randomness that can be used for secret-key agreement. We illustrate the gain provided by this coding technique in the case of the Gaussian two-way wiretap channel, and we show significant improvements for some channel configurations. --- paper_title: Cooperative jamming and power allocation with untrusty two-way relay nodes paper_content: This study investigates the security of the two-way relaying system with untrusty relay nodes. Cooperative jamming schemes are considered for bi-directional secrecy communications. The transmit power of each source node is divided into two parts corresponding to the user and jamming signals, respectively. Two different assumptions of the jamming signals are considered. When the jamming signals are a priori known at the two source nodes, closed-form power allocation expressions at two source nodes are derived. Under the assumption of unknown jamming signals, it is proven that cooperative jamming is useless for the system secrecy capacity, because all the power should be allocated to the user signals at each source node. Relay selection is also investigated based on the analysis of cooperative jamming. Simulation results are presented to compare the system secrecy capacities under the two jamming signal assumptions. --- paper_title: CSI-Secured Orthogonal Jamming Method for Wireless Physical Layer Security paper_content: In this paper, we propose a new cooperative jamming (CJ) method to prevent eavesdroppers from using beam-formers to suppress the jamming signals. In this method, the broadcasting of system-node channel state information (CSI) is first averted. The transmitting and jamming signals are then aligned in two separate single dimensions in complex space. The decode-and-forward relaying protocol is employed. The proposed method is more effective than current CJ techniques. --- paper_title: On Secrecy Capacity of Gaussian Wiretap Channel Aided by A Cooperative Jammer paper_content: We study the secrecy capacity of a Gaussian wiretap channel aided by an external jammer/helper. Both the transmitter and the intended receiver are equipped with a single antenna, while the eavesdropper and the jammer are equipped with M and N antennas, respectively. Generally, an analytical form of the secrecy capacity in this scenario is difficult to obtain. Instead, we consider a null-space jamming scheme which totally nulls out the jamming signal at the legitimate receiver, and derive lower and upper bounds on its maximal achievable secrecy rate R_N.
The relationship between the average secrecy capacity C̅_N of Gaussian wiretap channel and the average secrecy rate R̅_N achieved by the null-space jamming scheme is investigated, and we prove that R̅_N ≤ C̅_N ≤ R̅_(N+1). Based on this inequality and the derived lower and upper bounds on R_N, the upper and lower bounds on the average secrecy capacity of Gaussian wiretap channel aided by an external jammer can be obtained, where our result shows that when N > M, the average secrecy capacity increases linearly with the total transmit power; while when N ≤ M - 1, there exists a performance ceiling on it. --- paper_title: Joint Cooperative Beamforming and Jamming to Secure AF Relay Systems With Individual Power Constraint and No Eavesdropper's CSI paper_content: Cooperative beamforming and jamming are two efficient schemes to improve the physical-layer security of a wireless relay system in the presence of passive eavesdroppers. However, in most works these two techniques are adopted separately. In this letter, we propose a joint cooperative beamforming and jamming scheme to enhance the security of a cooperative relay network, where a part of intermediate nodes adopt distributed beamforming while others jam the eavesdropper, simultaneously. Since the instantaneous channel state information (CSI) of the eavesdropper may not be known, we propose a cooperative artificial noise transmission based secrecy strategy, subjected to the individual power constraint of each node. The beamformer weights and power allocation can be obtained by solving a second-order convex cone programming (SOCP) together with a linear programming problem. Simulations show the joint scheme greatly improves the security. --- paper_title: Harvest-and-jam: Improving security for wireless energy harvesting cooperative networks paper_content: The emerging radio signal enabled simultaneous wireless information and power transfer (SWIPT), has drawn significant attention. To achieve secrecy transmission by cooperative jamming, especially in the upcoming 5G networks with self-sustainable mobile base stations (BSs) and yet not to add extra power consumption, we propose in this paper a new relay protocol, i.e., harvest-and-jam (HJ), in a relay wiretap channel with an additional set of spare helpers. Specifically, in the first transmission phase, a single-antenna transmitter (Tx) transfers signals carrying both information and energy to a multi-antenna amplify-and-forward (AF) relay and a group of multi-antenna helpers; in the second transmission phase, the AF relay processes the information and forwards it to the receiver while each of the helpers generates an artificial noise (AN), the power of which is constrained by its previously harvested energy, to interfere with the eavesdropper. By optimizing the transmit beamforming matrix for the AF relay and the covariance matrix for the AN, we maximize the secrecy rate for the receiver subject to transmit power constraints for the AF relay and all helpers. The formulated problem is shown to be non-convex, for which we propose an iterative algorithm based on alternating optimization. Finally, the performance of the proposed scheme is evaluated by simulations as compared to other heuristic schemes. --- paper_title: Secure Transmission with Optimal Power Allocation in Untrusted Relay Networks paper_content: We consider the problem of secure transmission in two-hop amplify-and-forward untrusted relay networks.
We analyze the ergodic secrecy capacity (ESC) and present compact expressions for the ESC in the high signal-to-noise ratio regime. We also examine the impact of large scale antenna arrays at either the source or the destination. For large antenna arrays at the source, we confirm that the ESC is solely determined by the channel between the relay and the destination. For very large antenna arrays at the destination, we confirm that the ESC is solely determined by the channel between the source and the relay. --- paper_title: Secrecy Capacity Optimization via Cooperative Relaying and Jamming for WANETs paper_content: Cooperative wireless networking, which is promising in improving the system operation efficiency and reliability by acquiring more accurate and timely information, has attracted considerable attentions to support many services in practice. However, the problem of secure cooperative communication has not been well investigated yet. In this paper, we exploit physical layer security to provide secure cooperative communication for wireless ad hoc networks (WANETs) where involve multiple source-destination pairs and malicious eavesdroppers. By characterizing the security performance of the system by secrecy capacity, we study the secrecy capacity optimization problem in which security enhancement is achieved via cooperative relaying and cooperative jamming. Specifically, we propose a system model where a set of relay nodes can be exploited by multiple source-destination pairs to achieve physical layer security. We theoretically present a corresponding formulation for the relay assignment problem and develop an optimal algorithm to solve it in polynomial time. To further increase the system secrecy capacity, we exploit the cooperative jamming technique and propose a smart jamming algorithm to interfere the eavesdropping channels. Through extensive experiments, we validate that our proposed algorithms significantly increase the system secrecy capacity under various network settings. --- paper_title: Hybrid Cooperative Beamforming and Jamming for Physical-Layer Security of Two-Way Relay Networks paper_content: In this paper, we propose a hybrid cooperative beamforming and jamming scheme to enhance the physical-layer security of a single-antenna-equipped two-way relay network in the presence of an eavesdropper. The basic idea is that in both cooperative transmission phases, some intermediate nodes help to relay signals to the legitimate destination adopting distributed beamforming, while the remaining nodes jam the eavesdropper, simultaneously, which takes the data transmissions in both phases under protection. Two different schemes are proposed, with and without the instantaneous channel state information of the eavesdropper, respectively, and both are subjected to the more practical individual power constraint of each cooperative node. Under the general channel model, it is shown that both problems can be transformed into a semi-definite programming (SDP) problem with an additional rank-1 constraint. A current state of the art technique for handling such a problem is the semi-definite relaxation (SDR) and randomization techniques. In this paper, however, we propose a penalty function method incorporating the rank-1 constraint into the objective function. Although the so-obtained problem is not convex, we develop an efficient iterative algorithm to solve it. Each iteration is a convex SDP problem, thus it can be efficiently solved using the interior point method. 
When the channels are reciprocal such as in TDD mode, we show that the problems become second-order convex cone programming ones. Numerical evaluation results are provided and analyzed to show the properties and efficiency of the proposed hybrid security scheme, and also demonstrate that our optimization algorithms outperform the SDR technique. --- paper_title: Optimal Cooperative Jamming for Multiuser Broadcast Channel with Multiple Eavesdroppers paper_content: Cooperative jamming for multiuser multiple input multiple output (MIMO) broadcast channel is studied to enhance the physical layer security with the help of a friendly jammer. We assume the base station transmits multiple independent data streams to multiple legitimate users. During the transmission, however, there are multiple eavesdroppers with multiple antennas that have interests in the streams from the base station. In order to wiretap the desired streams, the eavesdroppers may collude or not, and maximize the signal-to-interference-plus-noise ratio (SINR) of the desired streams using receive beamforming. The optimal cooperative jammer is designed to keep the achieved SINR at eavesdroppers below a threshold to guarantee that the transmission from the base station to legitimate users is confidential. One main advantage of the proposed cooperative jamming scheme is that no modification is needed for the existing precoding schemes at the base station and decoding schemes at legitimate users. Thus, any existing practical precoding/decoding schemes for multiuser MIMO broadcast channel can be applied directly with the help of a friendly jammer using the proposed cooperative jamming. --- paper_title: MIMO decode-and-forward relay beamforming for secrecy with cooperative jamming paper_content: In this paper, we consider decode-and-forward (DF) relay beamforming for secrecy with cooperative jamming (CJ) in the presence of multiple eavesdroppers. The communication between a source-destination pair is aided by a multiple-input multiple-output (MIMO) relay. The source has one transmit antenna and the destination and eavesdroppers have one receive antenna each. The source and the MIMO relay are constrained with powers P S and P R , respectively. We relax the rank-1 constraint on the signal beamforming matrix and transform the secrecy rate max-min optimization problem to a single maximization problem, which is solved by semidefinite programming techniques. We obtain the optimum source power, signal relay weights, and jamming covariance matrix. We show that the solution of the rank-relaxed optimization problem has rank-1. Numerical results show that CJ can improve the secrecy rate. --- paper_title: On the Jamming Power Allocation for Secure Amplify-and-Forward Relaying via Cooperative Jamming paper_content: In this paper, we investigate secure communications in two-hop wireless relaying networks with one eavesdropper. To prevent the eavesdropper from intercepting the source message, the destination sends an intended jamming noise to the relay, which is referred to as cooperative jamming. This jamming noise helps protecting the source message from being captured reliably at the eavesdropper, while the destination cancels its self-intended noise. According to the channel information available at the destination, we derive three jamming power allocation strategies to minimize the outage probability of the secrecy rate. In addition, we derive analytic results quantifying the jamming power consumption of the proposed allocation methods. 
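The destination-assisted jamming reference above chooses how much of the available power to spend on jamming so that the secrecy outage probability is minimized. As a hedged, brute-force illustration (a Monte Carlo sweep under an assumed Rayleigh-fading model with a shared power budget, not the closed-form allocations derived in the cited paper), the sketch below estimates the outage probability for a range of jamming fractions and keeps the best one.

```python
import math, random

def estimate_outage(alpha, p_total=10.0, r_secrecy=0.5, trials=20000, seed=1):
    """Estimate the secrecy outage probability when a fraction `alpha` of an
    assumed shared power budget is spent on destination jamming that is
    cancelled at the destination but interferes with the eavesdropper."""
    rng = random.Random(seed)
    p_jam, p_sig = alpha * p_total, (1.0 - alpha) * p_total
    outages = 0
    for _ in range(trials):
        g_sd = rng.expovariate(1.0)   # effective source->destination gain
        g_se = rng.expovariate(1.0)   # source->eavesdropper gain
        g_je = rng.expovariate(1.0)   # jammer(destination)->eavesdropper gain
        c_main = math.log2(1.0 + p_sig * g_sd)          # jamming cancelled at D
        c_eve = math.log2(1.0 + p_sig * g_se / (1.0 + p_jam * g_je))
        outages += (c_main - c_eve) < r_secrecy
    return outages / trials

# Sweep the jamming fraction and report the empirically best split.
best_outage, best_alpha = min((estimate_outage(a / 10), a / 10) for a in range(10))
print(f"estimated minimum outage {best_outage:.3f} at jamming fraction {best_alpha:.1f}")
```

A real design would replace the grid search with the paper's analytical allocations and model the relay hop explicitly; the sketch only conveys the outage-versus-jamming-power trade-off.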
--- paper_title: Secrecy Transmission With a Helper: To Relay or to Jam paper_content: In this paper, we consider the problem of secure communications for a four-node system consisting of one source, one destination, one eavesdropper, and one helper. We investigate the question of which role should the helper act to improve the secrecy, to jam, or to relay. Two transmission schemes are investigated: 1) direct transmission scheme (DTS) with jamming and 2) relay transmission scheme (RTS). We consider both the path-loss and fading-in channel models and define a notion of distance normalized signal-to-noise-ratio (DN-SNR) to account for propagation. The ergodic secrecy rate (ESR) is adopted as the performance metric and semi-closed-form expressions of ESR for the two schemes are derived. Additionally, optimal power allocations in both low and high DN-SNR regimes are characterized analytically. We give the performance comparison of the two schemes from the perspective of energy efficiency in the low DN-SNR regime, and characterize the secrecy degree of freedom in the high DN-SNR regime. In the high DN-SNR regime, DTS provides higher secrecy rate compared with RTS, while in the low DN-SNR regime, RTS outperforms DTS. Furthermore, we show that eavesdropper’s position impacts greatly on security. --- paper_title: Joint relay selection and artificial jamming power allocation for secure DF relay networks paper_content: This paper studies cooperative transmission for securing a decode-and-forward (DF) two-hop network where massive cooperative nodes coexist with a potential single eavesdropper. With only the statistical channel state information (CSIs) of the eavesdroppers, we propose an opportunistic relaying with artificial jamming secrecy scheme, where a “best” cooperative node is chosen to forward the confidential signal and the others act as jammers to send the jamming signals to confuse the eavesdropper. Utilizing the limiting distribution technique of extreme order statistics, we optimize the power allocation between the confidential information and jamming signal based on the statistical CSIs of the legitimate nodes for ergodic secrecy rate (ESR) maximization. Although the optimization problems are non-convex, we propose a sequential parametric convex approximation (SPCA) algorithm to locate the KKT solutions. Then, we derive an analytical closed-form expression of the achievable ESR, which reduces the complexity of the system analysis and design. --- paper_title: Cooperative jamming and power allocation with untrusty two-way relay nodes paper_content: This study investigates the security of the two-way relaying system with untrusty relay nodes. Cooperative jamming schemes are considered for bi-directional secrecy communications. The transmit power of each source node is divided into two parts corresponding to the user and jamming signals, respectively. Two different assumptions of the jamming signals are considered. When the jamming signals are a priori known at the two source nodes, closed-form power allocation expressions at two source nodes are derived. Under the assumption of unknown jamming signals, it is proven that the cooperative jamming is useless for the system secrecy capacity, because that all the power should be allocated to the user signals at each source node. Relay selection is also investigated based on the analysis of cooperative jamming. Simulation results are presented to compare the system secrecy capacities under the two jamming signal assumptions. 
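The untrusted two-way relay reference above splits each source's transmit power between the user signal and a jamming signal that the two sources know (and can cancel) but the relay cannot. To make that trade-off concrete, the hedged sketch below evaluates a simplified secrecy sum-rate as a function of the jamming fraction and searches for the best split; the rate expressions, gains, and powers are stand-in assumptions rather than the closed-form results of the cited work.

```python
import math

def secrecy_sum_rate(beta, p=10.0, g_a=1.0, g_b=0.8):
    """Illustrative secrecy sum rate of a two-way exchange through an
    untrusted relay when each source spends a fraction `beta` of its power
    on a jamming signal known to both sources but not to the relay."""
    ps, pj = (1.0 - beta) * p, beta * p
    # End-to-end legitimate rates: the known jamming is cancelled.
    r_ab = 0.5 * math.log2(1.0 + ps * g_a)
    r_ba = 0.5 * math.log2(1.0 + ps * g_b)
    # What the untrusted relay can decode: each user signal is buried in the
    # other source's jamming power.
    r_eve_a = 0.5 * math.log2(1.0 + ps * g_a / (1.0 + pj * g_b))
    r_eve_b = 0.5 * math.log2(1.0 + ps * g_b / (1.0 + pj * g_a))
    return max(0.0, r_ab - r_eve_a) + max(0.0, r_ba - r_eve_b)

# Coarse search over the split between user and jamming signals.
best_beta = max((b / 100 for b in range(100)), key=secrecy_sum_rate)
print(f"best jamming fraction ~ {best_beta:.2f}, "
      f"secrecy sum rate ~ {secrecy_sum_rate(best_beta):.3f} bits/use")
```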
--- paper_title: Joint Design of Optimal Cooperative Jamming and Power Allocation for Linear Precoding paper_content: Linear precoding and cooperative jamming for multiuser broadcast channel is studied to enhance the physical layer security. We consider the system where multiple independent data streams are transmitted from the base station to multiple legitimate users with the help of a friendly jammer. It is assumed that a normalized linear precoding matrix is given at the base station, whereas the power allocated to each user is to be determined. The problem is to jointly design the power allocation across different users for linear precoding and the cooperative jamming at the friendly jammer. The goal is to maximize a lower bound of the secrecy rate, provided that a minimum communication rate to the users is guaranteed. The optimal solution is obtained when the number of antennas at the friendly jammer is no less than the total number of antennas at the users and eavesdropper. Moreover, a suboptimal algorithm is proposed, which can be applied for all the scenarios. Numerical results demonstrate that the proposed schemes are effective for secure communications. --- paper_title: Hybrid Opportunistic Relaying and Jamming With Power Allocation for Secure Cooperative Networks paper_content: This paper studies the cooperative transmission for securing a decode-and-forward (DF) two-hop network where multiple cooperative nodes coexist with a potential eavesdropper. Under the more practical assumption that only the channel distribution information (CDI) of the eavesdropper is known, we propose an opportunistic relaying with artificial jamming secrecy scheme, where a “best” cooperative node is chosen among a collection of $N$ possible candidates to forward the confidential signal and the others send jamming signals to confuse the eavesdroppers. We first investigate the ergodic secrecy rate (ESR) maximization problem by optimizing the power allocation between the confidential signal and jamming signals. In particular, we exploit the limiting distribution technique of extreme order statistics to build an asymptotic closed-form expression of the achievable ESR and the power allocation is optimized to maximize the ESR lower bound. Although the optimization problems are non-convex, we propose a sequential parametric convex approximation (SPCA) algorithm to locate the Karush-Kuhn-Tucker (KKT) solutions. Furthermore, taking the time variance of the legitimate links' CSIs into consideration, we address the impacts of the outdated CSIs to the proposed secrecy scheme, and derive an asymptotic ESR. Finally, we generalize the analysis to the scenario with multiple eavesdroppers, and give the asymptotic analytical results of the achievable ESR. Simulation results confirm our analytical results. --- paper_title: Competing for Secrecy in the MISO Interference Channel paper_content: A secure communication game is considered for the two-user MISO Gaussian interference channel with confidential messages, where each transmitter aims to maximize the difference between its secrecy rate and the secrecy rate of the other transmitter. In this novel problem, the weaker link tries to minimize the extra secret information obtained by its adversary, while the stronger link attempts to maximize it. 
We provide an information theoretic formulation for this non-cooperative zero-sum game scenario and determine that, under the assumption of Gaussian signaling at the transmitters, there exists a unique Nash equilibrium solution for the proposed problem. Moreover, we obtain in closed-form the optimal strategies (beamformers) at the transmitters that result in the Nash equilibrium. While the NE strategies are achievable in one shot when full channel state information is available at the transmitters (CSIT), we also propose an iterative step-by-step algorithm that converges to the NE point, but only requires limited CSIT. Numerical results are presented to illustrate the analytical findings and to study the role of different channel parameters on the NE strategies. --- paper_title: On the Jamming Power Allocation for Secure Amplify-and-Forward Relaying via Cooperative Jamming paper_content: In this paper, we investigate secure communications in two-hop wireless relaying networks with one eavesdropper. To prevent the eavesdropper from intercepting the source message, the destination sends an intended jamming noise to the relay, which is referred to as cooperative jamming. This jamming noise helps protecting the source message from being captured reliably at the eavesdropper, while the destination cancels its self-intended noise. According to the channel information available at the destination, we derive three jamming power allocation strategies to minimize the outage probability of the secrecy rate. In addition, we derive analytic results quantifying the jamming power consumption of the proposed allocation methods. --- paper_title: An Effective Secure Transmission Scheme for AF Relay Networks with Two-Hop Information Leakage paper_content: In this letter, we propose a novel scheme to improve physical layer security for amplify-and-forward (AF) relay networks with two-hop information leakage. Unlike traditional joint cooperative beamforming (CB) and cooperative jamming (CJ) design, where mere artificial noise (AN) is exploited by the jammers to degrade eavesdroppers' channels, in our proposed scheme, the jammers send AN along with the confidential signal to simultaneously confuse the eavesdroppers and enhance the quality of received signals at the destination. In particular, the CB vectors and AN covariance matrix are jointly optimized for the relays and jammers under individual power constraints. By decoupling this resultant design problem, suboptimal solutions are obtained using the semidefinite relaxation technique. Simulation results are also provided to demonstrate that the proposed scheme has an obvious advantage in enhancing the secrecy rate. --- paper_title: Relay Selection for Security Enhancement in Cognitive Relay Networks paper_content: This letter proposes several relay selection policies for secure communication in cognitive decode-and-forward relay networks, where a pair of cognitive relays is opportunistically selected for security protection against eavesdropping. The first relay transmits the secrecy information to the destination, and the second relay, as a friendly jammer, transmits the jamming signal to confound the eavesdropper. We present new exact closed-form expressions for the secrecy outage probability. Our analysis and simulation results strongly support our conclusion that the proposed relay selection policies can enhance the performance of secure cognitive radio. We also confirm that the error floor phenomenon is created in the absence of jamming. 
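Relay and jammer pair selection of the kind surveyed above can be prototyped by exhaustively scoring every ordered pair of intermediate nodes with an instantaneous secrecy-rate metric: the forwarding node should be strong toward the destination, while the friendly jammer should hurt the eavesdropper far more than the destination. The sketch below is illustrative only; the channel-gain lists, powers, and the simple SINR model are assumptions, and a practical scheme would also account for outage statistics and CSI acquisition.

```python
import math
from itertools import permutations

def secrecy_rate(g_rd, g_re, g_jd, g_je, p_relay=1.0, p_jam=1.0):
    """Instantaneous secrecy rate when one node relays the message and a
    second node jams; the jammer interferes with both the destination (D)
    and the eavesdropper (E), so the pair must be chosen jointly."""
    snr_d = p_relay * g_rd / (1.0 + p_jam * g_jd)
    snr_e = p_relay * g_re / (1.0 + p_jam * g_je)
    return max(0.0, math.log2(1.0 + snr_d) - math.log2(1.0 + snr_e))

def select_relay_and_jammer(gains_to_dest, gains_to_eve):
    """Return the (relay, jammer) index pair with the largest secrecy rate."""
    nodes = range(len(gains_to_dest))
    return max(permutations(nodes, 2),
               key=lambda pair: secrecy_rate(gains_to_dest[pair[0]], gains_to_eve[pair[0]],
                                             gains_to_dest[pair[1]], gains_to_eve[pair[1]]))

# Hypothetical channel gains for four candidate intermediate nodes.
gains_to_dest = [0.9, 0.4, 0.7, 0.2]
gains_to_eve = [0.3, 0.8, 0.2, 0.9]
relay, jammer = select_relay_and_jammer(gains_to_dest, gains_to_eve)
print(f"selected relay: node {relay}, friendly jammer: node {jammer}")
```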
--- paper_title: Destination Assisted Cooperative Jamming for Wireless Physical-Layer Security paper_content: A wireless network with one source, one destination, one eavesdropper, and multiple decode-and-forward relays is considered. A two-slot cooperative relaying scheme is proposed that targets at maximizing the secrecy rate. In the first slot, the source transmits the information bearing signal, and at the same time, it cooperates with the destination in jamming the eavesdropper without creating interference at the relay. In the second slot, one optimally selected relay retransmits the decoded source signal, and at the same time, that particular relay cooperates with the source to jam the eavesdropper without creating interference at the destination. Optimal relay selection and also optimal power allocation among the first/second slot data signal and jamming noise are proposed. It is shown that the secrecy rate of the proposed scheme scales with the total system power P0 and the number of available relays K according to 1/2log2(1 + P0 /8logK) - 1.6 bits/channel use . Although the proposed power allocation and relay selection assume global CSI available, the performance under imperfect relay CSI is also investigated. Also, the performance under distributed relay selection with limited feedback is demonstrated. --- paper_title: On Cooperative and Malicious Behaviors in Multirelay Fading Channels paper_content: Multirelay networks exploit spatial diversity by transmitting user's messages through multiple relay paths. Most works in the literature on cooperative or relay networks assume that all terminals are fully cooperative and neglect the effect of possibly existing malicious relay behaviors. In this work, we consider a multirelay network that consists of both cooperative and malicious relays, and aims to obtain an improved understanding on the optimal behaviors of these two groups of relays via information-theoretic mutual information games. By modeling the set of cooperative relays and the set of malicious relays as two players in a zero-sum game with the maximum achievable rate as the utility, the optimal transmission strategies of both types of relays are derived by identifying the Nash equilibrium of the proposed game. Our main contributions are twofold. First, a generalization to previous works is obtained by allowing malicious relays to either listen or attack in Phase 1 (source-relay transmission phase). This is in contrast to previous works that only allow the malicious relays to listen in Phase 1 and to attack in Phase 2 (relay-destination transmission phase). The latter is shown to be suboptimal in our problem. Second, the impact of CSI knowledge at the destination on the optimal attack strategy that can be adopted by the malicious relays is identified. In particular, for the more practical scenario where the interrelay CSI is unknown at the destination, the constant attack is shown to be optimal as opposed to the commonly considered Gaussian attack. --- paper_title: Secure Relay and Jammer Selection for Physical Layer Security paper_content: Secure relay and jammer selection for physical-layer security is studied in a wireless network with multiple intermediate nodes and eavesdroppers, where each intermediate node either helps to forward messages as a relay, or broadcasts noise as a jammer. We derive a closed-form expression for the secrecy outage probability (SOP), and we develop two relay and jammer selection methods for SOP minimization. 
In both methods a selection vector and a corresponding threshold are designed and broadcast by the destination to ensure each intermediate node knows its own role while knowledge of the relay and jammer set is kept secret from all eavesdroppers. Simulation results show the SOP of the proposed methods are very close to that obtained by an exhaustive search, and that maintaining the privacy of the selection result greatly improves the SOP performance. --- paper_title: Opportunistic Relaying for Secrecy Communications: Cooperative Jamming vs. Relay Chatting paper_content: In this letter, we study the opportunistic use of relays for secret communications, and propose two transmission schemes that do not require the knowledge of the eavesdropper's channel state information. Both analytic and numerical results are provided. --- paper_title: Improving Secrecy Rate via Spectrum Leasing for Friendly Jamming paper_content: Cooperative jamming paradigm in secure communications enlists network nodes to transmit noise or structured codewords, in order to impair the eavesdropper's ability to decode messages to be kept confidential from it. Such an approach can significantly help in facilitating secure communication between legitimate parties but, by definition, assumes dedicated and/or altruistic nodes willing to act as cooperative jammers. In this paper, it is demonstrated that cooperative jamming leads to meaningful secrecy rate improvements even when this assumption is removed. A distributed mechanism is developed that motivates jamming participation of otherwise non-cooperative terminals, by compensating them with an opportunity to use the fraction of legitimate parties' spectrum for their own data traffic. With the goal of maximizing their data transmission rate priced by the invested power, cooperative jammers provide the jamming/transmitting power that is generally proportional to the amount of leased bandwidth. The fully decentralized framework is facilitated through a game-theoretic model, with the legitimate parties as the spectrum owners acting as the game leader, and the set of assisting jammers constituting the follower. To facilitate the behavior of non-cooperative and competitive multiple jammers, auctioning and power control mechanisms are applied for a follower sub-game in a two-layer leader-follower game framework. --- paper_title: Power control Stackelberg game in cooperative anti-jamming communications paper_content: As wireless networks are vulnerable to malicious attacks, security issues have aroused enormous research interest. In this paper, we focus on the anti-jamming power control of transmitters in a cooperative wireless network attacked by a smart jammer with the capability to sense the ongoing transmission power before making a jamming decision. By modeling the interaction between transmitters and a jammer as a Stackelberg game, we analyze the optimal strategies for both transmitters and the jammer and thus derive the Stackelberg equilibrium of the game. In addition, the Nash equilibrium of the anti-jamming game is also derived to compare with the Stackelberg equilibrium strategy. Finally, the impacts of the fading channel gains of the transmitters and the jammer on their utilities and the SINR are measured, respectively. Simulation results are presented to verify the performance of the proposed Stackelberg equilibrium strategy. ---
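The anti-jamming power-control reference above models the interaction as a Stackelberg game in which the transmitter leads and the smart jammer, which senses the ongoing transmission power, follows. The sketch below illustrates the backward-induction idea over discretized power levels; the SINR-minus-cost utilities, channel gains, and cost coefficients are assumptions chosen only to make the procedure concrete, not the utilities of the cited paper.

```python
import numpy as np

tx_levels = np.linspace(0.1, 2.0, 20)    # candidate transmit powers (assumed)
jam_levels = np.linspace(0.0, 2.0, 21)   # candidate jamming powers (assumed)
g_tx, g_jam = 1.0, 0.8                   # assumed channel power gains
c_tx, c_jam = 0.4, 0.3                   # assumed per-unit power costs

def sinr(p_tx, p_jam):
    return g_tx * p_tx / (1.0 + g_jam * p_jam)

def tx_utility(p_tx, p_jam):     # the transmitter values SINR, pays for power
    return sinr(p_tx, p_jam) - c_tx * p_tx

def jam_utility(p_tx, p_jam):    # the jammer suppresses SINR, pays for power
    return -sinr(p_tx, p_jam) - c_jam * p_jam

def jammer_best_response(p_tx):
    return max(jam_levels, key=lambda pj: jam_utility(p_tx, pj))

# Backward induction: the leader anticipates the follower's best response.
p_tx_star = max(tx_levels, key=lambda pt: tx_utility(pt, jammer_best_response(pt)))
p_jam_star = jammer_best_response(p_tx_star)
print(f"Stackelberg sketch: P_tx={p_tx_star:.2f}, P_jam={p_jam_star:.2f}, "
      f"resulting SINR={sinr(p_tx_star, p_jam_star):.2f}")
```

Comparing this leader-follower outcome with the simultaneous-move Nash equilibrium, as the cited work does, only requires replacing the anticipation step with an iterated best-response loop.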
Title: A Survey on Cooperative Jamming Applied to Physical Layer Security Section 1: INTRODUCTION Description 1: Introduce the significance of wireless communications, the associated security challenges, and the traditional vs. physical layer security solutions. Summarize the paper's contributions. Section 2: Physical Layer Security Description 2: Describe the generic wireless communication network model, the concept of secrecy capacity, and various methods to enhance physical layer security. Section 3: Cooperative jamming Description 3: Discuss the concept of cooperative jamming and how it aids in enhancing physical layer security by introducing interferers to reduce the eavesdropper's SNR. Section 4: Artificial Jamming Signals types Description 4: Categorize and explain the different types of artificial jamming signals used in cooperative jamming, highlighting their impact on communication security. Section 5: Cooperative Jamming with Multiple Antennas and Beamforming Description 5: Explore the use of multiple antennas and beamforming techniques in conjunction with cooperative jamming to enhance security. Section 6: Cooperative jamming with Power Allocation method Description 6: Discuss various power allocation strategies in cooperative jamming to optimize the secrecy capacity and minimize outage probability. Section 7: Jamming Policies Description 7: Outline different jamming policies, including relay selection and game theory approaches, to enhance secure communication. Section 8: CONCLUSION Description 8: Summarize the surveyed challenges and techniques in physical layer security and jamming, and conclude with considerations for future research directions. Section 9: ACKNOWLEDGMENT Description 9: Acknowledge the support and contributions towards the research work presented in the paper.
Wireless Sensor Network Routing Protocols: A Survey
9
--- paper_title: Integrated comparison of energy efficient routing protocols in wireless sensor network: A survey paper_content: The dramatic increase in sensors application over the past 20 years, made it clear that the sensors will make a revolution like that witnessed in microcomputers in 1980s. Moreover, some researchers have labeled the first decade of the 21st century as the “Sensor Decade” [1]. Sensor nodes are powered by limited capacity of batteries and because of this limitation, the energy consumption of a sensor node must be tightly controlled. WSN life time mainly depends on the lifetime of limited power source of the nodes. Therefore, energy consumption is main concern in WSNs. So, during the operation of each sensor node, the sources that consume energy must be analyzed and maintained efficiently. In this paper, we have focused on features of Sensor Networks, Clustering and Routing and more precisely on Energy Efficient (E.E) routing protocols and finally, made an integrated comparison among these E.E Routing protocols. --- paper_title: Integrated comparison of energy efficient routing protocols in wireless sensor network: A survey paper_content: The dramatic increase in sensors application over the past 20 years, made it clear that the sensors will make a revolution like that witnessed in microcomputers in 1980s. Moreover, some researchers have labeled the first decade of the 21st century as the “Sensor Decade” [1]. Sensor nodes are powered by limited capacity of batteries and because of this limitation, the energy consumption of a sensor node must be tightly controlled. WSN life time mainly depends on the lifetime of limited power source of the nodes. Therefore, energy consumption is main concern in WSNs. So, during the operation of each sensor node, the sources that consume energy must be analyzed and maintained efficiently. In this paper, we have focused on features of Sensor Networks, Clustering and Routing and more precisely on Energy Efficient (E.E) routing protocols and finally, made an integrated comparison among these E.E Routing protocols. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. --- paper_title: Adaptive protocols for information dissemination in wireless sensor networks paper_content: In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminates information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. 
They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of two specific SPIN protocols, comparing them to other possible approaches and a theoretically optimal protocol. We find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches. We also find that, in terms of dissemination rate and energy usage, the SPlN protocols perform close to the theoretical optimum. --- paper_title: Adaptive protocols for information dissemination in wireless sensor networks paper_content: In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminates information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of two specific SPIN protocols, comparing them to other possible approaches and a theoretically optimal protocol. We find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches. We also find that, in terms of dissemination rate and energy usage, the SPlN protocols perform close to the theoretical optimum. --- paper_title: SPIN-IT: a data centric routing protocol for image retrieval in wireless networks paper_content: We propose and analyze a routing protocol for mobile ad hoc networks that supports efficient image retrieval based on metadata queries. In digital photography, metadata describes captured information about an image and provides the key element needed for advanced techniques for sharing pictures. Our goal was to find an efficient way to utilize metadata to retrieve images in a wireless network of imaging devices. Building on the SPIN (sensor protocols for information via negotiation) protocol for metadata negotiation, we designed SPIN-IT (SPIN-image transfer), a protocol where wireless imaging devices use metadata queries to retrieve desired pictures. This protocol provides low bandwidth query-based communication prior to the transfer of image data to set up routes to desired data rather than routes to specific nodes. We compare SPIN-IT to a centralized approach and discuss the advantages of each design for different picture-sharing scenarios. --- paper_title: Rumor routing algorthim for sensor networks paper_content: Advances in micro-sensor and radio technology will enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. In order to constrain communication overhead, dense sensor networks call for new and highly efficient methods for distributing queries to nodes that have observed interesting events in the network. 
A highly efficient data-centric routing mechanism will offer significant power cost reductions [17], and improve network longevity. Moreover, because of the large amount of system and data redundancy possible, data becomes disassociated from specific node and resides in regions of the network [10][7][8]. This paper describes and evaluates through simulation a scheme we call Rumor Routing, which allows for queries to be delivered to events in the network. Rumor Routing is tunable, and allows for tradeoffs between setup overhead and delivery reliability. It's intended for contexts in which geographic routing criteria are not applicable because a coordinate system is not available or the phenomenon of interest is not geographically correlated. --- paper_title: A New Gradient-Based Routing Protocol in Wireless Sensor Networks paper_content: Every physical event results in a natural information gradient in the proximity of the phenomenon. Moreover, many physical phenomena follow the diffusion laws. This natural information gradient can be used to design efficient information-driven routing protocols for sensor networks. Information-driven routing protocols based on the natural information gradient, may be categorized into two major approaches: (i) the single-path approach and (ii) the multiple-path approach. In this paper, using a regular grid topology, we develop analytical models for the query success rate and the overhead of both approaches for ideal and lossy wireless link conditions. We validate our analytical models using simulations. Also, both the analytical and the simulation models are used to characterize each approach in terms of overhead, query success rate and increase in path length. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. --- paper_title: Improvement on LEACH Protocol of Wireless Sensor Network paper_content: This paper studies LEACH protocol, and puts forward energy-LEACH and multihop-LEACH protocols. Energy-LEACH protocol improves the choice method of the cluster head, makes some nodes which have more residual energy as cluster heads in next round. Multihop-LEACH protocol improves communication mode from single hop to multi-hop between cluster head and sink. Simulation results show that energy-LEACH and multihop-LEACH protocols have better performance than LEACH protocols. --- paper_title: Improvement on LEACH Protocol of Wireless Sensor Network paper_content: This paper studies LEACH protocol, and puts forward energy-LEACH and multihop-LEACH protocols. Energy-LEACH protocol improves the choice method of the cluster head, makes some nodes which have more residual energy as cluster heads in next round. Multihop-LEACH protocol improves communication mode from single hop to multi-hop between cluster head and sink. Simulation results show that energy-LEACH and multihop-LEACH protocols have better performance than LEACH protocols. 
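All of the LEACH variants collected above modify the baseline LEACH cluster-head rotation, so a minimal reference implementation of that baseline is useful for comparison. The sketch below implements the standard self-election threshold T(n) = P / (1 - P*(r mod 1/P)) for nodes that have not yet served in the current epoch; the residual-energy bias of energy-LEACH and the multi-hop forwarding of multihop-LEACH are deliberately not modeled, and the parameter values are assumptions.

```python
import random

P_CH = 0.05              # desired fraction of cluster heads per round (assumed)
EPOCH = round(1 / P_CH)  # after 1/P rounds every node becomes eligible again

def leach_threshold(p, r):
    """Standard LEACH threshold T(n) for round r, applied only to nodes that
    have not been a cluster head during the current epoch."""
    return p / (1.0 - p * (r % EPOCH))

def elect_cluster_heads(eligible, r, rng):
    """Each eligible node elects itself cluster head with probability T(n).
    `eligible` maps node id -> True while the node may still self-elect."""
    heads = []
    for node_id, ok in eligible.items():
        if ok and rng.random() < leach_threshold(P_CH, r):
            heads.append(node_id)
            eligible[node_id] = False      # sit out until the epoch ends
    if r % EPOCH == EPOCH - 1:             # epoch over: reset eligibility
        for node_id in eligible:
            eligible[node_id] = True
    return heads

rng = random.Random(0)
eligible = {i: True for i in range(100)}
for r in range(5):
    print(f"round {r}: cluster heads -> {elect_cluster_heads(eligible, r, rng)}")
```

Because every node is forced through the cluster-head role once per epoch, the energy cost of aggregation and long-range transmission is spread across the network, which is the property the improved protocols above try to preserve while adding energy awareness or extra hierarchy levels.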
--- paper_title: A two-levels hierarchy for low-energy adaptive clustering hierarchy (TL-LEACH) paper_content: Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). The latter is designed for sensor networks where the end-user wants to remotely monitor the environment. In such a situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that better conserves energy. Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits better distribution of the energy load among the sensors in the network, especially when the network density is higher. TL-LEACH uses localized coordination to enable scalability and robustness. We evaluated the performance of our protocol with NS-2 and observed that it outperforms LEACH in terms of energy consumption and lifetime of the network. --- paper_title: Improvement on LEACH Protocol of Wireless Sensor Network (VLEACH) paper_content: This paper presents a new version of the LEACH protocol called V-LEACH, which aims to reduce energy consumption within the wireless network. We evaluate both LEACH and V-LEACH through extensive simulations using the OMNET++ simulator, which show that V-LEACH performs better than the LEACH protocol. --- paper_title: U-LEACH: A novel routing protocol for heterogeneous Wireless Sensor Networks paper_content: A Wireless Sensor Network comprises a number of energy-constrained sensor nodes deployed in the area of interest. In our paper, we have proposed a new hierarchical clustering based routing protocol for heterogeneous wireless sensor networks. The proposed protocol, named Universal - Low Energy Adaptive Cluster Hierarchy (U-LEACH), is an energy efficient protocol showing a significant reduction in the energy consumption by the sensor nodes. Unlike LEACH, in U-LEACH, the selection of the Cluster-Head depends on the initial and the residual energy of the nodes. In a particular cluster, the transfer of information between the nodes takes place by forming a chain, starting from the farthest node from the base station. Data aggregation has also been implemented successfully to cut down the energy consumption. Simulation results show substantial improvement of U-LEACH over competing protocols. --- paper_title: CHIRON: An energy-efficient chain-based hierarchical routing protocol in wireless sensor networks paper_content: Due to the power restriction of sensor nodes, efficient routing in wireless sensor networks is a critical approach to saving node energy and thus prolonging the network lifetime. Even though chain-based routing is one of the significant routing mechanisms, several common flaws, such as data propagation delay and redundant transmission, are associated with it. In this paper, we propose an energy efficient Chain-Based Hierarchical Routing Protocol, named CHIRON, to alleviate such deficiencies.
Based on the BeamStar concept [9], the main idea of CHIRON is to split the sensing field into a number of smaller areas, so that it can create multiple shorter chains to reduce the data transmission delay and redundant path, and therefore effectively conserve the node energy and prolong the network lifetime. Simulation results show that, in contrast to Enhanced PEGASIS and PEGASIS protocols, the proposed CHIRON can achieve about 15% and 168% improvements on average data propagation delay, 30% and 65% improvements on redundant transmission path, respectively. By these contributions, the network lifetime can also be extended to about 14%∼7% and 50%∼23%, under various small and large simulation areas, respectively. --- paper_title: TEEN: a routing protocol for enhanced efficiency in wireless sensor networks paper_content: Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols. --- paper_title: An efficient clustering protocol increasing wireless sensor networks life time paper_content: Due to their rapid and promising development, Wireless Sensor Networks (WSNs) have been predicted to invade all domains in our daily life in the near future. However in order to reach their maturity, researchers must find solution to some difficulties which are slowing down the wide spread use of these networks. These difficulties are inherent to their constrained specificities which require adapted solutions unrelated to classical wire networks. One the activity done by WSN that consumed the most energy is the number of packets sent/received. In order to reduce this number and to ensure WSN successful operation, hierarchical clustering protocols have been developed. In this paper, we present WB-TEEN and WBM-TEEN: two hierarchical routing protocols, based on nodes clustering and improving the well known protocol Threshold sensitive Energy Efficient sensor Network protocol (TEEN). This improvement is accomplished in a way such that each cluster is nodes balanced and the total energy consumption between sensor nodes and cluster heads is minimized by using multi-hops intra-cluster communication. Simulation results (using NS2 simulator) show that the proposed protocols exhibit better performance than Low Energy Adaptive Clustering Hierarchy (LEACH) and TEEN in terms of energy consumption and network lifetime prolongation. --- paper_title: Energy-Efficient Routing Protocol Based on Clustering and Least Spanning Tree in Wireless Sensor Networks paper_content: A particular wireless sensor networks (WSNs), cluster-based WSN (CWSN) have received more and more attention due to the limited energy of battery-powered nodes, but rarely consider the shortest path at the same time. 
In this paper, we propose a novel energy-efficient routing protocol based on clustering and a least spanning tree for wireless sensor networks, aiming to prolong network lifetime and shorten paths while emphasizing energy conservation. Clustering includes a partitioning stage and a choosing stage: the protocol first partitions the multi-hop network and then chooses cluster-heads, and each cluster-head is responsible for receiving, sending and maintaining information in its cluster. All cluster-heads then construct a least spanning tree to prolong network lifetime, save energy and shorten paths. Simulation results show that the system's performance is further improved by using clustering and the least spanning tree. It is a promising approach and deserves further research. --- paper_title: SEP: A stable election protocol for clustered heterogeneous wireless sensor networks paper_content: We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources; this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they cannot take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneity-aware protocol to prolong the time interval before the death of the first node (which we refer to as the stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields a longer stability region for higher values of extra energy brought by more powerful nodes. --- paper_title: HSEP: Heterogeneity-aware Hierarchical Stable Election Protocol for WSNs paper_content: Wireless Sensor Networks (WSNs) are increasingly used to handle complex situations and functions. In these networks some of the nodes become Cluster Heads (CHs), which are responsible for aggregating data from cluster members and transmitting it to Base Stations (BS). Clustering techniques designed for homogeneous networks are not sufficiently energy efficient. The Stable Election Protocol (SEP) introduces heterogeneity in WSNs, with two types of nodes. SEP is based on weighted election probabilities of each node to become a CH according to the remaining energy of the nodes. We propose the Heterogeneity-aware Hierarchical Stable Election Protocol (HSEP), which uses two energy levels.
Simulation results show that HSEP prolongs stability period and network lifetime, as compared to conventional routing protocols and having higher average throughput than selected clustering protocols in WSNs. --- paper_title: Minimum-energy mobile wireless networks revisited paper_content: We propose a protocol that, given a communication network, computes a subnetwork such that, for every pair $(u,v)$ of nodes connected in the original network, there is a minimum-energy path between $u$ and $v$ in the subnetwork (where a minimum-energy path is one that allows messages to be transmitted with a minimum use of energy). The network computed by our protocol is in general a subnetwork of the one computed by the protocol given in [13]. Moreover, our protocol is computationally simpler. We demonstrate the performance improvements obtained by using the subnetwork computed by our protocol through simulation. --- paper_title: Geographical and energy aware routing: A recursive data dissemination protocol for wireless sensor networks paper_content: Future sensor networks will be composed of a large number of densely deployed sensors/actuators. A key feature of such networks is that their nodes are untethered and unattended. Consequently, energy efficiency is an important design consideration for these networks. Motivated by the fact that sensor network queries may often be geographical, we design and evaluate an energy efficient routing algorithm that propagates a query to the appropriate geographical region, without flooding. The proposed Geographic and Energy Aware Routing (GEAR) algorithm uses energy aware neighbor selection to route a packet towards the target region and Recursive Geographic Forwarding or Restricted Flooding algorithm to disseminate the packet inside the destina- --- paper_title: Trajectory based forwarding and its applications paper_content: Trajectory based forwarding (TBF) is a novel methodto forward packets in a dense ad hoc network that makes it possible to route a packet along a predefined curve. It is a hybrid between source based routing and Cartesian forwarding in that the trajectory is set by the source, but the forwarding decision is based on the relationship to the trajectory rather than names of intermediate nodes. The fundamental aspects of TBF are: it decouples path naming from the actual path; it provides cheap path diversity; it trades off communication for computation. These aspects address the double scalability issue with respect to mobility rate and network size. In addition, TBF provides a common framework for many services such as: broadcasting, discovery, unicast, multicast and multipath routing in ad hoc networks. TBF requires that nodes know their position relative to a coordinate system. While a global coordinate system afforded by a system such as GPS would be ideal, approximate positioning methods provided by other algorithms are also usable. --- paper_title: Insect sensory systems inspired computing and communications paper_content: Insects are the most successful group of living things in terms of the number of species, the biomass and their distribution. Entomological research has revealed that the insect sensory systems are crucial for their success. Compared to human brains, the insect central nerve systems are extremely primitive and simple, both structurally and functionally, and are of minimal learning ability. 
Faced with these constraints, insects have evolved a set of extremely effective sensory systems that are structurally simple, functionally versatile and powerful, and highly distributed, as well as noise and fault tolerant. As a result, in recent years insect sensory systems have been inspirational to new communications and computing paradigms, which have lead to significant advances. However, we believe that the potential for insect-inspired solutions for communications and computing is far from being fully recognized. In particular, the contrasting similarity between the ubiquitous existences of insect sensory networks in nature and the idea of pervasive computing has received little attention. For example, the chemosensory communication systems in many of the moth, ant and beetle populations are essentially ''wireless'' sensory networks. The difference between the ''wireless'' network of an insect population and an engineered wireless sensor network is that insects encode messages with semiochemicals (also known as infochemicals) rather than with radio frequencies; in addition, the computing node is the individual insect powered by its brain, sensory and neuromotor systems, rather than a microchip-powered sensor. The objectives of this paper are threefold: (1) to introduce the state-of-the art research in insect sensory systems from entomological perspectives; (2) to propose potential new research problems inspired by insect sensory system with focusing on unexplored fields; and (3) to justify how and why insect sensory systems may inspire novel computing and communications paradigms. --- paper_title: AntNet: Distributed Stigmergetic Control for Communications Networks paper_content: This paper introduces AntNet, a novel approach to the adaptive learning of routing tables in communications networks. AntNet is a distributed, mobile agents based Monte Carlo system that was inspired by recent work on the ant colony metaphor for solving optimization problems. AntNet's agents concurrently explore the network and exchange collected information. The communication among the agents is indirect and asynchronous, mediated by the network itself. This form of communication is typical of social insects and is called stigmergy. We compare our algorithm with six state-of-the-art routing algorithms coming from the telecommunications and machine learning fields. The algorithms' performance is evaluated over a set of realistic testbeds. We run many experiments over real and artificial IP datagram networks with increasing number of nodes and under several paradigmatic spatial and temporal traffic distributions. Results are very encouraging. AntNet showed superior performance under all the experimental conditions with respect to its competitors. We analyze the main characteristics of the algorithm and try to explain the reasons for its superiority. --- paper_title: Ant Colony-Based Reinforcement Learning Algorithm for Routing in Wireless Sensor Networks paper_content: The field of routing and sensor networking is an important and challenging research area of network computing today. Advancements in sensor networks enable a wide range of environmental monitoring and object tracking applications. Routing in sensor networks is a difficult problem: as the size of the network increases, routing becomes more complex. Therefore, biologically-inspired intelligent algorithms are used to tackle this problem. Ant routing has shown excellent performance for sensor networks. 
In this paper, we present a biologically-inspired swarm intelligence-based routing algorithm, which is suitable for sensor networks. Our proposed ant routing algorithm also meet the enhanced sensor network requirements, including energy consumption, success rate, and time delay. The paper concludes with the measurement data we have found. --- paper_title: Ant System Based Anycast Routing in Wireless Sensor Networks paper_content: Anycast is a mechanism that it sends the data groups to the nearest interface during which they have the same anycast address. Ant colony system, a population-based algorithm, provides natural and intrinsic way of exploration of search space in optimization settings in determining optimal anycast tree. In this paper, we propose a sink selection heuristic algorithm called Minimum Ant-based Data Fusion Tree(MADFT) for energy constraint wireless sensor networks. Different from existing schemes, MADAT not only optimizes over the data transmission cost, but also incorporates the cost for data fusion which can be significant for emerging sensor networks with vectorial data and/or security requirements. Via simulation, it is shown that this algorithm has excellent performance behavior and provides a near-optimal solution. --- paper_title: Swarm intelligence optimization based routing algorithm for Wireless Sensor Networks paper_content: Energy balance is an important performance index in the design of routing algorithm for wireless sensor networks. A swarm intelligence optimization based routing algorithm for wireless sensor networks is proposed, whose kernel idea is taking less hop numbers into consideration and choosing the nodes with less pheromone as next hop to avoid some nodespsila prematurely exhausting their energy because of too concentrated routes through the nodes. The experiments show that the algorithm proposed in this paper is better than the directed diffusion routing protocol both in end-to-end delay and global energy balance and can effectively balance the global energy consumption and prolong the network lifetime. --- paper_title: An Improved Gossiping Data Distribution Technique with Emphasis on Reliability and Resource Constraints paper_content: In this paper we present an improved Gossiping data distribution technique with emphasis on the location of nodes called "LGossiping" that reliably disseminate information among sensors in a wireless sensor network. Nodes running LGossiping data distribution technique use global positioning system to relay data throughout the network. Each node decides upon position knowledge of the others. This allows each sensor to send its data reliably to one of known neighbors instead of sending data blindly to one of neighbors which may not be near to source node and cause data lost. We outlined SystemC as simulation language and simulate and analyzed Gossiping and LGossiping protocols. In each experiment source node propagate data among nodes with the hope of reaching to sink node. This procedure is called data transmission. We found that LGossiping with transmission radius greater than or equal to 4 (TR≫=4) is 30% more reliable than Gossiping for a specific number of experiments. LGossiping is an improvement of Gossiping in terms of reliability. Therefore reliability in this improved method is critical. Moreover, network energy consumption with respect to different TR is considered. We concluded that network energy consumption decreases as the transmission radius increases. 
As termination, increasing area during the experiment is mentioned. In order to have constant reliability with constant number of nodes in LGossiping method in immense areas, with each 600m2 increase of area transmission radius should increases 1 meter. --- paper_title: TAG: a Tiny AGgregation service for ad-hoc sensor networks paper_content: We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution. --- paper_title: The design of an acquisitional query processor for sensor networks paper_content: We discuss the design of an acquisitional query processor for data collection in sensor networks. Acquisitional issues are those that pertain to where, when, and how often data is physically acquired (sampled) and delivered to query processing operators. By focusing on the locations and costs of acquiring data, we are able to significantly reduce power consumption over traditional passive systems that assume the a priori existence of data. We discuss simple extensions to SQL for controlling data acquisition, and show how acquisitional issues influence query optimization, dissemination, and execution. We evaluate these issues in the context of TinyDB, a distributed query processor for smart sensor devices, and show how acquisitional techniques can provide significant reductions in power consumption on our sensor devices. --- paper_title: Breath: A Self-Adapting Protocol for Wireless Sensor Networks in Control and Automation paper_content: The novel cross-layer protocol Breath for wireless sensor networks is designed, implemented, and experimentally evaluated. The Breath protocol is based on randomized routing, MAC and duty-cycling, which allow it to minimize the energy consumption of the network while ensuring a desired packet delivery end-to-end reliability and delay. The system model includes a set of source nodes that transmit packets via multi-hop communication to the destination. A constrained optimization problem, for which the objective function is the network energy consumption and the constraints are the packet latency and reliability, is posed and solved. It is shown that the communication layers can be jointly optimized for energy efficiency. The optimal working point of the network is achieved with a simple algorithm, which adapts to traffic variations with negligible overhead. The protocol was implemented on a test-bed with off-the-shelf wireless sensor nodes. It is compared with a standard IEEE 802.15.4 solution. Experimental results show that Breath meets the latency and reliability requirements, and that it exhibits a good distribution of the working load, thus ensuring a long lifetime of the network. --- paper_title: Localized power-aware routing in linear wireless sensor networks paper_content: Energy-efficency is a key concern when designing protocols for wireless sensor networks (WSN). This is of particular importance in commercial applications where demonstrable return on investment is a crucial factor. 
One such commercial application that motivated this work is telemetry and control for freight railroad trains. Since a railroad train has a global linear structure by nature, we consider in this paper linear WSNs as sensor networks having, roughly, a linear topology. Aiming at such networks, we introduce two routing schemes that efficiently utilize energy: Minimum Energy Relay Routing (MERR) and Adaptive MERR (AMERR). We derive a theoretical lower bound on the optimal power consumption of routing in a linear WSN, where we assume a Poisson model for the distribution of nodes along a linear path. We evaluate the efficiency of our protocols with respect to the theoretical optimal lower bound and with respect to other well-known protocols. AMERR achieves optimal performance for practical deployment settings, while MERR rapidly approaches optimal performance as sensors are more densely deployed. Compared to other protocols, we show that MERR and AMERR are less complex and have better scalability. We also postulate how both protocols might be generalized to a two-dimensional WSN. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. --- paper_title: Energy Efficient Adaptive Multipath Routing for Wireless Sensor Networks paper_content: Routing in wireless sensor networks is a demanding task. This demand has led to a number of routing protocols which efficiently utilize the limited resources available at the sensor nodes. All these protocols typically find the minimum energy path. In this paper we take a view that, always using the minimum energy path deprives the nodes energy quickly and the time taken to determine an alternate path increases. Multipath routing schemes distribute traffic among multiple paths instead of routing all the traffic along a single path. Two key questions that arise in multipath routing are how many paths are needed and how to select these paths. Clearly, the number and the quality of the paths selected dictate the performance of a multipath routing scheme. We propose an energy efficient adaptive multipath routing technique which utilizes multiple paths between source and the sink, adaptive because they have low routing overhead. This protocol is intended to provide a reliable transmission environment with low energy consumption, by efficiently utilizing the energy availability and the received signal strength of the nodes to identify multiple routes to the destination. Simulation results show that the energy efficient adaptive multipath routing scheme achieves much higher performance than the classical routing protocols, even in the presence of high node density and overcomes simultaneous packet forwarding --- paper_title: Reliable Multi-path Routing Protocol in Wireless Sensor Networks paper_content: There are often some faults happening in the wireless sensor networks due to node failure or energy exhaustion. It is because the nodes in sensor networks are energy constrained and usually deployed in harsh environments. These factors strongly influence reliability of data transmission and decrease the lifetime of networks. 
An important issue in designing wireless sensor network is the reliable routing protocols which provide as much reliability of data transmission as possible. Another important factor is to make the best use of the limited energy presented by nodes of wireless sensor networks. This paper proposes a new routing protocol called REEM(Reliable and Energy-Efficient Multi-path routing protocol) to settle above problems. Simulation results show that our protocol surpasses the MSR, AOMDV and ARAMA protocols in terms of reliability and network lifetime. --- paper_title: Energy Efficient Multipath Routing in Large Scale Sensor Networks with Multiple Sink Nodes paper_content: Due to the battery resource constraint, it is a critical issue to save energy in wireless sensor networks, particularly in large sensor networks. One possible solution is to deploy multiple sink nodes simultaneously. In this paper, we propose a protocol called MRMS (Multipath Routing in large scale sensor networks with Multiple Sink nodes) which incorporates multiple sink nodes, a new path cost metric for improving path selection, dynamic cluster maintenance and path switching to improve energy efficiency. MRMS is shown to increase the lifetime of sensor nodes substantially compared to other algorithms based on a series of simulation experiments. --- paper_title: Energy-balancing multipath routing protocol for wireless sensor networks paper_content: A Wireless Sensor Network (WSN) is a collection of wireless sensor nodes forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, due to the limited range of each node's wireless transmissions, it may be necessary for one sensor node to ask for the aid of other sensor nodes in forwarding a packet to its destination, usually the base station. One big issue when designing wireless sensor network is the routing protocol to make the best use of the severe resource constraints presented by WSN, especially the energy limitation. In this paper, we propose a new scheme called EBMR: Energy-Balancing Multipath Routing Protocol that uses multipath alternately to prolong the lifetime of the network. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. ---
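The multipath protocols referenced above (adaptive multipath routing, REEM, MRMS, EBMR) share the idea of spreading traffic over several precomputed paths instead of draining a single minimum-energy path. A minimal sketch of that idea follows, assuming each candidate path is scored by its bottleneck residual energy and hop count and then chosen probabilistically; the scoring weights, node names and the tiny simulation are illustrative assumptions, not a reconstruction of any one protocol.

```python
import random

def path_score(path, energy, alpha=1.0, beta=0.5):
    """Score a path by its bottleneck residual energy and its length.

    path   : list of node ids from source to sink
    energy : dict mapping node id -> residual energy
    The weighting (alpha, beta) is illustrative, not taken from a
    specific protocol in the list above.
    """
    bottleneck = min(energy[n] for n in path)
    return (bottleneck ** alpha) / (len(path) ** beta)

def pick_path(paths, energy):
    """Probabilistically pick one of several precomputed paths so that
    traffic is spread in proportion to the path scores."""
    scores = [path_score(p, energy) for p in paths]
    total = sum(scores)
    r = random.uniform(0, total)
    acc = 0.0
    for p, s in zip(paths, scores):
        acc += s
        if r <= acc:
            return p
    return paths[-1]

def send(paths, energy, n_packets=1000, tx_cost=0.01):
    """Simulate forwarding packets and draining energy along the chosen paths."""
    for _ in range(n_packets):
        for node in pick_path(paths, energy):
            energy[node] -= tx_cost

if __name__ == '__main__':
    random.seed(1)
    energy = {n: 10.0 for n in 'ABCDEFG'}
    paths = [['A', 'B', 'C', 'G'], ['A', 'D', 'G'], ['A', 'E', 'F', 'G']]
    send(paths, energy)
    print({n: round(e, 2) for n, e in energy.items()})
```

Because the selection probability tracks residual energy, heavily used relays are gradually avoided, which is the energy-balancing behavior these protocols aim for.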
Title: Wireless Sensor Network Routing Protocols: A Survey Section 1: INTRODUCTION Description 1: This section provides an overview of Wireless Sensor Networks (WSNs) and highlights the challenges in routing due to unique characteristics and diverse applications. It also outlines the scope and objectives of the survey. Section 2: RELATED WORK Description 2: This section reviews the existing literature on WSN routing protocols, identifying gaps and distinguishing the scope of this survey from previous efforts. Section 3: ROUTING PROTOCOLS IN WIRELESS SENSOR NETWORKS Description 3: This section categorizes and discusses various routing protocols for WSNs, providing detailed descriptions of each protocol. Section 4: Network Structure Protocols Description 4: This subsection examines routing protocols based on network structure, including data-centric, hierarchical, and location-based protocols. Section 5: Protocol Operation Routing Protocols Description 5: This subsection explores routing protocols based on operational principles such as multi-path, query-based, negotiation-based, QoS-based, and bio-inspired routing protocols. Section 6: Coherent and Non-coherent Protocols Description 6: This subsection discusses routing protocols focused on data processing strategies, distinguishing between coherent and non-coherent methods. Section 7: Established Path Based Routing Protocol Description 7: This subsection categorizes protocols based on the communication paths they follow, including proactive, reactive, and hybrid protocols. Section 8: Initiator of Communication Description 8: This subsection classifies routing protocols based on whether communication initiation is source-driven or sink-driven. Section 9: CONCLUSION Description 9: This section summarizes the survey findings and proposes future research directions, emphasizing the need for WSN routing protocol integration with wired networks and exploring applications for security and environmental monitoring.
A Survey of Stack-Sorting Disciplines
5
--- paper_title: Computing permutations with double-ended queues, parallel stacks and parallel queues paper_content: A memory may be regarded as a computer with input, output and storage facilities, but with no explicit functional capability. The only possible outputs are permutations of a multiset of its inputs. Thus the natural question to ask of a class of memories is, what permutations can its members compute? We are particularly interested here in switchyard networks studied by Knuth [1968], Even and Itai [1971], and Tarjan [1972], where the permutations are of the set of inputs, rather than of a multiset of them. --- paper_title: The Solution of a Conjecture of Stanley and Wilf for All Layered Patterns paper_content: Proving a conjecture of Wilf and Stanley in hitherto the most general case, we show that for any layered pattern q there is a constant c so that q is avoided by less than c^n permutations of length n. This will imply the solution of this conjecture for at least 2^k patterns of length k, for any k. --- paper_title: Permutations generated by token passing in graphs paper_content: Abstract A transportation graph is a directed graph with a designated input node and a designated output node. Initially, the input node contains an ordered set of tokens 1,2,3, … The tokens are removed from the input node in this order and transferred through the graph to the output node in a series of moves; each move transfers a token from a node to an adjacent node. Two or more tokens cannot reside on an internal node simultaneously. When the tokens arrive at the output node they will appear in a permutation of their original order.
The main result is a description of the possible arrival permutations in terms of regular sets. This description allows the number of arrival permutations of each length to be computed. The theory is then applied to packet-switching networks and has implications for the resequencing problem. It is also applied to some complex data structures and extends previously known results to the case that the data structures are of bounded capacity. A by-product of this investigation is a new proof that permutations which avoid the pattern 321 are in one to one correspondence with those that avoid 312. --- paper_title: Sorting with two ordered stacks in series paper_content: The permutations that can be sorted by two stacks in series are considered, subject to the condition that each stack remains ordered. A forbidden characterisation of such permutations is obtained and the number of permutations of each length is determined by a generating function. --- paper_title: Sorting permutations with networks of stacks paper_content: Sea water contaminated with suspended organic matter and the like can be purified by expanding a mixture of the water with air so as to generate a froth, and letting the froth carrying the impurities overflow from a tubular riser attached to the expansion vessel while purified water is drawn off from the bottom of the vessel. The froth is stabilized by maintaining a layer of more alkaline liquid on the inner wall of the riser. The apparatus employed uses a rotating nozzle for continuously rinsing the wall of the riser with a liquid more alkaline than the froth, such as the original sea water prior to its being mixed with air. --- paper_title: Sorting with parallel pop - stacks paper_content: Detrimental color changes in liquid diphenylamine age-resisters have been inhibited and/or reduced by combining with the age-resisters small amounts of a color inhibitor selected from the group consisting of trialkylol amines, glycol diesters of sulphur-containing monocarboxylic acids, and dialkyl esters of thiodicarboxylic acids in a weight ratio of color inhibitor to age-resister of from 0.25:99.75 to 5:95. --- paper_title: Computing permutations with double-ended queues, parallel stacks and parallel queues paper_content: A memory may be regarded as a computer with input, output and storage facilities, but with no explicit functional capability. The only possible outputs are permutations of a multiset of its inputs. Thus the natural question to ask of a class of memories is, what permutations can its members compute? We are particularly interested here in switchyard networks studied by Knuth [1968], Even and Itai [1971], and Tarjan [1972], where the permutations are of the set of inputs, rather than of a multiset of them. --- paper_title: Raney Paths and a Combinatorial Relationship between Rooted Nonseparable Planar Maps and Two-Stack-Sortable Permutations paper_content: An encoding of the set of two-stack-sortable permutations (TSS) in terms of lattice paths and ordered lists of strings is obtained. These lattice paths are called Raney paths. The encoding yields combinatorial decompositions for two complementary subsets of TSS, which are the analogues of previously known decompositions for the set of nonseparable rooted planar maps (NS). This provides a combinatorial relationship between TSS and NS, and, hence, a bijection is determined between these sets that is different, simpler, and more refined than the previously known bijection. 
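The single-pass stack discipline underlying the models above can be made concrete in a few lines. Below is a minimal Python sketch of the standard greedy rule: push each input element, first popping the stack to the output while its top is smaller than the incoming element. By Knuth's classical result, one pass sorts a permutation exactly when it avoids the pattern 231; the driver examples are illustrative only and do not cover the bounded-capacity or token-passing variants discussed above.

```python
def stack_sort(perm):
    """One greedy pass of a permutation through a stack: push each input
    element, first popping to the output while the stack top is smaller
    than the incoming element, then flush the stack at the end."""
    stack, output = [], []
    for x in perm:
        while stack and stack[-1] < x:
            output.append(stack.pop())
        stack.append(x)
    while stack:
        output.append(stack.pop())
    return output

def one_stack_sortable(perm):
    """Sortable by a single pass through one stack, i.e. (by Knuth's
    classical theorem) the permutation avoids the pattern 231."""
    return stack_sort(perm) == sorted(perm)

if __name__ == '__main__':
    print(stack_sort([2, 3, 1]))           # [2, 1, 3]: one pass does not sort 231
    print(one_stack_sortable([2, 3, 1]))   # False
    print(one_stack_sortable([3, 1, 2]))   # True, since 312 avoids 231
```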
--- paper_title: A proof of Julian West's conjecture that the number of two-stacksortable permutations of length n is 2(3n)!/((n + 1)!(2n + 1)!) paper_content: Abstract The Polya-Schutzenberger-Tutte methodology of weight enumeration, combined with about 10 hours of CPU time (of Maple running on Drexel University's Sun network) established Julian West's conjecture that 2-stack-sortable permutations are enumerated by sequence # 651 in the Sloane listing. --- paper_title: Permutations with forbidden subsequences and nonseparable planar maps paper_content: The goal of the present work is to connect combinatorially a family of maps to a family of permutations with forbidden subsequences. We obtain a generating tree of nonseparable planar rooted maps and show that this tree is the generating tree of a family of permutations. The distribution of these permutations is then obtained. Finally, the different steps leading to the combinatorial proof of West's conjecture are listed. --- paper_title: Restricted permutations and the wreath product paper_content: Restricted permutations are those constrained by having to avoid subsequences ordered in various prescribed ways. A closed set is a set of permutations all satisfying a given basis set of restrictions. A wreath product construction is introduced and it is shown that this construction gives rise to a number of useful techniques for deciding the finite basis question and solving the enumeration problem. Several applications of these techniques are given. --- paper_title: Pattern Matching For Permutations paper_content: Given a permutation T of 1 to n, and a permutation P of 1 to k, for k ⩽ n, we wish to find a k-element subsequence of T whose elements are ordered according to the permutation P. For example, if P is (1, 2, …, k), then we wish to find an increasing subsequence of length k in T; this special case was done in time O(n log log n) by Chang and Wang. We prove that the general problem is NP-complete. We give a polynomial time algorithm for the decision problem, and the corresponding counting problem, in the case that P is separable — i.e., contains neither the subpattern (3, 1, 4, 2) nor its reverse, the subpattern (2, 4, 1, 3). --- paper_title: Sorted and/or sortable permutations paper_content: Abstract In his Ph.D. thesis, Julian West (Permutations with restricted subsequences and stack-sortable permutations, MIT, 1990) studied in depth a map Π that acts on permutations of the symmetric group S n by partially sorting them through a stack. The main motivation of this paper is to characterize and count the permutations of Π( S n ) , which we call sorted permutations. This is equivalent to counting preorders of increasing binary trees. We first find a local characterization of sorted permutations. Then, using an extension of Zeilberger's factorization of two-stack sortable permutations (D. Zeilberger, Discrete Math. 102 (1992) 85–93), we obtain for the generating function of sorted permutations an unusual functional equation. Out of curiosity, we apply the same treatment to four other families of permutations (general permutations, one-stack sortable permutations, two-stack sortable permutations, sorted and sortable permutations) and compare the functional equations we obtain. All of them have similar features, involving a divided difference. Moreover, most of them have interesting q-analogs obtained by counting inversions. We solve (some of) our equations. 
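Several of the abstracts above phrase their results in terms of pattern containment and avoidance (layered patterns, separable patterns, 2-stack-sortable characterizations). For small patterns a naive containment test is enough to experiment with these classes. The sketch below is the brute-force check over all length-k subsequences, not the polynomial-time algorithm for separable patterns mentioned in the pattern-matching abstract; the Catalan comparison in the driver is only a sanity check.

```python
from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if `perm` has a subsequence order-isomorphic to `pattern`.

    Naive check over all length-k index sets; adequate for the small
    examples used when experimenting with avoidance classes.
    """
    k = len(pattern)
    pattern_order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        values = [perm[i] for i in idx]
        # order-isomorphic iff the relative ranks agree with the pattern
        if sorted(range(k), key=lambda i: values[i]) == pattern_order:
            return True
    return False

def avoiders(n, pattern):
    """All permutations of 1..n that avoid `pattern` (brute force)."""
    return [p for p in permutations(range(1, n + 1))
            if not contains_pattern(p, pattern)]

if __name__ == '__main__':
    # Sanity check: |Av_n(321)| and |Av_n(312)| are both the Catalan
    # numbers 1, 2, 5, 14, 42, 132, matching the bijection mentioned
    # in the token-passing abstract above.
    for n in range(1, 7):
        print(n, len(avoiders(n, (3, 2, 1))), len(avoiders(n, (3, 1, 2))))
```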
--- paper_title: A Bijective Census of Nonseparable Planar Maps paper_content: Bijections are obtained between nonseparable planar maps and two different kinds of trees: description trees and skew ternary trees. A combinatorial relation between the latter and ternary trees allows bijective enumeration and random generation of nonseparable planar maps. The involved bijections take account of the usual combinatorial parameters and give a bijective proof of formulae established by Brown and Tutte. These results, combined with a bijection due to Goulden and West, give a purely combinatorial enumeration of two-stack-sortable permutations. --- paper_title: Symmetry and Unimodality in t-Stack Sortable Permutations paper_content: We present the first nontrivial results on t-stack sortable permutations by constructively proving that the sequence Wt(n, k) of the numbers of t-stack sortable permutations with k descents is symmetric and unimodal. --- paper_title: On the Neggers–Stanley Conjecture and the Eulerian Polynomials paper_content: Abstract We prove combinatorially that the W -polynomials of naturally labeled graded posets of rank 1 or 2 (an antichain has rank 0) are unimodal, thus providing further supporting evidence for the Neggers–Stanley conjecture. For such posets we also obtain a combinatorial proof that the W -polynomials are symmetric. Combinatorial proofs that the Eulerian polynomials are log-concave and unimodal are given and we construct a simplicial complex Δ with the property that the Hilbert function of the exterior algebra modulo the Stanley–Reisner ideal of Δ is the sequence of Eulerian numbers, thus providing a combinatorial proof of a result of Brenti. --- paper_title: Hilbert Polynomials in Combinatorics paper_content: We prove that several polynomials naturally arising in combinatorics are Hilbert polynomials of standard graded commutative k-algebras. --- paper_title: Symmetry and Unimodality in t-Stack Sortable Permutations paper_content: We present the first nontrivial results on t-stack sortable permutations by constructively proving that the sequence Wt(n, k) of the numbers of t-stack sortable permutations with k descents is symmetric and unimodal. --- paper_title: On the Neggers–Stanley Conjecture and the Eulerian Polynomials paper_content: Abstract We prove combinatorially that the W -polynomials of naturally labeled graded posets of rank 1 or 2 (an antichain has rank 0) are unimodal, thus providing further supporting evidence for the Neggers–Stanley conjecture. For such posets we also obtain a combinatorial proof that the W -polynomials are symmetric. Combinatorial proofs that the Eulerian polynomials are log-concave and unimodal are given and we construct a simplicial complex Δ with the property that the Hilbert function of the exterior algebra modulo the Stanley–Reisner ideal of Δ is the sequence of Eulerian numbers, thus providing a combinatorial proof of a result of Brenti. --- paper_title: A simplicial complex of 2-stack sortable permutations paper_content: For each n, we construct a simplicial complex whose k-dimensional faces are in one-to-one correspondence with 2-stack sortable permutations of length n having k ascents. ---
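West's t-stack-sortable permutations, whose descent distribution W_t(n, k) is studied in the abstracts above, are those sorted by at most t applications of the stack-sorting map. A small brute-force enumerator reproduces the low-order values of W_t(n, k) and, for t = 2, the total count 2(3n)!/((n+1)!(2n+1)!) from Zeilberger's proof of West's conjecture. The sketch below repeats the one-pass routine so that it is self-contained; it is only a checking tool, not an efficient enumeration method.

```python
from itertools import permutations
from math import factorial

def stack_sort(perm):
    """West's stack-sorting map s (the same one-pass rule sketched above)."""
    stack, out = [], []
    for x in perm:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    out.extend(reversed(stack))
    return out

def is_t_stack_sortable(perm, t):
    """True if at most t applications of s sort the permutation."""
    p = list(perm)
    for _ in range(t):
        p = stack_sort(p)
    return p == sorted(p)

def descents(perm):
    return sum(1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1])

def W(t, n):
    """Descent distribution W_t(n, k) of t-stack-sortable permutations."""
    counts = [0] * n
    for p in permutations(range(1, n + 1)):
        if is_t_stack_sortable(p, t):
            counts[descents(p)] += 1
    return counts

if __name__ == '__main__':
    for n in range(1, 7):
        row = W(2, n)
        closed_form = 2 * factorial(3 * n) // (factorial(n + 1) * factorial(2 * n + 1))
        # The row is symmetric and its sum matches 2(3n)!/((n+1)!(2n+1)!).
        print(n, row, sum(row), closed_form)
```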
Title: A Survey of Stack-Sorting Disciplines Section 1: Introduction Description 1: Provide an overview of the stack sorting problem, its historical context, and the motivation behind studying it. Introduce the main concepts and the scope of the survey. Section 2: Variations on a Single Stack Description 2: Discuss the different generalizations of the basic stack model, including deques, pop-stacks, (r, s)-stacks, and fork-stacks. Explain their sorting abilities and associated enumeration problems. Section 3: Systems of Stacks Description 3: Explore the models involving multiple stacks either in parallel or in series. Discuss key results and challenges in sorting with these systems, including known sorting algorithms and NP-completeness results. Section 4: Enumeration of West t-Stack-Sortable Permutations by Ascents Description 4: Review the enumeration of permutations sorted by West's t-stack model, focusing on properties such as unimodality, log-concavity, and real zeros. Discuss bijections with combinatorial structures like β(1, 0)-trees and related theorems. Section 5: Open Problems and Future Directions Description 5: Highlight unresolved questions and potential areas for future research in the context of stack-sorting and permutation patterns. Suggest new models or extensions to existing ones that could be explored.
A literature survey on robust and efficient eye localization in real-life scenarios
7
--- paper_title: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze paper_content: Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. --- paper_title: Region-based template deformation and masking for eye-feature extraction and description paper_content: We propose an improved method for eye-feature extraction, descriptions, and tracking using deformable templates. Some existing algorithms are exploited to locate the initial position of eye features and then deformable templates are used for extracting and describing the eye features. Rather than using original energy minimization for matching the templates, the region-based approach is proposed for template deformation. Based on the region properties, the new strategy avoids problems such as template shrinking, adjusting the weights of energy terms, failure of orientation adjustment due to some exceptional cases. Our strategies are also coupled with Canny edge operator to give a new back-end processing. By integrating the local edge information from the edge detection and the global collector from our region-based template deformation, this processing stage can generate accurate eye-feature descriptions. Finally, the template deformation process is applied to tracking eye features. --- paper_title: Real-time head tracking from the deformation of eye contours using a piecewise affine camera paper_content: Abstract A computer vision based approach for human–computer interaction through head movements is presented and evaluated in a non-immersive virtual reality context. Once intercepted and tracked in real-time using a piecewise affine camera model and affine-deformable eye contours, user head displacements are estimated and remapped onto the tridimensional graphic environment according to a natural interface metaphor. Both the real-time performance of the tracker and the improved head parameter estimation accuracy – as compared to the one obtainable using globally affine camera models – encourage the use of this approach to support diverse advanced interaction scenarios and applications. --- paper_title: Robust Face Detection Using the Hausdorff Distance paper_content: The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. 
The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set base and rated with a new validation measurement. --- paper_title: Locating and extracting the eye in human face images paper_content: Facial feature extraction is an important step in automated visual interpretation and human face recognition. Among the facial features, the eye plays the most important part in the recognition process. The deformable template can be used in extracting the eye boundaries. However, the weaknesses of the deformable template are that the processing time is lengthy and that its success relies on the initial position of the template. In this paper, the head boundary is first located in a head-and-shoulders image. The approximate positions of the eyes are estimated by means of average anthropometric measures. Corners, the salient features of the eyes, are detected and used to set the initial parameters of the eye templates. The corner detection scheme introduced in this paper can provide accurate information about the corners. Based on the corner positions, we can accurately locate the templates in relation to the eye images and greatly reduce the processing time for the templates. The performance of the deformable template is assessed with and without using the information on corner positions. Experiments show that a saving in execution time of about 40% on average and a better eye boundary representation can be achieved by using the corner information. --- paper_title: Eye Spacing Measurement for Facial Recognition paper_content: Few approaches to automated facial recognition have employed geometric measurement of characteristic features of a human face. Eye spacing measurement has been identified as an important step in achieving this goal. Measurement of spacing has been made by application of the Hough transform technique to detect the instance of a circular shape and of an ellipsoidal shape which approximate the perimeter of the iris and both the perimeter of the sclera and the shape of the region below the eyebrows respectively. Both gradient magnitude and gradient direction were used to handle the noise contaminating the feature space. Results of this application indicate that measurement of the spacing by detection of the iris is the most accurate of these three methods with measurement by detection of the position of the eyebrows the least accurate. However, measurement by detection of the eyebrows' position is the least constrained method. Application of these techniques has led to measurement of a characteristic feature of the human face with sufficient accuracy to merit later inclusion in a full package for automated facial recognition. --- paper_title: Feature extraction from faces using deformable templates paper_content: A method for detecting and describing the features of faces using deformable templates is described. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. 
The template then interacts dynamically with the image, by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the features. This method is demonstrated by showing deformable templates detecting eyes and mouths in real images. --- paper_title: On improving eye feature extraction using deformable templates paper_content: Abstract An improved method of extracting eye features from facial images using eye templates is described. It retains all advantages of the deformable template method originally proposed by A. L. Yuille, P. W. Hallinan and D. S. Cohen ( Int. J. Comput. Vision 99–111 (1989)) and rectifies some of its weaknesses. This is achieved by the following modifications. First, the original eye template and the overall energy function to represent the most salient features of the eye are modified. Secondly, in order to simplify the issue of selecting weights for the energy terms, the value of each energy term is normalized in the range 0–1 and only two different weights are assigned. This weighting schedule does not require expert knowledge therefore it is more user friendly. Thirdly, all parameters of the template are changed simultaneously during the minimization process rather than using a sequential procedure. This scheme prevents some parameters of the eye template from being overly changed, helps the algorithm to converge to the global minimum, and reduces the processing time. The selection of initial parameters of the eye template is based on an eye window obtained in preprocessing. Experimental results are presented to demonstrate the efficacy of the algorithm. A comparison study of various processing schemes is also given. --- paper_title: Face alignment using local hough voting paper_content: We present a novel Hough voting-based method to improve the efficiency and accuracy of fiducial points localization, which can be conveniently integrated with any global prior model for final face alignment. Specifically, two or more stable facial components (e.g., eyes) are first localized and fixed as anchor points, based on which a separate local voting map is constructed for each fiducial point using kernel density estimation. The voting map allows us to effectively constrain the search region of fiducial points by exploiting the local spatial constraints imposed by it. In addition, a multioutput ridge regression method is adopted to align the voting map and the response map of local detectors to the ground truth map, and the learned transformations are then exploited to further increases the robustness of the algorithm against various appearance variations. Encouraging experimental results are given on several publicly available face databases. --- paper_title: Towards a system for automatic facial feature detection paper_content: Abstract A model-based methodology is proposed to detect facial features from a front-view ID-type picture. The system is composed of three modules: context (i.e. face location), eye, and mouth. The context module is a low resolution module which defines a face template in terms of intensity valley regions. The valley regions are detected using morphological filtering and 8-connected blob coloring. The objective is to generate a list of hypothesized face locations ranked by face likelihood. The detailed analysis is left for the high resolution eye and mouth modules.
The aim for both is to confirm as well as refine the locations and shapes of their respective features of interest. The detection is done via a two-step modelling approach based on the Hough transform and the deformable template technique. The results show that facial features can be located very quickly with Adequate or better fit in over 80% of the images with the proposed system. --- paper_title: Feature extraction from faces using deformable templates paper_content: A method for detecting and describing the features of faces using deformable templates is described. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image, by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the features. This method is demonstrated by showing deformable templates detecting eyes and mouths in real images. --- paper_title: Variance projection function and its application to eye detection for human face recognition paper_content: Abstract We present a new approach for eye detection using the variance projection function. The variance projection function is developed and employed to locate landmarks of the human eye which are then used to guide the detection of the eye position and shape. A number of eye images are selected to evaluate the capability of the proposed method and the results are encouraging. --- paper_title: Robust Face Detection Using the Hausdorff Distance paper_content: The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set base and rated with a new validation measurement. --- paper_title: Eye Spacing Measurement for Facial Recognition paper_content: Few approaches to automated facial recognition have employed geometric measurement of characteristic features of a human face. Eye spacing measurement has been identified as an important step in achieving this goal. Measurement of spacing has been made by application of the Hough transform technique to detect the instance of a circular shape and of an ellipsoidal shape which approximate the perimeter of the iris and both the perimeter of the sclera and the shape of the region below the eyebrows respectively. Both gradient magnitude and gradient direction were used to handle the noise contaminating the feature space. Results of this application indicate that measurement of the spacing by detection of the iris is the most accurate of these three methods with measurement by detection of the position of the eyebrows the least accurate. However, measurement by detection of the eyebrows' position is the least constrained method.
Application of these techniques has led to measurement of a characteristic feature of the human face with sufficient accuracy to merit later inclusion in a full package for automated facial recognition. --- paper_title: Feature extraction from faces using deformable templates paper_content: A method for detecting and describing the features of faces using deformable templates is described. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image, by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the features. This method is demonstrated by showing deformable templates detecting eyes and mouths in real images. --- paper_title: Object Detection with Discriminatively Trained Part-Based Models paper_content: We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function. --- paper_title: Variance projection function and its application to eye detection for human face recognition paper_content: We present a new approach for eye detection using the variance projection function. The variance projection function is developed and employed to locate landmarks of the human eye which are then used to guide the detection of the eye position and shape. A number of eye images are selected to evaluate the capability of the proposed method and the results are encouraging. --- paper_title: PRECISE EYE AND MOUTH LOCALIZATION paper_content: The literature on the topic has shown a strong correlation between the degree of precision of face localization and the face recognition performance. Hence, there is a need for precise facial feature detectors, as well as objective measures for their evaluation and comparison. In this paper, we will present significant improvements to a previous method for precise eye center localization, by integrating a module for mouth localization. The technique is based on Support Vector Machines trained on optimally chosen Haar wavelet coefficients. The method has been tested on several public databases; the results are reported and compared according to a standard error measure. The tests show that the algorithm achieves high precision of localization.
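The variance projection function summarized in the entry above is simple enough to sketch directly: for each column (or row) of an eye-region patch it measures the intensity variance, and the resulting profiles peak around high-contrast structures such as the iris and eyelids. The following numpy sketch illustrates that idea; the band-selection heuristic and the patch handling are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def variance_projection_functions(gray_patch):
    """Vertical and horizontal variance projection functions (VPF) of a
    grayscale patch: per-column and per-row intensity variances."""
    patch = gray_patch.astype(np.float64)
    vpf_vertical = patch.var(axis=0)    # variance down each column
    vpf_horizontal = patch.var(axis=1)  # variance along each row
    return vpf_vertical, vpf_horizontal

def coarse_eye_band(gray_face):
    """Pick the horizontal band with the strongest intensity variation as a
    crude eye-band candidate (illustrative heuristic only)."""
    _, vpf_h = variance_projection_functions(gray_face)
    band_centre = int(np.argmax(vpf_h))
    half_height = max(2, gray_face.shape[0] // 10)
    top = max(0, band_centre - half_height)
    bottom = min(gray_face.shape[0], band_centre + half_height)
    return gray_face[top:bottom, :], (top, bottom)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_face = rng.integers(0, 256, size=(120, 100))  # stand-in for a face crop
    band, (top, bottom) = coarse_eye_band(fake_face)
    print("candidate eye band rows:", top, "to", bottom)
```

In the cited work the vertical and horizontal profiles are combined to locate eye corners and the iris; the sketch only shows how the raw projections are computed.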
--- paper_title: Facial feature localization using weighted vector concentration approach paper_content: We propose an efficient and generic facial feature localization method based on a weighted vector concentration approach. Our method does not require any specific priors on facial shape but implicitly learns its structural information from a training data. Unlike previous work, facial feature points are globally estimated by the concentration of directional vectors from sampling points on a face region, and those vectors are weighted by using local likelihood patterns which discriminate the appropriate position of the feature points. The directional vectors and local likelihood patterns are provided through nearest neighbor search between local patterns around the sampling points and a trained codebook of extended templates. The combination of the global vector concentration and the verification with the local likelihood patterns achieves robust facial feature point detection. We demonstrate that our method outperforms state-of-the-art method based on the Active Shape Models in our evaluation. --- paper_title: Pictorial Structures for Object Recognition paper_content: In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images. --- paper_title: On detection of multiple object instances using hough transforms paper_content: To detect multiple objects of interest, the methods based on Hough transform use non-maxima supression or mode seeking in order to locate and to distinguish peaks in Hough images. Such postprocessing requires tuning of extra parameters and is often fragile, especially when objects of interest tend to be closely located. In the paper, we develop a new probabilistic framework that is in many ways related to Hough transform, sharing its simplicity and wide applicability. At the same time, the framework bypasses the problem of multiple peaks identification in Hough images, and permits detection of multiple objects without invoking nonmaximum suppression heuristics. As a result, the experiments demonstrate a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem. --- paper_title: Using geometric properties of topographic manifold to detect and track eyes for human-computer interaction paper_content: Automatic eye detection and tracking is an important component for advanced human-computer interface design. Accurate eye localization can help develop a successful system for face recognition and emotion identification. 
In this article, we propose a novel approach to detect and track eyes using geometric surface features on topographic manifold of eye images. First, in the joint spatial-intensity domain, a facial image is treated as a 3D terrain surface or image topographic manifold. In particular, eye regions exhibit certain intrinsic geometric traits on this topographic manifold, namely, the pit-labeled center and hillside-like surround regions. Applying a terrain classification procedure on the topographic manifold of facial images, each location of the manifold can be labeled to generate a terrain map. We use the distribution of terrain labels to represent the eye terrain pattern. The Bhattacharyya affinity is employed to measure the distribution similarity between two topographic manifolds. Based on the Bhattacharyya kernel, a support vector machine is applied for selecting proper eye pairs from the pit-labeled candidates. Second, given detected eyes on the first frame of a video sequence, a mutual-information-based fitting function is defined to describe the similarity between two terrain surfaces of neighboring frames. By optimizing the fitting function, eye locations are updated for subsequent frames. The distinction of the proposed approach lies in that both eye detection and eye tracking are performed on the derived topographic manifold, rather than on an original-intensity image domain. The robustness of the approach is demonstrated under various imaging conditions and with different facial appearances, using both static images and video sequences without background constraints. --- paper_title: Detection of eye locations in unconstrained visual images paper_content: This paper describes a computational approach for accurately determining the location of human eyes in unconstrained monoscopic gray level images. The proposed method is based on exploiting the flow field characteristics that arise due to the presence of a dark iris surrounded by a light sclera. A novel aspect of the proposed method lies in its use of both spatial and temporal information to detect the location of the eyes. The spatial processing utilizes flow field information to select a pool of potential candidate locations for the eyes. Temporal processing uses the principle of continuity to filter out the actual location of the eyes from the pool of potential candidates. Extensions for gaze angle determination, and the tracking of human point-of-regard are indicated. --- paper_title: Robust Object Detection with Interleaved Categorization and Segmentation paper_content: This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. ::: ::: The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. 
This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. ::: ::: An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems. --- paper_title: An Implicit Shape Model for Combined Object Categorization and Segmentation paper_content: We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. --- paper_title: Fast PRISM: Branch and Bound Hough Transform for Object Class Detection paper_content: This paper addresses the task of efficient object class detection by means of the Hough transform. This approach has been made popular by the Implicit Shape Model (ISM) and has been adopted many times. Although ISM exhibits robust detection performance, its probabilistic formulation is unsatisfactory. The PRincipled Implicit Shape Model (PRISM) overcomes these problems by interpreting Hough voting as a dual implementation of linear sliding-window detection. It thereby gives a sound justification to the voting procedure and imposes minimal constraints. We demonstrate PRISM's flexibility by two complementary implementations: a generatively trained Gaussian Mixture Model as well as a discriminatively trained histogram approach. Both systems achieve state-of-the-art performance. Detections are found by gradient-based or branch and bound search, respectively. The latter greatly benefits from PRISM's feature-centric view. It thereby avoids the unfavourable memory trade-off and any on-line pre-processing of the original Efficient Subwindow Search (ESS). Moreover, our approach takes account of the features' scale value while ESS does not. Finally, we show how to avoid soft-matching and spatial pyramid descriptors during detection without losing their positive effect. This makes algorithms simpler and faster. Both are possible if the object model is properly regularised and we discuss a modification of SVMs which allows for doing so. --- paper_title: The Representation and Matching of Pictorial Structures paper_content: The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of "goodness" of matching or detection. --- paper_title: Real-time detection of nodding and head-shaking by directly detecting and tracking the "between-eyes" paper_content: Among head gestures, nodding and head-shaking are very common and used often. 
Thus the detection of such gestures is basic to a visual understanding of human responses. However it is difficult to detect them in real-time, because nodding and head-shaking are fairly small and fast head movements. We propose an approach for detecting nodding and head-shaking in real time from a single color video stream by directly detecting and tracking a point between the eyes, or what we call the "between-eyes". Along a circle of a certain radius centered at the "between-eyes", the pixel value has two cycles of bright parts (forehead and nose bridge) and dark parts (eyes and brows). The output of the proposed circle-frequency filter has a local maximum at these characteristic points. To distinguish the true "between-eyes" from similar characteristic points in other face parts, we do a confirmation with eye detection. Once the "between-eyes" is detected, a small area around it is copied as a template and the system enters the tracking mode. Combining with the circle-frequency filtering and the template, the tracking is done not by searching around but by selecting candidates using the template; the template is then updated. Due to this special tracking algorithm, the system can track the "between-eyes" stably and accurately. It runs at 13 frames/s rate without special hardware. By analyzing the movement of the point, we can detect nodding and head-shaking. Some experimental results are shown. --- paper_title: Illumination Invariant Face Recognition Using Near-Infrared Images paper_content: Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in thus-constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination invariant face recognition for indoor, cooperative-user applications. First, we present an active near infrared (NIR) imaging system that is able to produce face images of good condition regardless of visible lights in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract most discriminative features from a large pool of invariant LBP features and construct a highly accurate face matching engine. Finally, we present a system that is able to achieve accurate and fast face recognition in practice, in which a method is provided to deal with specular reflections of active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive, comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems, with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic groups --- paper_title: Real-time eye detection and tracking under various light conditions paper_content: Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to lighting condition change. 
This tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. Based on combining the bright-pupil effect resulted from IR light and the conventional appearance-based object recognition technique, our method can robustly track eyes when the pupils are not very bright due to significant external illumination interferences. The appearance model is incorporated in both eyes detection and tracking via the use of support vector machine and the mean shift tracking. Additional improvement is achieved from modifying the image acquisition apparatus including the illuminator and the camera. --- paper_title: Robust Real-Time Eye Detection and Tracking Under Variable Lighting Conditions and Various Face Orientations paper_content: Most eye trackers based on active IR illumination require distinctive bright pupil effect to work well. However, due to a variety of factors such as eye closure, eye occlusion, and external illumination interference, pupils are not bright enough for these methods to work well. This tends to significantly limit their scope of application. In this paper, we present an integrated eye tracker to overcome these limitations. By combining the latest technologies in appearance-based object recognition and tracking with active IR illumination, our eye tracker can robustly track eyes under variable and realistic lighting conditions and under various face orientations. In addition, our integrated eye tracker is able to handle occlusion, glasses, and to simultaneously track multiple people with different distances and poses to the camera. Results from an extensive experiment shows a significant improvement of our technique over existing eye tracking techniques. --- paper_title: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze paper_content: Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. --- paper_title: Locating and extracting the eye in human face images paper_content: Facial feature extraction is an important step in automated visual interpretation and human face recognition. Among the facial features, the eye plays the most important part in the recognition process. The deformable template can be used in extracting the eye boundaries. However, the weaknesses of the deformable template are that the processing time is lengthy and that its success relies on the initial position of the template. 
In this paper, the head boundary is first located in a head-and-shoulders image. The approximate positions of the eyes are estimated by means of average anthropometric measures. Corners, the salient features of the eyes, are detected and used to set the initial parameters of the eye templates. The corner detection scheme introduced in this paper can provide accurate information about the corners. Based on the corner positions, we can accurately locate the templates in relation to the eye images and greatly reduce the processing time for the templates. The performance of the deformable template is assessed with and without using the information on corner positions. Experiments show that a saving in execution time of about 40% on average and a better eye boundary representation can be achieved by using the corner information. --- paper_title: Automatic Eye Detection and Its Validation paper_content: The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions. --- paper_title: Eye localization in low and standard definition content with application to face matching paper_content: In this paper we address the problem of eye localization for the purpose of face matching in low and standard definition image and video content. In addition to an explorative study that aimed at discovering the effect of eye localization accuracy on face matching performance, we also present a probabilistic eye localization method based on well-known multi-scale local binary patterns (LBPs). These patterns provide a simple but powerful spatial description of texture, and are robust to the noise typical to low and standard definition content. The extensive evaluation involving multiple eye localizers and face matchers showed that the shape of the eye localizer error distribution has a big impact on face matching performance. Conditioned by the error distribution shape and the minimum required eye localization accuracy, eye localization can boost the performance of naive face matchers and allow for more efficient face matching without degrading its performance. The evaluation also showed that our proposed method has superior accuracy with respect to the state-of-the-art on eye localization, and that it fulfills the criteria for improving the face matching performance and efficiency mentioned above. --- paper_title: A performance evaluation of local descriptors paper_content: In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. 
The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Sept. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Facial feature detection with optimal pixel reduction SVM paper_content: Automatic facial feature localization has been a long-standing challenge in the field of computer vision for several decades. This can be explained by the large variation a face in an image can have due to factors such as position, facial expression, pose, illumination, and background clutter. Support Vector Machines (SVMs) have been a popular statistical tool for facial feature detection. Traditional SVM approaches to facial feature detection typically extract features from images (e.g. multiband filter, SIFT features) and learn the SVM parameters. Independently learning features and SVM parameters might result in a loss of information related to the classification process. This paper proposes an energy-based framework to jointly perform relevant feature weighting and SVM parameter learning. Preliminary experiments on standard face databases have shown significant improvement in speed with our approach. --- paper_title: Precise eye localization through a general-to-specific model definition paper_content: We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate. --- paper_title: A novel pupil localization method based on GaborEye model and radial symmetry operator paper_content: The eyes are the most important facial landmarks on the human face. The accuracy of face normalization, which is critical to the performance of the following face analysis steps, depends on the locations of the two eyes, as well as their relatively constant interocular distance.
In this paper, we propose a novel GaborEye model for eye localization. Based on the special gray distribution in the eye-and-brow region, a proper Gabor kernel is adaptively chosen to convolute with the face image to highlight the eye-and-brow region, which can be exploited to segment the two pupil regions efficiently. After getting the region of the pupil, a fast radial symmetry operator is used to locate the center of the pupil. Extensive experiments show that the method can accurately locate the pupils, and it is robust to the variations of face poses, expressions, accessories and illuminations. --- paper_title: Face recognition with learning-based descriptor paper_content: We present a novel approach to address the representation issue and the matching issue in face recognition (verification). Firstly, our approach encodes the micro-structures of the face by a new learning-based encoding method. Unlike many previous manually designed encoding methods (e.g., LBP or SIFT), we use unsupervised learning techniques to learn an encoder from the training examples, which can automatically achieve very good tradeoff between discriminative power and invariance. Then we apply PCA to get a compact face descriptor. We find that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor. The resulting face representation, learning-based (LE) descriptor, is compact, highly discriminative, and easy-to-extract. To handle the large pose variation in real-life scenarios, we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations (e.g., frontal v.s. frontal, frontal v.s. left) of the matching face pair. Our approach is comparable with the state-of-the-art methods on the Labeled Face in Wild (LFW) benchmark (we achieved 84.45% recognition rate), while maintaining excellent compactness, simplicity, and generalization ability across different datasets. --- paper_title: Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions paper_content: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. 
The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions. --- paper_title: Locally Assembled Binary (LAB) feature with feature-centric cascade for fast and accurate face detection paper_content: In this paper, we describe a novel type of feature for fast and accurate face detection. The feature is called Locally Assembled Binary (LAB) Haar feature. LAB feature is basically inspired by the success of Haar feature and Local Binary Pattern (LBP) for face detection, but it is far beyond a simple combination. In our method, Haar features are modified to keep only the ordinal relationship (named by binary Haar feature) rather than the difference between the accumulated intensities. Several neighboring binary Haar features are then assembled to capture their co-occurrence with similar idea to LBP. We show that the feature is more efficient than Haar feature and LBP both in discriminating power and computational cost. Furthermore, a novel efficient detection method called feature-centric cascade is proposed to build an efficient detector, which is developed from the feature-centric method. Experimental results on the CMU+MIT frontal face test set and CMU profile test set show that the proposed method can achieve very good results and amazing detection speed. --- paper_title: Eye localization through multiscale sparse dictionaries paper_content: This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method. --- paper_title: Face Description with Local Binary Patterns: Application to Face Recognition paper_content: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed --- paper_title: Regression and classification approaches to eye localization in face images paper_content: We address the task of accurately localizing the eyes in face images extracted by a face detector, an important problem to be solved because of the negative effect of poor localization on face recognition accuracy. 
We investigate three approaches to the task: a regression approach aiming to directly minimize errors in the predicted eye positions, a simple Bayesian model of eye and non-eye appearance, and a discriminative eye detector trained using AdaBoost. By using identical training and test data for each method we are able to perform an unbiased comparison. We show that, perhaps surprisingly, the simple Bayesian approach performs best on databases including challenging images, and performance is comparable to more complex state-of-the-art methods. --- paper_title: Training-Free, Generic Object Detection Using Locally Adaptive Regression Kernels paper_content: We present a generic detection/localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions. --- paper_title: The FERET evaluation methodology for face-recognition algorithms paper_content: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1199 individuals are included in the FERET database, which is divided into development and sequestered portions. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to (1) assess the state of the art, (2) identify future areas of research, and (3) measure algorithm performance on large databases. --- paper_title: Average of Synthetic Exact Filters paper_content: This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. 
The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time. --- paper_title: Automatic Eye Detection and Its Validation paper_content: The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions. --- paper_title: Learning to localize objects with structured output regression paper_content: Sliding window classifiers are among the most successful and widely applied techniques for object localization. However, training is typically done in a way that is not specific to the localization task. First a binary classifier is trained using a sample of positive and negative examples, and this classifier is subsequently applied to multiple regions within test images. We propose instead to treat object localization in a principled way by posing it as a problem of predicting structured data: we model the problem not as binary classification, but as the prediction of the bounding box of objects located in images. The use of a joint-kernelframework allows us to formulate the training procedure as a generalization of an SVM, which can be solved efficiently. We further improve computational efficiency by using a branch-and-bound strategy for localization during both training and testing. Experimental evaluation on the PASCAL VOC and TU Darmstadt datasets show that the structured training procedure improves performance over binary training as well as the best previously published scores. --- paper_title: Neural network-based face detection paper_content: We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates. 
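The "Average of Synthetic Exact Filters" entry above builds, for every training face, an exact frequency-domain filter that maps the image to a desired correlation output (a sharp peak at the annotated eye centre) and then averages those filters. A minimal numpy sketch of that construction is given below; the Gaussian peak width and the regularization constant are illustrative assumptions, and the paper's image preprocessing steps are omitted.

```python
import numpy as np

def gaussian_peak(shape, centre, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the annotated eye centre."""
    rows, cols = np.indices(shape)
    cy, cx = centre
    return np.exp(-((rows - cy) ** 2 + (cols - cx) ** 2) / (2.0 * sigma ** 2))

def train_asef_like_filter(images, eye_centres, sigma=2.0, eps=1e-5):
    """Average of per-image exact filters, computed in the Fourier domain.
    images: equally sized 2-D grayscale arrays; eye_centres: (row, col) labels.
    Returns the (conjugate) filter spectrum used directly for correlation."""
    shape = images[0].shape
    filter_sum = np.zeros(shape, dtype=np.complex128)
    for img, centre in zip(images, eye_centres):
        F = np.fft.fft2(img.astype(np.float64))
        G = np.fft.fft2(gaussian_peak(shape, centre, sigma))
        # Exact filter for this image: element-wise, regularized division.
        filter_sum += G * np.conj(F) / (F * np.conj(F) + eps)
    return filter_sum / len(images)

def locate_eye(filter_spectrum, image):
    """Correlate a test image with the averaged filter and return the peak."""
    response = np.real(
        np.fft.ifft2(np.fft.fft2(image.astype(np.float64)) * filter_spectrum))
    return np.unravel_index(np.argmax(response), response.shape)
```

The peak of the correlation response is taken as the eye estimate; in practice one filter would be trained per eye and applied inside a detected face region.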
--- paper_title: View-based and modular eigenspaces for face recognition paper_content: We describe experiments with eigenfaces for recognition and interactive search in a large-scale face database. Accurate visual recognition is demonstrated using a database of O(10^3) faces. The problem of recognition under general viewing orientation is also examined. A view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose. In addition, a modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields higher recognition rates as well as a more robust framework for face recognition. An automatic feature extraction technique using feature eigentemplates is also demonstrated. --- paper_title: Max-margin additive classifiers for detection paper_content: We present methods for training high quality object detectors very quickly. The core contribution is a pair of fast training algorithms for piece-wise linear classifiers, which can approximate arbitrary additive models. The classifiers are trained in a max-margin framework and significantly outperform linear classifiers on a variety of vision datasets. We report experimental results quantifying training time and accuracy on image classification tasks and pedestrian detection, including detection results better than the best previous on the INRIA dataset with faster training. --- paper_title: Robust precise eye location by adaboost and SVM techniques paper_content: This paper presents a novel approach for eye detection using a hierarchy cascade classifier based on Adaboost statistical learning method combined with SVM (Support Vector Machines) post classifier. On the first stage a face detector is used to locate the face in the whole image. After finding the face, an eye detector is used to detect the possible eye candidates within the face areas. Finally, the precise eye positions are decided by the eye-pair SVM classifiers which use geometrical and relative position information of the eye pair and the face. Experimental results show that this method can effectively cope with various image conditions and achieve better location performance on diverse test sets than some newly proposed methods. --- paper_title: Facial feature detection with optimal pixel reduction SVM paper_content: Automatic facial feature localization has been a long-standing challenge in the field of computer vision for several decades. This can be explained by the large variation a face in an image can have due to factors such as position, facial expression, pose, illumination, and background clutter. Support Vector Machines (SVMs) have been a popular statistical tool for facial feature detection. Traditional SVM approaches to facial feature detection typically extract features from images (e.g. multiband filter, SIFT features) and learn the SVM parameters. Independently learning features and SVM parameters might result in a loss of information related to the classification process. This paper proposes an energy-based framework to jointly perform relevant feature weighting and SVM parameter learning. Preliminary experiments on standard face databases have shown significant improvement in speed with our approach.
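Several of the entries above follow the same coarse-to-fine recipe: a boosted cascade finds the face, an eye detector proposes candidates inside it, and a verification step keeps only geometrically plausible eye pairs. The sketch below reproduces that pipeline with OpenCV's stock Haar cascades standing in for the papers' own trained detectors; the pairing thresholds are illustrative assumptions.

```python
import cv2

# Stock OpenCV cascades are used here only as stand-ins for the AdaBoost-trained
# face and eye detectors described in the entries above.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_pairs(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    results = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Eyes are searched only in the upper half of each detected face.
        roi = gray[fy:fy + fh // 2, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        centres = [(fx + ex + ew // 2, fy + ey + eh // 2)
                   for (ex, ey, ew, eh) in eyes]
        # Geometric verification of candidate pairs: roughly level and plausibly
        # spaced relative to the face width (illustrative thresholds).
        for i in range(len(centres)):
            for j in range(i + 1, len(centres)):
                (x1, y1), (x2, y2) = centres[i], centres[j]
                dx, dy = abs(x1 - x2), abs(y1 - y2)
                if 0.25 * fw < dx < 0.75 * fw and dy < 0.15 * fh:
                    results.append(((x1, y1), (x2, y2)))
    return results
```

In the cited AdaBoost-plus-SVM work the surviving candidate pairs are additionally scored by an eye-pair SVM; only the geometric check is shown here.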
--- paper_title: For your eyes only paper_content: In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low-light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering. --- paper_title: Precise eye localization through a general-to-specific model definition paper_content: We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate. --- paper_title: Robust precise eye location under probabilistic framework paper_content: Eye feature location is an important step in automatic visual interpretation and human face recognition. In this paper, a novel approach for locating eye centers in face areas under probabilistic framework is devised. After grossly locating a face, we first find the areas which left and right eyes lies in. Then an appearance-based eye detector is used to detect the possible left and right eye separately. According to their probabilities, the candidates are subsampled to merge those in near positions. Finally, the remaining left and right eye candidates are paired; each possible eye pair is normalized and verified. According to their probabilities, the precise eye positions are decided. The experimental results demonstrate that our method can effectively cope with different eye variations and achieve better location performance on diverse test sets than some newly proposed methods. And the influence of precision of eye location on face recognition is also probed. The location of other face organs such as mouth and nose can be incorporated in the framework easily. --- paper_title: Beyond sliding windows: Object localization by efficient subwindow search paper_content: Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. 
In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition. --- paper_title: Classification using intersection kernel support vector machines is efficient paper_content: Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1 - chi2 and other kernels of similar form. We also introduce novel features based on a multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13% lower at 10-6 False Positive Per Window than the linear SVM detector of Dalal & Triggs. On the Daimler Chrysler pedestrian dataset IKSVM gives comparable accuracy to the best results (based on quadratic SVM), while being 15times faster. In these experiments our approximate IKSVM is up to 2000times faster than a standard implementation and requires 200times less memory. Finally we show that a 50times speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech 101 dataset with negligible loss of accuracy. --- paper_title: Locally Assembled Binary (LAB) feature with feature-centric cascade for fast and accurate face detection paper_content: In this paper, we describe a novel type of feature for fast and accurate face detection. The feature is called Locally Assembled Binary (LAB) Haar feature. LAB feature is basically inspired by the success of Haar feature and Local Binary Pattern (LBP) for face detection, but it is far beyond a simple combination. In our method, Haar features are modified to keep only the ordinal relationship (named by binary Haar feature) rather than the difference between the accumulated intensities. Several neighboring binary Haar features are then assembled to capture their co-occurrence with similar idea to LBP. We show that the feature is more efficient than Haar feature and LBP both in discriminating power and computational cost. Furthermore, a novel efficient detection method called feature-centric cascade is proposed to build an efficient detector, which is developed from the feature-centric method. 
Experimental results on the CMU+MIT frontal face test set and CMU profile test set show that the proposed method can achieve very good results and amazing detection speed. --- paper_title: Fast PRISM: Branch and Bound Hough Transform for Object Class Detection paper_content: This paper addresses the task of efficient object class detection by means of the Hough transform. This approach has been made popular by the Implicit Shape Model (ISM) and has been adopted many times. Although ISM exhibits robust detection performance, its probabilistic formulation is unsatisfactory. The PRincipled Implicit Shape Model (PRISM) overcomes these problems by interpreting Hough voting as a dual implementation of linear sliding-window detection. It thereby gives a sound justification to the voting procedure and imposes minimal constraints. We demonstrate PRISM's flexibility by two complementary implementations: a generatively trained Gaussian Mixture Model as well as a discriminatively trained histogram approach. Both systems achieve state-of-the-art performance. Detections are found by gradient-based or branch and bound search, respectively. The latter greatly benefits from PRISM's feature-centric view. It thereby avoids the unfavourable memory trade-off and any on-line pre-processing of the original Efficient Subwindow Search (ESS). Moreover, our approach takes account of the features' scale value while ESS does not. Finally, we show how to avoid soft-matching and spatial pyramid descriptors during detection without losing their positive effect. This makes algorithms simpler and faster. Both are possible if the object model is properly regularised and we discuss a modification of SVMs which allows for doing so. --- paper_title: Eye localization through multiscale sparse dictionaries paper_content: This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method. --- paper_title: Regression and classification approaches to eye localization in face images paper_content: We address the task of accurately localizing the eyes in face images extracted by a face detector, an important problem to be solved because of the negative effect of poor localization on face recognition accuracy. We investigate three approaches to the task: a regression approach aiming to directly minimize errors in the predicted eye positions, a simple Bayesian model of eye and non-eye appearance, and a discriminative eye detector trained using AdaBoost. By using identical training and test data for each method we are able to perform an unbiased comparison. We show that, perhaps surprisingly, the simple Bayesian approach performs best on databases including challenging images, and performance is comparable to more complex state-of-the-art methods. --- paper_title: Online domain adaptation of a pre-trained cascade of classifiers paper_content: Many classifiers are trained with massive training sets only to be applied at test time on data from a different distribution. 
How can we rapidly and simply adapt a classifier to a new test distribution, even when we do not have access to the original training data? We present an on-line approach for rapidly adapting a “black box” classifier to a new test data set without retraining the classifier or examining the original optimization criterion. Assuming the original classifier outputs a continuous number for which a threshold gives the class, we reclassify points near the original boundary using a Gaussian process regression scheme. We show how this general procedure can be used in the context of a classifier cascade, demonstrating performance that far exceeds state-of-the-art results in face detection on a standard data set. We also draw connections to work in semi-supervised learning, domain adaptation, and information regularization. --- paper_title: A robust eye localization method for low quality face images paper_content: Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments. --- paper_title: Pictorial Structures for Object Recognition paper_content: In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images. --- paper_title: Automatic Facial Expression Analysis A Survey paper_content: The Automatic Facial Expression Recognition has been one of the latest research topic since 1990’s.There have been recent advances in detecting face, facial expression recognition and classification. 
Multiple methods have been devised for facial feature extraction, which help in identifying faces and facial expressions. This paper surveys work published from 2003 to date. Various methods for identifying facial expressions are analysed. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units and the methods that recognize action unit parameters from extracted facial expression data. The various kinds of facial expressions present in the human face can be identified based on their geometric features, appearance features and hybrid features. The two basic concepts of extracting features are based on facial deformation and facial motion. This article also identifies techniques based on the characteristics of expressions and classifies the methods that are suitable for implementation. --- paper_title: Average of Synthetic Exact Filters paper_content: This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time. --- paper_title: Eye localization in low and standard definition content with application to face matching paper_content: In this paper we address the problem of eye localization for the purpose of face matching in low and standard definition image and video content. In addition to an explorative study that aimed at discovering the effect of eye localization accuracy on face matching performance, we also present a probabilistic eye localization method based on well-known multi-scale local binary patterns (LBPs). These patterns provide a simple but powerful spatial description of texture, and are robust to the noise typical to low and standard definition content. The extensive evaluation involving multiple eye localizers and face matchers showed that the shape of the eye localizer error distribution has a big impact on face matching performance. Conditioned by the error distribution shape and the minimum required eye localization accuracy, eye localization can boost the performance of naive face matchers and allow for more efficient face matching without degrading its performance. The evaluation also showed that our proposed method has superior accuracy with respect to the state-of-the-art on eye localization, and that it fulfills the criteria for improving the face matching performance and efficiency mentioned above. --- paper_title: Face detection in color images paper_content: Human face detection is often the first step in applications such as video surveillance, human computer interface, face recognition, and image database management.
We propose a face detection algorithm for color images in the presence of varying lighting conditions as well as complex backgrounds. Our method detects skin regions over the entire image, and then generates face candidates based on the spatial arrangement of these skin patches. The algorithm constructs eye, mouth, and boundary maps for verifying each face candidate. Experimental results demonstrate successful detection over a wide variety of facial variations in color, position, scale, rotation, pose, and expression from several photo collections. --- paper_title: What Is the Set of Images of an Object Under All Possible Illumination Conditions? paper_content: The appearance of an object depends on both the viewpoint from which it is observed and the light sources by which it is illuminated. If the appearance of two objects is never identical for any pose or lighting conditions, then–in theory–the objects can always be distinguished or recognized. The question arises: What is the set of images of an object under all lighting conditions and pose? In this paper, we consider only the set of images of an object under variable illumination, including multiple, extended light sources and shadows. We prove that the set of n-pixel images of a convex object with a Lambertian reflectance function, illuminated by an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in R^n and that the dimension of this illumination cone equals the number of distinct surface normals. Furthermore, the illumination cone can be constructed from as few as three images. In addition, the set of n-pixel images of an object of any shape and with a more general reflectance function, seen under all possible illumination conditions, still forms a convex cone in R^n. Extensions of these results to color images are presented. These results immediately suggest certain approaches to object recognition. Throughout, we present results demonstrating the illumination cone representation. --- paper_title: Facial feature detection using distance vector fields paper_content: A novel method for eye and mouth detection and eye center and mouth corner localization, based on geometrical information is presented in this paper. First, a face detector is applied to detect the facial region, and the edge map of this region is calculated. The distance vector field of the face is extracted by assigning to every facial image pixel a vector pointing to the closest edge pixel. The x and y components of these vectors are used to detect the eyes and mouth regions. Luminance information is used for eye center localization, after removing unwanted effects, such as specular highlights, whereas the hue channel of the lip area is used for the detection of the mouth corners. The proposed method has been tested on the XM2VTS and BioID databases, with very good results. --- paper_title: Fiducial point localization in color images of face foregrounds paper_content: Abstract We describe a method for the automatic identification of facial features (eyes, nose, mouth and chin) and the precise localization of their fiducial points (e.g. nose tip, mouth and eye corners) in color images of face foregrounds. 
The algorithm requires as input 2D color images, representing face foregrounds with homogeneous background; it is scale-independent, it deals with either frontal, rotated (up to 30°) or slightly tilted (up to 10°) faces, and it is robust to different facial expressions, requiring the mouth closed and the eyes opened, and no wearing glasses. The method proceeds with subsequent refinements: first, it identifies the sub images containing each feature, afterwards, it processes the single features separately by a blend of techniques which use both color and shape information. The system has been tested on three databases: the XM2VTS database, the University of Stirling database, and ours, for a total of 1650 images. The obtained results are described quantitatively and discussed. --- paper_title: Acquiring linear subspaces for face recognition under variable lighting paper_content: Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition. --- paper_title: Automatic eye detection using intensity filtering and K-means clustering paper_content: This paper proposes a novel eye detection method, which can locate the accurate positions of the eyes from frontal face images. The proposed method is robust to pose changes, different facial expressions and illumination variations. Initially, it utilizes image enhancement, Gabor transformation and cluster analysis to extract eye windows. It then localizes the pupil centers by applying two neighborhood operators within the eye windows. Experiments with the color FERET and the LFW (Labeled Face in the Wild) datasets (including a total of 3587 images) are used to evaluate this method. The experimental results demonstrate the consistent robustness and efficiency of the proposed method. 
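The intensity-filtering and clustering pipeline summarized in the entry above can be illustrated with a minimal sketch: Gabor responses of a face crop are clustered with K-means, and the cluster with the strongest filter energy is kept as a candidate eye/brow mask. This is an illustrative reconstruction only, assuming OpenCV, NumPy and scikit-learn; the function name `candidate_eye_regions` and all parameter values are our own choices rather than those of the cited paper, and the final pupil-center refinement with neighborhood operators is omitted.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def candidate_eye_regions(face_gray, n_clusters=4):
    """Cluster per-pixel Gabor responses of a grayscale face crop and return a
    binary mask for the cluster with the highest mean Gabor energy (eye/brow-like)."""
    face = cv2.equalizeHist(face_gray)                      # simple image enhancement
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):            # 4 orientations
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        responses.append(np.abs(cv2.filter2D(face, cv2.CV_32F, kern)))
    feats = np.stack(responses + [face.astype(np.float32)], axis=-1)
    h, w, d = feats.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        feats.reshape(-1, d)).reshape(h, w)
    gabor_energy = feats[:, :, :-1].mean(axis=-1)
    best = int(np.argmax([gabor_energy[labels == k].mean()
                          for k in range(n_clusters)]))
    return (labels == best).astype(np.uint8) * 255          # candidate eye windows
```

Connected components of the returned mask would then be scored and refined to pupil centers, as the abstract describes.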
--- paper_title: Combining Face and Eye Detectors in a High- Performance Face-Detection System paper_content: A combined face and eye detector system based on multiresolution local ternary patterns and local phase quantization descriptors can achieve noticeable performance improvements by extracting features locally. --- paper_title: Facial features detection robust to pose, illumination and identity paper_content: This paper addresses the problem of automatic detection of salient facial features. Face images are described using local normalized Gaussian receptive fields. Face features are learned using a clustering of the Gaussian derivative responses. We have found that a single cluster provides a robust detector for salient facial features robust to pose, illumination and identity. In this paper we describe how this cluster is learned and which facial features have found to be salient --- paper_title: Face recognition system using local autocorrelations and multiscale integration paper_content: In this paper we investigate the performance of a technique for face recognition based on the computation of 25 local autocorrelation coefficients. We use a large database of 11,600 frontal facial images of 116 persons, organized in training and test sets, for evaluation. Autocorrelation coefficients are computationally inexpensive, inherently shift-invariant and quite robust against changes in facial expression. We focus on the difficult problem of recognizing a large number of known human faces while rejecting other, unknown faces which lie quite close in pattern space. A multiresolution system achieves a recognition rate of 95%, while falsely accepting only 1.5% of unknown faces. It operates at a speed of about one face per second. Without rejection of unknown faces, we obtain a peak recognition rate of 99.9%. The good performance indicates that local autocorrelation coefficients have a surprisingly high information content. --- paper_title: A novel pupil localization method based on GaborEye model and radial symmetry operator paper_content: The eyes are the most important facial landmarks on the human face. The accuracy of face normalization, which is critical to the performance of the following face analysis steps, depends on the locations of the two eyes, as well as their relatively constant interocular distance. In this paper, we propose a novel GaborEye model for eye localization. Based on the special gray distribution in the eye-and-brow region, a proper Gabor kernel is adaptively chosen to convolute with the face image to highlight the eye-and-brow region, which can be exploited to segment the two pupil regions efficiently. After getting the region of the pupil, a fast radial symmetry operator is used to locate the center of the pupil. Extensive experiments show that the method can accurately locate the pupils, and it is robust to the variations of face poses, expressions, accessories and illuminations. --- paper_title: Robust precise eye location under probabilistic framework paper_content: Eye feature location is an important step in automatic visual interpretation and human face recognition. In this paper, a novel approach for locating eye centers in face areas under probabilistic framework is devised. After grossly locating a face, we first find the areas which left and right eyes lies in. Then an appearance-based eye detector is used to detect the possible left and right eye separately. 
According to their probabilities, the candidates are subsampled to merge those in near positions. Finally, the remaining left and right eye candidates are paired; each possible eye pair is normalized and verified. According to their probabilities, the precise eye positions are decided. The experimental results demonstrate that our method can effectively cope with different eye variations and achieve better location performance on diverse test sets than some newly proposed methods. And the influence of precision of eye location on face recognition is also probed. The location of other face organs such as mouth and nose can be incorporated in the framework easily. --- paper_title: A pupil localization algorithm based on adaptive Gabor filtering and negative radial symmetry paper_content: The pupil localization algorithm is very important for a face recognition system. Traditional pupil localization algorithms are easy to be affected by uneven illuminations and accessories. Aiming at those limitations, a novel pupil localization algorithm is proposed in this paper. The algorithm firstly implements face image tilt adjustment and extracts eye region through the horizontal intensity gradient integral projection and Gabor filtering. Then in order to increase the eye detection accuracy, PCA is applied to select Gabor filter, and the projection enhancement algorithm is presented. At last the Negative Radial Symmetry is presented to locate the pupil position precisely in eye windows. Experimental results show that the method can locate the pupil position accurately, and demonstrate robustness to uneven lighting, noise, accessories and pose variations. --- paper_title: Face Recognition: The Problem of Compensating for Changes in Illumination Direction paper_content: A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations because of changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicated that none of the representations considered is sufficient by itself to overcome image variations because of a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems based only on such representations failed to recognize up to 20 percent of the faces in our database. Humans performed considerably better under the same conditions. We discuss possible reasons for this superiority and alternative methods for overcoming illumination effects in recognition. --- paper_title: Blur insensitive texture classification using local phase quantization paper_content: In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. 
A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred. --- paper_title: Frontal-view face detection and facial feature extraction using color and morphological operations paper_content: A novel algorithm for front-view facial contour detection and features extraction is described. A skin-color face segmentation method is developed to detect the face region firstly. In order to more precisely locate the face contour and features such as eyes, mouth and nostrils, we eliminate the ears and neck from the face region by using morphological operations and knowledge about the face structure. We have found some consistent rules that can be used to locate some contour points from the skeleton of the processed frontal face region. The face contour can then be fitted as an ellipse using these points. Then the facial features can be located in the interior of the face contour. Experiments have been done with a number of images of frontal view of the head including some with a slight pan obtained from an Internet face database (University of Stirling). Also a number of face images captured by a color CCD camera are tried by our method. The robustness is confirmed with correct detection rate of over 90%. --- paper_title: Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions paper_content: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). 
For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions. --- paper_title: Robust Facial Features Localization on Rotation Arbitrary Multi-View Face in Complex Background paper_content: Focusing on facial feature localization for multi-view faces arbitrarily rotated in plane, a novel detection algorithm based on an improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its in-plane pose is corrected by rotation. After the search ranges of the facial features are determined, a crossing detection method, which uses the brow-eye and nose-mouth features together with improved SVM detectors trained on large-scale multi-view facial feature examples, is adopted to find the candidate eye, nose and mouth regions. Based on the facts that a window region with a higher SVM discriminant value is relatively closer to the object and that the same object tends to be repeatedly detected by nearby windows, the candidate eye, nose and mouth regions are filtered and merged to refine their locations on the multi-view face. Experiments show that the algorithm localizes facial features with very good accuracy and robustness under varying expression and arbitrary face pose in complex backgrounds. --- paper_title: Face recognition: features versus templates paper_content: Two new algorithms for computer recognition of human faces, one based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second based on almost-gray-level template matching, are presented. The results obtained for the testing sets show about 90% correct recognition using geometrical features and perfect recognition using template matching. --- paper_title: A multiscale retinex for bridging the gap between color images and the human observation of scenes paper_content: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
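As a concrete companion to the multiscale retinex entry above, the sketch below implements the basic center/surround step, the logarithm of the image minus the logarithm of its Gaussian-blurred surround, averaged over several scales. It is a simplified illustration assuming OpenCV and NumPy; the sigma values are typical choices rather than those of the cited paper, and the color restoration step discussed in the abstract is omitted.

```python
import cv2
import numpy as np

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Average of log(image) - log(Gaussian surround) over several scales,
    rescaled to an 8-bit range for display."""
    img = img.astype(np.float64) + eps                 # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        out += np.log(img) - np.log(surround)
    out /= len(sigmas)
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)
```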
--- paper_title: Unsupervised Joint Alignment of Complex Images paper_content: Many recognition algorithms depend on careful positioning of an object into a canonical pose, so the position of features relative to a fixed coordinate system can be examined. Currently, this positioning is done either manually or by training a class-specialized learning algorithm with samples of the class that have been hand-labeled with parts or poses. In this paper, we describe a novel method to achieve this positioning using poorly aligned examples of a class with no additional labeling. Given a set of unaligned examplars of a class, such as faces, we automatically build an alignment mechanism, without any additional labeling of parts or poses in the data set. Using this alignment mechanism, new members of the class, such as faces resulting from a face detector, can be precisely aligned for the recognition process. Our alignment method improves performance on a face recognition task, both over unaligned images and over images aligned with a face alignment algorithm specifically developed for and trained on hand-labeled face images. We also demonstrate its use on an entirely different class of objects (cars), again without providing any information about parts or pose to the learning algorithm. --- paper_title: Facial expression recognition - A real time approach paper_content: Face localization, feature extraction, and modeling are the major issues in automatic facial expression recognition. In this paper, a method for facial expression recognition is proposed. A face is located by extracting the head contour points using the motion information. A rectangular bounding box is fitted for the face region using those extracted contour points. Among the facial features, eyes are the most prominent features used for determining the size of a face. Hence eyes are located and the visual features of a face are extracted based on the locations of eyes. The visual features are modeled using support vector machine (SVM) for facial expression recognition. The SVM finds an optimal hyperplane to distinguish different facial expressions with an accuracy of 98.5%. --- paper_title: An Image Preprocessing Algorithm for Illumination Invariant Face Recognition paper_content: Face recognition algorithms have to deal with significant amounts of illumination variations between gallery and probe images. State-of-the-art commercial face recognition algorithms still struggle with this problem. We propose a new image preprocessing algorithm that compensates for illumination variations in images. From a single brightness image the algorithm first estimates the illumination field and then compensates for it to mostly recover the scene reflectance. Unlike previously proposed approaches for illumination compensation, our algorithm does not require any training steps, knowledge of 3D face models or reflective surface models. We apply the algorithm to face images prior to recognition. We demonstrate large performance improvements with several standard face recognition algorithms across multiple, publicly available face databases. --- paper_title: Face Recognition by Elastic Bunch Graph Matching paper_content: We present a system for recognizing human faces from single images out of a large database with one image per person. The task is difficult because of image variation in terms of position, size, expression, and pose. The system collapses most of this variance by extracting concise face descriptions in the form of image graphs. 
In these, fiducial points on the face (eyes, mouth etc.) are described by sets of wavelet components (jets). Image graph extraction is based on a novel approach, the bunch graph, which is constructed from a small set of sample image graphs. Recognition is based on a straight-forward comparison of image graphs. We report recognition experiments on the FERET database and the Bochum database, including recognition across pose. --- paper_title: Average of Synthetic Exact Filters paper_content: This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time. --- paper_title: Automatic Eye Detection and Its Validation paper_content: The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions. --- paper_title: Unconstrained correlation filters. paper_content: A mathematical analysis of the distortion tolerance in correlation filters is presented. A good measure for distortion performance is shown to be a generalization of the minimum average correlation energy criterion. To optimize the filter's performance, we remove the usual hard constraints on the outputs in the synthetic discriminant function formulation. The resulting filters exhibit superior distortion tolerance while retaining the attractive features of their predecessors such as the minimum average correlation energy filter and the minimum variance synthetic discriminant function filter. The proposed theory also unifies several existing approaches and examines the relationship between different formulations. The proposed filter design algorithm requires only simple statistical parameters and the inversion of diagonal matrices, which makes it attractive from a computational standpoint. Several properties of these filters are discussed with illustrative examples. 
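The correlation filter entries above (ASEF and the MACE/unconstrained family) share the same frequency-domain machinery. The sketch below shows the core ASEF idea of averaging one exact filter per training image, where the desired correlation output is a narrow Gaussian peak at the annotated eye position. It is a bare-bones NumPy reconstruction under our own assumptions: training crops are grayscale and of identical size, the helper names are illustrative, and the log transform, cosine windowing and separate left/right eye filters used in the published systems are left out.

```python
import numpy as np

def gaussian_peak(shape, center, sigma=2.0):
    """Desired correlation output: a 2-D Gaussian centred on the eye."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def train_asef(images, eye_centers, eps=1e-3):
    """Average of exact filters: one closed-form filter per training image."""
    H = np.zeros(images[0].shape, dtype=np.complex128)
    for img, c in zip(images, eye_centers):
        F = np.fft.fft2(img.astype(np.float64))
        G = np.fft.fft2(gaussian_peak(img.shape, c))
        H += (G * np.conj(F)) / (F * np.conj(F) + eps)   # exact filter for this image
    return H / len(images)

def locate_eye(img, H):
    """Correlate a test image with the averaged filter; the peak marks the eye."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(img.astype(np.float64)) * H))
    return np.unravel_index(np.argmax(corr), corr.shape)   # (row, col)
```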
--- paper_title: Human detection in images via L1-norm Minimization Learning paper_content: In recent years, sparse representation originating from signal compressed sensing theory has attracted increasing interest in computer vision research community. However, to our best knowledge, no previous work utilizes L1-norm minimization for human detection. In this paper we develop a novel human detection system based on L1-norm Minimization Learning (LML) method. The method is on the observation that a human object can be represented by a few features from a large feature set (sparse representation). And the sparse representation can be learned from the training samples by exploiting the L1-norm Minimization principle, which can also be called feature selection procedure. This procedure enables the feature representation more concise and more adaptive to object occlusion and deformation. After that a classifier is constructed by linearly weighting features and comparing the result with a calculated threshold. Experiments on two datasets validate the effectiveness and efficiency of the proposed method. --- paper_title: Spatio-temporal Saliency detection using phase spectrum of quaternion fourier transform paper_content: Salient areas in natural scenes are generally regarded as the candidates of attention focus in human eyes, which is the key stage in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes such as SaliencyToolBox (STB), neuromorphic vision toolkit (NVT) and etc., but they demand high computational cost and their remarkable results mostly rely on the choice of parameters. Recently a simple and fast approach based on Fourier transform called spectral residual (SR) was proposed, which used SR of the amplitude spectrum to obtain the saliency map. The results are good, but the reason is questionable. --- paper_title: Minimum average correlation energy filters paper_content: The synthesis of a new category of spatial filters that produces sharp output correlation peaks with controlled peak values is considered. The sharp nature of the correlation peak is the major feature emphasized, since it facilitates target detection. Since these filters minimize the average correlation plane energy as the first step in filter synthesis, we refer to them as minimum average correlation energy filters. Experimental laboratory results from optical implementation of the filters are also presented and discussed. --- paper_title: PRECISE EYE AND MOUTH LOCALIZATION paper_content: The literature on the topic has shown a strong correlation between the degree of precision of face localization and the face recognition performance. Hence, there is a need for precise facial feature detectors, as well as objective measures for their evaluation and comparison. In this paper, we will present significant improvements to a previous method for precise eye center localization, by integrating a module for mouth localization. The technique is based on Support Vector Machines trained on optimally chosen Haar wavelet coefficients. The method has been tested on several public databases; the results are reported and compared according to a standard error measure. The tests show that the algorithm achieves high precision of localization. --- paper_title: On Mixtures of Linear SVMs for Nonlinear Classification paper_content: In this paper, we propose a new method for training mixtures of linear SVM classifiers for purposes of non-linear data classification. 
We do this by packaging linear SVMs into a probabilistic formulation and embedding them in the mixture of experts model. The weights of the mixture model are generated by the gating network dependent on the input data. The new mixture of linear SVMs can then be trained efficiently using the EM algorithm. Unlike previous SVM-based mixture of expert models, which use a divide-and-conquer strategy to reduce the burden of training for large scale data sets, the main purpose of our approach is to improve the efficiency for testing. Experimental results show that our proposed model can achieve the efficiency of linear classifiers in the prediction phase while still maintaining the classification performance of nonlinear classifiers. --- paper_title: Deformable model fitting with a mixture of local experts paper_content: Local experts have been used to great effect for fitting deformable models to images. Typically, the best locations in an image for the deformable model's landmarks are found through a locally exhaustive search using these experts. In order to achieve efficient fitting, these experts should afford an efficient evaluation, which often leads to forms with restricted discriminative capacity. In this work, a framework is proposed in which multiple simple experts can be utilized to increase the capacity of the detections overall. In particular, the use of a mixture of linear classifiers is proposed, the computational complexity of which scales linearly with the number of mixture components. The fitting objective is maximized using the expectation maximization (EM) algorithm, where approximations to the true objective are made in order to facilitate efficient and numerically stable fitting. The efficacy of the proposed approach is evaluated on the task of generic face fitting where performance improvement is observed over two existing methods. --- paper_title: Visual routines for eye location using learning and evolution paper_content: Eye location is used as a test bed for developing navigation routines implemented as visual routines within the framework of adaptive behavior-based AI. The adaptive eye location approach seeks first where salient objects are, and then what their identity is. Specifically, eye location involves: 1) the derivation of the saliency attention map, and 2) the possible classification of salient locations as eye regions. The saliency ("where") map is derived using a consensus between navigation routines encoded as finite-state automata exploring the facial landscape and evolved using genetic algorithms (GAs). The classification ("what") stage is concerned with the optimal selection of features, and the derivation of decision trees, using GAs, to possibly classify salient locations as eyes. The experimental results, using facial image data, show the feasibility of our method, and suggest a novel approach for the adaptive development of task-driven active perception and navigational mechanisms. --- paper_title: Precise eye localization through a general-to-specific model definition paper_content: We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate.
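Several of the eye localizers above follow the same recipe: describe candidate patches with Haar wavelet coefficients and classify them with a Support Vector Machine. The sketch below shows that recipe in its simplest form, assuming the PyWavelets and scikit-learn packages; the coefficient selection, the general-to-specific stages and the cascade logic reported in the papers are deliberately omitted, and the function names are ours.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(patch, levels=2):
    """Flatten a 2-level Haar wavelet decomposition of a grayscale patch."""
    coeffs = pywt.wavedec2(patch.astype(np.float64), 'haar', level=levels)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

def train_eye_classifier(eye_patches, non_eye_patches):
    """Binary eye / non-eye classifier over wavelet coefficients.
    All patches are assumed to share the same size."""
    X = np.array([haar_features(p) for p in list(eye_patches) + list(non_eye_patches)])
    y = np.array([1] * len(eye_patches) + [0] * len(non_eye_patches))
    return SVC(kernel='rbf', gamma='scale').fit(X, y)
```

At detection time the classifier would be evaluated over candidate windows inside the detected face region, keeping the highest-scoring location per eye.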
--- paper_title: Beyond sliding windows: Object localization by efficient subwindow search paper_content: Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition. --- paper_title: Fast PRISM: Branch and Bound Hough Transform for Object Class Detection paper_content: This paper addresses the task of efficient object class detection by means of the Hough transform. This approach has been made popular by the Implicit Shape Model (ISM) and has been adopted many times. Although ISM exhibits robust detection performance, its probabilistic formulation is unsatisfactory. The PRincipled Implicit Shape Model (PRISM) overcomes these problems by interpreting Hough voting as a dual implementation of linear sliding-window detection. It thereby gives a sound justification to the voting procedure and imposes minimal constraints. We demonstrate PRISM's flexibility by two complementary implementations: a generatively trained Gaussian Mixture Model as well as a discriminatively trained histogram approach. Both systems achieve state-of-the-art performance. Detections are found by gradient-based or branch and bound search, respectively. The latter greatly benefits from PRISM's feature-centric view. It thereby avoids the unfavourable memory trade-off and any on-line pre-processing of the original Efficient Subwindow Search (ESS). Moreover, our approach takes account of the features' scale value while ESS does not. Finally, we show how to avoid soft-matching and spatial pyramid descriptors during detection without losing their positive effect. This makes algorithms simpler and faster. Both are possible if the object model is properly regularised and we discuss a modification of SVMs which allows for doing so. --- paper_title: Automatic Eye Detection and Its Validation paper_content: The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. 
In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions. --- paper_title: Convolutional face finder: a neural architecture for fast and robust face detection paper_content: In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to /spl plusmn/20 degrees in image plane and turned up to /spl plusmn/60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns. --- paper_title: Robust precise eye location by adaboost and SVM techniques paper_content: This paper presents a novel approach for eye detection using a hierarchy cascade classifier based on Adaboost statistical learning method combined with SVM (Support Vector Machines) post classifier. On the first stage a face detector is used to locate the face in the whole image. After finding the face, an eye detector is used to detect the possible eye candidates within the face areas. Finally, the precise eye positions are decided by the eye-pair SVM classifiers which using geometrical and relative position information of eye-pair and the face. Experimental results show that this method can effectively cope with various image conditions and achieve better location performance on diverse test sets than some newly proposed methods. --- paper_title: Robust Object Detection with Interleaved Categorization and Segmentation paper_content: This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. ::: ::: The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. 
Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. ::: ::: An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems. --- paper_title: Facial features detection robust to pose, illumination and identity paper_content: This paper addresses the problem of automatic detection of salient facial features. Face images are described using local normalized Gaussian receptive fields. Face features are learned using a clustering of the Gaussian derivative responses. We have found that a single cluster provides a robust detector for salient facial features robust to pose, illumination and identity. In this paper we describe how this cluster is learned and which facial features have found to be salient --- paper_title: Robust precise eye location under probabilistic framework paper_content: Eye feature location is an important step in automatic visual interpretation and human face recognition. In this paper, a novel approach for locating eye centers in face areas under probabilistic framework is devised. After grossly locating a face, we first find the areas which left and right eyes lies in. Then an appearance-based eye detector is used to detect the possible left and right eye separately. According to their probabilities, the candidates are subsampled to merge those in near positions. Finally, the remaining left and right eye candidates are paired; each possible eye pair is normalized and verified. According to their probabilities, the precise eye positions are decided. The experimental results demonstrate that our method can effectively cope with different eye variations and achieve better location performance on diverse test sets than some newly proposed methods. And the influence of precision of eye location on face recognition is also probed. The location of other face organs such as mouth and nose can be incorporated in the framework easily. --- paper_title: Fast PRISM: Branch and Bound Hough Transform for Object Class Detection paper_content: This paper addresses the task of efficient object class detection by means of the Hough transform. This approach has been made popular by the Implicit Shape Model (ISM) and has been adopted many times. Although ISM exhibits robust detection performance, its probabilistic formulation is unsatisfactory. The PRincipled Implicit Shape Model (PRISM) overcomes these problems by interpreting Hough voting as a dual implementation of linear sliding-window detection. It thereby gives a sound justification to the voting procedure and imposes minimal constraints. We demonstrate PRISM's flexibility by two complementary implementations: a generatively trained Gaussian Mixture Model as well as a discriminatively trained histogram approach. Both systems achieve state-of-the-art performance. Detections are found by gradient-based or branch and bound search, respectively. The latter greatly benefits from PRISM's feature-centric view. 
It thereby avoids the unfavourable memory trade-off and any on-line pre-processing of the original Efficient Subwindow Search (ESS). Moreover, our approach takes account of the features' scale value while ESS does not. Finally, we show how to avoid soft-matching and spatial pyramid descriptors during detection without losing their positive effect. This makes algorithms simpler and faster. Both are possible if the object model is properly regularised and we discuss a modification of SVMs which allows for doing so. --- paper_title: Regression and classification approaches to eye localization in face images paper_content: We address the task of accurately localizing the eyes in face images extracted by a face detector, an important problem to be solved because of the negative effect of poor localization on face recognition accuracy. We investigate three approaches to the task: a regression approach aiming to directly minimize errors in the predicted eye positions, a simple Bayesian model of eye and non-eye appearance, and a discriminative eye detector trained using AdaBoost. By using identical training and test data for each method we are able to perform an unbiased comparison. We show that, perhaps surprisingly, the simple Bayesian approach performs best on databases including challenging images, and performance is comparable to more complex state-of-the-art methods. --- paper_title: Feature-based affine-invariant localization of faces paper_content: We present a novel method for localizing faces in person identification scenarios. Such scenarios involve high resolution images of frontal faces. The proposed algorithm does not require color, copes well in cluttered backgrounds, and accurately localizes faces including eye centers. An extensive analysis and a performance evaluation on the XM2VTS database and on the realistic BioID and BANCA face databases is presented. We show that the algorithm has precision superior to reference methods. --- paper_title: Annotated Facial Landmarks in the Wild: A large-scale, real-world database for facial landmark localization paper_content: Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework. --- paper_title: Automatic Eye Detection and Its Validation paper_content: The accuracy of face alignment affects the performance of a face recognition system. 
Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions. --- paper_title: Robust Face Detection Using the Hausdorff Distance paper_content: The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set base and rated with a new validation measurement. --- paper_title: Regression and classification approaches to eye localization in face images paper_content: We address the task of accurately localizing the eyes in face images extracted by a face detector, an important problem to be solved because of the negative effect of poor localization on face recognition accuracy. We investigate three approaches to the task: a regression approach aiming to directly minimize errors in the predicted eye positions, a simple Bayesian model of eye and non-eye appearance, and a discriminative eye detector trained using AdaBoost. By using identical training and test data for each method we are able to perform an unbiased comparison. We show that, perhaps surprisingly, the simple Bayesian approach performs best on databases including challenging images, and performance is comparable to more complex state-of-the-art methods. --- paper_title: Annotated Facial Landmarks in the Wild: A large-scale, real-world database for facial landmark localization paper_content: Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. 
Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework. --- paper_title: Overview of the face recognition grand challenge paper_content: Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery. --- paper_title: Face recognition: A literature survey paper_content: As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered. --- paper_title: Robust precise eye location by adaboost and SVM techniques paper_content: This paper presents a novel approach for eye detection using a hierarchy cascade classifier based on Adaboost statistical learning method combined with SVM (Support Vector Machines) post classifier. On the first stage a face detector is used to locate the face in the whole image. After finding the face, an eye detector is used to detect the possible eye candidates within the face areas. Finally, the precise eye positions are decided by the eye-pair SVM classifiers which using geometrical and relative position information of eye-pair and the face. 
Experimental results show that this method can effectively cope with various image conditions and achieve better location performance on diverse test sets than some newly proposed methods. --- paper_title: Robust precise eye location under probabilistic framework paper_content: Eye feature location is an important step in automatic visual interpretation and human face recognition. In this paper, a novel approach for locating eye centers in face areas under probabilistic framework is devised. After grossly locating a face, we first find the areas which left and right eyes lies in. Then an appearance-based eye detector is used to detect the possible left and right eye separately. According to their probabilities, the candidates are subsampled to merge those in near positions. Finally, the remaining left and right eye candidates are paired; each possible eye pair is normalized and verified. According to their probabilities, the precise eye positions are decided. The experimental results demonstrate that our method can effectively cope with different eye variations and achieve better location performance on diverse test sets than some newly proposed methods. And the influence of precision of eye location on face recognition is also probed. The location of other face organs such as mouth and nose can be incorporated in the framework easily. --- paper_title: FaceTracer: A Search Engine for Large Collections of Images with Faces paper_content: We have created the first image search engine based entirely on faces. Using simple text queries such as "smiling men with blond hair and mustaches," users can search through over 3.1 million faces which have been automatically labeled on the basis of several facial attributes. Faces in our database have been extracted and aligned from images downloaded from the internet using a commercial face detector, and the number of images and attributes continues to grow daily. Our classification approach uses a novel combination of Support Vector Machines and Adaboost which exploits the strong structure of faces to select and train on the optimal set of features for each attribute. We show state-of-the-art classification results compared to previous works, and demonstrate the power of our architecture through a functional, large-scale face search engine. Our framework is fully automatic, easy to scale, and computes all labels off-line, leading to fast on-line search performance. In addition, we describe how our system can be used for a number of applications, including law enforcement, social networks, and personal photo management. Our search engine will soon be made publicly available. --- paper_title: The FERET evaluation methodology for face-recognition algorithms paper_content: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1199 individuals are included in the FERET database, which is divided into development and sequestered portions. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to (1) assess the state of the art, (2) identify future areas of research, and (3) measure algorithm performance on large databases. 
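Several of the eye-localization entries above (the Hausdorff-distance detector and the precise eye and mouth localization work, among others) score detections with a relative error that normalizes the worse of the two per-eye distances by the inter-ocular distance. The short sketch below illustrates that measure; the coordinate values and the 0.25 acceptance threshold are illustrative assumptions rather than values taken from any one of the cited papers.

    import numpy as np

    def relative_eye_error(pred_left, pred_right, gt_left, gt_right):
        """Worst per-eye localization error normalized by the inter-ocular distance."""
        d_left = np.linalg.norm(np.asarray(pred_left) - np.asarray(gt_left))
        d_right = np.linalg.norm(np.asarray(pred_right) - np.asarray(gt_right))
        iod = np.linalg.norm(np.asarray(gt_left) - np.asarray(gt_right))
        return max(d_left, d_right) / iod

    # Example: one eye predicted a few pixels off, with a 60 px inter-ocular distance.
    err = relative_eye_error((100, 120), (163, 121), (100, 120), (160, 120))
    print(err)           # ~0.05
    print(err <= 0.25)   # True: a commonly used acceptance threshold for a "correct" detection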
--- paper_title: Average of Synthetic Exact Filters paper_content: This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time. --- paper_title: Object Detection with Discriminatively Trained Part-Based Models paper_content: We describe an object detection system based on mixtures of multiscale deformable part models.
Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function. --- paper_title: Learning to localize objects with structured output regression paper_content: Sliding window classifiers are among the most successful and widely applied techniques for object localization. However, training is typically done in a way that is not specific to the localization task. First a binary classifier is trained using a sample of positive and negative examples, and this classifier is subsequently applied to multiple regions within test images. We propose instead to treat object localization in a principled way by posing it as a problem of predicting structured data: we model the problem not as binary classification, but as the prediction of the bounding box of objects located in images. The use of a joint-kernelframework allows us to formulate the training procedure as a generalization of an SVM, which can be solved efficiently. We further improve computational efficiency by using a branch-and-bound strategy for localization during both training and testing. Experimental evaluation on the PASCAL VOC and TU Darmstadt datasets show that the structured training procedure improves performance over binary training as well as the best previously published scores. --- paper_title: PRECISE EYE AND MOUTH LOCALIZATION paper_content: The literature on the topic has shown a strong correlation between the degree of precision of face localization and the face recognition performance. Hence, there is a need for precise facial feature detectors, as well as objective measures for their evaluation and comparison. In this paper, we will present significant improvements to a previous method for precise eye center localization, by integrating a module for mouth localization. The technique is based on Support Vector Machines trained on optimally chosen Haar wavelet coefficients. The method has been tested on several public databases; the results are reported and compared according to a standard error measure. The tests show that the algorithm achieves high precision of localization. --- paper_title: Detector of Facial Landmarks Learned by the Structured Output SVM paper_content: In this paper we describe a detector of facial landmarks based on the Deformable Part Models. We treat the task of landmark detection as an instance of the structured output classification problem. We propose to learn the parameters of the detector from data by the Structured Output Support Vector Machines algorithm. In contrast to the previous works, the objective function of the learning algorithm is directly related to the performance of the resulting detector which is controlled by a user-defined loss function. 
The resulting detector is real-time on a standard PC, simple to implement and it can be easily modified for detection of a different set of landmarks. We evaluate performance of the proposed landmark detector on a challenging “Labeled Faces in the Wild” (LFW) database. The empirical results demonstrate that the proposed detector is consistently more accurate than two public domain implementations based on the Active Appearance Models and the Deformable Part Models. We provide an open-source implementation of the proposed detector and the manual annotation of the facial landmarks for all images in the LFW database. --- paper_title: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze paper_content: Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. --- paper_title: Pictorial Structures for Object Recognition paper_content: In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images. --- paper_title: On detection of multiple object instances using hough transforms paper_content: To detect multiple objects of interest, the methods based on Hough transform use non-maxima supression or mode seeking in order to locate and to distinguish peaks in Hough images. Such postprocessing requires tuning of extra parameters and is often fragile, especially when objects of interest tend to be closely located. In the paper, we develop a new probabilistic framework that is in many ways related to Hough transform, sharing its simplicity and wide applicability. 
At the same time, the framework bypasses the problem of multiple peaks identification in Hough images, and permits detection of multiple objects without invoking nonmaximum suppression heuristics. As a result, the experiments demonstrate a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem. --- paper_title: Face detection, pose estimation, and landmark localization in the wild paper_content: We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com). --- paper_title: Max-margin additive classifiers for detection paper_content: We present methods for training high quality object detectors very quickly. The core contribution is a pair of fast training algorithms for piece-wise linear classifiers, which can approximate arbitrary additive models. The classifiers are trained in a max-margin framework and significantly outperform linear classifiers on a variety of vision datasets. We report experimental results quantifying training time and accuracy on image classification tasks and pedestrian detection, including detection results better than the best previous on the INRIA dataset with faster training. --- paper_title: Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions paper_content: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. 
The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions. --- paper_title: Classification using intersection kernel support vector machines is efficient paper_content: Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1 - chi2 and other kernels of similar form. We also introduce novel features based on a multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13% lower at 10-6 False Positive Per Window than the linear SVM detector of Dalal & Triggs. On the Daimler Chrysler pedestrian dataset IKSVM gives comparable accuracy to the best results (based on quadratic SVM), while being 15times faster. In these experiments our approximate IKSVM is up to 2000times faster than a standard implementation and requires 200times less memory. Finally we show that a 50times speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech 101 dataset with negligible loss of accuracy. --- paper_title: Attribute and simile classifiers for face verification paper_content: We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. 
This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance. --- paper_title: The IMM Face Database, An Annotated Dataset of 240 Face Images paper_content: The IMM Face Database is an annotated collection of 240 still face images of 40 different human subjects. Each image is annotated with 58 facial landmark points outlining the eyebrows, eyes, nose, mouth and jaw line, and the database was collected to support the training and evaluation of model-based face analysis methods such as active appearance models. ---
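The Average of Synthetic Exact Filters entry above specifies a complete desired correlation output for every training image (typically a narrow Gaussian peak at the annotated eye) and averages the resulting per-image exact filters in the frequency domain. The NumPy sketch below is a minimal illustration of that construction on synthetic data; the image size, Gaussian width, regularization constant and the toy "eye" patches are assumptions made for the example, not details of the cited implementation.

    import numpy as np

    def gaussian_peak(shape, center, sigma=2.0):
        """Desired correlation output: a Gaussian peak at the annotated eye position."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        return np.exp(-((xs - center[1]) ** 2 + (ys - center[0]) ** 2) / (2 * sigma ** 2))

    def train_asef(images, eye_centers, eps=1e-3):
        """Average the per-image exact filters so that each training correlation output is the desired Gaussian."""
        asef = np.zeros(images[0].shape, dtype=complex)
        for img, center in zip(images, eye_centers):
            X = np.fft.fft2(img)
            G = np.fft.fft2(gaussian_peak(img.shape, center))
            # Exact (conjugated) filter in the frequency domain, regularized against division by ~0.
            asef += (G * np.conj(X)) / (X * np.conj(X) + eps)
        return asef / len(images)

    def locate_eye(image, asef):
        """Correlate a test image with the averaged filter and return the strongest response location."""
        response = np.real(np.fft.ifft2(np.fft.fft2(image) * asef))
        return np.unravel_index(np.argmax(response), response.shape)

    # Toy example: noisy patches with a bright structure at a known "eye" position.
    rng = np.random.default_rng(0)
    centers = [(rng.integers(20, 44), rng.integers(20, 44)) for _ in range(8)]
    imgs = []
    for c in centers:
        img = rng.normal(0, 0.05, (64, 64))
        img[c[0] - 2:c[0] + 3, c[1] - 2:c[1] + 3] += 1.0
        imgs.append(img)
    asef = train_asef(imgs, centers)
    print(locate_eye(imgs[0], asef), "vs true", centers[0])   # peak should land on or near the true centre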
Title: A literature survey on robust and efficient eye localization in real-life scenarios Section 1: Introduction Description 1: Write about the importance of eye localization, challenges faced in real-life scenarios, and the main contributions of the survey. Section 2: Localizing eyes in a single face Description 2: Review existing techniques for eye localization categorized based on the information or patterns used for model building. Include subcategories and methodologies within each category. Section 3: Measuring eye characteristics Description 3: Discuss methods that exploit inherent features of eyes such as shape and intensity contrast, and mention their limitations under uncontrolled conditions. Section 4: Learning statistical appearance model Description 4: Cover statistical models using photometric appearance features, detailing appearance feature extraction, representation, and various statistical modeling techniques. Section 5: Towards the development of a robust eye localization system Description 5: Present a global system architecture for eye localization, including preprocessing methods, accuracy and efficiency tradeoffs, and postprocessing methods. Section 6: Performance evaluation Description 6: Explain different metrics used to evaluate eye localization performance and the major face databases used for benchmark evaluations. Section 7: Conclusion and prospect Description 7: Summarize key findings, the current state of research, and suggest promising future research directions and challenges remaining to be addressed.
Survey on Improved Scheduling in Hadoop MapReduce in Cloud Environments
15
--- paper_title: The Google file system paper_content: We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use. --- paper_title: Scheduling Hadoop Jobs to Meet Deadlines paper_content: User constraints such as deadlines are important requirements that are not considered by existing cloud-based data processing environments such as Hadoop. In the current implementation, jobs are scheduled in FIFO order by default with options for other priority based schedulers. In this paper, we extend real time cluster scheduling approach to account for the two-phase computation style of MapReduce. We develop criteria for scheduling jobs based on user specified deadline constraints and discuss our implementation and preliminary evaluation of a Deadline Constraint Scheduler for Hadoop which ensures that only jobs whose deadlines can be met are scheduled for execution. --- paper_title: Dynamic proportional share scheduling in Hadoop paper_content: We present the Dynamic Priority (DP) parallel task scheduler for Hadoop. It allows users to control their allocated capacity by adjusting their spending over time. This simple mechanism allows the scheduler to make more efficient decisions about which jobs and users to prioritize and gives users the tool to optimize and customize their allocations to fit the importance and requirements of their jobs. Additionally, it gives users the incentive to scale back their jobs when demand is high, since the cost of running on a slot is then also more expensive. We envision our scheduler to be used by deadline or budget optimizing agents on behalf of users. We describe the design and implementation of the DP scheduler and experimental results. We show that our scheduler enforces service levels more accurately and also scales to more users with distinct service levels than existing schedulers. --- paper_title: Delay scheduling: a simple technique for achieving locality and fairness in cluster scheduling paper_content: As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). 
We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing. --- paper_title: Improving MapReduce Performance in Heterogeneous Environments paper_content: MapReduce is emerging as an important programming model for large-scale data-parallel applications such as web indexing, data mining, and scientific simulation. Hadoop is an open-source implementation of MapReduce enjoying wide adoption and is often used for short jobs where low response time is critical. Hadoop's performance is closely tied to its task scheduler, which implicitly assumes that cluster nodes are homogeneous and tasks make progress linearly, and uses these assumptions to decide when to speculatively re-execute tasks that appear to be stragglers. In practice, the homogeneity assumptions do not always hold. An especially compelling setting where this occurs is a virtualized data center, such as Amazon's Elastic Compute Cloud (EC2). We show that Hadoop's scheduler can cause severe performance degradation in heterogeneous environments. We design a new scheduling algorithm, Longest Approximate Time to End (LATE), that is highly robust to heterogeneity. LATE can improve Hadoop response times by a factor of 2 in clusters of 200 virtual machines on EC2. ---
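The delay-scheduling entry above boils down to a small rule: when the job that fairness would schedule next cannot launch a data-local task on the free node, it is skipped for a bounded wait so other jobs can launch, and only after that wait is it allowed to run non-locally. The following Python sketch illustrates the rule; the Job and Task structures and the skip threshold are simplified assumptions, not Hadoop's actual scheduler interfaces.

    from dataclasses import dataclass

    @dataclass
    class Task:
        input_nodes: set          # nodes holding replicas of this task's input block

    @dataclass
    class Job:
        pending_tasks: list
        running_tasks: int = 0
        skip_count: int = 0

    def assign_task(free_node, jobs, max_skips=3):
        """Schematic delay scheduling: prefer the most underserved job, but let it be skipped
        a bounded number of times while waiting for a node-local task."""
        for job in sorted(jobs, key=lambda j: j.running_tasks):   # fairness: fewest running tasks first
            if not job.pending_tasks:
                continue
            local = [t for t in job.pending_tasks if free_node in t.input_nodes]
            if local:
                job.skip_count = 0
                return job, local[0]                  # data-local launch
            if job.skip_count >= max_skips:
                job.skip_count = 0
                return job, job.pending_tasks[0]      # locality waived after the bounded wait
            job.skip_count += 1                       # skip this job for now, try the next one
        return None, None

    # Example: job A has no local data on node "n1", job B does; A waits, B launches locally.
    job_a = Job(pending_tasks=[Task(input_nodes={"n2"})])
    job_b = Job(pending_tasks=[Task(input_nodes={"n1", "n3"})], running_tasks=1)
    job, task = assign_task("n1", [job_a, job_b])
    print(job is job_b, "n1" in task.input_nodes)     # True True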
Title: Survey on Improved Scheduling in Hadoop MapReduce in Cloud Environments Section 1: INTRODUCTION Description 1: This section introduces cloud computing and its significance, particularly focusing on the definition and advantages of cloud computing, and provides an overview of the paper's structure. Section 2: HADOOP Description 2: This section provides an overview of Hadoop, including its components such as Hadoop Distributed File System (HDFS) and Hadoop MapReduce, and their relevance in cloud environments. Section 3: HDFS-Distributed file system Description 3: This section discusses the Hadoop Distributed File System (HDFS), its architecture, and how it manages data storage across multiple nodes. Section 4: Hadoop MapReduce Overview Description 4: This section explains the MapReduce programming model, its components, and the execution process within the Hadoop framework. Section 5: SCHEDULING IN HADOOP Description 5: This section details the various scheduling algorithms used in Hadoop, including the default FIFO Scheduler, Fair Scheduler, and Capacity Scheduler, highlighting their characteristics and differences. Section 6: Default FIFO Scheduler Description 6: This section describes the default FIFO scheduling algorithm in Hadoop and its limitations in a multi-user environment. Section 7: Fair Scheduler Description 7: This section elaborates on the Fair Scheduler developed by Facebook, including its approach to fair resource allocation and preemption capabilities. Section 8: Capacity Scheduler Description 8: This section explains the Capacity Scheduler developed by Yahoo, focusing on its method of ensuring fair resource distribution among a large number of users. Section 9: SCHEDULER IMPROVEMENTS Description 9: This section covers recent advancements and proposed improvements in Hadoop scheduling, including Delay Scheduler, Dynamic Proportional Scheduler, and others. Section 10: Longest Approximate Time to End (LATE) - Speculative Execution Description 10: This section discusses the LATE algorithm for speculative execution to optimize job performance in heterogeneous clusters. Section 11: Delay Scheduling Description 11: This section explains the concept of delay scheduling and how it improves data locality and scheduling fairness. Section 12: Dynamic Priority Scheduling Description 12: This section outlines the Dynamic Priority Scheduler and its mechanism for dynamic capacity allocation based on user priorities. Section 13: Deadline Constraint Scheduler Description 13: This section covers the Deadline Constraint Scheduler, which aims to meet job deadlines while optimizing system utilization. Section 14: Resource Aware Scheduling Description 14: This section introduces resource-aware scheduling mechanisms that consider various system resource metrics to enhance job scheduling efficiency. Section 15: CONCLUSION & FUTURE WORK Description 15: This section summarizes the findings of the paper and discusses potential future research directions, particularly focusing on scheduling in heterogeneous Hadoop clusters.
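The LATE policy referenced above (the heterogeneous-environments entry and Section 10 of the outline) chooses speculative candidates by estimated time remaining rather than raw progress, using progress rate = progress / elapsed time. A minimal sketch of that heuristic follows; the task dictionaries and the slow-task cutoff are assumptions for illustration only.

    def time_left(progress, elapsed):
        """LATE heuristic: rate = progress / elapsed; time left = (1 - progress) / rate."""
        rate = progress / max(elapsed, 1e-9)
        return (1.0 - progress) / max(rate, 1e-9)

    def pick_speculative(tasks, slow_task_threshold=0.25):
        """Among the slowest tasks, re-execute the one expected to finish farthest in the future."""
        rates = [t["progress"] / max(t["elapsed"], 1e-9) for t in tasks]
        cutoff = sorted(rates)[int(len(rates) * slow_task_threshold)]
        slow = [t for t, r in zip(tasks, rates) if r <= cutoff]
        return max(slow, key=lambda t: time_left(t["progress"], t["elapsed"]), default=None)

    tasks = [
        {"id": "m1", "progress": 0.9, "elapsed": 100},
        {"id": "m2", "progress": 0.2, "elapsed": 120},   # straggler: low rate, long time left
        {"id": "m3", "progress": 0.8, "elapsed": 110},
    ]
    print(pick_speculative(tasks)["id"])                 # m2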
A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms
13
--- paper_title: Robust Eye and Pupil Detection Method for Gaze Tracking paper_content: Robust and accurate pupil detection is a prerequisite for gaze detection. Hence, we propose a new eye/pupil detection method for gaze detection on a large display. The novelty of our research can be summarized by the following four points. First, in order to overcome the performance limitations of conventional methods of eye detection, such as adaptive boosting (Adaboost) and continuously adaptive mean shift (CAMShift) algorithms, we propose adaptive selection of the Adaboost and CAMShift methods. Second, this adaptive selection is based on two parameters: pixel differences in successive images and matching values determined by CAMShift. Third, a support vector machine (SVM)-based classifier is used with these two parameters as the input, which improves the eye detection performance. Fourth, the center of the pupil within the detected eye region is accurately located by means of circular edge detection, binarization and calculation of the geometric center. The experimental results show that the proposed method can detect the center of the pupil at a speed of approximately 19.4 frames/s with an RMS error of approximately 5.75 pixels, which is superior to the performance of conventional detection methods. --- paper_title: MobiGaze: development of a gaze interface for handheld mobile devices paper_content: Handheld mobile devices that have a touch screen are widely used but are awkward to use with one hand. To solve this problem, we propose MobiGaze, which is a user interface that uses one's gaze (gaze interface) to operate a handheld mobile device. By using stereo cameras, the user's line of sight is detected in 3D, enabling the user to interact with a mobile device by means of his/her gaze. We have constructed a prototype system of MobiGaze that consists of two cameras with IR-LED, a Windows-based notebook PC, and iPod touch. Moreover, we have developed several applications for MobiGaze. --- paper_title: Eye gaze tracking techniques for interactive applications paper_content: This paper presents a review of eye gaze tracking technology and focuses on recent advancements that might facilitate its use in general computer applications. Early eye gaze tracking devices were appropriate for scientific exploration in controlled environments. Although it has been thought for long that they have the potential to become important computer input devices as well, the technology still lacks important usability requirements that hinders its applicability. We present a detailed description of the pupil-corneal reflection technique due to its claimed usability advantages, and show that this method is still not quite appropriate for general interactive applications. Finally, we present several recent techniques for remote eye gaze tracking with improved usability. These new solutions simplify or eliminate the calibration procedure and allow free head motion. --- paper_title: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze paper_content: Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. 
This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. --- paper_title: Passive driver gaze tracking with active appearance models paper_content: Monocular gaze estimation is usually performed by locating the pupils, and the inner and outer eye corners in the image of the driver’s head. Of these feature points, the eye corners are just as important, and perhaps harder to detect, than the pupils. The eye corners are usually found using local feature detectors and trackers. In this paper, we describe a monocular driver gaze tracking system which uses a global head model, specifically an Active Appearance Model (AAM), to track the whole head. From the AAM, the eye corners, eye region, and head pose are robustly extracted and then used to estimate the gaze. --- paper_title: Robust Head Mounted Wearable Eye Tracking System for Dynamical Calibration paper_content: In this work, a new head mounted eye tracking system is presented. Based on computer vision techniques, the system integrates eye images and head movement, in real time, performing a robust gaze point tracking. Nystagmus movements due to vestibulo-ocular reflex are monitored and integrated. The system proposed here is a strongly improved version of a previous platform called HATCAM, which was robust against changes of illumination conditions. The new version, called HAT-Move, is equipped with accurate inertial motion unit to detect the head movement enabling eye gaze even in dynamical conditions. HAT-Move performance is investigated in a group of healthy subjects in both static and dynamic conditions, i.e. when head is kept still or free to move. Evaluation was performed in terms of amplitude of the angular error between the real coordinates of the fixed points and those computed by the system in two experimental setups, specifically, in laboratory settings and in a 3D virtual reality (VR) scenario. The achieved results showed that HAT-Move is able to achieve eye gaze angular error of about 1 degree along both horizontal and vertical directions --- paper_title: Homography normalization for robust gaze estimation in uncalibrated setups paper_content: Homography normalization is presented as a novel gaze estimation method for uncalibrated setups. The method applies when head movements are present but without any requirements to camera calibration or geometric calibration. The method is geometrically and empirically demonstrated to be robust to head pose changes and despite being less constrained than cross-ratio methods, it consistently performs favorably by several degrees on both simulated data and data from physical setups. The physical setups include the use of off-the-shelf web cameras with infrared light (night vision) and standard cameras with and without infrared light. 
The benefits of homography normalization and uncalibrated setups in general are also demonstrated through obtaining gaze estimates (in the visible spectrum) using only the screen reflections on the cornea. --- paper_title: EyeTab: model-based gaze estimation on unmodified tablet computers paper_content: Despite the widespread use of mobile phones and tablets, hand-held portable devices have only recently been identified as a promising platform for gaze-aware applications. Estimating gaze on portable devices is challenging given their limited computational resources, low quality integrated front-facing RGB cameras, and small screens to which gaze is mapped. In this paper we present EyeTab, a model-based approach for binocular gaze estimation that runs entirely on an unmodified tablet. EyeTab builds on set of established image processing and computer vision algorithms and adapts them for robust and near-realtime gaze estimation. A technical prototype evaluation with eight participants in a normal indoors office setting shows that EyeTab achieves an average gaze estimation accuracy of 6.88° of visual angle at 12 frames per second. --- paper_title: Towards Gaze Interaction in Immersive Virtual Reality : Evaluation of a Monocular Eye Tracking Set-Up paper_content: Of all senses, it is visual perception that is predominantly deluded in Virtual Realities. Yet, the eyes of the observer, despite the fact that they are the fastest perceivable moving body part, have gotten relatively little attention as an interaction modality. A solid integration of gaze, however, provides great opportunities for implicit and explicit human-computer interaction. We present our work on integrating a lightweight head-mounted eye tracking system in a CAVE-like Virtual Reality Set-Up and provide promising data from a user study on the achieved accuracy and latency. --- paper_title: Towards the development of a standardized performance evaluation framework for eye gaze estimation systems in consumer platforms paper_content: There is a need to standardize the performance of eye gaze estimation (EGE) methods in various platforms for human computer interaction (HCI). Because of lack of consistent schemes or protocols for summative evaluation of EGE systems, performance results in this field can neither be compared nor reproduced with any consistency. In contemporary literature, gaze tracking accuracy is measured under non-identical sets of conditions, with variable metrics and most results do not report the impact of system meta-parameters that significantly affect tracking performances. In this work, the diverse nature of these research outcomes and system parameters which affect gaze tracking in different platforms is investigated and their error contributions are estimated quantitatively. Then the concept and development of a performance evaluation framework is proposed- that can define design criteria and benchmark quality measures for the eye gaze research community. --- paper_title: Eye-gaze systems - An analysis of error sources and potential accuracy in consumer electronics use cases paper_content: Several generic CE use cases and corresponding techniques for eye gaze estimation (EGE) are reviewed. The optimal approaches for each use case are determined from a review of recent literature. In addition, the most probable error sources for EGE are determined and the impact of these error sources is quantified. A discussion and analysis of the research outcome is given and future work outlined. 
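The two evaluation-oriented entries above report gaze error in degrees of visual angle, which relates the on-screen distance between the true and estimated gaze points to the user's viewing distance. A small worked conversion is sketched below; the pixel pitch and viewing distance are made-up values used only to illustrate the arithmetic.

    import math

    def gaze_error_degrees(err_px, px_per_mm, viewing_distance_mm):
        """Convert an on-screen gaze-point error (pixels) to degrees of visual angle."""
        err_mm = err_px / px_per_mm
        return math.degrees(math.atan2(err_mm, viewing_distance_mm))

    # Example: 50 px error on a screen with ~3.8 px/mm, viewed from 600 mm.
    print(round(gaze_error_degrees(50, 3.8, 600), 2))   # ~1.26 degrees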
--- paper_title: Detecting eye position and gaze from a single camera and 2 light sources paper_content: We introduce a new method for computing the 3D position of an eye and its gaze direction from a single camera and at least two near infra-red light sources. The method is based on the theory of spherical optical surfaces and uses the Gullstrand model of the eye to estimate the positions of the center of the cornea and the center of the pupil in 3D. The direction of gaze can then be computed from the vector connecting these two points. The point of regard can also be computed from the intersection of the direction of gaze with an object in the scene. We have simulated this model using ray traced images of the eye, and obtained very promising results. The major contribution of this new technique over current eye tracking technology is that the system does not require to be calibrated with the user before each user session, and it allows for free head motion. --- paper_title: From Gaze Control to Attentive Interfaces paper_content: Interactive applications that make use of eye tracking have traditionally been based on command-and-control. Applications that make more subtle use of eye gaze have recently become increasingly popular in the domain of attentive interfaces that adapt their behaviour based on the visual attention of the user. We provide a review of the main systems and application domains where this genre of interfaces has been used. --- paper_title: Limbus/pupil switching for wearable eye tracking under variable lighting conditions paper_content: We present a low-cost wearable eye tracker built from off-the-shelf components. Based on the open source openEyes project (the only other similar effort that we are aware of), our eye tracker operates in the visible spectrum and variable lighting conditions. The novelty of our approach rests in automatically switching between tracking the pupil/iris boundary in bright light to tracking the iris/sclera boundary (limbus) in dim light. Additional improvements include a semi-automatic procedure for calibrating the eye and scene cameras, as well as an automatic procedure for initializing the location of the pupil in the first image frame. The system is accurate to two degrees visual angle in both indoor and outdoor environments. --- paper_title: Eye Gaze for Consumer Electronics: Controlling and commanding intelligent systems. paper_content: Over the last several years, there has been much research and investigation on finding new ways to interact with the smart systems that currently form an integral part of our lives. One of the most widely researched fields in human?computer interaction (HCI) has been the use of human eye gaze as an input modality to control and command intelligent systems. For example, gaze-based schemes for hands-free input to computers for text entry/scrolling/pointing was proposed as early as in 1989 for disabled persons [1]. In the field of commercial applications, gaze-based interactions have brought immersive experiences in the world of virtual gaming and multimedia entertainment [2]. Eye gaze is also a significant feature for detecting the attention and intent of an individual. For example, gaze tracking could be implemented in a car to detect driver consciousness or in a smartphone to switch operations by sensing user attentiveness. 
--- paper_title: New Solution to the Midas Touch Problem: Identification of Visual Commands Via Extraction of Focal Fixations paper_content: Abstract Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed. --- paper_title: General theory of remote gaze estimation using the pupil center and corneal reflections paper_content: This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented. --- paper_title: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze paper_content: Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond. --- paper_title: Remote Gaze Tracking System on a Large Display paper_content: We propose a new remote gaze tracking system as an intelligent TV interface. 
Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user’s facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s. --- paper_title: Gaze-based selection of standard-size menu items paper_content: With recent advances in eye tracking technology, eye gaze gradually gains acceptance as a pointing modality. Its relatively low accuracy, however, determines the need to use enlarged controls in eye-based interfaces rendering their design rather peculiar. Another factor impairing pointing performance is deficient robustness of an eye tracker's calibration. To facilitate pointing at standard-size menus, we developed a technique that uses dynamic target expansion for on-line correction of the eye tracker's calibration. Correction is based on the relative change in the gaze point location upon the expansion. A user study suggests that the technique affords a dramatic six-fold improvement in selection accuracy. This is traded off against a much smaller reduction in performance speed (39%). The technique is thus believed to contribute to development of universal-access solutions supporting navigation through standard menus by eye gaze alone. --- paper_title: Towards Accurate Eye Tracker Calibration – Methods and Procedures☆ paper_content: Abstract Eye movement is a new emerging modality in human computer interfaces. With better access to devices which are able to measure eye movements (so called eye trackers) it becomes accessible even in ordinary environments. However, the first problem that must be faced when working with eye movements is a correct mapping from an output of eye tracker to a gaze point – place where the user is looking at the screen. That is why the work must always be started with calibration of the device. The paper describes the process of calibration, analyses of the possible steps and ways how to simplify this process. --- paper_title: General theory of remote gaze estimation using the pupil center and corneal reflections paper_content: This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. 
When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented. --- paper_title: Novel Eye Gaze Tracking Techniques Under Natural Head Movement paper_content: Most available remote eye gaze trackers have two characteristics that hinder them being widely used as the important computer input devices for human computer interaction. First, they have to be calibrated for each user individually; second, they have low tolerance for head movement and require the users to hold their heads unnaturally still. In this paper, by exploiting the eye anatomy, we propose two novel solutions to allow natural head movement and minimize the calibration procedure to only one time for a new individual. The first technique is proposed to estimate the 3D eye gaze directly. In this technique, the cornea of the eyeball is modeled as a convex mirror. Via the properties of convex mirror, a simple method is proposed to estimate the 3D optic axis of the eye. The visual axis, which is the true 3D gaze direction of the user, can be determined subsequently after knowing the angle deviation between the visual axis and optic axis by a simple calibration procedure. Therefore, the gaze point on an object in the scene can be obtained by simply intersecting the estimated 3D gaze direction with the object. Different from the first technique, our second technique does not need to estimate the 3D eye gaze directly, and the gaze point on an object is estimated from a gaze mapping function implicitly. In addition, a dynamic computational head compensation model is developed to automatically update the gaze mapping function whenever the head moves. Hence, the eye gaze can be estimated under natural head movement. Furthermore, it minimizes the calibration procedure to only one time for a new individual. The advantage of the proposed techniques over the current state of the art eye gaze trackers is that it can estimate the eye gaze of the user accurately under natural head movement, without need to perform the gaze calibration every time before using it. Our proposed methods will improve the usability of the eye gaze tracking technology, and we believe that it represents an important step for the eye tracker to be accepted as a natural computer input device. --- paper_title: Multiperson Visual Focus of Attention from Head Pose and Meeting Contextual Cues paper_content: This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. 
By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements. --- paper_title: A novel 2-D mapping-based remote eye gaze tracking method using two IR light sources paper_content: Consumers nowadays demand for more convenient human-device interaction (HDI) in every device they use. As a part of the HDI technologies, remote eye gaze tracking (REGT) is getting more attentions. Among state-of-the-art REGT methods, the ones which utilize four infrared (IR) lights have better accuracy and robustness to head movements. In this paper, a novel glint estimation method is proposed, which enables four-glint-based REGT methods to operate with only two IR light sources. Given two glints obtained from IR light sources attached on the upper or lower corners of the screen, the proposed method estimates the other two virtual glints based on mathematical and geometric principles. Then, the point of gaze (POG) can be estimated using a mapping function established between the four glints and the screen through a user calibration process. Experimental results show that the REGT system using the proposed glint estimation method can achieve highly competitive performance compared with the ones which use four IR light sources.1 --- paper_title: Taxonomic study of polynomial regressions applied to the calibration of video-oculographic systems paper_content: Of gaze tracking techniques, video-oculography (VOG) is one of the most attractive because of its versatility and simplicity. VOG systems based on general purpose mapping methods use simple polynomial expressions to estimate a user's point of regard. Although the behaviour of such systems is generally acceptable, a detailed study of the calibration process is needed to facilitate progress in improving accuracy and tolerance to user head movement. To date, there has been no thorough comparative study of how mapping equations affect final system response. After developing a taxonomic classification of calibration functions, we examine over 400,000 models and evaluate the validity of several conventional assumptions. The rigorous experimental procedure employed enabled us to optimize the calibration process for a real VOG gaze tracking system and, thereby, halve the calibration time without detrimental effect on accuracy or tolerance to head movement. --- paper_title: Mapping the Pupil-Glint Vector to Gaze Coordinates in a Simple Video-Based Eye Tracker paper_content: In a video-based eye tracker, the normalized pupil-glint vector changes as the eyes move. Using an appropriate model, the pupil-glint vector can be mapped to gaze coordinates. Using a simple hardware configuration with one camera and one infrared source, several mapping functions – some from literature and some derived here – were compared with one another with respect to the accuracy that could be achieved. The study served to confirm the results of a previous study with another data set and to expand on the possibilities that are considered from the previous study. 
The data of various participants were examined for trends, which led to the derivation of a mapping model that proved to be more accurate than all but one model from the literature. It was also shown that the best calibration configuration for this hardware setup is one that contains fourteen targets while taking about 20 seconds for the procedure to be completed. --- paper_title: 2D Gaze Estimation Based on Pupil-Glint Vector Using an Artificial Neural Network paper_content: Gaze estimation methods play an important role in a gaze tracking system. A novel 2D gaze estimation method based on the pupil-glint vector is proposed in this paper. First, the circular ring rays location (CRRL) method and Gaussian fitting are utilized for pupil and glint detection, respectively. Then the pupil-glint vector is calculated through subtraction of the fitted pupil and glint centers. Second, a mapping function is established according to the corresponding relationship between pupil-glint vectors and actual gaze calibration points. In order to solve the mapping function, an improved artificial neural network (DLSR-ANN) based on direct least squares regression is proposed. When the mapping function is determined, gaze estimation can be carried out by calculating gaze point coordinates. Finally, error compensation is implemented to further enhance the accuracy of gaze estimation. The proposed method can achieve a corresponding accuracy of 1.29°, 0.89°, 0.52°, and 0.39° when a model with four, six, nine, or 16 calibration markers is utilized for calibration, respectively. Considering error compensation, gaze estimation accuracy can reach 0.36°. The experimental results show that the gaze estimation accuracy of the proposed method is better than that of linear regression (direct least squares regression) and nonlinear regression (generic artificial neural network). The proposed method contributes to enhancing the total accuracy of a gaze tracking system. --- paper_title: Subpixel eye gaze tracking paper_content: This paper addresses the accuracy problem of an eye gaze tracking system. We first analyze the technical barrier for a gaze tracking system to achieve a desired accuracy, and then propose a subpixel tracking method to break this barrier. We present new algorithms for detecting the inner eye corner and the center of an iris at subpixel accuracy, and we apply these new methods in developing a real-time gaze tracking system. Experimental results indicate that the new methods achieve an average accuracy within 1.4° using normal eye image resolutions. --- paper_title: Implementation of an Eye Gaze Tracking System for the Disabled People paper_content: The paper proposes a modified pupil center corneal reflection (PCCR) hardware method to improve the system accuracy. The modified PCCR eye gaze tracking system, a new version of the PCCR eye gaze tracking system supplemented by the relation between the IR LED position and the distance from the eye gaze tracking system to the monitor screen, improves the tracking accuracy to within one degree. The system also includes a circuit that can control power adaptively between the minimum and maximum power. It is confirmed that the system performs well both indoors and outdoors despite the reduced computation. In addition, convenient mouse functions are proposed so that the eye gaze tracking functions can be used on a PC. A user group confirmed the system's performance, reporting a high level of satisfaction with its tracking function.
Above all, the paper suggests an adaptive exposure control algorithm for the proposed system, which makes it robust against changes in ambient light. The adaptive exposure control algorithm shows excellent performance compared with the existing system, both indoors and outdoors, even when the computation is reduced to one fifth. --- paper_title: Implicit Calibration of a Remote Gaze Tracker paper_content: We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field "face" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable "eye" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a "training set" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate a median gaze estimation error of approximately 0.8 degrees. --- paper_title: A novel approach to 3-d gaze tracking using stereo cameras paper_content: A novel approach to three-dimensional (3-D) gaze tracking using 3-D computer vision techniques is proposed in this paper. This method employs multiple cameras and multiple point light sources to estimate the optical axis of the user's eye without using any user-dependent parameters. Thus, it renders unnecessary the inconvenient system calibration process, which may introduce calibration errors. A real-time 3-D gaze tracking system has been developed which can provide 30 gaze measurements per second. Moreover, a simple and accurate calibration method is proposed to calibrate the gaze tracking system. Before using the system, each user only has to stare at a target point for a few (2-3) seconds so that the constant angle between the 3-D line of sight and the optical axis can be estimated. The test results of six subjects showed that the gaze tracking system is very promising, achieving an average estimation error of under 1°. --- paper_title: Real time eye gaze tracking with Kinect paper_content: Traditional gaze tracking systems rely on explicit infrared lights and high resolution cameras to achieve high performance and robustness. These systems, however, require complex setup and thus are restricted to lab research and hard to apply in practice. In this paper, we propose to perform gaze tracking with a consumer-level depth sensor (Kinect). Leveraging Kinect's capability to obtain 3D coordinates, we propose an efficient model-based gaze tracking system. We first build a unified 3D eye model to relate gaze directions and eye features (pupil center, eyeball center, cornea center) through subject-dependent eye parameters. A personal calibration framework is further proposed to estimate the subject-dependent eye parameters. Finally, we can perform real-time gaze tracking given the 3D coordinates of eye features from Kinect and the subject-dependent eye parameters from the personal calibration procedure.
Experimental results with 6 subjects prove the effectiveness of the proposed 3D eye model and the personal calibration framework. Furthermore, the gaze tracking system is able to work in real time (20 fps) and with low resolution eye images. --- paper_title: General theory of remote gaze estimation using the pupil center and corneal reflections paper_content: This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented. --- paper_title: User-calibration-free remote eye-gaze tracking system with extended tracking range paper_content: A novel general method to extend the tracking range of user-calibration-free remote eye-gaze tracking (REGT) systems that are based on the analysis of stereo-images from multiple cameras is presented. The method consists of two distinct phases. In the brief initial phase, estimates of the center of the pupil and corneal reflections in pairs of stereo-images are used to estimate automatically a set of subject-specific eye parameters. In the second phase, these subject — specific eye parameters are used with estimates of the center of the pupil and corneal reflections in images from any one of the systems' cameras to compute the Point-of-Gaze (PoG). Experiments with a system that includes two cameras show that the tracking range for horizontal gaze directions can be extended from ±23.2° when the two cameras are used as a stereo pair to ±35.5° when the two cameras are used independently to estimate the POG. --- paper_title: A single camera eye-gaze tracking system with free head motion paper_content: Eye-gaze as a form of human machine interface holds great promise for improving the way we interact with machines. Eye-gaze tracking devices that are non-contact, non-restrictive, accurate and easy to use will increase the appeal for including eye-gaze information in future applications. The system we have developed and which we describe in this paper achieves these goals using a single high resolution camera with a fixed field of view. The single camera system has no moving parts which results in rapid reacquisition of the eye after loss of tracking. Free head motion is achieved using multiple glints and 3D modeling techniques. Accuracies of under 1° of visual angle are achieved over a field of view of 14x12x20 cm and over various hardware configurations, camera resolutions and frame rates. 
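To make the model-based pipeline described above concrete, the following minimal sketch shows the usual closing steps once 3D cornea and pupil centers are available: form the optical axis, rotate it by user-specific kappa angles (which a one-point calibration can provide), and intersect the resulting visual axis with the screen plane. The coordinate frame, the kappa values and all numbers here are illustrative assumptions, not values from the cited papers.

    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    def optical_axis(cornea_center, pupil_center):
        """Direction of the eye's optical axis in camera coordinates."""
        return unit(np.asarray(pupil_center) - np.asarray(cornea_center))

    def apply_kappa(optic_dir, alpha_deg, beta_deg):
        """Rotate the optical axis by horizontal/vertical kappa angles
        (estimated once per user, e.g. from a single calibration target)."""
        a, b = np.radians([alpha_deg, beta_deg])
        Ry = np.array([[np.cos(a), 0, np.sin(a)],
                       [0, 1, 0],
                       [-np.sin(a), 0, np.cos(a)]])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(b), -np.sin(b)],
                       [0, np.sin(b), np.cos(b)]])
        return unit(Ry @ Rx @ optic_dir)

    def intersect_screen(origin, direction, plane_point, plane_normal):
        """Point of gaze as the intersection of the visual axis with the screen plane."""
        n = unit(np.asarray(plane_normal))
        d = np.asarray(direction)
        t = np.dot(np.asarray(plane_point) - np.asarray(origin), n) / np.dot(d, n)
        return np.asarray(origin) + t * d

    # Illustrative values only (millimetres, camera coordinate frame, screen at z = 0).
    cornea = np.array([10.0, -5.0, 550.0])
    pupil = np.array([10.5, -4.6, 545.8])
    visual = apply_kappa(optical_axis(cornea, pupil), alpha_deg=5.0, beta_deg=1.5)
    print(intersect_screen(cornea, visual, plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))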
--- paper_title: Calibration-free gaze tracking using a binocular 3D eye model paper_content: This paper presents a calibration-free method for estimating the point of gaze (POG) on a display by using two pairs of stereo cameras. By using one pair of cameras and two light sources, the optical axis of the eye and the position of the center of the cornea can be estimated. This estimation is carried out by using a spherical model of the cornea. One pair of cameras is used for the estimation of the optical axis of the left eye, and the other pair is used for the estimation of the optical axis of the right eye. The point of intersection of optical axis with the display is termed the point of the optical axis (POA). The POG is approximately estimated as the midpoint of the line joining POAs of both the eyes with the display. We have developed a prototype system based on this method and demonstrated that the midpoint of POAs was closer to the fiducial point that the user gazed at than each POA. --- paper_title: Hybrid Method for 3-D Gaze Tracking Using Glint and Contour Features paper_content: Glint features have important roles in gaze-tracking systems. However, when the operation range of a gaze-tracking system is enlarged, the performance of glint-feature-based (GFB) approaches will be degraded mainly due to the curvature variation problem at around the edge of the cornea. Although the pupil contour feature may provide complementary information to help estimating the eye gaze, existing methods do not properly handle the cornea refraction problem, leading to inaccurate results. This paper describes a contour-feature-based (CFB) 3-D gaze-tracking method that is compatible to cornea refraction. We also show that both the GFB and CFB approaches can be formulated in a unified framework and, thus, they can be easily integrated. Furthermore, it is shown that the proposed CFB method and the GFB method should be integrated because the two methods provide complementary information that helps to leverage the strength of both features, providing robustness and flexibility to the system. Computer simulations and real experiments show the effectiveness of the proposed approach for gaze tracking. --- paper_title: Gaze estimation method based on an aspherical model of the cornea: surface of revolution about the optical axis of the eye paper_content: A novel gaze estimation method based on a novel aspherical model of the cornea is proposed in this paper. The model is a surface of revolution about the optical axis of the eye. The calculation method is explained on the basis of the model. A prototype system for estimating the point of gaze (POG) has been developed using this method. The proposed method has been found to be more accurate than the gaze estimation method based on a spherical model of the cornea. --- paper_title: Novel Eye Gaze Tracking Techniques Under Natural Head Movement paper_content: Most available remote eye gaze trackers have two characteristics that hinder them being widely used as the important computer input devices for human computer interaction. First, they have to be calibrated for each user individually; second, they have low tolerance for head movement and require the users to hold their heads unnaturally still. In this paper, by exploiting the eye anatomy, we propose two novel solutions to allow natural head movement and minimize the calibration procedure to only one time for a new individual. The first technique is proposed to estimate the 3D eye gaze directly. 
In this technique, the cornea of the eyeball is modeled as a convex mirror. Via the properties of convex mirror, a simple method is proposed to estimate the 3D optic axis of the eye. The visual axis, which is the true 3D gaze direction of the user, can be determined subsequently after knowing the angle deviation between the visual axis and optic axis by a simple calibration procedure. Therefore, the gaze point on an object in the scene can be obtained by simply intersecting the estimated 3D gaze direction with the object. Different from the first technique, our second technique does not need to estimate the 3D eye gaze directly, and the gaze point on an object is estimated from a gaze mapping function implicitly. In addition, a dynamic computational head compensation model is developed to automatically update the gaze mapping function whenever the head moves. Hence, the eye gaze can be estimated under natural head movement. Furthermore, it minimizes the calibration procedure to only one time for a new individual. The advantage of the proposed techniques over the current state of the art eye gaze trackers is that it can estimate the eye gaze of the user accurately under natural head movement, without need to perform the gaze calibration every time before using it. Our proposed methods will improve the usability of the eye gaze tracking technology, and we believe that it represents an important step for the eye tracker to be accepted as a natural computer input device. --- paper_title: A single-camera remote eye tracker paper_content: Many eye-tracking systems either require the user to keep their head still or involve cameras or other equipment mounted on the user's head. While acceptable for research applications, these limitations make the systems unsatisfactory for prolonged use in interactive applications. Since the goal of our work is to use eye trackers for improved visual communication through gaze guidance [1,2] and for Augmentative and Alternative Communication (AAC) [3], we are interested in less invasive eye tracking techniques. --- paper_title: Detecting eye position and gaze from a single camera and 2 light sources paper_content: We introduce a new method for computing the 3D position of an eye and its gaze direction from a single camera and at least two near infra-red light sources. The method is based on the theory of spherical optical surfaces and uses the Gullstrand model of the eye to estimate the positions of the center of the cornea and the center of the pupil in 3D. The direction of gaze can then be computed from the vector connecting these two points. The point of regard can also be computed from the intersection of the direction of gaze with an object in the scene. We have simulated this model using ray traced images of the eye, and obtained very promising results. The major contribution of this new technique over current eye tracking technology is that the system does not require to be calibrated with the user before each user session, and it allows for free head motion. --- paper_title: A free-head, simple calibration, gaze tracking system that enables gaze-based interaction paper_content: Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. 
It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degree (visual angle). --- paper_title: Eye gaze tracking using an active stereo head paper_content: In the eye gaze tracking problem, the goal is to determine where on a monitor screen a computer user is looking, i.e., the gaze point. Existing systems generally have one of two limitations: either the head must remain fixed in front of a stationary camera, or, to allow for head motion, the user must wear an obstructive device. We introduce a 3D eye tracking system where head motion is allowed without the need for markers or worn devices. We use a pair of stereo systems: a wide-angle stereo system detects the face and steers an active narrow-FOV stereo system to track the eye at high resolution. For high-resolution tracking, the eye is modeled in 3D, including the corneal ball, pupil and fovea. We discuss the calibration of the stereo systems, the eye model, eye detection and tracking, and we close with an evaluation of the accuracy of the estimated gaze point on the monitor. --- paper_title: 3D eye model-based gaze estimation from a depth sensor paper_content: In this paper, we address the 3D eye gaze estimation problem using a low-cost, simple-setup, and non-intrusive consumer depth sensor (Kinect sensor). We present an effective and accurate method based on a 3D eye model to estimate the point of gaze of a subject with tolerance to free head movement. To determine the parameters involved in the proposed eye model, we propose i) an improved convolution-based means of gradients iris center localization method to accurately and efficiently locate the iris center in 3D space; ii) a geometric constraints-based method to estimate the eyeball center under the constraints that all the iris center points are distributed on a sphere originating from the eyeball center and that the sizes of the two eyeballs of a subject are identical; iii) an effective Kappa angle calculation method based on the fact that the visual axes of both eyes intersect at the same point on the screen plane. The final point of gaze is calculated by using the estimated eye model parameters. We experimentally evaluate our gaze estimation method on five subjects. The experimental results show the good performance of the proposed method, with an average estimation accuracy of 3.78°, which outperforms several state-of-the-art methods. --- paper_title: Taxonomic study of polynomial regressions applied to the calibration of video-oculographic systems paper_content: Of gaze tracking techniques, video-oculography (VOG) is one of the most attractive because of its versatility and simplicity. VOG systems based on general purpose mapping methods use simple polynomial expressions to estimate a user's point of regard. Although the behaviour of such systems is generally acceptable, a detailed study of the calibration process is needed to facilitate progress in improving accuracy and tolerance to user head movement. To date, there has been no thorough comparative study of how mapping equations affect final system response. After developing a taxonomic classification of calibration functions, we examine over 400,000 models and evaluate the validity of several conventional assumptions.
The rigorous experimental procedure employed enabled us to optimize the calibration process for a real VOG gaze tracking system and, thereby, halve the calibration time without detrimental effect on accuracy or tolerance to head movement. --- paper_title: Towards accurate and robust cross-ratio based gaze trackers through learning from simulation paper_content: Cross-ratio (CR) based methods offer many attractive properties for remote gaze estimation using a single camera in an uncalibrated setup by exploiting invariance of a plane projectivity. Unfortunately, due to several simplification assumptions, the performance of CR-based eye gaze trackers decays significantly as the subject moves away from the calibration position. In this paper, we introduce an adaptive homography mapping for achieving gaze prediction with higher accuracy at the calibration position and more robustness under head movements. This is achieved with a learning-based method for compensating both spatially-varying gaze errors and head pose dependent errors simultaneously in a unified framework. The model of adaptive homography is trained offline using simulated data, saving a tremendous amount of time in data collection. We validate the effectiveness of the proposed approach using both simulated and real data from a physical setup. We show that our method compares favorably against other state-of-the-art CR based methods. --- paper_title: Homography normalization for robust gaze estimation in uncalibrated setups paper_content: Homography normalization is presented as a novel gaze estimation method for uncalibrated setups. The method applies when head movements are present but without any requirements to camera calibration or geometric calibration. The method is geometrically and empirically demonstrated to be robust to head pose changes and despite being less constrained than cross-ratio methods, it consistently performs favorably by several degrees on both simulated data and data from physical setups. The physical setups include the use of off-the-shelf web cameras with infrared light (night vision) and standard cameras with and without infrared light. The benefits of homography normalization and uncalibrated setups in general are also demonstrated through obtaining gaze estimates (in the visible spectrum) using only the screen reflections on the cornea. --- paper_title: Augmenting the robustness of cross-ratio gaze tracking methods to head movement paper_content: Remote gaze estimation using a single non-calibrated camera, simple user calibration or calibration free, and robust to head movements are very desirable features of eye tracking systems. Because cross-ratio (CR) is an invariant property of projective geometry, gaze estimation methods that rely on this property have the potential to provide these features, though most current implementations rely on a few simplifications that compromise the performance of the method. In this paper, the CR method for gaze tracking is revisited, and we introduce a new method that explicitly compensates head movements using a simple 3 parameter eye model. The method uses a single non-calibrated camera and requires a simple calibration procedure per user to estimate the eye parameters. We have conducted simulations and experiments with real users that show significant improvements over current state-of-the-art CR methods that do not explicitly compensate for head motion. 
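As a brief illustration of the planar-projectivity idea underlying the cross-ratio and homography-normalization methods above: if four light sources sit at the screen corners, their corneal reflections and the pupil center are approximately related to the screen by a homography. The sketch below estimates that homography with a direct linear transform and maps the pupil center through it; it omits the bias and head-pose corrections the cited methods add, and the pixel coordinates are made up.

    import numpy as np

    def homography(src, dst):
        """Direct linear transform for a 2D homography from 4+ point pairs."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
        return Vt[-1].reshape(3, 3)

    def map_point(H, p):
        q = H @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]

    # Glint positions in image pixels (assumed order: screen corners TL, TR, BR, BL).
    glints = [(312.0, 240.5), (330.1, 241.0), (329.4, 254.2), (311.6, 253.5)]
    screen = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
    pupil = (321.0, 247.0)

    H = homography(glints, screen)
    print(map_point(H, pupil))   # uncorrected point of gaze on the screen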
--- paper_title: Free head motion eye gaze tracking using a single camera and multiple light sources paper_content: One of the main limitations of current remote eye gaze tracking (REGT) techniques is that the user's head must remain within a very limited area in front of the monitor screen. In this paper, we present a free head motion REGT technique. By projecting a known rectangular pattern of lights, the technique estimates the gaze position relative to this rectangle using an invariant property of projective geometry. We carry extensive analysis of similar methods using an eye model to compare their accuracy. Based on these results, we propose a new estimation procedure that compensates the angular difference between the eye visual axis and optical axis. We have developed a real time (30 fps) prototype using a single camera and 5 light sources to generate the light pattern. Experimental results shows that the accuracy of the system is about 1deg of visual angle --- paper_title: Gaze direction estimation using support vector machine with active appearance model paper_content: In recent years, research on human-computer interaction is becoming popular, most of which uses body movements, gestures or eye gaze direction. Until now, gazing estimation is still an active research domain. We propose an efficient method to solve the problem of the eye gaze point. We first locate the eye region by modifying the characteristics of the Active Appearance Model (AAM). Then by employing the Support Vector Machine (SVM), we estimate the five gazing directions through classification. The original 68 facial feature points in AAM are modified into 36 eye feature points. According to the two-dimensional coordinates of feature points, we classify different directions of eye gazing. The modified 36 feature points describe the contour of eyes, iris size, iris location, and the position of pupils. In addition, the resolution of cameras does not affect our method to determine the direction of line of sight accurately. The final results show the independence of classifications, less classification errors, and more accurate estimation of the gazing directions. --- paper_title: Appearance-Based Gaze Estimation in the Wild paper_content: Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have been not evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild. 
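As a rough sketch of the kind of appearance-based regressor evaluated in such studies (a simplified stand-in, not the architecture of any cited paper), the network below maps a normalized grayscale eye patch to two gaze angles; the 36x60 input size, layer sizes and training settings are assumptions.

    import torch
    import torch.nn as nn

    class EyeGazeNet(nn.Module):
        """Small CNN that maps a 1x36x60 eye patch to (yaw, pitch) in radians."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 20, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(20, 50, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(50 * 6 * 12, 500), nn.ReLU(),
                nn.Linear(500, 2),
            )

        def forward(self, x):
            return self.regressor(self.features(x))

    model = EyeGazeNet()
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on random stand-in data.
    eyes = torch.randn(32, 1, 36, 60)      # batch of normalized eye patches
    gaze = torch.zeros(32, 2)              # ground-truth yaw/pitch angles
    loss = criterion(model(eyes), gaze)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()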
--- paper_title: Manifold Alignment for Person Independent Appearance-Based Gaze Estimation paper_content: We show that dually supervised manifold embedding can improve the performance of machine learning based person-independent and thus calibration-free gaze estimation. For this purpose, we perform a manifold embedding for each person in the training dataset and then learn a linear transformation that aligns the individual, person-dependent manifolds. We evaluate the effect of manifold alignment on the recently presented Columbia dataset, where we analyze the influence on 6 regression methods and 8 feature variants. Using manifold alignment, we are able to improve the person-independent gaze estimation performance by up to 31.2 % compared to the best approach without manifold alignment. --- paper_title: Estimation of eye gaze direction angles based on active appearance models paper_content: In this paper we demonstrate efficient methods for continuous estimation of eye gaze angles with application to sign language videos. The difficulty of the task lies on the fact that those videos contain images with low face resolution since they are recorded from distance. First, we proceed to the modeling of face and eyes region by training and fitting Global and Local Active Appearance Models (LAAM). Next, we propose a system for eye gaze estimation based on a machine learning approach. In the first stage of our method, we classify gaze into discrete classes using GMMs that are based either on the parameters of the LAAM, or on HOG descriptors for the eyes region. We also propose a method for computing gaze direction angles from GMM log-likelihoods. We qualitatively and quantitatively evaluate our methods on two sign language databases and compare with a state of the art geometric model of the eye based on LAAM landmarks, which provides an estimate in direction angles. Finally, we further evaluate our framework by getting ground truth data from an eye tracking system Our proposed methods, and especially the GMMs using LAAM parameters, demonstrate high accuracy and robustness even in challenging tasks. --- paper_title: Local Binary Pattern Histogram features for on-screen eye-gaze direction estimation and a comparison of appearance based methods paper_content: Human Computer Interaction (HCI) has become an important focus of both computer science researches and industrial applications. And, on-screen gaze estimation is one of the hottest topics in this rapidly growing field. Eye-gaze direction estimation is a sub-research area of on-screen gaze estimation and the number of studies that focused on the estimation of on-screen gaze direction is limited. Due to this, various appearance-based video-oculography methods are investigated in this work. Firstly, a new dataset is created via user images taken from daylight censored cameras located at computer screen. Then, Local Binary Pattern Histogram (LBPH), which is used in this work for the first time to obtain on-screen gaze direction information, and Principal Component Analysis (PCA) methods are employed to extract image features. And, parameter optimized Support Vector Machine (SVM), Artificial Neural Networks (ANNs) and k-Nearest Neighbor (k-NN) learning methods are adopted in order to estimate on-screen gaze direction. Finally, these methods' abilities to correctly estimate the on-screen gaze direction are compared using the resulting classification accuracies of applied methods and previous works. 
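As an aside, a minimal sketch of an LBP-histogram-plus-SVM pipeline of the kind compared above; the uniform-LBP settings, the 4x4 grid of histogram cells and the SVM parameters are illustrative choices, not those of the cited study.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    P, R = 8, 1  # 8 neighbours on a radius-1 circle

    def lbp_histogram(eye_gray, grid=(4, 4)):
        """Concatenate uniform-LBP histograms computed over a grid of cells."""
        lbp = local_binary_pattern(eye_gray, P, R, method="uniform")
        n_bins = P + 2
        cells_y = np.array_split(np.arange(lbp.shape[0]), grid[0])
        cells_x = np.array_split(np.arange(lbp.shape[1]), grid[1])
        feats = []
        for ys in cells_y:
            for xs in cells_x:
                cell = lbp[np.ix_(ys, xs)]
                hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    # eye_images: list of grayscale eye crops; labels: gaze-direction classes
    # (e.g. 0=left, 1=center, 2=right); both assumed to come from elsewhere.
    def train_direction_classifier(eye_images, labels):
        X = np.stack([lbp_histogram(img) for img in eye_images])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(X, labels)
        return clf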
The best classification accuracy of 96.67% is obtained when using LBPH and SVM method pair which is better than previous works. The results also show that appearance based methods are pretty applicable for estimating on-screen gaze direction. --- paper_title: Statistical models of appearance for eye tracking and eye-blink detection and measurement paper_content: A statistical active appearance model (AAM) is developed to track and detect eye blinking. The model has been designed to be robust to variations of head pose or gaze. In particular we analyze and determine the model parameters which encode the variations caused by blinking. This global model is further extended using a series of sub-models to enable independent modeling and tracking of the two eye regions. Several methods to enable measurement and detection of eye-blink are proposed and evaluated. The results of various tests on different image databases are presented to validate each model. --- paper_title: Eye detection using discriminatory Haar features and a new efficient SVM paper_content: This paper presents an accurate and efficient eye detection method using the discriminatory Haar features (DHFs) and a new efficient support vector machine (eSVM). The DHFs are extracted by applying a discriminating feature extraction (DFE) method to the 2D Haar wavelet transform. The DFE method is capable of extracting multiple discriminatory features for two-class problems based on two novel measure vectors and a new criterion in the whitened principal component analysis (PCA) space. The eSVM significantly improves the computational efficiency upon the conventional SVM for eye detection without sacrificing the generalization performance. Experiments on the Face Recognition Grand Challenge (FRGC) database and the BioID face database show that (i) the DHFs exhibit promising classification capability for eye detection problem; (ii) the eSVM runs much faster than the conventional SVM; and (iii) the proposed eye detection method achieves near real-time eye detection speed and better eye detection performance than some state-of-the-art eye detection methods. A discriminating feature extraction (DFE) method for two-class problems is proposed.The DFE method is applied to derive the discriminatory Haar features (DHFs) for eye detection.An efficient support vector machine (eSVM) is proposed to improve the efficiency of the SVM.An accurate and efficient eye detection method is presented using the DHFs and the eSVM. --- paper_title: Eye Tracking for Everyone paper_content: From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2:5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10–15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. 
With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at http://gazecapture.csail.mit.edu. --- paper_title: Estimating 3D Gaze Directions Using Unlabeled Eye Images via Synthetic Iris Appearance Fitting paper_content: Estimating three-dimensional (3D) human eye gaze by capturing a single eye image without active illumination is challenging. Although the elliptical iris shape provides a useful cue, existing methods face difficulties in ellipse fitting due to unreliable iris contour detection. These methods may fail frequently, especially with low-resolution eye images. In this paper, we propose a synthetic iris appearance fitting (SIAF) method that is model-driven to compute the 3D gaze direction from the iris shape. Instead of fitting an ellipse based on an exactly detected iris contour, our method first synthesizes a set of physically possible iris appearances and then optimizes inside this synthetic space to find the best solution to explain the captured eye image. In this way, the solution is highly constrained and guaranteed to be physically feasible. In addition, the proposed advanced image analysis techniques also help the SIAF method be robust to unreliable iris contour detection. Furthermore, with multiple eye images, we propose a SIAF-joint method that can further reduce the gaze error by half, and it also resolves the binary ambiguity which is inevitable in conventional methods based on simple ellipse fitting. --- paper_title: Gaze tracking by Binocular Vision and LBP features paper_content: In this paper, a new method for eye gaze tracking under natural head movement is proposed. In this method, the Local Binary Pattern (LBP) texture feature is adopted to calculate the eye features according to the characteristics of the eye, and a precise binocular vision approach is used to detect the space coordinates of the eye. The combined space coordinates and LBP features of the eyes are fed into Support Vector Regression (SVR) to fit the gaze mapping function, in the hope of tracking the gaze direction under natural head movement. The experimental results prove that the proposed method can determine the gaze direction accurately. --- paper_title: Appearance-Based Gaze Tracking with Free Head Movement paper_content: In this work, we develop an appearance-based gaze tracking system allowing users to move their heads freely. The main difficulty of the appearance-based gaze tracking method is that the eye appearance is sensitive to head orientation. To overcome the difficulty, we propose a 3-D gaze tracking method combining head pose tracking and appearance-based gaze estimation. We use a random forest approach to model the neighbor structure of the joint head pose and eye appearance space, and efficiently select neighbors from the collected high-dimensional data set. L1-optimization is then used to seek the best solution for regression from the selected neighboring samples. Experimental results show that it can provide robust binocular gaze tracking with fewer constraints while still providing moderate gaze estimation accuracy.
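As an aside, a much-simplified stand-in for the head-pose-plus-appearance regression idea above: concatenate head-pose angles with a downsampled eye patch and regress gaze angles with an off-the-shelf random forest. The cited method's neighbor selection and L1-optimization are omitted, and every parameter here is an assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def make_feature(eye_gray, head_pose_angles, patch_size=(9, 15)):
        """Downsampled eye appearance concatenated with (yaw, pitch, roll)."""
        h, w = eye_gray.shape
        ys = np.linspace(0, h - 1, patch_size[0]).astype(int)
        xs = np.linspace(0, w - 1, patch_size[1]).astype(int)
        patch = eye_gray[np.ix_(ys, xs)].astype(float).ravel() / 255.0
        return np.concatenate([np.asarray(head_pose_angles, dtype=float), patch])

    def train_gaze_regressor(eye_images, head_poses, gaze_angles):
        X = np.stack([make_feature(e, p) for e, p in zip(eye_images, head_poses)])
        y = np.asarray(gaze_angles)          # N x 2 array of (yaw, pitch)
        forest = RandomForestRegressor(n_estimators=200, random_state=0)
        forest.fit(X, y)
        return forest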
--- paper_title: Free head motion eye gaze tracking using a single camera and multiple light sources paper_content: One of the main limitations of current remote eye gaze tracking (REGT) techniques is that the user's head must remain within a very limited area in front of the monitor screen. In this paper, we present a free head motion REGT technique. By projecting a known rectangular pattern of lights, the technique estimates the gaze position relative to this rectangle using an invariant property of projective geometry. We carry out an extensive analysis of similar methods using an eye model to compare their accuracy. Based on these results, we propose a new estimation procedure that compensates for the angular difference between the eye's visual axis and optical axis. We have developed a real-time (30 fps) prototype using a single camera and 5 light sources to generate the light pattern. Experimental results show that the accuracy of the system is about 1° of visual angle. --- paper_title: Eye-gaze tracking system based on particle swarm optimization and BP neural network paper_content: In order to enhance the practicability and accuracy of the eye-gaze tracking system, a new type of low-pixel eye feature point location method is adopted. This method can accurately extract the eye-gaze features, namely the iris centre point and canthus points, when the image pickup requirements are low. An eye-gaze tracking method based on a particle swarm optimization (PSO) BP neural network is proposed: pictures of eyes are captured under the same environment, and a regression model whose connection weights and threshold values are optimized by the PSO algorithm is built via the BP network. This method is free of the inherent defects of the BP network. It requires only a common camera and normal illumination intensity rather than high-standard hardware, which greatly reduces the restrictive requirements on the system hardware and thus enhances the system's practicability. The experimental results show that the PSO-BP model has higher robustness, accuracy, and recognition rate than the BP model and can effectively enhance eye-gaze tracking accuracy. --- paper_title: Real-time eye gaze direction classification using convolutional neural network paper_content: Estimating eye gaze direction is useful in various human-computer interaction tasks. Knowledge of gaze direction can give valuable information regarding the user's point of attention. Certain patterns of eye movements known as eye accessing cues are reported to be related to the cognitive processes in the human brain. We propose a real-time framework for the classification of eye gaze direction and estimation of eye accessing cues. In the first stage, the algorithm detects faces using a modified version of the Viola-Jones algorithm. A rough eye region is obtained using geometric relations and facial landmarks. The eye region obtained is used in the subsequent stage to classify the eye gaze direction. A convolutional neural network is employed in this work for the classification of eye gaze direction. The proposed algorithm was tested on the Eye Chimera database and found to outperform state-of-the-art methods. The computational complexity of the algorithm is very low in the testing phase. The algorithm achieved an average frame rate of 24 fps in the desktop environment. --- paper_title: Eye-gaze tracking system by haar cascade classifier paper_content: Humans can quickly and effortlessly focus on a few of the most interesting points in an image.
Different observers tend to have the same fixations towards the same scene. In order to predict observer's fixations, eye gaze information can be used to reveal human attention and interest. This paper presents a real-time eye gaze tracking system. Haar cascade classifier is used to calculate the position of eye gaze based on the rectangular features of human eye. Then this position is adopted to match the space coordinates of screen representing where an observer is looking. The experimental results from different kinds of scenes validate the effectiveness of our system. --- paper_title: Driver gaze tracker using deformable template matching paper_content: In driver assistance system, human eye gaze direction is an important feature described some driver's situation such as distraction and fatigue. This paper proposes a method to track driver's gaze direction by using deformable template matching. The method is divided into three steps: first, identifying the face area. Second, localizing the eye area. Finally, combining the eye region model and sight algorithm to determine the driver's gaze direction. Experimental results show that this method can effectively identify human's eye and track gaze motion in a real-time running basis. --- paper_title: Eye Tracking by Template Matching using an Automatic Codebook Generation Scheme paper_content: We present an eye tracking algorithm which is robust against variations in scale, orientation and changes of eye appearances, such as eye blinking. The locations of the eye regions in the different frames are found using template matching. The method is kept invariant for rotations and scale by exploiting temporal information and by using a codebook of eye templates the method is robust against changes in eye region appearances. The entries of the codebook are generated automatically during the tracking of the eye regions and eventually represent a distinct set of eye appearances. Classification into different eye gestures, like blinking or different gaze directions, seems possible using these automatically learned patterns. --- paper_title: A 2 D eye gaze estimation system with low-resolution webcam images paper_content: In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose, one for the eye-ball detection with stable approximate pupil-center and the other one for the eye movements' direction detection. Eyeball is detected using deformable angular integral search by minimum intensity (DAISMI) algorithm. Deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding the stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass-center of eyeball border vertices to be employed for initial deformable template alignment. DTBGE starts running with initial alignment and updates the template alignment with resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements through eyeball size is considered as if it is directly proportional with the deviation of cursor movements in a certain screen size and resolution. The core advantage of the system is that it does not employ the real pupil-center as a reference point for gaze estimation which is more reliable against corneal reflection. 
Visual angle accuracy is used for the evaluation and benchmarking of the system. Effectiveness of the proposed system is presented and experimental results are shown. --- paper_title: GazePointer: A real time mouse pointer control implementation based on eye gaze tracking paper_content: The field of Human-Computer Interaction (HCI) has witnessed a tremendous growth in the past decade. The advent of tablet PCs and cell phones allowing touch-based control has been hailed warmly. The researchers in this field have also explored the potential of `eye-gaze' as a possible means of interaction. Some commercial solutions have already been launched, but they are as yet expensive and offer limited usability. This paper strives to present a low cost real time system for eye-gaze based human-computer interaction. --- paper_title: Reducing shoulder-surfing by using gaze-based password entry paper_content: Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods. --- paper_title: Cheap and Easy PIN Entering Using Eye Gaze paper_content: PINs are one of the most popular methods to perform simple and fast user authentication. PIN stands for Personal Identification Number, which may have any number of digits or even letters. Nevertheless, 4-digit PIN is the most common and is used for instance in ATMs or cellular phones. The main advantage of the PIN is that it is easy to remember and fast to enter. There are, however, some drawbacks. One of them - addressed in this paper - is a possibility to steal PIN by a technique called `shoulder surfing'. To avoid such problems a novel method of the PIN entering was proposed. Instead of using a numerical keyboard, the PIN may be entered by eye gazes, which is a hands-free, easy and robust technique. --- paper_title: Evaluation of the Potential of Gaze Input for Game Interaction paper_content: To evaluate the potential of gaze input for game interaction, we used two tasks commonly found in video game control, target acquisition and target tracking, in a set of two experiments. In the first experiment, we compared the target acquisition and target tracking performance of two eye trackers with four other input devices. Gaze input had a similar performance to the mouse for big targets, and better performance than a joystick, a device often used in gaming.
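As an aside, gaze-only entry schemes such as the PIN and password interfaces above are usually driven by dwell time: a key is accepted once the gaze has stayed inside its rectangle long enough. The sketch below illustrates that logic; the 800 ms threshold and the keypad layout are arbitrary assumptions.

    DWELL_MS = 800  # assumed dwell time needed to "press" a key

    class DwellSelector:
        """Emit a key when the gaze stays inside its rectangle long enough."""
        def __init__(self, key_rects, dwell_ms=DWELL_MS):
            self.key_rects = key_rects      # {'1': (x, y, w, h), ...}
            self.dwell_ms = dwell_ms
            self.current = None
            self.entered_at = None

        def _hit(self, gx, gy):
            for key, (x, y, w, h) in self.key_rects.items():
                if x <= gx < x + w and y <= gy < y + h:
                    return key
            return None

        def update(self, gx, gy, t_ms):
            """Feed one gaze sample; returns a key label on selection, else None."""
            key = self._hit(gx, gy)
            if key != self.current:
                self.current, self.entered_at = key, t_ms
                return None
            if key is not None and t_ms - self.entered_at >= self.dwell_ms:
                self.entered_at = t_ms + 10 ** 9   # avoid repeated selection
                return key
            return None

    # Example: a one-row keypad; gaze samples would come from the eye tracker.
    pad = {str(d): (100 + 60 * d, 500, 50, 50) for d in range(10)}
    selector = DwellSelector(pad)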
In the second experiment, we compared target acquisition performance using either gaze or mouse for pointing, and either a mouse button or an EMG switch for clicking. The hands-free gaze-EMG input combination was faster than the mouse while maintaining a similar error rate. Our results suggest that there is a potential for gaze input in game interaction, given a sufficiently accurate and responsive eye tracker and a well-designed interface. --- paper_title: Evaluation of eye gaze interaction paper_content: Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures. --- paper_title: Eye-gaze tracking system based on particle swarm optimization and BP neural network paper_content: In order to enhance the practicability and accuracy of the eye-gaze tracking system, a new type of low-pixel eye feature point location method is adopted. This method can accurately extract the eye-gaze features, namely the iris centre point and canthus points, when the image pickup requirements are low. An eye-gaze tracking method based on a particle swarm optimization (PSO) BP neural network is proposed: pictures of eyes are captured under the same environment, and a regression model whose connection weights and threshold values are optimized by the PSO algorithm is built via the BP network. This method is free of the inherent defects of the BP network. It requires only a common camera and normal illumination intensity rather than high-standard hardware, which greatly reduces the restrictive requirements on the system hardware and thus enhances the system's practicability. The experimental results show that the PSO-BP model has higher robustness, accuracy, and recognition rate than the BP model and can effectively enhance eye-gaze tracking accuracy. --- paper_title: Manual and gaze input cascaded (MAGIC) pointing paper_content: This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered as an alternative or potentially superior pointing method for computer input. We believe that many fundamental limitations exist with traditional gaze pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be a manual task, used for fine manipulation and selection.
However, a large portion of the cursor movement is eliminated by warping the cursor to the eye gaze area, which encompasses the target. Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker we developed. They were then tested in a pilot study. This early-stage exploration showed that the MAGIC pointing techniques might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons of the two techniques are discussed in light of both performance data and subjective reports. --- paper_title: Increasing the security of gaze-based cued-recall graphical passwords using saliency masks paper_content: With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user's interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry. --- paper_title: Head and gaze dynamics in visual attention and context learning paper_content: Future intelligent environments and systems may need to interact with humans while simultaneously analyzing events and critical situations. Assistive living, advanced driver assistance systems, and intelligent command-and-control centers are just a few of these cases where human interactions play a critical role in situation analysis. In particular, the behavior or body language of the human subject may be a strong indicator of the context of the situation. In this paper we demonstrate how the interaction of a human observer's head pose and eye gaze behaviors can provide significant insight into the context of the event. Such semantic data derived from human behaviors can be used to help interpret and recognize an ongoing event. We present examples from driving and intelligent meeting rooms to support these conclusions, and demonstrate how to use these techniques to improve contextual learning. --- paper_title: Using Eye Gaze Patterns to Identify User Tasks paper_content: Users of today's desktop interface often suffer from interruption overload. Our research seeks to develop an attention manager that mitigates the disruptive effects of interruptions by identifying moments of low mental workload in a user's task sequence. To develop such a system, however, we need effective mechanisms to identify user tasks in real-time. In this paper, we show how eye gaze patterns may be used to identify user tasks. We also show that gaze patterns can indicate usability issues of an interface as well as the mental workload that the interface induces on a user.
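Gaze-pattern analyses of this kind usually start from a fixation sequence rather than raw gaze samples. Below is a minimal dispersion-threshold (I-DT style) fixation detector as an illustration; the dispersion and duration thresholds are arbitrary assumptions.

    def detect_fixations(samples, max_dispersion=30.0, min_duration_ms=100.0):
        """samples: list of (t_ms, x, y). Returns (t_start, t_end, cx, cy) tuples."""
        fixations, start = [], 0
        while start < len(samples):
            end = start
            window = [samples[start]]
            # Grow the window while the point cloud stays compact.
            while end + 1 < len(samples):
                candidate = window + [samples[end + 1]]
                xs = [p[1] for p in candidate]
                ys = [p[2] for p in candidate]
                dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
                if dispersion > max_dispersion:
                    break
                window = candidate
                end += 1
            duration = window[-1][0] - window[0][0]
            if duration >= min_duration_ms and len(window) > 1:
                cx = sum(p[1] for p in window) / len(window)
                cy = sum(p[2] for p in window) / len(window)
                fixations.append((window[0][0], window[-1][0], cx, cy))
                start = end + 1
            else:
                start += 1
        return fixations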
Our results can help inform the design of an attention manager and may lead to new methods to evaluate user interfaces. --- paper_title: Robust Eye and Pupil Detection Method for Gaze Tracking paper_content: Robust and accurate pupil detection is a prerequisite for gaze detection. Hence, we propose a new eye/pupil detection method for gaze detection on a large display. The novelty of our research can be summarized by the following four points. First, in order to overcome the performance limitations of conventional methods of eye detection, such as adaptive boosting (Adaboost) and continuously adaptive mean shift (CAMShift) algorithms, we propose adaptive selection of the Adaboost and CAMShift methods. Second, this adaptive selection is based on two parameters: pixel differences in successive images and matching values determined by CAMShift. Third, a support vector machine (SVM)-based classifier is used with these two parameters as the input, which improves the eye detection performance. Fourth, the center of the pupil within the detected eye region is accurately located by means of circular edge detection, binarization and calculation of the geometric center. The experimental results show that the proposed method can detect the center of the pupil at a speed of approximately 19.4 frames/s with an RMS error of approximately 5.75 pixels, which is superior to the performance of conventional detection methods. --- paper_title: Gaze tracking system at a distance for controlling IPTV paper_content: Gaze tracking is used for detecting the position that a user is looking at. In this research, a new gaze-tracking system and method are proposed to control a large-screen TV at a distance. This research is novel in the following four ways as compared to previous work: First, this is the first system for gaze tracking on a large-screen TV at a distance. Second, in order to increase convenience, the user's eye is captured by a remote gaze-tracking camera not requiring a user to wear any device. Third, without the complicated calibrations among the screen, the camera and the eye coordinates, the gaze position on the TV screen is obtained by using a simple 2D method based on a geometric transform with pupil center and four cornea specular reflections. Fourth, by using a near-infrared (NIR) passing filter on the camera and NIR illuminators, the pupil region becomes distinctive in the input image irrespective of the change in the environmental visible light. Experimental results showed that the proposed system could be used as a new interface for controlling a TV with a 60-inch-wide screen (16:9). --- paper_title: Driver gaze tracker using deformable template matching paper_content: In driver assistance system, human eye gaze direction is an important feature described some driver's situation such as distraction and fatigue. This paper proposes a method to track driver's gaze direction by using deformable template matching. The method is divided into three steps: first, identifying the face area. Second, localizing the eye area. Finally, combining the eye region model and sight algorithm to determine the driver's gaze direction. Experimental results show that this method can effectively identify human's eye and track gaze motion in a real-time running basis. 
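The pupil-detection abstracts above describe locating the pupil center through binarization of the eye region followed by a geometric-center computation. As a rough illustrative sketch of that general idea (not a reproduction of any of the published implementations), the following Python/OpenCV snippet thresholds a cropped grayscale eye image and returns the centroid of the largest dark blob; the function name, threshold value, and morphology settings are assumptions made for illustration, and OpenCV 4.x is assumed for the findContours return signature.

```python
import cv2


def estimate_pupil_center(eye_roi_gray, thresh_value=40):
    """Estimate the pupil center in a cropped grayscale eye region.

    The pupil is assumed to be the darkest large blob; after inverse
    binarization, the geometric center (image moments) of the largest
    contour is returned as a sub-pixel (x, y) estimate.
    """
    # Inverse threshold so the dark pupil becomes the white foreground blob.
    _, binary = cv2.threshold(eye_roi_gray, thresh_value, 255, cv2.THRESH_BINARY_INV)
    # Light morphological opening to suppress eyelash and glint noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

In practice a fixed threshold is fragile under illumination change, which is one reason the cited systems pair such a step with detectors (e.g., Adaboost or CAMShift) or with active infrared illumination.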
--- paper_title: A High Speed Eye Tracking System with Robust Pupil Center Estimation Algorithm paper_content: This paper presents a new high-speed head-mounted binocular eye position measurement system using infrared lighting and image processing technology. Current eye tracking systems either run on-line at a lower speed, do the processing off-line, or use dedicated hardware to reach high on-line processing rates. The eye position measurement system we developed only uses a general-purpose computing system and off-the-shelf imaging devices. The binocular system can provide on-line horizontal and vertical measurement at a speed of 150 Hz on a desktop system with a 3 GHz Pentium processor. We report a novel dual-mode software system and a two-step processing algorithm to increase the processing rate. A symmetric mass center algorithm is designed to provide more accurate measurements when the pupil area is partially occluded. Using synthetic eye images, we show that our algorithm performs consistently better than some of the widely used algorithms in industry. --- paper_title: Eye movement driven head-mounted camera: it looks where the eyes look paper_content: A first proof of concept was developed for a head-mounted video camera system that is continuously aligned with the user's orientation of gaze. Eye movements are tracked by video-oculography and used as signals to drive servo motors that rotate the camera. Thus, the sensorimotor output of a biological system for the control of eye movements - evolved over millions of years - is used to move an artificial eye. All the capabilities of multi-sensory processing for eye, head, and surround motions are detected by the vestibular, visual, and somatosensory systems and used to drive a technical camera system. A camera guided in this way mimics the natural exploration of a visual scene and acquires video sequences from the perspective of a mobile user, while the vestibulo-ocular reflex naturally stabilizes the "gaze-in-space" of the camera during head movements and locomotion. Various applications in health care, industry, and research are conceivable. --- paper_title: Exploring natural eye-gaze-based interaction for immersive virtual reality paper_content: Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses. --- paper_title: Eye Tracking by Template Matching using an Automatic Codebook Generation Scheme paper_content: We present an eye tracking algorithm which is robust against variations in scale, orientation and changes of eye appearances, such as eye blinking. The locations of the eye regions in the different frames are found using template matching. The method is kept invariant to rotations and scale by exploiting temporal information, and by using a codebook of eye templates the method is robust against changes in eye region appearances.
The entries of the codebook are generated automatically during the tracking of the eye regions and eventually represent a distinct set of eye appearances. Classification into different eye gestures, like blinking or different gaze directions, seems possible using these automatically learned patterns. --- paper_title: Binocular eye-tracking for the control of a 3D immersive multimedia user interface paper_content: In this paper, we present an innovative approach to design a gaze-controlled Multimedia User Interface for modern, immersive headsets. The wide-spread availability of consumer grade Virtual Reality Head Mounted Displays such as the Oculus Rift™ transformed VR to a commodity available for everyday use. However, Virtual Environments require new paradigms of User Interfaces, since standard 2D interfaces are designed to be viewed from a static vantage point only, e.g. the computer screen. Additionally, traditional input methods such as the keyboard and mouse are hard to manipulate when the user wears a Head Mounted Display. We present a 3D Multimedia User Interface based on eye-tracking and develop six applications which cover commonly operated actions of everyday computing such as mail composing and multimedia viewing. We perform a user study to evaluate our system by acquiring both quantitative and qualitative data. The study indicated that users make less type errors while operating the eye-controlled interface compared to using the standard keyboard during immersive viewing. Subjects stated that they enjoyed the eye-tracking 3D interface more than the keyboard/mouse combination. --- paper_title: Robust Head Mounted Wearable Eye Tracking System for Dynamical Calibration paper_content: In this work, a new head mounted eye tracking system is presented. Based on computer vision techniques, the system integrates eye images and head movement, in real time, performing a robust gaze point tracking. Nystagmus movements due to vestibulo-ocular reflex are monitored and integrated. The system proposed here is a strongly improved version of a previous platform called HATCAM, which was robust against changes of illumination conditions. The new version, called HAT-Move, is equipped with accurate inertial motion unit to detect the head movement enabling eye gaze even in dynamical conditions. HAT-Move performance is investigated in a group of healthy subjects in both static and dynamic conditions, i.e. when head is kept still or free to move. Evaluation was performed in terms of amplitude of the angular error between the real coordinates of the fixed points and those computed by the system in two experimental setups, specifically, in laboratory settings and in a 3D virtual reality (VR) scenario. The achieved results showed that HAT-Move is able to achieve eye gaze angular error of about 1 degree along both horizontal and vertical directions. --- paper_title: Head-mounted binocular gaze detection for selective visual recognition systems paper_content: For an effective and efficient visual recognition system, the region-of-interest extraction of users is one of most important image processing duties. Generally, the position of the pupil in an eyeball is directly related to the user's interest in an input visual image. However, when using a monocular eye tracking system, it is not easy to discern an accurate position of the region-of-interest of the user in a three-dimensional space. In this paper, an eye tracking system based on binocular gaze detection is presented.
The proposed system is designed as a wearable device with three mini-cameras and two hot-mirrors for users to see through. The two eye monitoring system acquires the pupil images of both eyes through the hot-mirrors illuminated by infrared LEDs; the front-view camera acquires the image visible to the user. The experiment results show that the proposed system improves upon the region-of-interest localization error of the users in both the horizontal and vertical directions for the front-view images acquired by the user. Through the proposed system, it became possible to accurately extract the user's region-of-interest, and so can be used to improve the image processing speed through focused information processing for the region-of-interest and narrow the selective information acquisition needed for users. --- paper_title: 3D gaze tracking method using Purkinje images on eye optical model and pupil paper_content: Gaze tracking is to detect the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, inter-distance between these two Purkinje images, and pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation happens according to the gaze positions in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated by the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space. --- paper_title: Estimating 3-D Point-of-Regard in a Real Environment Using a Head-Mounted Eye-Tracking System paper_content: Unlike conventional portable eye-tracking methods that estimate the position of the mounted camera using 2-D image coordinates, the techniques that are proposed here present richer information about person's gaze when moving over a wide area. They also include visualizing scanpaths when the user with a head-mounted device makes natural head movements. We employ a Visual SLAM technique to estimate the head pose and extract environmental information. When the person's head moves, the proposed method obtains a 3-D point-of-regard. Furthermore, scanpaths can be appropriately overlaid on image sequences to support quantitative analysis. Additionally, a 3-D environment is employed to detect objects of focus and to visualize an attention map.
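Several of the head-mounted binocular systems summarized above recover a 3-D point of regard from the gaze directions of the two eyes. As a worked illustration of the underlying geometry only (not a reproduction of any cited paper), the sketch below triangulates the fixation point as the midpoint of the shortest segment between the left and right gaze rays; the function name and its inputs (eye-center origins and gaze directions expressed in a common coordinate frame) are assumptions made for illustration.

```python
import numpy as np


def point_of_regard_3d(origin_left, dir_left, origin_right, dir_right):
    """Triangulate a 3-D point of regard from two gaze rays.

    Each ray is given by an eye-center origin and a gaze direction in a
    common coordinate frame. Because the two rays rarely intersect exactly,
    the midpoint of the shortest segment connecting them is returned.
    """
    o_l, o_r = np.asarray(origin_left, float), np.asarray(origin_right, float)
    d_l = np.asarray(dir_left, float)
    d_r = np.asarray(dir_right, float)
    d_l /= np.linalg.norm(d_l)
    d_r /= np.linalg.norm(d_r)

    w = o_l - o_r
    b = d_l @ d_r                      # cosine of the angle between the rays
    d, e = d_l @ w, d_r @ w
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:              # near-parallel rays: depth is unreliable
        return None
    s = (b * e - d) / denom            # parameter along the left ray
    t = (e - b * d) / denom            # parameter along the right ray
    return (o_l + s * d_l + o_r + t * d_r) / 2.0


# Example: eyes 6 cm apart, both converging on a point roughly 0.5 m ahead.
print(point_of_regard_3d([-0.03, 0, 0], [0.06, 0, 1.0], [0.03, 0, 0], [-0.06, 0, 1.0]))
```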
--- paper_title: Wearable Reading Assist System: Augmented Reality Document Combining Document Retrieval and Eye Tracking paper_content: We present a new system that assists people's reading activity by combining a wearable eye tracker, a see-through head mounted display, and an image based document retrieval engine. An image based document retrieval engine is used for identification of the reading document, whereas an eye tracker is used to detect which part of the document the reader is currently reading. The reader can refer to the glossary of the latest viewed key word by looking at the see-through head mounted display. This novel document reading assist application, which is the integration of a document retrieval system into an everyday reading scenario for the first time, enriches people's reading life. In this paper, we i) investigate the performance of the state-of-the-art image based document retrieval method using a wearable camera, ii) propose a method for identification of the word the reader is attendant, and iii) conduct pilot studies for evaluation of the system in this reading context. The results show the potential of a document retrieval system in combination with a gaze based user-oriented system. --- paper_title: Design and implementation of an interactive HMD for wearable AR system paper_content: In this work, we present an interactive optical see-through HMD (head-mounted device) which makes use of a user's gaze information for the interaction in the AR (augmented reality) environment. In particular, we propose a method to employ a user's half-blink information for more efficient interaction. As the interaction is achieved by using a user's eye gaze and half-blink information, the proposed system can provide more efficient computing environment. In addition, the proposed system can be quite helpful to those who have difficulties in using conventional interaction methods which use hands or feet. The experimental results present the robustness and efficiency of the proposed system. --- paper_title: Demo of FaceVR: real-time facial reenactment and eye gaze control in virtual reality paper_content: We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. 
In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call. --- paper_title: openEyes: a low-cost head-mounted eye-tracking solution paper_content: Eye tracking has long held the promise of being a useful methodology for human computer interaction. However, a number of barriers have stood in the way of the integration of eye tracking into everyday applications, including the intrusiveness, robustness, availability, and price of eye-tracking systems. To lower these barriers, we have developed the openEyes system. The system consists of an open-hardware design for a digital eye tracker that can be built from low-cost off-the-shelf components, and a set of open-source software tools for digital image capture, manipulation, and analysis in eye-tracking applications. We expect that the availability of this system will facilitate the development of eye-tracking applications and the eventual integration of eye tracking into the next generation of everyday human computer interfaces. We discuss the methods and technical challenges of low-cost eye tracking as well as the design decisions that produced our current system. --- paper_title: Gaze tracking based on active appearance model and multiple support vector regression on mobile devices paper_content: Gaze tracking technology is a convenient interfacing method for mobile devices. Most previous studies used a large-sized desktop or head-mounted display. In this study, we propose a novel gaze tracking method using an active appearance model (AAM) and multiple support vector regression (SVR) on a mobile device. Our research has four main contributions. First, in calculating the gaze position, the amount of facial rotation and translation based on four feature values is computed using facial feature points detected by AAM. Second, the amount of eye rotation based on two feature values is computed for measuring eye gaze position. Third, to compensate for the fitting error of an AAM in facial rotation, we use the adaptive discrete Kalman filter (DKF), which applies a different velocity of state transition matrix to the facial feature points. Fourth, we obtain gaze position on a mobile device based on multiple SVR by separating the rotation and translation of face and eye rotation. Experimental results show that the root mean square (rms) gaze error is 36.94 pixels on the 4.5-in. screen of a mobile device with a screen resolution of 800×600 pixels. --- paper_title: Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain–machine interfaces paper_content: Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. 
Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game 'Pong'. --- paper_title: Gaze estimation based on head movements in virtual reality applications using deep learning paper_content: Gaze detection in Virtual Reality systems is mostly performed using eye-tracking devices. The coordinates of the sight, as well as other data regarding the eyes, are used as input values for the applications. While this trend is becoming more and more popular in the interaction design of immersive systems, most visors do not come with an embedded eye-tracker, especially those that are low cost and may be based on mobile phones. We suggest implementing an innovative gaze estimation system into virtual environments as a source of information regarding users' intentions. We propose a solution based on a combination of the features of the images and the movement of the head as an input of a Deep Convolutional Neural Network capable of inferring the 2D gaze coordinates in the imaging plane. --- paper_title: Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras paper_content: Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye-position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users. To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained position relative to the display. Unlike methods that require a fixed pose between the HMD and eye camera, our framework allows for automatic calibration even after adjustments of the camera to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and OST-HMD frame, we can calculate the correct projection for different eye positions in real time and changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants. --- paper_title: ModulAR: Eye-Controlled Vision Augmentations for Head Mounted Displays paper_content: In the last few years, the advancement of head mounted display technology and optics has opened up many new possibilities for the field of Augmented Reality.
However, many commercial and prototype systems often have a single display modality, fixed field of view, or inflexible form factor. In this paper, we introduce Modular Augmented Reality (ModulAR), a hardware and software framework designed to improve flexibility and hands-free control of video see-through augmented reality displays and augmentative functionality. To accomplish this goal, we introduce the use of integrated eye tracking for on-demand control of vision augmentations such as optical zoom or field of view expansion. Physical modification of the device's configuration can be accomplished on the fly using interchangeable camera-lens modules that provide different types of vision enhancements. We implement and test functionality for several primary configurations using telescopic and fisheye camera-lens systems, though many other customizations are possible. We also implement a number of eye-based interactions in order to engage and control the vision augmentations in real time, and explore different methods for merging streams of augmented vision into the user's normal field of view. In a series of experiments, we conduct an in depth analysis of visual acuity and head and eye movement during search and recognition tasks. Results show that methods with larger field of view that utilize binary on/off and gradual zoom mechanisms outperform snapshot and sub-windowed methods and that type of eye engagement has little effect on performance. --- paper_title: Real-time nonintrusive monitoring and prediction of driver fatigue paper_content: This paper describes a real-time online prototype driver-fatigue monitor. It uses remotely located charge-coupled-device cameras equipped with active infrared illuminators to acquire video images of the driver. Various visual cues that typically characterize the level of alertness of a person are extracted in real time and systematically combined to infer the fatigue level of the driver. The visual cues employed characterize eyelid movement, gaze movement, head movement, and facial expression. A probabilistic model is developed to model human fatigue and to predict fatigue based on the visual cues obtained. The simultaneous use of multiple visual cues and their systematic combination yields a much more robust and accurate fatigue characterization than using a single visual cue. This system was validated under real-life fatigue conditions with human subjects of different ethnic backgrounds, genders, and ages; with/without glasses; and under different illumination conditions. It was found to be reasonably robust, reliable, and accurate in fatigue characterization. --- paper_title: Driver fatigue alarm based on eye detection and gaze estimation paper_content: The driver assistant system has attracted much attention as an essential component of intelligent transportation systems. One task of driver assistant system is to prevent the drivers from fatigue. For the fatigue detection it is natural that the information about eyes should be utilized. The driver fatigue can be divided into two types, one is the sleep with eyes close and another is the sleep with eyes open. Considering that the fatigue detection is related with the prior knowledge and probabilistic statistics, the dynamic Bayesian network is used as the analysis tool to perform the reasoning of fatigue. Two kinds of experiments are performed to verify the system effectiveness, one is based on the video got from the laboratory and another is based on the video got from the real driving situation. 
Ten persons participate in the test and the experimental result is that, in the laboratory all the fatigue events can be detected, and in the practical vehicle the detection ratio is about 85%. Experiments show that in most of situations the proposed system works and the corresponding performance is satisfying. --- paper_title: Analyzing the relationship between head pose and gaze to model driver visual attention paper_content: Monitoring driver behavior is crucial in the design of advanced driver assistance systems (ADAS) that can detect driver actions, providing necessary warnings when not attentive to driving tasks. The visual attention of a driver is an important aspect to consider, as most driving tasks require visual resources. Previous work has investigated algorithms to detect driver visual attention by tracking the head or eye movement. While tracking pupil can give an accurate direction of visual attention, estimating gaze on vehicle environment is a challenging problem due to changes in illumination, head rotations, and occlusions (e.g. hand, glasses). Instead, this paper investigates the use of the head pose as a coarse estimate of the driver visual attention. The key challenge is the non-trivial relation between head and eye movements while glancing to a target object, which depends on the driver, the underlying cognitive and visual demand, and the environment. First, we evaluate the performance of a state-of-the-art head pose detection algorithm over natural driving recordings, which are compared with ground truth estimations derived from AprilTags attached to a headband. Then, the study proposes regression models to estimate the drivers' gaze based on the head position and orientation, which are built with data from natural driving recordings. The proposed system achieves high accuracy over the horizontal direction, but moderate/low performance over the vertical direction. We compare results while our participants were driving, and when the vehicle was parked. --- paper_title: Driver Gaze Tracking and Eyes Off the Road Detection System paper_content: Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR. --- paper_title: Real-time eye detection and tracking for driver observation under various light conditions paper_content: Eye tracking is one of the key technologies for future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. Thus, nonintrusive methods for eye detection and tracking are important for many applications of vision-based driver-automotive interaction. 
One problem common to many eye tracking methods proposed so far is their sensitivity to lighting condition change. This tends to significantly limit their scope for automotive applications. In this paper we present a new realtime eye detection and tracking method that works under variable and realistic lighting conditions. By combining imaging by using IR light and appearance-based object recognition techniques, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interferences. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Our experimental results show the feasibility of our approach and the validity for the method is extended for drivers wearing sunglasses. --- paper_title: Recognition of a Driver's Gaze for Vehicle Headlamp Control paper_content: In this paper, we propose a novel method for gaze recognition of a driver coping with rotation of a driver's face. Frontal face images and left half profile images were separately trained using the Viola-Jones (V-J) algorithm to produce classifiers that can detect faces. The right half profile can be detected by mirroring the entire image when neither a frontal face nor a left half profile was detected. As an initial step, this method was used to simultaneously detect the driver's face. Then, we applied a regressional version of linear discriminant analysis (LDAr) to the detected facial region to extract important features for classification. Finally, these features were used to classify the driver's gaze in seven directions. In the feature extraction step, LDAr tries to find features that maximize the ratio of interdistances among samples with large differences in the target value to those with small differences in the target value. Therefore, the resultant features are more fitted to regression problems than conventional feature extraction methods. In addition to LDAr, in this paper, a 2-D extension of LDAr is also developed and used as a feature extraction method for gaze recognition. The experimental results show that the proposed method achieves a good gaze recognition rate under various rotation angles of a driver's head, resulting in a reliable headlamp control performance. --- paper_title: Real-time system for monitoring driver vigilance paper_content: In this paper we present a non-intrusive prototype computer vision system for real-time monitoring driver's vigilance. It is based on a hardware system, for real time acquisition of driver's images using an active IR illuminator, and their software implementation for monitoring some visual behaviors that characterize a driver's level of vigilance. These are the eyelid movements and the face pose. The system has been tested with different sequences recorded on night and day driving conditions in a motorway and with different users. We show some experimental results and some conclusions about the performance of the system. --- paper_title: Eye-Gaze Tracking Method Driven by Raspberry PI Applicable in Automotive Traffic Safety paper_content: This paper comes as a response to the fact that, lately, more and more accidents are caused by people who fall asleep at the wheel. Eye tracking is one of the most important aspects in driver assistance systems since human eyes hold much information regarding the driver's state, like attention level, gaze and fatigue level.
The number of times the subject blinks will be taken into account for identification of the subject's drowsiness. Also the direction of where the user is looking will be estimated according to the location of the user's eye gaze. The developed algorithm was implemented on a Raspberry Pi board in order to create a portable system. The main determination of this project is to conceive an active eyetracking based system, which focuses on the drowsiness detection amongst fatigue related deficiencies in driving. --- paper_title: Real-Time Gaze Estimator Based on Driver's Head Orientation for Forward Collision Warning System paper_content: This paper presents a vision-based real-time gaze zone estimator based on a driver's head orientation composed of yaw and pitch. Generally, vision-based methods are vulnerable to the wearing of eyeglasses and image variations between day and night. The proposed method is novel in the following four ways: First, the proposed method can work under both day and night conditions and is robust to facial image variation caused by eyeglasses because it only requires simple facial features and not specific features such as eyes, lip corners, and facial contours. Second, an ellipsoidal face model is proposed instead of a cylindrical face model to exactly determine a driver's yaw. Third, we propose new features-the normalized mean and the standard deviation of the horizontal edge projection histogram-to reliably and rapidly estimate a driver's pitch. Fourth, the proposed method obtains an accurate gaze zone by using a support vector machine. Experimental results from 200 000 images showed that the root mean square errors of the estimated yaw and pitch angles are below 7° under both daylight and nighttime conditions. Equivalent results were obtained for drivers with glasses or sunglasses, and 18 gaze zones were accurately estimated using the proposed gaze estimation method. --- paper_title: Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos paper_content: Analysis of driver's head behavior is an integral part of driver monitoring system. Driver's coarse gaze direction or gaze zone is a very important cue in understanding driver state. Many existing gaze zone estimators are, however, limited to single camera perspectives, which are vulnerable to occlusions of facial features from spatially large head movements away from the frontal pose. Non-frontal glances away from the driving direction, though, are of special interest as interesting events, critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for gaze zone estimation using head pose dynamics to operate robustly and continuously even during large head movements. For experimental evaluations, we collected a dataset from naturalistic on-road driving in urban streets and freeways. A human expert provided the gaze zone ground truth using all vision information including eyes and surround context. Our emphasis is to understand the efficacy of the head pose dynamic information in predicting eye-gaze-based zone ground truth. We conducted several experiments in designing the dynamic features and compared the performance against static head pose based approach. Analyses show that dynamic information significantly improves the results.
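Several of the driver-monitoring abstracts above map a coarse head pose (yaw and pitch) to a discrete gaze zone with a support vector machine. The snippet below is a minimal, self-contained sketch of that idea using scikit-learn; the zone labels, toy angle values, and hyper-parameters are illustrative assumptions rather than values taken from any of the cited studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy training data: (yaw, pitch) head-pose angles in degrees and the gaze
# zone each sample was labelled with (0 = road ahead, 1 = left mirror,
# 2 = right mirror, 3 = instrument cluster). Real systems use many labelled frames.
X_train = np.array([[0, 0], [2, -1], [-45, 5], [-40, 3], [45, 4], [50, 2], [5, -25], [3, -28]])
y_train = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Standardize the angles, then fit an RBF-kernel SVM as the zone classifier.
zone_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
zone_classifier.fit(X_train, y_train)

# Classify the gaze zone for a new head-pose estimate (likely zone 1, left mirror).
print(zone_classifier.predict([[-42.0, 4.0]]))
```

Head pose alone gives only a coarse estimate, which is why the studies above either fuse it with eye pose or model its dynamics over time.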
--- paper_title: Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification paper_content: Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard" which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions. --- paper_title: Estimating Gaze Direction of Vehicle Drivers Using a Smartphone Camera paper_content: Many automated driver monitoring technologies have been proposed to enhance vehicle and road safety. Most existing solutions involve the use of specialized embedded hardware, primarily in high-end automobiles. This paper explores driver assistance methods that can be implemented on mobile devices such as a consumer smartphone, thus offering a level of safety enhancement that is more widely accessible. Specifically, the paper focuses on estimating driver gaze direction as an indicator of driver attention. Input video frames from a smartphone camera facing the driver are first processed through a coarse head pose direction. Next, the locations and scales of face parts, namely mouth, eyes, and nose, define a feature descriptor that is supplied to an SVM gaze classifier which outputs one of 8 common driver gaze directions. A key novel aspect is an in-situ approach for gathering training data that improves generalization performance across drivers, vehicles, smartphones, and capture geometry. Experimental results show that a high accuracy of gaze direction estimation is achieved for four scenarios with different drivers, vehicles, smartphones and camera locations. --- paper_title: Real-Time Detection of Driver Cognitive Distraction Using Support Vector Machines paper_content: As use of in-vehicle information systems (IVISs) such as cell phones, navigation systems, and satellite radios has increased, driver distraction has become an important and growing safety concern. A promising way to overcome this problem is to detect driver distraction and adapt in-vehicle systems accordingly to mitigate such distractions. To realize this strategy, this paper applied support vector machines (SVMs), which is a data mining method, to develop a real-time approach for detecting cognitive distraction using drivers' eye movements and driving performance data. Data were collected in a simulator experiment in which ten participants interacted with an IVIS while driving. 
The data were used to train and test both SVM and logistic regression models, and three different model characteristics were investigated: how distraction was defined, which data were input to the model, and how the input data were summarized. The results show that the SVM models were able to detect driver distraction with an average accuracy of 81.1%, outperforming more traditional logistic regression models. The best performing model (96.1% accuracy) resulted when distraction was defined using experimental conditions (i.e., IVIS drive or baseline drive), the input data were comprised of eye movement and driving measures, and these data were summarized over a 40-s window with 95% overlap of windows. These results demonstrate that eye movements and simple measures of driving performance can be used to detect driver distraction in real time. Potential applications of this paper include the design of adaptive in-vehicle systems and the evaluation of driver distraction --- paper_title: Eye-Gaze Tracking Analysis of Driver Behavior While Interacting With Navigation Systems in an Urban Area paper_content: With the advent of global positioning system technology, smart phones are used as portable navigation systems. Guidelines that ensure driving safety while using conventional on-board navigation systems have already been published but do not extend to portable navigation systems; therefore, this study focused on the analysis of the eye-gaze tracking of drivers interacting with portable navigation systems in an urban area. Combinations of different display sizes and positions of portable navigation systems were adopted by 20 participants in a driving simulator experiment. An expectation maximum algorithm was proposed to classify the measured eye-gaze points; furthermore, three measures of glance frequency, glance time, and total glance time as a percentage were calculated. The results indicated that the convenient display position with a small visual angle can provide a significantly shorter glance time but a significantly higher glance frequency; however, the small-size display will bring on significantly longer glance time that may result in the increasing of visual distraction for drivers. The small-size portable display received significantly lower scores for subjective evaluation of acceptability and fatigue; moreover, the small-size portable display on the conventional built-in position received significantly lower subjective evaluation scores than that of the big-size one on the upper side of the dashboard. In addition, it indicated an increased risk of rear-end collision that the proportion of time that the time-to-collision was less than 1 s was significantly shorter for the portable navigation than that of traditional on-board one. --- paper_title: Head pose and gaze direction tracking for detecting a drowsy driver paper_content: This paper proposes a system that uses gaze direction tracking and head pose estimation to detect drowsiness of a driver. Head pose is estimated by calculating optic flow of the facial features, which are acquired with a corner detection algorithm. Analysis of the driver's head behavior leads to three moving components: nodding, shaking, and tilting. To track the gaze direction of the driver, we trace the center point of the pupil using CDF analysis and estimate the frequency of eye-movement. 
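The distraction and navigation-display studies summarized above reduce continuous gaze data to glance-level measures such as glance frequency, glance duration, and total glance time percentage. The sketch below shows one plausible way to compute such measures from per-frame on-road/off-road labels; the function name, label encoding, and frame rate are assumptions made for illustration and do not correspond to any cited implementation.

```python
def glance_metrics(off_road, frame_rate_hz=30.0):
    """Summarize off-road glance behaviour from per-frame labels.

    `off_road` is a sequence of booleans, one per video frame, that is True
    while the estimated gaze is away from the forward roadway. Returns the
    glance count, mean glance duration in seconds, and the percentage of
    time spent looking away from the road.
    """
    glances, current = [], 0
    for frame_is_off in off_road:
        if frame_is_off:
            current += 1
        elif current:
            glances.append(current)
            current = 0
    if current:                      # close a glance that runs to the end
        glances.append(current)

    total_frames = len(off_road)
    off_frames = sum(glances)
    return {
        "glance_count": len(glances),
        "mean_glance_s": (off_frames / len(glances)) / frame_rate_hz if glances else 0.0,
        "off_road_pct": 100.0 * off_frames / total_frames if total_frames else 0.0,
    }


# Example at 30 fps: 1 s on-road, a 0.5 s off-road glance, then 1 s on-road.
print(glance_metrics([False] * 30 + [True] * 15 + [False] * 30))
# -> {'glance_count': 1, 'mean_glance_s': 0.5, 'off_road_pct': 20.0}
```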
--- paper_title: Usability evaluation of eye tracking on an unmodified common tablet paper_content: This paper describes the design, implementation, and usability evaluation of a neural network based eye tracking system on an unmodified common tablet and discusses the challenges and implications of neural networks as an eye tracking component on a mobile platform. We objectively and subjectively evaluate the usability and performance tradeoffs of calibration, one of the fundamental components of eye tracking. The described system obtained an average spatial accuracy of 3.95° and an average temporal resolution of 0.65 Hz during trials. Results indicate that an increased neural network training set may be utilized to increase spatial accuracy, at the cost of greater physical effort and fatigue. --- paper_title: Eye gesture recognition on portable devices paper_content: Hand-held portable devices have received only little attention as a platform in the eye tracking community so far. This is mainly due to their -- until recently -- limited sensing capabilities and processing power. In this work-in-progress paper we present the first prototype eye gesture recognition system for portable devices that does not require any additional equipment. The system combines techniques from image processing, computer vision and pattern recognition to detect eye gestures in the video recorded using the built-in front-facing camera. In a five-participant user study we show that our prototype can recognise four different continuous eye gestures in near real-time with an average accuracy of 60% on an Android-based smartphone (17.6% false positives) and 67.3% on a laptop (5.9% false positives). This initial result is promising and underlines the potential of eye tracking and eye-based interaction on portable devices. --- paper_title: EyePhone: activating mobile phones with your eyes paper_content: As smartphones evolve researchers are studying new techniques to ease the human-mobile interaction. We propose EyePhone, a novel "hand-free" interfacing system capable of driving mobile applications/functions using only the user's eyes movement and actions (e.g., wink). EyePhone tracks the user's eye movement across the phone's display using the camera mounted on the front of the phone; more specifically, machine learning algorithms are used to: i) track the eye and infer its position on the mobile phone display as a user views a particular application; and ii) detect eye blinks that emulate mouse clicks to activate the target application under view. We present a prototype implementation of EyePhone on a Nokia N810, which is capable of tracking the position of the eye on the display, mapping this positions to an application that is activated by a wink. At no time does the user have to physically touch the phone display. --- paper_title: A system for tracking gaze on handheld devices paper_content: Many of the current gaze-tracking systems require that a subject’s head be stabilized and that the interface be fixed to a table. This article describes a prototype system for tracking gaze on the screen of mobile, handheld devices. The proposed system frees the user and the interface from previous constraints, allowing natural freedom of movement within the operational envelope of the system. The method is software-based, and integrates a commercial eye-tracking device (EyeLink I) with a magnetic positional tracking device (Polhemus FASTRAK). 
The evaluation of the system shows that it is capable of producing valid data with adequate accuracy. --- paper_title: MobiGaze: development of a gaze interface for handheld mobile devices paper_content: Handheld mobile devices that have a touch screen are widely used but are awkward to use with one hand. To solve this problem, we propose MobiGaze, which is a user interface that uses one's gaze (gaze interface) to operate a handheld mobile device. By using stereo cameras, the user's line of sight is detected in 3D, enabling the user to interact with a mobile device by means of his/her gaze. We have constructed a prototype system of MobiGaze that consists of two cameras with IR-LED, a Windows-based notebook PC, and iPod touch. Moreover, we have developed several applications for MobiGaze. --- paper_title: Integrating eye tracking and motion sensor on mobile phone for interactive 3D display paper_content: In this paper, we propose an eye tracking and gaze estimation system for mobile phone. We integrate an eye detector, corner eye center and iso-center to improve pupil detection. The optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow based image tracking. In addition, we further incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on some public video sequences as well as videos acquired directly from mobile phone. --- paper_title: Visible-spectrum remote eye tracker for gaze communication paper_content: Many approaches have been proposed to create an eye tracker based on visible-spectrum. These efforts provide a possibility to create inexpensive eye tracker capable to operate outdoor. Although the resulted tracking accuracy is acceptable for a visible-spectrum head-mounted eye tracker, there are many limitations of these approaches to create a remote eye tracker. In this study, we propose a high-accuracy remote eye tracker that uses visible-spectrum imaging and several gaze communication interfaces suited to the tracker. The gaze communication interfaces are designed to assist people with motor disability. Our results show that the proposed eye tracker achieved an average accuracy of 0.77° and a frame rate of 28 fps with a personal computer. With a tablet device, the proposed eye tracker achieved an average accuracy of 0.82° and a frame rate of 25 fps. The proposed gaze communication interfaces enable users to type a complete sentence containing eleven Japanese characters in about a minute.
The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection. --- paper_title: Smart Tablet Monitoring by a Real-Time Head Movement and Eye Gestures Recognition System paper_content: Different research works are studying to integrate new ways of interaction with mobile devices such as smartphones and tablets to provide a natural and easy mode of communication to people. In this paper we propose a new system for monitoring tablets through head motions and eye-gaze gestures recognition. This system is able to open a browser application with simple motions of the head and the eyes. For the face detection, we use a Viola and Jones technique, and for the eyes detection and tracking we used a Haar classifier for the detection part and template matching for the tracking part. We develop this system on an Android-based tablet. Through the front-facing camera the system captures a real-time streaming video on which we implement the detection and recognition module. We test the system on 5 persons and the experimental results show that the system is robust and invariant to the lightness and the moving state of the user. As the tablet's resources are limited, the response time of our system does not satisfy the real-time condition. That is why we apply cloud computing to solve this problem. --- paper_title: Improving mobile device interaction by eye tracking analysis paper_content: This paper describes a non-intrusive eyetracking tool for mobile devices by using images acquired by the front camera of the iPhone and iPod Touch. By tracking and interpreting the user's gaze to the smartphone's screen coordinates the user can interact with the device in a more natural and spontaneous way. The application uses a Haar classifier based detection module for identifying the eyes in the acquired images and subsequently the CAMSHIFT algorithm to find and track the eyes movement and detect the user's gaze. The performance of the proposed tool was evaluated by testing the system on 16 users and the results showed that in about 79% of the cases it was able to correctly detect the users' gaze. --- paper_title: EyeTab: model-based gaze estimation on unmodified tablet computers paper_content: Despite the widespread use of mobile phones and tablets, hand-held portable devices have only recently been identified as a promising platform for gaze-aware applications. Estimating gaze on portable devices is challenging given their limited computational resources, low quality integrated front-facing RGB cameras, and small screens to which gaze is mapped. In this paper we present EyeTab, a model-based approach for binocular gaze estimation that runs entirely on an unmodified tablet. EyeTab builds on a set of established image processing and computer vision algorithms and adapts them for robust and near-realtime gaze estimation.
A technical prototype evaluation with eight participants in a normal indoors office setting shows that EyeTab achieves an average gaze estimation accuracy of 6.88° of visual angle at 12 frames per second. --- paper_title: Exploiting Eye Tracking for Smartphone Authentication paper_content: Traditional user authentication methods using passcode or finger movement on smartphones are vulnerable to shoulder surfing attack, smudge attack, and keylogger attack. These attacks are able to infer a passcode based on the information collection of user's finger movement or tapping input. As an alternative user authentication approach, eye tracking can reduce the risk of suffering those attacks effectively because no hand input is required. However, most existing eye tracking techniques are designed for large screen devices. Many of them depend on special hardware like high resolution eye tracker and special process like calibration, which are not readily available for smartphone users. In this paper, we propose a new eye tracking method for user authentication on a smartphone. It utilizes the smartphone's front camera to capture a user's eye movement trajectories which are used as the input of user authentication. No special hardware or calibration process is needed. We develop a prototype and evaluate its effectiveness on an Android smartphone. We recruit a group of volunteers to participate in the user study. Our evaluation results show that the proposed eye tracking technique achieves very high accuracy in user authentication. --- paper_title: Interacting with the Computer using Gaze Gestures paper_content: This paper investigates novel ways to direct computers by eye gaze. Instead of using fixations and dwell times, this work focuses on eye motion, in particular gaze gestures. Gaze gestures are insensitive to accuracy problems and immune against calibration shift. A user study indicates that users are able to perform complex gaze gestures intentionally and investigates which gestures occur unintentionally during normal interaction with the computer. Further experiments show how gaze gestures can be integrated into working with standard desktop applications and controlling media devices. --- paper_title: New Solution to the Midas Touch Problem: Identification of Visual Commands Via Extraction of Focal Fixations paper_content: Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed. --- paper_title: Towards the development of a standardized performance evaluation framework for eye gaze estimation systems in consumer platforms paper_content: There is a need to standardize the performance of eye gaze estimation (EGE) methods in various platforms for human computer interaction (HCI). Because of lack of consistent schemes or protocols for summative evaluation of EGE systems, performance results in this field can neither be compared nor reproduced with any consistency.
In contemporary literature, gaze tracking accuracy is measured under non-identical sets of conditions, with variable metrics and most results do not report the impact of system meta-parameters that significantly affect tracking performances. In this work, the diverse nature of these research outcomes and system parameters which affect gaze tracking in different platforms is investigated and their error contributions are estimated quantitatively. Then the concept and development of a performance evaluation framework is proposed- that can define design criteria and benchmark quality measures for the eye gaze research community. --- paper_title: New Solution to the Midas Touch Problem: Identification of Visual Commands Via Extraction of Focal Fixations paper_content: Abstract Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed. --- paper_title: Eye and gaze tracking for interactive graphic display paper_content: This paper describes a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a hierarchical classification scheme that deals with the classes that tend to be misclassified. This leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve gaze-contingent interactive graphic display. --- paper_title: Robust remote gaze estimation method based on multiple geometric transforms paper_content: The remote gaze estimation (RGE) technique has been widely used as a natural interface in consumer electronic devices for decades. Although outstanding outcomes on RGE have been recently reported in the literature, tracking gaze under large head movements is still an unsolved problem. General RGE methods estimate a user’s point of gaze (POG) using a mapping function representing the relationship between several infrared light sources and their corresponding corneal reflections (CRs) in the eye image. 
However, the minimum number of available CRs required for a valid POG estimation cannot be satisfied in those methods because the CRs often tend to be distorted or disappeared inevitably under the unconstrained eye and head movements. To overcome this problem, a multiple-transform-based method is proposed. In the proposed method, through three different geometric transform-based normalization processes, several nonlinear mapping functions are simultaneously obtained in the calibration process and then used to estimate the POG. The geometric transforms and mapping functions can be alternatively employed according to the number of available CRs even under large head movement. Experimental results on six subjects demonstrate the effectiveness of the proposed method. --- paper_title: Implicit Calibration of a Remote Gaze Tracker paper_content: We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field "face" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable "eye" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a "training set" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate an median gaze estimation error of ap-proximately 0.8 degree. --- paper_title: Free head motion eye gaze tracking using a single camera and multiple light sources paper_content: One of the main limitations of current remote eye gaze tracking (REGT) techniques is that the user's head must remain within a very limited area in front of the monitor screen. In this paper, we present a free head motion REGT technique. By projecting a known rectangular pattern of lights, the technique estimates the gaze position relative to this rectangle using an invariant property of projective geometry. We carry extensive analysis of similar methods using an eye model to compare their accuracy. Based on these results, we propose a new estimation procedure that compensates the angular difference between the eye visual axis and optical axis. We have developed a real time (30 fps) prototype using a single camera and 5 light sources to generate the light pattern. Experimental results shows that the accuracy of the system is about 1deg of visual angle --- paper_title: A single-camera remote eye tracker paper_content: Many eye-tracking systems either require the user to keep their head still or involve cameras or other equipment mounted on the user's head. While acceptable for research applications, these limitations make the systems unsatisfactory for prolonged use in interactive applications. Since the goal of our work is to use eye trackers for improved visual communication through gaze guidance [1,2] and for Augmentative and Alternative Communication (AAC) [3], we are interested in less invasive eye tracking techniques. 
--- paper_title: Adaptive Linear Regression for Appearance-Based Gaze Estimation paper_content: We investigate the appearance-based gaze estimation problem, with respect to its essential difficulty in reducing the number of required training samples, and other practical issues such as slight head motion, image resolution variation, and eye blinking. We cast the problem as mapping high-dimensional eye image features to low-dimensional gaze positions, and propose an adaptive linear regression (ALR) method as the key to our solution. The ALR method adaptively selects an optimal set of sparsest training samples for the gaze estimation via l1-optimization. In this sense, the number of required training samples is significantly reduced for high accuracy estimation. In addition, by adopting the basic ALR objective function, we integrate the gaze estimation, subpixel alignment and blink detection into a unified optimization framework. By solving these problems simultaneously, we successfully handle slight head motion, image resolution variation and eye blinking in appearance-based gaze estimation. We evaluated the proposed method by conducting experiments with multiple users and variant conditions to verify its effectiveness. --- paper_title: A free-head, simple calibration, gaze tracking system that enables gaze-based interaction paper_content: Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers free-head, simple personal calibration. It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle). --- paper_title: Towards the development of a standardized performance evaluation framework for eye gaze estimation systems in consumer platforms paper_content: There is a need to standardize the performance of eye gaze estimation (EGE) methods in various platforms for human computer interaction (HCI).
Because of lack of consistent schemes or protocols for summative evaluation of EGE systems, performance results in this field can neither be compared nor reproduced with any consistency. In contemporary literature, gaze tracking accuracy is measured under non-identical sets of conditions, with variable metrics and most results do not report the impact of system meta-parameters that significantly affect tracking performances. In this work, the diverse nature of these research outcomes and system parameters which affect gaze tracking in different platforms is investigated and their error contributions are estimated quantitatively. Then the concept and development of a performance evaluation framework is proposed- that can define design criteria and benchmark quality measures for the eye gaze research community. ---
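Several of the reference entries above rely on cascaded Haar-feature classifiers (the Viola-Jones detector and its OpenCV incarnation) to locate the face and eyes before any gaze estimation takes place. As an illustrative aside, and not code from any of the cited systems, the following minimal Python sketch shows that cascade idea using OpenCV's bundled Haar cascades; the input file name is a placeholder and the detection parameters are ordinary defaults, not values from the papers.

```python
# Minimal sketch of cascaded Haar-feature detection (Viola-Jones style) with OpenCV.
# Assumes the opencv-python package; "frame.jpg" is a placeholder input image.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The cascade rejects most background windows with its early, cheap stages,
# which is what makes real-time detection feasible on mobile hardware.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]          # restrict the eye search to the face region
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                      (0, 255, 0), 2)

cv2.imwrite("frame_annotated.jpg", frame)
```

The two-stage search (face first, then eyes inside the face box) mirrors the detection-then-tracking pipelines described in the mobile eye-tracking abstracts above.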
Title: A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms Section 1: INTRODUCTION Description 1: Introduce the advancements in eye gaze tracking technology, historical context, applications, and challenges in performance evaluation across different platforms. Section 2: Types of eye movements studied Description 2: Discuss the various types of eye movements studied in eye gaze research, including fixations, saccades, scanpath, gaze duration, and pupil size and blink. Section 3: Basic setup and method used for eye gaze estimation Description 3: Explain the typical setup for video-based eye gaze tracking systems and the common methodologies used for eye detection and gaze estimation. Section 4: Calibration Description 4: Describe the calibration process for eye gaze tracking systems, including the generalized structure and model of the human eye and the calibration routines. Section 5: Correspondence of eye gaze with head positions Description 5: Discuss how head orientation affects gaze tracking and the methods to compensate for head movement. Section 6: Estimation of gaze tracking accuracy Description 6: Explain the metrics and methods used to estimate the accuracy of gaze tracking systems. Section 7: EYE GAZE ESTIMATION ALGORITHMS Description 7: Review and categorize various eye gaze tracking algorithms, including 2D regression, 3D model-based, cross-ratio, appearance-based, and shape-based methods. Section 8: USER PLATFORMS IMPLEMENTING GAZE TRACKING Description 8: Describe the implementation of gaze tracking across different user platforms: desktop systems, TV and large display panels, head-mounted setups, automotive, and handheld devices. Section 9: Diversity of gaze estimation performance metrics in different user platforms Description 9: Discuss the variety of performance metrics used in gaze estimation research and the inconsistency in their reporting formats. Section 10: Platform specific factors affecting usability of gaze tracking systems Description 10: Analyze the factors affecting the practical performance of gaze tracking systems across different platforms and the impact of various error sources. Section 11: Need and rationale for developing comprehensive performance evaluation strategies for gaze estimation systems Description 11: Highlight the necessity for standardized performance evaluation strategies and the current gaps in the assessment of gaze tracking systems. Section 12: Concept of a performance evaluation framework for gaze estimation systems Description 12: Propose a framework for evaluating gaze tracking systems' performance, including standardized experiments for testing under various conditions. Section 13: Methodology Description 13: Outline the methodology of the proposed experimental framework, detailing the steps involved in evaluating the performance of gaze tracking systems. Section 14: Studying dynamic eye movement characteristics Description 14: Discuss plans for studying dynamic eye movements using the proposed framework, focusing on smooth pursuits in addition to fixations.
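The outline above lists 2D regression among the gaze estimation algorithms, and several of the cited abstracts describe calibration as fitting a mapping from pupil (or pupil-glint) features to screen coordinates. The sketch below is not taken from any of the cited papers; it is a generic second-order polynomial calibration in NumPy, and the feature values, screen targets, and noise level are invented purely for illustration.

```python
# Generic 2D-regression gaze calibration sketch (not from any specific cited paper).
# Fits a second-order polynomial mapping from pupil-glint vectors to screen coordinates.
import numpy as np

def design_matrix(p):
    px, py = p[:, 0], p[:, 1]
    # Second-order polynomial terms: 1, x, y, xy, x^2, y^2
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def calibrate(pupil_glint, screen_points):
    """Return per-axis coefficient vectors fitted on calibration targets."""
    A = design_matrix(pupil_glint)
    coeff_x, *_ = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)
    return coeff_x, coeff_y

def estimate_gaze(p, coeff_x, coeff_y):
    A = design_matrix(np.atleast_2d(p))
    return np.column_stack([A @ coeff_x, A @ coeff_y])

# Hypothetical 9-point calibration data (feature vectors and on-screen targets).
rng = np.random.default_rng(0)
screen = np.array([[x, y] for y in (100, 540, 980) for x in (160, 960, 1760)], float)
features = screen / 2000.0 + rng.normal(scale=0.01, size=screen.shape)

cx, cy = calibrate(features, screen)
print(estimate_gaze(features[:3], cx, cy))   # should land close to the first targets
```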
Narrative Science Systems: A Review
5
--- paper_title: Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse paper_content: Introduction: Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). Methods: A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. Results: In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%), and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. Conclusions: It is technically possible automatically to generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. --- paper_title: Generating Different Story Tellings from Semantic Representations of Narrative paper_content: In order to tell stories in different voices for different audiences, interactive story systems require: (1) a semantic representation of story structure, and (2) the ability to automatically generate story and dialogue from this semantic representation using some form of Natural Language Generation (nlg). However, there has been limited research on methods for linking story structures to narrative descriptions of scenes and story events. In this paper we present an automatic method for converting from Scheherazade's story intention graph, a semantic representation, to the input required by the personage nlg engine. Using 36 Aesop Fables distributed in DramaBank, a collection of story encodings, we train translation rules on one story and then test these rules by generating text for the remaining 35. The results are measured in terms of the string similarity metrics Levenshtein Distance and BLEU score. The results show that we can generate the 35 stories with correct content: the test set stories on average are close to the output of the Scheherazade realizer, which was customized to this semantic representation. We provide some examples of story variations generated by personage. In future work, we will experiment with measuring the quality of the same stories generated in different voices, and with techniques for making storytelling interactive. 
--- paper_title: A State-Event Transformation Mechanism for Generating Micro Structures of Story in an Integrated Narrative Generation System paper_content: We analyze and classify the relationship between an action and the states before and after it for 689 verb concepts, in order to develop a mechanism that mutually transforms an action into states or states into an action and cyclically repeats the process. The action means an event in which a verb concept for an action is included as the central element. A temporal sequence of events corresponds to a story, while a collection of states means a static narrative knowledge supporting events. This paper describes the current version of the state-event transformation system, which transforms story lines and story worlds into each other using a knowledge base and a conceptual dictionary. Moreover, we extend the basic framework to a circulative generation mechanism with a simple mutation function. Through preliminary performance checks, we confirmed that the transformation of story worlds and story lines is approximately logically adequate and that the circular process produces a diversity of stories. The proposed system is a module in an integrated narrative generation system, and the pilot version is already implemented. In the context of the narrative generation system, the proposed system plays roles in expanding the variation of discourse to be generated and in limiting the possible narrative elements at any given time in the narrative generation process.
--- paper_title: Live topic generation from event streams paper_content: Social platforms constantly record streams of heterogeneous data about human's activities, feelings, emotions and conversations opening a window to the world in real-time. Trends can be computed but making sense out of them is an extremely challenging task due to the heterogeneity of the data and its dynamics making often short-lived phenomena. We develop a framework which collects microposts shared on social platforms that contain media items as a result of a query, for example a trending event. It automatically creates different visual storyboards that reflect what users have shared about this particular event. More precisely it leverages on: (i) visual features from media items for near-deduplication, and (ii) textual features from status updates to interpret, cluster, and visualize media items. A screencast showing an example of these functionalities is published at: http://youtu.be/8iRiwz7cDYY while the prototype is publicly available at http://mediafinder.eurecom.fr. --- paper_title: Automatically generating stories from sensor data paper_content: Recent research in Augmented and Alternative Communication (AAC) has begun to make use of Natural Language Generation (NLG) techniques. This creates an opportunity for constructing stories from sensor data, akin to existing work in life-logging. This paper examines the potential of using NLG to merge the AAC and life-logging domains.
It proposes a four stage hierarchy that categorises levels of complexity of output text. It formulates a key subproblem of clustering sensor data into narrative events and describes three potential approaches for resolving this subproblem. --- paper_title: Narrative Planning: Balancing Plot and Character paper_content: Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors - logical and aesthetic - that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem - to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners. --- paper_title: Automatically generating stories from sensor data paper_content: Recent research in Augmented and Alternative Communication (AAC) has begun to make use of Natural Language Generation (NLG) techniques. This creates an opportunity for constructing stories from sensor data, akin to existing work in life-logging. This paper examines the potential of using NLG to merge the AAC and life-logging domains. It proposes a four stage hierarchy that categorises levels of complexity of output text. It formulates a key subproblem of clustering sensor data into narrative events and describes three potential approaches for resolving this subproblem. --- paper_title: Automatic generation of video narratives from shared UGC paper_content: This paper introduces an evaluated approach to the automatic generation of video narratives from user generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload the content to a common repository. Video narrative techniques, implemented using Narrative Structure Language (NSL) and ShapeShifting Media, are employed to automatically generate movies recounting the event. Such movies are personalized according to the preferences expressed by each individual end-user, for each individual viewing. 
This paper describes our prototype narrative system, MyVideos, deployed as a web application, and reports on its evaluation for one specific use case: assembling stories of a school concert by parents, relatives and friends. The evaluations carried out through focus groups, interviews and field trials, in the Netherlands and UK, provided validating results and further insights into this approach. --- paper_title: Database Management System paper_content: A method and a system of database divisional management for use with a parallel database system comprising an FES (front end server), BES's (back end servers), an IOS (I/O server) and disk units. The numbers of processors assigned to the FES, BES's and IOS, the number of disk units, and the number of partitions of the disk units are determined in accordance with the load pattern in question. Illustratively, there may be established a configuration of one FES, four BES's, one IOS and eight disk units. The number of BES's is varied from one to four depending on the fluctuation in load, so that a scalable system configuration is implemented. When the number of BES's is increased or decreased, only the management information thereabout is transferred between nodes and not the data, whereby the desired degree of parallelism is obtained for high-speed query processing. --- paper_title: MixT: automatic generation of step-by-step mixed media tutorials paper_content: As software interfaces become more complicated, users rely on tutorials to learn, creating an increasing demand for effective tutorials. Existing tutorials, however, are limited in their presentation: Static step-by-step tutorials are easy to scan but hard to create and don't always give all of the necessary information for how to accomplish a step. In contrast, video tutorials provide very detailed information and are easy to create, but they are hard to scan as the video-player timeline does not give an overview of the entire task. We present MixT, which automatically generates mixed media tutorials that combine the strengths of these tutorial types. MixT tutorials include step-by-step text descriptions and images that are easy to scan and short videos for each step that provide additional context and detail as needed. We ground our design in a formative study that shows that mixed-media tutorials outperform both static and video tutorials. --- paper_title: Data Analysis for Massively Distributed Simulations paper_content: More computing power allows increases in the fidelity of simulations. Fast networking allows large clusters of high performance computing resources, often distributed across wide geographic areas, to be brought to bear on the simulations. This increase in fidelity has correspondingly increased the volumes of data simulations are capable of generating. Coordinating distant computing resources and making sense of this mass of data is a problem that must be addressed. Unless data are analyzed and converted into information, simulations will provide no useful knowledge. This paper reports on experiments using distributed analysis, particularly the Apache Hadoop framework, to address the analysis issues and suggests directions for enhancing the analysis capabilities to keep pace with the data generating capabilities found in modern simulation environments. Hadoop provides a scalable, but conceptually simple, distributed computation paradigm based on map/reduce operations implemented over a highly parallel, distributed filesystem.
We developed map/reduce implementations of K-Means and Expectation-Maximization data mining algorithms that take advantage of the Hadoop framework. The Hadoop filesystem dramatically improves the disk scan time needed by these iterative data mining algorithms. We ran these algorithms across multiple Linux clusters over specially reserved high speed networks. The results of these experiments point to potential enhancements for Hadoop and other analysis tools. --- paper_title: Introduction to Artificial Intelligence paper_content: This book is an introduction to artificial intelligence. Topics include reasoning under uncertainty, robot plans, language understanding, and learning. The history of the field as well as intellectual ties to related disciplines are presented. ---
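The "Data Analysis for Massively Distributed Simulations" entry above describes map/reduce implementations of K-Means on Hadoop. The sketch below is not the authors' code; it is a generic single-iteration K-Means step written in the Hadoop Streaming style (a plain stdin/stdout mapper and reducer), with the centroid values, input format, and job invocation chosen purely for illustration.

```python
# Generic Hadoop-Streaming-style K-Means iteration (illustrative, not the cited code).
# mapper: assign each point to its nearest centroid; reducer: average the assigned points.
# Hypothetical usage: hadoop jar hadoop-streaming.jar -mapper "kmeans.py map" \
#   -reducer "kmeans.py reduce" -input points.txt -output centroids_next
import sys

CENTROIDS = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]   # loaded from a side file in practice

def nearest(x, y):
    return min(range(len(CENTROIDS)),
               key=lambda i: (x - CENTROIDS[i][0]) ** 2 + (y - CENTROIDS[i][1]) ** 2)

def map_phase(lines):
    for line in lines:                       # each input line: "x y"
        x, y = map(float, line.split())
        print(f"{nearest(x, y)}\t{x} {y}")   # key = cluster id, value = the point

def reduce_phase(lines):
    sums = {}                                # cluster id -> (sum_x, sum_y, count)
    for line in lines:                       # each input line: "cluster\tx y"
        cid, coords = line.rstrip("\n").split("\t")
        x, y = map(float, coords.split())
        sx, sy, n = sums.get(cid, (0.0, 0.0, 0))
        sums[cid] = (sx + x, sy + y, n + 1)
    for cid, (sx, sy, n) in sums.items():
        print(f"{cid}\t{sx / n} {sy / n}")   # emit the updated centroid

if __name__ == "__main__":
    (map_phase if sys.argv[1] == "map" else reduce_phase)(sys.stdin)
```

One such map/reduce pass is one K-Means iteration; the driver re-reads the emitted centroids and resubmits the job until they stop moving, which is where the fast distributed disk scans mentioned in the abstract pay off.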
Title: Narrative Science Systems: A Review Section 1: INTRODUCTION Description 1: Introduce the importance of data and the potential of systems that can generate narratives from data. Section 2: RELATED WORK Description 2: Review significant research and methods in the field of narrative science systems, highlighting key contributions and approaches. Section 3: ARCHITECTURE OF NARRATION GENERATION SYSTEM Description 3: Describe the components and processes involved in building an automatic narration generation system. Section 4: DISCUSSION Description 4: Discuss the limitations, challenges, and potential improvements in narrative science systems, focusing on content quality and evaluation methods. Section 5: CONCLUSION Description 5: Summarize the study's findings and suggest future research directions for improving automatic narration generation systems.
Radio Frequency Identification - A Review of Low Cost Tag Security Proposals
10
--- paper_title: RFID Systems and Security and Privacy Implications paper_content: The Auto-ID Center is developing low-cost radio frequency identification (RFID) based systems with the initial application as next generation bar-codes. We describe RFID technology, summarize our approach and our research, and most importantly, describe the research opportunities in RFID for experts in cryptography and information security. The common theme in low-cost RFID systems is that computation resources are very limited, and all aspects of the RFID system are connected to each other. Understanding these connections and the resulting design trade-offs is an important prerequisite to effectively answering the challenges of security and privacy in low-cost RFID systems. --- paper_title: Communication by Means of Reflected Power paper_content: Point-to-point communication, with the carrier power generated at the receiving end and the transmitter replaced by a modulated reflector, represents a transmission system which possesses new and different characteristics. Radio, light, or sound waves (essentially microwaves, infrared, and ultrasonic waves) may be used for the transmission under approximate conditions of specular reflection. The basic theory for reflected power communication is discussed with reference to conventional radar transmission, and the law of propagation is derived and compared with the propagation law for radar. A few different methods for the modulation of reflectors are described, and various laboratory and field test results discussed. A few of the civilian applications of the principle are reviewed. It is believed that the reflected-power communication method may yield one or more of the following characteristics: high directivity, automatic pin-pointing in spite of atmospheric bending, elimination of interference fading, simple voice-transmitter design without tubes and circuits and power supplies, increased security, and simplified means for identification and navigation. ---
Title: Radio Frequency Identification - A Review of Low Cost Tag Security Proposals Section 1: INTRODUCTION Description 1: This section introduces RFID technology, its history, uses, market potential, and ongoing developments concerning its costs and implementations. Section 2: RFID BASICS Description 2: This section covers the fundamentals of RFID technology, including its components (tags, readers, and antennas) and the differences between active and passive tags. Section 3: RFID Reader Description 3: This section details the role and function of the RFID reader within the RFID system, explaining how it interacts with tags and processes data. Section 4: RFID Antenna Description 4: This section discusses the types of antennas used in RFID systems and how they affect communication range and effectiveness. Section 5: SECURING RFID TAGS Description 5: This section addresses the security challenges faced by RFID systems and surveys several low-cost proposals to secure RFID tags against unauthorized access. Section 6: Hash-Locking Tags Description 6: This section explains the hash-locking method proposed by Weis et al., detailing how it locks and unlocks tags using a one-way hashing algorithm. Section 7: Minimalist Encryption Approach Description 7: This section describes the minimalist cryptography approach proposed by Juels, which utilizes re-writable tag memory and limited computational power for encryption. Section 8: Universal Re-encryption Approach Description 8: This section outlines a re-encryption method based on universal re-encryption that does not require knowledge of a public key, enhancing RFID tag security. Section 9: Hopper Blum Authentication Description 9: This section introduces the Hopper-Blum (HB) protocol adapted for RFID authentication, detailing its process and potential vulnerabilities. Section 10: CONCLUSION Description 10: This section summarizes the discussed security proposals and emphasizes the need for an adopted standard in securing RFID systems against modern wireless attacks.
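Section 6 of the outline above summarizes the hash-locking scheme, in which a tag stores a metaID derived from a one-way hash and only unlocks when a reader supplies a key hashing to that value. The following is a simplified, hedged Python sketch of that idea only: a real low-cost tag would implement a lightweight hardware hash rather than SHA-256, and the back-end key lookup is merely mimicked with a dictionary here.

```python
# Simplified sketch of the hash-lock idea for low-cost RFID tags (illustrative only;
# real tags use lightweight hardware hashes, not SHA-256, and no Python runtime).
import hashlib
import secrets

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Tag:
    def __init__(self, tag_id: bytes):
        self.tag_id = tag_id
        self.meta_id = None          # stored while the tag is locked
        self.locked = False

    def lock(self, key: bytes):
        self.meta_id = h(key)        # the tag keeps only the hash of the key
        self.locked = True

    def query(self) -> bytes:
        # A locked tag answers every query with its metaID and nothing else.
        return self.meta_id if self.locked else self.tag_id

    def unlock(self, key: bytes) -> bool:
        if self.locked and h(key) == self.meta_id:
            self.locked = False
        return not self.locked

# Back-end database mapping metaID -> key (held by the owner, not by the tag).
key = secrets.token_bytes(16)
tag = Tag(b"EPC-0001")
tag.lock(key)
backend = {h(key): key}

meta_id = tag.query()                # an eavesdropper learns only the metaID
assert tag.unlock(backend[meta_id])  # an authorized reader looks up the key and unlocks
print(tag.query())                   # the tag now reveals its real ID
```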
A Survey of Text Summarization Extractive Techniques
5
--- paper_title: Optimizing Text Summarization Based on Fuzzy Logic paper_content: In this paper we first analyze some state of the art methods for text summarization. We discuss what the main disadvantages of these methods are and then propose a new method using fuzzy logic. Comparisons of results show that our method beats most methods which use machine learning as their core. --- paper_title: Lexrank: graph-based centrality as salience in text summarization paper_content: We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents. --- paper_title: Automated Summarization Evaluation with Basic Elements paper_content: As part of evaluating a summary automatically, it is usual to determine how much of the contents of one or more human-produced 'ideal' summaries it contains. Past automated methods such as ROUGE compare using fixed word ngrams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that address some of the shortcomings of ngrams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments. --- paper_title: A trainable document summarizer paper_content: To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focusses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20% of the original can be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summaries. The trends in our results are in agreement with those of Edmundson who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus.
--- paper_title: The automatic creation of literature abstracts paper_content: Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the "auto-abstract." --- paper_title: Document and passage retrieval based on hidden Markov models paper_content: Introduced is a new approach to Information Retrieval developed on the basis of Hidden Markov Models (HMMs). HMMs are shown to provide a mathematically sound framework for retrieving documents—documents with predefined boundaries and also entities of information that are of arbitrary lengths and formats (passage retrieval). Our retrieval model is shown to encompass promising capabilities: First, the position of occurrences of indexing features can be used for indexing. Positional information is essential, for instance, when considering phrases, negation, and the proximity of features. Second, from training collections we can derive automatically optimal weights for arbitrary features. Third, a query dependent structure can be determined for every document by segmenting the documents into passages that are either relevant or irrelevant to the query. The theoretical analysis of our retrieval model is complemented by the results of preliminary experiments. --- paper_title: New Methods in Automatic Extracting paper_content: This paper describes new methods of automatically extracting documents for screening purposes, i.e. the computer selection of sentences having the greatest potential for conveying to the reader the substance of the document. While previous work has focused on one component of sentence significance, namely, the presence of high-frequency content words (key words), the methods described here also treat three additional components: pragmatic words (cue words); title and heading words; and structural indicators (sentence location). The research has resulted in an operating system and a research methodology. The extracting system is parameterized to control and vary the influence of the above four components. The research methodology includes procedures for the compilation of the required dictionaries, the setting of the control parameters, and the comparative evaluation of the automatic extracts with manually produced extracts. The results indicate that the three newly proposed components dominate the frequency component in the production of better extracts. --- paper_title: Word Sequence Models for Single Text Summarization paper_content: The main problem for generating an extractive automatic text summary is to detect the most relevant information in the source document. For such purpose, recently some approaches have successfully employed the word sequence information from the self-text for detecting the candidate text fragments for composing the summary. In this paper, we employ the so-called n-grams and maximal frequent word sequences as features in a vector space model in order to determine the advantages and disadvantages for extractive text summarization.
--- paper_title: Optimizing Text Summarization Based on Fuzzy Logic paper_content: In this paper we first analyze some state of the art methods for text summarization. We discuss what the main disadvantages of these methods are and then propose a new method using fuzzy logic. Comparisons of results show that our method beats most methods which use machine learning as their core. --- paper_title: Narrative text classification for automatic key phrase extraction in web document corpora paper_content: Automatic key phrase extraction is a useful tool in many text related applications such as clustering and summarization. State-of-the-art methods are aimed towards extracting key phrases from traditional text such as technical papers. Application of these methods on Web documents, which often contain diverse and heterogeneous contents, is of particular interest and challenge in the information age. In this work, we investigate the significance of narrative text classification in the task of automatic key phrase extraction in Web document corpora. We benchmark three methods, TFIDF, KEA, and Keyterm, used to extract key phrases from all the plain text and from only the narrative text of Web pages. ANOVA tests are used to analyze the ranking data collected in a user study using quantitative measures of acceptable percentage and quality value. The evaluation shows that key phrases extracted from the narrative text only are significantly better than those obtained from all plain text of Web pages. This demonstrates that narrative text classification is indispensable for effective key phrase extraction in Web document corpora. --- paper_title: Generic text summarization using local and global properties of sentences paper_content: With the proliferation of text data on the World-Wide Web, the development of methods for automatically summarizing these data becomes more important. Here, we propose a practical approach for extracting the most relevant sentences from the original document to form a summary. The idea of our approach is to exploit both the local and global properties of sentences. The local property can be considered as clusters of significant words within each sentence, while the global property can be thought of as relations of all sentences in the document. These two properties are combined to get a single measure reflecting the informativeness of sentences. Experimental results show that our approach compares favorably to a commercial text summarizer. --- paper_title: Automatic Text Summarization using a Machine Learning Approach paper_content: In this paper we address the automatic summarization task. Recent research works on extractive-summary generation employ some heuristics, but few works indicate how to select the relevant features. We will present a summarization procedure based on the application of trainable Machine Learning algorithms which employs a set of features extracted directly from the original text. These features are of two kinds: statistical - based on the frequency of some elements in the text; and linguistic - extracted from a simplified argumentative structure of the text. We also present some computational results obtained with the application of our summarizer to some well known text databases, and we compare these results to some baseline summarization procedures. --- paper_title: Automatic text summarization with neural networks paper_content: A novel technique for summarizing news articles using neural networks is presented.
A neural network is trained to learn the relevant features of sentences that should be included in the summary of the article. The neural network is then modified to generalize and combine the relevant features apparent in summary sentences. Finally, the modified neural network is used as a filter to summarize news articles. --- paper_title: Text Summarization Using Neural Networks paper_content: Originally published in WSEAS Transactions on Systems, Volume 3, Issue 2, April 2004 (pp. 960-963). --- paper_title: Sentence Features Fusion for Text Summarization Using Fuzzy Logic paper_content: The scoring mechanism of the text features is the unique way for determining the key ideas in the text to be presented as text summary. The efficiency of the technique used for scoring the text sentences could produce good summary. The feature scores are imprecise and uncertain, which makes the differentiation between important and unimportant features a difficult task. In this paper, we introduce fuzzy logic to deal with this problem. Our approach used important features based on fuzzy logic to extract the sentences. In our experiment, we used 30 test documents in DUC2002 data set. Each document is prepared by preprocessing process: sentence segmentation, tokenization, removing stop word, and word stemming. Then, we use 9 important features and calculate their score for each sentence. We propose a method using fuzzy logic for sentence extraction and compare our results with the baseline summarizer and Microsoft Word 2007 summarizers. The results show that the highest average precision, recall, and F-measure for the summaries were obtained from the fuzzy method. --- paper_title: A cue-based hub-authority approach for multi-document text summarization paper_content: Multi-document extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Although some research has introduced the graph-based ranking algorithms such as PageRank and HITS into the text summarization, we propose a new approach under the hub-authority framework in this paper. Our approach combines the text content with some cues such as "cue phrase", "sentence length" and "first sentence" and explores the sub-topics in the multi-documents by bringing the features of these sub-topics into graph-based sentence ranking algorithms. We provide an evaluation of our method on DUC 2004 data. The results show that our approach is an effective graph-ranking schema in multi-document generic text summarization. --- paper_title: Structure-based query-specific document summarization paper_content: Summarization of text documents is increasingly important with the amount of data available on the Internet. The large majority of current approaches view documents as linear sequences of words and create query-independent summaries. However, ignoring the structure of the document degrades the quality of summaries. Furthermore, the popularity of web search engines requires query-specific summaries. We present a method to create query-specific summaries by adding structure to documents by extracting associations between their fragments. --- paper_title: MEAD - A Platform for Multidocument Multilingual Text Summarization paper_content: This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations.
MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection. ---
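Among the entries above, the LexRank abstract describes sentence salience as eigenvector centrality on a graph whose adjacency matrix is built from intra-sentence cosine similarity. The sketch below is a compact, hedged rendering of that thresholded-LexRank idea using scikit-learn for TF-IDF and a plain power iteration; it is not the authors' MEAD implementation, and the threshold, damping factor, and example sentences are illustrative choices only.

```python
# Compact thresholded-LexRank sketch (illustrative; not the authors' MEAD code).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexrank(sentences, threshold=0.1, damping=0.85, iters=100):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    adj = (sim >= threshold).astype(float)          # thresholded similarity graph
    adj /= adj.sum(axis=1, keepdims=True)           # row-stochastic transition matrix
    n = len(sentences)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                          # power iteration with damping
        scores = (1 - damping) / n + damping * adj.T @ scores
    return scores

sentences = [
    "The committee approved the new budget on Monday.",
    "The budget was approved after a long debate in the committee.",
    "Local weather forecasts predict rain for the weekend.",
    "Officials said the approved budget increases school funding.",
]
scores = lexrank(sentences)
top = np.argsort(scores)[::-1][:2]                  # keep the two most central sentences
print([sentences[i] for i in sorted(top)])
```

Sentences that many other sentences resemble accumulate the highest stationary scores, which is exactly the "centrality as salience" argument made in the abstract; dropping the threshold recovers the continuous-LexRank variant it also mentions.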
Title: A Survey of Text Summarization Extractive Techniques Section 1: INTRODUCTION Description 1: This section introduces the importance of text summarization, differentiates between extractive and abstractive summarization methods, and explains the focus of the paper on extractive summarization methods. Section 2: TEXT SUMMARIZATION EARLY HISTORY Description 2: This section provides a historical overview of automatic text summarization, highlighting notable methods and systems from the 1950s to the 1990s. Section 3: VARIOUS FEATURES FOR SUMMARIZATION Description 3: This section discusses different features used in extractive summarization such as title words, sentence location, sentence length, proper nouns, upper-case words, cue phrases, and more. Section 4: EXTRACTIVE SUMMARIZATION METHODS Description 4: This section details various extractive summarization methods including TF-IDF, cluster-based methods, graph-theoretic approaches, machine learning approaches, LSA method, concept-obtained text summarization, neural networks, fuzzy logic, regression for estimating feature weights, multi-document extractive summarization, query-based extractive summarization, and multilingual extractive text summarization. Section 5: CONCLUSIONS Description 5: This section summarizes the insights gained from the survey, indicating the challenges, importance of feature weight determination, the need for NLP for coherence, and the evaluation methods for text summarization.
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data
9
--- paper_title: Streamlines for illustrative real-time rendering paper_content: Line drawing techniques are important methods to illustrate shapes. Existing feature line methods, e.g., suggestive contours, apparent ridges, or photic extremum lines, solely determine salient regions and illustrate them with separate lines. Hatching methods convey the shape by drawing a wealth of lines on the whole surface. Both approaches are often not sufficient for a faithful visualization of organic surface models, e.g., in biology or medicine. In this paper, we present a novel object-space line drawing algorithm that conveys the shape of such surface models in real-time. Our approach employs contour- and feature-based illustrative streamlines to convey surface shape (ConFIS). For every triangle, precise streamlines are calculated on the surface with a given curvature vector field. Salient regions are detected by determining maxima and minima of a scalar field. Compared with existing feature lines and hatching methods, ConFIS uses the advantages of both categories in an effective and flexible manner. We demonstrate this with different anatomical and artificial surface models. In addition, we conducted a qualitative evaluation of our technique to compare our results with exemplary feature line and hatching methods. --- paper_title: Visual Perception from a Computer Graphics Perspective paper_content: This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. --- paper_title: High-quality two-level volume rendering of segmented data sets on consumer graphics hardware paper_content: One of the most important goals in volume rendering is to be able to visually separate and selectively enable specific objects of interest contained in a single volumetric data set, which can be approached by using explicit segmentation information. We show how segmented data sets can be rendered interactively on current consumer graphics hardware with high image quality and pixel-resolution filtering of object boundaries. In order to enhance object perception, we employ different levels of object distinction. First, each object can be assigned an individual transfer function, multiple of which can be applied in a single rendering pass. Second, different rendering modes such as direct volume rendering, iso-surfacing, and non-photorealistic techniques can be selected for each object. A minimal number of rendering passes is achieved by processing sets of objects that share the same rendering mode in a single pass. Third, local compositing modes such as alpha blending and MIP can be selected for each object in addition to a single global mode, thus enabling high-quality two-level volume rendering on GPUs. --- paper_title: Attention and Visual Memory in Visualization and Computer Graphics paper_content: A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. 
An important consideration during visualization design is the role of human visual perception. How we "see” details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics. --- paper_title: Smart 3d visualizations in clinical applications paper_content: We discuss techniques for the visualization of medical volume data dedicated for their clinical use. We describe the need for rapid dynamic interaction facilities with such visualizations and discuss emphasis techniques in more detail. Another crucial aspect of medical visualization is the integration of 2d and 3d visualizations. In order to organize this discussion, we introduce 6 "Golden" rules for medical visualizations. --- paper_title: Contextual Anatomic Mimesis Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality paper_content: The need to improve medical diagnosis and reduce invasive surgery is dependent upon seeing into a living human system. The use of diverse types of medical imaging and endoscopic instruments has provided significant breakthroughs, but not without limiting the surgeon's natural, intuitive and direct 3D perception into the human body. This paper presents a method for the use of augmented reality (AR) for the convergence of improved perception of 3D medical imaging data (mimesis) in context to the patient's own anatomy (in-situ) incorporating the physician's intuitive multi- sensory interaction and integrating direct manipulation with endoscopic instruments. Transparency of the video images recorded by the color cameras of a video see-through, stereoscopic head- mounted-display (HMD) is adjusted according to the position and line of sight of the observer, the shape of the patient's skin and the location of the instrument. The modified video image of the real scene is then blended with the previously rendered virtual anatomy. The effectiveness has been demonstrated in a series of experiments at the Chirurgische Klinik in Munich, Germany with cadaver and in-vivo studies. The results can be applied for designing medical AR training and educational applications. --- paper_title: Exploration of 4D MRI Blood Flow using Stylistic Visualization paper_content: Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. 
The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters. --- paper_title: Information Visualization: Perception for Design paper_content: Most designers know that yellow text presented against a blue background reads clearly and easily, but how many can explain why, and what really are the best ways to help others and ourselves clearly see key patterns in a bunch of data? When we use software, access a website, or view business or scientific graphics, our understanding is greatly enhanced or impeded by the way the information is presented. This book explores the art and science of why we see objects the way we do. Based on the science of perception and vision, the author presents the key principles at work for a wide range of applications--resulting in visualization of improved clarity, utility, and persuasiveness. The book offers practical guidelines that can be applied by anyone: interaction designers, graphic designers of all kinds (including web designers), data miners, and financial analysts. It is a complete update of the recognized source in industry, research, and academia for applicable guidance on information visualization. It includes the latest research and state-of-the-art information on multimedia presentation, more than 160 explicit design guidelines based on vision science, and a new final chapter that explains the process of visual thinking and how visualizations help us to think about problems. It is packed with over 400 informative full-color illustrations, which are key to understanding the subject. Table of contents: Chapter 1. Foundations for an Applied Science of Data Visualization; Chapter 2. The Environment, Optics, Resolution, and the Display; Chapter 3. Lightness, Brightness, Contrast and Constancy; Chapter 4. Color; Chapter 5. Visual Salience and Finding Information; Chapter 6. Static and Moving Patterns; Chapter 7. Space Perception; Chapter 8. Visual Objects and Data Objects; Chapter 9. Images, Narrative, and Gestures for Explanation; Chapter 10. Interacting with Visualizations; Chapter 11. Visual Thinking Processes. --- paper_title: The haloed line effect for hidden line elimination. paper_content: The haloed line effect is a technique in which, when a line in three-dimensional space passes in front of another line, a gap is produced in the projection of the more distant line. The gap is produced as if an opaque halo surrounded the closer line. This method for approximate hidden-line-elimination is advantageous because explicit surface equations are not necessary. The relative depth of lines, axes, curves and lettering is easily perceived. This technique is especially suitable for the display of finite element grids, three-dimensional contour maps and ruled surfaces. When the lines or curves on a surface are closer than the gap size, the gaps produced close up to produce a complete hidden-line-elimination.
A simple but efficient implementation is described which can be used in the rendering of a variety of three-dimensional situations. --- paper_title: Depth-Dependent Halos: Illustrative Rendering of Dense Line Data paper_content: We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work. --- paper_title: Visual Perception from a Computer Graphics Perspective paper_content: This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. --- paper_title: Enhancing Depth-Perception with Flexible Volumetric Halos paper_content: Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates. --- paper_title: The partial-occlusion effect: utilizing semitransparency in 3D human-computer interaction paper_content: This study investigates human performance when using semitransparent tools in interactive 3D computer graphics environments. The article briefly reviews techniques for presenting depth information and examples of applying semitransparency in computer interface design. 
We hypothesize that when the user moves a semitransparent surface in a 3D environment, the “partial-occlusion” effect introduced through semitransparency acts as an effective cue in target localization—an essential component in many 3D interaction tasks. This hypothesis was tested in an experiment in which subjects were asked to capture dynamic targets (virtual fish) with two versions of a 3D box cursor, one with and one without semitransparent surfaces. Results showed that the partial-occlusion effect through semitransparency significantly improved users' performance in terms of trial completion time, error rate, and error magnitude in both monoscopic and stereoscopic displays. Subjective evaluations supported the conclusions drawn from performance measures. The experimental results and their implications are discussed, with emphasis on the relative, discrete nature of the partial-occlusion effect and on interactions between different depth cues. The article concludes with proposals of a few future research issues and applications of semitransparency in human-computer interaction. --- paper_title: Vicinity shading for enhanced perception of volumetric data paper_content: This paper presents a shading model for volumetric data which enhances the perception of surfaces within the volume. The model incorporates uniform diffuse illumination, which arrives equally from all directions at each surface point in the volume. This illumination is attenuated by occlusions in the local vicinity of the surface point, resulting in shadows in depressions and crevices. Experiments by other authors have shown that perception of a surface is superior under uniform diffuse lighting, compared to illumination from point source lighting. --- paper_title: Information limitations in perception of shape from texture paper_content: Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). 
Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations. © 2001 Elsevier Science Ltd. All rights reserved. --- paper_title: Conveying shape with texture: experimental investigations of texture's effects on shape categorization judgments paper_content: We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. --- paper_title: Parts of Visual Objects: An Experimental Test of the Minima Rule paper_content: Three experiments were conducted to test Hoffman and Richards's (1984) hypothesis that, for purposes of visual recognition, the human visual system divides three-dimensional shapes into parts at negative minima of curvature. 
In the first two experiments, subjects observed a simulated object (surface of revolution) rotating about a vertical axis, followed by a display of four alternative parts. They were asked to select a part that was from the object. Two of the four parts were divided at negative minima of curvature and two at positive maxima. When both a minima part and a maxima part from the object were presented on each trial (experiment 1), most of the correct responses were minima parts (101 versus 55). When only one part from the object—either a minima part or a maxima part—was shown on each trial (experiment 2), accuracy on trials with correct minima parts and correct maxima parts did not differ significantly. However, some subjects indicated that they reversed figure and ground, thereby changing ... --- paper_title: Illustrating transparent surfaces with curvature-directed strokes paper_content: Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. --- paper_title: Real-world illumination and the perception of surface reflectance properties paper_content: Under typical viewing conditions, we find it easy to distinguish between different materials, such as metal, plastic, and paper. Recognizing materials from their surface reflectance properties (such as lightness and gloss) is a nontrivial accomplishment because of confounding effects of illumination. However, if subjects have tacit knowledge of the statistics of illumination encountered in the real world, then it is possible to reject unlikely image interpretations, and thus to estimate surface reflectance even when the precise illumination is unknown. A surface reflectance matching task was used to measure the accuracy of human surface reflectance estimation. The results of the matching task demonstrate that subjects can match surface reflectance properties reliably and accurately in the absence of context, as long as the illumination is realistic. Matching performance declines when the illumination statistics are not representative of the real world. Together these findings suggest that subjects do use stored assumptions about the statistics of real-world illumination to estimate surface reflectance. 
Systematic manipulations of pixel and wavelet properties of illuminations reveal that the visual system’s assumptions about illumination are of intermediate complexity (e.g., presence of edges and bright light sources), rather than of high complexity (e.g., presence of recognizable objects in the environment). --- paper_title: Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey paper_content: This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. --- paper_title: Receptive fields and functional architecture of monkey striate cortex paper_content: 1. The striate cortex was studied in lightly anaesthetized macaque and spider monkeys by recording extracellularly from single units and stimulating the retinas with spots or patterns of light. Most cells can be categorized as simple, complex, or hypercomplex, with response properties very similar to those previously described in the cat. On the average, however, receptive fields are smaller, and there is a greater sensitivity to changes in stimulus orientation. A small proportion of the cells are colour coded.2. Evidence is presented for at least two independent systems of columns extending vertically from surface to white matter. Columns of the first type contain cells with common receptive-field orientations. They are similar to the orientation columns described in the cat, but are probably smaller in cross-sectional area. In the second system cells are aggregated into columns according to eye preference. The ocular dominance columns are larger than the orientation columns, and the two sets of boundaries seem to be independent.3. There is a tendency for cells to be grouped according to symmetry of responses to movement; in some regions the cells respond equally well to the two opposite directions of movement of a line, but other regions contain a mixture of cells favouring one direction and cells favouring the other.4. A horizontal organization corresponding to the cortical layering can also be discerned. The upper layers (II and the upper two-thirds of III) contain complex and hypercomplex cells, but simple cells are virtually absent. The cells are mostly binocularly driven. Simple cells are found deep in layer III, and in IV A and IV B. In layer IV B they form a large proportion of the population, whereas complex cells are rare. In layers IV A and IV B one finds units lacking orientation specificity; it is not clear whether these are cell bodies or axons of geniculate cells. In layer IV most cells are driven by one eye only; this layer consists of a mosaic with cells of some regions responding to one eye only, those of other regions responding to the other eye. Layers V and VI contain mostly complex and hypercomplex cells, binocularly driven.5. 
The cortex is seen as a system organized vertically and horizontally in entirely different ways. In the vertical system (in which cells lying along a vertical line in the cortex have common features) stimulus dimensions such as retinal position, line orientation, ocular dominance, and perhaps directionality of movement, are mapped in sets of superimposed but independent mosaics. The horizontal system segregates cells in layers by hierarchical orders, the lowest orders (simple cells monocularly driven) located in and near layer IV, the higher orders in the upper and lower layers. --- paper_title: Visual Perception from a Computer Graphics Perspective paper_content: This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. --- paper_title: View Direction, Surface Orientation and Texture Orientation for Perception of Surface Shape paper_content: Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. --- paper_title: Observer biases in the 3D interpretation of line drawings paper_content: Line drawings produced by contours traced on a surface can produce a vivid impression of the surface shape. The stability of this perception is notable considering that the information provided by the surface contours is quite ambiguous. We have studied the stability of line drawing perception from psychophysical and computational standpoints. For a given family of simple line drawings, human observers could perceive the drawings as depicting either an elliptic (egg-shaped) or hyperbolic (saddle-shaped) smooth surface patch. Rotation of the image along the line of sight and change in aspect ratio of the line drawing could bias the observer toward either interpretation. The results were modeled by a simple Bayesian observer that computes the probability to choose either interpretation given the information in the image and prior preferences. 
The model’s decision rule is noncommitting: for a given input image its responses are still probabilistic, reflecting variability in the modeled observers’ judgements. A good fit to the data was obtained when three observer assumptions were introduced: a preference for convex surfaces, a preference for surface contours aligned with the principal lines of curvature, and a preference for a surface orientation consistent with an object viewed from above. We discuss how these assumptions might reflect regularities of the visual world. © 1998 Elsevier Science Ltd. All rights reserved. --- paper_title: Minimodularity and the perception of layout paper_content: In natural vision, information overspecifies the relative distances between objects and their layout in three dimensions. Directed perception applies (Cutting, 1986), rather than direct or indirect perception, because any single source of information (or cue) might be adequate to reveal relative depth (or local depth order), but many are present and useful to observers. Such overspecification presents the theoretical problem of how perceivers use this multiplicity of information to arrive at a unitary appreciation of distance between objects in the environment. This article examines three models of directed perception: selection, in which only one source of information is used; addition, in which all sources are used in simple combination; and multiplication, in which interactions among sources can occur. To establish perceptual overspecification, we created stimuli with four possible sources of monocular spatial information, using all combinations of the presence or absence of relative size, height in the projection plane, occlusion, and motion parallax. Visual stimuli were computer generated and consisted of three untextured parallel planes arranged in depth. Three tasks were used: one of magnitude estimation of exocentric distance within a stimulus, one of dissimilarity judgment in how a pair of stimuli revealed depth, and one of choice judgment within a pair as to which one revealed depth best. Grouped and individual results of the one direct and two indirect scaling tasks suggest that perceivers use these sources of information in an additive fashion. That is, one source (or cue) is generally substitutable for another, and the more sources that are present, the more depth is revealed. This pattern of results suggests independent use of information by four separate, functional subsystems within the visual system, here called minimodules. Evidence for and advantages of minimodularity are discussed. --- paper_title: Enhancing transparent skin surfaces with ridge and valley lines paper_content: There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in such a way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines.
We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. --- paper_title: Perceiving spatial relationships in computer-generated images paper_content: The sources of visual information that must be present to correctly interpret spatial relations in images, the relative importance of different visual information sources with regard to metric judgments of spatial relations in images, and the ways that the task in which the images are used affect the visual information's usefulness are discussed. Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented. Three experiments in which the influence of pictorial cues on perceived spatial relations in computer-generated images was assessed are discussed. Each experiment examined the accuracy with which subjects matched the position, orientation, and size of a test object with a standard by interactively translating, rotating, and scaling the test object. --- paper_title: Perceptually-Based Depth-Ordering Enhancement for Direct Volume Rendering paper_content: Visualizing complex volume data usually renders selected parts of the volume semitransparently to see inner structures of the volume or provide a context. This presents a challenge for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception. Along with other limitations, these methods introduce redundant information and require additional overhead. This paper presents a new approach to enhancing depth-ordering perception of volume rendered images without using additional visual cues. We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by the function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. The experimental results demonstrate the usefulness and effectiveness of our approach. --- paper_title: About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering paper_content: In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models.
After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings. --- paper_title: Automatic Lighting Design using a Perceptual Quality Metric paper_content: Lighting has a crucial impact on the appearance of 3D objects and on the ability of an image to communicate information about a 3D scene to a human observer. This paper presents a new automatic lighting design approach for comprehensible rendering of 3D objects. Given a geometric model of a 3D object or scene, the material properties of the surfaces in the model, and the desired viewing parameters, our approach automatically determines the values of various lighting parameters by optimizing a perception-based image quality objective function. This objective function is designed to quantify the extent to which an image of a 3D scene succeeds in communicating scene information, such as the 3D shapes of the objects, fine geometric details, and the spatial relationships between the objects. Our results demonstrate that the proposed approach is an effective lighting design tool, suitable for users without expertise or knowledge in visual perception or in lighting design. --- paper_title: Volume illustration: nonphotorealistic rendering of volume models paper_content: Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. --- paper_title: Visualizing 3D Flow paper_content: We discuss volume line integral convolution (LIC) techniques for effectively visualizing 3D flow, including using visibility-impeding halos and efficient asymmetric filter kernels. Specifically, we suggest techniques for selectively emphasizing critical regions of interest in a flow; facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines; efficiently incorporating an indication of orientation into a flow representation; and conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations. 
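Note: several of the flow-visualization entries above build on line integral convolution (LIC). As a point of reference, the following is a minimal 2D LIC sketch in Python/NumPy; it is not code from any of the cited papers, the vortex field, kernel length, and step size are arbitrary illustrative choices, and the visibility-impeding halos and asymmetric filter kernels discussed in the volume-LIC work are not modeled here.

    import numpy as np

    def lic_2d(vx, vy, noise, kernel_len=31, step=0.5):
        # Basic 2D line integral convolution: for every pixel, trace a short
        # streamline forward and backward through the field (vx, vy) and
        # average the values of a noise texture along it.
        h, w = noise.shape
        out = np.zeros((h, w), dtype=float)
        half = kernel_len // 2
        for y in range(h):
            for x in range(w):
                acc, cnt = 0.0, 0
                for sign in (1.0, -1.0):              # forward, then backward
                    px, py = float(x), float(y)
                    for _ in range(half):
                        ix, iy = int(round(px)), int(round(py))
                        if not (0 <= ix < w and 0 <= iy < h):
                            break
                        acc += noise[iy, ix]
                        cnt += 1
                        dx, dy = vx[iy, ix], vy[iy, ix]
                        norm = (dx * dx + dy * dy) ** 0.5
                        if norm < 1e-9:
                            break
                        px += sign * step * dx / norm
                        py += sign * step * dy / norm
                out[y, x] = acc / max(cnt, 1)
        return out

    # Toy example: white noise convolved along a counter-clockwise vortex.
    n = 128
    ys, xs = np.mgrid[0:n, 0:n].astype(float) - n / 2.0
    vx, vy = -ys, xs
    noise = np.random.default_rng(0).random((n, n))
    image = lic_2d(vx, vy, noise)   # streak pattern following the flow

In the volume setting of the cited work, the same averaging is performed along 3D streamlines through a noise volume, and halo regions around each streak suppress contributions from deeper, overlapping streaks.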
--- paper_title: Information limitations in perception of shape from texture paper_content: Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations. © 2001 Elsevier Science Ltd. All rights reserved. --- paper_title: Ridge-valley lines on meshes via implicit surface fitting paper_content: We propose a simple and effective method for detecting view-and scale-independent ridge-valley lines defined via first- and second-order curvature derivatives on shapes approximated by dense triangle meshes. A high-quality estimation of high-order surface derivatives is achieved by combining multi-level implicit surface fitting and finite difference approximations. We demonstrate that the ridges and valleys are geometrically and perceptually salient surface features, and, therefore, can be potentially used for shape recognition, coding, and quality evaluation purposes. --- paper_title: Conveying shape with texture: experimental investigations of texture's effects on shape categorization judgments paper_content: We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? 
We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. --- paper_title: Illustrating transparent surfaces with curvature-directed strokes paper_content: Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. 
The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. --- paper_title: Apparent ridges for line drawing paper_content: Three-dimensional shape can be drawn using a variety of feature lines, but none of the current definitions alone seem to capture all visually-relevant lines. We introduce a new definition of feature lines based on two perceptual observations. First, human perception is sensitive to the variation of shading, and since shape perception is little affected by lighting and reflectance modification, we should focus on normal variation. Second, view-dependent lines better convey smooth surfaces. From this we define view-dependent curvature as the variation of the surface normal with respect to a viewing screen plane, and apparent ridges as the loci of points that maximize a view-dependent curvature. We present a formal definition of apparent ridges and an algorithm to render line drawings of 3D meshes. We show that our apparent ridges encompass or enhance aspects of several other feature lines. --- paper_title: Using NPR to evaluate perceptual shape cues in dynamic environments paper_content: We present a psychophysical experiment to determine the effectiveness of perceptual shape cues for rigidly moving objects in an interactive, highly dynamic task. We use standard non-photorealistic (NPR) techniques to carefully separate and study shape cues common to many rendering systems. Our experiment is simple to implement, engaging and intuitive for participants, and sensitive enough to detect significant differences between individual shape cues. We demonstrate our experimental design with a user study. In that study, participants are shown 16 moving objects, 4 of which are designated targets, rendered in different shape-from-X styles. Participants select targets projected onto a touch-sensitive table. We find that simple Lambertian shading offers the best shape cue in our user study, followed by contours and, lastly, texturing. Further results indicate that multiple shape cues should be used with care, as these may not behave additively. --- paper_title: Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey paper_content: This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. --- paper_title: Comprehensible rendering of 3-D shapes paper_content: We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. 
All of them are realized with 2-D image processing operations instead of line tracking processes, so that they can be efficiently combined with conventional surface rendering algorithms. Data about the geometric properties of the surfaces are preserved as Geometric Buffers (G-buffers). Each G-buffer contains one geometric property such as the depth or the normal vector of each pixel. By using G-buffers as intermediate results, artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses. This permits a user to rapidly examine various combinations of enhancement techniques without excessive recomputation, and easily obtain the most comprehensible image. Our method can be widely applied for various purposes. Several of these, edge enhancement, line drawing illustrations, topographical maps, medical imaging, and surface analysis, are presented in this paper. --- paper_title: The Guild handbook of scientific illustration paper_content: Authors and Editors. Acknowledgements. Introduction. PART I. BASICS. Generalized Steps. Studio Basics. Archival Considerations. Light on Form. PART II. RENDERING TECHNIQUES. Line and Ink. Pencil. Carbon Dust. Watercolor and Wash. Gouache and Acrylics. Airbrush. Murals and Dioramas. Model Building. Introduction to Computer Graphics. From 2-D to 3-D. PART III. SUBJECT MATTER. Illustrating Molecules. Illustrating Earth Sciences. Illustrating Astronomy. Illustrating Plants. Illustrating Fossils. Illustrating Invertebrates. Illustrating Fishes. Illustrating Amphibians and Reptiles. Illustrating Birds. Illustrating Mammals. Illustrating Animals in Their Habitats. Illustrating Humans and Their Artifacts. Illustrating Medical Subjects. PART IV. BEYOND BASICS. Using the Microscope. Charts and Diagrams. Cartography for the Scientific Illustrator. Copy Photography. The Printing Process. PART V. THE BUSINESS OF SCIENTIFIC ILLUSTRATION. Copyright. Contracts. Operating a Freelance Business. Index of Illustrators. Index. About the Editors. --- paper_title: Strategies for effectively visualizing 3D flow with volume LIC paper_content: This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. --- paper_title: Conveying the 3d shape of smoothly curving transparent surfaces via texture paper_content: Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived.
This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. --- paper_title: Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning paper_content: We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate. --- paper_title: View Direction, Surface Orientation and Texture Orientation for Perception of Surface Shape paper_content: Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. --- paper_title: Volume illustration: nonphotorealistic rendering of volume models paper_content: Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. 
Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. --- paper_title: The Chromostereoscopic Process: A Novel Single Image Stereoscopic Process paper_content: A novel stereoscopic depth encoding/decoding process has been developed which considerably simplifies the creation and presentation of stereoscopic images in a wide range of display media. The patented chromostereoscopic process is unique because the encoding of depth information is accomplished in a single image. The depth encoded image can be viewed with the unaided eye as a normal two dimensional image. The image attains the appearance of depth, however, when viewed by means of the inexpensive and compact depth decoding passive optical system. The process is compatible with photographic, printed, video, slide projected, computer graphic, and laser generated color images. The range of perceived depth in a given image can be selected by the viewer through the use of "tunable depth" decoding optics, allowing infinite and smooth tuning from exaggerated normal depth through zero depth to exaggerated inverse depth. The process is insensitive to the head position of the viewer. Depth encoding is accomplished by mapping the desired perceived depth of an image component into spectral color. Depth decoding is performed by an optical system which shifts the spatial positions of the colors in the image to create left and right views. The process is particularly well suited to the creation of stereoscopic laser shows. Other applications are also being pursued. --- paper_title: Enhancing transparent skin surfaces with ridge and valley lines paper_content: There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. 
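Note: the chromostereoscopic entry above encodes desired perceived depth as spectral color, which the decoding optics then shift into left and right views. Below is a minimal sketch of the encoding step only, assuming a normalized depth buffer; the red-to-blue hue ramp and the use of matplotlib's HSV conversion are illustrative choices and are not taken from the cited work.

    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def depth_to_spectral_rgb(depth):
        # Map a normalized depth buffer (0 = near, 1 = far) to spectral hues:
        # near structures become red, far structures blue, so that prism-based
        # decoding glasses displace them into a stereoscopic depth ordering.
        d = np.clip(np.asarray(depth, dtype=float), 0.0, 1.0)
        hue = d * (2.0 / 3.0)                      # 0.0 = red ... 2/3 = blue
        hsv = np.stack([hue, np.ones_like(d), np.ones_like(d)], axis=-1)
        return hsv_to_rgb(hsv)                     # H x W x 3 floats in [0, 1]

    # Example: a left-to-right depth ramp with a nearer disc in the centre.
    n = 256
    ys, xs = np.mgrid[0:n, 0:n] / float(n)
    depth = xs.copy()
    depth[(xs - 0.5) ** 2 + (ys - 0.5) ** 2 < 0.04] = 0.1
    rgb = depth_to_spectral_rgb(depth)

Because the encoding lives entirely in a single color image, the result degrades gracefully: without the decoding optics it is viewed as an ordinary pseudo-colored picture.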
--- paper_title: 3D Visualization of Vasculature: An Overview paper_content: A large variety of techniques has been developed to visualize vascular structures. These techniques differ in the necessary preprocessing effort, in the computational effort to create the visualizations, in the accuracy with respect to the underlying image data and in the visual quality of the result. In this overview, we compare 3D visualization methods and discuss their applicability for diagnosis, therapy planning and educational purposes. We consider direct volume rendering as well as surface rendering. --- paper_title: Conveying the 3d shape of smoothly curving transparent surfaces via texture paper_content: Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. --- paper_title: Illustrative Visualization of Vascular Models for Static 2D Representations paper_content: Depth assessment of 3D vascular models visualized on 2D displays is often difficult, especially in complex workspace conditions such as in the operating room. To address these limitations, we propose a new visualization technique for 3D vascular models. Our technique is tailored to static monoscopic 2D representations, as they are often used during surgery. To improve depth assessment, we propose a combination of supporting lines, view-aligned quads, and illustrative shadows. In addition, a hatching scheme that uses different line styles depending on a distance measure is applied to encode vascular shape as well as the distance to tumors. The resulting visualization can be displayed on monoscopic 2D monitors and on 2D printouts without the requirement to use color or intensity gradients. A qualitative study with 15 participants and a quantitative study with 50 participants confirm that the proposed visualization technique significantly improves depth assessment of complex 3D vascular models. 
--- paper_title: Depth-enhanced maximum intensity projection paper_content: The two most common methods for the visualization of volumetric data are Direct Volume Rendering (DVR) and Maximum Intensity Projection (MIP). Direct Volume Rendering is superior to MIP in providing a larger amount of properly shaded details, because it employs a more complex shading model together with the use of user-defined transfer functions. However, the generation of adequate transfer functions is a laborious and time costly task, even for expert users. As a consequence, medical doctors often use MIP because it does not require the definition of complex transfer functions and because it gives good results on contrasted images. Unfortunately, MIP does not allow to perceive depth ordering and therefore spatial context is lost. In this paper we present a new approach to MIP rendering that uses depth and simple color blending to disambiguate the ordering of internal structures, while maintaining most of the details visible through MIP. It is usually faster than DVR and only requires the transfer function used by MIP rendering. --- paper_title: Adapted Surface Visualization of Cerebral Aneurysms with Embedded Blood Flow Information paper_content: Cerebral aneurysms are a vascular dilatation induced by a pathological change of the vessel wall and often require treatment to avoid rupture. Therefore, it is of main interest, to estimate the risk of rupture, to gain a deeper understanding of aneurysm genesis, and to plan an actual intervention, the surface morphology and the internal blood flow characteristics. Visual exploration is primarily used to understand such complex and variable type of data. Since the blood flow data is strongly influenced by the surrounding vessel morphology both have to be visually combined to efficiently support visual exploration. Since the flow is spatially embedded in the surrounding aneurysm surface, occlusion problems have to be tackled. Thereby, a meaningful visual reduction of the aneurysm surface that still provides morphological hints is necessary. We accomplish this by applying an adapted illustrative rendering style to the aneurysm surface. Our contribution lies in the combination and adaption of several rendering styles, which allow us to reduce the problem of occlusion and avoid most of the disadvantages of the traditional semi-transparent surface rendering, like ambiguities in perception of spatial relationships. In interviews with domain experts, we derived visual requirements. Later, we conducted an initial survey with 40 participants (13 medical experts of them), which leads to further improvements of our approach. --- paper_title: Adaptive Surface Visualization of Vessels with Animated Blood Flow paper_content: The investigation of hemodynamic information for the assessment of cardiovascular diseases CVDs gained importance in recent years. Improved flow measuring modalities and computational fluid dynamics CFD simulations yield in reliable blood flow information. For a visual exploration of the flow information, domain experts are used to investigate the flow information combined with its enclosed vessel anatomy. Since the flow is spatially embedded in the surrounding vessel surface, occlusion problems have to be resolved. A visual reduction of the vessel surface that still provides important anatomical features is required. We accomplish this by applying an adaptive surface visualization inspired by the suggestive contour measure. 
Furthermore, an illustration is employed to highlight the animated pathlines and to emphasize nearby surface regions. Our approach combines several visualization techniques to improve the perception of surface shape and depth. Thereby, we ensure appropriate visibility of the embedded flow information, which can be depicted with established or advanced flow visualization techniques. We apply our approach to cerebral aneurysms and aortas with simulated and measured blood flow. Informal user feedback from nine domain experts confirms the advantages of our approach compared with existing methods, e.g., semi-transparent surface rendering. Additionally, we assessed the applicability and usefulness of the pathline animation combined with the highlighting of nearby surface regions. --- paper_title: Toward a Perceptual Theory of Flow Visualization paper_content: Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of the effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science, will produce better science and better design in that empirically and theoretically validated visual display techniques will result. --- paper_title: Retro-rendering with Vector-Valued Light: Producing Local Illumination from the Transport Equation paper_content: Many rendering algorithms can be understood as numerical solvers for the light-transport equation. Local illumination is probably the most widely implemented rendering algorithm: it is simple, fast, and encoded in 3D graphics hardware. It is not, however, derived as a solution to the light-transport equation. We show that the light-transport equation can be re-interpreted to produce local illumination by using vector-valued light and matrix-valued reflectance. This result fills an important gap in the theory of rendering. Using this framework, local and global illumination result from merely changing the values of parameters in the governing equation, permitting the equation and its algorithmic implementation to remain fixed. --- paper_title: Interactive volume rendering of thin thread structures within multivalued scientific data sets paper_content: We present a threads and halos representation for interactive volume rendering of vector-field structure and describe a number of additional components that combine to create effective visualizations of multivalued 3D scientific data. After filtering linear structures, such as flow lines, into a volume representation, we use a multilayer volume rendering approach to simultaneously display this derived volume along with other data values.
We demonstrate the utility of threads and halos in clarifying depth relationships within dense renderings and we present results from two scientific applications: visualization of second-order tensor valued magnetic resonance imaging (MRI) data and simulated 3D fluid flow data. In both application areas, the interactivity of the visualizations proved to be important to the domain scientists. Finally, we describe a PC-based implementation of our framework along with domain specific transfer functions, including an exploratory data culling tool, that enable fast data exploration. --- paper_title: Strategies for effectively visualizing 3D flow with volume LIC paper_content: This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. --- paper_title: Depth-Dependent Halos: Illustrative Rendering of Dense Line Data paper_content: We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work. --- paper_title: LineAO—Improved Three-Dimensional Line Rendering paper_content: Rendering large numbers of dense line bundles in three dimensions is a common need for many visualization techniques, including streamlines and fiber tractography. Unfortunately, depiction of spatial relations inside these line bundles is often difficult but critical for understanding the represented structures. Many approaches evolved for solving this problem by providing special illumination models or tube-like renderings. Although these methods improve spatial perception of individual lines or related sets of lines, they do not solve the problem for complex spatial relations between dense bundles of lines. In this paper, we present a novel approach that improves spatial and structural perception of line renderings by providing a novel ambient occlusion approach suited for line rendering in real time. 
--- paper_title: On the Tyranny of Hypothesis Testing in the Social Sciences paper_content: Gerd Gigerenzer, professor of psychology at the University of Salzburg (Austria), is coeditor, with L. Kruger, L.J. Daston, and M. Heidelberger, of The Probabilistic Revolution, Vol. 1: Ideas in History. •Zeno Swijtink is assistant professor f history and philosophy of science at Indiana University Bloomington. •Theodore Porter is associate professor of history at the University of Virginia (Charlottesville) and author of The Rise of Statistical Thinking. •Lorraine Daston is professor of the history of science at the University of Gottingen (Germany) and author of Classical Probability in the Enlightenment. •John Beatty is associate professor of the history of science and technology at the University of Minnesota (Minneapolis). •Lorenz Kruger is professor of philosophy at the University of Gottingen (Germany). •Geoffrey R. Loftus, professor of psychology at the University of Washington (Seattle), is coauthor, with E.F. Loftus of Essence of Statistics (2nd ed.). --- paper_title: The assumed light direction for perceiving shape from shading paper_content: Recovering 3D shape from shading is an ill-posed problem that the visual system can solve only by making use of additional information such as the position of the light source. Previous research has shown that people tend to assume light is above and slightly to the left of the object [Sun and Perona 1998]. We present a study to investigate whether the visual system also assumes the angle between the light direction and the viewing direction. We conducted a shape perception experiment in which subjects estimated surface orientation on smooth, virtual 3D shapes displayed monocularly using local Lambertian shading without cast shadows. We varied the angle between the viewing direction and the light direction within a range +/- 66 deg (above/below), and subjects indicated local surface orientation by rotating a gauge figure to appear normal to the surface [Koenderink et al. 1992]. Observer settings were more accurate and precise when the light was positioned above rather than below the viewpoint. Additionally, errors were minimized when the angle between the light direction and the viewing direction was 20--30 deg. Measurements of surface slant and tilt error support this result. These findings confirm the light-from-above prior and provide evidence that the angle between the viewing direction and the light direction is assumed to be 20--30 deg above the viewpoint. --- paper_title: Slant-tilt: The visual encoding of surface orientation paper_content: A specific form for the internal representation of local surface orientation is proposed, which is similar to Gibson's (1950) “amount and direction of slant”. Slant amount is usually quantifed by the angle σ between the surface normal and the line of sight (0°≦σ≦90°). Slant direction corresponds to the direction of the gradient of distance from the viewer to the surface, and may be defined by the image direction τ to which the surface normal would project (0°≦τ≦360°). Since the direction of slant is specified by the tilt of the projected surface normal, it is referred to as surface tilt (Stevens, 1979; Marr, 1982). The two degrees of freedom of orientation are therefore quantified by slant, an angle measured perpendicular to the image plane, and tilt, an angle measured in the image plane. 
The slant-tilt form provides several computational advantages relative to some other proposals and is consistent with various psychological phenomena. Slant might be encoded by various means, e.g. by the cosine of the angle, by the tangent, or linearly by the angle itself. Experimental results are reported that suggest that slant is encoded by an internal parameter that varies linearly with slant angle, with resolution of roughly one part in 100. Thus we propose that surface orientation is encoded in human vision by two quantities, one varying linearly with slant angle, the other varying linearly with tilt angle. --- paper_title: View Direction, Surface Orientation and Texture Orientation for Perception of Surface Shape paper_content: Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. --- paper_title: Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning paper_content: We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate. --- paper_title: Expertise Differences in the Comprehension of Visualizations: a Meta-Analysis of Eye-Tracking Research in Professional Domains paper_content: This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch's (Psychol Rev 102:211–245, 1995) theory of long-term working memory, Haider and Frensch's (J Exp Psychol Learn Mem Cognit 25:172–190, 1999) information-reduction hypothesis, and the holistic model of image perception of Kundel et al. (Radiology 242:396–402, 2007). Eye movement and performance data were cumulated from 819 experts, 187 intermediates, and 893 novices. In support of the evaluated theories, experts, when compared with non-experts, had shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas; experts also had longer saccades and shorter times to first fixate relevant information, owing to superiority in parafoveal processing and selective attention allocation.
Eye movements, reaction time, and performance accuracy were moderated by characteristics of visualization (dynamics, realism, dimensionality, modality, and text annotation), task (complexity, time-on-task, and task control), and domain (sports, medicine, transportation, other). These findings are discussed in terms of their implications for theories of visual expertise in professional domains and their significance for the design of learning environments. --- paper_title: Towards analyzing eye tracking data for evaluating interactive visualization systems paper_content: Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel visualization techniques beyond time and error measures. Up to now, most stimuli in eye tracking are either static stimuli or videos. Since interaction is an integral part of visualization, an evaluation should include interaction. In this paper, we present an extensive literature review on evaluation methods for interactive visualizations. Based on the literature review we propose ideas for analyzing eye movement data from interactive stimuli. This requires looking critically at challenges induced by interactive stimuli. The first step is to collect data using different study methods. In our case, we look at using eye tracking, interaction logs, and thinking-aloud protocols. In addition, this requires a thorough synchronization of the mentioned study methods. To analyze the collected data new analysis techniques have to be developed. We investigate existing approaches and how we can adapt them to new data types as well as sketch ideas how new approaches can look like. --- paper_title: Volume composition and evaluation using eye-tracking data paper_content: This article presents a method for automating rendering parameter selection to simplify tedious user interaction and improve the usability of visualization systems. Our approach acquires the important/interesting regions of a dataset through simple user interaction with an eye tracker. Based on this importance information, we automatically compute reasonable rendering parameters using a set of heuristic rules, which are adapted from visualization experience and psychophysical experiments. A user study has been conducted to evaluate these rendering parameters, and while the parameter selections for a specific visualization result are subjective, our approach provides good preliminary results for general users while allowing additional control adjustment. Furthermore, our system improves the interactivity of a visualization system by significantly reducing the required amount of parameter selections and providing good initial rendering parameters for newly acquired datasets of similar types. --- paper_title: Adapted Surface Visualization of Cerebral Aneurysms with Embedded Blood Flow Information paper_content: Cerebral aneurysms are a vascular dilatation induced by a pathological change of the vessel wall and often require treatment to avoid rupture. Therefore, it is of main interest, to estimate the risk of rupture, to gain a deeper understanding of aneurysm genesis, and to plan an actual intervention, the surface morphology and the internal blood flow characteristics. Visual exploration is primarily used to understand such complex and variable type of data. 
Since the blood flow data is strongly influenced by the surrounding vessel morphology both have to be visually combined to efficiently support visual exploration. Since the flow is spatially embedded in the surrounding aneurysm surface, occlusion problems have to be tackled. Thereby, a meaningful visual reduction of the aneurysm surface that still provides morphological hints is necessary. We accomplish this by applying an adapted illustrative rendering style to the aneurysm surface. Our contribution lies in the combination and adaption of several rendering styles, which allow us to reduce the problem of occlusion and avoid most of the disadvantages of the traditional semi-transparent surface rendering, like ambiguities in perception of spatial relationships. In interviews with domain experts, we derived visual requirements. Later, we conducted an initial survey with 40 participants (13 medical experts of them), which leads to further improvements of our approach. --- paper_title: Illustrative Visualization of Vascular Models for Static 2D Representations paper_content: Depth assessment of 3D vascular models visualized on 2D displays is often difficult, especially in complex workspace conditions such as in the operating room. To address these limitations, we propose a new visualization technique for 3D vascular models. Our technique is tailored to static monoscopic 2D representations, as they are often used during surgery. To improve depth assessment, we propose a combination of supporting lines, view-aligned quads, and illustrative shadows. In addition, a hatching scheme that uses different line styles depending on a distance measure is applied to encode vascular shape as well as the distance to tumors. The resulting visualization can be displayed on monoscopic 2D monitors and on 2D printouts without the requirement to use color or intensity gradients. A qualitative study with 15 participants and a quantitative study with 50 participants confirm that the proposed visualization technique significantly improves depth assessment of complex 3D vascular models. --- paper_title: Perceptually-Based Depth-Ordering Enhancement for Direct Volume Rendering paper_content: Visualizing complex volume data usually renders selected parts of the volume semitransparently to see inner structures of the volume or provide a context. This presents a challenge for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception. Along with other limitations, these methods introduce redundant information and require additional overhead. This paper presents a new approach to enhancing depth-ordering perception of volume rendered images without using additional visual cues. We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by the function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. The experimental results demonstrate the usefulness and effectiveness of our approach. 
--- paper_title: Specular reflections and the perception of shape paper_content: Cable conduit installation equipped with L-and T-connecting members which are provided with releasable means for sliding on and interengagement with the installation conduit and/or means for sliding on and underlapping or overlapping with the conduit covering. --- paper_title: Toward a Perceptual Theory of Flow Visualization paper_content: Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science will produce better science and better design in that empirically and theoretically validated visual display techniques will result. --- paper_title: Enhancing transparent skin surfaces with ridge and valley lines paper_content: There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. ---
Title: A Survey of Perceptually Motivated 3D Visualization of Medical Image Data
Section 1: Introduction
Description 1: Introduce the purpose and challenges of medical-image-data visualization, including different visualization techniques and the role of visual perception in improving these visualizations.
Section 2: Related Visual Perception Research
Description 2: Discuss visual perception research as it relates to medical visualization, with a focus on depth and shape perception.
Section 3: Depth Perception
Description 3: Explore the study of depth perception, classes of depth cues, and their application in stylization for medical visualization.
Section 4: Shape Perception
Description 4: Examine the complexity of visual perception of 3D shapes, including various mechanisms like shape-from-shading, shape-from-texture, and shape-from-silhouettes.
Section 5: Perceptually Motivated Medical Visualization
Description 5: Discuss selected examples of visualization techniques that have been applied to medical image data and are motivated by perceptual considerations, including volume visualization, vascular visualization, blood flow visualization, and fiber tract visualization.
Section 6: Perceptual Experiment Methods for 3D Visualization
Description 6: Provide a compact discussion of essential aspects of perceptual experiment methods for assessing shape and depth perception in medical visualization techniques.
Section 7: Evaluation of Perceptually Motivated Visualizations
Description 7: Summarize the evaluations performed to investigate the effectiveness of perceptually motivated visualization techniques, including methods, participants, tasks, and results.
Section 8: Preliminary Guidelines and Research Agenda
Description 8: Outline preliminary guidelines for medical visualization techniques and propose a research agenda for future studies in perceptually motivated visualization.
Section 9: Concluding Remarks
Description 9: Conclude the survey with remarks on the importance of visual perception research in improving medical visualization techniques and the need for more studies in this domain.
Operating System Verification — An Overview
14
--- paper_title: Report on a conference sponsored by the NATO Science Committee paper_content: A fluid energy mill having an inlet chamber, a classification chamber, and an upstack and downstack connecting opposite ends of the inlet chamber to the classification chamber. The inlet chamber, which is positioned angularly to the upstack and downstack, has a Venturi portion between the upstack and downstack. A feed inlet is also positioned angularly to the upstack as well as to the inlet chamber. This feed inlet also has a Venturi portion. A feed hopper is provided above the feed inlet and is in connection therewith. A pressure fluid nozzle is provided below the hopper to project the fluid and particles from the hopper through the Venturi in the feed inlet. A similar but opposed nozzle is provided below the downstack to project fluid through the Venturi in the inlet chamber toward the upstack. The fluid and particles propelled by each nozzle meet and impact below the upstack and the resultant fluid and particles pass upwardly through the upstack and centrifugally move through the mill where the lighter particles are centrifugally separated from the heavier particles and exhausted from the mill. --- paper_title: Mechanizing Proof: Computing, Risk, and Trust paper_content: Most aspects of our private and social lives---our safety, the integrity of the financial system, the functioning of utilities and other services, and national security---now depend on computing. But how can we know that this computing is trustworthy? In Mechanizing Proof, Donald MacKenzie addresses this key issue by investigating the interrelations of computing, risk, and mathematical proof over the last half century from the perspectives of history and sociology. His discussion draws on the technical literature of computer science and artificial intelligence and on extensive interviews with scientists and engineers. --- paper_title: Program verification: the very idea paper_content: The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility. --- paper_title: Ten commandments revisited: a ten-year perspective on the industrial application of formal methods paper_content: Ten years ago, our 1995 paper Ten Commandments of Formal Methods [5] suggested some guidelines to help ensure the success of a formal methods project. It proposed ten important requirements (or "commandments") for formal developers to consider and follow, based on our knowledge of several industrial application success stories, most of which have been reported in more detail in two books [17],[18]. The paper was surprisingly popular, is still widely referenced, and used as required reading in a number of formal methods courses. However, not all have agreed with some of our commandments, feeling that they may not be valid in the long-term. We re-examine the original commandments ten years on, and consider their validity in the light of a further decade of industrial best practice and experiences. --- paper_title: The B-Book: Assigning Programs to Meanings paper_content: Tribute Foreword Introduction Part I. Mathematics: 1. Mathematical reasoning 2. Set notation 3. Mathematical objects Part II. Abstract Machines: 4. 
Introduction to abstract machines 5. Formal definition of abstract machines 6. Theory of abstract machines 7. Constructing large abstract machines 8. Examples of abstract machines Part III. Programming: 9. Sequencing and loop 10. Programming examples Part IV. Refinement: 11. Refinement 12. Constructing large software systems 13. Examples of refinement Appendixes Index. --- paper_title: Towards Self-verification of HOL Light paper_content: The HOL Light prover is based on a logical kernel consisting of about 400 lines of mostly functional OCaml, whose complete formal verification seems to be quite feasible. We would like to formally verify (i) that the abstract HOL logic is indeed correct, and (ii) that the OCaml code does correctly implement this logic. We have performed a full verification of an imperfect but quite detailed model of the basic HOL Light core, without definitional mechanisms, and this verification is entirely conducted with respect to a set-theoretic semantics within HOL Light itself. We will duly explain why the obvious logical and pragmatic difficulties do not vitiate this approach, even though it looks impossible or useless at first sight. Extension to include definitional mechanisms seems straightforward enough, and the results so far allay most of our practical worries. --- paper_title: Computer-Aided Reasoning: An Approach paper_content: From the Publisher: Computer-Aided Reasoning: An Approach is a textbook introduction to computer-aided reasoning. It can be used in graduate and upper-division undergraduate courses on software engineering or formal methods. It is also suitable in conjunction with other books in courses on hardware design, discrete mathematics, or theory, especially courses stressing formalism, rigor, or mechanized support. It is also appropriate for courses on artificial intelligence or automated reasoning and as a reference for business and industry. Current hardware and software systems are often very complex and the trend is towards increased complexity. Many of these systems are of critical importance; therefore making sure that they behave as expected is also of critical importance. By modeling computing systems mathematically, we obtain models that we can prove behave correctly. The complexity of computing systems makes such proofs very long, complicated, and error-prone. To further increase confidence in our reasoning, we can use a computer program to check our proofs and even to automate some of their construction. In this book we present: a practical functional programming language closely related to Common Lisp, which is used to define functions (which can model computing systems) and to make assertions about defined functions; a formal logic in which defined functions correspond to axioms (the logic is first-order, includes induction, and allows us to prove theorems about the functions); and the computer-aided reasoning system ACL2, which includes the programming language, the logic, and mechanical support for the proof process. The ACL2 system has been successfully applied to projects of commercial interest, including microprocessor modeling, hardware verification, microcode verification, and software verification. This book gives a methodology for modeling computing systems formally and for reasoning about those models with mechanized assistance. The practicality of computer-aided reasoning is further demonstrated in the companion book, Computer-Aided Reasoning: ACL2 Case Studies.
Approximately 140 exercises are distributed throughout the book. Additional material is freely available from the ACL2 home page on the Web, http://www.cs.utexas.edu/users/moore/ac12, including solutions to the exercises, additional exercises, case studies from the companion book, research papers, and the ACL2 system with detailed documentation. Computer-Aided Reasoning: ACL2 Case Studies illustrates how the computer-aided reasoning system ACL2 can be used in productive and innovative ways to design, build, and maintain hardware and software systems. Included here are technical papers written by twenty-one contributors that report on self-contained case studies, some of which are sanitized industrial projects. The papers deal with a wide variety of ideas, including floating-point arithmetic, microprocessor simulation, model checking, symbolic trajectory evaluation, compilation, proof checking, real analysis, and several others. Computer-Aided Reasoning: ACL2 Case Studies is meant for two audiences: those looking for innovative ways to design, build, and maintain hardware and software systems faster and more reliably, and those wishing to learn how to do this. The former audience includes project managers and students in survey-oriented courses. The latter audience includes students and professionals pursuing rigorous approaches to hardware and software engineering or formal methods. Computer-Aided Reasoning: ACL2 Case Studies can be used in graduate and upper-division undergraduate courses on Software Engineering, Formal Methods, Hardware Design, Theory of Computation, Artificial Intelligence, and Automated Reasoning. The book is divided into two parts. Part I begins with a discussion of the effort involved in using ACL2. It also contains a brief introduction to the ACL2 logic and its mechanization, which is intended to give the reader sufficient background to read the case studies. A more thorough, textbook introduction to ACL2 may be found in the companion book, Computer-Aided Reasoning: An Approach. The heart of the book is Part II, where the case studies are presented. The case studies contain exercises whose solutions are on the Web. In addition, the complete ACL2 scripts necessary to formalize the models and prove all the properties discussed are on the Web. For example, when we say that one of the case studies formalizes a floating-point multiplier and proves it correct, we mean that not only can you read an English description of the model and how it was proved correct, but you can obtain the entire formal content of the project and replay the proofs, if you wish, with your copy of ACL2. ACL2 may be obtained from its home page, http://www.cs.utexas.edu/users/moore/ac12. The results reported in each case study, as ACL2 input scripts, as well as exercise solutions for both books, are available from this page. --- paper_title: Programming semantics for multiprogrammed computations paper_content: The semantics are defined for a number of meta-instructions which perform operations essential to the writing of programs in multiprogrammed computer systems. These meta-instructions relate to parallel processing, protecting of separate computations, program debugging, and the sharing among users of memory segments and other computing objects, the names of which are hierarchically structured. The language sophistication contemplated is midway between an assembly language and an advanced algebraic language.
--- paper_title: On the derivation of secure components paper_content: The author discusses the problems in deriving a system from its specification when that specification includes simple trace-based information-flow security properties as well as safety properties. He presents two fundamental theorems of information-flow security which describe the inherent difficulties of deriving secure implementations and considers the implications of these results. It is concluded that it is dangerous to extrapolate from success in the case of two to the case of many. Results proved about systems with just low- and high-access users may not extend easily to full lattices. > --- paper_title: Secure Computer Systems: Mathematical Foundations paper_content: This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. --- paper_title: Proving that programs eventually do something good paper_content: In recent years we have seen great progress made in the area of automatic source-level static analysis tools. However, most of today's program verification tools are limited to properties that guarantee the absence of bad events (safety properties). Until now no formal software analysis tool has provided fully automatic support for proving properties that ensure that good events eventually happen (liveness properties). In this paper we present such a tool, which handles liveness properties of large systems written in C. Liveness properties are described in an extension of the specification language used in the SDV system. We have used the tool to automatically prove critical liveness properties of Windows device drivers and found several previously unknown liveness bugs. --- paper_title: Thorough static analysis of device drivers paper_content: Bugs in kernel-level device drivers cause 85% of the system crashes in the Windows XP operating system [44]. One of the sources of these errors is the complexity of the Windows driver API itself: programmers must master a complex set of rules about how to use the driver API in order to create drivers that are good clients of the kernel. We have built a static analysis engine that finds API usage errors in C programs. The Static Driver Verifier tool (SDV) uses this engine to find kernel API usage errors in a driver. SDV includes models of the OS and the environment of the device driver, and over sixty API usage rules. SDV is intended to be used by driver developers "out of the box." Thus, it has stringent requirements: (1) complete automation with no input from the user; (2) a low rate of false errors. We discuss the techniques used in SDV to meet these requirements, and empirical results from running SDV on over one hundred Windows device drivers. --- paper_title: The Flask Security Architecture: System Support for Diverse Security Policies paper_content: Operating systems must be flexible in their support for security policies, providing sufficient mechanisms for supporting the wide variety of real-world security policies. Such flexibility requires controlling the propagation of access rights, enforcing fine-grained access rights and supporting the revocation of previously granted access rights. Previous systems are lacking in at least one of these areas. In this paper we present an operating system security architecture that solves these problems. 
Control over propagation is provided by ensuring that the security policy is consulted for every security decision. This control is achieved without significant performance degradation through the use of a security decision caching mechanism that ensures a consistent view of policy decisions. Both fine-grained access rights and revocation support are provided by mechanisms that are directly integrated into the service-providing components of the system. The architecture is described through its prototype implementation in the Flask microkernel-based operating system, and the policy flexibility of the prototype is evaluated. We present initial evidence that the architecture's impact on both performance and code complexity is modest. Moreover, our architecture is applicable to many other types of operating systems and environments. --- paper_title: Microkernels Meet Recursive Virtual Machines paper_content: This paper describes a novel approach to providing modular and extensible operating system functionality and encapsulated environments based on a synthesis of microkernel and virtual machine concepts. We have developed a software-based virtualizable architecture called Fluke that allows recursive virtual machines (virtual machines running on other virtual machines) to be implemented efficiently by a microkernel running on generic hardware. A complete virtual machine interface is provided at each level; efficiency derives from needing to implement only new functionality at each level. This infrastructure allows common OS functionality, such as process management, demand paging, fault tolerance, and debugging support, to be provided by cleanly modularized, independent, stackable virtual machine monitors, implemented as user processes. It can also provide uncommon or unique OS features, including the above features specialized for particular applications’ needs, virtual machines transparently distributed cross-node, or security monitors that allow arbitrary untrusted binaries to be executed safely. Our prototype implementation of this model indicates that it is practical to modularize operating systems this way. Some types of virtual machine layers impose almost no overhead at all, while others impose some overhead (typically 0–35%), but only on certain classes of applications. --- paper_title: TAME: Using PVS strategies for special-purpose theorem proving paper_content: TAME (Timed Automata Modeling Environment), an interface to the theorem proving system PVS, is designed for proving properties of three classes of automata: I/O automata, Lynch–Vaandrager timed automata, and SCR automata. TAME provides templates for specifying these automata, a set of auxiliary theories, and a set of specialized PVS strategies that rely on these theories and on the structure of automata defined using the templates. Use of the TAME strategies simplifies the process of proving automaton properties, particularly state and transition invariants. TAME provides two types of strategies: strategies for “automatic” proof and strategies designed to implement “natural” proof steps, i.e., proof steps that mimic the high-level steps in typical natural language proofs. TAME's “natural” proof steps can be used both to mechanically check hand proofs in a straightforward way and to create proof scripts that can be understood without executing them in the PVS proof checker. Several new PVS features can be used to obtain better control and efficiency in user-defined strategies such as those used in TAME.
This paper describes the TAME strategies, their use, and how their implementation exploits the structure of specifications and various PVS features. It also describes several features, currently unsupported in PVS, that would either allow additional “natural” proof steps in TAME or allow existing TAME proof steps to be improved. Lessons learned from TAME relevant to the development of similar specialized interfaces to PVS or other theorem provers are discussed. --- paper_title: Running the manual: an approach to high-assurance microkernel development paper_content: We propose a development methodology for designing and prototyping high assurance microkernels, and describe our application of it. The methodology is based on rapid prototyping and iterative refinement of the microkernel in a functional programming language. The prototype provides a precise semi-formal model, which is also combined with a machine simulator to form a reference implementation capable of executing real user-level software, to obtain accurate feedback on the suitability of the kernel API during development phases. We extract from the prototype a machine-checkable formal specification in higher-order logic, which may be used to verify properties of the design, and also results in corrections to the design without the need for full verification. We found the approach leads to productive, highly iterative development where formal modelling, semi-formal design and prototyping, and end use all contribute to a more mature final design in a shorter period of time. --- paper_title: Basing a Modeling Environment on a General Purpose Theorem Prover paper_content: Abstract : A general purpose theorem prover can be thought of as an extremely flexible modeling environment in which one can define and analyze almost any kind of model. A disadvantage to the full flexibility of a general purpose theorem prover is the lack of any guidance on how to construct a model and how then to apply the theorem prover to analyzing the model. In the general environment supplied by the prover, much time can be consumed in deciding how to specify a model and in interacting with and understanding feedback from the prover. However, specification templates, together with proof strategies whose design follows certain principles, can be used in many general purpose provers to create specialized modeling environments that address these difficulties. A specialized modeling environment created in this way can be further extended and/or further specialized by drawing on the underlying theorem prover for additional capabilities, and provides a means of integrating powerful theorem proving capabilities into existing software development environments by way of appropriate translation schemes. This paper will use TAME (Timed Automata Modeling Environment) to illustrate the creation, extension, and specialization of a modeling environment based on PVS, and its integration into several software development environments. --- paper_title: Certifying low-level programs with hardware interrupts and preemptive threads paper_content: Hardware interrupts are widely used in the world's critical software systems to support preemptive threads, device drivers, operating system kernels, and hypervisors. Handling interrupts properly is an essential component of low-level system programming. 
Unfortunately, interrupts are also extremely hard to reason about: they dramatically alter the program control flow and complicate the invariants in low-level concurrent code (e.g., implementation of synchronization primitives). Existing formal verification techniques---including Hoare logic, typed assembly language, concurrent separation logic, and the assume-guarantee method---have consistently ignored the issues of interrupts; this severely limits the applicability and power of today's program verification systems. In this paper we present a novel Hoare-logic-like framework for certifying low-level system programs involving both hardware interrupts and preemptive threads. We show that enabling and disabling interrupts can be formalized precisely using simple ownership-transfer semantics, and the same technique also extends to the concurrent setting. By carefully reasoning about the interaction among interrupt handlers, context switching, and synchronization libraries, we are able to---for the first time---successfully certify a preemptive thread implementation and a large number of common synchronization primitives. Our work provides a foundation for reasoning about interrupt-based kernel programs and makes an important advance toward building fully certified operating system kernels and hypervisors. --- paper_title: An Open Framework for Certified System Software paper_content: Certified software consists of a machine executable program plus a machine checkable proof showing that the software is free of bugs with respect to a particular specification. Constructing certified system software is an important step toward building a reliable and secure computing platform for future critical applications. In addition to the benefits from provably safe components, architectures of certified systems may also be simplified to achieve better efficiency. However, because system software consists of program modules that use many different computation features and span different abstraction levels, it is difficult to design a single type system or program logic to certify the whole system. As a result, significant amount of kernel code of today's operating systems has to be implemented in unsafe languages, despite recent progress on type-safe languages. ::: In this thesis, I develop a new methodology to solve this problem, which applies different verification systems to certify different program modules, and then links the certified modules in an open framework to compose the whole certified software package. Specifically, this thesis makes contributions in the following two aspects. ::: First, I develop new Hoare-style program logics to certify low-level programs with different features, such as sequential programs with stack-based control abstractions and multi-threaded programs with unbounded thread creations. A common scheme in the design of these logics is modularity. They all support modular verification in the sense that program modules, such as functions and threads, can be certified separately without looking into implementation details of others. ::: Second, I propose an open verification framework that enables interoperation between different verification systems (a.k.a. foreign systems). The framework is open in that it is not designed with a priori knowledge of foreign systems. It is general enough to incorporate both existing verification systems and new program logics presented in this thesis. It also supports modularity and proof reuse. 
Modules can be certified separately without knowing about implementation details of other modules and about the verification systems in which other modules are certified. Soundness of the framework guarantees that specifications of modules are respected after linking. --- paper_title: Formal specification and verification of data separation in a separation kernel for an embedded system paper_content: Although many algorithms, hardware designs, and security protocols have been formally verified, formal verification of the security of software is still rare. This is due in large part to the large size of software, which results in huge costs for verification. This paper describes a novel and practical approach to formally establishing the security of code. The approach begins with a well-defined set of security properties and, based on the properties, constructs a compact security model containing only information needed to rea-son about the properties. Our approach was formulated to provide evidence for a Common Criteria evaluation of an embedded soft-ware system which uses a separation kernel to enforce data separation. The paper describes 1) our approach to verifying the kernel code and 2) the artifacts used in the evaluation: a Top Level Specification (TLS) of the kernel behavior, a formal definition of dataseparation, a mechanized proof that the TLS enforces data separation, code annotated with pre- and postconditions and partitioned into three categories, and a formal demonstration that each category of code enforces data separation. Also presented is the formal argument that the code satisfies the TLS. --- paper_title: The existence of refinement mappings paper_content: Abstract Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite- state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S 1 to a higher-level one S 2 is a mapping from S 1 's state space to S 2 's state space. It maps steps of S 1 's state machine to steps of S 2 's state machine and maps behaviors allowed by S 1 to behaviors allowed by S 2 . We show that, under reasonable assumptions about the specification, if S 1 implements S 2 , then by adding auxiliary variables to S 1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method. --- paper_title: Applying Formal Methods to a Certifiably Secure Software System paper_content: A major problem in verifying the security of code is that the code's large size makes it much too costly to verify in its entirety. This paper describes a novel and practical approach to verifying the security of code which substantially reduces the cost of verification. In this approach, a compact security model containing only information needed to reason about the security properties of interest is constructed and the security properties are represented formally in terms of the model. To reduce the cost of verification, the code to be verified is partitioned into three categories and only the first category, which is less than 10 percent of the code in our application, requires formal verification. The proof of the other two categories is relatively trivial. 
Our approach was developed to support a common criteria evaluation of the separation kernel of an embedded software system. This paper describes 1) our techniques and theory for verifying the kernel code and 2) the artifacts produced, that is, a top-level specification (TLS), a formal statement of the security property, a mechanized proof that the TLS satisfies the property, the partitioning of the code, and a demonstration that the code conforms to the TLS. This paper also presents the formal basis for the argument that the kernel code conforms to the TLS and consequently satisfies the security property. --- paper_title: Computer-Aided Reasoning: An Approach paper_content: From the Publisher: ::: An Approach ::: Computer-Aided Reasoning: An Approach is a textbook introduction to computer-aided reasoning. It can be used in graduate and upper-division undergraduate courses on software engineering or formal methods. It is also suitable in conjunction with other books in courses on hardware design, discrete mathematics, or theory, especially courses stressing formalism, rigor, or mechanized support. It is also appropriate for courses on artificial intelligence or automated reasoning and as a reference for business and industry. ::: Current hardware and software systems are often very complex and the trend is towards increased complexity. Many of these systems are of critical importance; therefore making sure that they behave as expected is also of critical importance. By modeling computing systems mathematically, we obtain models that we can prove behave correctly. The complexity of computing systems makes such proofs very long, complicated, and error-prone. To further increase confidence in our reasoning, we can use a computer program to check our proofs and even to automate some of their construction. ::: In this book we present: ::: ::: A practical functional programming language closely related to Common Lisp which is used to define functions (which can model computing systems) and to make assertions about defined functions; A formal logic in which defined functions correspond to axioms; the logic is first-order, includes induction, and allows us to prove theorems about the functions; The computer-aided reasoning system ACL2, which includes the programming language, the logic, and mechanical support for the proof process. ::: ::: The ACL2 system hasbeen successfully applied to projects of commercial interest, including microprocessor, modeling, hardware verification, microcode verification, and software verification. This book gives a methodology for modeling computing systems formally and for reasoning about those models with mechanized assistance. The practicality of computer-aided reasoning is further demonstrated in the companion book, Computer-Aided Reasoning: ACL2 Case Studies. ::: Approximately 140 exercises are distributed throughout the book. Additional material is freely available from the ACL2 home page on the Web, http://www.cs.utexas.edu/users/moore/ac12, including solutions to the exercises, additional exercises, case studies from the companion book, research papers, and the ACL2 system with detailed documentation. ::: ::: ACL2 Case Studies ::: Computer-Aided Reasoning: ACL2 Case Studies illustrates how the computer-aided reasoning system ACL2 can be used in productive and innovative ways to design, build, and maintain hardware and software systems. 
Included here are technical papers written by twenty-one contributors that report on self-contained case studies, some of which are sanitized industrial projects. The papers deal with a wide variety of ideas, including floating-point arithmetic, microprocessor simulation, model checking, symbolic trajectory evaluation, compilation, proof checking, real analysis, and several others. ::: Computer-Aided Reasoning: ACL2 Case Studies is meant for two audiences: those looking for innovative ways to design, build, and maintain hardware and software systems faster and more reliably, and those wishing to learn how to do this. The former audience includes project managers and students in survey-oriented courses. The latter audience includes students and professionals pursuing rigorous approaches to hardware and software engineering or formal methods. Computer-Aided Reasoning: ACL2 Case Studies can be used in graduate and upper-division undergraduate courses on Software Engineering, Formal Methods, Hardware Design, Theory of Computation, Artificial Intelligence, and Automated Reasoning. ::: The book is divided into two parts. Part I begins with a discussion of the effort involved in using ACL2. It also contains a brief introduction to the ACL2 logic and its mechanization, which is intended to give the reader sufficient background to read the case studies. A more thorough, textbook introduction to ACL2 may be found in the companion book, Computer-Aided Reasoning: An Approach. ::: The heart of the book is Part II, where the case studies are presented. The case studies contain exercises whose solutions are on the Web. In addition, the complete ACL2 scripts necessary to formalize the models and prove all the properties discussed are on the Web. For example, when we say that one of the case studies formalizes a floating-point multiplier and proves it correct, we mean that not only can you read an English description of the model and how it was proved correct, but you can obtain the entire formal content of the project and replay the proofs, if you wish, with your copy of ACL2. ::: ACL2 may be obtained from its home page, http://www.cs.utexas.edu/users/moore/ac12. The results reported in each case study, as ACL2 input scripts, as well as exercise solutions for both books, are available from this page. --- paper_title: A robust machine code proof framework for highly secure applications paper_content: Security-critical applications at the highest Evaluation Assurance Levels (EAL) require formal proofs of correctness in order to achieve certification. To support secure application development at the highest EALs, we have developed techniques to largely automate the process of producing proofs of correctness of machine code. As part of the Secure, High-Assurance Development Environment program, we have produced in ACL2 an executable formal model of the Rockwell Collins AAMP7G microprocessor at the instruction set level, in order to facilitate proofs of correctness about that processor's machine code. The AAMP7G, currently in use in Rockwell Collins secure system products, supports strict time and space partitioning in hardware, and has received a U.S. National Security Agency (NSA) Multiple Independent Levels of Security (MILS) certificate based in part on a formal proof of correctness of its separation kernel microcode. 
Proofs of correctness of AAMP7G machine code are accomplished using the method of "compositional cutpoints", which requires neither traditional clock functions nor a Verification Condition Generator (VCG). In this paper, we will summarize the AAMP7G architecture, detail our ACL2 model of the processor, and describe our development of the compositional cutpoint method into a robust machine code proof framework. --- paper_title: UCLA data secure unix --- paper_title: A Technique for Proving Specifications are Multilevel Secure paper_content: This document describes a technique for verifying that a design for an operating system or subsystem expressed in terms of a formal specification is consistent with a particular model of multilevel security. The technique to be described is mathematically rigorous and, if applied properly, gives assurance that the given design is multilevel secure by this particular model. The technique is supported by a collection of automated tools. These tools assist the user in the performance of a large amount of detailed routine tasks that must be performed to apply the technique. In general, contemporary formal verification techniques such as the one described here involve a great deal of repetitive, detailed, uninteresting steps that are necessary to maintain the rigor of the proof process. The proofs are usually larger and more complex than the system being proved. --- paper_title: A model for verification of data security in operating systems paper_content: Program verification applied to kernel architectures forms a promising method for providing uncircumventably secure, shared computer systems. A precise definition of data security is developed here in terms of a general model for operating systems. This model is suitable as a basis for verifying many of those properties of an operating system which are necessary to assure reliable enforcement of security. The application of this approach to the UCLA secure operating system is also discussed. --- paper_title: Programming from specifications paper_content: Providing a thorough treatment of most elementary programme development techniques, this revised edition covers topics such as procedures, parameters, recursion and data refinement, with the integration of specification, development and coding, based on ordinary (classical) logic. This second edition features: substantial restructuring of earlier material, streamlining the introduction of programming language features; simplified presentation of procedures, parameters and recursion; an expanded chapter on data refinement, giving the much simpler laws that specialize to functional abstractions; a new chapter on recursive types (trees etc) and appropriate control structures; and, following the original concluding case study, two completely new ones: "the recursive treatment of the largest rectangle under a histogram", and a specification and extended development of an electronic mail system (including limited concurrency). --- paper_title: The foundations of a provably secure operating system (PSOS) paper_content: PSOS has been designed according to a set of formal techniques embodying the SRI Hierarchical Development Methodology (HDM).
HDM has been described elsewhere [1-3] and thus is only summarized here. The influence of HDM on the security of PSOS is also discussed elsewhere [4]. In addition, Linden [5] gives a general discussion of the impact of structured design techniques on the security of operating systems (including capability systems). --- paper_title: Proof techniques for hierarchically structured programs paper_content: A method for describing and structuring programs that simplifies proofs of their correctness is presented. The method formally represents a program in terms of levels of abstraction, each level of which can be described by a self-contained nonprocedural specification. The proofs, like the programs, are structured by levels. Although only manual proofs are described in the paper, the method is also applicable to semi-automatic and automatic proofs. Preliminary results are encouraging, indicating that the method can be applied to large programs, such as operating systems. --- paper_title: Extending the Noninterference Version of MLS for SAT paper_content: A noninterference formulation of MLS applicable to the Secure Ada® Target (SAT) Abstract Model is developed. An analogous formulation is developed to handle the SAT type enforcement policy. Unwinding theorems are presented for both MLS and Multidomain Security (MDS) and the SAT Abstract Model is shown to satisfy both MLS and MDS. Generalizations and extensions are also considered. --- paper_title: Kit and the short stack paper_content: Kit is a small multi-tasking operating system kernel written in the machine language of a uni-processor von Neumann computer. The kernel is proved to implement on this shared computer a fixed number of conceptually distributed communicating processes. In addition to implementing processes, the kernel provides the following verified services: process scheduling, error handling, message passing, and an interface to asynchronous devices. We summarize the Kit project in order to discuss the place Kit could occupy in the verified stack of system components containing Micro-Gypsy, Piton and FM8502. --- paper_title: Kit: A Study in Operating System Verification paper_content: The author reviews Kit, a small multitasking operating system kernel written in the machine language of a uniprocessor von Neumann computer. The kernel is proved to implement on this shared computer a fixed number of conceptually distributed communicating processes. In addition to implementing processes, the kernel provides the following verified services: process scheduling, error handling, message passing, and an interface to asynchronous devices. As a by-product of the correctness proof, security-related results such as the protection of the kernel from tasks and the inability of tasks to enter supervisor mode are proved. The problem is stated in the Boyer-Moore logic, and the proof is mechanically checked with the Boyer-Moore theorem prover. --- paper_title: A Verified Operating System Kernel paper_content: We present a multitasking operating system kernel, called KIT, written in the machine language of a uni-processor von Neumann computer. The kernel is proved to implement, on this shared computer, a fixed number of conceptually distributed communicating processes. In addition to implementing processes, the kernel provides the following verified services: process scheduling, error handling, message passing, and an interface to asynchronous devices.
The problem is stated in the Boyer-Moore logic, and the proof is mechanically checked with the Boyer-Moore theorem prover. --- paper_title: KSOS—The design of a secure operating system paper_content: This paper discusses the design of the Department of Defense (DoD) Kernelized Secure Operating System (KSOS, formerly called Secure UNIX). KSOS is intended to provide a provably secure operating system for larger minicomputers. KSOS will provide a system call interface closely compatible with the UNIX operating system. The initial implementation of KSOS will be on a Digital Equipment Corporation PDP-11/70 computer system. A group from Honeywell is also proceeding with an implementation for a modified version of the Honeywell Level 6 computer system. --- paper_title: Some Techniques for Proving Correctness of Programs which Alter Data Structures --- paper_title: Pragmatic nonblocking synchronization for real-time systems paper_content: We present a pragmatic methodology for designing nonblocking real-time systems. Our methodology uses a combination of lock-free and wait-free synchronization techniques and clearly states which technique should be applied in which situation. This paper reports novel results in various respects: We restrict the usage of lock-free mechanisms to cases where the widely available atomic single-word compare-and-swap operation suffices. We show how Brinch Hansen’s monitors (alias Java’s synchronized methods) can be implemented on top of our mechanisms, thereby demonstrating their versatility. We describe in detail how we used the mechanisms for a full reimplementation of a popular microkernel interface (L4). Our kernel—in contrast to the original implementation—bounds execution time of all operations. We report on a previous implementation of our mechanisms in which we used Massalin’s and Pu’s single-server approach, and on the resulting performance, which led us to abandon this well-known scheme. Our microkernel implementation is in daily use with a user-level Linux server running a large variety of applications. Hence, our system can be considered as more than just an academic prototype. Still, and despite its implementation in C++, it compares favorably with the original, highly optimized, non-real-time, assembly-language implementation. --- paper_title: The semantics of C++ data types: Towards verifying low-level system components paper_content: In order to formally reason about low-level system programs one needs a semantics (for the programming language in question) that can deal with programs that are not statically type-correct. For system-level programs, the semantics must deal with such heretical constructs like casting integers to pointers and converting pointers between incompatible base types. In this paper we describe a formal semantics for the data types of the C++ programming language that is suitable for low-level programs in the above sense. This work is part of a semantics for a large subset of the C++ programming language developed in the VFiasco project.
In the VFiasco project we aim at the verification of substantial properties of the Fiasco microkernel, which is written in C++. --- paper_title: The Formal Semantics of Programming Languages --- paper_title: A formal model of memory peculiarities for the verification of low-level operating-system code paper_content: This paper presents our solutions to some problems we encountered in an ongoing attempt to verify the micro-hypervisor currently developed within the Robin project. The problems that we discuss are (1) efficient automatic reasoning for type-correct programs in virtual memory, and (2) modeling memory-mapped devices with alignment requirements. The discussed solutions are integrated in our verification environment for operating-system kernels in the interactive theorem prover PVS. This verification environment will ultimately be used for the verification of the Robin micro-hypervisor. As a proof of concept we include an example verification of a very simple piece of code in our environment. --- paper_title: Applying source-code verification to a microkernel: the VFiasco project paper_content: We present the VFiasco project, in which we apply source-code verification to a complete operating-system kernel written in C++. The aim of the VFiasco project is to establish security-relevant properties of the Fiasco microkernel. Source-code verification works by reasoning about the semantics of the full source code of a program. Traditionally it is limited to small programs written in an academic programming language. The project's main challenges are to enable high-level reasoning about typed data starting from only low-level knowledge about the hardware, and to develop a clean semantics for the subset of C++ used by the kernel. In this extended abstract we present our ideas for tackling these challenges. We focus on a type-safe object store that is based on a hardware model that closely resembles the IA32 virtual-memory architecture and on guarantees provided by the kernel itself. We also briefly touch on the semantics for C++. Please find the full version of this paper at http://www.vfiasco.org/objstore.pdf.
--- paper_title: A Linear Time Algorithm for Deciding Subject Security paper_content: A particular protection mechanism from the protection literature--the take and grant system--is presented. For this particular mechanism, it is shown that the safety problem can be solved in linear time. Moreover, the security policies that this mechanism can enforce are characterized. The theoretical analysis of systems for protecting the security of information should be of interest to the practitioner as well as the theoretician. The practitioner must convince users that the integrity of their programs and files is maintained; i.e. he must convince them that the operating system and its mechanisms will correctly protect these programs and files. Vague or informal arguments are unacceptable since they are often wrong. Indeed the folklore is replete with stories of "secure" systems being compromised in a matter of hours. A primary reason for the abundance of these incidents is that even a small set of apparently simple protection primitives can often lead to complex systems that can be exploited, and therefore compromised, by some adversary. But it is precisely this fact, simple primitives with complex behavior, that lures the theoretician. Our purpose here is to present a concrete example of a protection system and then to completely analyze its behavior. Our motivation for doing this analysis is twofold. The protection system that we study is not one we invented; rather it appears, for example, in Cohen (1). Moreover it is closely related to systems studied in Denning and Graham (2) and Jones (4). This point is most important, for the space of possible protection systems is exceedingly rich and it is trivial to think up arbitrary systems to study. We are interested not in arbitrary systems, but in systems that have practical application. The above motivation is necessary but not sufficient for us to establish that these questions should interest the theoretician. Our second reason for studying these problems is that in a natural way they can be viewed as "generalizations of transitive closure." Informally, our model is: --- paper_title: Verifying the EROS confinement mechanism paper_content: Capability systems can be used to implement higher-level security policies including the *-property if a mechanism exists to ensure confinement. The implementation can be efficient if the "weak" access restriction described in this paper is introduced. In the course of developing EROS, a pure capability system, it became clear that verifying the correctness of the confinement mechanism was necessary in establishing the security of the operating system. We present a verification of the EROS confinement mechanism with respect to a broad class of capability architectures (including EROS). We give a formal statement of the requirements, construct a model of the architecture's security policy and operational semantics, and show that architectures covered by this model enforce the confinement requirements if a small number of initial static checks on the confined subsystem are satisfied. The method used generalizes to any capability system. --- paper_title: An axiomatic basis for computer programming paper_content: In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics.
This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both theoretical and practical, may follow from a pursuance of these topics. --- paper_title: Formal verification of a C compiler front-end paper_content: This paper presents the formal verification of a compiler front-end that translates a subset of the C language into the Cminor intermediate language. The semantics of the source and target languages as well as the translation between them have been written in the specification language of the Coq proof assistant. The proof of observational semantic equivalence between the source and generated code has been machine-checked using Coq. An executable compiler was obtained by automatic extraction of executable Caml code from the Coq specification of the translator, combined with a certified compiler back-end generating PowerPC assembly code from Cminor, described in previous work. --- paper_title: Formal pervasive verification of a paging mechanism paper_content: Memory virtualization by means of demand paging is a crucial component of every modern operating system. The formal verification is challenging since reasoning about the page fault handler has to cover two concurrent computational sources: the processor and the hard disk. We accurately model the interleaved executions of devices and the page fault handler, which is written in a high-level programming language with inline assembler portions. We describe how to combine results from sequential Hoare logic style reasoning about the page fault handler on the low-level concurrent machine model. To the best of our knowledge this is the first example of pervasive formal verification of software communicating with devices. --- paper_title: Computer Architecture: A Quantitative Approach paper_content: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today. In this edition, the authors bring their trademark method of quantitative analysis not only to high-performance desktop machine design, but also to the design of embedded and server systems. They have illustrated their principles with designs from all three of these domains, including examples from consumer electronics, multimedia and Web technologies, and high-performance computing. --- paper_title: On the Architecture of System Verification Environments paper_content: Implementations of computer systems comprise many layers and employ a variety of programming languages. Building such systems requires support of an often complex, accompanying tool chain. ::: ::: The Verisoft project deals with the formal pervasive verification of computer systems. Making use of appropriate formal specification and proof tools, this task requires (i) specifying the layers and languages used in the implementation, (ii) specifying and verifying the algorithms employed by the tool chain (or, alternatively, validating their actual output), and (iii) proving simulation statements between layers, arguing about the programs residing at the different layers. 
Combining the simulation statements for all layers should allow to transfer correctness results for top-layer programs to their bottom-layer representation; in this manner, a verified stack can be built. ::: ::: Maintaining all formal artifacts, the actual system implementation, and the (verification) tool chain is a challenging task. We call sets of tools that help addressing this task system verification environments. In this paper, we describe the structure, contents, and architecture of the system verification environment used in the Verisoft project. --- paper_title: Formal certification of a compiler back-end or: programming a compiler with a proof assistant paper_content: This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well. --- paper_title: Correct Microkernel Primitives paper_content: Primitives are basic means provided by a microkernel to implementors of operating system services. Intensively used within every OS and commonly implemented in a mixture of high-level and assembly programming languages, primitives are meaningful and challenging candidates for formal verification. We report on the accomplished correctness proof of academic microkernel primitives. We describe how a novel approach to verification of programs written in C with inline assembler is successfully applied to a piece of realistic system software. Necessary and sufficient criteria covering functional correctness and requirements for the integration into a formal model of memory virtualization are determined and formally proven. The presented results are important milestones on the way to a pervasively verified operating system. --- paper_title: Pervasive Compiler Verification -- From Verified Programs to Verified Systems paper_content: We report in this paper on the formal verification of a simple compiler for the C-like programming language C0. The compiler correctness proof meets the special requirements of pervasive system verification and allows to transfer correctness properties from the C0 layer to the assembler and hardware layers. The compiler verification is split into two parts: the correctness of the compiling specification (which can be translated to executable ML code via Isabelle's code generator) and the correctness of a C0 implementation of this specification. We also sketch a method to solve the boot strap problem, i.e., how to obtain a trustworthy binary of the C0 compiler from its C0 implementation. Ultimately, this allows to prove pervasively the correctness of compiled C0 programs in the real system. --- paper_title: On the Verification of Memory Management Mechanisms paper_content: We report on the design and formal verification of a complex processor supporting address translation by means of a memory management unit (MMU). We give a paper and pencil proof that such a processor together with an appropriate page fault handler simulates virtual machines modeling user computation. These results are crucial steps towards the seamless verification of entire computer systems. 
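For readers unfamiliar with the notation behind several of the entries above (notably "An axiomatic basis for computer programming"), here is a minimal illustration of Hoare-logic proof rules in standard textbook notation; it is a sketch, not text taken from any of the cited papers: the assignment axiom, the rule of composition, and one example instance.

    % math-mode LaTeX; assignment axiom: the precondition is the postcondition with E substituted for x
    \{P[E/x]\}\ x := E\ \{P\}
    % rule of composition: two triples chain through an intermediate assertion Q
    \frac{\{P\}\ S_1\ \{Q\} \qquad \{Q\}\ S_2\ \{R\}}{\{P\}\ S_1;\, S_2\ \{R\}}
    % example instance of the assignment axiom
    \{x + 1 > 0\}\ x := x + 1\ \{x > 0\}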
--- paper_title: A verification environment for sequential imperative programs in Isabelle/HOL paper_content: We develop a general language model for sequential imperative programs together with a Hoare logic. We instantiate the framework with common programming language constructs and integrate it into Isabelle/HOL, to gain a usable and sound verification environment. --- paper_title: Formal Device and Programming Model for a Serial Interface paper_content: The verification of device drivers is essential for the pervasive verification of an operating system. To show the correctness of device drivers, devices have to be formally modeled. In this paper we present the formal model of the serial interface controller UART 16550A. By combining the device model with a formal model of a processor instruction set architecture we obtain an assembler-level programming model for a serial interface. As a programming and verification example we present a simple UART driver implemented in assembler and prove its correctness. All models presented in this paper have been formally specified in the Isabelle/HOL theorem prover. --- paper_title: An approach to systems verification paper_content: The term systems verification refers to the specification and verification of the components of a computing system, including compilers, assemblers, operating systems and hardware. We outline our approach to systems verification, and summarize the application of this approach to several systems components. These components consist of a code generator for a simple high-level language, an assembler and linking loader, a simple operating system kernel, and a microprocessor design. --- paper_title: A machine-checked model for a Java-like language, virtual machine, and compiler paper_content: We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between the realism of the language and the tractability and clarity of its formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence, a type system and a definite initialisation analysis, a type safety proof of the small step semantics, a virtual machine (JVM), its operational semantics and its type system, a type safety proof for the JVM; a bytecode verifier, that is, a data flow analyser for the JVM, a correctness proof of the bytecode verifier with respect to the type system, and a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine, and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL. --- paper_title: Towards the formal verification of a C0 compiler: code generation and implementation correctness paper_content: In the spirit of the famous CLI stack project the Verisoft project aims at the pervasive verification of entire computer systems including hardware, system software, compiler, and communicating applications, with a special focus on industrial applications. The main programming language used in the Verisoft project is C0 (a subset of C which is similar to MISRA C). 
This paper reports on (i) an operational small steps semantics for C0 which is formalized in Isabelle/HOL, (ii) the formal specification of a compiler from C0 to the DLX machine language in Isabelle/HOL, (iii) a paper and pencil correctness proof for this compiler and the status of the formal verification effort for this proof, and (iv) the implementation of the compiler in C0 and a formal proof in Isabelle/HOL that the implementation produces the same code as the specification. --- paper_title: Putting it all together – Formal verification of the VAMP paper_content: In the verified architecture microprocessor (VAMP) project we have designed, functionally verified, and synthesized a processor with full DLX instruction set, delayed branch, Tomasulo scheduler, maskable nested precise interrupts, pipelined fully IEEE compatible dual precision floating point unit with variable latency, and separate instruction and data caches. The verification has been carried out in the theorem proving system PVS. The processor has been implemented on a Xilinx FPGA. --- paper_title: Integration of a Software Model Checker into Isabelle paper_content: The paper presents a combination of interactive and automatic tools in the area of software verification. We have integrated a newly developed software model checker into an interactive verification environment for imperative programming languages. Although the problems in software verification are mostly too hard for full automation, we could increase the level of automated assistance by discharging less interesting side conditions. That allows the verification engineer to focus on the abstract algorithm, safely assuming unbounded arithmetic and unlimited buffers. --- paper_title: Separation logic: a logic for shared mutable data structures paper_content: In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions. --- paper_title: Formalising a high-performance microkernel paper_content: This paper argues that a pragmatic approach is needed for integrating design and formalisation of complex systems. We report on our approach to designing the seL4 operating system microkernel API and its formalisation in Isabelle/HOL. The formalisation consists of the systematic translation of significant parts of the functional programming language Haskell into Isabelle/HOL, including monadbased code. We give an account of the experience, decisions and outcomes in this translation as well as the technical problems we encountered together with our solutions. 
The longer-term goal is to demonstrate that formalisation and verification of a large, complex, OS-level code base is feasible with current tools and methods and is in the order of magnitude of traditional development cost. --- paper_title: Future Directions in the Evolution of the L4 Microkernel paper_content: L4 is a small microkernel that is used as a basis for several operating systems. L4 seems an ideal basis for embedded systems that possess and use memory protection. It could provide a reliable, robust, and secure embedded platform. This paper examines L4’s suitability as a basis for trustworthy embedded systems. It motivates the use of a microkernel, introduces L4 in particular as an example microkernel, overviews selected embedded applications benefiting from memory protection (focusing mostly on security related applications), and then examines L4’s applicability to those application domains and identifies issues with L4’s abstractions and mechanisms. --- paper_title: A Logic for Virtual Memory paper_content: We present an extension to classical separation logic which allows reasoning about virtual memory. Our logic is formalised in the Isabelle/HOL theorem prover in a manner allowing classical separation logic notation to be used at an abstract level. We demonstrate that in the common cases, such as user applications, our logic reduces to classical separation logic.
At the same time we can express properties about page tables, direct physical memory access, virtual memory access, and shared memory in detail. --- paper_title: A memory allocation model for an embedded microkernel paper_content: High-end embedded systems featuring millions of lines of code, with varying degrees of assurance, are becoming commonplace. These devices are typically expected to meet diverse application requirements within tight resource budgets. Their growing complexity makes it increasingly difficult to ensure that they are secure and robust. One approach is to provide strong guarantees of isolation between components — thereby ensuring that the effects of any misbehaviour are confined to the misbehaving component. This paper focuses on an aspect of the system’s behaviour that is critical to any such guarantee: management of physical memory resources. In this paper, we present a secure physical memory management model that gives hard guarantees on physical memory consumption. The model dictates the inkernel mechanisms for allocation, however the allocation policy is implemented outside the kernel. We also argue that exporting allocation to user-level provides the flexibility necessary to implement the diverse resource management policies needed in embedded systems, while retaining the high-assurance properties of a formally verified kernel. --- paper_title: Proving Pointer Programs in Hoare Logic paper_content: It is possible, but difficult, to reason in Hoare logic about programs which address and modify data structures defined by pointers. The challenge is to approach the simplicity of Hoare logic’s treatment of variable assignment, where substitution affects only relevant assertion formula. The axiom of assignment to object components treats each component name as a pointer-indexed array. This permits a formal treatment of inductively defined data structures in the heap but tends to produce instances of modified component mappings in arguments to inductively defined assertions. The major weapons against these troublesome mappings are assertions which describe spatial separation of data structures. Three example proofs are sketched.
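The separation-logic entries above (and the pointer-program entry just before this point) rely on two standard ingredients, the points-to assertion and the frame rule. A minimal sketch in conventional notation, again illustrative rather than quoted from the cited papers:

    % math-mode LaTeX; x |-> v describes a one-cell heap at address x holding value v
    \{x \mapsto -\}\ [x] := v\ \{x \mapsto v\}
    % frame rule: a proof about a small heap footprint extends to any disjoint part R,
    % provided the command C does not modify variables occurring free in R
    \frac{\{P\}\ C\ \{Q\}}{\{P \ast R\}\ C\ \{Q \ast R\}}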
--- paper_title: Types, bytes, and separation logic paper_content: We present a formal model of memory that both captures the low-level features of C's pointers and memory, and that forms the basis for an expressive implementation of separation logic. At the low level, we do not commit common oversimplifications, but correctly deal with C's model of programming language values and the heap. At the level of separation logic, we are still able to reason abstractly and efficiently. We implement this framework in the theorem prover Isabelle/HOL and demonstrate it on two case studies. We show that the divide between detailed and abstract does not impose undue verification overhead, and that simple programs remain easy to verify. We also show that the framework is applicable to real, security- and safety-critical code by formally verifying the memory allocator of the L4 microkernel. --- paper_title: Towards a Practical, Verified Kernel paper_content: In the paper we examine one of the issues in designing, specifying, implementing and formally verifying a small operating system kernel -- how to provide a productive and iterative development methodology for both operating system developers and formal methods practitioners. We espouse the use of functional programming languages as a medium for prototyping that is readily amenable to formalisation with a low barrier to entry for kernel developers, and report early experience in the process of designing and building seL4: a new, practical, and formally verified microkernel. --- paper_title: Isabelle theories for machine words paper_content: We describe a collection of Isabelle theories which facilitate reasoning about machine words. For each possible word length, the words of that length form a type, and most of our work consists of generic theorems which can be applied to any such type. We develop the relationships between these words and integers (signed and unsigned), lists of booleans and functions from index to value, noting how these relationships are similar to those between an abstract type and its representing set. We discuss how we used Isabelle's bin type, before and after it was changed from a datatype to an abstract type, and the techniques we used to retain, as nearly as possible, the convenience of primitive recursive definitions. We describe other useful techniques, such as encoding the word length in the type. ---
Title: Operating System Verification — An Overview Section 1: Introduction Description 1: Introduce the motivation for operating system verification and the historical background of software complexity and the software crisis. Explain briefly the goals of this article including highlighting formal, mathematical proof as a solution for achieving high assurance in software implementation correctness. Section 2: Software Verification Description 2: Provide an overview of software verification in general, focusing on formal verification. Discuss how formal verification fits into the broader context of software development, its promises, and the remaining risks. Section 3: Microkernels Description 3: Introduce operating system microkernels, their definition, and significance. Include explanations of sample architectures, their advantages, disadvantages, and trade-offs. Section 4: Architectures Description 4: Detail the various architectures of microkernels, comparing them with traditional monolithic kernels. Discuss their design philosophy and real-world applications. Section 5: Services and Policies Description 5: Discuss the usage scenarios, services, and mechanisms provided by microkernels. Include the emphasis on abstraction, flexibility, real-time capabilities, and security policies. Section 6: OS Verification - The State of the Art Description 6: Survey the state of the art in operating system verification. Provide an overview of different projects in the field, their goals, approaches, and outcomes. Section 7: Overview Description 7: Present a high-level, table-style overview of the projects surveyed. Include brief backgrounds, goals, methods, and achievements of these projects. Section 8: Related Projects Description 8: Cover a number of smaller, related projects that have made significant contributions to the field of OS verification even if they did not target full microkernel verification. Section 9: Early Work Description 9: Dive into early work on OS verification, summarizing key projects such as UCLA Secure Unix, PSOS, and KIT. Discuss their methodologies, outcomes, and impact on later verification efforts. Section 10: VFiasco Description 10: Detail the VFiasco project, its goals, approaches, and contributions. Include discussions on its challenges and the modeling of the C++ language for verification. Section 11: EROS/Coyotos Description 11: Describe the Coyotos kernel and its predecessor EROS. Explain their security models, the shift towards formal verification in Coyotos, and the efforts involved in developing a suitable programming language for kernel implementation. Section 12: Verisoft Description 12: Provide a comprehensive overview of the Verisoft project, emphasizing its approach to pervasive verification from hardware to applications. Discuss the project's significant achievements and ongoing efforts. Section 13: L4.verified/seL4 Description 13: Detail the L4.verified and seL4 projects, covering their tight integration, methodologies, verification accomplishments, and performance outcomes. Discuss future prospects and the commercial implications of these efforts. Section 14: Conclusion Description 14: Summarize the key findings, state of the art, and future directions in operating system verification. Offer insights into the feasibility of achieving comprehensive verification and the technological advancements that have made this possible.
Survey on Variants of Distributed Energy-Efficient Clustering Protocols in Heterogeneous Wireless Sensor Networks
6
--- paper_title: A channel access scheme for large dense packet radio networks paper_content: Prior work in the field of packet radio networks has often assumed a simple success-if-exclusive model of successful reception. This simple model is insufficient to model interference in large dense packet radio networks accurately. In this paper we present a model that more closely approximates communication theory and the underlying physics of radio communication. Using this model we present a decentralized channel access scheme for scalable packet radio networks that is free of packet loss due to collisions and that at each hop requires no per-packet transmissions other than the single transmission used to convey the packet to the next-hop station. We also show that with a modest fraction of the radio spectrum, pessimistic assumptions about propagation resulting in maximum-possible self-interference, and an optimistic view of future signal processing capabilities that a self-organizing packet radio network may scale to millions of stations within a metro area with raw per-station rates in the hundreds of megabits per second. --- paper_title: EECS: An energy efficient clustering scheme in wireless sensor networks paper_content: Data gathering is a common but critical operation in many applications of wireless sensor networks. Innovative techniques that improve energy efficiency to prolong the network lifetime are highly required. Clustering is an effective topology control approach in wireless sensor networks, which can increase network scalability and lifetime. In this paper, we propose a novel energy efficient clustering scheme (EECS) for single-hop wireless sensor networks, which better suits the periodical data gathering applications. Our approach elects cluster heads with more residual energy in an autonomous manner through local radio communication with no iteration while achieving good cluster head distribution; furthermore, it introduces a novel distance-based method to balance the load among the cluster heads. Simulation results show that EECS prolongs the network lifetime significantly against the other clustering protocols such as LEACH and HEED. --- paper_title: Energy-efficient communication protocol for wireless microsensor networks paper_content: Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
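The randomized cluster-head rotation that the LEACH abstract above refers to is usually presented as a per-round threshold test. The sketch below (Python, illustrative only; the threshold expression follows the common LEACH formulation and the parameter names are ours, not taken from the entry above) shows how a single node decides whether to elect itself cluster head in a given round:

    import random

    def leach_elects_itself(round_no, p, served_recently):
        """LEACH-style randomized self-election of a cluster head.

        round_no        -- current round number r
        p               -- desired fraction of cluster heads per round (e.g. 0.05)
        served_recently -- True if this node was a cluster head in the last 1/p rounds
        """
        if served_recently:
            return False  # threshold is 0 for nodes that served recently
        # T(n) = p / (1 - p * (r mod 1/p)) for eligible nodes
        period = int(round(1.0 / p))  # approximation if 1/p is not an integer
        threshold = p / (1.0 - p * (round_no % period))
        return random.random() < threshold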
--- paper_title: PEGASIS: Power-Efficient Gathering in Sensor Information Systems paper_content: Sensor networks consisting of nodes with limited battery power and wireless communication are deployed to collect useful information from the field. The main idea in PEGASIS is for each node to receive from and transmit to close neighbors and take turns being the leader for transmission to the BS. This approach distributes the energy load evenly among the sensor nodes in the network. Sensor nodes are randomly deployed in the sensor field, and therefore, the i-th node is at a random location. The nodes will be organized to form a chain, which can be accomplished by the sensor nodes themselves using a greedy algorithm. An algorithm is also given to resolve the unbalanced energy consumption caused by the long-distance data transmission of some nodes in a chain formed by the greedy algorithm. --- paper_title: EEHC: Energy efficient heterogeneous clustered scheme for wireless sensor networks paper_content: In recent years, there has been a growing interest in wireless sensor networks. One of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol. Hierarchical clustering algorithms are very important in increasing the network's lifetime. Each clustering algorithm is composed of two phases, the setup phase and the steady state phase. The hot point in these algorithms is the cluster head selection. In this paper, we study the impact of heterogeneity of nodes in terms of their energy in wireless sensor networks that are hierarchically clustered. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources. We also assume that the sensor nodes are randomly distributed and are not mobile, and that the coordinates of the sink and the dimensions of the sensor field are known. Homogeneous clustering protocols assume that all the sensor nodes are equipped with the same amount of energy and, as a result, they cannot take advantage of the presence of node heterogeneity. Adapting this approach, we introduce an energy efficient heterogeneous clustered scheme for wireless sensor networks based on weighted election probabilities of each node to become a cluster head according to the residual energy in each node. Finally, the simulation results demonstrate that our proposed heterogeneous clustering approach is more effective in prolonging the network lifetime compared with LEACH. --- paper_title: TEEN: a routing protocol for enhanced efficiency in wireless sensor networks paper_content: Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.
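To illustrate the greedy chain construction that the PEGASIS abstract above alludes to, a minimal Python sketch follows; the starting rule (begin at the node farthest from the base station) and the example coordinates are illustrative assumptions, not code from the cited paper.

import math

def greedy_chain(nodes, base_station):
    """PEGASIS-style greedy chain: start from the node farthest from the
    base station, then repeatedly append the closest unvisited node.
    `nodes` is a list of (x, y) tuples; `base_station` is an (x, y) tuple.
    Illustrative sketch only, not the authors' reference implementation."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = list(nodes)
    # Begin the chain at the node farthest from the base station.
    current = max(remaining, key=lambda n: dist(n, base_station))
    remaining.remove(current)
    chain = [current]
    while remaining:
        nxt = min(remaining, key=lambda n: dist(n, chain[-1]))
        remaining.remove(nxt)
        chain.append(nxt)
    return chain

# Example usage with a few hypothetical node positions.
print(greedy_chain([(0, 0), (10, 2), (3, 7), (8, 8)], base_station=(50, 50)))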
--- paper_title: Stochastic and Balanced Distributed Energy-Efficient Clustering ( SBDEEC ) for heterogeneous wireless sensor networks paper_content: Typically, a wireless sensor network contains an important number of inexpensive power constrained sensors which collect data from the environment and transmit them towards the base station in a cooperative way. Saving energy and therefore, extending the wireless sensor networks lifetime, imposes a great challenge. Many new protocols are specifically designed for these raisons where energy awareness is an essential consideration. The clustering techniques are largely used for these purposes. In this paper, we present and evaluate a Stochastic and Balanced Developed Distributed Energy-Efficient Clustering (SBDEEC) scheme for heterogeneous wireless sensor networks. This protocol is based on dividing the network into dynamic clusters. The cluster's nodes communicate with an elected node called cluster head, and then the cluster heads communicate the information to the base station. SBDEEC introduces a balanced and dynamic method where the cluster head election probability is more efficient. Moreover, it uses a stochastic scheme detection to extend the network lifetime. Simulation results show that our protocol performs better than the Stable Election Protocol (SEP) and than the Distributed Energy- Efficient Clustering (DEEC) in terms of network lifetime. In the proposed protocol the first node death occurs over 90% times longer than the first node death in DEEC protocol and by about 130% than SEP. --- paper_title: SEP: A stable election protocol for clustered heterogeneous wireless sensor networks paper_content: We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources—this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes. 
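As an illustration of the weighted election probabilities mentioned in the SEP abstract above, with p_opt the desired cluster-head fraction, m the fraction of advanced nodes, and a the factor of extra energy they carry, the per-class probabilities are commonly written as

\[
p_{\mathrm{nrm}} = \frac{p_{\mathrm{opt}}}{1 + a\,m},
\qquad
p_{\mathrm{adv}} = \frac{p_{\mathrm{opt}}\,(1 + a)}{1 + a\,m},
\]

and the corresponding per-class thresholds are obtained by substituting these probabilities into a LEACH-style threshold formula.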
--- paper_title: Energy-efficient communication protocol for wireless microsensor networks paper_content: Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated. --- paper_title: E-DEEC- Enhanced Distributed Energy Efficient Clustering scheme for heterogeneous WSN paper_content: In recent advances, achieving energy efficiency, lifetime, deployment of nodes, fault tolerance and latency, in short high reliability and robustness, have become the main research goals of wireless sensor networks. Many routing protocols based on a clustering structure and on heterogeneity have been proposed in recent years. We propose EDEEC, which uses three types of nodes, to prolong the lifetime and stability of the network. Hence, it increases the heterogeneity and energy level of the network. Simulation results show that EDEEC performs better than SEP with more stability and more effective messages. --- paper_title: EDDEEC: Enhanced Developed Distributed Energy-Efficient Clustering for Heterogeneous Wireless Sensor Networks paper_content: Wireless Sensor Networks (WSNs) consist of a large number of randomly deployed, energy constrained sensor nodes. Sensor nodes have the ability to sense and send sensed data to the Base Station (BS). Sensing as well as transmitting data towards the BS requires high energy. In WSNs, saving energy and extending network lifetime are great challenges. Clustering is a key technique used to optimize energy consumption in WSNs. In this paper, we propose a novel clustering based routing technique: Enhanced Developed Distributed Energy Efficient Clustering scheme (EDDEEC) for heterogeneous WSNs. Our technique is based on dynamically and more efficiently changing the Cluster Head (CH) election probability. Simulation results show that our proposed protocol achieves longer lifetime, stability period and more effective messages to the BS than Distributed Energy Efficient Clustering (DEEC), Developed DEEC (DDEEC) and Enhanced DEEC (EDEEC) in heterogeneous environments. --- paper_title: BEENISH: Balanced Energy Efficient Network Integrated Super Heterogeneous Protocol for Wireless Sensor Networks paper_content: In past years there has been increasing interest in the field of Wireless Sensor Networks (WSNs).
One of the major issues in WSNs is the development of energy efficient routing protocols. Clustering is an effective way to increase energy efficiency. Mostly, heterogeneous protocols consider two or three energy levels of nodes. In reality, heterogeneous WSNs contain a large range of energy levels. By analyzing the communication energy consumption of the clusters and the large range of energy levels in heterogeneous WSNs, we propose the BEENISH (Balanced Energy Efficient Network Integrated Super Heterogeneous) protocol. It assumes a WSN containing four energy levels of nodes. Here, Cluster Heads (CHs) are elected on the basis of the residual energy level of nodes. Simulation results show that it performs better than existing clustering protocols in heterogeneous WSNs. Our protocol achieves longer stability, longer lifetime and more effective messages than Distributed Energy Efficient Clustering (DEEC), Developed DEEC (DDEEC) and Enhanced DEEC (EDEEC). --- paper_title: Energy Efficient Scheme for Clustering Protocol Prolonging the Lifetime of Heterogeneous Wireless Sensor Networks paper_content: In recent advances, many routing protocols have been proposed based on heterogeneity, with main research goals such as achieving energy efficiency, lifetime, deployment of nodes, fault tolerance and latency, in short high reliability and robustness. In this paper, we propose an energy efficient cluster head scheme for heterogeneous wireless sensor networks, called the TDEEC (Threshold Distributed Energy Efficient Clustering) protocol, which modifies the threshold value based on which a node decides whether or not to become a cluster head. Simulation results show that the proposed algorithm performs better as compared to others. --- paper_title: HEER: Hybrid Energy Efficient Reactive Protocol for Wireless Sensor Networks paper_content: Wireless Sensor Networks (WSNs) consist of numerous sensors which send sensed data to a base station. Energy conservation is an important issue for sensor nodes as they have limited power. Many routing protocols have been proposed earlier for energy efficiency in both homogeneous and heterogeneous environments. We can prolong stability and network lifetime by reducing energy consumption. In this research paper, we propose a protocol designed for the characteristics of reactive homogeneous WSNs, the HEER (Hybrid Energy Efficient Reactive) protocol. In HEER, Cluster Head (CH) selection is based on the ratio of the residual energy of a node and the average energy of the network. Moreover, to conserve more energy, we introduce a Hard Threshold (HT) and a Soft Threshold (ST). Finally, simulations show that our protocol has not only prolonged the network lifetime but also significantly increased the stability period. --- paper_title: Stochastic and Balanced Distributed Energy-Efficient Clustering (SBDEEC) for heterogeneous wireless sensor networks paper_content: Typically, a wireless sensor network contains an important number of inexpensive power constrained sensors which collect data from the environment and transmit them towards the base station in a cooperative way. Saving energy and therefore extending the wireless sensor network's lifetime imposes a great challenge. Many new protocols are specifically designed for these reasons, where energy awareness is an essential consideration. The clustering techniques are largely used for these purposes. In this paper, we present and evaluate a Stochastic and Balanced Developed Distributed Energy-Efficient Clustering (SBDEEC) scheme for heterogeneous wireless sensor networks.
This protocol is based on dividing the network into dynamic clusters. The cluster's nodes communicate with an elected node called the cluster head, and the cluster heads then communicate the information to the base station. SBDEEC introduces a balanced and dynamic method where the cluster head election probability is more efficient. Moreover, it uses a stochastic detection scheme to extend the network lifetime. Simulation results show that our protocol performs better than the Stable Election Protocol (SEP) and the Distributed Energy-Efficient Clustering (DEEC) protocol in terms of network lifetime. In the proposed protocol, the first node death occurs over 90% later than the first node death in the DEEC protocol and about 130% later than in SEP. ---
Title: Survey on Variants of Distributed Energy Efficient Clustering Protocols in Heterogeneous Wireless Sensor Network Section 1: Introduction Description 1: Introduce the basic concepts of clustering in wireless sensor networks and discuss the importance of energy efficiency. Section 2: Distributed Energy Efficient Clustering Protocols for Heterogeneous Wireless Sensor Network Description 2: Describe the DEEC scheme and its mechanisms for energy-efficient clustering in heterogeneous networks. Section 3: Channel Propagation Model Description 3: Explain the first-order radio model used in DEEC and its variants to simulate energy dissipation in wireless channels. Section 4: Performance Metrics Description 4: Outline various performance metrics used to evaluate the performance of clustering protocols, including network lifetime, stability period, and throughput. Section 5: Variants of DEEC Description 5: Describe and compare the different variants of DEEC including SDEEC, DDEEC, EDEEC, BEENISH, TDEEC, SBDEEC, and HEER. Section 6: Conclusion Description 6: Summarize the findings and discuss the potential for further research and improvements in energy-efficient clustering protocols for WSN.
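For reference, the first-order radio model mentioned in Section 3 of the outline above is typically stated as follows, where k is the number of bits, d the transmission distance, and E_elec, ε_fs, ε_mp are hardware-dependent constants (this is the standard formulation, not a quotation from any single cited paper):

\[
E_{Tx}(k,d) =
\begin{cases}
k\,E_{elec} + k\,\varepsilon_{fs}\,d^{2}, & d < d_{0},\\
k\,E_{elec} + k\,\varepsilon_{mp}\,d^{4}, & d \ge d_{0},
\end{cases}
\qquad
E_{Rx}(k) = k\,E_{elec},
\qquad
d_{0} = \sqrt{\varepsilon_{fs}/\varepsilon_{mp}}.
\]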
Computational Effects and Operations: An Overview
5
--- paper_title: Semantics for Algebraic Operations paper_content: Abstract Given a category C with finite products and a strong monad T on C, we investigate axioms under which an ObC-indexed family of operations of the form αx:(Tx)n → Tx provides a definitive semantics for algebraic operations added to the computational λ-calculus. We recall a definition for which we have elsewhere given adequacy results for both big and small step operational semantics, and we show that it is equivalent to a range of other possible natural definitions of algebraic operation. We outline examples and non-examples and we show that our definition is equivalent to one for call-by-name languages with effects, too. --- paper_title: Notions of Computation Determine Monads paper_content: We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship. --- paper_title: Notions of computation and monads paper_content: Abstract The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values ) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations , that provide a correct basis for proving equivalence of programs for a wide range of notions of computation . --- paper_title: Combining Computational Effects: Commutativity and Sum paper_content: We seek a unified account of modularity for computational effects, using the notion of enriched Lawvere theory, together with its relationship with strong monads, to reformulate Moggi’s paradigm for modelling computational effects. Effects qua theories are then combined by appropriate bifunctors (on the category of theories). We give a theory of the commutative combination of effects, which in particular yields Moggi’s side-effects monad transformer. And we give a theory for the sum of computational effects, which in particular yields Moggi’s exceptions monad transformer. --- paper_title: Adequacy for Algebraic Effects paper_content: Moggi proposed a monadic account of computational effects. He also presented the computational λ-calculus, λc, a core call-by-value functional programming language for effects; the effects are obtained by adding appropriate operations. The question arises as to whether one can give a corresponding treatment of operational semantics. We do this in the case of algebraic effects where the operations are given by a single-sorted algebraic signature, and their semantics is supported by the monad, in a certain sense. We consider call-by-value PCF with-- and without--recursion, an extension of λc with arithmetic. We prove general adequacy theorems, and illustrate these with two examples: nondeterminism and probabilistic nondeterminism. 
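As a concrete instance of the operations-and-equations view taken in the abstracts above, finite nondeterminism (one of the two examples mentioned in the adequacy paper) is standardly presented by a binary choice operation ∨ subject to the semilattice equations

\[
x \vee x = x, \qquad x \vee y = y \vee x, \qquad (x \vee y) \vee z = x \vee (y \vee z),
\]

whose free algebras yield the finite nonempty powerset monad.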
--- paper_title: Call-by-Push-Value: A Subsuming Paradigm paper_content: Call-by-push-value is a new paradigm that subsumes the call-by-name and call-by-value paradigms, in the following sense: both operational and denotational semantics for those paradigms can be seen as arising, via translations that we will provide, from similar semantics for call-by-observable. To explain call-by-observable, we first discuss general operational ideas, especially the distinction between values and computations, using the principle that "a value is, a computation does". Using an example program, we see that the lambda-calculus primitives can be understood as push/pop commands for an operand-stack. We provide operational and denotational semantics for a range of computational effects and show their agreement. We hence obtain semantics for call-by-name and call-by-value, of which some are familiar, some are new and some were known but previously appeared mysterious. --- paper_title: Combining Computational Effects: Commutativity and Sum paper_content: We seek a unified account of modularity for computational effects, using the notion of enriched Lawvere theory, together with its relationship with strong monads, to reformulate Moggi's paradigm for modelling computational effects. Effects qua theories are then combined by appropriate bifunctors (on the category of theories). We give a theory of the commutative combination of effects, which in particular yields Moggi's side-effects monad transformer. And we give a theory for the sum of computational effects, which in particular yields Moggi's exceptions monad transformer. --- paper_title: Semantics for Algebraic Operations paper_content: Given a category C with finite products and a strong monad T on C, we investigate axioms under which an Ob(C)-indexed family of operations of the form α_x : (Tx)^n → Tx provides a definitive semantics for algebraic operations added to the computational λ-calculus. We recall a definition for which we have elsewhere given adequacy results for both big and small step operational semantics, and we show that it is equivalent to a range of other possible natural definitions of algebraic operation. We outline examples and non-examples and we show that our definition is equivalent to one for call-by-name languages with effects, too. --- paper_title: Adjunctions whose counits are coequalizers, and presentations of finitary enriched monads paper_content: A right adjoint functor is said to be of descent type if the counit of the adjunction is pointwise a coequalizer. Building on the results of Tholen's doctoral thesis, we give necessary and sufficient conditions for a composite to be of descent type when each factor is so. We apply this to show that every finitary monad on a locally-finitely-presentable enriched category A admits a presentation in terms of basic operations and equations between derived operations, the arities here being the finitely-presentable objects of A. --- paper_title: Notions of Computation Determine Monads paper_content: We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship.
--- paper_title: A powerdomain construction paper_content: We develop a powerdomain construction, $\mathcal{P}[ \cdot ]$, which is analogous to the powerset construction and also fits in with the usual sum, product and exponentiation constructions on domains. The desire for such a construction arises when considering programming languages with nondeterministic features or parallel features treated in a nondeterministic way. We hope to achieve a natural, fully abstract semantics in which such equivalences as $(p\textit{ par } p) = (q\textit{ par }p)$ hold. The domain ($D \to $ Truthvalues) is not the right one, and instead we take the (finitely) generable subsets of D. When D is discrete they are ordered in an elementwise fashion. In the general case they are given the coarsest ordering consistent, in an appropriate sense, with the ordering given in the discrete case. We then find a restricted class of algebraic inductive partial orders which is closed under $\mathcal{P}[ \cdot ]$ as well as the sum, product and exponentiation constructions. This class permits the... --- paper_title: A probabilistic powerdomain of evaluations paper_content: A probabilistic power domain construction is given for the category of inductively complete partial orders. It is the partial order of continuous --- paper_title: Notions of Computation Determine Monads paper_content: We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship. --- paper_title: Adjunctions whose counits are coequalizers, and presentations of finitary enriched monads paper_content: A right adjoint functor is said to be of descent type if the counit of the adjunction is pointwise a coequalizer. Building on the results of Tholen's doctoral thesis, we give necessary and sufficient conditions for a composite to be of descent type when each factor is so. We apply this to show that every finitary monad on a locally-finitely-presentable enriched category A admits a presentation in terms of basic operations and equations between derived operations, the arties here being the finitely-presentable objects of A. --- paper_title: Notions of Computation Determine Monads paper_content: We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship. --- paper_title: A probabilistic powerdomain of evaluations paper_content: A probabilistic power domain construction is given for the category of inductively complete partial orders. It is the partial order of continuous --- paper_title: Algebraic Theory Of Processes paper_content: Algebraic Theory of Processes provides the first general and systematic introduction to the semantics of concurrent systems, a relatively new research area in computer science. 
It develops the mathematical foundations of the algebraic approach to the formal semantics of languages and applies these ideas to a particular semantic theory of distributed processes. The book is unique in developing three complementary views of the semantics of concurrent processes: a behavioral view where processes are deemed to be equivalent if they cannot be distinguished by any experiment; a denotational model where processes are interpreted as certain kinds of trees; and a proof-theoretic view where processes may be transformed into equivalent processes using valid equations or transformations. It is an excellent guide on how to reason about and relate behavioral, denotational, and proof-theoretical aspects of languages in general: all three views are developed for a sequence of increasingly complex algebraic languages for concurrency and in each case they are shown to be equivalent. Algebraic Theory of Processes is a valuable source of information for theoretical computer scientists, not only as an elegant and comprehensive introduction to the field but also in its discussion of the author's own theory of the behavioral semantics of processes ("testing equivalence") and original results in example languages for distributed processes, It is self-contained; the problems addressed are motivated from the standpoint of computer science, and all the required algebraic concepts are covered. There are exercises at the end of each chapter. --- paper_title: Combining Computational Effects: Commutativity and Sum paper_content: We seek a unified account of modularity for computational effects, using the notion of enriched Lawvere theory, together with its relationship with strong monads, to reformulate Moggi’s paradigm for modelling computational effects. Effects qua theories are then combined by appropriate bifunctors (on the category of theories). We give a theory of the commutative combination of effects, which in particular yields Moggi’s side-effects monad transformer. And we give a theory for the sum of computational effects, which in particular yields Moggi’s exceptions monad transformer. --- paper_title: A probabilistic powerdomain of evaluations paper_content: A probabilistic power domain construction is given for the category of inductively complete partial orders. It is the partial order of continuous --- paper_title: Adequacy for Algebraic Effects paper_content: Moggi proposed a monadic account of computational effects. He also presented the computational λ-calculus, λc, a core call-by-value functional programming language for effects; the effects are obtained by adding appropriate operations. The question arises as to whether one can give a corresponding treatment of operational semantics. We do this in the case of algebraic effects where the operations are given by a single-sorted algebraic signature, and their semantics is supported by the monad, in a certain sense. We consider call-by-value PCF with-- and without--recursion, an extension of λc with arithmetic. We prove general adequacy theorems, and illustrate these with two examples: nondeterminism and probabilistic nondeterminism. --- paper_title: Towards a mathematical operational semantics paper_content: We present a categorical theory of 'well-behaved' operational semantics which aims at complementing the established theory of domains and denotational semantics to form a coherent whole. 
It is shown that, if the operational rules of a programming language can be modelled as a natural transformation of a suitable general form, depending on functorial notions of syntax and behaviour, then one gets the following for free: an operational model satisfying the rules and a canonical, internally fully abstract denotational model which satisfies the operational rules. The theory is based on distributive laws and bialgebras; it specialises to the known classes of well-behaved rules for structural operational semantics, such as GSOS. --- paper_title: Adjunctions whose counits are coequalizers, and presentations of finitary enriched monads paper_content: A right adjoint functor is said to be of descent type if the counit of the adjunction is pointwise a coequalizer. Building on the results of Tholen's doctoral thesis, we give necessary and sufficient conditions for a composite to be of descent type when each factor is so. We apply this to show that every finitary monad on a locally-finitely-presentable enriched category A admits a presentation in terms of basic operations and equations between derived operations, the arties here being the finitely-presentable objects of A. --- paper_title: Notions of Computation Determine Monads paper_content: We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking omission of the continuations monad. We focus on semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship. --- paper_title: Combining Computational Effects: Commutativity and Sum paper_content: We seek a unified account of modularity for computational effects, using the notion of enriched Lawvere theory, together with its relationship with strong monads, to reformulate Moggi’s paradigm for modelling computational effects. Effects qua theories are then combined by appropriate bifunctors (on the category of theories). We give a theory of the commutative combination of effects, which in particular yields Moggi’s side-effects monad transformer. And we give a theory for the sum of computational effects, which in particular yields Moggi’s exceptions monad transformer. --- paper_title: Adequacy for Algebraic Effects paper_content: Moggi proposed a monadic account of computational effects. He also presented the computational λ-calculus, λc, a core call-by-value functional programming language for effects; the effects are obtained by adding appropriate operations. The question arises as to whether one can give a corresponding treatment of operational semantics. We do this in the case of algebraic effects where the operations are given by a single-sorted algebraic signature, and their semantics is supported by the monad, in a certain sense. We consider call-by-value PCF with-- and without--recursion, an extension of λc with arithmetic. We prove general adequacy theorems, and illustrate these with two examples: nondeterminism and probabilistic nondeterminism. --- paper_title: Towards a mathematical operational semantics paper_content: We present a categorical theory of 'well-behaved' operational semantics which aims at complementing the established theory of domains and denotational semantics to form a coherent whole. 
It is shown that, if the operational rules of a programming language can be modelled as a natural transformation of a suitable general form, depending on functorial notions of syntax and behaviour, then one gets the following for free: an operational model satisfying the rules and a canonical, internally fully abstract denotational model which satisfies the operational rules. The theory is based on distributive laws and bialgebras; it specialises to the known classes of well-behaved rules for structural operational semantics, such as GSOS. --- paper_title: Call-by-Push-Value: A Subsuming Paradigm paper_content: Call-by-push-value is a new paradigm that subsumes the call-by-name and call-by-value paradigms, in the following sense: both operational and denotational semantics for those paradigms can be seen as arising, via translations that we will provide, from similar semantics for call-by-observable. ::: ::: To explain call-by-observable, we first discuss general operational ideas, especially the distinction between values and computations, using the principle that "a value is, a computation does". Using an example program, we see that the lambda-calculus primitives can be understood as push/pop commands for an operand-stack. ::: ::: We provide operational and denotational semantics for a range of computational effects and show their agreement. We hence obtain semantics for call-by-name and call-by-value, of which some are familiar, some are new and some were known but previously appeared mysterious. ---
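To make the monadic treatment of effects in the references above a little more concrete, the following is a small illustrative Python sketch (not taken from any of the cited papers) of the exceptions monad T X = X + E, with raise_e as a family of nullary algebraic operations and bind as Kleisli extension.

# Illustrative sketch of the exceptions monad T X = X + E.
# Computations are tagged tuples: ("ok", x) for pure values, ("exn", e) for exceptions.

def unit(x):
    """Return of the exceptions monad: inject a pure value."""
    return ("ok", x)

def raise_e(e):
    """Nullary algebraic operation: raise exception e."""
    return ("exn", e)

def bind(m, f):
    """Kleisli extension: propagate exceptions, otherwise apply f to the value."""
    tag, payload = m
    return f(payload) if tag == "ok" else m

# Example: a guarded division built from the operations above.
def safe_div(x, y):
    return raise_e("div_by_zero") if y == 0 else unit(x / y)

print(bind(unit(10), lambda x: safe_div(x, 2)))     # ("ok", 5.0)
print(bind(safe_div(1, 0), lambda x: unit(x + 1)))  # ("exn", "div_by_zero")

Algebraicity corresponds here to the identity bind(raise_e(e), f) = raise_e(e) for every f, which the sketch satisfies.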
Title: Computational Effects and Operations: An Overview Section 1: Introduction Description 1: Introduce the motivation and scope of the paper, including an overview of the semantics of programming languages and computational effects. Section 2: Enriched Lawvere Theories Description 2: Introduce the concept of countable enriched Lawvere theories, provide examples, and explain their relationship with monads. Section 3: Combining Computational Effects Description 3: Discuss the natural combinations of countable enriched Lawvere theories and the various ways in which computational effects can be combined. Section 4: Operational Semantics Description 4: Present a unified structural operational semantics for computational effects using Lawvere theories and discuss its implications. Section 5: Further Work Description 5: Outline current work and future questions, including the study of deconstructors, extensions to local phenomena, and relationships with enriched equational theories.
An Overview of Multi-Task Learning in Deep Neural Networks
8
--- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Massively Multitask Networks for Drug Discovery paper_content: Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results: (1) massively multitask networks obtain predictive accuracies significantly better than single-task methods, (2) the predictive power of multitask networks improves as additional tasks and data are added, (3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and (4) multitask networks afford limited transferability to tasks not in the training set. Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process. --- paper_title: Fast R-CNN paper_content: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. --- paper_title: Multitask Learning: A Knowledge-Based Source of Inductive Bias paper_content: This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. 
Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined. --- paper_title: A Bayesian/Information Theoretic Model of Learning to Learn via Multiple Task Sampling paper_content: A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks. --- paper_title: Low Resource Dependency Parsing: Cross-lingual Parameter Sharing in a Neural Network Parser paper_content: Training a high-accuracy dependency parser requires a large treebank. However, these are costly and time-consuming to build. We propose a learning method that needs less data, based on the observation that there are underlying shared structures across languages. We exploit cues from a different source language in order to guide the learning process. Our model saves at least half of the annotation effort to reach the same accuracy compared with using the purely supervised method. --- paper_title: Trace Norm Regularised Deep Multi-Task Learning paper_content: We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way. 
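As a point of reference for the shared-representation idea running through the abstracts above, a minimal hard-parameter-sharing network (one shared trunk, one head per task, trained on a sum of per-task losses) might look like the following PyTorch sketch; the layer sizes, task count and unweighted loss sum are illustrative assumptions rather than details from any cited paper.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Minimal hard parameter sharing: one shared trunk, one head per task."""
    def __init__(self, in_dim=32, hidden=64, num_tasks=3):
        super().__init__()
        self.trunk = nn.Sequential(          # parameters shared by all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(          # task-specific output layers
            [nn.Linear(hidden, 1) for _ in range(num_tasks)]
        )

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]

# Joint training typically minimizes a (possibly weighted) sum of per-task losses.
model = HardSharingMTL()
x = torch.randn(8, 32)
targets = [torch.randn(8, 1) for _ in range(3)]
loss = sum(nn.functional.mse_loss(out, t) for out, t in zip(model(x), targets))
loss.backward()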
--- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Learning from hints in neural networks paper_content: Abstract Learning from examples is the process of taking input-output examples of an unknown function ƒ and infering an implementation of ƒ. Learning from hints allows for general information about ƒ to be used instead of just input-output examples. We introduce a method for incorporating any invariance hint about ƒ in any descent method for learning from examples. We also show that learning in a neural network remains NP-complete with a certain, biologically plausible, hint about the network. We discuss the information value and the complexity value of hibts. --- paper_title: Taking Advantage of Sparsity in Multi-Task Learning paper_content: We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning Argyriou et al. [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in Bickel et al. [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite. --- paper_title: A Dirty Model for Multi-task Learning paper_content: We consider multi-task learning in the setting of multiple linear regression, and where some relevant features could be shared across the tasks. Recent research has studied the use of l1/lq norm block-regularizations with q > 1 for such block-sparse structured problems, establishing strong guarantees on recovery even under high-dimensional scaling where the number of features scale with the number of observations. 
However, these papers also caution that the performance of such block-regularized methods are very dependent on the extent to which the features are shared across tasks. Indeed they show [8] that if the extent of overlap is less than a threshold, or even if parameter values in the shared features are highly uneven, then block l1/lq regularization could actually perform worse than simple separate elementwise l1 regularization. Since these caveats depend on the unknown true parameters, we might not know when and which method to apply. Even otherwise, we are far away from a realistic multi-task setting: not only do the set of relevant features have to be exactly the same across tasks, but their values have to as well. ::: ::: Here, we ask the question: can we leverage parameter overlap when it exists, but not pay a penalty when it does not? Indeed, this falls under a more general question of whether we can model such dirty data which may not fall into a single neat structural bracket (all block-sparse, or all low-rank and so on). With the explosion of such dirty high-dimensional data in modern settings, it is vital to develop tools - dirty models - to perform biased statistical estimation tailored to such data. Here, we take a first step, focusing on developing a dirty model for the multiple regression problem. Our method uses a very simple idea: we estimate a superposition of two sets of parameters and regularize them differently. We show both theoretically and empirically, our method strictly and noticeably outperforms both l1 or l1/lq methods, under high-dimensional scaling and over the entire range of possible overlaps (except at boundary cases, where we match the best method). --- paper_title: The sparsity and bias of the Lasso selection in high-dimensional linear regression paper_content: Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436--1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [(2006) J. Machine Learning Research 7 2541--2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the regression coefficients outside an ideal model are assumed to be small, but not necessarily zero. Under a sparse Riesz condition on the correlation of design variables, we prove that the LASSO selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias of the selected model. Moreover, as a consequence of this rate consistency of the LASSO in model selection, it is proved that the sum of error squares for the mean response and the $\ell_{\alpha}$-loss for the regression coefficients converge at the best possible rates under the given conditions. An interesting aspect of our results is that the logarithm of the number of variables can be of the same order as the sample size for certain random dependent designs. --- paper_title: Model selection and estimation in regression with grouped variables paper_content: Summary. 
We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods. --- paper_title: Linear Algorithms for Online Multitask Classification paper_content: Stainless steel articles such as support pins for color television tube shadow mask assemblies are rendered resistant to the formation of surface nodules promoting tube breakage by processing through a vacuum-firing heat treatment prior to use. --- paper_title: Deep multi-task learning with low level tasks supervised at lower layers paper_content: In all previous work on deep multi-task learning we are aware of, all task supervisions are on the same (outermost) layer. We present a multi-task learning architecture with deep bi-directional RNNs, where different tasks supervision can happen at different layers. We present experiments in syntactic chunking and CCG supertagging, coupled with the additional task of POS-tagging. We show that it is consistently better to have POS supervision at the innermost rather than the outermost layer. We argue that this is because “lowlevel” tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks. Finally, we also show how this architecture can be used for domain adaptation. --- paper_title: Learning Multiple Tasks with Kernel Methods paper_content: We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task. --- paper_title: Learning Multiple Tasks using Shared Hypotheses paper_content: In this work we consider a setting where we have a very large number of related tasks with few examples from each individual task. 
Rather than either learning each task individually (and having a large generalization error) or learning all the tasks together using a single hypothesis (and suffering a potentially large inherent error), we consider learning a small pool of shared hypotheses. Each task is then mapped to a single hypothesis in the pool (hard association). We derive VC dimension generalization bounds for our model, based on the number of tasks, shared hypothesis and the VC dimension of the hypotheses class. We conducted experiments with both synthetic problems and sentiment of reviews, which strongly support our approach. --- paper_title: Clustered Multi-Task Learning: A Convex Formulation paper_content: In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem. --- paper_title: Regularized multi--task learning paper_content: Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs. --- paper_title: Discovering Structure in Multiple Learning Tasks: The TC Algorithm paper_content: Recently, there has been an increased interest in “lifelong” machine learning methods, that transfer knowledge across multiple learning tasks. Such methods have repeatedly been found to outperform conventional, single-task learning algorithms when the learning tasks are appropriately related. To increase robustness of such approaches, methods are desirable that can reason about the relatedness of individual learning tasks, in order to avoid the danger arising from tasks that are unrelated and thus potentially misleading. This paper describes the task-clustering (TC) algorithm. TC clusters learning tasks into classes of mutually related tasks. 
When facing a new learning task, TC first determines the most related task cluster, then exploits information selectively from this task cluster only. An empirical study carried out in a mobile robot domain shows that TC outperforms its non-selective counterpart in situations where only a small number of tasks is relevant. --- paper_title: Learning to learn with the informative vector machine paper_content: This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MTIVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task. --- paper_title: The sparsity and bias of the Lasso selection in high-dimensional linear regression paper_content: Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436--1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [(2006) J. Machine Learning Research 7 2541--2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the regression coefficients outside an ideal model are assumed to be small, but not necessarily zero. Under a sparse Riesz condition on the correlation of design variables, we prove that the LASSO selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias of the selected model. Moreover, as a consequence of this rate consistency of the LASSO in model selection, it is proved that the sum of error squares for the mean response and the $\ell_{\alpha}$-loss for the regression coefficients converge at the best possible rates under the given conditions. An interesting aspect of our results is that the logarithm of the number of variables can be of the same order as the sample size for certain random dependent designs. --- paper_title: Bayesian Multitask Learning with Latent Hierarchies paper_content: We learn multiple hypotheses for related tasks under a latent hierarchical relationship between tasks. We exploit the intuition that for domain adaptation, we wish to share classifier structure, but for multitask learning, we wish to share covariance structure. Our hierarchical model is seen to subsume several previously proposed multitask learning models and performs well on three distinct real-world data sets. --- paper_title: Learning Task Grouping and Overlap in Multi-task Learning paper_content: In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. 
We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods. --- paper_title: Multi-task learning for classification with dirichlet process priors paper_content: Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP. --- paper_title: When is multitask learning effective? Semantic sequence prediction under varying data conditions paper_content: Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable. --- paper_title: Learning with Whom to Share in Multi-task Feature Learning paper_content: In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining "with whom" each task should share. 
We formulate the problem as a mixed integer program and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm monotonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in the literature. --- paper_title: Learning Gaussian processes from multiple tasks paper_content: We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems. --- paper_title: Taking Advantage of Sparsity in Multi-Task Learning paper_content: We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning by Argyriou et al. [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in Bickel et al. [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite. --- paper_title: Cross-Stitch Networks for Multi-task Learning paper_content: Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples. --- paper_title: Deep multi-task learning with low level tasks supervised at lower layers paper_content: In all previous work on deep multi-task learning we are aware of, all task supervisions are on the same (outermost) layer.
We present a multi-task learning architecture with deep bi-directional RNNs, where supervision for different tasks can happen at different layers. We present experiments in syntactic chunking and CCG supertagging, coupled with the additional task of POS-tagging. We show that it is consistently better to have POS supervision at the innermost rather than the outermost layer. We argue that this is because “low-level” tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks. Finally, we also show how this architecture can be used for domain adaptation. --- paper_title: Cross-Stitch Networks for Multi-task Learning paper_content: Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples. --- paper_title: Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics paper_content: Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. --- paper_title: A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks paper_content: Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce such a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. All layers include shortcut connections to both word representations and lower-level task predictions. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks.
Our single end-to-end trainable model obtains state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment. It also performs competitively on POS tagging. Our dependency parsing layer relies only on a single feed-forward pass and does not require a beam search. --- paper_title: Deep Multi-task Representation Learning: A Tensor Factorisation Approach paper_content: Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. --- paper_title: Multitask Learning: A Knowledge-Based Source of Inductive Bias paper_content: This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined. --- paper_title: A Bayesian/Information Theoretic Model of Learning to Learn via Multiple Task Sampling paper_content: A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. 
The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks. --- paper_title: Massively Multitask Networks for Drug Discovery paper_content: Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results: (1) massively multitask networks obtain predictive accuracies significantly better than single-task methods, (2) the predictive power of multitask networks improves as additional tasks and data are added, (3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and (4) multitask networks afford limited transferability to tasks not in the training set. Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process. --- paper_title: Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval paper_content: Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation. --- paper_title: Facial Landmark Detection by Deep Multi-task Learning paper_content: Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21]. 
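Several of the deep multi-task systems summarized above (the massively multitask drug-discovery networks, the multi-task DNN for classification and ranking, and the tasks-constrained facial-landmark model) share one architectural pattern: a trunk of shared layers feeding separate task-specific output heads. The sketch below illustrates that hard-parameter-sharing pattern in generic PyTorch; the layer sizes and task names are invented placeholders, not details taken from any of the cited papers.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared trunk with one output head per task (hard parameter sharing)."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Parameters in the trunk are shared by every task.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Each task keeps its own small head on top of the shared representation.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        shared = self.trunk(x)
        return {name: head(shared) for name, head in self.heads.items()}

# Example with two hypothetical tasks sharing one trunk.
model = HardSharingMTL(in_dim=128, hidden_dim=64,
                       task_out_dims={"landmarks": 10, "pose": 3})
outputs = model(torch.randn(4, 128))  # dict with one output tensor per task
```

In this arrangement the trunk receives gradient signal from every task, which is the sharing mechanism these papers rely on; only the small heads remain task-specific.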
--- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Deep Voice: Real-time Neural Text-to-Speech paper_content: We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations. --- paper_title: Fast R-CNN paper_content: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. --- paper_title: Unsupervised Domain Adaptation by Backpropagation paper_content: Top-performing deep architectures are trained on massive amounts of labeled data. 
In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). ::: As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. ::: Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets. --- paper_title: Open-Domain Name Error Detection using a Multi-Task RNN paper_content: Out-of-vocabulary name errors in speech recognition create significant problems for downstream language processing, but the fact that they are rare poses challenges for automatic detection, particularly in an open-domain scenario. To address this problem, a multi-task recurrent neural network language model for sentence-level name detection is proposed for use in combination with out-of-vocabulary word detection. The sentence-level model is also effective for leveraging external text data. Experiments show a 26% improvement in name-error detection F-score over a system using n-gram lexical features. --- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Promoting Poor Features to Supervisors: Some Inputs Work Better as Outputs paper_content: In supervised learning there is usually a clear distinction between inputs and outputs - inputs are what you will measure, outputs are what you will predict from those measurements. 
This paper shows that the distinction between inputs and outputs is not this simple. Some features are more useful as extra outputs than as inputs. By using a feature as an output we get more than just the case values but can learn a mapping from the other inputs to that feature. For many features this mapping may be more useful than the feature value itself. We present two regression problems and one classification problem where performance improves if features that could have been used as inputs are used as extra outputs instead. This result is surprising since a feature used as an output is not used during testing. --- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Open-Domain Name Error Detection using a Multi-Task RNN paper_content: Out-of-vocabulary name errors in speech recognition create significant problems for downstream language processing, but the fact that they are rare poses challenges for automatic detection, particularly in an open-domain scenario. To address this problem, a multi-task recurrent neural network language model for sentence-level name detection is proposed for use in combination with out-of-vocabulary word detection. The sentence-level model is also effective for leveraging external text data. Experiments show a 26% improvement in name-error detection F-score over a system using n-gram lexical features. --- paper_title: Semi-supervised Multitask Learning for Sequence Labeling paper_content: We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data. 
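The entries above on turning extra input features into additional outputs and on attaching a language-modeling objective to a sequence labeler both come down, in their simplest form, to optimizing a main loss plus a down-weighted auxiliary loss computed from a shared encoder. The following is a rough sketch of that pattern; the GRU encoder, label sizes, and the 0.1 weight are illustrative assumptions rather than settings from the cited work.

```python
import torch
import torch.nn as nn

# Illustrative dimensions and names only; not taken from the cited papers.
encoder = nn.GRU(input_size=50, hidden_size=64, batch_first=True)
main_head = nn.Linear(64, 12)   # e.g. sequence labels for the main task
aux_head = nn.Linear(64, 100)   # e.g. an auxiliary prediction per token

ce = nn.CrossEntropyLoss()
params = list(encoder.parameters()) + list(main_head.parameters()) + list(aux_head.parameters())
optim = torch.optim.Adam(params, lr=1e-3)

def training_step(tokens, main_labels, aux_labels, aux_weight=0.1):
    """One step on the joint objective: main loss plus a down-weighted auxiliary loss.

    tokens: (batch, seq, 50) float; main_labels, aux_labels: (batch, seq) long.
    """
    hidden, _ = encoder(tokens)                       # (batch, seq, 64), shared by both heads
    main_logits = main_head(hidden)                   # (batch, seq, 12)
    aux_logits = aux_head(hidden)                     # (batch, seq, 100)
    loss = ce(main_logits.flatten(0, 1), main_labels.flatten()) \
         + aux_weight * ce(aux_logits.flatten(0, 1), aux_labels.flatten())
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```

The auxiliary head is discarded at test time; its only role during training is to shape the shared representation.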
--- paper_title: Multitask Learning paper_content: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. --- paper_title: Exploiting Task Relatedness for Multiple Task Learning paper_content: The approach of learning of multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underline these task. --- paper_title: Multi-task learning for classification with dirichlet process priors paper_content: Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP. 
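The task-clustering line of work above (the TC algorithm, clustered multi-task learning, and the Dirichlet-process formulation) shares a common intuition: group tasks whose models look alike and share information only within a group. As a toy illustration of that intuition, rather than of any of those algorithms, the snippet below clusters synthetic per-task weight vectors with k-means; the data and cluster count are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-task parameter vectors (e.g. logistic-regression weights),
# one row per task; in the papers above these would be learned jointly.
rng = np.random.default_rng(0)
task_weights = np.vstack([
    rng.normal(loc=+1.0, scale=0.1, size=(5, 20)),   # five tasks near one prototype
    rng.normal(loc=-1.0, scale=0.1, size=(5, 20)),   # five tasks near another
])

# A crude stand-in for task clustering: group tasks whose weight vectors are close,
# then (in a full system) share statistics only within each group.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(task_weights)
print(groups)  # e.g. [0 0 0 0 0 1 1 1 1 1]
```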
--- paper_title: When is multitask learning effective? Semantic sequence prediction under varying data conditions paper_content: Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable. ---
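For the uncertainty-based loss weighting summarized earlier in this reference list ("Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics"), the combined objective can be written roughly as a sum over tasks of exp(-s_i) * L_i + s_i, with a learned log-variance s_i per task. A minimal sketch of that idea follows; it simplifies the paper's exact treatment of regression versus classification losses, and the usage names are invented.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine task losses as sum_i exp(-s_i) * L_i + s_i, with learnable s_i = log(sigma_i^2).

    A simplified sketch of the homoscedastic-uncertainty weighting idea; the cited
    paper uses slightly different constants for regression and classification terms.
    """

    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])   # 1 / sigma_i^2
            total = total + precision * loss + self.log_vars[i]
        return total

# Hypothetical usage: weighted = UncertaintyWeighting(2)([loss_depth, loss_segmentation])
```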
Title: An Overview of Multi-Task Learning in Deep Neural Networks
Section 1: Introduction
Description 1: Introduce the concept of Multi-Task Learning (MTL), its benefits, and its application across various domains. Provide an overview of the paper.
Section 2: Motivation
Description 2: Describe various motivations for using MTL, including biological, pedagogical, pop culture references, and perspectives from machine learning.
Section 3: Two MTL methods for Deep Learning
Description 3: Discuss the two most commonly used methods for MTL in deep learning: hard parameter sharing and soft parameter sharing.
Section 4: Why does MTL work?
Description 4: Explain the underlying mechanisms that make MTL effective, including implicit data augmentation, attention focusing, eavesdropping, representation bias, and regularization.
Section 5: MTL in non-neural models
Description 5: Review the literature on MTL for linear models, kernel methods, and Bayesian algorithms, focusing on enforcing sparsity across tasks and modeling the relationships between tasks.
Section 6: Recent work on MTL for Deep Learning
Description 6: Discuss recent advances and innovations in MTL for deep learning, including deep relationship networks, fully-adaptive feature sharing, cross-stitch networks, and other novel approaches.
Section 7: Auxiliary tasks
Description 7: Explore what makes a good auxiliary task for MTL, various types of auxiliary tasks, and how they can be used to improve performance on the main task.
Section 8: Conclusion
Description 8: Summarize the findings, discuss the pervasive use of hard parameter sharing, and highlight the importance of understanding task similarity and relationship for the generalization capabilities of MTL with deep neural networks.
Virtual Reality and Augmented Reality in Plastic Surgery: A Review
7
--- paper_title: Virtual reality in surgery and medicine. paper_content: This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time. Surgical robots are likely to be deployed for specific tasks in the operating room (OR) and to support telesurgery applications. Technical developments in robotics and motion control are key components of many virtual reality systems. Since almost all of the virtual reality and enhanced reality systems will be digitally based, they are also capable of being put "on-line" for tele-training, consulting, and even surgery. Advancements in virtual and enhanced reality systems will be driven in part by consumer applications of this technology. Many of the companies that will supply systems for medical applications are also working on commercial products. 
A big consumer hit can benefit the entire industry by increasing volumes and bringing down costs. (ABSTRACT TRUNCATED AT 400 WORDS) --- paper_title: Overview: Virtual Reality in Medicine paper_content: Background: Virtual Reality (VR) was defined as a collection of technological devices: “a computer capable of interactive 3D visualization, a head-mounted display and data gloves equipped with one or more position trackers”. Today, many scientists define VR as a simulation of the real world based on computer graphics, a three-dimensional world in which communities of real people interact, create content, items and services, producing real economic value through e-Commerce. Objective: To report the results of a systematic review of articles and reviews published about the theme: “Virtual Reality in Medicine”. Methods: We used the search query strings “Virtual Reality”, “Metaverse”, “Second Life”, “Virtual World”, and “Virtual Life” in order to find out how many articles were written about these themes. For the “Meta-review” we used only “Virtual Reality” AND “Review”. We searched the following databases: Psycinfo, Journal of Medical Internet Research, and Isiknowledge until September 2011, and Pubmed until February 2012. We included any source published in either print format or on the Internet, available in all languages, and containing texts that define or attempt to define VR in explicit terms. Results: We retrieved 3,443 articles on Pubmed in 2012 and 8,237 on Isiknowledge in 2011. This large number of articles covered a wide range of themes, but showed no clear consensus about VR. We identified 4 general uses of VR in Medicine, and searched for the existing reviews about them. We found 364 reviews in 2011, although only 197 were pertinent to our aims: 1. Communication Interface (11 Reviews); 2. Medical Education (49 Reviews); 3. Surgical Simulation (49 Reviews); and 4. Psychotherapy (88 Reviews). Conclusion: We found a large number of articles, but no clear consensus about the meaning of the term VR in Medicine. We found numerous articles published on these topics and many of them have been reviewed. We decided to group these reviews into 4 areas in order to provide a systematic overview of the subject matter, and to enable those interested to learn more about these particular topics. --- paper_title: Mixed-reality simulation for orthognathic surgery paper_content: Background: A mandibular motion tracking system (ManMoS) has been developed for orthognathic surgery. This article aimed to introduce the ManMoS and to examine the accuracy of this system. Methods: Skeletal and dental models are reconstructed in a virtual space from the DICOM data of three-dimensional computed tomography (3D-CT) recording and the STL data of 3D scanning, respectively. The ManMoS uniquely integrates the virtual dento-skeletal model with the real motion of the dental cast mounted on the simulator, using the reference splint.
Positional change of the dental cast is tracked using 3D motion tracking equipment and is reflected in the jaw position of the virtual model in real time, generating the mixed-reality surgical simulation. ManMoS was applied to two clinical cases with facial asymmetry. In order to assess the accuracy of the ManMoS, the positional change of the lower dental arch was compared between the virtual and real models. Results: With the measurement data of the real lower dental cast as a reference, measurement error for the whole simulation system was less than 0.32 mm. In ManMoS, the skeletal and dental asymmetries were adequately diagnosed in three dimensions. Jaw repositioning was simulated with priority given to the skeletal correction rather than the occlusal correction. In two cases, facial asymmetry was successfully improved while a normal occlusal relationship was reconstructed. Positional change measured in the virtual model did not differ significantly from that in the real model. Conclusions: It was suggested that the accuracy of the ManMoS was good enough for clinical use. This surgical simulation system appears to meet clinical demands well and is an important facilitator of communication between orthodontists and surgeons. --- paper_title: An Augmented Reality Haptic Training Simulator for Spinal Needle Procedures paper_content: This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system. --- paper_title: Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-Time Haptic Feedback paper_content: Background ::: With the decrease in the number of cerebral aneurysms treated surgically and the increase in complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. --- paper_title: Computer based system for simulating spine surgery paper_content: A computer-based system described herein provides preoperative simulations to verify burring and morphological operations in spine surgery. An efficient bur-tissue intersection computation method is used to simulate tissue burring by diverse shapes of burs for various cutting procedures in spine surgery. A volume manipulation is then used to calculate burred changes on volume data, provide high-quality burred tissue surfaces and check if any structure separates from the spine. Morphological simulations such as repositions, deletions and fusions can be implemented on the separate structures for spine decompression and morphology correction. The coated surface or flutes on a bur are pre-divided into small elements in which a tissue removal force on each element can be quickly summed to obtain the burring force by predetermined transformation matrices.
Examples of typical spine surgery modalities illustrate the effectiveness of the proposed methods and simulator. --- paper_title: Real-Time Mandibular Angle Reduction Surgical Simulation With Haptic Rendering paper_content: Mandibular angle reduction is a popular and efficient procedure widely used to alter the facial contour. The primary surgical instruments, the reciprocating saw, and the round burr, employed in the surgery have a common feature: operating at a high speed. Generally, inexperienced surgeons need a long-time practice to learn how to minimize the risks caused by the uncontrolled contacts and cutting motions in manipulation of instruments with high-speed reciprocation or rotation. A virtual reality-based surgical simulator for the mandibular angle reduction was designed and implemented on a compute unified device architecture (CUDA)-based platform in this paper. High-fidelity visual and haptic feedbacks are provided to enhance the perception in a realistic virtual surgical environment. The impulse-based haptic models were employed to simulate the contact forces and torques on the instruments. It provides convincing haptic sensation for surgeons to control the instruments under different reciprocation or rotation velocities. The real-time methods for bone removal and reconstruction during surgical procedures have been proposed to support realistic visual feedbacks. The simulated contact forces were verified by comparing against the actual force data measured through the constructed mechanical platform. An empirical study based on the patient-specific data was conducted to evaluate the ability of the proposed system in training surgeons with various experiences. The results confirm the validity of our simulator. --- paper_title: A Virtual Reality Based Simulation Environment for Orthopedic Surgery paper_content: Virtual reality simulators in medicine are becoming more widely used to train residents and interns. In this paper, we discuss the design of a Virtual Surgical Environment VSE for performing a class of orthopedic surgeries. The VSE focuses on providing training to residents in LISS plating based surgeries for addressing fractures of the femur. Using such virtual environments enables medical residents to develop a better understanding of the various surgical steps along with enabling them to practice specific surgical steps involving surgical instruments as well as other components. A brief discussion of the network based implementation is also provided; such an approach enables multiple users from different geographical locations to interact with an expert surgeon at another location. This network based approach was demonstrated as part of Next Generation Internet activities related to the GENI and US Ignite projects. --- paper_title: Augmented reality patient-specific reconstruction plate design for pelvic and acetabular fracture surgery paper_content: Purpose ::: The objective of this work is to develop a preoperative reconstruction plate design system for unilateral pelvic and acetabular fracture reduction and internal fixation surgery, using computer graphics and augmented reality (AR) techniques, in order to respect the patient-specific morphology and to reduce surgical invasiveness, as well as to simplify the surgical procedure. 
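The haptic simulators described above must turn tool-bone contact into feedback forces at kilohertz rates. The mandibular-angle simulator uses impulse-based models; a simpler and very common alternative, shown below only as an illustration of the general idea rather than of that paper's method, is a penalty (spring-damper) force proportional to penetration depth. All constants and geometry here are invented.

```python
import numpy as np

def contact_force(tool_pos, surface_point, surface_normal, tool_vel,
                  stiffness=800.0, damping=2.0):
    """Penalty-style contact force for haptic rendering (a generic textbook model).

    If the tool tip has penetrated the surface, push it back along the surface
    normal with a spring term proportional to penetration depth plus a damping
    term on the normal velocity; otherwise return zero force.
    """
    penetration = np.dot(surface_point - tool_pos, surface_normal)
    if penetration <= 0.0:                     # no contact
        return np.zeros(3)
    normal_vel = np.dot(tool_vel, surface_normal)
    return (stiffness * penetration - damping * normal_vel) * surface_normal

# Example with made-up units (N, m, m/s): tool 1 mm below a flat surface at z = 0.
f = contact_force(tool_pos=np.array([0.0, 0.0, -0.001]),
                  surface_point=np.zeros(3),
                  surface_normal=np.array([0.0, 0.0, 1.0]),
                  tool_vel=np.array([0.0, 0.0, -0.05]))
print(f)  # small restoring force along +z
```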
--- paper_title: Haptic computer-assisted patient-specific preoperative planning for orthopedic fractures surgery paper_content: PURPOSE ::: The aim of orthopedic trauma surgery is to restore the anatomy and function of displaced bone fragments to support osteosynthesis. For complex cases, including pelvic bone and multi-fragment femoral neck and distal radius fractures, preoperative planning with a CT scan is indicated. The planning consists of (1) fracture reduction-determining the locations and anatomical sites of origin of the fractured bone fragments and (2) fracture fixation-selecting and placing fixation screws and plates. The current bone fragment manipulation, hardware selection, and positioning processes based on 2D slices and a computer mouse are time-consuming and require a technician. ::: ::: ::: METHODS ::: We present a novel 3D haptic-based system for patient-specific preoperative planning of orthopedic fracture surgery based on CT scans. The system provides the surgeon with an interactive, intuitive, and comprehensive, planning tool that supports fracture reduction and fixation. Its unique features include: (1) two-hand haptic manipulation of 3D bone fragments and fixation hardware models; (2) 3D stereoscopic visualization and multiple viewing modes; (3) ligaments and pivot motion constraints to facilitate fracture reduction; (4) semiautomatic and automatic fracture reduction modes; and (5) interactive custom fixation plate creation to fit the bone morphology. ::: ::: ::: RESULTS ::: We evaluate our system with two experimental studies: (1) accuracy and repeatability of manual fracture reduction and (2) accuracy of our automatic virtual bone fracture reduction method. The surgeons achieved a mean accuracy of less than 1 mm for the manual reduction and 1.8 mm (std [Formula: see text] 1.1 mm) for the automatic reduction. ::: ::: ::: CONCLUSION ::: 3D haptic-based patient-specific preoperative planning of orthopedic fracture surgery from CT scans is useful and accurate and may have significant advantages for evaluating and planning complex fractures surgery. --- paper_title: Mandible Reconstruction with 3D Virtual Planning paper_content: The fibula free flap has now become the most reliable and frequently used option for mandible reconstruction. Recently, three dimensional images and printing technologies are applied to mandibular reconstruction. We introduce our recent experience of mandibular reconstruction using three dimensionally planned fibula free flap in a patient with gunshot injury. The defect was virtually reconstructed with three-dimensional image. Because bone fragments are dislocated from original position, relocation was necessary. Fragments are virtually relocated to original position using mirror image of unaffected right side of the mandible. A medical rapid prototyping (MRP) model and cutting guide was made with 3D printer. Titanium reconstruction plate was adapted to the MRP model manually. 7 cm-sized fibula bone flap was designed on left lower leg. After dissection, proximal and distal margin of fibula flap was osteotomized by using three dimensional cutting guide. Segmentation was also done as planned. The fibula bone flap was attached to the inner side of the prebent reconstruction plate and fixed with screws. Postoperative evaluation was done by comparison between preoperative planning and surgical outcome. Although dislocated condyle is still not in ideal position, we can see that reconstruction was done as planned. 
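The fracture-planning study above reports reduction accuracy as a mean landmark error of about a millimetre. A generic way to compute such a figure, assuming matched landmark sets on the planned and achieved fragment positions, is to find the best-fit rigid alignment (the Kabsch solution) and report the remaining RMS distance. The sketch below is that generic computation, not the paper's own evaluation code, and the toy data are invented.

```python
import numpy as np

def rigid_align_rmse(moving, target):
    """Best-fit rigid alignment (Kabsch) of two matched landmark sets, then RMS error.

    moving, target: (N, 3) arrays of corresponding landmark coordinates (e.g. in mm).
    """
    mc, tc = moving.mean(axis=0), target.mean(axis=0)
    A, B = moving - mc, target - tc                       # centered point sets
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T                                    # optimal rotation
    aligned = (R @ A.T).T + tc
    return float(np.sqrt(np.mean(np.sum((aligned - target) ** 2, axis=1))))

# Toy check: a rotated-and-translated copy of the same landmarks aligns with ~0 error.
pts = np.random.rand(10, 3) * 50.0                        # landmark coordinates in mm
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(rigid_align_rmse((pts @ Rz.T) + np.array([5.0, -3.0, 2.0]), pts))
```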
--- paper_title: High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery paper_content: Medical imaging techniques provide a wealth of information for surgical preparation, but it is still often the case that surgeons are examining three-dimensional pre-operative image data as a series of two-dimensional images. With recent advances in visual computing and interactive technologies, there is much opportunity to provide surgeons with the ability to actively manipulate and interpret digital image data in a surgically meaningful way. This article describes the design and initial evaluation of a virtual surgical environment that supports patient-specific simulation of temporal bone surgery using pre-operative medical image data. Computational methods are presented that enable six degree-of-freedom haptic feedback during manipulation, and that simulate virtual dissection according to the mechanical principles of orthogonal cutting and abrasive wear. A highly efficient direct volume renderer simultaneously provides high-fidelity visual feedback during surgical manipulation of the virtual anatomy. --- paper_title: Collaborative virtual environments for orthopedic surgery paper_content: Virtual reality simulators can be beneficial in increasing the quality of training while decreasing the time needed for training a specific skill. These simulators can provide a risk-free environment which has the benefit of repetition and correction of the user's errors. It has been years since the introduction of these simulators into medical applications. The application of VR in orthopedic surgery, however, is a relatively new area of interest. In this paper, collaborative virtual reality based environments with application in orthopedic surgery, with a focus on LISS (Less Invasive Stabilization System) plating, are presented. They can be used for training medical residents in orthopedic surgery. --- paper_title: Percutaneous spinal fixation simulation with virtual reality and haptics. paper_content: BACKGROUND ::: In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. ::: ::: ::: OBJECTIVE ::: To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. ::: ::: ::: METHODS ::: Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. ::: ::: ::: RESULTS ::: Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session.
::: ::: ::: CONCLUSION ::: The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool. --- paper_title: Virtual surgical system in reduction of maxillary fracture paper_content: Reduction of maxillary fracture is widely performed in cranial-maxillofacial surgeries, but it requires skilled and experienced surgeons. A virtual surgery system aimed at this kind of surgery is designed. CHAI 3D is used for rendering the haptic feedback. A multi-proxy algorithm is proposed to prevent the handle of the operation tool from stabbing into the virtual models and causing misjudgment. With the Geomagic haptic device, operators can manipulate one virtual model in the 3D virtual environment. This system can be used to train medical students or for preoperative planning of complicated surgeries. --- paper_title: A haptics-assisted cranio-maxillofacial surgery planning system for restoring skeletal anatomy in complex trauma cases paper_content: Purpose: Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious trauma to the face can be both complex and time-consuming. But it is generally accepted that careful pre-operative planning leads to a better outcome with a higher degree of function and reduced morbidity in addition to reduced time in the operating room. However, today’s surgery planning systems are primitive, relying mostly on the user’s ability to plan complex tasks with a two-dimensional graphical interface. Methods: A system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptics system provides intuitive haptic feedback when bone fragments are in contact as well as six degrees-of-freedom attraction forces for precise bone fragment alignment. Results: A senior surgeon without prior experience of the system received 45 min of system training. Following the training session, he completed a virtual reconstruction in 22 min of a complex mandibular fracture with an adequately reduced result. Conclusion: Preliminary testing with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time. --- paper_title: Virtual Reality Facial Contouring Surgery Simulator Based on CT Transversal Slices paper_content: In facial contouring surgery, surgeons operate to change facial bone morphology and obtain an esthetic feminine face. The facial appearance after surgery needs to be evaluated; therefore, simulations for facial bone and soft tissue procedures are both required. This study develops a soft tissue manipulation method based on volume data to simulate face peeling and healing.
Inte- grating with our reported bone surgical simulation methods, the proposed simulator can provide high quality illustrations for the simulations of various modalities of facial contouring surgery. --- paper_title: A surgical simulator for planning and performing repair of cleft lips. paper_content: UNLABELLED ::: The objective of this project was to develop a computer-based surgical simulation system for planning and performing cleft lip repair. This system allows the user to interact with a virtual patient to perform the traditional steps of cleft-lip repair (rotation-advancement technique). ::: ::: ::: MATERIALS AND METHODS ::: The system interfaces to force-feedback (haptic) devices to track the user's motion and provide feedback during the procedure, while performing real-time soft-tissue simulation. An 11-day-old unilateral cleft lip, alveolus and palate patient was previously CT scanned for ancillary diagnostic purposes using standard imaging protocols and 1mm slices. High-resolution 3D meshes were automatically generated from this data using the ROVE software developed in-house. The resulting 3D meshes of bone and soft tissue were instilled with physical properties of soft tissues for purposes of simulation. Once these preprocessing steps were completed, the patient's bone and soft tissue data are presented on the computer screen in stereo and the user can freely view, rotate, and otherwise interact with the patient's data in real time. The user is prompted to select anatomical landmarks on the patient's data for preoperative planning purposes, then their locations are compared against that of a 'gold standard' and a score, derived from their deviation from that standard and time required, is generated. The user can then move a haptic stylus and guide the motion of the virtual cutting tool. The soft tissues can thus be incised using this virtual cutting tool, moved using virtual forceps, and fused in order to perform any of the major procedures for cleft lip repair. Real-time soft tissue deformation of the mesh realistically simulates normal tissues and haptic-rate (>1 kHz) force-feedback is provided. The surgical result of the procedure can then be immediately visualized and the entire training process can be repeated at will. A short evaluation study was also performed. Two groups (non-medical and plastic surgery residents) of six persons each performed the anatomical marking task of the simulator four times. ::: ::: ::: RESULTS ::: Results showed that the plastic surgery residents scored consistently better than the persons without medical background. Every person's score increased with practice, and the length of time needed to complete the 11 markings decreased. The data was compiled and showed which specific markers consistently took users the longest to identify as well as which locations were hardest to accurately mark. ::: ::: ::: CONCLUSION ::: These findings suggest that the simulator is a valuable training tool, giving residents a way to practice anatomical identification for cleft lip surgery without the risks associated with training on a live patient. Educators can also use the simulator to examine which markers are consistently problematic, and modify their training to address these needs. --- paper_title: Haptics-assisted Virtual Planning of Bone, Soft Tissue, and Vessels in Fibula Osteocutaneous Free Flaps paper_content: Background: Virtual surgery planning has proven useful for reconstructing head and neck defects by fibula osteocutaneous free flaps (FOFF). 
Benefits include improved healing, function, and aestheti ... --- paper_title: Application of an augmented reality tool for maxillary positioning in orthognathic surgery - a feasibility study. paper_content: BACKGROUND ::: An augmented reality tool for computer assisted surgery named X-Scope allows visual tracking of real anatomical structures in superposition with volume rendered CT or MRI scans and thus can be used for navigated translocation of bony segments. ::: ::: ::: METHODS ::: In a feasibility study X-Scope was used in orthognathic surgery to control the translocation of the maxilla after Le Fort I osteotomy within a bimaxillary procedure. The situation achieved was compared with the pre-operative situation by means of cephalometric analysis on lateral and frontal cephalograms. ::: ::: ::: RESULTS ::: The technique was successfully utilized in 5 patients. Maxillary positioning using X-Scope was accomplished accurately within a range of 1mm. The tool was used in all cases in addition to the usual intra-operative splints. A stand-alone application without conventional control does not yet seem reasonable. ::: ::: ::: CONCLUSION ::: Augmented reality tools like X-Scope may be helpful for controlling maxillary translocation in orthognathic surgery. The application to other interventions in cranio-maxillofacial surgery such as Le Fort III osteotomy, fronto-orbital advancement, and cranial vault reshaping or repair may also be considered. --- paper_title: Video see‐through augmented reality for oral and maxillofacial surgery paper_content: BACKGROUND ::: Oral and maxillofacial surgery has not been benefitting from image guidance techniques owing to the limitations in image registration. ::: ::: ::: METHODS ::: A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. ::: ::: ::: RESULTS ::: The proposed system was evaluated on mandible/maxilla phantoms, a volunteer and clinical data. Experimental results show that the target overlay error is about 1 mm, and the frame rate of registration update yields 3-5 frames per second with a 4 K camera. ::: ::: ::: CONCLUSIONS ::: The significance of this work lies in its simplicity in clinical setting and the seamless integration into the current medical procedure with satisfactory response time and overlay accuracy. Copyright © 2016 John Wiley & Sons, Ltd. --- paper_title: Mandibular angle split osteotomy based on a novel augmented reality navigation using specialized robot-assisted arms--A feasibility study. paper_content: PURPOSE ::: Augmented reality (AR) navigation, is a visible 3-dimensional display technology, that, when combined with robot-assisted surgery (RAS), allows precision and automation in operational procedures. In this study, we used an innovative, minimally invasive, simplified operative method to position the landmarks and specialized robot-assisted arms to apply in a rapid protyping (RP) model. This is the first report of the use of AR and RAS technology in craniomaxillofacial surgery. ::: ::: ::: METHOD ::: Five patients with prominent mandibular angle were randomly chosen for this feasibility study. 
We reconstructed the mandibular modules and created preoperational plans as semi-embedded and nail-fixation modules for an easy registration procedure. The left side of the mandibular modules comprised the experimental groups with use of a robot, and the right sides comprised the control groups without a robot. With AR Toolkits program tracking and display system applied, we carried out the operative plans and measured the error. ::: ::: ::: RESULTS ::: Both groups were successfully treated in this study, but the RAS was more accurate and stable. The average position and angle were significant (p < 0.01) between the 2 groups. ::: ::: ::: CONCLUSIONS ::: This study reports a novel augmented reality navigation with specialized robot-assisted arms for mandibular angle split osteotomy. AR and RAS can be helpful for patients undergoing craniomaxillofacial surgery. --- paper_title: Augmented reality as an aid in maxillofacial surgery: validation of a wearable system allowing maxillary repositioning. paper_content: AIM ::: We present a newly designed, localiser-free, head-mounted system featuring augmented reality as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation. ::: ::: ::: METHODS ::: Our head-mounted wearable system facilitating augmented surgery was developed as a stand-alone, video-based, see-through device in which the visual features were adapted to facilitate maxillofacial bone surgery. We implement a strategy designed to present augmented reality information to the operating surgeon. LeFort1 osteotomy was chosen as the test procedure. The system is designed to exhibit virtual planning overlaying the details of a real patient. We implemented a method allowing performance of waferless, augmented-reality assisted bone repositioning. In vitro testing was conducted on a physical replica of a human skull, and the augmented reality system was used to perform LeFort1 maxillary repositioning. Surgical accuracy was measured with the aid of an optical navigation system that recorded the coordinates of three reference points (located in anterior, posterior right, and posterior left positions) on the repositioned maxilla. The outcomes were compared with those expected to be achievable in a three-dimensional environment. Data were derived using three levels of surgical planning, of increasing complexity, and for nine different operators with varying levels of surgical skill. ::: ::: ::: RESULTS ::: The mean error was 1.70 ± 0.51 mm. The axial errors were 0.89 ± 0.54 mm on the sagittal axis, 0.60 ± 0.20 mm on the frontal axis, and 1.06 ± 0.40 mm on the craniocaudal axis. The simplest plan was associated with a slightly lower mean error (1.58 ± 0.37 mm) compared with the more complex plans (medium: 1.82 ± 0.71 mm; difficult: 1.70 ± 0.45 mm). The mean error for the anterior reference point was lower (1.33 ± 0.58 mm) than those for both the posterior right (1.72 ± 0.24 mm) and posterior left points (2.05 ± 0.47 mm). No significant difference in terms of error was noticed among operators, despite variations in surgical experience. Feedback from surgeons was acceptable; all tests were completed within 15 min and the tool was considered to be both comfortable and usable in practice. ::: ::: ::: CONCLUSION ::: We used a new localiser-free, head-mounted, wearable, stereoscopic, video see-through display to develop a useful strategy affording surgeons access to augmented reality information. 
Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, our positive results suggest that it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions. --- paper_title: Computer-assisted orthognathic surgery: waferless maxillary positioning, versatility, and accuracy of an image-guided visualisation display. paper_content: There may well be a shift towards 3-dimensional orthognathic surgery when virtual surgical planning can be applied clinically. We present a computer-assisted protocol that uses surgical navigation supplemented by an interactive image-guided visualisation display (IGVD) to transfer virtual maxillary planning precisely. The aim of this study was to analyse its accuracy and versatility in vivo. The protocol consists of maxillofacial imaging, diagnosis, planning of virtual treatment, and intraoperative surgical transfer using an IGV display. The advantage of the interactive IGV display is that the virtually planned maxilla and its real position can be completely superimposed during operation through a video graphics array (VGA) camera, thereby augmenting the surgeon's 3-dimensional perception. Sixteen adult class III patients were treated with by bimaxillary osteotomy. Seven hard tissue variables were chosen to compare (ΔT1-T0) the virtual maxillary planning (T0) with the postoperative result (T1) using 3-dimensional cephalometry. Clinically acceptable precision for the surgical planning transfer of the maxilla (<0.35 mm) was seen in the anteroposterior and mediolateral angles, and in relation to the skull base (<0.35°), and marginal precision was seen in the orthogonal dimension (<0.64 mm). An interactive IGV display complemented surgical navigation, augmented virtual and real-time reality, and provided a precise technique of waferless stereotactic maxillary positioning, which may offer an alternative approach to the use of arbitrary splints and 2-dimensional orthognathic planning. --- paper_title: An effective visualization technique for depth perception in augmented reality-based surgical navigation. paper_content: BACKGROUND ::: Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. ::: ::: ::: METHODS ::: To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. ::: ::: ::: RESULTS ::: Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. ::: ::: ::: CONCLUSIONS ::: We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. --- paper_title: A virtual training system for maxillofacial surgery using advanced haptic feedback and immersive workbench. 
paper_content: Background ::: ::: VR-based surgery simulation provides a cost-effective and efficient method to train novices. In this study, a virtual training system for maxillofacial surgery (VR-MFS), which aims mainly at the simulation of operations on mandible and maxilla, was developed and demonstrated. ::: ::: ::: ::: Methods ::: ::: The virtual models of the anatomic structures were reconstructed from CT data, and the virtual instruments were built from laser scanning data using reverse engineering technology. For collision detection, axis aligned bounding boxes (AABBs) were constructed for the anatomic models. Then, the simulation algorithms were developed, and the haptic force feedback was consequently calculated based on regression equations. Finally, the vivid 3D stereo effect was implemented with the use of an immersive workbench. ::: ::: ::: ::: Results ::: ::: A virtual training system for maxillofacial surgery was developed; in particular, the application for Le-Fort I osteotomy was implemented. The tactile, visual and aural effects were highly integrated, making the virtual surgical environment vivid and realistic. ::: ::: ::: ::: Conclusions ::: ::: The VR-MFS provides an effective approach in terms of helping novices to become familiar with maxillofacial surgery procedures. The same method can also be applied to other bone simulations. Copyright © 2013 John Wiley & Sons, Ltd. --- paper_title: Impulse-Based Rendering Methods for Haptic Simulation of Bone-Burring. paper_content: Bone-burring is a common procedure in orthopedic, dental, and otologic surgeries. Virtual reality (VR)-based surgical simulations with both visual and haptic feedbacks provide novice surgeons with a feasible and safe way to practice their burring skill. However, creating realistic haptic interactions between a high-speed rotary burr and stiff bone is a challenging task. In this paper, we propose a novel interactive haptic bone-burring model based on impulse-based dynamics to simulate the contact forces, including resistant and frictional forces. In order to mimic the lateral and axial burring vibration forces, a 3D vibration model has been developed. A prototype haptic simulation system for the bone-burring procedure has been implemented to evaluate the proposed haptic rendering methods. Several experiments of force evaluations and task-oriented tests were conducted on the prototype system. The results demonstrate the validity and feasibility of the proposed methods. --- paper_title: A virtual reality simulator for orthopedic basic skills: A design and validation study paper_content: Orthopedic drilling as a skill demands high levels of dexterity and expertise from the surgeon. It is a basic skill that is required in many orthopedic procedures. Inefficient drilling can be a source of avoidable medical errors that may lead to adverse events. It is hence important to train and evaluate residents in safe environments for this skill. This paper presents a virtual orthopedic drilling simulator that was designed to provide visiohaptic interaction with virtual bones. The simulation provides a realistic basic training environment for orthopedic surgeons. It contains modules to track and analyze movements of surgeons, in order to determine their surgical proficiency. The simulator was tested with senior surgeons, residents and medical students for validation purposes. 
Through the multi-tiered testing strategy it was shown that the simulator was able to produce a learning effect that transfers to real-world drilling. Further, objective measures of surgical performance were found to be able to differentiate between experts and novices. --- paper_title: Development and validation of a surgical training simulator with haptic feedback for learning bone-sawing skill paper_content: We developed a visuo-haptic surgical training simulator for the bone-sawing procedure. We validated the simulator's face validity and construct validity. This simulator was able to produce the effect of learning bone-sawing skill. This simulator could provide a training alternative for novices. Objective: Bone sawing or cutting is widely used for bone removal processes in bone surgery. It is an essential skill that surgeons should execute with a high level of experience and sensitive force perception. Surgical training simulators, with virtual and haptic feedback functions, can offer a safe, repeatable and cost-effective alternative to traditional surgeries. In this research, we developed a surgical training simulator with virtual and haptic force feedback for maxillofacial surgery, and we validated the effects on the learning of bone-sawing skills through empirical evaluation. Methods: Omega.6 from Force Dimension was employed as the haptic device, and Display300 from SenseGraphics was used as the 3D stereo display. The voxel-based model was constructed using computed tomography (CT) images, and the virtual tools were built through reverse engineering. The multi-point collision detection method was applied for haptic rendering to test the 3D relationship between the virtual tool and the bone voxels. Bone-sawing procedures in maxillofacial surgery were simulated with a virtual environment and real-time haptic feedback. A total of 25 participants (16 novices and 9 experienced surgeons) were included in 2 groups to perform the bone-sawing simulation for assessing the construct validity. Each of the participants completed the same bone-sawing procedure at the predefined maxillary region six times. For each trial, the sawing operative time, the maximal acceleration, and the percentage of the haptic force exceeding the threshold were recorded and analysed to evaluate the validity. After six trials, all of the participants scored the simulator in terms of safe force learning, stable hand control and overall performance to confirm the face validity. Moreover, 10 novices in 2 groups identified the transfer validity on rapid prototype skull models by comparing the operative time and the maximal acceleration. Results: The analysed results of construct validity showed that the two groups significantly reduced their sawing operative times after six trials. Regarding maximal acceleration, the curve significantly descended and reached a plateau after the fifth repetition (novices) or third repetition (surgeons). Regarding safe haptic force, the novices obviously reduced the percentage of the haptic force exceeding the threshold, with statistical significance after four trials, but the surgeons did not show a significant difference. Moreover, the subjectively scored results demonstrated that the proposed simulator was more helpful for the novices than for the experienced surgeons, with scores of 8.31 and 7.22, respectively, for their overall performance.
The experimental results on skill transference showed that the experimental group performed the bone-sawing operation with lower maximal acceleration than the control group, with a significant difference (p<0.05). These findings suggested that the simulator training had positive effects on real sawing. Conclusions: The evaluation results proved the construct validity, face validity and the transfer validity of the simulator. These results indicated that this simulator was able to produce the effect of learning bone-sawing skill, and it could provide a training alternative for novices. --- paper_title: A Virtual Reality System to Train Image Guided Placement of Kirschner-Wires for Distal Radius Fractures paper_content: We present the design, development and initial user testing of a virtual reality simulator to train orthopaedic surgeons in the optimal placement of K-wires for fixation of distal radius fractures. Our platform includes 5 DOF haptic feedback to recreate the manual skill aspects of the drilling process, a 3D view of the anatomy and a controllable x-ray image. Once complete, the user is given an overview of their performance compared with the 'ideal placement' defined by an expert orthopaedic surgeon. The design goals based on analysis of the core steps in the procedure are presented, along with the technical implementation in terms of both haptic and graphical feedback. Preliminary user testing results are discussed, together with current limitations and planned future development. --- paper_title: The Validity and Reliability of a Hybrid Reality Simulator for Wire Navigation in Orthopedic Surgery paper_content: Creating training simulators for orthopedic surgery presents unique and largely unexplored engineering challenges. One of these challenges is simulating both the large forces and subtle haptic feedback involved in drilling into bone. This paper involves a hybrid reality wire navigation simulator that combines a physical drill and plastic foam bone surrogate with 3-D graphics to mimic the radiographic imaging often used in a common surgical task. This paper describes experiments to test the simulator's face, content, and construct validity, as well as the precision of the motion-tracking system. The results suggest that the simulator has sufficient face validity to engage trainees and recreates the critical aspects of the wire navigation task. Experimental simulator trials reveal time and accuracy differentials between experts and novices, suggesting construct validity. The results also show that the tracking system is accurate within 3.6 mm. --- paper_title: Comparison of cadaveric and isomorphic virtual haptic simulation in temporal bone training paper_content: Virtual surgery may improve learning and provides an opportunity for pre-operative surgical rehearsal. We describe a novel haptic temporal bone simulator specifically developed for multicore processing and improved visual realism. A position locking algorithm for enhanced drill-bone interaction and haptic fidelity is further employed. The simulation construct is evaluated against cadaveric education. A voxel-based simulator was designed for multicore architecture employing Marching Cubes and Laplacian smoothing to perform real-time haptic and graphic rendering of virtual bone. Ten Otolaryngology trainees dissected a cadaveric temporal bone (CTB) followed by a virtual isomorphic haptic model (VM) based on derivative microCT data.
Participants rated 1) physical characteristics, 2) specific anatomic constructs, 3) usefulness in skill development and 4) perceived educational value. The survey instrument employed a Likert scale (1-7). Residents were equivocal about the physical properties of the VM, as cortical (3.2 ± 2.0) and trabecular (2.8 ± 1.6) bone drilling character was appraised as dissimilar to CTB. Overall similarity to cadaveric training was moderate (3.5 ± 1.8). Residents generally felt the VM was beneficial in skill development, rating it highest for translabyrinthine skull-base approaches (5.2 ± 1.3). The VM was considered an effective (5.4 ± 1.5) and accurate (5.7 ± 1.4) training tool which should be integrated into resident education (5.5 ± 1.4). The VM was thought to improve performance (5.3 ± 1.8) and confidence (5.3 ± 1.9) and was highly rated for anatomic learning (6.1 ± 1.9). Study participants found the VM to be a beneficial and effective platform for learning temporal bone anatomy and surgical techniques. They identify some concern with limited physical realism likely owing to the haptic device interface. This study is the first to compare isomorphic simulation in education. This significantly removes possible confounding features as the haptic simulation was based on derivative imaging. --- paper_title: An Augmented Reality Haptic Training Simulator for Spinal Needle Procedures paper_content: This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system. --- paper_title: Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-Time Haptic Feedback paper_content: Background ::: With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. --- paper_title: Computer based system for simulating spine surgery paper_content: A computer based system described herein provides preoperative simulations to verify burring and morphological operations in spine surgery. An efficient bur-tissue intersection computation method is used to simulate tissue burring by diverse shapes of burs for various cutting procedures in spine surgery. A volume manipulation is then used to calculate burred changes on volume data, provide high-quality burred tissue surfaces and check if any structure separate from the spine. Morphological simulations such as repositions, deletions and fusions can be implemented on the separate structures for spine decompression and morphology correction. The coated surface or flutes on a bur are pre-divided into small elements in which a tissue removal force on each element can be fast summated to obtain the burring force by predetermined transformation matrices. 
Examples of typical spine surgery modalities illustrate the effectiveness of the proposed methods and simulator. --- paper_title: Three-Dimensional Display Technologies for Anatomical Education: A Literature Review paper_content: Anatomy is a foundational component of biological sciences and medical education and is important for a variety of clinical tasks. To augment current curriculum and improve students’ spatial knowledge of anatomy, many educators, anatomists, and researchers use three-dimensional (3D) visualization technologies. This article reviews 3D display technologies and their associated assessments for anatomical education. In the first segment, the review covers the general function of displays employing 3D techniques. The second segment of the review highlights the use and assessment of 3D technology in anatomical education, focusing on factors such as knowledge gains, student perceptions, and cognitive load. The review found 32 articles on the use of 3D displays in anatomical education and another 38 articles on the assessment of 3D displays. The review shows that the majority (74 %) of studies indicate that the use of 3D is beneficial for many tasks in anatomical education, and that student perceptions are positive toward the technology. --- paper_title: Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-Time Haptic Feedback paper_content: Background ::: With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. --- paper_title: Virtual Reality Technology paper_content: From the Publisher: ::: This in-depth review of current virtual reality technology and its applications provides a detailed analysis of the engineering, scientific and functional aspects of virtual reality systems and the fundamentals of VR modeling and programming. It also contains an exhaustive list of present and future VR applications in a number of diverse fields. Virtual Reality Technology is the first book to include a full chapter on force and tactile feedback and to discuss newer interface tools such as 3-D probes and cyberscopes. Supplemented with 23 color plates and more than 200 drawings and tables which illustrate the concepts described. ---
Title: Virtual Reality and Augmented Reality in Plastic Surgery: A Review Section 1: INTRODUCTION Description 1: This section provides a definition and historical context of VR/AR technologies, their applications in healthcare, and their potential in plastic surgery. Section 2: METHODS Description 2: This section details the systematic review process, search terms, and the inclusion criteria used to gather relevant publications and products in the study. Section 3: RESULTS Description 3: This section summarizes the findings from the reviewed studies and products, categorized into surgical planning, navigation, and training, along with the devices used. Section 4: Surgical planning Description 4: This section discusses the use of VR/AR in patient-specific simulations during the preoperative phase and the use of various devices and technologies in surgical planning. Section 5: Surgical navigation Description 5: This section describes AR-based navigation systems that assist surgeons by displaying anatomical information and preoperative plans during surgery. Section 6: Surgical training Description 6: This section covers VR and AR systems used for surgical skill training and simulation, highlighting various devices and their applications in different surgical procedures. Section 7: DISCUSSION Description 7: This section provides a discussion of the main advantages of VR/AR in plastic surgery, including their potential future developments and impact on surgical practice.
Patent retrieval: a literature review
9
--- paper_title: Our Divided Patent System paper_content: In this comprehensive new study, we evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 — decisions made between 2009 and 2013. We assess the outcome of litigation by technology and industry. We relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. We find dramatic differences in the outcomes of patent litigation by both technology and industry. For example, owners of patents in the pharmaceutical industry fare much better in dispositive litigation rulings than do owners of patents in the computer & electronics industry, and chemistry patents have much greater success in litigation than their software or biotech counterparts. Our results provide an important window into both patent litigation and the industry-specific battles over patent reform. And they suggest that the traditional narrative of industry-specific patent disputes, which pits the IT industries against the life sciences, is incomplete. --- paper_title: Current Challenges in Patent Information Retrieval paper_content: Intellectual property in the form of patents plays a vital role in today's increasingly knowledge-based economy. This book assembles state-of-the-art research and is intended to illustrate innovative approaches to patent information retrieval. --- paper_title: CLEF-IP 2011: Retrieval in the Intellectual Property Domain paper_content: The patent system is designed to encourage disclosure of new technologies and novel ideas by granting exclusive rights on the use of inventions to their inventors, for a limited period of time. Before a patent can be granted, patent offices around the world perform thorough searches to ensure that no previous similar disclosures were made. In the intellectual property terminology, such kind of searches are called prior art searches. In some industries, the number of granted patents a company owns has a high impact on the market value of the company. This underlines the importance of well-performed prior art searches. Together with the Trec Chem track [5], also organized by our institution, the Clef Ip effort comes to complete the work that is being done in the series of Ntcir workshops (see for example [4]). The first Clef Ip track ran within Clef 2009. The purpose of the track was twofold: to encourage and facilitate research in the area of patent retrieval by providing a large clean data set for experimentation; to create a large test collection of patents in the three main European languages for the evaluation of cross lingual information access. The Clef Ip data set includes documents published by the European Patent Office (Epo) which contain a mixture of English, German and French content. The track focused on the task of prior art search. In 2010 and 2011, the Clef Ip track was organized as a benchmarking activity (lab) in the Clef conference. In these years, the main goal of the Clef Ip effort remained the same: to foster research in the patent retrieval area, and provide a large clean data set. To this end, the number of tasks in the track was increased and the data set was enlarged. Recognizing the importance of patent classifications in the daily activity of an intellectual property professional, in 2010 the Clef Ip benchmarking activity included a patent classification task.
The participants were asked to classify --- paper_title: Modern Information Retrieval paper_content: From the Publisher: ::: This is a rigorous and complete textbook for a first course on information retrieval from the computer science (as opposed to a user-centred) perspective. The advent of the Internet and the enormous increase in volume of electronically stored information generally has led to substantial work on IR from the computer science perspective - this book provides an up-to-date student oriented treatment of the subject. --- paper_title: Understanding the Realities of Modern Patent Litigation paper_content: Sixteen years ago, two of us published the first detailed empirical look at patent litigation. In this Article, we update and expand the earlier study with a new hand-coded data set. We evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 — decisions made between 2009 and 2013. We consider not just patent validity but also infringement and unenforceability. Moreover, we relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. The result is a comprehensive picture of the outcomes of modern patent litigation, one that confirms conventional wisdom in some respects but upends it in others. In particular, we find a surprising amount of continuity in the basic outcomes of patent lawsuits over the past twenty years, despite rather dramatic changes in who brought patent suits during that time. --- paper_title: Wikipedia-based query phrase expansion in patent class search paper_content: Relevance feedback methods generally suffer from topic drift caused by word ambiguities and synonymous uses of words. Topic drift is an important issue in patent information retrieval as people tend to use different expressions describing similar concepts causing low precision and recall at the same time. Furthermore, failing to retrieve relevant patents to an application during the examination process may cause legal problems caused by granting an existing invention. A possible cause of topic drift is utilizing a relevance feedback-based search method. As a way to alleviate the inherent problem, we propose a novel query phrase expansion approach utilizing semantic annotations in Wikipedia pages, trying to enrich queries with phrases disambiguating the original query words. The idea was implemented for patent search where patents are classified into a hierarchy of categories, and the analyses of the experimental results showed not only the positive roles of phrases and words in retrieving additional relevant documents through query expansion but also their contributions to alleviating the query drift problem. More specifically, our query expansion method was compared against relevance-based language model, a state-of-the-art query expansion method, to show its superiority in terms of MAP on all levels of the classification hierarchy. --- paper_title: A Methodology for Building a Patent Test Collection for Prior Art Search paper_content: This paper proposes a methodology for the construction of a patent test collection for the task of prior art search. Key to the justification of the methodology is an analysis of the nature and structure of patent documents and the patenting process. These factors enable a corpus of patent documents to be reverse engineered in order to arrive at high quality, realistic, relevance assessments. 
The paper first outlines the case for such a prior art search test collection along with the characteristics of patent documents, before describing the proposed method. Further research and development will be directed towards the application of this methodology to create a suite of prior art search topics for the evaluation of patent retrieval systems. We also include a preliminary analysis of its applica --- paper_title: CLEF-IP 2011: Retrieval in the Intellectual Property Domain paper_content: The patent system is designed to encourage disclosure of new technologies and novel ideas by granting exclusive rights on the use of inventions to their inventors, for a limited period of time. Before a patent can be granted, patent o ces around the world perform thorough searches to ensure that no previous similar disclosures were made. In the intellectual property terminology, such kind of searches are called prior art searches. In some industries, the number of granted patents a company owns has a high impact on the market value of the company. This underlines the importance of well-performed prior art searches. Together with the Trec Chem track [5], also organized by our institution, the Clef Ip e ort comes to complete the work that is being done in the series of Ntcir workshops (see for example [4]). The rst Clef Ip track ran within Clef 2009. The purpose of the track was twofold: to encourage and facilitate research in the area of patent retrieval by providing a large clean data set for experimentation; to create a large test collection of patents in the three main European languages for the evaluation of cross lingual information access. The Clef Ip data set includes documents published by the European Patent O ce (Epo) which contain a mixture of English, German and French content. The track focused on the task of prior art search. In 2010 and 2011, the Clef Ip track was organized as a benchmarking activity (lab) in the Clef conference. In these years, the main goal of the Clef Ip e ort remained the same to foster research in the patent retrieval area, and provide a large clean data set. To this end, the number of tasks in the track was increased and the data set was enlarged. Recognizing the importance of patent classi cations in the daily activity of an intellectual property professional, in 2010 the Clef Ip benchmarking activity included a patent classi cation task. The participants were asked to classify --- paper_title: CLEF-IP 2010: Retrieval Experiments in the Intellectual Property Domain paper_content: The CLEF-IP track ran for the first time within CLEF 2009. The purpose of the track was twofold: to encourage and facilitate research in the area of patent retrieval by providing a large clean data set for experimentation; to create a large test collection of patents in the three main European languages for the evaluation of cross-lingual information access. The track focused on the task of prior art search. The 15 European teams who participated in the track deployed a rich range of Information Retrieval techniques adapting them to this new specific domain and task. A large-scale test collection for evaluation purposes was created by exploiting patent citations. --- paper_title: A study of query reformulation for patent prior art search with partial patent applications paper_content: Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. 
In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and which makes prior art search a daunting, but necessary task in the patent application process. In this work, we seek to investigate the efficacy of prior art search strategies from the perspective of the inventor who wishes to assess the patentability of their ideas prior to writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks, e.g., the description section may contain hundreds or even thousands of words. While the length of such queries may suggest query reduction strategies to remove irrelevant terms, intentional obfuscation and general language used in patents suggests that it may help to expand queries with additionally relevant terms. To assess the trade-offs among all of these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off in terms of writing effort vs. retrieval efficacy (i.e., querying with the description sections only lead to marginal improvements) and that for such relatively short queries, generic query expansion methods help. --- paper_title: Overview Of Patent Retrieval Task At NTCIR-3 paper_content: This paper describes the Patent Retrieval Task in the Fourth NTCIR Workshop, and the test collections produced in this task. We perform the invalidity search task, in which each participant group searches a patent collection for the patents that can invalidate the demand in an existing claim. We also perform the automatic patent map generation task, in which the patents associated with a specific topic are organized in a multi-dimensional matrix. --- paper_title: Overview of the Patent Retrieval Task at the NTCIR-6 Workshop paper_content: In the Sixth NTCIR Workshop, we organized the Patent Retrieval Task and performed three subtasks; Japanese Retrieval, English Retrieval, and Classification. This paper describes the Japanese Retrieval Subtask and English Retrieval Subtask, both of which were intended for patent-to-patent invalidity search task. We report the evaluation results of the groups participating in those subtasks. --- paper_title: Understanding the Realities of Modern Patent Litigation paper_content: Sixteen years ago, two of us published the first detailed empirical look at patent litigation. In this Article, we update and expand the earlier study with a new hand-coded data set. We evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 — decisions made between 2009 and 2013. We consider not just patent validity but also infringement and unenforceability. Moreover, we relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. 
The result is a comprehensive picture of the outcomes of modern patent litigation, one that confirms conventional wisdom in some respects but upends it in others. In particular, we find a surprising amount of continuity in the basic outcomes of patent lawsuits over the past twenty years, despite rather dramatic changes in who brought patent suits during that time. --- paper_title: Overview of the TREC 2011 Chemical IR Track paper_content: Abstract : TREC 2009 was the first year of the Chemical IR Track, which focuses on evaluation of search techniques for discovery of digitally stored information on chemical patents and academic journal articles. The track included two tasks: Prior Art (PA) and Technical Survey (TS) tasks. This paper describes how we designed the two tasks and presents the official results of eight participating groups. --- paper_title: TREC-CHEM: large scale chemical information retrieval evaluation at TREC paper_content: Over the past decades, significant progress has been made in Information Retrieval (IR), ranging from efficiency and scalability to theoretical modeling and evaluation. However, many grand challenges remain. Recently, more and more attention has been paid to the research in domain specific IR applications, as evidenced by the organization of Genomics and Legal tracks in the Text REtrieval Conference (TREC). Now it is the right time to carry out large scale evaluations on chemical datasets in order to promote the research in chemical IR in general and chemical Patent IR in particular. Accordingly, we organize a chemical IR track in TREC (TREC-CHEM) in order to address the challenges in chemical and patent IR. This paper describes these challenges and the accomplishments of the first year and opens up the discussions for the next year. --- paper_title: The use of MMR, diversity-based reranking for reordering documents and producing summaries paper_content: This paper presents a method for combining ::: query-relevance with information-novelty in the context ::: of text retrieval and summarization. The Maximal ::: Marginal Relevance (MMR) criterion strives to reduce ::: redundancy while maintaining query relevance in ::: re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results ::: indicate some benefits for MMR diversity ranking ::: in document retrieval and in single document summarization. ::: The latter are borne out by the recent results of the ::: SUMMAC conference in the evaluation of summarization ::: systems. However, the clearest advantage is demonstrated ::: in constructing non-redundant multi-document ::: summaries, where MMR results are clearly superior to ::: non-MMR passage selection. --- paper_title: Data sources on patents, copyrights, trademarks, and other intellectual property paper_content: In this book chapter, we provide a roadmap of the sources of data on the various forms of intellectual property protection. We first explain what data is available about patents, copyrights, trademarks, and other types of intellectual property, and where to find it. Then we identify and analyze data sources specifically relating to intellectual property licensing and litigation, growing areas of research by scholars and lawyers. 
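The MMR criterion summarised in the "The use of MMR, diversity-based reranking" abstract above picks, at each step, the candidate d maximising lambda*sim(d, q) - (1 - lambda)*max over already selected s of sim(d, s), so that relevance to the query is traded against redundancy with documents already chosen. The short Python sketch below illustrates that greedy re-ranking loop as it might be applied to a patent result list; the cosine similarity over dense term vectors, the lambda value and the toy data are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def mmr_rerank(query_vec, doc_vecs, lam=0.7, k=10):
    """Greedy Maximal Marginal Relevance re-ranking of a candidate list."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    relevance = [cos(query_vec, d) for d in doc_vecs]   # Sim1(d, q)
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Redundancy = similarity to the most similar already-selected document.
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lam * relevance[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected  # document indices in MMR order

# Toy usage with random "term vectors" (illustrative only).
rng = np.random.default_rng(0)
docs = rng.random((20, 50))
query = rng.random(50)
print(mmr_rerank(query, docs, lam=0.7, k=5))
```

Setting lam closer to 1 favours plain relevance ranking, while lower values push harder for diversity among the returned patents.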
--- paper_title: Evaluation of machine-learning protocols for technology-assisted review in electronic discovery paper_content: Abstract Using a novel evaluation toolkit that simulates a human reviewer in the loop, we compare the effectiveness of three machine-learning protocols for technology-assisted review as used in document review for discovery in legal proceedings. Our comparison addresses a central question in the deployment of technology-assisted review: Should training documents be selected at random, or should they be selected using one or more non-random methods, such as keyword search or active learning? On eight review tasks -- four derived from the TREC 2009 Legal Track and four derived from actual legal matters -- recall was measured as a function of human review effort. The results show that entirely non-random training methods, in which the initial training documents are selected using a simple keyword search, and subsequent training documents are selected by active learning, require substantially and significantly less human review effort (P --- paper_title: The Market Value of Blocking Patent Citations paper_content: There is a growing literature that aims at assessing the private value of knowledge assets and patents. It has been shown that patents and their quality as measured by citations received by future patents contribute significantly to the market value of firms beyond their R&D stocks. This paper goes one step further and distinguishes between different types of forward citations patents can receive at the European Patent Office. While a patent can be cited as non-infringing state of the art, it can also be cited because it threatens the novelty of patent applications ('blocking citations'). Empirical results from a market value model for a sample of large, R&D-intensive U.S., European and Japanese firms show that patents frequently cited as blocking references have a higher economic value for their owners than patents cited for nonblocking reasons. This finding adds to the patent value literature by showing that different types of patent citations carry different information on the economic value of patents. The result further suggests that the total number of forward citations can be an imprecise measure of patent value. --- paper_title: Predicting query performance paper_content: We develop a method for predicting query performance by computing the relative entropy between a query language model and the corresponding collection language model. The resulting clarity score measures the coherence of the language usage in documents whose models are likely to generate the query. We suggest that clarity scores measure the ambiguity of a query with respect to a collection of documents and show that they correlate positively with average precision in a variety of TREC test sets. Thus, the clarity score may be used to identify ineffective queries, on average, without relevance information. We develop an algorithm for automatically setting the clarity score threshold between predicted poorly-performing queries and acceptable queries and validate it using TREC data. In particular, we compare the automatic thresholds to optimum thresholds and also check how frequently results as good are achieved in sampling experiments that randomly assign queries to the two classes. 
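The clarity score described in the "Predicting query performance" abstract immediately above is the relative entropy between a query language model and the collection language model, clarity(Q) = sum over w of P(w|Q) * log2(P(w|Q) / P(w|C)). The sketch below assumes both unigram models have already been estimated (in the paper the query model is induced from top-ranked feedback documents); the smoothing floor and the toy distributions are assumptions for illustration only.

```python
import math

def clarity_score(query_model, collection_model, floor=1e-9):
    """Relative entropy D(P(.|Q) || P(.|C)) in bits; higher values suggest
    a more focused, less ambiguous query."""
    score = 0.0
    for term, p_q in query_model.items():
        if p_q <= 0.0:
            continue
        p_c = max(collection_model.get(term, 0.0), floor)  # floor unseen terms (assumption)
        score += p_q * math.log2(p_q / p_c)
    return score

# Toy unigram models over a tiny vocabulary (illustrative only).
q_model = {"prior": 0.35, "art": 0.35, "search": 0.20, "the": 0.10}
c_model = {"prior": 0.002, "art": 0.003, "search": 0.004, "the": 0.06}
print(round(clarity_score(q_model, c_model), 3))
```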
--- paper_title: CLEF-IP 2010: Prior Art Retrieval Using the Different Sections in Patent Documents paper_content: In this paper we describe our participation in the 2010 CLEF-IP Prior Art Retrieval task where we examined the impact of information in dierent sections of patent documents, namely the title, abstract, claims, description and IPC-R sections, on the retrieval and re-ranking of patent documents. Using a standard bag-of-words approach in Lemur we found that the IPC-R sections are the most informative for patent retrieval. We then performed a re-ranking of the retrieved documents using a Logistic Regression Model, trained on the retrieved documents in the training set. We found indications that the information contained in the text sections of the patent document can contribute to a better ranking of the retrieved documents. The ocial results have shown that among the nine groups that participated in the Prior Art Retrieval task we achieved the eigth rank in terms of both Mean Average Precision (MAP) and Recall. --- paper_title: Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed paper_content: Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Patent documents are another important information source, though they are considerably less accessible. One option to expand patent search beyond pure keywords is the inclusion of classification information: Since every patent is assigned at least one class code, it should be possible for these assignments to be automatically used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems. This report describes our comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms/classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows a strong structural similarity of the hierarchies, but significant differences of terms and annotations. The low number of IPC class assignments and the lack of occurrences of class labels in patent texts imply that current patent search is severely limited. To overcome these limits, we evaluate a method for the automated assignment of additional classes to patent documents, and we propose a system for guided patent search based on the use of class co-occurrence information and external resources. --- paper_title: Overview Of Patent Retrieval Task At NTCIR-3 paper_content: This paper describes the Patent Retrieval Task in the Fourth NTCIR Workshop, and the test collections produced in this task. We perform the invalidity search task, in which each participant group searches a patent collection for the patents that can invalidate the demand in an existing claim. We also perform the automatic patent map generation task, in which the patents associated with a specific topic are organized in a multi-dimensional matrix. 
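The CLEF-IP 2010 prior-art run described in the abstracts above re-ranks bag-of-words retrieval results with a logistic regression model and reports the IPC-R codes as the most informative field. The sketch below is one plausible instantiation of that idea, not the authors' exact pipeline: it assumes two hand-picked features per candidate (a text retrieval score and the number of IPC codes shared with the query patent), toy training labels, and scikit-learn's LogisticRegression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (query patent, candidate) pair: [text_score, shared_ipc_codes],
# labelled 1 if the candidate is relevant prior art, else 0 (toy data, assumption).
X_train = np.array([[12.1, 2], [8.4, 0], [15.3, 3], [5.2, 0],
                    [9.8, 1], [3.1, 0], [10.5, 2], [4.0, 1]])
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])

reranker = LogisticRegression().fit(X_train, y_train)

# Re-rank an initial result list by the predicted probability of relevance.
candidates = {"EP-A": [11.0, 2], "EP-B": [13.5, 0], "EP-C": [7.9, 1]}
features = np.array(list(candidates.values()), dtype=float)
probs = reranker.predict_proba(features)[:, 1]
for doc_id, p in sorted(zip(candidates, probs), key=lambda x: -x[1]):
    print(doc_id, round(float(p), 3))
```

With this kind of feature set, a candidate that shares IPC codes with the query patent can outrank one with a higher raw text score, which is consistent with the reported importance of the classification field.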
--- paper_title: PRES: a score metric for evaluating recall-oriented information retrieval applications paper_content: Information retrieval (IR) evaluation scores are generally designed to measure the effectiveness with which relevant documents are identified and retrieved. Many scores have been proposed for this purpose over the years. These have primarily focused on aspects of precision and recall, and while these are often discussed with equal importance, in practice most attention has been given to precision focused metrics. Even for recall-oriented IR tasks of growing importance, such as patent retrieval, these precision based scores remain the primary evaluation measures. Our study examines different evaluation measures for a recall-oriented patent retrieval task and demonstrates the limitations of the current scores in comparing different IR systems for this task. We introduce PRES, a novel evaluation metric for this type of application taking account of recall and the user's search effort. The behaviour of PRES is demonstrated on 48 runs from the CLEF-IP 2009 patent retrieval track. A full analysis of the performance of PRES shows its suitability for measuring the retrieval effectiveness of systems from a recall focused perspective taking into account the user's expected search effort. --- paper_title: Cumulated gain-based evaluation of IR techniques paper_content: Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance IR techniques and allow interpretation, for example, from the user point of view. --- paper_title: Modern Information Retrieval paper_content: From the Publisher: ::: This is a rigorous and complete textbook for a first course on information retrieval from the computer science (as opposed to a user-centred) perspective. 
The advent of the Internet and the enormous increase in volume of electronically stored information generally has led to substantial work on IR from the computer science perspective - this book provides an up-to-date student oriented treatment of the subject. --- paper_title: Selecting good expansion terms for pseudo-relevance feedback paper_content: Pseudo-relevance feedback assumes that most frequent terms in the pseudo-feedback documents are useful for the retrieval. In this study, we re-examine this assumption and show that it does not hold in reality - many expansion terms identified in traditional approaches are indeed unrelated to the query and harmful to the retrieval. We also show that good expansion terms cannot be distinguished from bad ones merely on their distributions in the feedback documents and in the whole collection. We then propose to integrate a term classification process to predict the usefulness of expansion terms. Multiple additional features can be integrated in this process. Our experiments on three TREC collections show that retrieval effectiveness can be much improved when term classification is used. In addition, we also demonstrate that good terms should be identified directly according to their possible impact on the retrieval effectiveness, i.e. using supervised learning, instead of unsupervised learning. --- paper_title: Query Terms Extraction from Patent Document for Invalidity Search paper_content: This paper describes our patent retrieval system participated in the NTCIR-5 Patent Retrieval Task, Document Retrieval Subtask. The main scope of our method is the appropriate query expansion to improve recall. We extracted query terms from the topic claim, and expanded query terms extracted from sentences explained in the patent document including the topic claim. The explanation sentences were extracted by the method based on pattern matching and by the method based on the longest common subsequence length. --- paper_title: Query construction based on concept importance for effective patent retrieval paper_content: Patent retrieval is a long query task whose aim is to retrieve all documents related to patent applications. However, current approaches face with the term mismatch problem, leading to low retrieval performance. To deal with this issue, we propose a novel automatic query construction approach based on semantic concept importance for effective patent retrieval. In this approach, natural language processing techniques are firstly adopted to analyze patent long query inputs. Then, candidate query concepts are generated according to the concept features. Further, a concept importance-based query construction algorithm is presented to select the representative query concepts. Experimental results on the standard patent dataset demonstrate that our proposed approach can significantly outperform other state-of-art methods. --- paper_title: Parsimonious language models for information retrieval paper_content: We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such, they need fewer (non-zero) parameters to describe the data. We apply parsimonious models at three stages of the retrieval process: 1) at indexing time; 2) at search time; 3) at feedback time. 
Experimental results show that we are able to build models that are significantly smaller than standard models, but that still perform at least as well as the standard approaches. --- paper_title: CLEF-IP 2010: Prior Art Retrieval Using the Different Sections in Patent Documents paper_content: In this paper we describe our participation in the 2010 CLEF-IP Prior Art Retrieval task where we examined the impact of information in different sections of patent documents, namely the title, abstract, claims, description and IPC-R sections, on the retrieval and re-ranking of patent documents. Using a standard bag-of-words approach in Lemur we found that the IPC-R sections are the most informative for patent retrieval. We then performed a re-ranking of the retrieved documents using a Logistic Regression Model, trained on the retrieved documents in the training set. We found indications that the information contained in the text sections of the patent document can contribute to a better ranking of the retrieved documents. The official results have shown that among the nine groups that participated in the Prior Art Retrieval task we achieved the eighth rank in terms of both Mean Average Precision (MAP) and Recall. --- paper_title: Building queries for prior-art search paper_content: Prior-art search is a critical step in the examination procedure of a patent application. This study explores automatic query generation from patent documents to facilitate the time-consuming and labor-intensive search for relevant patents. It is essential for this task to identify discriminative terms in different fields of a query patent, which enables us to distinguish relevant patents from non-relevant patents. To this end we investigate the distribution of terms occurring in different fields of the query patent and compare the distributions with the rest of the collection using language modeling estimation techniques. We experiment with term weighting based on the Kullback-Leibler divergence between the query patent and the collection and also with parsimonious language model estimation. Both of these techniques promote words that are common in the query patent and are rare in the collection. We also incorporate the classification assigned to patent documents into our model, to exploit available human judgements in the form of a hierarchical classification. Experimental results show that the retrieval using the generated queries is effective, particularly in terms of recall, while patent description is shown to be the most useful source for extracting query terms. --- paper_title: Prior Art Retrieval Using Various Patent Document Fields Contents paper_content: In this paper, we report our approach to retrieve patent documents based on the prior art. We use the standard Information Retrieval (IR) techniques which contain indexing and retrieval processes. We use various combinations of document fields for the query formulation. Based on the evaluation summary, we achieve the best result for the combinations of invention-title, description and claims fields in terms of precision and recall. --- paper_title: Transforming patents into prior-art queries paper_content: Searching for prior-art patents is an essential step for the patent examiner to validate or invalidate a patent application. In this paper, we consider the whole patent as the query, which reduces the burden on the user, and also makes many more potential search features available.
We explore how to automatically transform the query patent into an effective search query, especially focusing on the effect of different patent fields. Experiments show that the background summary of a patent is the most useful source of terms for generating a query, even though most previous work used the patent claims. --- paper_title: Applying the KISS Principle for the CLEF- IP 2010 Prior Art Candidate Patent Search Task paper_content: We present our experiments and results for the DCU CNGL participation in the CLEF-IP 2010 Candidate Patent Search Task. Our work applied standard information retrieval (IR) techniques to patent search. In addition, a very simple citation extraction method was applied to improve the results. This was our second consecutive participation in the CLEF-IP tasks. Our experiments in 2009 showed that many sophisticated approach to IR do not improve the retrieval effectiveness for this task. For this reason of we decided to apply only simple methods in 2010. These were demonstrated to be highly competitive with other participants. DCU submitted three runs for the Prior Art Candidate Search Task, two of these runs achieved the second and third ranks among the 25 runs submitted by nine different participants. Our best run achieved MAP of 0.203, recall of 0.618, and PRES of 0.523. --- paper_title: A query model based on normalized log-likelihood paper_content: Leveraging information from relevance assessments has been proposed as an effective means for improving retrieval. We introduce a novel language modeling method which uses information from each assessed document and their aggregate. While most previous approaches focus either on features of the entire set or on features of the individual relevant documents, our model exploits features of both the documents and the set as a whole. When evaluated, we show that our model is able to significantly improve over state-of-art feedback methods. --- paper_title: A study of query reformulation for patent prior art search with partial patent applications paper_content: Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and which makes prior art search a daunting, but necessary task in the patent application process. In this work, we seek to investigate the efficacy of prior art search strategies from the perspective of the inventor who wishes to assess the patentability of their ideas prior to writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks, e.g., the description section may contain hundreds or even thousands of words. While the length of such queries may suggest query reduction strategies to remove irrelevant terms, intentional obfuscation and general language used in patents suggests that it may help to expand queries with additionally relevant terms. To assess the trade-offs among all of these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. 
Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off in terms of writing effort vs. retrieval efficacy (i.e., querying with the description sections only lead to marginal improvements) and that for such relatively short queries, generic query expansion methods help. --- paper_title: Our Divided Patent System paper_content: In this comprehensive new study, we evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 — decisions made between 2009 and 2013. We assess the outcome of litigation by technology and industry. We relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. We find dramatic differences in the outcomes of patent litigation by both technology and industry. For example, owners of patents in the pharmaceutical industry fare much better in dispositive litigation rulings than do owners of patents in the computer & electronics industry, and chemistry patents have much greater success in litigation than their software or biotech counterparts. Our results provide an important window into both patent litigation and the industry-specific battles over patent reform. And they suggest that the traditional narrative of industry-specific patent disputes, which pits the IT industries against the life sciences, is incomplete. --- paper_title: A study on query expansion methods for patent retrieval paper_content: Patent retrieval is a recall-oriented search task where the objective is to find all possible relevant documents. Queries in patent retrieval are typically very long since they take the form of a patent claim or even a full patent application in the case of prior-art patent search. Nevertheless, there is generally a significant mismatch between the query and the relevant documents, often leading to low retrieval effectiveness. Some previous work has tried to address this mismatch through the application of query expansion (QE) techniques which have generally showed effectiveness for many other retrieval tasks. However, results of QE on patent search have been found to be very disappointing. We present a review of previous investigations of QE in patent retrieval, and explore some of these techniques on a prior-art patent search task. In addition, a novel method for QE using automatically generated synonyms set is presented. While previous QE techniques fail to improve over baseline retrieval, our new approach show statistically better retrieval precision over the baseline, although not for recall. In addition, it proves to be significantly more efficient than existing techniques. An extensive analysis to the results is presented which seeks to better understand situations where these QE techniques succeed or fail. --- paper_title: Patent query reduction using pseudo relevance feedback paper_content: Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. 
An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs). --- paper_title: Leveraging conceptual lexicon: query disambiguation using proximity information for patent retrieval paper_content: Patent prior art search is a task in patent retrieval where the goal is to rank documents which describe prior art work related to a patent application. One of the main properties of patent retrieval is that the query topic is a full patent application and does not represent a focused information need. This query by document nature of patent retrieval introduces new challenges and requires new investigations specific to this problem. Researchers have addressed this problem by considering different information resources for query reduction and query disambiguation. However, previous work has not fully studied the effect of using proximity information and exploiting domain specific resources for performing query disambiguation. In this paper, we first reduce the query document by taking the first claim of the document itself. We then build a query-specific patent lexicon based on definitions of the International Patent Classification (IPC). We study how to expand queries by selecting expansion terms from the lexicon that are focused on the query topic. The key problem is how to capture whether an expansion term is focused on the query topic or not. We address this problem by exploiting proximity information. We assign high weights to expansion terms appearing closer to query terms based on the intuition that terms closer to query terms are more likely to be related to the query topic. Experimental results on two patent retrieval datasets show that the proposed method is effective and robust for query expansion, significantly outperforming the standard pseudo relevance feedback (PRF) and existing baselines in patent retrieval. --- paper_title: The use of MMR, diversity-based reranking for reordering documents and producing summaries paper_content: This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems.
However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection. --- paper_title: A study of query reformulation for patent prior art search with partial patent applications paper_content: Patents are used by legal entities to legally protect their inventions and represent a multi-billion dollar industry of licensing and litigation. In 2014, 326,033 patent applications were approved in the US alone -- a number that has doubled in the past 15 years and which makes prior art search a daunting, but necessary task in the patent application process. In this work, we seek to investigate the efficacy of prior art search strategies from the perspective of the inventor who wishes to assess the patentability of their ideas prior to writing a full application. While much of the literature inspired by the evaluation framework of the CLEF-IP competition has aimed to assist patent examiners in assessing prior art for complete patent applications, less of this work has focused on patent search with queries representing partial applications. In the (partial) patent search setting, a query is often much longer than in other standard IR tasks, e.g., the description section may contain hundreds or even thousands of words. While the length of such queries may suggest query reduction strategies to remove irrelevant terms, intentional obfuscation and general language used in patents suggests that it may help to expand queries with additionally relevant terms. To assess the trade-offs among all of these pre-application prior art search strategies, we comparatively evaluate a variety of partial application search and query reformulation methods. Among numerous findings, querying with a full description, perhaps in conjunction with generic (non-patent specific) query reduction methods, is recommended for best performance. However, we also find that querying with an abstract represents the best trade-off in terms of writing effort vs. retrieval efficacy (i.e., querying with the description sections only leads to marginal improvements) and that for such relatively short queries, generic query expansion methods help. --- paper_title: Learning-Based pseudo-relevance feedback for patent retrieval paper_content: Pseudo-relevance feedback (PRF) is an effective approach in Information Retrieval but unfortunately many experiments have shown that PRF is ineffective in patent retrieval. This is because the quality of initial results in the patent retrieval is poor and therefore estimating a relevance model via PRF often hurts the retrieval performance due to off-topic terms. We propose a learning to rank framework for estimating the effectiveness of a patent document in terms of its performance in PRF. Specifically, the knowledge of effective feedback documents on past queries is used to estimate effective feedback documents for new queries. This is achieved by introducing features correlated with feedback document effectiveness. We use patent-specific contents to define such features. We then apply regression to predict document effectiveness given the proposed features. We evaluated the effectiveness of the proposed method on the patent prior art search collection CLEF-IP 2010. Our experimental results show significantly improved retrieval accuracy over a PRF baseline which expands the query using all top-ranked documents.
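The MMR criterion referenced above trades query relevance against redundancy when re-ranking or selecting documents: at each step it picks the candidate maximizing lambda * sim(d, q) - (1 - lambda) * max over already selected d' of sim(d, d'). Below is a minimal sketch of this greedy selection, assuming plain cosine similarity over term counts and an arbitrary lambda of 0.7; both are stand-ins, not the similarity functions or settings used in the cited work.

```python
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def mmr_rerank(query, docs, lam=0.7, k=3):
    """Greedy Maximal Marginal Relevance selection:
    pick argmax of lam * sim(d, query) - (1 - lam) * max sim(d, selected)."""
    q_vec = Counter(query.lower().split())
    d_vecs = {d_id: Counter(text.lower().split()) for d_id, text in docs.items()}
    selected = []
    candidates = set(d_vecs)
    while candidates and len(selected) < k:
        def mmr_score(d_id):
            relevance = cosine(d_vecs[d_id], q_vec)
            redundancy = max((cosine(d_vecs[d_id], d_vecs[s]) for s in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# toy example with made-up document snippets
docs = {
    "D1": "query reduction for long patent queries",
    "D2": "reducing long patent queries by segment removal",
    "D3": "citation based expansion of prior art queries",
}
print(mmr_rerank("long patent query reduction", docs))
```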
--- paper_title: Patent query reduction using pseudo relevance feedback paper_content: Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs). --- paper_title: On Term Selection Techniques for Patent Prior Art Search paper_content: In this paper, we investigate the influence of term selection on retrieval performance on the CLEF-IP prior art test collection, using the Description section of the patent query with Language Model (LM) and BM25 scoring functions. We find that an oracular relevance feedback system that extracts terms from the judged relevant documents far outperforms the baseline and performs twice as well on MAP as the best competitor in CLEF-IP 2010. We find a very clear term selection value threshold for use when choosing terms. We also noticed that most of the useful feedback terms are actually present in the original query and hypothesized that the baseline system could be substantially improved by removing negative query terms. We tried four simple automated approaches to identify negative terms for query reduction but we were unable to notably improve on the baseline performance with any of them. However, we show that a simple, minimal interactive relevance feedback approach where terms are selected from only the first retrieved relevant document outperforms the best result from CLEF-IP 2010 suggesting the promise of interactive methods for term selection in patent prior art search. --- paper_title: Improving retrievability of patents in prior-art search paper_content: Prior-art search is an important task in patent retrieval. The success of this task relies upon the selection of relevant search queries. Typically terms for prior-art queries are extracted from the claim fields of query patents. However, due to the complex technical structure of patents, and presence of terms mismatch and vague terms, selecting relevant terms for queries is a difficult task. During evaluating the patents retrievability coverage of prior-art queries generated from query patents, a large bias toward a subset of the collection is experienced. A large number of patents either have a very low retrievability score or can not be discovered via any query. 
To increase the retrievability of patents, in this paper we expand prior-art queries generated from query patents using query expansion with pseudo relevance feedback. Missing terms from query patents are discovered from feedback patents, and better patents for relevance feedback are identified using a novel approach for checking their similarity with query patents. We specifically focus on how to automatically select better terms from query patents based on their proximity distribution with prior-art queries that are used as features for computing similarity. Our results show, that the coverage of prior-art queries can be increased significantly by incorporating relevant queries terms using query expansion. --- paper_title: A study on query expansion methods for patent retrieval paper_content: Patent retrieval is a recall-oriented search task where the objective is to find all possible relevant documents. Queries in patent retrieval are typically very long since they take the form of a patent claim or even a full patent application in the case of prior-art patent search. Nevertheless, there is generally a significant mismatch between the query and the relevant documents, often leading to low retrieval effectiveness. Some previous work has tried to address this mismatch through the application of query expansion (QE) techniques which have generally showed effectiveness for many other retrieval tasks. However, results of QE on patent search have been found to be very disappointing. We present a review of previous investigations of QE in patent retrieval, and explore some of these techniques on a prior-art patent search task. In addition, a novel method for QE using automatically generated synonyms set is presented. While previous QE techniques fail to improve over baseline retrieval, our new approach show statistically better retrieval precision over the baseline, although not for recall. In addition, it proves to be significantly more efficient than existing techniques. An extensive analysis to the results is presented which seeks to better understand situations where these QE techniques succeed or fail. --- paper_title: Wikipedia-based query phrase expansion in patent class search paper_content: Relevance feedback methods generally suffer from topic drift caused by word ambiguities and synonymous uses of words. Topic drift is an important issue in patent information retrieval as people tend to use different expressions describing similar concepts causing low precision and recall at the same time. Furthermore, failing to retrieve relevant patents to an application during the examination process may cause legal problems caused by granting an existing invention. A possible cause of topic drift is utilizing a relevance feedback-based search method. As a way to alleviate the inherent problem, we propose a novel query phrase expansion approach utilizing semantic annotations in Wikipedia pages, trying to enrich queries with phrases disambiguating the original query words. The idea was implemented for patent search where patents are classified into a hierarchy of categories, and the analyses of the experimental results showed not only the positive roles of phrases and words in retrieving additional relevant documents through query expansion but also their contributions to alleviating the query drift problem. 
More specifically, our query expansion method was compared against relevance-based language model, a state-of-the-art query expansion method, to show its superiority in terms of MAP on all levels of the classification hierarchy. --- paper_title: Patent Query Formulation by Synthesizing Multiple Sources of Relevance Evidence paper_content: Patent prior art search is a task in patent retrieval with the goal of finding documents which describe prior art work related to a query patent. A query patent is a full patent application composed of hundreds of terms which does not represent a single focused information need. Fortunately, other relevance evidence sources (i.e., classification tags and bibliographical data) provide additional details about the underlying information need. In this article, we propose a unified framework that integrates multiple relevance evidence components for query formulation. We first build a query model from the textual fields of a query patent. To overcome the term mismatch, we expand this initial query model with the term distribution of documents in the citation graph, modeling old and recent domain terminology. We build an IPC lexicon and perform query expansion using this lexicon incorporating proximity information. We performed an empirical evaluation on two patent datasets. Our results show that employing the temporal features of documents has a precision enhancing effect, while query expansion using IPC lexicon improves the recall of the final rank list. --- paper_title: Recommending patents based on latent topics paper_content: The availability of large volumes of granted patents and applications, all publicly available on the Web, enables the use of sophisticated text mining and information retrieval methods to facilitate access and analysis of patents. In this paper we investigate techniques to automatically recommend patents given a query patent. This task is critical for a variety of patent-related analysis problems such as finding relevant citations, research of relevant prior art, and infringement analysis. We investigate the use of latent Dirichlet allocation and Dirichlet multinomial regression to represent patent documents and to compute similarity scores. We compare our methods with state-of-the-art document representations and retrieval techniques and demonstrate the effectiveness of our approach on a collection of US patent publications. --- paper_title: An effective approach to document retrieval via utilizing WordNet and recognizing phrases paper_content: Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data. 
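Several of the query formulation approaches above expand an initial query model with a second term distribution, whether drawn from feedback documents, a citation graph, Wikipedia, or an IPC lexicon. The sketch below shows the common core of such methods, a linear interpolation of two unigram language models; the mixing weight alpha, the toy texts, and the top-k cutoff are illustrative assumptions, not the estimation procedure of any cited paper.

```python
from collections import Counter

def language_model(text):
    # maximum-likelihood unigram model of a text
    tokens = text.lower().split()
    total = len(tokens)
    return {t: c / total for t, c in Counter(tokens).items()}

def interpolate(query_model, feedback_model, alpha=0.4, top_k=10):
    """Expanded query model: p'(w|q) = (1 - alpha) * p(w|q) + alpha * p(w|feedback),
    keeping only the top_k highest-weighted terms."""
    expanded = Counter()
    for w, p in query_model.items():
        expanded[w] += (1 - alpha) * p
    for w, p in feedback_model.items():
        expanded[w] += alpha * p
    return dict(expanded.most_common(top_k))

# toy example: expand a short query with terms from a pseudo-feedback text
query = "optical sensor array calibration"
feedback_text = "calibration of photodiode arrays using reference light sources"
print(interpolate(language_model(query), language_model(feedback_text)))
```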
--- paper_title: Predicting query performance paper_content: We develop a method for predicting query performance by computing the relative entropy between a query language model and the corresponding collection language model. The resulting clarity score measures the coherence of the language usage in documents whose models are likely to generate the query. We suggest that clarity scores measure the ambiguity of a query with respect to a collection of documents and show that they correlate positively with average precision in a variety of TREC test sets. Thus, the clarity score may be used to identify ineffective queries, on average, without relevance information. We develop an algorithm for automatically setting the clarity score threshold between predicted poorly-performing queries and acceptable queries and validate it using TREC data. In particular, we compare the automatic thresholds to optimum thresholds and also check how frequently results as good are achieved in sampling experiments that randomly assign queries to the two classes. --- paper_title: PatNet: A Lexical Database for the Patent Domain paper_content: In the patent domain Boolean retrieval is particularly common. But despite the importance of Boolean retrieval, there is not much work in current research assisting patent experts in formulating such queries. Currently, these approaches are mostly limited to the usage of standard dictionaries, such as WordNet, to provide synonymous expansion terms. In this paper we present a new approach to support patent searchers in the query generation process. We extract a lexical database, which we call PatNet, from real query sessions of patent examiners of the United Patent and Trademark Office (USPTO). PatNet provides several types of synonym relations. Further, we apply several query term expansion strategies to improve the precision measures of PatNet in suggesting expansion terms. Experiments based on real query sessions of patent examiners show a drastic increase in precision, when considering support of the synonym relations, US patent classes, and word senses. --- paper_title: Adaptive relevance feedback in information retrieval paper_content: Relevance Feedback has proven very effective for improving retrieval accuracy. A difficult yet important problem in all relevance feedback methods is how to optimally balance the original query and feedback information. In the current feedback methods, the balance parameter is usually set to a fixed value across all the queries and collections. However, due to the difference in queries and feedback documents, this balance parameter should be optimized for each query and each set of feedback documents. In this paper, we present a learning approach to adaptively predict the optimal balance coefficient for each query and each collection. We propose three heuristics to characterize the balance between query and feedback information. Taking these three heuristics as a road map, we explore a number of features and combine them using a regression approach to predict the balance coefficient. Our experiments show that the proposed adaptive relevance feedback is more robust and effective than the regular fixed-coefficient feedback. --- paper_title: Selecting good expansion terms for pseudo-relevance feedback paper_content: Pseudo-relevance feedback assumes that most frequent terms in the pseudo-feedback documents are useful for the retrieval. 
In this study, we re-examine this assumption and show that it does not hold in reality - many expansion terms identified in traditional approaches are indeed unrelated to the query and harmful to the retrieval. We also show that good expansion terms cannot be distinguished from bad ones merely on their distributions in the feedback documents and in the whole collection. We then propose to integrate a term classification process to predict the usefulness of expansion terms. Multiple additional features can be integrated in this process. Our experiments on three TREC collections show that retrieval effectiveness can be much improved when term classification is used. In addition, we also demonstrate that good terms should be identified directly according to their possible impact on the retrieval effectiveness, i.e. using supervised learning, instead of unsupervised learning. --- paper_title: Mining Query Logs of USPTO Patent Examiners paper_content: In this paper we analyze a highly professional search setting of patent examiners of the United Patent and Trademark Office USPTO. We gain insight into the search behavior of USPTO patent examiners to explore ways for enhancing query generation in patent searching. We show that query generation is highly patent domain specific and patent examiners follow a strict scheme for generating text queries. Means to enhance query generation in patent search are to suggest synonyms and equivalents, co-occurring terms and keyword phrases to the searchable features of the invention. Further, we show that term networks including synonyms and equivalents can be learned from the query logs for automatic query expansion in patent searching. --- paper_title: Effect of Log-Based Query Term Expansion on Retrieval Effectiveness in Patent Searching paper_content: In this paper we study the impact of query term expansion QTE using synonyms on patent document retrieval. We use an automatically generated lexical database from USPTO query logs, called PatNet, which provides synonyms and equivalents for a query term. Our experiments on the CLEF-IP 2010 benchmark dataset show that automatic query expansion using PatNet tends to decrease or only slightly improve the retrieval effectiveness, with no significant improvement. An analysis of the retrieval results shows that PatNet does not have generally a negative effect on the retrieval effectiveness. Recall is drastically improved for query topics, where the baseline queries achieve, on average, only low recall values. But we have not detected any commonality that allows us to characterize these queries. So we recommend using PatNet for semi-automatic QTE in Boolean retrieval, where expanding query terms with synonyms and equivalents with the aim of expanding the query scope is a common practice. --- paper_title: Acquiring Lexical Knowledge from Query Logs for Query Expansion in Patent Searching paper_content: Query expansion is a crucial step in recall-oriented domains such as Patent Searching. Currently, automatic query expansion in patent search is mostly based on statistical measures. Additional query terms are extracted from the query documents based on entropy measures. To automate query expansion in patent searching, we acquire lexical knowledge from Query Logs of USPTO Patent Examiners. Results show good performance in query expansion and patent searching using the lexical database. This will help improving (semi-) automated query expansion in patent searching. 
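The clarity score mentioned above quantifies query ambiguity as the Kullback-Leibler divergence between a query language model and the collection language model, and it correlates with retrieval effectiveness. A minimal sketch under simplifying assumptions (Jelinek-Mercer smoothing with an arbitrary mu, a toy collection, and query terms restricted to the collection vocabulary) is given below.

```python
import math
from collections import Counter

def unigram_model(texts):
    # maximum-likelihood unigram model over a list of texts
    tokens = [t for text in texts for t in text.lower().split()]
    total = len(tokens)
    return {w: c / total for w, c in Counter(tokens).items()}

def clarity_score(query_texts, collection_texts, mu=0.5):
    """KL divergence between a smoothed query language model and the collection model.
    Higher values suggest a more focused, less ambiguous query."""
    p_coll = unigram_model(collection_texts)
    p_query_ml = unigram_model(query_texts)
    score = 0.0
    for w, p_c in p_coll.items():
        # Jelinek-Mercer smoothing of the query model with the collection model
        p_q = (1 - mu) * p_query_ml.get(w, 0.0) + mu * p_c
        score += p_q * math.log(p_q / p_c, 2)
    return score

collection = [
    "a method for wireless data transmission",
    "sensor network routing protocol",
    "image compression apparatus and method",
]
print(clarity_score(["sensor network routing"], collection))   # more focused query
print(clarity_score(["method apparatus"], collection))         # vaguer query, lower score
```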
--- paper_title: Analyzing query logs of USPTO examiners to identify useful query terms in patent documents for query expansion in patent searching: a preliminary study paper_content: In an attempt to improve retrieval systems for the patent domain, significant efforts are invested to assist researchers in formulating better queries, preferably via automated query generation. Current work on query generation in patent retrieval is mostly based on statistical measures without considering whether these terms are the best choice. To learn from actual queries being posed by experts, we analyze query logs from USPTO patent examiners. Results show that US examiners pick the majority of query terms from the claim section, a large fraction of which, in turn, coincide with the subject feature terms which determine the extent of the protection of the patent right. Considering the lessons learned from evaluating existing search logs will help in improving (semi-) automated query generation. --- paper_title: Using query logs of USPTO patent examiners for automatic query expansion in patent searching paper_content: In the patent domain significant efforts are invested to assist researchers in formulating better queries, preferably via automated query expansion. Currently, automatic query expansion in patent search is mostly limited to computing co-occurring terms for the searchable features of the invention. Additional query terms are extracted automatically from patent documents based on entropy measures. Learning synonyms in the patent domain for automatic query expansion has been a difficult task. No dedicated sources providing synonyms for the patent domain, such as patent domain specific lexica or thesauri, are available. In this paper we focus on the highly professional search setting of patent examiners. In particular, we use query logs to learn synonyms for the patent domain. For automatic query expansion, we create term networks based on the query logs specifically for several USPTO patent classes. Experiments show good performance in automatic query expansion using these automatically generated term networks. Specifically, with a larger number of query logs for a specific patent US class available the performance of the learned term networks increases. --- paper_title: Multiple Retrieval Models and Regression Models for Prior Art Search paper_content: This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized at the Humboldt University for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend. 
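Systems such as PATATRAS above merge ranked lists produced by several retrieval models and term indexes, in that case using trained regression models. As a much simpler stand-in, the sketch below fuses runs by min-max normalizing their scores and taking a weighted sum (CombSUM-style); the run scores and weights are made up for illustration and are not taken from the cited system.

```python
def min_max_normalize(run):
    # scale the scores of one run to [0, 1] so that runs become comparable
    lo, hi = min(run.values()), max(run.values())
    if hi == lo:
        return {d: 1.0 for d in run}
    return {d: (s - lo) / (hi - lo) for d, s in run.items()}

def weighted_fusion(runs, weights):
    """Linear combination of normalized scores from several retrieval models.
    `runs` maps a model name to {doc_id: score}; `weights` maps model name to weight."""
    fused = {}
    for name, run in runs.items():
        norm = min_max_normalize(run)
        for doc_id, score in norm.items():
            fused[doc_id] = fused.get(doc_id, 0.0) + weights.get(name, 1.0) * score
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

# toy example: fuse two hypothetical runs (Okapi-like and KL-divergence-like scores)
runs = {
    "okapi": {"EP-1": 12.3, "EP-2": 9.1, "EP-3": 4.0},
    "kl":    {"EP-2": 0.81, "EP-4": 0.55, "EP-1": 0.40},
}
print(weighted_fusion(runs, {"okapi": 0.6, "kl": 0.4}))
```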
--- paper_title: Applying the KISS Principle for the CLEF- IP 2010 Prior Art Candidate Patent Search Task paper_content: We present our experiments and results for the DCU CNGL participation in the CLEF-IP 2010 Candidate Patent Search Task. Our work applied standard information retrieval (IR) techniques to patent search. In addition, a very simple citation extraction method was applied to improve the results. This was our second consecutive participation in the CLEF-IP tasks. Our experiments in 2009 showed that many sophisticated approach to IR do not improve the retrieval effectiveness for this task. For this reason of we decided to apply only simple methods in 2010. These were demonstrated to be highly competitive with other participants. DCU submitted three runs for the Prior Art Candidate Search Task, two of these runs achieved the second and third ranks among the 25 runs submitted by nine different participants. Our best run achieved MAP of 0.203, recall of 0.618, and PRES of 0.523. --- paper_title: Experiments with Citation Mining and Key-Term Extraction for Prior Art Search paper_content: This technical note presents the system built for the IP track of CLEF 2010 based on PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS), the modular search infrastructure initially realized for CLEF IP 2009. We largely reused the system of the previous CLEF IP but at a relatively smaller scale and with the improvement of three main components: • A new citation mining tool based on Conditional Random Fields (CRF). • A key-term extraction module developed for technical and scientific documents and adapted to patent document structures using a vast ranges of metrics, features and a bagged decision tree. • An improvement of our multi-domain terminological database called GRISP. We used the Okapi BM25 and the Indri retrieval models for the prior art task and a KNN model for the automatic classification task under the IPC subclasses. In both tasks, specific final re-ranking techniques were used, including multiple regression models based on SVM. Although the Prior Art task was more challenging and we used a more limited number of retrieval models, we maintained similar results as last year. We performed, however, miserably at the classification task, and we consider that an instance-based KNN algorithm is not competitive with standard classifiers based on preliminary large scale training. --- paper_title: Patent Query Formulation by Synthesizing Multiple Sources of Relevance Evidence paper_content: Patent prior art search is a task in patent retrieval with the goal of finding documents which describe prior art work related to a query patent. A query patent is a full patent application composed of hundreds of terms which does not represent a single focused information need. Fortunately, other relevance evidence sources (i.e., classification tags and bibliographical data) provide additional details about the underlying information need. In this article, we propose a unified framework that integrates multiple relevance evidence components for query formulation. We first build a query model from the textual fields of a query patent. To overcome the term mismatch, we expand this initial query model with the term distribution of documents in the citation graph, modeling old and recent domain terminology. We build an IPC lexicon and perform query expansion using this lexicon incorporating proximity information. We performed an empirical evaluation on two patent datasets. 
Our results show that employing the temporal features of documents has a precision enhancing effect, while query expansion using IPC lexicon improves the recall of the final rank list. --- paper_title: An IPC-based vector space model for patent retrieval paper_content: Determining requirements when searching for and retrieving relevant information suited to a user's needs has become increasingly important and difficult, partly due to the explosive growth of electronic documents. The vector space model (VSM) is a popular method in retrieval procedures. However, the weakness in traditional VSM is that the indexing vocabulary changes whenever changes occur in the document set, or the indexing vocabulary selection algorithms, or parameters of the algorithms, or if wording evolution occurs. The major objective of this research is to design a method to solve the afore-mentioned problems for patent retrieval. The proposed method utilizes the special characteristics of the patent documents, the International Patent Classification (IPC) codes, to generate the indexing vocabulary for presenting all the patent documents. The advantage of the generated indexing vocabulary is that it remains unchanged, even if the document sets, selection algorithms, and parameters are changed, or if wording evolution occurs. Comparison of the proposed method with two traditional methods (entropy and chi-square) in manual and automatic evaluations is presented to verify the feasibility and validity. The results also indicate that the IPC-based indexing vocabulary selection method achieves a higher accuracy and is more satisfactory. --- paper_title: On the role of classification in patent invalidity searches paper_content: Searches on patents to determine prior art violations are often cumbersome and require extensive manpower to accomplish successfully. When time is constrained, an automatically generated list of candidate patents may decrease search costs and improve search efficiency. We examine whether semantic relations inferred from the pseudo-hierarchy of patent classifications can contribute to the recognition of related patents. We examine a similarity measure for hierarchically-ordered patent classes and subclasses and return a ranked list of candidate patents, using a similarity measure that has demonstrated its effectiveness when applied to WordNet ontologies. We then demonstrate that this ranked list of candidate patents allows us to better constrain the effort needed to examine for prior art violations on a target patent. --- paper_title: Comparison of IPC and USPC classification systems in patent prior art searches paper_content: Patent classification systems are used to help scrutinize patent applications for possible violations of the novelty and non-obviousness/inventive steps of a patentability test. There are several different patent classification systems in use today, each with a different underlying philosophy and approach. We compare the two most widely-used patent classification systems -- the IPC and USPC -- and examine their ability to help re-rank patents based on similarity. We observed a significant improvement in MAP, Recall@100, and nDCG when using these systems to re-rank our retrieved document set, demonstrating their overall utility in patent searches. --- paper_title: Cluster-based patent retrieval using international patent classification system paper_content: A patent collection provides a great test-bed for cluster-based information retrieval. 
The International Patent Classification (IPC) system provides a hierarchical taxonomy with 5 levels of specificity. We regard IPC codes of patent applications as cluster information, manually assigned by patent officers according to their subjects. Such manual clusters provide advantages over automatically built clusters using document term similarities. There are previous studies that successfully apply cluster-based retrieval models using language modeling. We develop cluster-based language models that employ advantages of having manually clustered documents. --- paper_title: The effect of citation analysis on query expansion for patent retrieval paper_content: Patent prior art search is a type of search in the patent domain where documents are searched for that describe the work previously carried out related to a patent application. The goal of this search is to check whether the idea in the patent application is novel. Vocabulary mismatch is one of the main problems of patent retrieval which results in low retrievability of similar documents for a given patent application. In this paper we show how the term distribution of the cited documents in an initially retrieved ranked list can be used to address the vocabulary mismatch. We propose a method for query modeling estimation which utilizes the citation links in a pseudo relevance feedback set. We first build a topic dependent citation graph, starting from the initially retrieved set of feedback documents and utilizing citation links of feedback documents to expand the set. We identify the important documents in the topic dependent citation graph using a citation analysis measure. We then use the term distribution of the documents in the citation graph to estimate a query model by identifying the distinguishing terms and their respective weights. We then use these terms to expand our original query. We use CLEF-IP 2011 collection to evaluate the effectiveness of our query modeling approach for prior art search. We also study the influence of different parameters on the performance of the proposed method. The experimental results demonstrate that the proposed approach significantly improves the recall over a state-of-the-art baseline which uses the link-based structure of the citation graph but not the term distribution of the cited documents. --- paper_title: Simple pre and post processing strategies for patent searching in CLEF intellectual property track 2009 paper_content: The objective of the 2009 CLEF-IP Track was to find documents that constitute prior art for a given patent. We explored a wide range of simple preprocessing and post-processing strategies, using Mean Average Precision (MAP) for evaluation purposes. Once the best document representation was determined, we tuned a classical Information Retrieval engine in order to perform the retrieval step. Finally, we explored two different post-processing strategies. In our experiments, using the complete IPC codes for filtering purposes led to greater improvements than using 4-digits IPC codes. The second post-processing strategy was to exploit the citations of retrieved patents in order to boost scores of cited patents. Combining all selected strategies, we computed optimal runs that reached a MAP of 0.122 for the training set, and a MAP of 0.129 for the official 2009 CLEF-IP XL set. --- paper_title: Building queries for prior-art search paper_content: Prior-art search is a critical step in the examination procedure of a patent application.
This study explores automatic query generation from patent documents to facilitate the time-consuming and labor-intensive search for relevant patents. It is essential for this task to identify discriminative terms in different fields of a query patent, which enables us to distinguish relevant patents from non-relevant patents. To this end we investigate the distribution of terms occurring in different fields of the query patent and compare the distributions with the rest of the collection using language modeling estimation techniques. We experiment with term weighting based on the Kullback-Leibler divergence between the query patent and the collection and also with parsimonious language model estimation. Both of these techniques promote words that are common in the query patent and are rare in the collection. We also incorporate the classification assigned to patent documents into our model, to exploit available human judgements in the form of a hierarchical classification. Experimental results show that the retrieval using the generated queries is effective, particularly in terms of recall, while patent description is shown to be the most useful source for extracting query terms. --- paper_title: Query-Driven Mining of Citation Networks for Patent Citation Retrieval and Recommendation paper_content: Prior art search or recommending citations for a patent application is a challenging task. Many approaches have been proposed and shown to be useful for prior art search. However, most of these methods do not consider the network structure for integrating and diffusion of different kinds of information present among tied patents in the citation network. In this paper, we propose a method based on a time-aware random walk on a weighted network of patent citations, the weights of which are characterized by contextual similarity relations between two nodes on the network. The goal of the random walker is to find influential documents in the citation network of a query patent, which can serve as candidates for drawing query terms and bigrams for query refinement. The experimental results on CLEF-IP datasets (CLEF-IP 2010 and CLEF-IP 2011) show the effectiveness of encoding contextual similarities (common classification codes, common inventor, and common applicant) between nodes in the citation network. Our proposed approach can achieve significantly better results in terms of recall and Mean Average Precision rates compared to strong baselines of prior art search. --- paper_title: Multiple Retrieval Models and Regression Models for Prior Art Search paper_content: This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized at the Humboldt University for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. 
As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend. --- paper_title: Multilayer source selection as a tool for supporting patent search and classification paper_content: In this paper we present a method that can be used to attain specific objectives in a typical prior art search process. The objectives are first to assist patent searchers in understanding the underlying technical concepts of a patent by identifying relevant international patent classification (IPC) codes and second to help them conduct a filtered search based on automatically selected IPCs. We view the automated selection of IPCs as a collection selection problem from the domain of distributed information retrieval (DIR) that can be addressed using existing DIR methods, which we extend and adapt for the patent domain. Our work exploits the intellectually assigned classifications codes that are used to categorize patents and to facilitate patent searches. In our method, manually assigned IPC codes of patent documents are used to cluster, distribute and index patents through hundreds or thousands of sub-collections. We propose a new multilayer collection selection method that effectively suggests classification codes exploiting the hierarchical classification schemes such as IPC/CPC. The new method in addition to utilizing the topical relevance of IPCs at a particular level of interest exploits the topical relevance of their ancestors in the IPC hierarchy and aggregates those multiple estimations of relevance to a single estimation. Experimental results on the CLEF-IP 2011 dataset show that the proposed approach outperforms state-of-art methods from the DIR domain not only in identifying relevant IPC codes but also in retrieving relevant patent documents given a patent query. --- paper_title: Simple vs. sophisticated approaches for patent prior-art search paper_content: Patent prior-art search is concerned with finding all filed patents relevant to a given patent application. We report a comparison between two search approaches representing the state-of-the-art in patent prior-art search. The first approach uses simple and straightforward information retrieval (IR) techniques, while the second uses much more sophisticated techniques which try to model the steps taken by a patent examiner in patent search. Experiments show that the retrieval effectiveness using both techniques is statistically indistinguishable when patent applications contain some initial citations. However, the advanced search technique is statistically better when no initial citations are provided. Our findings suggest that less time and effort can be exerted by applying simple IR approaches when initial citations are provided. --- paper_title: Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed paper_content: Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Patent documents are another important information source, though they are considerably less accessible. 
One option to expand patent search beyond pure keywords is the inclusion of classification information: Since every patent is assigned at least one class code, it should be possible for these assignments to be automatically used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems. This report describes our comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms/classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows a strong structural similarity of the hierarchies, but significant differences of terms and annotations. The low number of IPC class assignments and the lack of occurrences of class labels in patent texts imply that current patent search is severely limited. To overcome these limits, we evaluate a method for the automated assignment of additional classes to patent documents, and we propose a system for guided patent search based on the use of class co-occurrence information and external resources. --- paper_title: Patent search using IPC classification vectors paper_content: Finding similar patents is a challenging task in patent information retrieval. A patent application is often a starting point to find similar inventions. Keyword search for similar patents requires significant domain expertise and may not fetch relevant results. We propose a novel representation for patents and use a two stage approach to find similar patents. Each patent is represented as an IPC class vector. Citation network of patents is used to propagate these vectors from a node (patent) to its neighbors (cited patents). Thus, each patent is represented as a weighted combination of its IPC information as well as of its neighbors. A query patent is represented as a vector using its IPC information and similar patents can be simply found by comparing this vector with vectors of patents in the corpus. Text based search is used to re-rank this solution set to improve precision. We experiment with two similarity measures and re-ranking strategies to empirically show that our representation is effective in improving both precision and recall of queries of CLEF-2011 dataset. --- paper_title: Exploratory Professional Search through Semantic Post-Analysis of Search Results paper_content: Professional Search is usually a recall-oriented problem. For helping the user to get efficiently a concise overview, to quickly restrict the search space and to make sense of the results, in this article we present an exploratory strategy for professional search that is based on semantic post-analysis of the classical search results (of keyword based queries). The described strategy can exploit the metadata that are already available, as well as the results of textual clustering and entity mining that can be performed at query time. The outcome of this process (i.e. metadata, clusters and entities grouped in categories) complement the ranked list of results produced from the core search engine with useful information for the user. 
This extra information is useful not only for providing a concise overview of the search results, but also for supporting a faceted and session-based interaction scheme that allows the users to restrict their focus gradually and to explore other related information. To tackle the corresponding configuration requirements of this process, we show how one can exploit the (constantly evolving) Linked Data for specifying the entities of interest and for providing further information about the identified entities. In this article, apart from detailing the steps of this process, we present applications of this approach in the marine domain and in the domain of patent search. --- paper_title: An Evaluation of an Interactive Federated Patent Search System paper_content: Patent search tasks are challenging and often require many hours or even days to be completed. Patent search systems that integrate multiple search tools could assist patent examiners to complete the demanding patent search tasks by using the set of search tools most suitable for the task at hand. PerFedPat is an interactive patent search system designed on this principle and based on the federated search approach and the ezDL framework. PerFedPat provides core services to search multiple online patent resources, while hiding complexity from the end user. The extensible architecture of the system enables the integration and parallel use of multiple search tools. In this paper, we present part of a user study of the PerFedPat system and we also discuss the results, mostly focused on the research question: could the patent examiners efficiently use a patent search system that integratesmultiple resources, search tools and UIs? --- paper_title: On Term Selection Techniques for Patent Prior Art Search paper_content: In this paper, we investigate the influence of term selection on retrieval performance on the CLEF-IP prior art test collection, using the Description section of the patent query with Language Model (LM) and BM25 scoring functions. We find that an oracular relevance feedback system that extracts terms from the judged relevant documents far outperforms the baseline and performs twice as well on MAP as the best competitor in CLEF-IP 2010. We find a very clear term selection value threshold for use when choosing terms. We also noticed that most of the useful feedback terms are actually present in the original query and hypothesized that the baseline system could be substantially improved by removing negative query terms. We tried four simple automated approaches to identify negative terms for query reduction but we were unable to notably improve on the baseline performance with any of them. However, we show that a simple, minimal interactive relevance feedback approach where terms are selected from only the first retrieved relevant document outperforms the best result from CLEF-IP 2010 suggesting the promise of interactive methods for term selection in patent prior art search. --- paper_title: A visual semantic framework for innovation analytics paper_content: In this demo we present a semantic framework for innovation and patent analytics powered by Mined Semantic Analysis (MSA). Our framework provides cognitive assistance to its users through a Web-based visual and interactive interface. First, we describe building a conceptual knowledge graph by mining user-generated encyclopedic textual corpus for semantic associations. 
Then, we demonstrate applying the acquired knowledge to support many cognition and knowledge based use cases for innovation analysis including technology exploration and landscaping, competitive analysis, literature and prior art search and others. --- paper_title: PerFedPat: An integrated federated system for patent search paper_content: Abstract We present PerFedPat, an interactive patent search system based on the federated search approach and the ezDL framework. PerFedPat provides core services to search, using a federated method, multiple online patent resources (currently Espacenet, Google patents, Patentscope and the MAREC collection), thus providing parallel access to multiple patent sources. PerFedPat hides complexity from the end user who uses a common single query tool for querying all patent datasets at the same time. PerFedPat provides cores services such as Boolean and fielded search, merging, grouping and filtering of results, and offers support for query history and search sessions. The second innovative feature of PerFedPat is that it has a pluggable and extensible architecture and it enables the use of multiple search tools which are integrated in PerFedPat. Currently tools are integrated for IPC classification search, faceted navigation, clustering of search results and machine translation of queries. As a result the system is able to provide a rich, personalized information seeking experience for different types of patent searches, potentially exploiting techniques from diverse areas such as distributed information retrieval, semantic search, machine learning and human–computer interaction. The system is available for download from the internet. --- paper_title: Evaluation of machine-learning protocols for technology-assisted review in electronic discovery paper_content: Abstract Using a novel evaluation toolkit that simulates a human reviewer in the loop, we compare the effectiveness of three machine-learning protocols for technology-assisted review as used in document review for discovery in legal proceedings. Our comparison addresses a central question in the deployment of technology-assisted review: Should training documents be selected at random, or should they be selected using one or more non-random methods, such as keyword search or active learning? On eight review tasks -- four derived from the TREC 2009 Legal Track and four derived from actual legal matters -- recall was measured as a function of human review effort. The results show that entirely non-random training methods, in which the initial training documents are selected using a simple keyword search, and subsequent training documents are selected by active learning, require substantially and significantly less human review effort (P --- paper_title: Measuring Semantic Relatedness using Mined Semantic Analysis paper_content: Mined Semantic Analysis (MSA) is a novel distributional semantics approach which employs data mining techniques. MSA embraces knowledge-driven analysis of natural languages. It uncovers implicit relations between concepts by mining for their associations in target encyclopedic corpora. MSA exploits not only target corpus content but also its knowledge graph (e.g., "See also" link graph of Wikipedia). Empirical results show competitive performance of MSA compared to prior state-of-the-art methods for measuring semantic relatedness on benchmark data sets. 
Additionally, we introduce the first analytical study to examine statistical significance of results reported by different semantic relatedness methods. Our study shows that, top performing results could be statistically equivalent though mathematically different. The study positions MSA as one of state-of-the-art methods for measuring semantic relatedness. --- paper_title: The Market Value of Blocking Patent Citations paper_content: There is a growing literature that aims at assessing the private value of knowledge assets and patents. It has been shown that patents and their quality as measured by citations received by future patents contribute significantly to the market value of firms beyond their R&D stocks. This paper goes one step further and distinguishes between different types of forward citations patents can receive at the European Patent Office. While a patent can be cited as non-infringing state of the art, it can also be cited because it threatens the novelty of patent applications ('blocking citations'). Empirical results from a market value model for a sample of large, R&D-intensive U.S., European and Japanese firms show that patents frequently cited as blocking references have a higher economic value for their owners than patents cited for nonblocking reasons. This finding adds to the patent value literature by showing that different types of patent citations carry different information on the economic value of patents. The result further suggests that the total number of forward citations can be an imprecise measure of patent value. --- paper_title: A New Look at Patent Quality: Relating Patent Prosecution to Validity paper_content: The paper uses two hand-collected datasets to implement a novel research design for analyzing the precursors to patent quality. Operationalizing patent "quality" as legal validity, the paper analyzes the relation between Federal Circuit decisions on patent validity and three sets of data about the patents: quantitative features of the patents themselves, textual analysis of the patent documents, and data collected from the prosecution histories of the patents. The paper finds large and statistically significant relations between ex post validity and both textual features of the patents and ex ante aspects of the prosecution history (especially prior art submissions and the existence of internal patent office appeals before issuance). The results demonstrate the importance of refocusing analysis of patent quality on replicable indicators like validity, and the value that more comprehensive collection of prosecution history data can have for improving the output of the patent prosecution process. --- paper_title: A Penny for Your Quotes : Patent Citations and the Value of Innovations paper_content: The use of patents in economic research has been seriously hindered by the fact that patents vary enormously in their importance or value, and hence, simple patent counts cannot be informative about innovative output. The purpose of this article is to put forward patent counts weighted by citations as indicators of the value of innovations, thereby overcoming the limitations of simple counts. The empirical analysis of a particular innovation (Computed Tomography scanners) indeed shows a close association between citation-based patent indices and independent measures of the social value of innovations in that field. 
Moreover, the weighting scheme appears to be nonlinear (increasing) in the number of citations, implying that the informational content of citations rises at the margin. As in previous studies, simple patent counts are found to be highly correlated with contemporaneous R&D; however, here the association is within a field over time rather than cross-sectional. --- paper_title: SIMPLE: A Strategic Information Mining Platform for Licensing and Execution paper_content: Intellectual Properties (IP), such as patents and trademarks, are one of the most critical assets in today’s enterprises and research organizations. They represent the core innovation and differentiators of an organization. When leveraged effectively, they not only protect a business from its competition, but also generate significant opportunities in licensing, execution, long term research and innovation. In certain industries, e.g., the pharmaceutical industry, patents lead to multi-billion dollar revenue per year. In this paper, we present a holistic information mining solution, called SIMPLE, which mines a large corpus of patents and scientific literature for insights. Unlike much prior work that deals with specific aspects of analytics, SIMPLE is an integrated and end-to-end IP analytics solution which addresses a wide range of challenges in patent analytics such as the data complexity, scale, and nomenclature issues. It encompasses techniques for patent data processing and modeling, analytics algorithms, web interface and web services for analytics service delivery and end-user interaction. We use real-world case studies to demonstrate the effectiveness of SIMPLE. --- paper_title: Finding nuggets in IP portfolios: core patent mining through textual temporal analysis paper_content: Patents are critical for a company to protect its core technologies. Effective patent mining in massive patent databases can provide companies with valuable insights to develop strategies for IP management and marketing. In this paper, we study a novel patent mining problem of automatically discovering core patents (i.e., patents with high novelty and influence in a domain). We address the unique patent vocabulary usage problem, which is not considered in traditional word-based statistical methods, and propose a topic-based temporal mining approach to quantify a patent's novelty and influence. Comprehensive experimental results on real-world patent portfolios show the effectiveness of our method. --- paper_title: Patent Maintenance Recommendation with Patent Information Network Model paper_content: Patents are of crucial importance for businesses, because they provide legal protection for the invented techniques, processes or products. A patent can be held for up to 20 years. However, large maintenance fees need to be paid to keep it enforceable. If the patent is deemed not valuable, the owner may decide to abandon it by stopping paying the maintenance fees to reduce the cost. For large companies or organizations, making such decisions is difficult because too many patents need to be investigated. In this paper, we introduce the new patent mining problem of automatic patent maintenance prediction, and propose a systematic solution to analyze patents for recommending patent maintenance decisions. We model the patents as a heterogeneous time-evolving information network and propose new patent features to build a model for a ranked prediction on whether to maintain or abandon a patent.
In addition, a network-based refinement approach is proposed to further improve the performance. We have conducted experiments on the large scale United States Patent and Trademark Office (USPTO) database which contains over four million granted patents. The results show that our technique can achieve high performance. --- paper_title: Modeling Patent Quality: A System for Large-scale Patentability Analysis using Text Mining paper_content: Current patent systems face a serious problem of declining quality of patents as the larger number of ap- plications make it difficult for patent officers to spend enough time for evaluating each application. For building a better patent system, it is necessary to define a public consensus on the quality of patent applications in a quantitative way. In this article, we tackle the problem of assessing the quality of patent applications based on machine learning and text mining techniques. For each patent application, our tool automatically computes a score called patentability, which indicates how likely it is that the application will be approved by the patent office. We employ a new statis- tical prediction model to estimate examination results (approval or rejection) based on a large data set including 0.3 million patent applications. The model computes the patentability score based on a set of feature variables including the text contents of the specification documents. Experimental results showed that our model outperforms a conven- tional method which uses only the structural properties of the documents. Since users can access the estimated result through a Web-browser-based GUI, this system allows both patent examiners and applicants to quickly detect weak applications and to find their specific flaws. --- paper_title: COA: finding novel patents through text analysis paper_content: In recent years, the number of patents filed by the business enterprises in the technology industry are growing rapidly, thus providing unprecedented opportunities for knowledge discovery in patent data. One important task in this regard is to employ data mining techniques to rank patents in terms of their potential to earn money through licensing. Availability of such ranking can substantially reduce enterprise IP (Intellectual Property) management costs. Unfortunately, the existing software systems in the IP domain do not address this task directly. Through our research, we build a patent ranking software, named COA (Claim Originality Analysis) that rates a patent based on its value by measuring the recency and the impact of the important phrases that appear in the "claims" section of a patent. Experiments show that COA produces meaningful ranking when comparing it with other indirect patent evaluation metrics--citation count, patent status, and attorney's rating. In reallife settings, this tool was used by beta-testers in the IBM IP department. Lawyers found it very useful in patent rating, specifically, in highlighting potentially valuable patents in a patent cluster. In this article, we describe the ranking techniques and system architecture of COA. We also present the results that validate its effectiveness. --- paper_title: Citation Frequency and the Value of Patented Inventions paper_content: Through a survey, economic value estimates were obtained on 962 inventions made in the United States and Germany and on which German patent renewal fees were paid to full-term expiration in 1995. A search of subsequent U.S. 
and German patents yielded a count of citations to those patents. Patents renewed to full term were significantly more valuable than patents allowed to expire before their full term. The higher an invention's economic value estimate was, the more the relevant patent was subsequently cited. --- paper_title: Exploring Legal Patent Citations for Patent Valuation paper_content: Effective patent valuation is important for patent holders. Forward patent citations, widely used in assessing patent value, have been considered as reflecting knowledge flows, just like paper citations. However, patent citations also carry legal implication, which is important for patent valuation. We argue that patent citations can either be technological citations that indicate knowledge transfer or be legal citations that delimit the legal scope of citing patents. In this paper, we first develop citation-network based methods to infer patent quality measures at either the legal or technological dimension. Then we propose a probabilistic mixture approach to incorporate both the legal and technological dimensions in patent citations, and an iterative learning process that integrates a temporal decay function on legal citations, a probabilistic citation network based algorithm and a prediction model for patent valuation. We learn all the parameters together and use them for patent valuation. We demonstrate the effectiveness of our approach by using patent maintenance status as an indicator of patent value and discuss the insights we learned from this study. --- paper_title: Latent graphical models for quantifying and predicting patent quality paper_content: The number of patents filed each year has increased dramatically in recent years, raising concerns that patents of questionable validity are restricting the issuance of truly innovative patents. For this reason, there is a strong demand to develop an objective model to quantify patent quality and characterize the attributes that lead to higher-quality patents. In this paper, we develop a latent graphical model to infer patent quality from related measurements. In addition, we extract advanced lexical features via natural language processing techniques to capture the quality measures such as clarity of claims, originality, and importance of cited prior art. We demonstrate the effectiveness of our approach by validating its predictions with previous court decisions of litigated patents. --- paper_title: How to Count Patents and Value Intellectual Property: The Uses of Patent Renewal and Application Data paper_content: Patent counts are very imperfect measures of innovative output. This paper discusses how additional data-the number of years a patent is renewed and the number of countries in which protection for the same invention is sought - can be used to improve on counts in studies which require a measure of the extent of innovation. A simple renewal based weighting scheme is proposed which may remove half of the noise in patent counts as a measure of innovative output. The paper also illustrates how these data can be used to estimate the value of the proprietary rights created by the patent laws. The parameters estimated in this analysis can be used to answer a series of questions related to the value of patents. We illustrate with estimates of how the value of patent protection would vary under alternative legal rules and renewal fees, and with estimates of the international flows of returns from the patent system. 
Recent progress in the development of databases has increased the potential for this type of analysis. --- paper_title: Market value and patent citations paper_content: We explore the usefulness of patent citations as a measure of the "importance" of a firm's patents, as indicated by the stock market valuation of the firm's intangible stock of knowledge. Using patents and citations for 1963--1995, we estimate Tobin's q equations on the ratios of R&D to assets stocks, patents to R&D, and citations to patents. We find that each ratio significantly affects market value, with an extra citation per patent boosting market value by 3%. Further findings indicate that "unpredictable" citations have a stronger effect than the predictable portion, and that self-citations are more valuable than external citations. --- paper_title: Our Divided Patent System paper_content: In this comprehensive new study, we evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 — decisions made between 2009 and 2013. We assess the outcome of litigation by technology and industry. We relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. We find dramatic differences in the outcomes of patent litigation by both technology and industry. For example, owners of patents in the pharmaceutical industry fare much better in dispositive litigation rulings than do owners of patents in the computer & electronics industry, and chemistry patents have much greater success in litigation than their software or biotech counterparts. Our results provide an important window into both patent litigation and the industry-specific battles over patent reform. And they suggest that the traditional narrative of industry-specific patent disputes, which pits the IT industries against the life sciences, is incomplete. --- paper_title: Using Data Analytics Tools to Supplement Traditional Research and Analysis in Forecasting Case Outcomes paper_content: The legal profession appears to be on the brink of a major transformation, as new data analytics tools promise to revolutionize the ways lawyers practice law. One important aspect of this transformation is case forecasting -- that is, the way lawyers evaluate potential case outcomes. When a lawyer decides whether to initiate a new lawsuit, for example, or whether to settle an existing lawsuit, the lawyer must try to accurately assess the likely outcome of the case were it to proceed in litigation. This article examines the future role of data analytics in case forecasting, including its interplay with the method of predictive analysis lawyers have traditionally employed. This analysis has important implications not only for the practice of law, but also for legal pedagogy, which will need to conform to these expected changes in legal practice. --- paper_title: The Law Machine paper_content: In a low-rise building in Menlo Park, Calif., just upstairs from a Mexican restaurant and a nail salon, a Stanford University spin-off is crunching data in ways that could shake the foundations of the legal profession.Here, a small group of patent lawyers and computer scientists is applying the latest in machine learning and natural-language processing to reams of documents related to intellectual property lawsuits. 
The result is a massive statistical database on IP litigation like nothing the world has seen before. Which attorney has the best track record in defending against semiconductor-related infringement claims? Has a particular judge ruled on cases involving patent trolls, and if so, what was the outcome? Which companies tend to go to trial, and which settle out of court? By offering up such information, the database provides corporate lawyers, law firms, and government agencies with hard numbers that will reduce the guesswork, as well as the enormous expense, of patent litigation. In short, the company is building a “law machine,” from which comes its name: Lex Machina. --- paper_title: Understanding the Realities of Modern Patent Litigation paper_content: Sixteen years ago, two of us published the first detailed empirical look at patent litigation. In this Article, we update and expand the earlier study with a new hand-coded data set. We evaluate all substantive decisions rendered by any court in every patent case filed in 2008 and 2009 (decisions made between 2009 and 2013). We consider not just patent validity but also infringement and unenforceability. Moreover, we relate the outcomes of those cases to a host of variables, including variables related to the parties, the patents, and the courts in which those cases were litigated. The result is a comprehensive picture of the outcomes of modern patent litigation, one that confirms conventional wisdom in some respects but upends it in others. In particular, we find a surprising amount of continuity in the basic outcomes of patent lawsuits over the past twenty years, despite rather dramatic changes in who brought patent suits during that time. --- paper_title: SIMPLE: A Strategic Information Mining Platform for Licensing and Execution paper_content: Intellectual Properties (IP), such as patents and trademarks, are one of the most critical assets in today’s enterprises and research organizations. They represent the core innovation and differentiators of an organization. When leveraged effectively, they not only protect a business from its competition, but also generate significant opportunities in licensing, execution, long term research and innovation. In certain industries, e.g., the pharmaceutical industry, patents lead to multi-billion dollar revenue per year. In this paper, we present a holistic information mining solution, called SIMPLE, which mines a large corpus of patents and scientific literature for insights. Unlike much prior work that deals with specific aspects of analytics, SIMPLE is an integrated and end-to-end IP analytics solution which addresses a wide range of challenges in patent analytics such as the data complexity, scale, and nomenclature issues. It encompasses techniques for patent data processing and modeling, analytics algorithms, web interface and web services for analytics service delivery and end-user interaction. We use real-world case studies to demonstrate the effectiveness of SIMPLE. --- paper_title: Exploratory analytics on patent data sets using the SIMPLE platform paper_content: Exploratory Analytics is the process of analyzing data for the purpose of forming hypotheses. Patent data sets, because they are relatively large and diverse and because they consist of a mixture of structured and unstructured information, present a formidable challenge and a great opportunity in applying exploratory analytics techniques.
In this paper we describe methods we have implemented for effective exploratory analytics on patent data sets using an interactive approach and a web based software tool called SIMPLE. We use real-world case studies to demonstrate the effectiveness of our exploratory analytics approach in the discovery of useful information from the patent corpus. --- paper_title: SIMPLE: Interactive Analytics on Patent Data paper_content: Intellectual Properties (IP), such as patents and trademarks, are one of the most critical assets in today’s enterprises and research organizations. They represent the core innovation and differentiators of an organization. When leveraged effectively, they not only protect freedom of action, but also generate significant opportunities in licensing, execution, long term research and innovation. In this paper, we expand upon a previous paper describing a solution called SIMPLE, which mines large corpus of patents and scientific literature for insights. In this paper we focus on the interactive analytics aspects of SIMPLE, which allow the analyst to explore large unstructured information collections containing mixed information in a dynamic way. We use real-world case studies to demonstrate the effectiveness of interactive analytics in SIMPLE. ---
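A recurring building block in the query-generation and term-selection work collected above is scoring candidate query terms by how strongly the query patent's language model diverges from the collection model, so that terms frequent in the patent but rare elsewhere are promoted. The Python fragment below is a minimal illustrative sketch of that Kullback-Leibler-style weighting only; the Dirichlet smoothing scheme, the parameter mu, the function name kl_term_scores, and the toy token lists are assumptions made for the example and are not taken from any of the cited systems.

import math
from collections import Counter

def kl_term_scores(doc_tokens, collection_tokens, mu=2000.0):
    """Score each term t by p(t|doc) * log(p(t|doc) / p(t|collection)).

    Terms frequent in the query patent but rare in the collection score high.
    mu is a Dirichlet smoothing parameter; its value here is an assumption.
    """
    doc_tf = Counter(doc_tokens)
    col_tf = Counter(collection_tokens)
    doc_len = sum(doc_tf.values())
    col_len = sum(col_tf.values())
    scores = {}
    for t, tf in doc_tf.items():
        p_col = (col_tf.get(t, 0) + 1.0) / (col_len + len(col_tf))  # add-one smoothed background model
        p_doc = (tf + mu * p_col) / (doc_len + mu)                  # Dirichlet-smoothed document model
        scores[t] = p_doc * math.log(p_doc / p_col)
    return scores

if __name__ == "__main__":
    # Hypothetical token streams standing in for a query patent and the collection.
    doc = "rotor blade pitch control actuator rotor blade pitch".split()
    collection = ("the of and for rotor method system device control apparatus " * 50).split()
    top_terms = sorted(kl_term_scores(doc, collection).items(), key=lambda kv: -kv[1])[:5]
    print(top_terms)

In a real prior-art setting, the top-scoring terms would form the automatically generated query that is then run against the patent collection, possibly filtered by IPC/CPC codes as several of the references above suggest.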
Title: Patent Retrieval: A Literature Review Section 1: Introduction Description 1: Write about the significance of patents, why patent retrieval is critical, and the unique challenges faced in this field. Section 2: Patent documents and kind codes Description 2: Describe the structure of patent documents, the various elements within them, and the importance of kind codes. Section 3: Patent classification Description 3: Explain how patents are classified using different systems like IPC and CPC, and the relevance of these classifications. Section 4: Patent families Description 4: Define what patent families are and discuss their relevance in the context of patent retrieval and prior art search. Section 5: Data and evaluation tracks Description 5: Provide an overview of various evaluation tracks and data collections used in patent data analysis, focusing on their role in benchmarking patent retrieval systems. Section 6: Patent retrieval tasks Description 6: Discuss the different tasks within patent retrieval, such as prior art search, patentability search, infringement search, and more. Section 7: Patent retrieval methods Description 7: Present a comprehensive review of different methods in patent retrieval, categorized by their approach (e.g., keyword-based, PRF, semantic-based, metadata-based, interactive). Section 8: Related topics Description 8: Lightly touch on related topics such as patent quality assessment, litigation, and technology licensing, highlighting key challenges and opportunities. Section 9: Concluding remarks Description 9: Summarize the key points covered in the review, discuss the current state of the art, and suggest future directions for research in patent retrieval.
Behavior Rule Specification-based Intrusion Detection for Safety Critical Medical Cyber Physical Systems: A Review
9
---
Title: Behavior Rule Specification-based Intrusion Detection for Safety Critical Medical Cyber Physical Systems: A Review Section 1: Introduction Description 1: Introduce the topic of intrusion detection in safety-critical medical cyber-physical systems and the importance of behavior rule specification. Section 2: Background and Related Work Description 2: Provide background information on cyber-physical systems, particularly in the context of medical applications. Review related work in the field of intrusion detection. Section 3: Behavior Rule Specification Description 3: Explain what behavior rule specification is and how it is used in the context of intrusion detection systems. Section 4: Intrusion Detection Techniques Description 4: Review various intrusion detection techniques that utilize behavior rule specification. Section 5: Challenges in Medical CPS Description 5: Discuss the unique challenges faced when implementing intrusion detection systems in safety-critical medical cyber-physical systems. Section 6: Evaluation Metrics Description 6: Identify and explain the metrics used to evaluate the effectiveness of behavior rule specification-based intrusion detection systems. Section 7: Case Studies Description 7: Present case studies where behavior rule specification-based intrusion detection has been applied in medical cyber-physical systems. Section 8: Future Directions Description 8: Discuss the future research directions and potential improvements in behavior rule specification-based intrusion detection systems. Section 9: Conclusion Description 9: Summarize the key points covered in the review and the significance of behavior rule specification in enhancing the security of medical cyber-physical systems.
Survey of Region-Based Text Extraction Techniques for Efficient Indexing of Image/Video Retrieval
9
--- paper_title: Content Based Image and Video Retrieval Using Embedded Text paper_content: Extraction of text from image and video is an important step in building efficient indexing and retrieval systems for multimedia databases. We adopt a hybrid approach for such text extraction by exploiting a number of characteristics of text blocks in color images and video frames. Our system detects both caption text as well as scene text of different font, size, color and intensity. We have developed an application for on-line extraction and recognition of texts from videos. Such texts are used for retrieval of video clips based on any given keyword. The application is available on the web for the readers to repeat our experiments and also to try text extraction and retrieval from their own videos. --- paper_title: Robust extraction of text in video paper_content: Despite advances in the archiving of digital video, we are still unable to efficiently search and retrieve the portions that interest us. Video indexing by shot segmentation has been a proposed solution and several research efforts are seen in the literature. Shot segmentation alone cannot solve the problem of content based access to video. Recognition of text in video has been proposed as an additional feature. Several research efforts are found in the literature for text extraction from complex images and video with applications for video indexing. We present an update of our system for detection and extraction of an unconstrained variety of text from general purpose video. The text detection results from a variety of methods are fused and each single text instance is segmented to enable it for OCR. Problems in segmenting text from video are similar to those faced in detection and localization phases. Video has low resolution and the text often has poor contrast with a changing background. The proposed system applies a variety of methods and takes advantage of the temporal redundancy in video resulting in good text segmentation. --- paper_title: Extraction and recognition of artificial text in multimedia documents paper_content: Abstract The systems currently available for content based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge into the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by key word based queries. In this paper we present an algorithm to localise artificial text in images and videos using a measure of accumulated gradients and morphological processing. The quality of the localised text is improved by robust multiple frame integration. A new technique for the binarisation of the text boxes based on a criterion maximizing local contrast is proposed. Finally, detection and OCR results for a commercial OCR are presented, justifying the choice of the binarisation technique. --- paper_title: ANSES: summarisation of news video paper_content: We describe the Automatic News Summarisation and Extraction System (ANSES), which captures television news each day with the accompanying subtitles and identifies and extracts news stories from the video. Lexical chain analysis is used to provide a summary of each story and important entities are highlighted in the text. 
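Several of the detectors cited above, for example the accumulated-gradient approach to artificial text, start from the observation that image rows crossing a text line contain unusually dense horizontal gradients. The fragment below is only a rough sketch of that general idea on a synthetic grayscale frame; the window size, the threshold factor rel_thresh, and the function name text_row_candidates are illustrative assumptions rather than parameters of the cited systems.

import numpy as np

def text_row_candidates(gray, win=8, rel_thresh=2.0):
    """gray: 2-D grayscale frame. Returns indices of rows with high accumulated gradient."""
    gx = np.abs(np.diff(gray.astype(np.float64), axis=1))        # horizontal gradient magnitude
    row_energy = gx.sum(axis=1)                                   # accumulated gradient per row
    smoothed = np.convolve(row_energy, np.ones(win) / win, mode="same")
    return np.where(smoothed > rel_thresh * smoothed.mean())[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 50, (120, 160)).astype(np.float64)
    frame[40:56, 20:140] += rng.integers(100, 200, (16, 120))     # synthetic high-contrast "caption" band
    print(text_row_candidates(frame))                             # should list rows around 40-55

A full system would then refine such candidate bands with morphological processing, connected-component analysis, or a trained verifier, as the references above describe.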
--- paper_title: Accessing textual information embedded in Internet images paper_content: Indexing and searching for WWW pages is relying on analysing text. Current technology cannot process the text embedded in images on WWW pages. This paper argues that this is a significant problem as text in image form is usually semantically important (e.g. headers, titles). The results of a recent study are presented to show that the majority (76%) of words embedded in images do not appear elsewhere in the main text and that the majority (56%) of ALT tag descriptions of images are incorrect or do not exist at all. Research under way to devise tools to extract text from images based on the way humans perceive colour differences is outlined and results are presented. --- paper_title: T-HOG: An effective gradient-based descriptor for single line text regions paper_content: We discuss the use of histogram of oriented gradients (HOG) descriptors as an effective tool for text description and recognition. Specifically, we propose a HOG-based texture descriptor (T-HOG) that uses a partition of the image into overlapping horizontal cells with gradual boundaries, to characterize single-line texts in outdoor scenes. The input of our algorithm is a rectangular image presumed to contain a single line of text in Roman-like characters. The output is a relatively short descriptor that provides an effective input to an SVM classifier. Extensive experiments show that the T-HOG is more accurate than Dalal and Triggs's original HOG-based classifier, for any descriptor size. In addition, we show that the T-HOG is an effective tool for text/non-text discrimination and can be used in various text detection applications. In particular, combining T-HOG with a permissive bottom-up text detector is shown to outperform state-of-the-art text detection systems in two major publicly available databases. --- paper_title: Detecting text in natural scenes with stroke width transform paper_content: We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages. --- paper_title: Scene text detection using graph model built upon maximally stable extremal regions paper_content: Scene text detection could be formulated as a bi-label (text and non-text regions) segmentation problem. However, due to the high degree of intraclass variation of scene characters as well as the limited number of training samples, single information source or classifier is not enough to segment text from non-text background. Thus, in this paper, we propose a novel scene text detection approach using graph model built upon Maximally Stable Extremal Regions (MSERs) to incorporate various information sources into one framework. Concretely, after detecting MSERs in the original image, an irregular graph whose nodes are MSERs, is constructed to label MSERs as text regions or non-text ones. 
Carefully designed features contribute to the unary potential to assess the individual penalties for labeling a MSER node as text or non-text, and color and geometric features are used to define the pairwise potential to punish the likely discontinuities. By minimizing the cost function via graph cut algorithm, different information carried by the cost function could be optimally balanced to get the final MSERs labeling result. The proposed method is naturally context-relevant and scale-insensitive. Experimental results on the ICDAR 2011 competition dataset show that the proposed approach outperforms state-of-the-art methods both in recall and precision. --- paper_title: Text detection and recognition in images and video frames paper_content: Text embedded in images and videos represents a rich source of information for content-based indexing and retrieval applications. In this paper, we present a new method for localizing and recognizing text in complex images and videos. Text localization is performed in a two step approach that combines the speed of a focusing step with the strength of a machine learning based text verification step. The experiments conducted show that the support vector machine is more appropriate for the verification task than the more commonly used neural networks. To perform text recognition on the localized regions, we propose a new multi-hypotheses method. Assuming different models of the text image, several segmentation hypotheses are produced. They are processed by an optical character recognition (OCR) system, and the result is selected from the generated strings according to a confidence value computed using language modeling and OCR statistics. Experiments show that this approach leads to much better results than the conventional method that tries to improve the individual segmentation algorithm. The whole system has been tested on several hours of videos and showed good performance when integrated in a sports video annotation system and a video indexing system within the framework of two European projects. --- paper_title: Video OCR: indexing digital news libraries by recognition of superimposed captions paper_content: The automatic extraction and recognition of news captions and annotations can be of great help locating topics of interest in digital news video libraries. To achieve this goal, we present a technique, called Video OCR (Optical Character Reader), which detects, extracts, and reads text areas in digital video data. In this paper, we address problems, describe the method by which Video OCR operates, and suggest applications for its use in digital news archives. To solve two problems of character recognition for videos, low-resolution characters and extremely complex backgrounds, we apply an interpolation filter, multiframe integration and character extraction filters. Character segmentation is performed by a recognition-based segmentation method, and intermediate character recognition results are used to improve the segmentation. We also include a method for locating text areas using text-like properties and the use of a language-based postprocessing technique to increase word recognition rates. The overall recognition results are satisfactory for use in news indexing. Performing Video OCR on news video and combining its results with other video understanding techniques will improve the overall understanding of the news video content. 
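The Video OCR work cited just above leans on multi-frame integration: a superimposed caption stays fixed over many frames while the background changes, so combining the frames pixel-wise strengthens the caption and suppresses the background. The sketch below shows only that single step, under the assumption of bright captions and a caller-supplied window of frames; it is not the cited implementation.

import numpy as np

def integrate_frames(frames, bright_text=True):
    """frames: list of 2-D grayscale arrays spanning (part of) a caption's lifetime."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    # A pixel-wise minimum keeps only pixels that are bright in every frame (static bright
    # captions); for dark text on a changing background a maximum would be used instead.
    return stack.min(axis=0) if bright_text else stack.max(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 120, (90, 160)).astype(np.float64) for _ in range(15)]
    for f in frames:
        f[60:75, 30:130] = 230.0             # the same bright caption region in every frame
    integrated = integrate_frames(frames)
    print(integrated[65, 80], integrated[10, 10])   # caption pixel stays high, background pixel drops

The integrated image would then be interpolated, filtered, and binarized before character recognition, as the abstracts above outline.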
--- paper_title: Detection of Curved Text in Video: Quad Tree Based Method paper_content: In this paper, we address curved text detection in video through a new enhancement criterion and the use of quad tree. The proposed method makes use of the quad tree to simplify the task of handling the entire frame at each stage. The proposed method employs a novel criterion for grouping of pixels based on their R, G and B values to enhance text information. As generally, a text detection problem is a two class problem, we used k-means with k=2 to identify potential text candidate pixels. From these potential candidates, connected components are then extracted and subjected to further analysis, where symmetry property based on stroke width is used for further authentication of the text representatives. These authenticated text representatives are then exploited as seed points to restore the text information with reference to the Sobel edge frame of the original input frame. To preserve the spatial information of text pixels the concept of quad tree is applied. From these seed blocks, text lines are extracted by the use of a region growing approach driven completely based on Sobel edge map. The proposed method is tested on curved video data and Hua's horizontal video text data in terms of recall, precision, f-measure, misdetection rate and processing time. The results are compared and analyzed to show that the proposed method outperforms several existing methods in terms of accuracy and efficiency. --- paper_title: A robust video text detection approach using SVM paper_content: A new method for detecting text in video images is proposed in this article. Variations in background complexity, font size and color, make detecting text regions in video images a difficult task. A pyramidal scheme is utilized to solve these problems. First, two downsized images are generated by bilinear interpolation from the original image. Then, the gradient difference of each pixel is calculated for three differently sized images, including the original one. Next, three K-means clustering procedures are applied to separate all the pixels of the three gradient difference images into two clusters: text and non-text, separately. The K-means clustering results are then combined to form the text regions. Thereafter, projection profile analysis is applied to the Sobel edge map of each text region to determine the boundaries of candidate text regions. Finally, we identify text candidates through two verification phases. In the first verification phase, we verify the geometrical properties and texture of each text candidate. In the second verification phase, statistical characteristics of the text candidate are computed using a discrete wavelet transform, and then the principal component analysis is further used to reduce the number of dimensions of these features. Next, the optimal decision function of the support vector machine, obtained by sequential minimal optimization, is applied to determine whether the text candidates contain texts or not. --- paper_title: A Laplacian Method for Video Text Detection paper_content: In this paper, we propose an efficient text detection method based on the Laplacian operator. The maximum gradient difference value is computed for each pixel in the Laplacian-filtered image. K-means is then used to classify all the pixels into two clusters: text and non-text. 
For each candidate text region, the corresponding region in the Sobel edge map of the input image undergoes projection profile analysis to determine the boundary of the text blocks. Finally, we employ empirical rules to eliminate false positives based on geometrical properties. Experimental results show that the proposed method is able to detect text of different fonts, contrast and backgrounds. Moreover, it outperforms three existing methods in terms of detection and false positive rates. --- paper_title: Detecting texts of arbitrary orientations in natural images paper_content: With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes. --- paper_title: A new approach for video text detection paper_content: Text detection is fundamental to video information retrieval and indexing. Existing methods cannot handle well those texts with different contrast or embedded in a complex background. To handle these difficulties, this paper proposes an efficient text detection approach, which is based on invariant features, such as edge strength, edge density, and horizontal distribution. First, it applies edge detection and uses a low threshold to filter out definitely non-text edges. Then, a local threshold is selected to both keep low-contrast text and simplify complex background of high-contrast text. Next, two text-area enhancement operators are proposed to highlight those areas with either high edge strength or high edge density. Finally, coarse-to-fine detection locates text regions efficiently. Experimental results show that this approach is robust for contrast, font-size, font-color, language, and background complexity. --- paper_title: A New Method for Arbitrarily-Oriented Text Detection in Video paper_content: Text detection in video frames plays a vital role in enhancing the performance of information extraction systems because the text in video frames helps in indexing and retrieving video efficiently and accurately. This paper presents a new method for arbitrarily-oriented text detection in video, based on dominant text pixel selection, text representatives and region growing. The method uses gradient pixel direction and magnitude corresponding to Sobel edge pixels of the input frame to obtain dominant text pixels. Edge components in the Sobel edge map corresponding to dominant text pixels are then extracted and we call them text representatives. We eliminate broken segments of each text representatives to get candidate text representatives. 
Then the perimeter of candidate text representatives grows along the text direction in the Sobel edge map to group the neighboring text components which we call word patches. The word patches are used for finding the direction of text lines and then the word patches are expanded in the same direction in the Sobel edge map to group the neighboring word patches and to restore missing text information. This results in extraction of arbitrarily-oriented text from the video frame. To evaluate the method, we considered arbitrarily-oriented data, non-horizontal data, horizontal data, Hua's data and ICDAR-2003 competition data (Camera images). The experimental results show that the proposed method outperforms the existing method in terms of recall and f-measure. --- paper_title: Video OCR for digital news archive paper_content: Video OCR is a technique that can greatly help to locate topics of interest in a large digital news video archive via the automatic extraction and reading of captions and annotations. News captions generally provide vital search information about the video being presented, the names of people and places or descriptions of objects. In this paper, two difficult problems of character recognition for videos are addressed: low resolution characters and extremely complex backgrounds. We apply an interpolation filter, multi-frame integration and a combination of four filters to solve these problems. Segmenting characters is done by a recognition-based segmentation method and intermediate character recognition results are used to improve the segmentation. The overall recognition results are good enough for use in news indexing. Performing video OCR on news video and combining its results with other video understanding techniques will improve the overall understanding of the news video content. --- paper_title: A combined corner and edge detector paper_content: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed. --- paper_title: Text From Corners: A Novel Approach to Detect Text and Caption in Videos paper_content: Detecting text and caption from videos is important and in great demand for video retrieval, annotation, indexing, and content analysis. In this paper, we present a corner based approach to detect text and caption from videos. This approach is inspired by the observation that there exist dense and orderly presences of corner points in characters, especially in text and caption. We use several discriminative features to describe the text regions formed by the corner points. The usage of these features is in a flexible manner, thus, can be adapted to different applications. Language independence is an important advantage of the proposed method. Moreover, based upon the text features, we further develop a novel algorithm to detect moving captions in videos. 
In the algorithm, the motion features, extracted by optical flow, are combined with text features to detect the moving caption patterns. The decision tree is adopted to learn the classification criteria. Experiments conducted on a large volume of real video shots demonstrate the efficiency and robustness of our proposed approaches and the real-world system. Our text and caption detection system was recently highlighted in a worldwide multimedia retrieval competition, Star Challenge, by achieving the superior performance with the top ranking. --- paper_title: Automatic location of text in video frames paper_content: A new automatic text location approach for videos is proposed. First of all, the corner points of the selected video frames are detected. After deleting some isolate corners, we merge the remaining corners to form candidate text regions. The regions are then decomposed vertically and horizontally using edge maps of the video frames to get candidate text lines. Finally, a text box verification step based on the feature derived from edge maps is taken to significantly reduce false alarms. Experimental results show that the new text location scheme proposed in this paper is accurate. --- paper_title: A novel text detection and localization method based on corner response paper_content: Information of text in videos and images plays an important role in semantic analysis. In this paper, we propose an effective method for text detection and localization in noisy background. The algorithm is based on corner response. Compared to non-text regions, there often exist dense edges and corners in text regions. So we can get relatively strong responses from text regions and low responses from non-text regions. These responses provide us useful cues for text detection and localization in images. Then using a simple block based threshold scheme, we get candidate regions for text. These regions are further verified by combining other features such as color and size range of connected component. Finally, Text line is located accurately by the projection of corner response. The experimental results show the effectiveness of our methods. --- paper_title: Video caption duration extraction paper_content: Caption detection in the video is an active research topic in recent years. In the conventional methods, one of most difficult problems is to effectively and quickly extract the durations of the different-size captions in the complex background. To solve this problem, a novel and effective method is presented to locate and track the captions in the video. The main contributions are: (1)present a multi-scale Harris-corner based method to detect the initial position of the caption (2)propose the SGF (Steady Global Feature) to determine the caption duration. Extensive experiments demonstrate the effectiveness of the proposed method. --- paper_title: Gaussian mixture modeling and learning of neighboring characters for multilingual text extraction in images paper_content: This paper proposes an approach based on the statistical modeling and learning of neighboring characters to extract multilingual texts in images. The case of three neighboring characters is represented as the Gaussian mixture model and discriminated from other cases by the corresponding 'pseudo-probability' defined under Bayes framework. 
Based on this modeling, text extraction is completed through labeling each connected component in the binary image as character or non-character according to its neighbors, where a mathematical morphology based method is introduced to detect and connect the separated parts of each character, and a Voronoi partition based method is advised to establish the neighborhoods of connected components. We further present a discriminative training algorithm based on the maximum-minimum similarity (MMS) criterion to estimate the parameters in the proposed text extraction approach. Experimental results in Chinese and English text extraction demonstrate the effectiveness of our approach trained with the MMS algorithm, which achieved the precision rate of 93.56% and the recall rate of 98.55% for the test data set. In the experiments, we also show that the MMS provides significant improvement of overall performance, compared with influential training criterions of the maximum likelihood (ML) and the maximum classification error (MCE). --- paper_title: A Robust Wavelet Transform Based Technique for Video Text Detection paper_content: In this paper, we propose a new method based on wavelet transform, statistical features and central moments for both graphics and scene text detection in video images. The method uses wavelet single level decomposition LH, HL and HH subbands for computing features and the computed features are fed to k means clustering to classify the text pixel from the background of the image. The average of wavelet subbands and the output of k means clustering helps in classifying true text pixel in the image. The text blocks are detected based on analysis of projection profiles. Finally, we introduce a few heuristics to eliminate false positives from the image. The robustness of the proposed method is tested by conducting experiments on a variety of images of low contrast, complex background, different fonts, and size of text in the image. The experimental results show that the proposed method outperforms the existing methods in terms of detection rate, false positive rate and misdetection rate. --- paper_title: Automatic text location in complex color images using local color quantization paper_content: An efficient text location method in complex color images is proposed. To deal with characters on complex background colors, local color quantization is done separately for each color instead of all colors. A candidate text line is extracted merging connected components and confirmed using heuristics based on color profile. I tested the proposed method on complex book cover images and it is shown to be successful in extracting texts in complex color images. --- paper_title: Scene Text Extraction Using Focus of Mobile Camera paper_content: Robust extraction of text from scene images is essential for successful scene text recognition. Scene images usually have non-uniform illumination, complex background, and existence of text-like objects. The common assumption of a homogeneous text region on a nearly uniform background cannot be maintained in real applications. We proposed a text extraction method that utilizes user's hint on the location of the text within the image. A resizable square rim in the viewfinder of the mobile camera, referred to here as a 'focus', is the interface used to help the user indicate the target text. With the hint from the focus, the color of the target text is easily estimated by clustering colors only within the focused section. 
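As a rough illustration of this focus-driven color estimation step, the sketch below clusters the pixels inside a user-given focus box with k-means and binarizes the image by distance to the estimated text color; the cluster count, the smallest-cluster heuristic, and the distance threshold are assumptions chosen for illustration, not details taken from the cited paper.

import numpy as np
from sklearn.cluster import KMeans

def estimate_text_color(image, focus_box, n_clusters=3):
    # image: H x W x 3 uint8 array; focus_box: (top, bottom, left, right)
    t, b, l, r = focus_box
    pixels = image[t:b, l:r].reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # Heuristic (assumption): text strokes cover less area than the background,
    # so take the smallest cluster's centre as the text color.
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[np.argmin(sizes)]

def binarize_by_color(image, text_color, max_dist=60.0):
    # Mark every pixel whose RGB value lies close to the estimated text color.
    dist = np.linalg.norm(image.astype(float) - text_color, axis=2)
    return dist < max_dist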
Image binarization with the estimated color is performed to extract connected components. After obtaining the text region within the focused section, the text region is expanded iteratively by searching neighboring regions with the updated text color. Such an iterative method would prevent the problem of one text region being separated into more than one component due to non-uniform illumination and reflection. A text verification process is conducted on the extracted components to determine the true text region. It is demonstrated that the proposed method achieved high accuracy of text extraction for moderately difficult examples from the ICDAR 2003 database.
--- paper_title: Selecting variables for k-means cluster analysis by using a genetic algorithm that optimises the silhouettes paper_content: The goal of the present work is to analyse the effect of having non-informative variables (NIV) in a data set when applying cluster analysis and to propose a method computationally capable of detecting and removing these variables. The method proposed is based on the use of a genetic algorithm to select those variables important to make the presence of groups in data clear. The procedure has been implemented to be used with k-means and using the cluster silhouettes as the fitness function for the genetic algorithm. The main problem that can appear when applying the method to real data is the fact that, in general, we do not know a priori what the real cluster structure is (number and composition of the groups). The work explores the evolution of the silhouette values computed from the clusters built by using k-means when non-informative variables are added to the original data set in both a literature data set as well as some simulated data in higher dimension. The procedure has also been applied to real data sets.
--- paper_title: A method for text localization and recognition in real-world images paper_content: A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypotheses-verification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm, eliminating the need for time-consuming acquisition and labeling of real-world training data, and (iii) exploits Maximally Stable Extremal Regions (MSERs), which provide robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72% is achieved, 18% higher than the state-of-the-art. The paper is the first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for a number of alphabets and the method is easily adapted to recognition of other scripts, e.g., Cyrillic.
--- paper_title: Scene Text Extraction in HSI Color Space using K-means Algorithm and Modified Cylindrical Distance paper_content: Text extraction, that is, segmentation of characters from the background, is an especially important step that greatly determines final recognition performance. Particular focus is put on this task for scene text, which is characterized by a wide set of degradations such as complex backgrounds, uneven illumination, viewing angle, etc. In this paper we introduce a text extraction method based on k-means clustering with a modified cylindrical distance in HSI color space.
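One possible form of such a cylindrical distance, treating hue as an angle and allowing the chromatic term to be down-weighted when chroma is unreliable, is sketched below; the exact modification used in the cited work may differ.

import numpy as np

def cylindrical_distance(p, q, chroma_weight=1.0):
    # p, q: (hue in radians, saturation, intensity) triples.
    h1, s1, i1 = p
    h2, s2, i2 = q
    dh = np.arctan2(np.sin(h1 - h2), np.cos(h1 - h2))          # wrapped hue difference
    chroma2 = s1 ** 2 + s2 ** 2 - 2.0 * s1 * s2 * np.cos(dh)   # chord in the hue-saturation plane
    return np.sqrt((i1 - i2) ** 2 + chroma_weight * chroma2)

# Inside k-means, each pixel is assigned to the centre that minimizes this
# distance; chroma_weight can be lowered when saturation makes hue unreliable.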
Performance of this distance is analyzed depending on different degrees of chroma reliability. For purpose of result comparison, K-means text extraction is also performed with cylindrical distance in HSI color space and Euclidean distance in RGB color space. Complementarity of tested distances is also analyzed showing possible direction for further performance improvement. --- paper_title: Automatic text detection using multi-layer color quantization in complex color images paper_content: News, magazines, Web pages, etc. in modern life always contain much text information. In this paper, we propose a novel approach to detect text in images with very low false alarm rate. First of all, neural network color quantization is used to compact text color. Second, 3D histogram analysis chooses several color candidates, and then extracts each of these color candidates to obtain several bi-level images. Moreover, for each bi-level image, connectivity analysis and some morphological operators are fed to produce character candidates. Furthermore, we calculate some spatial features and relationships of each text candidate. Finally, we can localize text regions by authentication from LOG (Laplacian of Gaussian) edge detector. Meanwhile, in complex color images, multi-quantization layers can be integrated to reject non-text parts and reduce false alarm rate --- paper_title: Text locating in scene images for reading and navigation aids for visually impaired persons paper_content: Many reading assistants and navigation systems have been designed specifically for people who are blind or visually impaired, but text locating in scene image with complex background has not yet been successfully addressed. In this paper, we propose a novel method to locate scene text by combining color uniformity and high edge density together. We perform structural analysis of text strings which contain several characters in alignment. First, we calculate the edge image and then repaint the corresponding edge pixels in the original image by using a non-dominant color. Second, color reduction is performed by color histogram and K-means algorithms to segment the repainted image into color layers. Third, we perform edge detection and label the boundaries of both text characters and unexpected noises in each color layer. Each centroid is assigned a degree which is the number of overlap in the same position among color layers. Fourth, text line fitting among centroids with high degree is performed to cascade the character boundaries which belong to the same text string. The detected text string is presented by a rectangle region covering all character boundaries in its text line. Experimental results demonstrate that our algorithm is able to locate text strings with arbitrary orientations. The performance of our algorithm is comparable with the state-of-art algorithms. --- paper_title: From Text Detection in Videos to Person Identification paper_content: We present in this article a video OCR system that detects and recognizes overlaid texts in video as well as its application to person identification in video documents. We proceed in several steps. First, text detection and temporal tracking are performed. After adaptation of images to a standard OCR system, a final post-processing combines multiple transcriptions of the same text box. The semi-supervised adaptation of this system to a particular video type (video broadcast from a French TV) is proposed and evaluated. 
The system is efficient as it runs 3 times faster than real time (including the OCR step) on a desktop Linux box. Both text detection and recognition are evaluated individually and through a person recognition task where it is shown that the combination of OCR and audio (speaker) information can greatly improve the performances of a state of the art audio based person identification system. --- paper_title: Text extraction using edge detection and morphological dilation paper_content: In this paper, a method used to extract text regions from document images is proposed. It first finds edges by computing the gradient of each location. Then long edges, such as lines and isolated edges which are often due to noise, are removed. Finally, this edge map is morphologically dilated. Experimentally, the proposed method is robust to noise, skew and text orientation. --- paper_title: MULTIRESOLUTION TEXT DETECTION IN VIDEO FRAMES paper_content: This paper proposes an algorithm for detecting artificial text in video frames using edge information. First, an edge map is created using the Canny edge detector. Then, morphological dilation and opening are used in order to connect the vertical edges and eliminate false alarms. Bounding boxes are determined for every non-zero valued connected component, consisting the initial candidate text areas. Finally, an edge projection analysis is applied, refining the result and splitting text areas in text lines. The whole algorithm is applied in different resolutions to ensure text detection with size variability. Experimental results prove that the method is highly effective and efficient for artificial text detection. --- paper_title: Snoopertext: A multiresolution system for text detection in complex visual scenes paper_content: Text detection in natural images remains a very challenging task. For instance, in an urban context, the detection is very difficult due to large variations in terms of shape, size, color, orientation, and the image may be blurred or have irregular illumination, etc. In this paper, we describe a robust and accurate multiresolution approach to detect and classify text regions in such scenarios. Based on generation/validation paradigm, we first segment images to detect character regions with a multiresolution algorithm able to manage large character size variations. The segmented regions are then filtered out using shapebased classification, and neighboring characters are merged to generate text hypotheses. A validation step computes a region signature based on texture analysis to reject false positives. We evaluate our algorithm in two challenging databases, achieving very good results. --- paper_title: Detecting texts of arbitrary orientations in natural images paper_content: With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. 
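Several of the edge-based detectors above follow the same skeleton: build an edge map, dilate it to fuse character edges, label connected components, and keep components whose geometry looks like a text line. A minimal sketch with SciPy, using arbitrary structuring-element sizes and geometric thresholds:

import numpy as np
from scipy.ndimage import binary_dilation, binary_opening, label, find_objects

def candidate_text_boxes(edge_map, min_width=20, max_height=80):
    # edge_map: 2-D boolean array (e.g. a thresholded Canny or Sobel edge map).
    horiz = np.ones((1, 7), dtype=bool)                        # fuse edges horizontally
    merged = binary_dilation(edge_map, structure=horiz, iterations=3)
    merged = binary_opening(merged, structure=np.ones((3, 3), dtype=bool))
    labels, _ = label(merged)
    boxes = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if w >= min_width and h <= max_height and w > h:       # text lines are wide and short
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes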
Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes. --- paper_title: Scene text detection using graph model built upon maximally stable extremal regions paper_content: Scene text detection could be formulated as a bi-label (text and non-text regions) segmentation problem. However, due to the high degree of intraclass variation of scene characters as well as the limited number of training samples, single information source or classifier is not enough to segment text from non-text background. Thus, in this paper, we propose a novel scene text detection approach using graph model built upon Maximally Stable Extremal Regions (MSERs) to incorporate various information sources into one framework. Concretely, after detecting MSERs in the original image, an irregular graph whose nodes are MSERs, is constructed to label MSERs as text regions or non-text ones. Carefully designed features contribute to the unary potential to assess the individual penalties for labeling a MSER node as text or non-text, and color and geometric features are used to define the pairwise potential to punish the likely discontinuities. By minimizing the cost function via graph cut algorithm, different information carried by the cost function could be optimally balanced to get the final MSERs labeling result. The proposed method is naturally context-relevant and scale-insensitive. Experimental results on the ICDAR 2011 competition dataset show that the proposed approach outperforms state-of-the-art methods both in recall and precision. --- paper_title: Extraction and recognition of artificial text in multimedia documents paper_content: Abstract The systems currently available for content based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge into the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by key word based queries. In this paper we present an algorithm to localise artificial text in images and videos using a measure of accumulated gradients and morphological processing. The quality of the localised text is improved by robust multiple frame integration. A new technique for the binarisation of the text boxes based on a criterion maximizing local contrast is proposed. Finally, detection and OCR results for a commercial OCR are presented, justifying the choice of the binarisation technique. --- paper_title: Snoopertext: A multiresolution system for text detection in complex visual scenes paper_content: Text detection in natural images remains a very challenging task. For instance, in an urban context, the detection is very difficult due to large variations in terms of shape, size, color, orientation, and the image may be blurred or have irregular illumination, etc. In this paper, we describe a robust and accurate multiresolution approach to detect and classify text regions in such scenarios. Based on generation/validation paradigm, we first segment images to detect character regions with a multiresolution algorithm able to manage large character size variations. 
The segmented regions are then filtered out using shapebased classification, and neighboring characters are merged to generate text hypotheses. A validation step computes a region signature based on texture analysis to reject false positives. We evaluate our algorithm in two challenging databases, achieving very good results. --- paper_title: Locating text in complex color images paper_content: Abstract There is a substantial interest in retrieving images from a large database using the textual information contained in the images. An algorithm which will automatically locate the textual regions in the input image will facilitate this task; the optical character recognizer can then be applied to only those regions of the image which contain text. We present two methods for automatically locating text in complex color images. The first method segments the image into connected components with uniform color, and uses several heuristics (size, alignment, proximity) to select the components which are likely to contain character(s) belonging to the text. The second method computes the local spatial variation in the gray-scale image, and locates text in regions with high variance. A combination of the two approaches is shown to be more effective than the individual methods. The proposed methods have been used to locate text in compact disc (CD) and book cover images, as well as in the images of traffic scenes captured by a video camera. Initial results are encouraging and suggest that these algorithms can be used in image retrieval applications. --- paper_title: Text From Corners: A Novel Approach to Detect Text and Caption in Videos paper_content: Detecting text and caption from videos is important and in great demand for video retrieval, annotation, indexing, and content analysis. In this paper, we present a corner based approach to detect text and caption from videos. This approach is inspired by the observation that there exist dense and orderly presences of corner points in characters, especially in text and caption. We use several discriminative features to describe the text regions formed by the corner points. The usage of these features is in a flexible manner, thus, can be adapted to different applications. Language independence is an important advantage of the proposed method. Moreover, based upon the text features, we further develop a novel algorithm to detect moving captions in videos. In the algorithm, the motion features, extracted by optical flow, are combined with text features to detect the moving caption patterns. The decision tree is adopted to learn the classification criteria. Experiments conducted on a large volume of real video shots demonstrate the efficiency and robustness of our proposed approaches and the real-world system. Our text and caption detection system was recently highlighted in a worldwide multimedia retrieval competition, Star Challenge, by achieving the superior performance with the top ranking. --- paper_title: A robust video text detection approach using SVM paper_content: A new method for detecting text in video images is proposed in this article. Variations in background complexity, font size and color, make detecting text regions in video images a difficult task. A pyramidal scheme is utilized to solve these problems. First, two downsized images are generated by bilinear interpolation from the original image. Then, the gradient difference of each pixel is calculated for three differently sized images, including the original one. 
Next, three K-means clustering procedures are applied to separate all the pixels of the three gradient difference images into two clusters: text and non-text, separately. The K-means clustering results are then combined to form the text regions. Thereafter, projection profile analysis is applied to the Sobel edge map of each text region to determine the boundaries of candidate text regions. Finally, we identify text candidates through two verification phases. In the first verification phase, we verify the geometrical properties and texture of each text candidate. In the second verification phase, statistical characteristics of the text candidate are computed using a discrete wavelet transform, and then the principal component analysis is further used to reduce the number of dimensions of these features. Next, the optimal decision function of the support vector machine, obtained by sequential minimal optimization, is applied to determine whether the text candidates contain texts or not. --- paper_title: Detecting text in natural scenes with stroke width transform paper_content: We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages. --- paper_title: Automatic location of text in video frames paper_content: A new automatic text location approach for videos is proposed. First of all, the corner points of the selected video frames are detected. After deleting some isolate corners, we merge the remaining corners to form candidate text regions. The regions are then decomposed vertically and horizontally using edge maps of the video frames to get candidate text lines. Finally, a text box verification step based on the feature derived from edge maps is taken to significantly reduce false alarms. Experimental results show that the new text location scheme proposed in this paper is accurate. --- paper_title: Extraction and recognition of artificial text in multimedia documents paper_content: Abstract The systems currently available for content based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge into the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by key word based queries. In this paper we present an algorithm to localise artificial text in images and videos using a measure of accumulated gradients and morphological processing. The quality of the localised text is improved by robust multiple frame integration. A new technique for the binarisation of the text boxes based on a criterion maximizing local contrast is proposed. Finally, detection and OCR results for a commercial OCR are presented, justifying the choice of the binarisation technique. 
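A much-simplified sketch of the stroke-width idea behind the operator above (not the full SWT algorithm): from every edge pixel, walk along the gradient direction until an opposing edge is hit and record the traversed distance. The step limit and the gradient-opposition test are simplifications chosen for illustration.

import numpy as np

def stroke_widths(gray, edges, max_steps=50):
    # gray: 2-D float image; edges: 2-D boolean edge map.
    gy, gx = np.gradient(gray)
    norm = np.hypot(gx, gy) + 1e-9
    dx, dy = gx / norm, gy / norm
    h, w = gray.shape
    widths = np.full((h, w), np.inf)                 # inf where no stroke is found
    for y, x in zip(*np.nonzero(edges)):
        for step in range(1, max_steps):
            cy = int(round(y + dy[y, x] * step))
            cx = int(round(x + dx[y, x] * step))
            if not (0 <= cy < h and 0 <= cx < w):
                break
            if edges[cy, cx]:
                # Accept only if the two gradients roughly oppose each other.
                if dx[y, x] * dx[cy, cx] + dy[y, x] * dy[cy, cx] < -0.5:
                    widths[y, x] = step
                break
    return widths

# Pixels with finite, locally consistent widths are grouped into letter
# candidates; near-constant stroke width is the cue that a region is text.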
--- paper_title: Improved text-detection methods for a camera-based text reading system for blind persons paper_content: Automatic text recognition from natural images receives a growing attention because of potential applications in image retrieval, robotics and intelligent transport system. Camera-based document analysis becomes a real possibility with the increasing resolution and availability of digital cameras. Our research objective is a system that reads the text encountered in natural scenes with the aim to provide assistance to visually impaired persons. In the case of a blind person, finding the text region is the first important problem that must be addressed, because it cannot be assumed that the acquired image contains only characters. In a previous paper (N. Ezaki et al., 2004), we propose four text-detection methods based on connected components. Finding small characters needed significant improvement. This paper describes a new text-detection method geared for small text characters. This method uses Fisher's discriminant rate (FDR) to decide whether an image area should be binarized using local or global thresholds. Fusing the new method with a previous morphology-based one yields improved results. Using a controllable Webcam and a laptop PC, we developed a prototype that works in real time. At first, our system tries to find in the image areas with small characters. Then it zooms into the found areas to retake higher resolution images necessary for character recognition. Going from this proof-of-concept to a complete system requires further research effort. --- paper_title: Extraction and Recognition of Text From Digital English Comic Image Using Median Filter paper_content: Text extraction from image is one of the complicated areas in digital image processing. Text characters entrenched in image represents a rich source of information for text retrieval application. It is a complex process to detect and recognize the text from comic image due to their various size, gray scale values, complex backgrounds and different styles of font. Text extraction process from comic image helps to preserve the text and formatting during conversion process and provide high quality of text from the printed document. Automatic text extraction from comic images receives a growing attention because of prospective application in image retrieval. In existing work, Japanese text is extracted vertically from Manga Comic Image using Blob extraction functions. At the same time, text is extracted from multiple constraints using optical character recognition (OCR) and make translation of Japanese language of Manga into some other languages in conventional way to share the enjoyment of reading Manga through the Internet. This paper talks about English text extraction from blob comic image using various methods. --- paper_title: Autonomous Text Capturing Robot Using Improved DCT Feature and Text Tracking paper_content: When an autonomous robot tries to find text in the surrounding scene using an onboard video camera, some duplicate text images appear in the video frames. To avoid recognizing the same text many times, it is necessary to decrease the number of text candidate regions for recognition. This paper presents a text capturing robot that can look around the environment using an active camera. The text candidate regions are extracted from the images using an improved DCT feature. The text regions are tracked in the video sequence so that the number of text images to be recognized is reduced. 
In the experiment, we tested 460 images of a corridor with fifteen signboards including text. The number of text candidate regions is reduced by 90.1% using our text tracking method.
--- paper_title: Automatic Text Detection and Tracking in Digital Video paper_content: Text that appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this work, we present algorithms for detecting and tracking text in digital video. Our system implements a scale-space feature extractor that feeds an artificial neural processor to detect text blocks. Our text tracking scheme consists of two modules: a sum of squared difference (SSD) based module to find the initial position and a contour-based module to refine the position. Experiments conducted with a variety of video sources show that our scheme can detect and track text robustly.
--- paper_title: Caption Text Extraction Using DCT Feature in MPEG Compressed Video paper_content: Caption text provides valuable information about contents in video sequences. In this paper, an efficient method to locate candidate caption text regions of video directly in the DCT compressed domain is proposed. Candidate text blocks are detected in terms of DCT texture energy. A 3×3 median filter is used as a spatial constraint to refine the text regions. An adaptive temporal constraint method is designed based on the fact that the same caption text lasts for at least two seconds. Finally we convert the extracted text regions into HSV color space to generate the binary text images that are required by commercial OCRs. Experimental results on several video sequences show that the proposed algorithm is efficient to detect and extract caption text in MPEG video sequences with various scene complexities.
--- paper_title: An Efficient Video Text Recognition System paper_content: In this paper, we present efficient schemes to utilize multiple frames that contain the same text to get every clear word from these frames.
Firstly, we use multiple-frame tracking to reduce text detection false alarms. We then detect and join every clear text block from those frames to form a clearer integration frame. The Otsu threshold method is then used to generate a binary text image. Finally, the binarized frames are sent to an OCR engine for recognition. The experimental results show that our method can significantly improve the performance of text recognition in video.
--- paper_title: Text detection, localization, and tracking in compressed video paper_content: Video text information plays an important role in semantic-based video analysis, indexing and retrieval. Video texts are closely related to the content of a video. Usually, the fundamental steps of text-based video analysis, browsing and retrieval consist of video text detection, localization, tracking, segmentation and recognition. Video sequences are commonly stored in compressed formats where MPEG coding techniques are often adopted. In this paper, a unified framework for text detection, localization, and tracking in compressed videos using the discrete cosine transform (DCT) coefficients is proposed. A coarse-to-fine text detection method is used to find text blocks in terms of the block DCT texture intensity information. The DCT texture intensity of an 8x8 block of an intra-frame is approximately represented by seven AC coefficients. The candidate text block regions are further verified and refined. The text block region localization and tracking are carried out by virtue of the horizontal and vertical block texture intensity projection profiles. The appearing and disappearing frames of each text line are determined by the text tracking. The final experimental results show the effectiveness of the proposed methods.
--- paper_title: Text-tracking wearable camera system for visually-impaired people paper_content: Disability of visual text reading has a huge impact on the quality of life for visually disabled people. One of the most anticipated devices is a wearable camera capable of finding text regions in natural scenes and translating the text into another representation such as sound or braille. In order to develop such a device, text tracking in video sequences is required as well as text detection. We need to group homogeneous text regions to avoid multiple and redundant speech syntheses or braille conversions. We have developed a prototype system equipped with a head-mounted video camera. Text regions are extracted from the video frames using a revised DCT feature. Particle filtering is employed for fast and robust text tracking. We have tested the performance of our system using 1,000 video frames of a hallway with eight signboards. The number of text candidate images is reduced to 0.98%.
--- paper_title: Tracking text in MPEG videos paper_content: Tracking superimposed text moving across several frames of a video is relevant for exploiting its temporal occurrence for effective video content indexing and retrieval. In this paper, an approach is presented that automatically detects, localizes and tracks text appearing in videos. The proposed approach consists of two steps: (1) unsupervised text detection and localization in every Nth frame to monitor new text events, i.e. text appearing in a video for the first time; (2) text tracking within a group of pictures (GOP) using MPEG motion vector information extracted directly from the compressed video stream. Comparative experimental results for a set of videos are presented to show the benefits of our approach.
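The multi-frame integration plus Otsu binarization used by several of the systems above can be sketched as follows, assuming bright, static captions whose aligned crops are combined with a pixel-wise minimum to suppress the changing background (the choice of minimum and the uint8 input are assumptions):

import numpy as np

def otsu_threshold(gray):
    # gray: 2-D uint8 array. Classic Otsu: maximize between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                 # ignore degenerate splits
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

def integrate_and_binarize(frames):
    # frames: list of aligned 2-D uint8 crops of the same caption.
    integrated = np.min(np.stack(frames), axis=0)   # static bright text survives the minimum
    t = otsu_threshold(integrated)
    return integrated > t                           # True on (assumed bright) text pixels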
--- paper_title: A combined algorithm for video text extraction paper_content: Video text provides high-level semantic information. However, due to the complex background in video, it is very difficult to extract text efficiently. Although many methods rely on assumptions about a single feature, such as texture, connected areas, etc., there are still problems in dealing with multilingual text extraction because of its quite different appearance. In this paper, the color and edge features are used to extract the text from the video frame. Two methods are combined, called the color-edge combined algorithm, to remove the text background. One of the combined methods is based on the exponential changes of text color, called the Transition Map model; the other uses the text edges of different gray-level images. After removing the complex background, the text location is determined using the vertical and horizontal projection method. This algorithm is robust to images with multilingual text. Through extensive comparison with other approaches, experimental results on a large number of video images successfully demonstrate the efficiency of this algorithm.
--- paper_title: Detecting moving text in video using temporal information paper_content: This paper presents our work on automatically detecting moving rigid text in digital videos. The temporal information is obtained by dividing a video frame into sub-blocks and calculating an inter-frame motion vector for each sub-block. Text blocks are then extracted through both intra-frame classification and inter-frame spatial relationship checking. Unlike previous works, our method achieves both detection and tracking of moving text at the same time. The method works very well detecting scrolling text in news clips and movies, and is robust to low resolution and complex backgrounds. The computational efficiency of the method is also discussed.
--- paper_title: An integration text extraction approach in video frame paper_content: Text in video frames contains high-level semantic information and thus can contribute significantly to video content analysis and retrieval. Therefore, video text recognition is crucial to the research in all video indexing and summarization. In the extraction step of video text recognition, the background in the text rows is removed so that only the text pixels are left for recognition. Although many efforts have been made for video text extraction, there is still much research work to do. Text extraction from images remains a challenging problem for character recognition applications, due to complex backgrounds, unknown text color, degraded text quality caused by lossy compression, and different language characteristics. In this paper, we present an efficient text extraction approach based on multiple frame integration and a stroke filter. First, text block filtering and integration are used to obtain a clean background and clear, high-contrast text for effective recognition. Second, the stroke-like structure of characters is taken into account: the stroke filter is applied to capture most step-like edges and, at the same time, the text parts are enhanced. Finally, the text pixels missed in the earlier steps are recalled by local region growing. ---
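The local region-growing step mentioned above, which recalls text pixels missed by earlier stages, can be illustrated as below; the gray-level tolerance, the 4-connected frontier, and the iteration cap are arbitrary choices rather than values from the cited work.

import numpy as np
from scipy.ndimage import binary_dilation

def grow_text_region(gray, seed_mask, tol=20.0, max_iter=10):
    # gray: 2-D float image; seed_mask: boolean mask of already extracted text pixels.
    mask = seed_mask.copy()
    for _ in range(max_iter):
        text_mean = gray[mask].mean()                # current estimate of the text gray level
        frontier = binary_dilation(mask) & ~mask     # 4-connected neighbours of the region
        accept = frontier & (np.abs(gray - text_mean) < tol)
        if not accept.any():
            break
        mask |= accept
    return mask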
Title: Survey of Region-Based Text Extraction Techniques for Efficient Indexing of Image/Video Retrieval
Section 1: INTRODUCTION
Description 1: Write about the rapid increase in multimedia data, the importance of content-based indexing, and classifying text into caption and scene text.
Section 2: APPLICATIONS OF TEXT EXTRACTION
Description 2: Discuss various applications of text extraction, including video and image retrieval, multimedia summarization, and indexing and retrieval of web pages.
Section 3: CHALLENGES
Description 3: Highlight the challenges faced in text extraction from multimedia data, including issues related to resolution, computational efficiency, dynamic text, noise, blur, and compression.
Section 4: SEGMENTATION
Description 4: Describe different segmentation techniques used in text extraction, such as edge and gradient-based segmentation, corner-based segmentation, color and intensity-based segmentation, and clustering techniques like K-means.
Section 5: PIXEL LEVEL MERGING
Description 5: Elaborate on methods used to merge pixels at the low level during text detection, including morphological dilation and other morphological operations.
Section 6: OBJECT LEVEL MERGING
Description 6: Discuss strategies for merging potential text characters into logical text strings at the object level, including conditional dilation, erosion, and spatial feature-based mergers.
Section 7: FEATURE VECTOR
Description 7: Provide details about various feature vectors used to differentiate text from non-text elements, emphasizing geometrical and statistical features pertinent to text objects.
Section 8: TEXT TRACKING
Description 8: Describe techniques for tracking text across video frames to exploit temporal redundancy, and methods for managing translations, motions, and complex movements.
Section 9: CONCLUSION
Description 9: Summarize the findings about text extraction techniques, highlighting existing methodologies' strengths and limitations, and propose future research directions.
A Review on Wireless Body Area Network (WBAN) for Health Monitoring System: Implementation Protocols
8
--- paper_title: A survey of research in WBAN for biomedical and scientific applications paper_content: A review of earlier work on Wireless Body Area Networks (WBANs) is given here as these networks have gained a lot of research attention in recent years since they offer tremendous benefits for remote health monitoring and continuous, real-time patient care. However, as with any wireless communication, data security in WBANs is a challenging design issue. Since such networks consist of small sensors placed on the human body, they impose resource and computational restrictions, thereby making the use of sophisticated and advanced encryption algorithms infeasible. This calls for the design of algorithms with a robust key generation and management scheme, which are reasonably resource optimal. The main purpose of the WBAN is to make it possible for patients who need permanent monitoring to be fully mobile. The WBAN is worn by a patient and basically consists of a set of lightweight devices that monitor and wirelessly transmit certain bio signals (vital signs) to a backend system at a healthcare center. A monitoring healthcare specialist retrieves the patient data over a reliable wired connection. The focus is on the wireless technologies Bluetooth and General Packet Radio Service (GPRS), because of their important role in communication. This paper discusses several uses of WBAN technology; the most obvious application of a WBAN is in the medical sector. However, there are also more recreational uses of WBANs, which are mentioned here. This paper discusses the technologies behind WBANs, as well as different applications for WBANs. A survey of the state of the art in Wireless Body Area Networks is given to provide a better understanding of the current research issues in this emerging field.
--- paper_title: Monitoring Motor Fluctuations in Patients With Parkinson's Disease Using Wearable Sensors paper_content: This paper presents the results of a pilot study to assess the feasibility of using accelerometer data to estimate the severity of symptoms and motor complications in patients with Parkinson's disease. A support vector machine (SVM) classifier was implemented to estimate the severity of tremor, bradykinesia and dyskinesia from accelerometer data features. SVM-based estimates were compared with clinical scores derived via visual inspection of video recordings taken while patients performed a series of standardized motor tasks. The analysis of the video recordings was performed by clinicians trained in the use of scales for the assessment of the severity of Parkinsonian symptoms and motor complications. Results derived from the accelerometer time series were analyzed to assess the effect on the estimation of clinical scores of the duration of the window utilized to derive segments (to eventually compute data features) from the accelerometer data, the use of different SVM kernels and misclassification cost values, and the use of data features derived from different motor tasks. Results were also analyzed to assess which combinations of data features carried enough information to reliably assess the severity of symptoms and motor complications. Combinations of data features were compared taking into consideration the computational cost associated with estimating each data feature on the nodes of a body sensor network and the effect of using such data features on the reliability of SVM-based estimates of the severity of Parkinsonian symptoms and motor complications.
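A toy version of this kind of pipeline, with invented features, window length, and SVM settings rather than those of the study above, might window an accelerometer magnitude signal, compute a few descriptors per window, and fit a support vector machine against clinician-assigned scores:

import numpy as np
from sklearn.svm import SVC

def window_features(signal, fs, win_s=5.0):
    # signal: 1-D accelerometer magnitude; fs: sampling rate in Hz.
    n = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n] - np.mean(signal[start:start + n])
        rms = np.sqrt(np.mean(w ** 2))                       # overall movement intensity
        spectrum = np.abs(np.fft.rfft(w))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        dom_freq = freqs[np.argmax(spectrum[1:]) + 1]        # dominant oscillation frequency
        band = spectrum[(freqs >= 3) & (freqs <= 8)].sum()   # energy in a tremor-like band
        feats.append([rms, dom_freq, band / (spectrum.sum() + 1e-9)])
    return np.array(feats)

# X = window_features(acc_magnitude, fs=100.0)   # one row per 5 s window
# clf = SVC(kernel="rbf", C=1.0).fit(X, clinician_scores)
# predicted_severity = clf.predict(window_features(new_recording, fs=100.0))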
--- paper_title: Wireless body area networks: challenges, trends and emerging technologies paper_content: We provide a comprehensive review of the challenges and emerging technologies for WBANs: Wireless Body Area Networks. We describe leading applications and the tiers of the WBAN architectures, itemizing their major design areas. A succinct review of solutions, progress and possible future research in the areas of sensors, power and radio hardware components, the network stack, localization and mobility techniques, security and privacy, and of certification and standardization follows. We finally make the connection between WBANs and the fast growing research field of Cognitive Radios, which help establish efficient wireless transmissions in an increasingly crowded radio spectrum.
--- paper_title: Energy-Efficient Low Duty Cycle MAC Protocol for Wireless Body Area Networks paper_content: This paper presents an energy-efficient medium access control protocol suitable for communication in a wireless body area network for remote monitoring of physiological signals such as EEG and ECG. The protocol takes advantage of the static nature of the body area network to implement an effective time-division multiple access (TDMA) strategy with a very small amount of overhead and almost no idle listening (by static, we refer to the fixed topology of the network investigated). The main goal is to develop an energy-efficient and reliable communication protocol to support streaming of large amounts of data. TDMA synchronization problems are discussed and solutions are presented. Equations for duty cycle calculation are also derived for power consumption and battery life predictions. The power consumption model was also validated through measurements.
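A back-of-the-envelope duty-cycle calculation of the kind referred to above can be written as follows; all currents, times, and the battery capacity in the example are invented illustrative numbers, not figures from the cited paper.

def battery_life_hours(t_active_ms, period_ms, i_active_ma, i_sleep_ma, capacity_mah):
    # Average current = duty_cycle * active current + (1 - duty_cycle) * sleep current.
    duty_cycle = t_active_ms / period_ms
    i_avg_ma = duty_cycle * i_active_ma + (1.0 - duty_cycle) * i_sleep_ma
    return capacity_mah / i_avg_ma

# Example: a node awake 4 ms in every 1 s TDMA frame, drawing 20 mA when active
# and 5 uA asleep, on a 230 mAh coin cell: duty cycle 0.4 %, average current
# about 0.085 mA, predicted lifetime about 2700 hours (roughly 112 days).
hours = battery_life_hours(4.0, 1000.0, 20.0, 0.005, 230.0)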
Our results show that the protocol is energy efficient for streaming communication as well as sending short bursts of data, and thus can be used for different types of physiological signals with different sample rates. The protocol is implemented on the analog devices ADF7020 RF transceivers. --- paper_title: MAC protocols for wireless sensor networks: a survey paper_content: Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. However, low sensing ranges result in dense networks and thus it becomes necessary to achieve an efficient medium-access protocol subject to power constraints. Various medium-access control (MAC) protocols with different objectives have been proposed for wireless sensor networks. In this article, we first outline the sensor network properties that are crucial for the design of MAC layer protocols. Then, we describe several MAC protocols proposed for sensor networks, emphasizing their strengths and weaknesses. Finally, we point out open research issues with regard to MAC layer design. --- paper_title: Wireless Body Area Network (WBAN) Design Techniques and Performance Evaluation paper_content: In recent years interest in the application of Wireless Body Area Network (WBAN) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patients monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances, etc. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably, and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and power consumption profiles of WBAN. Some simulation results are presented to further illustrate the performances of various WBAN design techniques. --- paper_title: Battery-dynamics driven tdma mac protocols for wireless body-area monitoring networks in healthcare applications paper_content: We propose the cross-layer based battery-aware time division multiple access (TDMA) medium access control (MAC) protocols for wireless body-area monitoring networks in wireless healthcare applications. By taking into account the joint effect of electrochemical properties of the battery, time-varying wireless fading channels, and packet queuing characteristics, our proposed schemes are designed to prolong the battery lifespan of the wireless sensor nodes while guaranteeing the reliable and timely message delivery, which is critically important for the patient monitoring networks. In addition, we develop a Markov chain model to analyze the performance of our proposed schemes. 
Both the obtained analytical and simulation results show that our proposed schemes can significantly increase the battery lifespan of sensor nodes while satisfying the reliability and delay-bound quality of service (QoS) requirements for wireless body-area monitoring networks. Furthermore, the case study of the electrocardiogram (ECG) monitoring application shows that besides meeting the delay requirements, our proposed schemes outperform the IEEE 802.15.4 and Bluetooth protocols in terms of battery lifespan. --- paper_title: Wireless Body Area Networks: A Survey paper_content: Recent developments and technological advancements in wireless communication, MicroElectroMechanical Systems (MEMS) technology and integrated circuits has enabled low-power, intelligent, miniaturized, invasive/non-invasive micro and nano-technology sensor nodes strategically placed in or around the human body to be used in various applications, such as personal health monitoring. This exciting new area of research is called Wireless Body Area Networks (WBANs) and leverages the emerging IEEE 802.15.6 and IEEE 802.15.4j standards, specifically standardized for medical WBANs. The aim of WBANs is to simplify and improve speed, accuracy, and reliability of communication of sensors/actuators within, on, and in the immediate proximity of a human body. The vast scope of challenges associated with WBANs has led to numerous publications. In this paper, we survey the current state-of-art of WBANs based on the latest standards and publications. Open issues and challenges within each area are also explored as a source of inspiration towards future developments in WBANs. --- paper_title: A novel energy efficient MAC protocol for Wireless Body Area Network paper_content: Wireless Body Area Network (WBAN) is the most promising technology in e-health applications. Energy efficiency stands out as the paramount issue for WBAN. In this paper, an energy efficient MAC protocol named Quasi-Sleep-Preempt-Supported (QS-PS) is proposed. The protocol is mainly TDMA-based: nodes transmit packets in the allocated slots, while entering the Q-Sleep mode in other slots. Moreover, for a node with emergency packet, it can broadcast a special designed Awakening Message to wake up the whole network and preempts the right to use the current slot to transmit that emergency packet, thus decreasing delay. Compared with relevant protocols, QS-PS can achieve high energy efficiency and decrease the delay of both normal packets and emergency packets. --- paper_title: Improving the reliability of wireless body area networks paper_content: In this paper we propose a highly reliable wireless body area network (WBAN) that provides increased throughput and avoids single points of failure. Such networks improve upon current WBANs by taking advantage of a new technology, Cooperative Network Coding (CNC). Using CNC in wireless body area network to support real-time applications is an attractive solution to combat packet loss, reduce latency due to retransmissions, avoid single points of failure, and improve the probability of successful recovery of the information at the destination. In this paper, we have extended Cooperative Network Coding, from its original configuration (one-to-one) to many-to-many as in multiple-input-multiple-output (MIMO) systems. 
Cooperative Network Coding results in increased throughput and network reliability because of the cooperation of the nodes in transmitting coded combination packets across spatially distinct paths to the information sinks. --- paper_title: A Robust Protocol Stack for Multi-hop Wireless Body Area Networks with Transmit Power Adaptation ∗ paper_content: Wireless Body Area Networks (WBANs) have characteristic properties that should be considered for designing a proper network architecture. Movement of on-body sensors, low quality and time-variant wireless links, and the demand for a reliable and fast data transmission at low energy cost are some challenging issues in WBANs. Using ultra low power wireless transceivers to reduce power consumption causes a limited transmission range. This implies that a multi-hop protocol is a promising design choice. This paper proposes a multi-hop protocol for human body health monitoring. The protocol is robust against frequent changes of the network topology due to posture changes, and variation of wireless link quality. A technique for adapting the transmit power of sensor nodes at run-time allows to optimize power consumption while ensuring a reliable outgoing link for every node in the network and avoiding network disconnection. --- paper_title: 1 Energy Comparison and Optimization of Wireless Body-Area Network Technologies paper_content: Wireless body-area networks (WBANs) have revolutionized the way mobile and wearable computers communicate with their users and I/O devices. We investigate an energy-efficient wireless device driver for low-duty peripherals, sensors and other I/O devices employed in a WBAN to communicate with a more powerful central device. We present an extensive comparative study of two popular WBAN technologies, 802.15.1 (Bluetooth) and 802.15.4 (ZigBee), in terms of design cost, performance, and energy efficiency. We discuss the impact of tunable parameters of the wireless device driver on connection latency and energy consumption for both Bluetooth and ZigBee. We address dynamic resource management in higher-level protocols by investigating the trade-off between connection latency and energy consumption. We propose an energy-efficient power-down policy that utilizes the interval between consecutive connection requests for energy reduction; we study an adaptive connection latency management technique that adjusts various tunable parameters dynamically to achieve minimum connection latency without changing the energy consumption level. Our measurements and experimental results show that these techniques are very effective in reducing energy consumption while meeting connection latency requirements. --- paper_title: An Emergency Handling Scheme for Superframe-structured MAC protocols in WBAN paper_content: Wireless body area networks (WBANs) provide medical and/or consumer electronics (CE) services within the vicinity of a human body. In a WBAN environment, immediate and reliable data transmissions during an emergency situation should be supported for medical services. In this letter, we propose a flexible emergency handling scheme for WBAN MAC protocols. The proposed scheme can be applied to superframe-structured MAC protocols such as IEEE 802.15.4 and its extended versions. In addition, our scheme can be incorporated into the current working draft for IEEE 802.15.6 standards. Extensive simulations were performed and the low latency of emergent traffics was validated. 
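As an editorial illustration of the slot-allocation and emergency-preemption ideas recurring in the TDMA-based MAC designs summarized above (dedicated slots, quasi-sleep outside assigned slots, preemption of the current slot by emergency traffic), the following minimal Python sketch simulates one such schedule. All parameters (node count, slot count, traffic probabilities) and the first-come preemption rule are illustrative assumptions, not values taken from QS-PS, IEEE 802.15.4/15.6, or any other cited protocol.

# Minimal illustrative sketch of a TDMA superframe with emergency preemption
# (hypothetical parameters; not the actual QS-PS or IEEE 802.15.6 behaviour).
import random

NUM_NODES = 8          # assumed number of body sensor nodes
SLOTS_PER_FRAME = 8    # one dedicated slot per node (assumption)
FRAMES = 50
P_NORMAL = 0.3         # per-frame probability a node queues a normal packet
P_EMERGENCY = 0.02     # per-frame probability a node raises an emergency packet

random.seed(1)
log = []
for frame in range(FRAMES):
    normal = [random.random() < P_NORMAL for _ in range(NUM_NODES)]
    emergency = [random.random() < P_EMERGENCY for _ in range(NUM_NODES)]
    for slot in range(SLOTS_PER_FRAME):
        owner = slot % NUM_NODES
        urgent = [n for n in range(NUM_NODES) if emergency[n]]
        if urgent:
            # An awakening message lets the emergency node preempt the current slot.
            sender = urgent[0]
            emergency[sender] = False
            log.append((frame, slot, sender, "EMERGENCY"))
        elif normal[owner]:
            normal[owner] = False
            log.append((frame, slot, owner, "normal"))
        else:
            # Quasi-sleep: the slot owner has nothing to send and stays asleep.
            log.append((frame, slot, owner, "sleep"))

served = sum(1 for e in log if e[3] == "EMERGENCY")
slept = sum(1 for e in log if e[3] == "sleep")
print(f"{served} emergency transmissions served, {slept} slots spent sleeping")

Under such a schedule an emergency packet waits at most one slot before being served, which is the low-delay behaviour the emergency-handling schemes above aim to provide while idle slots are spent sleeping.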
--- paper_title: Performance Study of Wireless Body Area Network in Medical Environment paper_content: The advanced developments in sensors and wireless communications devices have enabled the design of miniature, cost-effective, and smart physiological sensor nodes. One of the approaches in developing wearable health monitoring systems is the emerging of wireless body area network (WBAN). IEEE 802.15.4 provides low power, low data rate wireless standard in relation to medical sensor body area networks. In the current study, the star network topology of 802.15.4 standard at 2.4 GHz had been considered for body area network (BAN) configured in beacon mode. The main consideration is in the total data bits received by all the nodes at the coordinator and flight times of a data packet reaching its destination. In this paper we discussed on wireless technologies that can be used for medical applications and how they perform in a healthcare environment. The low-rate Wireless Personnel Area Network (WPAN) is being used to evaluate its suitability in medical application. --- paper_title: Wireless Body Area Networks: A Survey paper_content: Recent developments and technological advancements in wireless communication, MicroElectroMechanical Systems (MEMS) technology and integrated circuits has enabled low-power, intelligent, miniaturized, invasive/non-invasive micro and nano-technology sensor nodes strategically placed in or around the human body to be used in various applications, such as personal health monitoring. This exciting new area of research is called Wireless Body Area Networks (WBANs) and leverages the emerging IEEE 802.15.6 and IEEE 802.15.4j standards, specifically standardized for medical WBANs. The aim of WBANs is to simplify and improve speed, accuracy, and reliability of communication of sensors/actuators within, on, and in the immediate proximity of a human body. The vast scope of challenges associated with WBANs has led to numerous publications. In this paper, we survey the current state-of-art of WBANs based on the latest standards and publications. Open issues and challenges within each area are also explored as a source of inspiration towards future developments in WBANs. --- paper_title: Characteristics of on-body 802.15.4 networks paper_content: One would expect a body-area network to have consistently good connectivity, given the relatively short distances involved. However, early experimental results suggest otherwise. This poster examines the characteristics of the links in an on-body IEEE 802.15.4 network and the factors that influence link performance. We demonstrate that node location, as well as body position, significantly affects connectivity. For example, connectivity in the sitting position tends to be much worse than standing. We will present a comprehensive evaluation including various combinations of changes in node orientation, node placement, body position, and environmental factors. Preliminary results clearly demonstrate the need for researching different radios, topologies and protocol design to make body area networks viable. --- paper_title: An energy analysis of IEEE 802.15.6 scheduled access modes paper_content: Body Area Networks (BANs) are an emerging area of wireless personal communications. The IEEE 802.15.6 working group aims to develop a communications standard optimised for low power devices operating on, in or around the human body. IEEE 802.15.6 specifically targets low power medical application areas. 
The IEEE 802.15.6 draft defines two main channel access modes; contention based and contention free. This paper examines the energy lifetime performance of contention free access and in particular of periodic scheduled allocations. This paper presents an overview of the IEEE 802.15.6 and an analytical model for estimating the device lifetime. The analysis determines the maximum device lifetime for a range of scheduled allocations. It also shows that the higher the data rate of frame transfers the longer the device lifetime. Finally, the energy savings provided by block transfers are quantified and compared to immediately acknowledged alternatives. ---
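The lifetime analysis sketched in the abstract above can be illustrated with a simple duty-cycle calculation: battery capacity divided by the average current drawn over one scheduled-allocation period. The battery capacity, current draws, allocation period, and payload size below are illustrative assumptions chosen only to show the arithmetic, not figures from the IEEE 802.15.6 analysis.

# Rough device-lifetime estimate for a periodically scheduled allocation.
# All electrical figures are illustrative assumptions.
BATTERY_mAh = 230.0        # small coin cell (assumed)
I_TX_mA = 18.0             # current while the radio is active (assumed)
I_SLEEP_mA = 0.003         # deep-sleep current between allocations (assumed)
ALLOCATION_PERIOD_S = 1.0  # one scheduled allocation per second (assumed)
PAYLOAD_BITS = 2000        # data produced per period (assumed)

def lifetime_days(data_rate_bps):
    t_tx = PAYLOAD_BITS / data_rate_bps               # seconds on-air per period
    t_sleep = ALLOCATION_PERIOD_S - t_tx
    avg_mA = (t_tx * I_TX_mA + t_sleep * I_SLEEP_mA) / ALLOCATION_PERIOD_S
    return BATTERY_mAh / avg_mA / 24.0

for rate in (250e3, 1e6, 2e6):
    print(f"{rate / 1e3:6.0f} kbps -> about {lifetime_days(rate):6.1f} days")

Because a higher data rate shrinks the on-air time per allocation, the average current falls and the estimated lifetime grows, which is consistent with the trend reported in the abstract above.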
Title: A Review on Wireless Body Area Network (WBAN) for Health Monitoring System: Implementation Protocols
Section 1: INTRODUCTION
Description 1: Introduce the concept of Wireless Body Area Network (WBAN) and its significance in health monitoring systems, along with a brief overview of the paper's structure.
Section 2: WBAN Architecture
Description 2: Describe the WBAN architecture, including wireless communication technologies, standards, and the interconnection of sensing nodes for medical applications.
Section 3: Devices used in WBAN
Description 3: Discuss the various devices involved in WBAN, including the base station and central server, covering their roles and functionalities.
Section 4: Design Considerations cum requirements for WBAN
Description 4: Outline the critical design considerations and requirements for WBANs, such as non-invasiveness, reliability, robustness, low latency, and energy efficiency.
Section 5: AIM AND SCOPE OF REVIEW
Description 5: Present the primary aim of the review, focusing on the importance of MAC protocols in WBAN for efficient signal transmission and patient monitoring.
Section 6: RELATED WORKS
Description 6: Provide a literature review of existing MAC protocols for WBAN, discussing scheduled access protocols, random access protocols, and various other MAC protocols like TMAC, SMAC, and ZigBee MAC.
Section 7: Current Topology Used in WBAN
Description 7: Explain the current topologies used in WBAN, including star and multi-hop topologies, and methods of data transmission such as beacon and non-beacon modes.
Section 8: CONCLUSION AND FUTURE SCOPE
Description 8: Summarize the paper's findings, compare MAC protocols like CSMA/CA and TDMA, and propose future work directions for mitigating fading in captured signals using new MAC protocols.
Information Forensics: An Overview of the First Decade
13
--- paper_title: Group-Oriented Fingerprinting for Multimedia Forensics paper_content: Digital fingerprinting of multimedia data involves embedding information in the content signal and offers protection to the digital rights of the content by allowing illegitimate usage of the content to be identified by authorized parties. One potential threat to fingerprinting is collusion, whereby a group of adversaries combine their individual copies in an attempt to remove the underlying fingerprints. Former studies indicate that collusion attacks based on a few dozen independent copies can confound a fingerprinting system that employs orthogonal modulation. However, in practice an adversary is more likely to collude with some users than with other users due to geographic or social circumstances. To take advantage of prior knowledge of the collusion pattern, we propose a two-tier group-oriented fingerprinting scheme where users likely to collude with each other are assigned correlated fingerprints. Additionally, we extend our construction to represent the natural social and geographic hierarchical relationships between users by developing a more flexible tree-structure-based fingerprinting system. We also propose a multistage colluder identification scheme by taking advantage of the hierarchical nature of the fingerprints. We evaluate the performance of the proposed fingerprinting scheme by studying the collusion resistance of a fingerprinting system employing Gaussian-distributed fingerprints. Our results show that the group-oriented fingerprinting system provides the superior collusion resistance over a system employing orthogonal modulation when knowledge of the potential collusion pattern is available. --- paper_title: A natural image model approach to splicing detection paper_content: Image splicing detection is of fundamental importance in digital forensics and therefore has attracted increasing attention recently. In this paper, we propose a blind, passive, yet effective splicing detection approach based on a natural image model. This natural image model consists of statistical features extracted from the given test image as well as 2-D arrays generated by applying to the test images multi-size block discrete cosine transform (MBDCT). The statistical features include moments of characteristic functions of wavelet subbands and Markov transition probabilities of difference 2-D arrays. To evaluate the performance of our proposed model, we further present a concrete implementation of this model that has been designed for and applied to the Columbia Image Splicing Detection Evaluation Dataset. Our experimental works have demonstrated that this new splicing detection scheme outperforms the state of the art by a significant margin when applied to the above-mentioned dataset, indicating that the proposed approach possesses promising capability in splicing detection. --- paper_title: Image splicing detection using 2-D phase congruency and statistical moments of characteristic function paper_content: A new approach to efficient blind image splicing detection is proposed in this paper. Image splicing is the process of making a composite picture by cutting and joining two or more photographs. The spliced image may introduce a number of sharp transitions such as lines, edges and corners. Phase congruency has been known as a sensitive measure of these sharp transitions and hence been proposed as features for splicing detection.
In addition to the phase information, the magnitude information is also used for splicing detection. Specifically, statistical moments of characteristic functions of wavelet subbands have been examined to catch the difference between the authentic images and spliced images. Consequently, the proposed scheme extracts image features from moments of wavelet characteristic functions and 2-D phase congruency for image splicing detection. The experiments have demonstrated that the proposed approach can achieve a higher detection rate as compared with the state-of-the-art. --- paper_title: Image manipulation detection with Binary Similarity Measures paper_content: Since extremely powerful technologies are now available to generate and process digital images, there is a concomitant need for developing techniques to distinguish the original images from the altered ones, the genuine ones from the doctored ones. In this paper we focus on this problem and propose a method based on the neighbor bit planes of the image. The basic idea is that the correlation between the bit planes as well as the binary texture characteristics within the bit planes will differ between an original and a doctored image. This change in the intrinsic characteristics of the image can be monitored via the quantal-spatial moments of the bit planes. These so-called Binary Similarity Measures are used as features in classifier design. It has been shown that the linear classifiers based on BSM features can detect with satisfactory reliability most of the image doctoring executed via Photoshop tool. --- paper_title: Higher-order Wavelet Statistics and their Application to Digital Forensics paper_content: We describe a statistical model for natural images that is built upon a multi-scale wavelet decomposition. The model consists of first- and higher-order statistics that capture certain statistical regularities of natural images. We show how this model can be useful in several digital forensic applications, specifically in detecting various types of digital tampering. --- paper_title: How realistic is photorealistic? paper_content: Computer graphics rendering software is capable of generating highly photorealistic images that can be impossible to differentiate from photographic images. As a result, the unique stature of photographs as a definitive recording of events is being diminished (the ease with which digital images can be manipulated is, of course, also contributing to this demise). To this end, we describe a method for differentiating between photorealistic and photographic images. Specifically, we show that a statistical model based on first-order and higher order wavelet statistics reveals subtle but significant differences between photorealistic and photographic images. --- paper_title: Blind detection of photomontage using higher order statistics paper_content: We investigate the prospect of using bicoherence features for blind image splicing detection. Image splicing is an essential operation for digital photomontaging, which in turn is a technique for creating image forgery. We examine the properties of bicoherence features on a data set, which contains image blocks of diverse image properties. We then demonstrate the limitation of the baseline bicoherence features for image splicing detection.
Our investigation has led to two suggestions for improving the performance of bicoherence features, i.e., estimating the bicoherence features of the authentic counterpart and incorporating features that characterize the variance of the feature performance. The features derived from the suggestions are evaluated with support vector machine (SVM) classification and is shown to improve the image splicing detection accuracy from 62% to about 70%. --- paper_title: Physics-motivated features for distinguishing photographic images and computer graphics paper_content: The increasing photorealism for computer graphics has made computer graphics a convincing form of image forgery. Therefore, classifying photographic images and photorealistic computer graphics has become an important problem for image forgery detection. In this paper, we propose a new geometry-based image model, motivated by the physical image generation process, to tackle the above-mentioned problem. The proposed model reveals certain physical differences between the two image categories, such as the gamma correction in photographic images and the sharp structures in computer graphics. For the problem of image forgery detection, we propose two levels of image authenticity definition, i.e., imaging-process authenticity and scene authenticity, and analyze our technique against these definitions. Such definition is important for making the concept of image authenticity computable. Apart from offering physical insights, our technique with a classification accuracy of 83.5% outperforms those in the prior work, i.e., wavelet features at 80.3% and cartoon features at 71.0%. We also consider a recapturing attack scenario and propose a counter-attack measure. In addition, we constructed a publicly available benchmark dataset with images of diverse content and computer graphics of high photorealism. --- paper_title: Steganography in Digital Media: Principles, Algorithms, and Applications paper_content: Steganography, the art of hiding of information in apparently innocuous objects or images, is a field with a rich heritage, and an area of rapid current development. This clear, self-contained guide shows you how to understand the building blocks of covert communication in digital media files and how to apply the techniques in practice, including those of steganalysis, the detection of steganography. Assuming only a basic knowledge in calculus and statistics, the book blends the various strands of steganography, including information theory, coding, signal estimation and detection, and statistical signal processing. Experiments on real media files demonstrate the performance of the techniques in real life, and most techniques are supplied with pseudo-code, making it easy to implement the algorithms. The book is ideal for students taking courses on steganography and information hiding, and is also a useful reference for engineers and practitioners working in media security and information assurance. --- paper_title: Image manipulation detection paper_content: Techniques and methodologies for validating the authenticity of digital images and testing for the presence of doctoring and manipulation operations on them has recently attracted attention. We review three categories of forensic features and discuss the design of classifiers between doctored and original images. The performance of classifiers with respect to selected controlled manipulations as well as to uncontrolled manipulations is analyzed. 
The tools for image manipulation detection are treated under feature fusion and decision fusion scenarios. --- paper_title: Exposing digital forgeries through chromatic aberration paper_content: Virtually all optical imaging systems introduce a variety of aberrations into an image. Chromatic aberration, for example, results from the failure of an optical system to perfectly focus light of different wavelengths. Lateral chromatic aberration manifests itself, to a first-order approximation, as an expansion/contraction of color channels with respect to one another. When tampering with an image, this aberration is often disturbed and fails to be consistent across the image. We describe a computational technique for automatically estimating lateral chromatic aberration and show its efficacy in detecting digital tampering. --- paper_title: Exposing image forgery with blind noise estimation paper_content: Noise is unwanted in high quality images, but it can aid image tampering. For example, noise can be intentionally added in image to conceal tampered regions and to create special visual effects. It may also be introduced unnoticed during camera imaging process, which makes the noise levels inconsistent in splicing images. In this paper, we propose a method to expose such image forgeries by detecting the noise variance differences between original and tampered parts of an image. The noise variance of local image blocks is estimated using a recently developed technique, where no prior information about the imaging device or original image is required. The tampered region is segmented from the original image by a two-phase coarse-to-fine clustering of image blocks. Our experimental results demonstrate that the proposed method can effectively detect image forgeries with high detection accuracy and low false positive rate both quantitatively and qualitatively. --- paper_title: Using noise inconsistencies for blind image forensics paper_content: A commonly used tool to conceal the traces of tampering is the addition of locally random noise to the altered image regions. The noise degradation is the main cause of failure of many active or passive image forgery detection methods. Typically, the amount of noise is uniform across the entire authentic image. Adding locally random noise may cause inconsistencies in the image's noise. Therefore, the detection of various noise levels in an image may signify tampering. In this paper, we propose a novel method capable of dividing an investigated image into various partitions with homogenous noise levels. In other words, we introduce a segmentation method detecting changes in noise level. We assume the additive white Gaussian noise. Several examples are shown to demonstrate the proposed method's output. An extensive quantitative measure of the efficiency of the noise estimation part as a function of different noise standard deviations, region sizes and various JPEG compression qualities is proposed as well. --- paper_title: Exposing image splicing with inconsistent local noise variances paper_content: Image splicing is a simple and common image tampering operation, where a selected region from an image is pasted into another image with the aim to change its content. In this paper, based on the fact that images from different origins tend to have different amount of noise introduced by the sensors or post-processing steps, we describe an effective method to expose image splicing by detecting inconsistencies in local noise variances. 
Our method estimates local noise variances based on an observation that kurtosis values of natural images in band-pass filtered domains tend to concentrate around a constant value, and is accelerated by the use of integral image. We demonstrate the efficacy and robustness of our method based on several sets of forged images generated with image splicing. --- paper_title: Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency paper_content: Recent advances in computer technology have made digital image tampering more and more common. In this paper, we propose an authentic vs. spliced image classification method making use of geometry invariants in a semi-automatic manner. For a given image, we identify suspicious splicing areas, compute the geometry invariants from the pixels within each region, and then estimate the camera response function (CRF) from these geometry invariants. The cross-fitting errors are fed into a statistical classifier. Experiments show a very promising accuracy, 87%, over a large data set of 363 natural and spliced images. To the best of our knowledge, this is the first work detecting image splicing by verifying camera characteristic consistency from a single-channel image. --- paper_title: Camera Response Functions for Image Forensics: An Automatic Algorithm for Splicing Detection paper_content: We present a fully automatic method to detect doctored digital images. Our method is based on a rigorous consistency checking principle of physical characteristics among different arbitrarily shaped image regions. In this paper, we specifically study the camera response function (CRF), a fundamental property in cameras mapping input irradiance to output image intensity. A test image is first automatically segmented into distinct arbitrarily shaped regions. One CRF is estimated from each region using geometric invariants from locally planar irradiance points (LPIPs). To classify a boundary segment between two regions as authentic or spliced, CRF-based cross fitting and local image features are computed and fed to statistical classifiers. Such segment level scores are further fused to infer the image level authenticity. Tests on two data sets reach performance levels of 70% precision and 70% recall, showing promising potential for real-world applications. Moreover, we examine individual features and discover the key factor in splicing detection. Our experiments show that the anomaly introduced around splicing boundaries plays the major role in detecting splicing. Such finding is important for designing effective and efficient solutions to image splicing detection. --- paper_title: Statistical Tools for Digital Forensics paper_content: A digitally altered photograph, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic photograph. As a result, photographs no longer hold the unique stature as a definitive recording of events. We describe several statistical techniques for detecting traces of digital tampering in the absence of any digital watermark or signature. In particular, we quantify statistical correlations that result from specific forms of digital tampering, and devise detection schemes to reveal these correlations. 
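The noise-inconsistency detectors summarized above share a simple core idea: estimate the noise level locally and flag regions whose estimate deviates from the rest of the image. The sketch below is a simplified, hypothetical version of that idea; it uses a generic high-pass residual estimator rather than the kurtosis-based or segmentation-based estimators of the cited papers, and the block size, threshold factor, and synthetic test image are arbitrary assumptions.

# Simplified sketch of splicing localization via inconsistent local noise levels.
# A high-pass (Laplacian-like) mask suppresses image structure; a per-block noise
# standard deviation is estimated from the residual and blocks deviating strongly
# from the image-wide median are flagged. Parameters are arbitrary assumptions.
import numpy as np

def local_noise_map(gray, block=64):
    m = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    H, W = gray.shape
    resid = np.zeros((H - 2, W - 2))
    for dy in range(3):
        for dx in range(3):
            resid += m[dy, dx] * gray[dy:H - 2 + dy, dx:W - 2 + dx]
    sigmas = {}
    for y in range(0, H - 2 - block + 1, block):
        for x in range(0, W - 2 - block + 1, block):
            r = resid[y:y + block, x:x + block]
            # Noise std estimate from the mean absolute high-pass response.
            sigmas[(y, x)] = np.sqrt(np.pi / 2) * np.mean(np.abs(r)) / 6.0
    return sigmas

def flag_inconsistent_blocks(sigmas, factor=2.0):
    ref = float(np.median(list(sigmas.values())))
    return [pos for pos, s in sigmas.items() if s > factor * ref or s < ref / factor]

# Synthetic example: the lower-right quadrant carries extra noise.
rng = np.random.default_rng(0)
img = rng.normal(128.0, 2.0, (256, 256))
img[128:, 128:] += rng.normal(0.0, 8.0, (128, 128))
suspicious = flag_inconsistent_blocks(local_noise_map(img))
print(f"{len(suspicious)} block(s) flagged as noise-inconsistent: {suspicious}")

In the synthetic example only the quadrant with the added noise should be reported, illustrating how a locally deviating noise level can expose a pasted region.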
--- paper_title: Detecting doctored images using camera response normality and consistency paper_content: The advance in image/video editing techniques has facilitated people in synthesizing realistic images/videos that may hard to be distinguished from real ones by visual examination. This poses a problem: how to differentiate real images/videos from doctored ones? This is a serious problem because some legal issues may occur if there is no reliable way for doctored image/video detection when human inspection fails. Digital watermarking cannot solve this problem completely. We propose an approach that computes the response functions of the camera by selecting appropriate patches in different ways. An image may be doctored if the response functions are abnormal or inconsistent to each other. The normality of the response functions is classified by a trained support vector machine (SVM). Experiments show that our method is effective for high-contrast images with many textureless edges. --- paper_title: Forensic detection of image manipulation using statistical intrinsic fingerprints paper_content: As the use of digital images has increased, so has the means and the incentive to create digital image forgeries. Accordingly, there is a great need for digital image forensic techniques capable of detecting image alterations and forged images. A number of image processing operations, such as histogram equalization or gamma correction, are equivalent to pixel value mappings. In this paper, we show that pixel value mappings leave behind statistical traces, which we shall refer to as a mapping's intrinsic fingerprint, in an image's pixel value histogram. We then propose forensic methods for detecting general forms globally and locally applied contrast enhancement as well as a method for identifying the use of histogram equalization by searching for the identifying features of each operation's intrinsic fingerprint. Additionally, we propose a method to detect the global addition of noise to a previously JPEG-compressed image by observing that the intrinsic fingerprint of a specific mapping will be altered if it is applied to an image's pixel values after the addition of noise. Through a number of simulations, we test the efficacy of each proposed forensic technique. Our simulation results show that aside from exceptional cases, all of our detection methods are able to correctly detect the use of their designated image processing operation with a probability of 99% given a false alarm probability of 7% or less. --- paper_title: Multimedia Data Hiding paper_content: Introduction.- Preliminaries.- Classification and capacity of embedding.- Handling uneven embedding capacity.- Data hiding in binary images.- Multilevel data hiding for image & video.- Data hiding for image authentication.- Data hiding for video communication.- Attacks on known data-hiding algorithms.- Attacks on unknown data-hiding algorithms.- Conclusions and perspectives. --- paper_title: A SIFT-Based Forensic Method for Copy–Move Attack Detection and Transformation Recovery paper_content: One of the principal problems in image forensics is determining if a particular image is authentic or not. This can be a crucial task when images are used as basic evidence to influence judgment like, for example, in a court of law. To carry out such forensic analysis, various technological instruments have been developed in the literature. 
In this paper, the problem of detecting if an image has been forged is investigated; in particular, attention has been paid to the case in which an area of an image is copied and then pasted onto another zone to create a duplication or to cancel something that was awkward. Generally, to adapt the image patch to the new context a geometric transformation is needed. To detect such modifications, a novel methodology based on scale invariant features transform (SIFT) is proposed. Such a method allows us to both understand if a copy-move attack has occurred and, furthermore, to recover the geometric transformation used to perform cloning. Extensive experimental results are presented to confirm that the technique is able to precisely individuate the altered area and, in addition, to estimate the geometric transformation parameters with high reliability. The method also deals with multiple cloning. --- paper_title: Detection of copy-move forgery using a method based on blur moment invariants. paper_content: In our society digital images are a powerful and widely used communication medium. They have an important impact on our life. In recent years, due to the advent of high-performance commodity hardware and improved human-computer interfaces, it has become relatively easy to create fake images. Modern, easy to use image processing software enables forgeries that are undetectable by the naked eye. In this work we propose a method to automatically detect and localize duplicated regions in digital images. The presence of duplicated regions in an image may signify a common type of forgery called copy-move forgery. The method is based on blur moment invariants, which allows successful detection of copy-move forgery, even when blur degradation, additional noise, or arbitrary contrast changes are present in the duplicated regions. These modifications are commonly used techniques to conceal traces of copy-move forgery. Our method works equally well for lossy format such as JPEG. We demonstrate our method on several images affected by copy-move forgery. --- paper_title: Detection of Copy-Move Forgery in Digital Images paper_content: An algorithm called exact match is proposed to solve the problem of copy-move attack in digital images. The method may successfully detect the forged part even when the copied area is enhanced/retouched to merge it with the background and when the forged image is saved in a lossy format, such as JPEG. The result is much more efficient than traditional methods. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Region Duplication Detection Using Image Feature Matching paper_content: Region duplication is a simple and effective operation to create digital image forgeries, where a continuous portion of pixels in an image, after possible geometrical and illumination adjustments, are copied and pasted to a different location in the same image. 
Most existing region duplication detection methods are based on directly matching blocks of image pixels or transform coefficients, and are not effective when the duplicated regions have geometrical or illumination distortions. In this work, we describe a new region duplication detection method that is robust to distortions of the duplicated regions. Our method starts by estimating the transform between matched scale invariant feature transform (SIFT) keypoints, which are insensitive to geometrical and illumination distortions, and then finds all pixels within the duplicated regions after discounting the estimated transforms. The proposed method shows effective detection on an automatically synthesized forgery image database with duplicated and distorted regions. We further demonstrate its practical performance with several challenging forgery images created with state-of-the-art tools. --- paper_title: A Sorted Neighborhood Approach for Detecting Duplicated Regions in Image Forgeries Based on DWT and SVD paper_content: The presence of duplicated regions in the image can be considered as a tell-tale sign for image forgery, which belongs to the research field of digital image forensics. In this paper, a blind forensics approach based on DWT (discrete wavelet transform) and SVD (singular value decomposition) is proposed to detect the specific artifact. Firstly, DWT is applied to the image, and SVD is used on fixed-size blocks of low-frequency component in wavelet sub-band to yield a reduced dimension representation. Then the SV vectors are then lexicographically sorted and duplicated image blocks will be close in the sorted list, and therefore will be compared during the detection steps. The experimental results demonstrate that the proposed approach can not only decrease computational complexity, but also localize the duplicated regions accurately even when the image was highly compressed or edge processed. --- paper_title: Detection of Copy-Move Forgery in Digital Images Using SIFT Algorithm paper_content: As result of powerful image processing tools, digital image forgeries have already become a serious social problem. In this paper we describe an effective method to detect copy-move forgery in digital images. This method works by first extracting SIFT descriptors of an image, which are invariant to changes in illumination, rotation, scaling etc. Owing to the similarity between pasted region and copied region, descriptors are then matched between each other to seek for any possible forgery in images. Experiments have been performed to demonstrate the efficiency of this method on different forgeries and quantify its robustness and sensitivity to post image processing, such as additive noise and lossy JPEG compression etc, or even compound processing. --- paper_title: Robust Detection of Region-Duplication Forgery in Digital Image paper_content: Region duplication forgery, in which a part of a digital image is copied and then pasted to another portion of the same image in order to conceal an important object in the scene, is one of the common image forgery techniques. In this paper, we describe an efficient and robust algorithm for detecting and localizing this type of malicious tampering. 
We present experimental results which show that our method is robust and can successfully detect this type of tampering for images that have been subjected to various forms of post region duplication image processing, including blurring, noise contamination, severe lossy compression, and a mixture of these processing operations. --- paper_title: Detection of linear and cubic interpolation in JPEG compressed images paper_content: A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain. --- paper_title: Blind Authentication Using Periodic Properties of Interpolation paper_content: In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together, to create high quality and consistent image forgeries, almost always geometric transformations, such as scaling, rotation, or skewing are needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful in estimation of the geometric transformations factors. --- paper_title: Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue paper_content: This paper revisits the state-of-the-art resampling detector, which is based on periodic artifacts in the residue of a local linear predictor. Inspired by recent findings from the literature, we take a closer look at the complex detection procedure and model the detected artifacts in the spatial and frequency domain by means of the variance of the prediction residue. We give an exact formulation on how transformation parameters influence the appearance of periodic artifacts and analytically derive the expected position of characteristic resampling peaks. We present an equivalent accelerated and simplified detector, which is orders of magnitudes faster than the conventional scheme and experimentally shown to be comparably reliable. --- paper_title: Statistical Tools for Digital Forensics paper_content: A digitally altered photograph, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic photograph. As a result, photographs no longer hold the unique stature as a definitive recording of events. 
We describe several statistical techniques for detecting traces of digital tampering in the absence of any digital watermark or signature. In particular, we quantify statistical correlations that result from specific forms of digital tampering, and devise detection schemes to reveal these correlations. --- paper_title: On resampling detection in re-compressed images paper_content: Resampling detection has become a standard tool in digital image forensics. This paper investigates the important case of resampling detection in re-compressed JPEG images. We show how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images. We give a formulation on how affine transformations of JPEG compressed images affect state-of-the-art resampling detectors and derive a new efficient detection variant, which better suits this relevant detection scenario. The principal appropriateness of using JPEG pre-compression artifacts for the detection of resampling in re-compressed images is backed with experimental evidence on a large image set and for a variety of different JPEG qualities. --- paper_title: Exposing digital forgeries by detecting traces of resampling paper_content: The unique stature of photographs as a definitive recording of events is being diminished due, in part, to the ease with which digital images can be manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. For example, we describe how resampling (e.g., scaling or rotating) introduces specific statistical correlations, and describe how these correlations can be automatically detected in any portion of an image. This technique works in the absence of any digital watermark or signature. We show the efficacy of this approach on uncompressed TIFF images, and JPEG and GIF images with minimal compression. We expect this technique to be among the first of many tools that will be needed to expose digital forgeries. --- paper_title: Forensic estimation and reconstruction of a contrast enhancement mapping paper_content: Due to the ease with which convincing digital image forgeries can be created, a need has arisen for digital forensic techniques capable of detecting image manipulation. Once image alterations have been identified, the next logical forensic task is to recover as much information as possible about the unaltered version of image and the operation used to modify it. Previous work has dealt with the forensic detection of contrast enhancement in digital images. In this paper we propose an iterative algorithm to jointly estimate any arbitrary contrast enhancement mapping used to modify an image as well as the pixel value histogram of the image before contrast enhancement. To do this, we use a probabilistic model of an image's pixel value histogram to determine which histogram entries are most likely to correspond to contrast enhancement artifacts. Experimental results are presented to demonstrate the effectiveness of our proposed method. --- paper_title: Forensic detection of image manipulation using statistical intrinsic fingerprints paper_content: As the use of digital images has increased, so has the means and the incentive to create digital image forgeries. Accordingly, there is a great need for digital image forensic techniques capable of detecting image alterations and forged images. 
A number of image processing operations, such as histogram equalization or gamma correction, are equivalent to pixel value mappings. In this paper, we show that pixel value mappings leave behind statistical traces, which we shall refer to as a mapping's intrinsic fingerprint, in an image's pixel value histogram. We then propose forensic methods for detecting general forms globally and locally applied contrast enhancement as well as a method for identifying the use of histogram equalization by searching for the identifying features of each operation's intrinsic fingerprint. Additionally, we propose a method to detect the global addition of noise to a previously JPEG-compressed image by observing that the intrinsic fingerprint of a specific mapping will be altered if it is applied to an image's pixel values after the addition of noise. Through a number of simulations, we test the efficacy of each proposed forensic technique. Our simulation results show that aside from exceptional cases, all of our detection methods are able to correctly detect the use of their designated image processing operation with a probability of 99% given a false alarm probability of 7% or less. --- paper_title: Blind forensics of contrast enhancement in digital images paper_content: Digital images have seen increased use in applications where their authenticity is of prime importance. This proves to be problematic due to the widespread availability of digital image editing software. As a result, there is a need for the development of reliable techniques for verifying an image's authenticity. In this paper, a blind forensic algorithm is proposed for detecting the use of global contrast enhancement operations to modify digital images. Furthermore, a separate algorithm is proposed to identify the use of histogram equalization, a commonly implemented contrast enhancement operation. Both algorithms perform detection by seeking out unique artifacts introduced into an image's histogram as a result of the particular operation examined. Additionally, results are presented showing the effectiveness of both proposed algorithms. --- paper_title: Blind Forensics of Median Filtering in Digital Images paper_content: Exposing the processing history of a digital image is an important problem for forensic analyzers and steganalyzers. As the median filter is a popular nonlinear denoising operator, the blind forensics of median filtering is particularly interesting. This paper proposes a novel approach for detecting median filtering in digital images, which can 1) accurately detect median filtering in arbitrary images, even reliably detect median filtering in low-resolution and JPEG compressed images; and 2) reliably detect tampering when part of a median-filtered image is inserted into a nonmedian-filtered image, or vice versa. The effectiveness of the proposed approach is exhaustively evaluated in five different image databases. --- paper_title: Robust median filtering forensics based on the autoregressive model of median filtered residual paper_content: One important aspect of multimedia forensics is exposing an image's processing history. Median filtering is a popular noise removal and image enhancement tool. It is also an effective tool in anti-forensics recently. An image is usually saved in a compressed format such as the JPEG format. The forensic detection of median filtering from a JPEG compressed image remains challenging, because typical filter characteristics are suppressed by JPEG quantization and blocking artifacts. 
In this paper, we introduce a robust median filtering detection scheme based on the autoregressive model of median filtered residual. Median filtering is first applied on a test image and the difference between the initial image and the filtered output image is called the median filtered residual (MFR). The MFR is used as the forensic fingerprint. Thus, the interference from the image edge and texture, which is regarded as a limitation of the existing forensic methods, can be reduced. Because the overlapped window filtering introduces correlation among the pixels of MFR, an autoregressive (AR) model of the MFR is calculated and the AR coefficients are used by a support vector machine (SVM) for classification. Experimental results show that the proposed median filtering detection method is very robust to JPEG post-compression with a quality factor as low as 30. It distinguishes well between median filtering and other manipulations, such as Gaussian filtering, average filtering, and rescaling and performs well on low-resolution images of size 32 × 32. The proposed method achieves not only much better performance than the existing state-of-the-art methods, but also has very small dimension of feature, i.e., 10-D. --- paper_title: Forensic detection of median filtering in digital images paper_content: In digital image forensics, prior works are prone to the detection of malicious tampering. However, there is also a need for developing techniques to identify general content-preserved manipulations, which are employed to conceal tampering trails frequently. In this paper, we propose a blind forensic algorithm to detect median filtering (MF), which is applied extensively for signal denoising and digital image enhancement. The probability of zero values on the first order difference map in texture regions can serve as MF statistical fingerprint, which distinguishes MF from other operations. Since anti-forensic techniques enjoy utilizing MF to attack the linearity assumption of existing forensics algorithms, blind detection of the non-linear MF becomes especially significant. Both theoretically reasoning and experimental results verify the effectiveness of our proposed MF forensics scheme. --- paper_title: Steganalysis by Subtractive Pixel Adjacency Matrix paper_content: This paper presents a method for detection of steganographic methods that embed in the spatial domain by adding a low-amplitude independent stego signal, an example of which is least significant bit (LSB) matching. First, arguments are provided for modeling the differences between adjacent pixels using first-order and second-order Markov chains. Subsets of sample transition probability matrices are then used as features for a steganalyzer implemented by support vector machines. The major part of experiments, performed on four diverse image databases, focuses on evaluation of detection of LSB matching. The comparison to prior art reveals that the presented feature set offers superior accuracy in detecting LSB matching. Even though the feature set was developed specifically for spatial domain steganalysis, by constructing steganalyzers for ten algorithms for JPEG images, it is demonstrated that the features detect steganography in the transform domain as well. --- paper_title: Streaking in median filtered images paper_content: This paper presents a probabilistic analysis of the streaking or blotching effect commonly observed in median filtered signals in both one and two dimensions. 
The effects are identified as runs of equal or nearly equal values which create visual impressions that have no visual correlate. For one-dimensional discrete iid random signals with continuous input probability densities, the probability of a streak of length L occurring is computed and shown to be independent of the input probability distribution. Expressions for the first and second moments of the streak length are also derived, and certain asymptotic results are given. As the analysis and definition of the analogous effect in two dimensions is less tractable, the probability that medians taken from distinct overlapping windows will take the same value is derived for various filter geometries. The analytic results are supported by examples using both one- and two-dimensional signals. --- paper_title: On detection of median filtering in digital images paper_content: In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images - a widely used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of a linearity assumption, a detection of non-linear median filtering is of particular interest. The effectiveness of our method is backed with experimental evidence on a large image database. --- paper_title: Identification of bitmap compression history: JPEG detection and quantizer estimation paper_content: Sometimes image processing units inherit images in raster bitmap format only, so that processing is to be carried without knowledge of past operations that may compromise image quality (e.g., compression). To carry further processing, it is useful to not only know whether the image has been previously JPEG compressed, but to learn what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or for JPEG re-compression. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate compression parameters. Specifically, we developed a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust so that only sporadically an estimated quantizer step size is off, and when so, it is by one value. --- paper_title: Digital image source coder forensics via intrinsic fingerprints paper_content: Recent development in multimedia processing and network technologies has facilitated the distribution and sharing of multimedia through networks, and increased the security demands of multimedia contents. Traditional image content protection schemes use extrinsic approaches, such as watermarking or fingerprinting. However, under many circumstances, extrinsic content protection is not possible. Therefore, there is great interest in developing forensic tools via intrinsic fingerprints to solve these problems.
Source coding is a common step of natural image acquisition, so in this paper, we focus on the fundamental research on digital image source coder forensics via intrinsic fingerprints. First, we investigate the unique intrinsic fingerprint of many popular image source encoders, including transform-based coding (both discrete cosine transform and discrete wavelet transform based), subband coding, differential image coding, and also block processing as the traces of evidence. Based on the intrinsic fingerprint of image source encoders, we construct an image source coding forensic detector that identifies which source encoder is applied, what the coding parameters are along with confidence measures of the result. Our simulation results show that the proposed system provides trustworthy performance: for most test cases, the probability of detecting the correct source encoder is over 90%. --- paper_title: Transform Coder Classification for Digital Image Forensics paper_content: The area of non-intrusive forensic analysis has found many applications in the area of digital imaging. One unexplored area is the identification of source coding in digital images. In other words, given a digital image, can we identify which compression scheme was used, if any? This paper focuses on the aspect of transform coder classification, where we wish to determine which transform was used during compression. This scheme analyzes the histograms of coefficient subbands to determine the nature of the transform method. By obtaining the distance between the obtained histogram and the estimate of the original histogram, we can determine if the image was compressed using the transform tested. Results show that this method can successfully classify compression by transform as well as detect whether any compression has occurred at all in an image. --- paper_title: Block Size Forensic Analysis in Digital Images paper_content: In non-intrusive forensic analysis, we wish to find information and properties about a piece of data without any reference to the original data prior to processing. An important first step to forensic analysis is the detection and estimation of block processing. Most existing work in block measurement uses strong assumptions on the data related to the block size or the method of compression. In this paper, we propose a new method to estimate the block size in digital images in a blind manner for use in a forensic context. We make no assumptions on the block size or the nature of any previous processing. Our scheme can accurately estimate block sizes in images up to a PSNR of 42 dB where block artifacts are perceptually invisible. We also offer a measure of detection accuracy which correctly classifies an image as block-processed with a probability of 95.0% while keeping the probability of false alarm at 7.4%. --- paper_title: Exposing Digital Forgeries From JPEG Ghosts paper_content: When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether the part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of high and low quality as well as resolution. 
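The JPEG-ghost approach described in the preceding abstract can be sketched in a few lines: recompress the image at a range of candidate qualities and average the squared difference over blocks; a region that was originally saved at a lower quality shows a pronounced dip near that quality while the rest of the image dips elsewhere. The block size, quality range, and the file name in the commented usage are illustrative assumptions, and this is a simplified sketch rather than the cited paper's exact procedure.

# Minimal sketch of per-block JPEG-ghost maps (illustrative parameters).
import io
import numpy as np
from PIL import Image

def ghost_maps(path, qualities=range(40, 96, 5), block=16):
    original = np.asarray(Image.open(path).convert("L"), dtype=float)
    H, W = original.shape
    maps = {}
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(original.astype(np.uint8)).save(buf, format="JPEG", quality=q)
        buf.seek(0)
        recompressed = np.asarray(Image.open(buf).convert("L"), dtype=float)
        diff = (original - recompressed) ** 2
        # Average the squared difference over non-overlapping blocks.
        bh, bw = H // block, W // block
        maps[q] = diff[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    return maps

# Hypothetical usage: blocks whose difference dips at a quality other than the
# dip of the rest of the image are candidates for having been inserted from a
# more strongly compressed source.
# maps = ghost_maps("suspect_image.jpg")
# for q, m in maps.items():
#     print(q, float(m.mean()))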
--- paper_title: A generalized Benford's law for JPEG coefficients and its applications in image forensics paper_content: In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for a JPEG compressed bitmap image, and the detection of double compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model. --- paper_title: Detection of Double-Compression in JPEG Images for Applications in Steganography paper_content: This paper presents a method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor. These methods are essential for the construction of accurate targeted and blind steganalysis methods for JPEG images. The proposed methods use support vector machine classifiers with feature vectors formed by histograms of low-frequency discrete cosine transformation coefficients. The performance of the algorithms is compared to selected prior art. --- paper_title: Detecting doubly compressed JPEG images by using Mode Based First Digit Features paper_content: In this paper, we utilize the probabilities of the first digits of quantized DCT (discrete cosine transform) coefficients from individual AC (alternating current) modes to detect doubly compressed JPEG images. Our proposed features, named mode based first digit features (MBFDF), have been shown to outperform all previous methods on discriminating doubly compressed JPEG images from singly compressed JPEG images. Furthermore, combining the MBFDF with a multi-class classification strategy can be exploited to identify the quality factor of the primary JPEG compression, thus successfully revealing the double JPEG compression history of a given JPEG image. --- paper_title: Estimation of primary quantization matrix in double compressed JPEG images paper_content: In this report, we present a method for the estimation of the primary quantization matrix from a double compressed JPEG image. We first identify characteristic features that occur in DCT histograms of individual coefficients due to double compression. Then, we present three different approaches that estimate the original quantization matrix from double compressed images. Finally, the most successful of them, a Neural Network classifier, is discussed and its performance and reliability are evaluated in a series of experiments on various databases of double compressed images. It is also explained in this paper how double compression detection techniques and primary quantization matrix estimators can be used in steganalysis of JPEG files and in digital forensic analysis for detection of digital forgeries. --- paper_title: Detecting doctored JPEG images via DCT coefficient analysis paper_content: The steady improvement in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Some legal issues may occur when a doctored image cannot be distinguished from a real one by visual examination.
Realizing that it might be impossible to develop a method that is universal for all kinds of images and that JPEG is the most frequently used image format, we propose an approach that can detect doctored JPEG images and further locate the doctored parts, by examining the double quantization effect hidden among the DCT coefficients. To date, this approach is the only one that can locate the doctored part automatically. It also has several other advantages: the ability to detect images doctored by different kinds of synthesizing methods (such as alpha matting and inpainting, besides simple image cut/paste), the ability to work without fully decompressing the JPEG images, and the fast speed. Experiments show that our method is effective for JPEG images, especially when the compression quality is high. --- paper_title: Detecting Double JPEG Compression With the Same Quantization Matrix paper_content: Detection of double joint photographic experts group (JPEG) compression is of great significance in the field of digital forensics. Some successful approaches have been presented for detecting double JPEG compression when the primary compression and the secondary compression have different quantization matrices. However, when the primary compression and the secondary compression have the same quantization matrix, no detection method has been reported yet. In this paper, we present a method which can detect double JPEG compression with the same quantization matrix. Our algorithm is based on the observation that in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients, i.e., the quantized discrete cosine transform coefficients, between two sequential versions will monotonically decrease in general. For example, the number of different JPEG coefficients between the singly and doubly compressed images is generally larger than the number of different JPEG coefficients between the corresponding doubly and triply compressed images. Via a novel random perturbation strategy implemented on the JPEG coefficients of the recompressed test image, we can find a “proper” randomly perturbed ratio. For different images, this universal “proper” ratio will generate a dynamically changed threshold, which can be utilized to discriminate the singly compressed image and doubly compressed image. Furthermore, our method has the potential to detect triple JPEG compression, four times JPEG compression, etc. --- paper_title: Statistical Tools for Digital Forensics paper_content: A digitally altered photograph, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic photograph. As a result, photographs no longer hold the unique stature of a definitive recording of events. We describe several statistical techniques for detecting traces of digital tampering in the absence of any digital watermark or signature. In particular, we quantify statistical correlations that result from specific forms of digital tampering, and devise detection schemes to reveal these correlations. --- paper_title: Passive detection of doctored JPEG image via block artifact grid extraction paper_content: It has been noticed that the block artifact grids (BAG), caused by the blocking processing during JPEG compression, are usually mismatched when interpolating or concealing objects by copy-paste operations.
In this paper, the BAGs are extracted blindly with a new extraction algorithm, and then abnormal BAGs can be detected with a marking procedure. The phenomenon of grid mismatch or grid blank can then be taken as a trace of such tampering. Experimental results show that our method can mark these traces efficiently. --- paper_title: Temporal Forensics and Anti-Forensics for Motion Compensated Video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator. --- paper_title: Exposing digital forgeries in video by detecting double MPEG compression paper_content: With the advent of sophisticated and low-cost video editing software, it is becoming increasingly easier to tamper with digital video. In addition, an ever-growing number of video surveillance cameras is giving rise to an enormous amount of video data. The ability to ensure the integrity and authenticity of this data poses considerable challenges. Here we begin to explore techniques for detecting traces of tampering in digital video. Specifically, we show how a doubly compressed MPEG video sequence introduces specific static and temporal statistical perturbations whose presence can be used as evidence of tampering. --- paper_title: Exposing digital forgeries in video by detecting double quantization paper_content: We describe a technique for detecting double quantization in digital video that results from double MPEG compression or from combining two videos of different qualities (e.g., green-screening). We describe how double quantization can introduce statistical artifacts that, while not visible, can be quantified, measured, and used to detect tampering.
This technique can detect highly localized tampering in regions as small as 16 x 16 pixels. --- paper_title: Exposing digital forgeries by detecting inconsistencies in lighting paper_content: When creating a digital composite of, for example, two people standing side-by-side, it is often difficult to match the lighting conditions from the individual photographs. Lighting inconsistencies can therefore be a useful tool for revealing traces of digital tampering. Borrowing and extending tools from the field of computer vision, we describe how the direction of a point light source can be estimated from only a single image. We show the efficacy of this approach in real-world settings. --- paper_title: Exposing Digital Forgeries in Ballistic Motion paper_content: We describe a geometric technique to detect physically implausible trajectories of objects in video sequences. This technique explicitly models the three-dimensional ballistic motion of objects in free-flight and the two-dimensional projection of the trajectory into the image plane of a static or moving camera. Deviations from this model provide evidence of manipulation. The technique assumes that the object's trajectory is substantially influenced only by gravity, that the image of the object's center of mass can be determined from the images, and requires that any camera motion can be estimated from background elements. The computational requirements of the algorithm are modest, and any detected inconsistencies can be illustrated in an intuitive, geometric fashion. We demonstrate the efficacy of this analysis on videos of our own creation and on videos obtained from video-sharing websites. --- paper_title: Exposing digital forgeries through specular highlights on the eye paper_content: When creating a digital composite of two people, it is difficult to exactly match the lighting conditions under which each individual was originally photographed. In many situations, the light source in a scene gives rise to a specular highlight on the eyes. We show how the direction to a light source can be estimated from this highlight. Inconsistencies in lighting across an image are then used to reveal traces of digital tampering. --- paper_title: Exposing Digital Forgeries in Complex Lighting Environments paper_content: The availability of sophisticated digital imaging technology has given rise to digital forgeries that are increasing in sophistication and frequency. We describe a technique for exposing such fakes by detecting inconsistencies in lighting. We show how to approximate complex lighting environments with a low-dimensional model and, further, how to estimate the model's parameters from a single image. Inconsistencies in the lighting model are then used as evidence of tampering. --- paper_title: Nonintrusive component forensics of visual sensors using output images paper_content: Rapid technology development and the widespread use of visual sensors have led to a number of new problems related to protecting intellectual property rights, handling patent infringements, authenticating acquisition sources, and identifying content manipulations. This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside various processing modules of a digital device by only using the device output data without breaking the device apart.
We propose techniques to estimate the algorithms and parameters employed by important camera components, such as color filter array and color interpolation modules. The estimated interpolation coefficients provide useful features to construct an efficient camera identifier to determine the brand and model from which an image was captured. The results obtained from such component analysis are also useful to examine the similarities between the technologies employed by different camera models to identify potential infringement/licensing and to facilitate studies on technology evolution --- paper_title: Accurate Detection of Demosaicing Regularity for Digital Image Forensics paper_content: In this paper, we propose a novel accurate detection framework of demosaicing regularity from different source images. The proposed framework first reversely classifies the demosaiced samples into several categories and then estimates the underlying demosaicing formulas for each category based on partial second-order derivative correlation models, which detect both the intrachannel and the cross-channel demosaicing correlation. An expectation-maximization reverse classification scheme is used to iteratively resolve the ambiguous demosaicing axes in order to best reveal the implicit grouping adopted by the underlying demosaicing algorithm. Comparison results based on syntactic images show that our proposed formulation significantly improves the accuracy of the regenerated demosaiced samples from the sensor samples for a large number of diversified demosaicing algorithms. By running sequential forward feature selection, our reduced feature sets used in conjunction with the probabilistic support vector machine classifier achieve superior performance in identifying 16 demosaicing algorithms in the presence of common camera post demosaicing processing. When applied to real applications, including camera model and RAW-tool identification, our selected features achieve nearly perfect classification performances based on large sets of cropped image blocks. --- paper_title: Non-Intrusive Forensic Analysis of Visual Sensors Using Output Images paper_content: This paper considers the problem of non-intrusive forensic analysis of the individual components in visual sensors and its implementation. As a new addition to the emerging area of forensic engineering, we present a framework for analyzing technologies employed inside digital cameras based on output images, and develop a set of forensic signal processing algorithms for visual sensors based on color array sensor and interpolation methods. We show through simulations that the proposed method is robust against compression and noise, and can help identify various processing components inside the camera. Such a non-intrusive forensic framework would provide useful evidence for analyzing technology infringement and evolution for visual sensors. --- paper_title: Exposing digital forgeries in color filter array interpolated images paper_content: With the advent of low-cost and high-resolution digital cameras, and sophisticated photo editing software, digital images can be easily manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. Most digital cameras, for example, employ a single sensor in conjunction with a color filter array (CFA), and then interpolate the missing color samples to obtain a three channel color image. 
This interpolation introduces specific correlations which are likely to be destroyed when tampering with an image. We quantify the specific correlations introduced by CFA interpolation, and describe how these correlations, or lack thereof, can be automatically detected in any portion of an image. We show the efficacy of this approach in revealing traces of digital tampering in lossless and lossy compressed color images interpolated with several different CFA algorithms. --- paper_title: Source camera identification based on CFA interpolation paper_content: In this work, we focus our interest on the blind source camera identification problem by extending our results in the direction of M. Kharrazi et al. (2004). The interpolation in the color surface of an image due to the use of a color filter array (CFA) forms the basis of the paper. We propose to identify the source camera of an image based on traces of the proprietary interpolation algorithm deployed by a digital camera. For this purpose, a set of image characteristics are defined and then used in conjunction with a support vector machine based multi-class classifier to determine the originating digital camera. We also provide initial results on identifying source among two and three digital cameras. --- paper_title: Component Forensics of Digital Cameras: A Non-Intrusive Approach paper_content: This paper considers the problem of component forensics and proposes a methodology to identify the algorithms and parameters employed by various processing modules inside a digital camera. The proposed analysis techniques are non-intrusive, using only sample output images collected from the camera to find the color filter array pattern, and the algorithm and parameters of color interpolation employed in cameras. As demonstrated by various case studies in the paper, the features obtained from component forensic analysis provide useful evidence for such applications as detecting technology infringement, protecting intellectual property rights, determining camera source, and identifying image tampering. --- paper_title: Capturing images with digital still cameras paper_content: We face multiple technical tasks when designing and developing digital cameras for consumer use. As would be expected, image quality is a top concern. Digital still cameras fall into two categories. One uses a black box and a camera lens with a CCD (charge-coupled device) back unit that can acquire more than 3 million pixels. Designed for professional and business use, this camera normally costs more than $5,000. The second type of digital camera is designed for consumers and employs a CCD capable of acquiring 350,000 to over 1 million pixels (Mpixels). Because it is easy to handle and carries a reasonable price for the picture quality, its market has grown dramatically in the last three years. Here, I discuss the image-capturing system for a consumer-use digital still camera after briefly explaining its history. --- paper_title: Interactions between color plane interpolation and other image processing functions in electronic photography paper_content: Electronic cameras using a single CCD detector acquire scene color by subsampling in three color planes and subsequently interpolating the information to reconstruct three full-resolution color planes. The nature and size of the interpolation errors are a function of the algorithm used.
When interpolation errors are propagated through the rest of the imaging chain, it becomes evident that synergistic effects among image processing operations must be considered when selecting and tuning an interpolation algorithm. This presentation demonstrates and comments on these image processing interactions. --- paper_title: IMPROVEMENTS ON SOURCE CAMERA-MODEL IDENTIFICATION BASED ON CFA INTERPOLATION paper_content: The idea of using traces of interpolation algorithms, deployed by a digital camera, as an identifier in the source camera-model identification problem has been initially studied in [2]. In this work, we improve our previous approach by incorporating methods to better detect the interpolation artifacts in smooth image parts. To identify the source camera-model of a digital image, new features that can detect traces of low-order interpolation are introduced and used in conjunction with a support vector machine based multi-class classifier. Performance results due to newly added features are obtained considering source identification among two and three digital cameras. Also, these results are combined with those of [2] to further improve our methodology. --- paper_title: Color processing in digital cameras paper_content: In seconds, a digital camera performs full-color rendering that includes color filter array interpolation, color calibration, anti-aliasing, infrared rejection, and white-point correction. This article describes the design decisions that make this processing possible. --- paper_title: Robustness of color interpolation identification against anti-forensic operations paper_content: Color interpolation identification using digital images has been shown to be a powerful tool for addressing a range of digital forensic questions. However, due to the existence of adversaries who have the incentive to counter the identification, it is necessary to understand how color interpolation identification performs against anti-forensic operations that intentionally manipulate identification results. This paper proposes two anti-forensic techniques against which the robustness of color interpolation identification is investigated. The first technique employs parameter perturbation to circumvent identification. Various options that achieve different trade-offs between image quality and identification manipulation are examined. The second technique involves algorithm mixing and demonstrates that one can not only circumvent but also mislead the identification system while preserving the image quality. Additional discussions are also provided to enhance the understanding of anti-forensics and its implications to the design of identification systems. --- paper_title: Nonintrusive component forensics of visual sensors using output images paper_content: Rapid technology development and the widespread use of visual sensors have led to a number of new problems related to protecting intellectual property rights, handling patent infringements, authenticating acquisition sources, and identifying content manipulations. This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside various processing modules of a digital device by only using the device output data without breaking the device apart.
We propose techniques to estimate the algorithms and parameters employed by important camera components, such as color filter array and color interpolation modules. The estimated interpolation coefficients provide useful features to construct an efficient camera identifier to determine the brand and model from which an image was captured. The results obtained from such component analysis are also useful to examine the similarities between the technologies employed by different camera models to identify potential infringement/licensing and to facilitate studies on technology evolution. --- paper_title: Source digital camcorder identification using sensor photo response non-uniformity paper_content: Photo-response non-uniformity (PRNU) of digital sensors was recently proposed [1] as a unique identification fingerprint for digital cameras. The PRNU extracted from a specific image can be used to link it to the digital camera that took the image. Because digital camcorders use the same imaging sensors, in this paper, we extend this technique for identification of digital camcorders from video clips. We also investigate the problem of determining whether two video clips came from the same camcorder and the problem of whether two differently transcoded versions of one movie came from the same camcorder. The identification technique is a joint estimation and detection procedure consisting of two steps: (1) estimation of PRNUs from video clips using the Maximum Likelihood Estimator and (2) detecting the presence of PRNU using normalized cross-correlation. We anticipate this technology to be an essential tool for fighting piracy of motion pictures. Experimental results demonstrate the reliability and generality of our approach. --- paper_title: Digital camera identification from sensor pattern noise paper_content: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction. --- paper_title: Source Camera Identification Using Enhanced Sensor Pattern Noise paper_content: Sensor pattern noises (SPNs), extracted from digital images to serve as the fingerprints of imaging devices, have been proven to be an effective way for digital device identification. However, as we demonstrate in this work, the limitation of the current method of extracting SPNs is that the SPNs extracted from images can be severely contaminated by details from scenes, and as a result, the identification rate is unsatisfactory unless images of a large size are used. In this work, we propose a novel approach for attenuating the influence of details from scenes on SPNs so as to improve the device identification rate of the identifier.
The hypothesis underlying our SPN enhancement method is that the stronger a signal component in an SPN is, the less trustworthy the component should be, and thus should be attenuated. This hypothesis suggests that an enhanced SPN can be obtained by assigning weighting factors inversely proportional to the magnitude of the SPN components. --- paper_title: Exploring compression effects for improved source camera identification using strongly compressed video paper_content: This paper presents a study of the video compression effect on source camera identification based on the Photo-Response Non-Uniformity (PRNU). Specifically, the reliability of different types of frames in a compressed video is first investigated, which shows quantitatively that I-frames are more reliable than P-frames for PRNU estimation. Motivated by this observation, a new mechanism for estimating the reference PRNU and two mechanisms for estimating the test-video PRNU are proposed to achieve higher accuracy with fewer frames used. Experiments are performed to validate the effectiveness of the proposed mechanisms. --- paper_title: Scanner identification using sensor pattern noise paper_content: Digital images can be captured or generated by a variety of sources including digital cameras and scanners. In many cases it is important to be able to determine the source of a digital image. This paper presents methods for authenticating images that have been acquired using flatbed desktop scanners. The method is based on using the pattern noise of the imaging sensor as a fingerprint for the scanner, similar to methods that have been reported for identifying digital cameras. To identify the source scanner of an image, a reference pattern is estimated for each scanner and is treated as a unique fingerprint of the scanner. An anisotropic local polynomial estimator is used for obtaining the reference patterns. To further improve the classification accuracy, a feature vector based approach using an SVM classifier is used to classify the pattern noise. This feature vector based approach is shown to achieve a high classification accuracy. --- paper_title: Scanner Identification Using Feature-Based Processing and Analysis paper_content: Digital images can be obtained through a variety of sources including digital cameras and scanners. In many cases, the ability to determine the source of a digital image is important. This paper presents methods for authenticating images that have been acquired using flatbed desktop scanners. These methods use scanner fingerprints based on statistics of imaging sensor pattern noise. To capture different types of sensor noise, a denoising filterbank consisting of four different denoising filters is used for obtaining the noise patterns. To identify the source scanner, a support vector machine classifier based on these fingerprints is used. These features are shown to achieve high classification accuracy. Furthermore, the selected fingerprints based on statistical properties of the sensor noise are shown to be robust under postprocessing operations, such as JPEG compression, contrast stretching, and sharpening. --- paper_title: Physics-motivated features for distinguishing photographic images and computer graphics paper_content: The increasing photorealism of computer graphics has made computer graphics a convincing form of image forgery. Therefore, classifying photographic images and photorealistic computer graphics has become an important problem for image forgery detection.
In this paper, we propose a new geometry-based image model, motivated by the physical image generation process, to tackle the above-mentioned problem. The proposed model reveals certain physical differences between the two image categories, such as the gamma correction in photographic images and the sharp structures in computer graphics. For the problem of image forgery detection, we propose two levels of image authenticity definition, i.e., imaging-process authenticity and scene authenticity, and analyze our technique against these definitions. Such a definition is important for making the concept of image authenticity computable. Apart from offering physical insights, our technique with a classification accuracy of 83.5% outperforms those in the prior work, i.e., wavelet features at 80.3% and cartoon features at 71.0%. We also consider a recapturing attack scenario and propose a counter-attack measure. In addition, we constructed a publicly available benchmark dataset with images of diverse content and computer graphics of high photorealism. --- paper_title: Forensic techniques for classifying scanner, computer generated and digital camera images paper_content: Digital images can be captured or generated by a variety of sources including digital cameras, scanners and computer graphics software. In many cases it is important to be able to determine the source of a digital image, such as for criminal and forensic investigation. This paper presents methods for distinguishing between an image captured using a digital camera, a computer generated image and an image captured using a scanner. The method proposed here is based on the differences in the image generation processes used in these devices and is independent of the image content. The method is based on using features of the residual pattern noise that exists in images obtained from digital cameras and scanners. The residual noise present in computer generated images does not have structures similar to the pattern noise of cameras and scanners. The experiments show that a feature based approach using an SVM classifier gives high accuracy. --- paper_title: Robust scanner identification based on noise features paper_content: A large portion of digital image data available today is acquired using digital cameras or scanners. While cameras allow digital reproduction of natural scenes, scanners are often used to capture hardcopy art in more controlled scenarios. This paper proposes a new technique for non-intrusive scanner model identification, which can be further extended to perform tampering detection on scanned images. Using only scanned image samples that contain arbitrary content, we construct a robust scanner identifier to determine the brand/model of the scanner used to capture each scanned image. The proposed scanner identifier is based on statistical features of scanning noise. We first analyze scanning noise from several angles, including through image de-noising, wavelet analysis, and neighborhood prediction, and then obtain statistical features from each characterization. Experimental results demonstrate that the proposed method can effectively identify the correct scanner brands/models with high accuracy. --- paper_title: Intrinsic Sensor Noise Features for Forensic Analysis on Scanners and Scanned Images paper_content: A large portion of digital images available today is acquired using digital cameras or scanners.
While cameras provide digital reproduction of natural scenes, scanners are often used to capture hard-copy art in a more controlled environment. In this paper, new techniques for nonintrusive scanner forensics that utilize intrinsic sensor noise features are proposed to verify the source and integrity of digital scanned images. Scanning noise is analyzed from several aspects using only scanned image samples, including through image denoising, wavelet analysis, and neighborhood prediction, and statistical features are then obtained from each characterization. Based on the proposed statistical features of scanning noise, a robust scanner identifier is constructed to determine the model/brand of the scanner used to capture a scanned image. Utilizing these noise features, we extend the scope of acquisition forensics to differentiating scanned images from camera-taken photographs and computer-generated graphics. The proposed noise features also enable tampering forensics to detect postprocessing operations on scanned images. Experimental results are presented to demonstrate the effectiveness of employing the proposed noise features for performing various forensic analyses on scanners and scanned images. --- paper_title: Noise Features for Image Tampering Detection and Steganalysis paper_content: With the increasing availability of low-cost image editing software, the authenticity of digital images can no longer be taken for granted. Digital images have also been used as cover data for transmitting secret information in the field of steganography. In this paper, we introduce a new set of features for multimedia forensics to determine if a digital image is an authentic camera output or if it has been tampered with or embedded with hidden data. We perform such image forensic analysis employing three sets of statistical noise features, including those from denoising operations, wavelet analysis, and neighborhood prediction. Our experimental results demonstrate that the proposed method can effectively distinguish digital images from their tampered or stego versions. --- paper_title: Determining Image Origin and Integrity Using Sensor Noise paper_content: In this paper, we provide a unified framework for identifying the source digital camera from its images and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), which is a unique stochastic fingerprint of imaging sensors. The PRNU is obtained using a maximum-likelihood estimator derived from a simplified model of the sensor output. Both digital forensics tasks are then achieved by detecting the presence of sensor PRNU in specific regions of the image under investigation. The detection is formulated as a hypothesis testing problem. The statistical distribution of the optimal test statistics is obtained using a predictor of the test statistics on small image blocks. The predictor enables more accurate and meaningful estimation of probabilities of false rejection of a correct camera and missed detection of a tampered region. We also include a benchmark implementation of this framework and detailed experimental validation. The robustness of the proposed forensic methods is tested on common image processing, such as JPEG compression, gamma correction, resizing, and denoising. --- paper_title: Exposing digital forgeries in color filter array interpolated images paper_content: With the advent of low-cost and high-resolution digital cameras, and sophisticated photo editing software, digital images can be easily manipulated and altered.
Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. Most digital cameras, for example, employ a single sensor in conjunction with a color filter array (CFA), and then interpolate the missing color samples to obtain a three channel color image. This interpolation introduces specific correlations which are likely to be destroyed when tampering with an image. We quantify the specific correlations introduced by CFA interpolation, and describe how these correlations, or lack thereof, can be automatically detected in any portion of an image. We show the efficacy of this approach in revealing traces of digital tampering in lossless and lossy compressed color images interpolated with several different CFA algorithms. --- paper_title: Detecting digital image forgeries using sensor pattern noise paper_content: We present a new approach to detection of forgeries in digital images under the assumption that either the camera that took the image is available or other images taken by that camera are available. Our method is based on detecting the presence of the camera pattern noise, which is a unique stochastic characteristic of imaging sensors, in individual regions in the image. The forged region is determined as the one that lacks the pattern noise. The presence of the noise is established using correlation as in detection of spread spectrum watermarks. We propose two approaches. In the first one, the user selects an area for integrity verification. The second method attempts to automatically determine the forged area without assuming any a priori knowledge. The methods are tested both on examples of real forgeries and on non-forged images. We also investigate how further image processing applied to the forged image, such as lossy compression or filtering, influences our ability to verify image integrity. --- paper_title: Image tamper detection based on demosaicing artifacts paper_content: In this paper, we introduce tamper detection techniques based on artifacts created by Color Filter Array (CFA) processing in most digital cameras. The techniques are based on computing a single feature and a simple threshold based classifier. The efficacy of the approach was tested over thousands of authentic, tampered, and computer generated images. Experimental results demonstrate reasonably low error rates. --- paper_title: Image Tampering Identification using Blind Deconvolution paper_content: Digital images have been used in a growing number of applications, from law enforcement and surveillance to medical diagnosis and consumer photography. With such widespread popularity and the presence of low-cost image editing software, the integrity of image content can no longer be taken for granted. In this paper, we propose a novel technique based on blind deconvolution to verify image authenticity. We consider the direct output images of a camera as authentic, and introduce algorithms to detect further processing such as tampering applied to the image. Our proposed method is based on the observation that many tampering operations can be approximated as a combination of linear and non-linear components. We model the linear part of the tampering process as a filter, and obtain its coefficients using blind deconvolution. These estimated coefficients are then used to identify possible manipulations. We demonstrate the effectiveness of the proposed image authentication technique and compare our results with existing works.
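Several of the entries above rely on the same basic sensor-noise pipeline: estimate a noise residual with a denoising filter, build a camera reference pattern by averaging residuals, and test an image (or an image region) by correlation. The sketch below illustrates that pipeline under simplifying assumptions; it uses a Gaussian filter from SciPy as a stand-in for the wavelet-based denoiser used in the cited work, and the array inputs are hypothetical grayscale images of identical resolution and alignment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image - denoised(image); a crude stand-in for wavelet denoising."""
    img = np.asarray(img, dtype=np.float64)
    return img - gaussian_filter(img, sigma)

def reference_pattern(images):
    """Average the residuals of several images taken by the same camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def block_correlation_map(test_img, ref, block=64):
    """Local correlation between the test residual and the reference pattern.
    Blocks whose correlation collapses are candidate forged regions; a high
    global correlation supports the claim that the camera took the image."""
    res = noise_residual(test_img)
    h = (res.shape[0] // block) * block
    w = (res.shape[1] // block) * block
    out = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i // block, j // block] = normalized_correlation(
                res[i:i + block, j:j + block], ref[i:i + block, j:j + block])
    return out
```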
--- paper_title: Tampering identification using Empirical Frequency Response paper_content: With the widespread popularity of digital images and the presence of easy-to-use image editing software, content integrity can no longer be taken for granted, and there is a strong need for techniques that not only detect the presence of tampering but also identify its type. This paper focuses on tampering-type identification and introduces a new approach based on the Empirical Frequency Response (EFR) to address this problem. We show that several types of tampering operations, both linear shift invariant (LSI) and non-LSI, can be characterized consistently and distinctly by their EFRs. We then extend the approach to estimate the EFR for scenarios where only the final image is available. Theoretical reasoning supported by experimental results verifies the effectiveness of this method for identifying the type of a tampering operation. --- paper_title: Digital image forensics via intrinsic fingerprints paper_content: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera captured image is modelled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these postcamera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by other image production processes. Any change or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints, suggests that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics. --- paper_title: Intrinsic Sensor Noise Features for Forensic Analysis on Scanners and Scanned Images paper_content: A large portion of digital images available today is acquired using digital cameras or scanners. While cameras provide digital reproduction of natural scenes, scanners are often used to capture hard-copy art in a more controlled environment. In this paper, new techniques for nonintrusive scanner forensics that utilize intrinsic sensor noise features are proposed to verify the source and integrity of digital scanned images. Scanning noise is analyzed from several aspects using only scanned image samples, including through image denoising, wavelet analysis, and neighborhood prediction, and statistical features are then obtained from each characterization.
Based on the proposed statistical features of scanning noise, a robust scanner identifier is constructed to determine the model/brand of the scanner used to capture a scanned image. Utilizing these noise features, we extend the scope of acquisition forensics to differentiating scanned images from camera-taken photographs and computer-generated graphics. The proposed noise features also enable tampering forensics to detect postprocessing operations on scanned images. Experimental results are presented to demonstrate the effectiveness of employing the proposed noise features for performing various forensic analysis on scanners and scanned images. --- paper_title: Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis. paper_content: This article reports on the electric network frequency criterion as a means of assessing the integrity of digital audio/video evidence and forensic IT and telecommunication analysis. A brief description is given to different ENF types and phenomena that determine ENF variations. In most situations, to reach a non-authenticity opinion, the visual inspection of spectrograms and comparison with an ENF database are enough. A more detailed investigation, in the time domain, requires short time windows measurements and analyses. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented, in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording and a TV broadcasted reportage. By applying the ENF Criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the registering operation. --- paper_title: Instantaneous frequency estimation and localization for ENF signals paper_content: Forensic analysis based on Electric Network Frequency (ENF) fluctuations is an emerging technology for authenticating multimedia recordings. This class of techniques requires extracting frequency fluctuations from multimedia recordings and comparing them with the ground truth frequencies, obtained from the power mains, at the corresponding time. Most current guidelines for frequency estimation from the ENF signal use non-parametric approaches. Such approaches have limited temporal-frequency resolution due to the tradeoffs of the time-frequency resolutions as well as computational power. To facilitate robust high-resolution matching, it is important to estimate instantaneous frequency using as few samples as possible. The use of subspace-based methods for high resolution frequency estimation is fairly new for ENF analysis. In this paper, a systematic study of several high resolution low-complexity frequency estimation algorithms is conducted, focusing on estimating the frequencies in short time-frames. After establishing the performance of several frequency estimation algorithms, a study towards using the ENF signal for estimating the location-of-recording is carried out. Experiments conducted on ENF data collected in several cities indicate the presence of location-specific signatures that can be exploited for future forensic applications. 
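The ENF-based analyses cited above all start from the same step: extracting the slowly varying mains frequency from a recording so it can be matched against a reference database. A minimal short-time spectral sketch of that step is given below, assuming NumPy and SciPy are available; the nominal frequency, band width, and frame length are illustrative settings rather than the values used in the cited papers, and high-resolution (e.g., subspace-based) estimators would replace the FFT peak picking when short frames are required.

```python
import numpy as np
from scipy.signal import butter, filtfilt, get_window

def extract_enf(x, fs, nominal=60.0, band=1.0, frame_sec=2.0, overlap=0.5):
    """Frame-by-frame ENF estimate from a recording x sampled at fs Hz.
    In practice the recording is usually downsampled to a few hundred Hz first."""
    b, a = butter(4, [(nominal - band) / (fs / 2), (nominal + band) / (fs / 2)], "bandpass")
    x = filtfilt(b, a, np.asarray(x, dtype=np.float64))
    n = int(frame_sec * fs)
    hop = int(n * (1 - overlap))
    win = get_window("hann", n)
    nfft = 16 * n                              # zero-padding for a finer frequency grid
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    mask = (freqs > nominal - band) & (freqs < nominal + band)
    enf = []
    for start in range(0, len(x) - n + 1, hop):
        spec = np.abs(np.fft.rfft(x[start:start + n] * win, nfft))
        enf.append(freqs[mask][np.argmax(spec[mask])])
    # The resulting sequence is compared against a ground-truth ENF log from the grid.
    return np.array(enf)
```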
--- paper_title: Audio Authenticity: Detecting ENF Discontinuity With High Precision Phase Analysis paper_content: This paper addresses a forensic tool used to assess audio authenticity. The proposed method is based on detecting phase discontinuity of the power grid signal; this signal, referred to as electric network frequency (ENF), is sometimes embedded in audio signals when the recording is carried out with the equipment connected to an electrical outlet or when certain microphones are in an ENF magnetic field. After down-sampling and band-filtering the audio around the nominal value of the ENF, the result can be considered a single tone such that a high-precision Fourier analysis can be used to estimate its phase. The estimated phase provides a visual aid to locating editing points (signalled by abrupt phase changes) and inferring the type of audio editing (insertion or removal of audio segments). From the estimated values, a feature is used to quantify the discontinuity of the ENF phase, allowing an automatic decision concerning the authenticity of the audio evidence. The theoretical background is presented along with practical implementation issues related to the proposed technique, whose performance is evaluated on digitally edited audio signals. --- paper_title: "Seeing" ENF: natural time stamp for digital video via optical sensing and signal processing paper_content: Electric Network Frequency (ENF) fluctuates slightly over time from its nominal value of 50 Hz/60 Hz. The fluctuations in the ENF remain consistent across the entire power grid even when measured at physically distant locations. The near-invisible flickering of fluorescent lights connected to the power mains reflect these fluctuations present in the ENF. In this paper, mechanisms using optical sensors and video cameras to record and validate the presence of the ENF fluctuations in fluorescent lighting are presented. Signal processing techniques are applied to demonstrate a high correlation between the fluctuations in the ENF signal captured from fluorescent lighting and the ENF signal captured directly from power mains supply. The proposed technique is then used to demonstrate the presence of the ENF signal in video recordings taken in various geographical areas. Experimental results show that the ENF signal can be used as a natural timestamp for optical sensor recordings and video surveillance recordings from indoor environments under fluorescent lighting. Application of the ENF signal analysis to tampering detection of surveillance video recordings is also demonstrated. --- paper_title: Signal Processing of Power Quality Disturbances paper_content: PREFACE. ACKNOWLEDGMENTS. 1 INTRODUCTION. 1.1 Modern View of Power Systems. 1.2 Power Quality. 1.3 Signal Processing and Power Quality. 1.4 Electromagnetic Compatibility Standards. 1.5 Overview of Power Quality Standards. 1.6 Compatibility Between Equipment and Supply. 1.7 Distributed Generation. 1.8 Conclusions. 1.9 About This Book. 2 ORIGIN OF POWER QUALITY VARIATIONS. 2.1 Voltage Frequency Variations. 2.2 Voltage Magnitude Variations. 2.3 Voltage Unbalance. 2.4 Voltage Fluctuations and Light Flicker. 2.5 Waveform Distortion. 2.6 Summary and Conclusions. 3 PROCESSING OF STATIONARY SIGNALS. 3.1 Overview of Methods. 3.2 Parameters That Characterize Variations. 3.3 Power Quality Indices. 3.4 Frequency-Domain Analysis and Signal Transformation. 3.5 Estimation of Harmonics and Interharmonics. 3.6 Estimation of Broadband Spectrum. 3.7 Summary and Conclusions. 
3.8 Further Reading. 4 PROCESSING OF NONSTATIONARY SIGNALS. 4.1 Overview of Some Nonstationary Power Quality Data Analysis Methods. 4.2 Discrete STFT for Analyzing Time-Evolving Signal Components. 4.3 Discrete Wavelet Transforms for Time-Scale Analysis of Disturbances. 4.4 Block-Based Modeling. 4.5 Models Directly Applicable to Nonstationary Data. 4.6 Summary and Conclusion. 4.7 Further Reading. 5 STATISTICS OF VARIATIONS. 5.1 From Features to System Indices. 5.2 Time Aggregation. 5.3 Characteristics Versus Time. 5.4 Site Indices. 5.5 System Indices. 5.6 Power Quality Objectives. 5.7 Summary and Conclusions. 6 ORIGIN OF POWER QUALITY EVENTS. 6.1 Interruptions. 6.2 Voltage Dips. 6.3 Transients. 6.4 Summary and Conclusions. 7 TRIGGERING AND SEGMENTATION. 7.1 Overview of Existing Methods. 7.2 Basic Concepts of Triggering and Segmentation. 7.3 Triggering Methods. 7.4 Segmentation. 7.5 Summary and Conclusions. 8 CHARACTERIZATION OF POWER QUALITY EVENTS. 8.1 Voltage Magnitude Versus Time. 8.2 Phase Angle Versus Time. 8.3 Three-Phase Characteristics Versus Time. 8.4 Distortion During Event. 8.5 Single-Event Indices: Interruptions. 8.6 Single-Event Indices: Voltage Dips. 8.7 Single-Event Indices: Voltage Swells. 8.8 Single-Event Indices Based on Three-Phase Characteristics. 8.9 Additional Information from Dips and Interruptions. 8.10 Transients. 8.11 Summary and Conclusions. 9 EVENT CLASSIFICATION. 9.1 Overview of Machine Data Learning Methods for Event Classification. 9.2 Typical Steps Used in Classification System. 9.3 Learning Machines Using Linear Discriminants. 9.4 Learning and Classification Using Probability Distributions. 9.5 Learning and Classification Using Artificial Neural Networks. 9.6 Learning and Classification Using Support Vector Machines. 9.7 Rule-Based Expert Systems for Classification of Power System Events. 9.8 Summary and Conclusions. 10 EVENT STATISTICS. 10.1 Interruptions. 10.2 Voltage Dips: Site Indices. 10.3 Voltage Dips: Time Aggregation. 10.4 Voltage Dips: System Indices. 10.5 Summary and Conclusions. 11 CONCLUSIONS. 11.1 Events and Variations. 11.2 Power Quality Variations. 11.3 Power Quality Events. 11.4 Itemization of Power Quality. 11.5 Signal-Processing Needs. APPENDIX A IEC STANDARDS ON POWER QUALITY. APPENDIX B IEEE STANDARDS ON POWER QUALITY. BIBLIOGRAPHY. INDEX. --- paper_title: Determining Image Origin and Integrity Using Sensor Noise paper_content: In this paper, we provide a unified framework for identifying the source digital camera from its images and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), which is a unique stochastic fingerprint of imaging sensors. The PRNU is obtained using a maximum-likelihood estimator derived from a simplified model of the sensor output. Both digital forensics tasks are then achieved by detecting the presence of sensor PRNU in specific regions of the image under investigation. The detection is formulated as a hypothesis testing problem. The statistical distribution of the optimal test statistics is obtained using a predictor of the test statistics on small image blocks. The predictor enables more accurate and meaningful estimation of probabilities of false rejection of a correct camera and missed detection of a tampered region. We also include a benchmark implementation of this framework and detailed experimental validation. The robustness of the proposed forensic methods is tested on common image processing, such as JPEG compression, gamma correction, resizing, and denoising. 
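For the phase-based audio authentication approach described a few entries above, the key quantity is the frame-by-frame phase of the band-limited mains component: an insertion or deletion shows up as an abrupt phase jump. The following is a minimal sketch of that idea, assuming NumPy; the frame length, nominal frequency, and jump threshold are illustrative assumptions rather than the cited paper's settings, and the input is assumed to be already band-pass filtered around the nominal ENF.

```python
import numpy as np

def enf_phase_track(x, fs, nominal=60.0, frame_sec=1.0):
    """Phase of the mains component per non-overlapping frame, via a
    single-bin DFT at the nominal ENF, then unwrapped over time."""
    n = int(frame_sec * fs)
    t = np.arange(n) / fs
    ref = np.exp(-2j * np.pi * nominal * t)
    phases = []
    for start in range(0, len(x) - n + 1, n):
        phases.append(np.angle(np.sum(x[start:start + n] * ref)))
    return np.unwrap(np.array(phases))

def edit_candidates(phases, jump=1.0):
    """Frames whose phase increment deviates sharply from the local trend
    are flagged as candidate insertion/removal points."""
    d = np.diff(phases)
    return np.where(np.abs(d - np.median(d)) > jump)[0]
```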
--- paper_title: Can we trust digital image forensics? paper_content: Compared to the prominent role digital images play in today's multimedia society, research in the field of image authenticity is still in its infancy. Only recently, research on digital image forensics has gained attention by addressing tamper detection and image source identification. However, most publications in this emerging field still lack rigorous discussions of robustness against strategic counterfeiters, who anticipate the existence of forensic techniques. As a result, the question of trustworthiness of digital image forensics arises. This work takes a closer look at two state-of-the-art forensic methods and proposes two counter-techniques: one to perform resampling operations undetectably and another one to forge traces of image origin. Implications for future image forensic systems will be discussed. --- paper_title: Digital camera identification from sensor pattern noise paper_content: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction. --- paper_title: Hiding Traces of Resampling in Digital Images paper_content: Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques. --- paper_title: Anti-forensics of digital image compression paper_content: As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image.
We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image's transform coefficients. This framework operates by estimating the distribution of an image's transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means. --- paper_title: Temporal Forensics and Anti-Forensics for Motion Compensated Video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator. 
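The compression anti-forensics entry above removes JPEG fingerprints by modeling the pre-compression distribution of transform coefficients and adding dither so that the comb-shaped histogram of quantized DCT coefficients becomes smooth again. Below is a minimal one-subband sketch of that idea using a Laplacian model and truncated inverse-CDF sampling; the scale estimator, the parameters, and the function names are illustrative assumptions rather than the exact published procedure. A full pipeline would repeat this per DCT subband and additionally smooth blocking artifacts, as the entry notes.

```python
import numpy as np

def _lap_cdf(x, b):
    return np.where(x < 0, 0.5 * np.exp(x / b), 1.0 - 0.5 * np.exp(-x / b))

def _lap_icdf(u, b):
    return np.where(u < 0.5, b * np.log(2.0 * u), -b * np.log(2.0 * (1.0 - u)))

def add_antiforensic_dither(q_coeffs, q_step, rng):
    """Replace each de-quantized coefficient by a draw from a Laplacian truncated
    to its quantization bin, removing the comb-shaped histogram left by JPEG."""
    b = max(np.mean(np.abs(q_coeffs)), q_step / 4.0)   # crude scale estimate with a floor
    lo = q_coeffs - q_step / 2.0
    hi = q_coeffs + q_step / 2.0
    u = rng.uniform(_lap_cdf(lo, b), _lap_cdf(hi, b))  # truncated inverse-CDF sampling
    return _lap_icdf(u, b)

# toy usage: quantize Laplacian data, then dither it back to a smooth histogram
rng = np.random.default_rng(1)
q_step = 8.0
original = rng.laplace(0.0, 10.0, 10_000)
quantized = np.round(original / q_step) * q_step       # JPEG-style quantization
restored = add_antiforensic_dither(quantized, q_step, rng)
print("distinct values before/after dither:",
      np.unique(quantized).size, np.unique(restored).size)
```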
--- paper_title: Undetectable image tampering through JPEG compression anti-forensics paper_content: Recently, a number of digital image forensic techniques have been developed which are capable of identifying an image's origin, tracing its processing history, and detecting image forgeries. Though these techniques are capable of identifying standard image manipulations, they do not address the possibility that anti-forensic operations may be designed and used to hide evidence of image tampering. In this paper, we propose an anti-forensic operation capable of removing blocking artifacts from a previously JPEG compressed image. Furthermore, we show that by using this operation along with another anti-forensic operation which we recently proposed, we are able to fool forensic methods designed to detect evidence of JPEG compression in decoded images, determine an image's origin, detect double JPEG compression, and identify cut-and-paste image forgeries. --- paper_title: Temporal Forensics and Anti-Forensics for Motion Compensated Video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator. --- paper_title: Anti-forensics for frame deletion/addition in MPEG video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. 
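The entry above observes that deleting or adding frames shifts the group-of-pictures alignment and leaves a temporally periodic fingerprint in the P-frame prediction error. A minimal sketch of how such a periodic fingerprint could be flagged is given below: it simply looks for a dominant spike in the DFT of the mean prediction-error sequence. The synthetic error sequence, the GOP length, and the peak-to-background threshold are assumptions for illustration, not the detector of the cited work.

```python
import numpy as np

def detect_periodic_fingerprint(pred_error, min_peak_ratio=4.0):
    """Flag frame deletion/addition if the P-frame prediction-error sequence
    contains a dominant periodic component (a spike in its DFT)."""
    e = np.asarray(pred_error, dtype=float)
    e = e - e.mean()                          # remove the DC component before the DFT
    spectrum = np.abs(np.fft.rfft(e))[1:]     # drop the DC bin
    peak = spectrum.max()
    background = np.median(spectrum) + 1e-12
    ratio = peak / background
    return ratio, ratio > min_peak_ratio

# toy usage: a GOP of 12 frames, with deletion causing an error spike every 12 frames
rng = np.random.default_rng(2)
n_frames = 240
baseline = 10 + rng.normal(0, 0.5, n_frames)
tampered = baseline.copy()
tampered[::12] += 6.0                         # periodic increase after frame deletion
for name, seq in [("untouched", baseline), ("frames deleted", tampered)]:
    ratio, flagged = detect_periodic_fingerprint(seq)
    print(f"{name}: peak/background = {ratio:.1f}, flagged = {flagged}")
```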
By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments. --- paper_title: Defending Against Fingerprint-Copy Attack in Sensor-Based Camera Identification paper_content: Sensor photoresponse nonuniformity has been proposed as a unique identifier (fingerprint) for various forensic tasks, including digital-camera ballistics in which an image is matched to the specific camera that took it. The problem investigated here concerns the situation when an adversary estimates the sensor fingerprint from a set of images and superimposes it onto an image from a different camera to frame an innocent victim. This paper proposes a reliable method for detecting such fake fingerprints under rather mild and general assumptions about the adversary's activity and the means available to the victim. The proposed method is subjected to experiments to evaluate its reliability as well as its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought. --- paper_title: Anti-forensics for frame deletion/addition in MPEG video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments. --- paper_title: Temporal Forensics and Anti-Forensics for Motion Compensated Video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. 
Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator. --- paper_title: How secure are power network signature based time stamps? paper_content: A time stamp based on the power network signature called the Electrical Network Frequency (ENF) has been used by an emerging class of approaches for authenticating digital audio and video recordings in computer-to-computer communications. However, the presence of adversaries may render the time stamp insecure, and it is crucial to understand the robustness of ENF analysis against anti-forensic operations. This paper investigates possible anti-forensic operations that can remove and alter the ENF signal while trying to preserve the host signal, and develops detection methods targeting these operations. Improvements over anti-forensic operations that can circumvent the detection are also examined, for which various trade-offs are discussed. To develop an understanding of the dynamics between a forensic analyst and an adversary, an evolutionary perspective and a game-theoretical perspective are proposed, which allow for a comprehensive characterization of plausible anti-forensic strategies and countermeasures. Such an understanding has the potential to lead to more secure and reliable time stamp schemes based on ENF analysis. --- paper_title: Temporal Forensics and Anti-Forensics for Motion Compensated Video paper_content: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. 
We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator. --- paper_title: The Source Identification Game: An Information-Theoretic Perspective paper_content: We introduce a theoretical framework in which to cast the source identification problem. Thanks to the adoption of a game-theoretic approach, the proposed framework permits us to derive the ultimate achievable performance of the forensic analysis in the presence of an adversary aiming at deceiving it. The asymptotic Nash equilibrium of the source identification game is derived under an assumption on the resources on which the forensic analyst may rely. The payoff at the equilibrium is analyzed, deriving the conditions under which a successful forensic analysis is possible and the error exponent of the false-negative error probability in such a case. The difficulty of deriving a closed-form solution for general instances of the game is alleviated by the introduction of an efficient numerical procedure for the derivation of the optimum attacking strategy. The numerical analysis is applied to a case study to show the kind of information it can provide. --- paper_title: Multimedia Data Hiding paper_content: Introduction.- Preliminaries.- Classification and capacity of embedding.- Handling uneven embedding capacity.- Data hiding in binary images.- Multilevel data hiding for image & video.- Data hiding for image authentication.- Data hiding for video communication.- Attacks on known data-hiding algorithms.- Attacks on unknown data-hiding algorithms.- Conclusions and perspectives. --- paper_title: Image-adaptive watermarking using visual models paper_content: The huge success of the Internet allows for the transmission, wide distribution, and access of electronic data in an effortless manner. Content providers are faced with the challenge of how to protect their electronic data. This problem has generated a flurry of research activity in the area of digital watermarking of electronic content for copyright protection. The challenge here is to introduce a digital watermark that does not alter the perceived quality of the electronic content, while being extremely robust to attack. For instance, in the case of image data, editing the picture or illegal tampering should not destroy or transform the watermark into another valid signature. Equally important, the watermark should not alter the perceived visual quality of the image. From a signal processing perspective, the two basic requirements for an effective watermarking scheme, robustness and transparency, conflict with each other. We propose two watermarking techniques for digital images that are based on utilizing visual models which have been developed in the context of image compression. 
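The image-adaptive watermarking entry above scales the watermark per pixel or coefficient according to a visual model, so embedding strength tracks what the image can hide. The sketch below uses local standard deviation as a crude stand-in for a just-noticeable-difference map and a simple (non-blind) correlation detector; every function and parameter here is an illustrative assumption rather than the cited schemes' DCT- or wavelet-domain models.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def perceptual_weight(img, win=7):
    """Local standard deviation as a crude masking map: busy regions can hide more."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def embed(img, key, strength=0.1):
    """Add an antipodal watermark scaled by the image-dependent perceptual weight."""
    rng = np.random.default_rng(key)
    wm = rng.choice([-1.0, 1.0], size=img.shape)
    alpha = strength * perceptual_weight(img)          # image-dependent upper bound
    return img + alpha * wm, wm

def detect(img_test, img_ref, wm):
    """Non-blind detection: correlate the difference image with the watermark sign."""
    diff = img_test.astype(float) - img_ref.astype(float)
    return float(np.mean(diff * wm))

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (128, 128)).astype(float)
marked, wm = embed(cover, key=42)
print("detector on marked  :", round(detect(marked, cover, wm), 3))
print("detector on unmarked:", round(detect(cover, cover, wm), 3))
```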
Specifically, we propose watermarking schemes where visual models are used to determine image dependent upper bounds on watermark insertion. This allows us to provide the maximum strength transparent watermark which, in turn, is extremely robust to common image processing and editing such as JPEG compression, rescaling, and cropping. We propose perceptually based watermarking schemes in two frameworks: the block-based discrete cosine transform and multiresolution wavelet framework and discuss the merits of each one. Our schemes are shown to provide very good results both in terms of image transparency and robustness. --- paper_title: Information-Theoretic Analysis of Information Hiding paper_content: An information-theoretic analysis of information hiding is presented, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information, and for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel'fand-Pinsker (1980) theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set. It is shown that many existing information-hiding systems in the literature operate far below capacity. --- paper_title: Robust content-dependent high-fidelity watermark for tracking in digital cinema paper_content: Forensic digital watermarking is a promising tool in the fight against piracy of copyrighted motion imagery content, but to be effective it must be (1) imperceptibly embedded in high-definition motion picture source, (2) reliably retrieved, even from degraded copies as might result from camcorder capture and subsequent very-low-bitrate compression and distribution on the Internet, and (3) secure against unauthorized removal. No existing watermarking technology has yet to meet these three simultaneous requirements of fidelity, robustness, and security. We describe here a forensic watermarking approach that meets all three requirements. 
It is based on the inherent robustness and imperceptibility of very low spatiotemporal frequency watermark carriers, and on a watermark placement technique that renders jamming attacks too costly in picture quality, even if the attacker has complete knowledge of the embedding algorithm. The algorithm has been tested on HD Cinemascope source material exhibited in a digital cinema viewing room. The watermark is imperceptible, yet recoverable after exhibition capture with camcorders, and after the introduction of other distortions such as low-pass filtering, noise addition, geometric shifts, and the manipulation of brightness and contrast.© (2003) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only. --- paper_title: Secure spread spectrum watermarking for multimedia paper_content: This paper presents a secure (tamper-resistant) algorithm for watermarking images, and a methodology for digital watermarking that may be generalized to audio, video, and multimedia data. We advocate that a watermark should be constructed as an independent and identically distributed (i.i.d.) Gaussian random vector that is imperceptibly inserted in a spread-spectrum-like fashion into the perceptually most significant spectral components of the data. We argue that insertion of a watermark under this regime makes the watermark robust to signal processing operations (such as lossy compression, filtering, digital-analog and analog-digital conversion, requantization, etc.), and common geometric transformations (such as cropping, scaling, translation, and rotation) provided that the original image is available and that it can be successfully registered against the transformed watermarked image. In these cases, the watermark detector unambiguously identifies the owner. Further, the use of Gaussian noise, ensures strong resilience to multiple-document, or collusional, attacks. Experimental results are provided to support these claims, along with an exposition of pending open problems. --- paper_title: Quantization Index Modulation: A Class of Provably Good Methods for Digital Watermarking and Information Embedding paper_content: We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. 
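The QIM entry above embeds a bit by quantizing a host sample with one of two mutually shifted (dithered) quantizers and decodes by finding the nearer lattice. A minimal scalar dither-modulation sketch follows; the step size, noise level, and function names are illustrative assumptions.

```python
import numpy as np

def qim_embed(host, bits, delta=8.0):
    """Scalar dither modulation: bit 0 quantizes to the base lattice {k*delta},
    bit 1 to the shifted lattice {k*delta + delta/2}."""
    host = np.asarray(host, dtype=float)
    d = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return np.round((host - d) / delta) * delta + d

def qim_decode(received, delta=8.0):
    """Minimum-distance decoding: pick the bit whose lattice is closer."""
    received = np.asarray(received, dtype=float)
    dist0 = np.abs(received - np.round(received / delta) * delta)
    shifted = received - delta / 2.0
    dist1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (dist1 < dist0).astype(int)

rng = np.random.default_rng(4)
host = rng.normal(0, 50, 32)
bits = rng.integers(0, 2, 32)
marked = qim_embed(host, bits, delta=8.0)
noisy = marked + rng.normal(0, 0.5, 32)      # noise well below delta/4, so bits survive
print("bit errors:", int(np.sum(qim_decode(noisy, 8.0) != bits)))
print("max embedding distortion:", float(np.max(np.abs(marked - host))))
```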
These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications. --- paper_title: Rotation-, scale-, and translation-resilient public watermarking for images paper_content: Many electronic watermarks for still images and video content are sensitive to geometric distortions. For example, simple rotation, scaling, and/or translation (RST) of an image can prevent detection of a public watermark. In this paper, we propose a watermarking algorithm that is robust to RST distortions. The watermark is embedded into a 1-dimensional signal obtained by first taking the Fourier transform of the image, resampling the Fourier magnitudes into log-polar coordinates, and then summing a function of those magnitudes along the log-radius axis. If the image is rotated, the resulting signal is cyclically shifted. If it is scaled, the signal is multiplied by some value. And if the image is translated, the signal is unaffected. We can therefore compensate for rotation with a simple search, and for scaling by using the correlation coefficient for the detection metric. False positive results on a database of 10,000 images are reported. Robustness results on a database of 2,000 images are described. It is shown that the watermark is robust to rotation, scale and translation. In addition, the algorithm shows resistance to cropping. --- paper_title: Resistance of digital watermarks to collusive attacks paper_content: In digital watermarking (also called digital fingerprinting), extra information is embedded imperceptibly into digital content (such as an audio track, a still image, or a movie). This extra information can be read by authorized parties, and other users attempting to remove the watermark cannot do so without destroying the value of the content by making perceptible changes to the content. This provides a disincentive to copying by allowing copies to be traced to their original owner. Unlike cryptography, digital watermarking provides protection to content that is in the clear. It is not easy to design watermarks that are hard to erase, especially if an attacker has access to several differently marked copies of the same base content. Cox et al. (see IEEE Trans. on Image Processing, vol.6, no.12, p.1673-87, 1997) have proposed the use of additive normally distributed values as watermarks, and have sketched an argument showing that, in a certain theoretical model, such watermarks are resistant to collusive attacks. Here, we fill in the mathematical justification for this claim. --- paper_title: Performance Study of ECC-based Collusion-resistant Multimedia Fingerprinting paper_content: Digital fingerprinting is a tool to protect multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Fingerprinting based on error correction coding (ECC) handle the important issue of how to embed the fingerprint into host data in an abstract way known as the marking assumptions, which often do not fully account for multimedia specific issues. In this paper, we examine the performance of ECC based fingerprinting by considering both coding and embedding issues. We provide performance comparison of ECC-based scheme and a major alternative of orthogonal fingerprinting. 
As averaging is a feasible and cost-effective collusion attack against multimedia fingerprints yet is generally not considered in the ECC-based system, we also investigate the resistance against averaging collusion and identify avenues for improving collusion resistance. --- paper_title: Resistance of orthogonal Gaussian fingerprints to collusion attacks paper_content: Digital fingerprinting is a means to offer protection to digital data by which fingerprints embedded in the multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We propose a likelihood-based approach to estimate the number of colluders, and introduce the thresholding detector for colluder identification. We first analyze the collusion resistance of a system to the average attack by considering the probability of a false negative and the probability of a false positive when identifying colluders. Lower and upper bounds for the maximum number of colluders K_max are derived. We then show that the detectors are robust to different attacks. We further study different sets of performance criteria. --- paper_title: Joint coding and embedding techniques for MultimediaFingerprinting paper_content: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking every copy of the content distributed to each user. The collusion attack is a powerful attack where several different fingerprinted copies of the same content are combined together to attenuate or even remove the fingerprints. One major category of collusion-resistant fingerprinting employs an explicit step of coding. Most existing works on coded fingerprinting mainly focus on the code-level issues and treat the embedding issues through abstract assumptions without examining the overall performance. In this paper, we jointly consider the coding and embedding issues for coded fingerprinting systems and examine their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency. Our studies show that coded fingerprinting has efficient detection but rather low collusion resistance. Taking advantage of joint coding and embedding, we propose a permuted subsegment embedding technique and a group-based joint coding and embedding technique to improve the collusion resistance of coded fingerprinting while maintaining its efficient detection. Experimental results show that the number of colluders that the proposed methods can resist is more than three times as many as that of the conventional coded fingerprinting approaches. --- paper_title: Performance of Orthogonal Fingerprinting Codes Under Worst-Case Noise paper_content: We study the effect of the noise distribution on the error probability of the detection test when a class of randomly rotated spherical fingerprints is used. The detection test is performed by a focused correlation detector, and the spherical codes studied here form a randomized orthogonal constellation. The colluders create a noise-free forgery by uniform averaging of their individual copies, and then add a noise sequence to form the actual forgery. We derive the noise distribution that maximizes the error probability of the detector under average and almost-sure distortion constraints.
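Several entries above analyze orthogonal (independent Gaussian) fingerprints under the averaging collusion attack, in which K colluders average their copies so each fingerprint survives only at strength 1/K. The short simulation below illustrates that effect with a plain correlation detector and an arbitrary threshold; it is not the exact detector, threshold rule, or parameterization of any cited paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_users, n_samples, sigma_fp = 100, 10_000, 1.0
host = rng.normal(0, 10, n_samples)
fingerprints = rng.normal(0, sigma_fp, (n_users, n_samples))   # (near-)orthogonal marks
copies = host + fingerprints                                   # one marked copy per user

# averaging collusion by 5 colluders, plus mild additive noise
colluders = rng.choice(n_users, size=5, replace=False)
forgery = copies[colluders].mean(axis=0) + rng.normal(0, 0.5, n_samples)

# correlation detector: correlate (forgery - host) against every user's fingerprint
scores = fingerprints @ (forgery - host) / np.sqrt(n_samples)
threshold = 3.0 * sigma_fp                  # arbitrary illustration threshold
accused = np.flatnonzero(scores > threshold)
print("colluders:", sorted(colluders.tolist()))
print("accused  :", accused.tolist())
```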
Moreover, we characterize the noise distribution that minimizes the decoder's error exponent under a large-deviations distortion constraint. --- paper_title: Collusion-resistant fingerprinting for multimedia paper_content: Digital fingerprinting is a technology for enforcing digital rights policies whereby unique labels, known as digital fingerprints, are inserted into content prior to distribution. For multimedia content, fingerprints can be embedded using conventional watermarking techniques that are typically concerned with robustness against a variety of attacks mounted by an individual. These attacks, known as multiuser collusion attacks, provide a cost-effective method for attenuating each of the colluder's fingerprints and poses a real threat to protecting media data and enforcing usage policies. In this article, we review some major design methodologies for collusion-resistant fingerprinting of multimedia and highlight common and unique issues of different fingerprinting techniques. It also provides detailed discussions on the two major classes of fingerprinting strategies, namely, orthogonal fingerprinting and correlated fingerprinting. --- paper_title: Nonlinear collusion attacks on independent fingerprints for multimedia paper_content: Digital fingerprinting is a technology for tracing the distribution of multimedia content and protecting them from unauthorized redistribution. Collusion attack is a cost effective attack against digital fingerprinting where several copies with the same content but different fingerprints are combined to remove the original fingerprints. In this paper, we investigate average and nonlinear collusion attacks of independent Gaussian fingerprints and study both their effectiveness and the perceptual quality. We also propose the bounded Gaussian fingerprints to improve the perceptual quality of the fingerprinted copies. We further discuss the tradeoff between the robustness against collusion attacks and the perceptual quality of a fingerprinting system. --- paper_title: Anti-Collusion Fingerprinting for Multimedia paper_content: Digital fingerprinting is a technique for identifying users who use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost-effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are combined to disrupt the underlying fingerprints. We investigate the problem of designing fingerprints that can withstand collusion and allow for the identification of colluders. We begin by introducing the collusion problem for additive embedding. We then study the effect that averaging collusion has on orthogonal modulation. We introduce a tree-structured detection algorithm for identifying the fingerprints associated with K colluders that requires O(Klog(n/K)) correlations for a group of n users. We next develop a fingerprinting scheme based on code modulation that does not require as many basis signals as orthogonal modulation. We propose a new class of codes, called anti-collusion codes (ACCs), which have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we can therefore identify groups of K or fewer colluders. 
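The anti-collusion code (ACC) entry above relies on the property that the bitwise AND of any K or fewer user codevectors is unique, so the surviving bit pattern identifies the colluding subset. The toy example below demonstrates that property on a small hand-built code in the spirit of the cited construction and recovers a coalition from an AND-composed pattern by brute force; the specific code matrix and the brute-force search are illustrative simplifications.

```python
import numpy as np
from itertools import combinations

# a small binary code: rows are user codevectors; every pair shares exactly one 1-position
code = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])

def and_composition(rows):
    """Bits surviving collusion: logical AND of the coalition's codevectors."""
    out = np.ones(code.shape[1], dtype=int)
    for r in rows:
        out &= code[r]
    return out

def identify_colluders(pattern, max_k=2):
    """Return every coalition of size <= max_k whose AND equals the observed pattern."""
    hits = []
    for k in range(1, max_k + 1):
        for coalition in combinations(range(code.shape[0]), k):
            if np.array_equal(and_composition(coalition), pattern):
                hits.append(coalition)
    return hits

observed = and_composition([0, 2])          # users 0 and 2 collude
print("observed pattern   :", observed.tolist())
print("matching coalitions:", identify_colluders(observed))   # only (0, 2) matches
```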
We present a construction of binary-valued ACC under the logical AND operation that uses the theory of combinatorial designs and is suitable for both the on-off keying and antipodal form of binary code modulation. In order to accommodate n users, our code construction requires only O(√n) orthogonal signals for a given number of colluders. We introduce three different detection strategies that can be used with our ACC for identifying a suspect set of colluders. We demonstrate the performance of our ACC for fingerprinting multimedia and identifying colluders through experiments using Gaussian signals and real images. --- paper_title: Joint coding and embedding techniques for MultimediaFingerprinting paper_content: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking every copy of the content distributed to each user. The collusion attack is a powerful attack where several different fingerprinted copies of the same content are combined together to attenuate or even remove the fingerprints. One major category of collusion-resistant fingerprinting employs an explicit step of coding. Most existing works on coded fingerprinting mainly focus on the code-level issues and treat the embedding issues through abstract assumptions without examining the overall performance. In this paper, we jointly consider the coding and embedding issues for coded fingerprinting systems and examine their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency. Our studies show that coded fingerprinting has efficient detection but rather low collusion resistance. Taking advantage of joint coding and embedding, we propose a permuted subsegment embedding technique and a group-based joint coding and embedding technique to improve the collusion resistance of coded fingerprinting while maintaining its efficient detection. Experimental results show that the number of colluders that the proposed methods can resist is more than three times as many as that of the conventional coded fingerprinting approaches. --- paper_title: Collusion-Resistant Video Fingerprinting for Large User Group paper_content: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Most existing multimedia fingerprinting schemes consider user sets on the scale of thousands for evaluation. However, in real applications such as cable TV and DVD distribution, the potential users can be as many as 10 ~ 100 million. This large scale user set demands not only high collusion resistance but also high efficiency in the fingerprint construction and detection, which makes most existing schemes incapable of practical application. A recently proposed joint coding and embedding fingerprinting framework provides a promising balance between collusion resistance, efficient construction and detection. In this paper, we explore how to employ such a framework and develop practical algorithms to fingerprint video in such challenging practical settings as to accommodate more than ten million users and resisting hundreds of users' collusion. Both analysis and experimental results show the great potential of joint coding and embedding fingerprinting for real video fingerprinting applications. --- paper_title: Multiresolution scene-based video watermarking using perceptual models paper_content: We present a watermarking procedure to embed copyright protection into digital video.
Our watermarking procedure is scene-based and video dependent. It directly exploits spatial masking, frequency masking, and temporal properties to embed an invisible and robust watermark. The watermark consists of static and dynamic temporal components that are generated from a temporal wavelet transform of the video scenes. The resulting wavelet coefficient frames are modified by a perceptually shaped pseudorandom sequence representing the author. The noise-like watermark is statistically undetectable to thwart unauthorized removal. Furthermore, the author representation resolves the deadlock problem. The multiresolution watermark may be detected on single frames without knowledge of the location of the frames in the video scene. We demonstrate the robustness of the watermarking procedure to several video degradations and distortions. --- paper_title: Statistical invisibility for collusion-resistant digital video watermarking paper_content: We present a theoretical framework for the linear collusion analysis of watermarked digital video sequences, and derive a new theorem equating a definition of statistical invisibility, collusion-resistance, and two practical watermark design rules. The proposed framework is simple and intuitive; the basic processing unit is the video frame and we consider second-order statistical descriptions of their temporal inter-relationships. Within this analytical setup, we define the linear frame collusion attack, the analytic notion of a statistically invisible video watermark, and show that the latter is an effective counterattack against the former. Finally, to show how the theoretical results detailed in this paper can easily be applied to the construction of collusion-resistant video watermarks, we encapsulate the analysis into two practical video watermark design rules that play a key role in the subsequent development of a novel collusion-resistant video watermarking algorithm discussed in a companion paper. --- paper_title: Collusion-Secure Fingerprinting for Digital Data paper_content: This paper discusses methods for assigning code-words for the purpose of fingerprinting digital data, e.g., software, documents, music, and video. Fingerprinting consists of uniquely marking and registering each copy of the data. This marking allows a distributor to detect any unauthorized copy and trace it back to the user. This threat of detection will deter users from releasing unauthorized copies. A problem arises when users collude: for digital data, two different fingerprinted objects can be compared and the differences between them detected. Hence, a set of users can collude to detect the location of the fingerprint. They can then alter the fingerprint to mask their identities. We present a general fingerprinting solution which is secure in the context of collusion. In addition, we discuss methods for distributing fingerprinted data. --- paper_title: Collusion-resistant fingerprinting for multimedia paper_content: Digital fingerprinting is a technology for enforcing digital rights policies whereby unique labels, known as digital fingerprints, are inserted into content prior to distribution. For multimedia content, fingerprints can be embedded using conventional watermarking techniques that are typically concerned with robustness against a variety of attacks mounted by an individual. 
These attacks, known as multiuser collusion attacks, provide a cost-effective method for attenuating each of the colluder's fingerprints and poses a real threat to protecting media data and enforcing usage policies. In this article, we review some major design methodologies for collusion-resistant fingerprinting of multimedia and highlight common and unique issues of different fingerprinting techniques. It also provides detailed discussions on the two major classes of fingerprinting strategies, namely, orthogonal fingerprinting and correlated fingerprinting. --- paper_title: Optimal probabilistic fingerprint codes paper_content: We construct binary codes for fingerprinting digital documents. Our codes for n users that are ε-secure against c pirates have length O(c² log(n/ε)). This improves the codes proposed by Boneh and Shaw [1998] whose length is approximately the square of this length. The improvement carries over to works using the Boneh-Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa [2005]. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. This lower bound generalizes the bound found independently by Peikert et al. [2003] that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting. --- paper_title: Combining digital watermarks and collusion secure fingerprints for digital images paper_content: Digital watermarking is the enabling technology to prove ownership on copyrighted material, detect originators of illegally made copies, monitor the usage of the copyrighted multimedia data, and analyze the spread spectrum of the data over networks and servers. Embedding of unique customer identification as a watermark into data is called fingerprinting to identify illegal copies of documents. Basically, watermarks embedded into multimedia data for enforcing copyrights must uniquely identify the data and must be difficult to remove, even after various media transformation processes. Digital fingerprinting raises the additional problem that we produce different copies for each customer. Attackers can compare several fingerprinted copies to find and destroy the embedded identification string by altering the data in those places where a difference was detected. In our paper we present a technology for combining a collusion-secure fingerprinting scheme based on finite geometries and a watermarking mechanism with special marking points for digital images. The only marking positions the pirates cannot detect are those positions which contain the same letter in all the compared documents, called intersection of different fingerprints. The proposed technology, for a maximal number d of pirates, puts enough information in the intersection of up to d fingerprints to uniquely identify all the pirates. --- paper_title: Combinatorial Properties of Frameproof and Traceability Codes paper_content: In order to protect copyrighted material, codes may be embedded in the content or codes may be associated with the keys used to recover the content. Codes can offer protection by providing some form of traceability (TA) for pirated data. Several researchers have studied different notions of TA and related concepts in previous years.
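The probabilistic fingerprint code entry above draws a bias p_i for every code position, gives each user the bit 1 with probability p_i, and accuses users whose bias-weighted agreement with the pirated word is unusually high. The sketch below follows that general recipe with a symmetric score and an interleaving collusion attack; the cutoff, code length, and threshold are illustrative assumptions, not the provable parameters of the cited construction.

```python
import numpy as np

rng = np.random.default_rng(6)
n_users, code_len, c = 50, 3000, 3              # c = assumed coalition size

# per-position biases p_i from an arcsine-shaped density, kept away from 0 and 1
delta = 0.05                                    # illustrative cutoff
r = rng.uniform(delta, np.pi / 2 - delta, code_len)
p = np.sin(r) ** 2

# codewords: user j receives bit 1 in position i with probability p_i
X = (rng.random((n_users, code_len)) < p).astype(int)

# interleaving collusion: for each position, output the bit of a random colluder
colluders = rng.choice(n_users, size=c, replace=False)
pick = rng.integers(0, c, code_len)
y = X[colluders][pick, np.arange(code_len)]

# symmetric Tardos-style accusation score
g1 = np.sqrt((1.0 - p) / p)                     # reward for holding a 1
g0 = -np.sqrt(p / (1.0 - p))                    # penalty for holding a 0
U = np.where(X == 1, g1, g0)
scores = U @ np.where(y == 1, 1.0, -1.0)
threshold = 4.0 * np.sqrt(code_len)             # illustrative threshold
print("colluders:", sorted(colluders.tolist()))
print("accused  :", np.flatnonzero(scores > threshold).tolist())
```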
"Strong" versions of TA allow at least one member of a coalition that constructs a "pirate decoder" to be traced. Weaker versions of this concept ensure that no coalition can "frame" a disjoint user or group of users. All these concepts can be formulated as codes having certain combinatorial properties. We study the relationships between the various notions, and we discuss equivalent formulations using structures such as perfect hash families. We use methods from combinatorics and coding theory to provide bounds (necessary conditions) and constructions (sufficient conditions) for the objects of interest. --- paper_title: Performance Study of ECC-based Collusion-resistant Multimedia Fingerprinting paper_content: Digital fingerprinting is a tool to protect multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Fingerprinting based on error correction coding (ECC) handle the important issue of how to embed the fingerprint into host data in an abstract way known as the marking assumptions, which often do not fully account for multimedia specific issues. In this paper, we examine the performance of ECC based fingerprinting by considering both coding and embedding issues. We provide performance comparison of ECC-based scheme and a major alternative of orthogonal fingerprinting. As averaging is a feasible and cost-effective collusion attack against multimedia fingerprints yet is generally not considered in the ECC-based system, we also investigate the resistance against averaging collusion and identify avenues for improving collusion resistance. --- paper_title: Efficient Watermark Detection and Collusion Security paper_content: Watermarking techniques allow the tracing of pirated copies of data by modifying each copy as it is distributed, embedding hidden information into the data which identifies the owner of that copy The owner of the original data can then identify the source of a pirated copy by reading out the hidden information present in that copy. Naturally, one would like these schemes to be as efficient as possible. Previous analyses measured efficiency in terms of the amount of data needed to allow many different copies to be distributed; in order to hide enough data to distinguish many users, the total original data must be sufficiently large Here, we consider a different notion of efficiency: What resources does the watermark detector need in order to perform this tracing?We address this question in two ways. First, we present a modified version of the CKLS media watermarking algorithm which improves the detector running time from linear to polylogarithmic in the number of users while still maintaining collusion-security. Second, we show that any public, invertible watermarking scheme secure against c colluding adversaries must have at least ?(c) bits of secret information. --- paper_title: Error Control Systems for Digital Communication and Storage paper_content: 1. Error Control Coding for Digital Communication Systems. 2. Galois Fields. 3. Polynomials over Galois Fields. 4. Linear Block Codes. 5. Cyclic Codes. 6. Hadamard, Quadratic Residue, and Golay Codes. 7. Reed-Muller Codes 8. BCH and Reed-Solomon Codes. 9. Decoding BCH and Reed-Solomon Codes. 10. The Analysis of the Performance of Block Codes. 11. Convolutional Codes. 12. The Viterbi Decoding Algorithm. 13. The Sequential Decoding Algorithms. 14. Trellis Coded Modulation. 15. Error Control for Channels with Feedback. 16. Applications. Appendices: A. 
Binary Primitive Polynomials. B. Add-on Tables and Vector Space Representations for GF(8) Through GF(1024). C. Cyclotomic Cosets Modulo 2^m - 1. D. Minimal Polynomials for Elements in GF(2^m). E. Generator Polynomials of Binary BCH Codes of Lengths Through 511. Bibliography. --- paper_title: Joint coding and embedding techniques for MultimediaFingerprinting paper_content: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking every copy of the content distributed to each user. The collusion attack is a powerful attack where several different fingerprinted copies of the same content are combined together to attenuate or even remove the fingerprints. One major category of collusion-resistant fingerprinting employs an explicit step of coding. Most existing works on coded fingerprinting mainly focus on the code-level issues and treat the embedding issues through abstract assumptions without examining the overall performance. In this paper, we jointly consider the coding and embedding issues for coded fingerprinting systems and examine their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency. Our studies show that coded fingerprinting has efficient detection but rather low collusion resistance. Taking advantage of joint coding and embedding, we propose a permuted subsegment embedding technique and a group-based joint coding and embedding technique to improve the collusion resistance of coded fingerprinting while maintaining its efficient detection. Experimental results show that the number of colluders that the proposed methods can resist is more than three times as many as that of the conventional coded fingerprinting approaches. --- paper_title: Collusion Secure q-ary Fingerprinting for Perceptual Content paper_content: We propose a q-ary fingerprinting system for stored digital objects such as images, videos and audio clips. A fingerprint is a q-ary sequence. The object is divided into blocks and each symbol of the fingerprint is embedded into one block. Colluders construct a pirate object by assembling parts from their copies. They can also erase some of the marks or cut out part of the object resulting in a shortened fingerprint with some unreadable marks. We give constructions of codes that can identify one of the colluders once a pirate object is found. --- paper_title: Collusion-Resistant Video Fingerprinting for Large User Group paper_content: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Most existing multimedia fingerprinting schemes consider user sets on the scale of thousands for evaluation. However, in real applications such as cable TV and DVD distribution, the potential users can be as many as 10 ~ 100 million. This large scale user set demands not only high collusion resistance but also high efficiency in the fingerprint construction and detection, which makes most existing schemes incapable of practical application. A recently proposed joint coding and embedding fingerprinting framework provides a promising balance between collusion resistance, efficient construction and detection. In this paper, we explore how to employ such a framework and develop practical algorithms to fingerprint video in such challenging practical settings as to accommodate more than ten million users and resisting hundreds of users' collusion.
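The coded-fingerprinting entries above assign each user a codeword over a small alphabet and embed one symbol per media segment with a spread-spectrum carrier; detection correlates each segment against the candidate carriers and then matches the decoded symbol sequence to the user codebook. The sketch below uses random codewords instead of a Reed-Solomon code and a known-host correlation detector, so it only illustrates the coding/embedding split, not the error-correction layer or the permuted-subsegment refinement described above; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
q, n_segments, seg_len, n_users = 8, 16, 512, 20

# one unit-norm spreading carrier per alphabet symbol, reused in every segment
carriers = rng.standard_normal((q, seg_len))
carriers /= np.linalg.norm(carriers, axis=1, keepdims=True)

# random q-ary codewords stand in for an ECC code (e.g., Reed-Solomon)
codewords = rng.integers(0, q, (n_users, n_segments))
host = rng.normal(0, 5, (n_segments, seg_len))

def embed(user, strength=1.0):
    """Add the carrier of the user's symbol to each host segment."""
    return host + strength * carriers[codewords[user]]

def detect(copy):
    """Per segment, decode the symbol whose carrier correlates best, then pick
    the user whose codeword agrees with the decoded sequence in most segments."""
    corr = (copy - host) @ carriers.T                 # shape (n_segments, q)
    decoded = corr.argmax(axis=1)
    matches = (codewords == decoded).sum(axis=1)
    return decoded, int(matches.argmax())

suspect = 13
decoded, accused = detect(embed(suspect) + rng.normal(0, 0.2, host.shape))
print("segment agreement with true user:",
      int((decoded == codewords[suspect]).sum()), "of", n_segments)
print("accused user:", accused, "(true user:", suspect, ")")
```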
Both analysis and experimental results show the great potential of joint coding and embedding fingerprinting for real video fingerprinting applications. --- paper_title: Digital fingerprinting codes: Problem statements, constructions, identification of traitors paper_content: We consider a general fingerprinting problem of digital data under which coalitions of users can alter or erase some bits in their copies in order to create an illegal copy. Each user is assigned a fingerprint which is a word in a fingerprinting code of size M (the total number of users) and length n. We present binary fingerprinting codes secure against size-t coalitions which enable the distributor (decoder) to recover at least one of the users from the coalition with probability of error exp(-Ω(n)) for M = exp(Ω(n)). This is an improvement over the best known schemes that provide the error probability no better than exp(-Ω(√n)) and for this probability support at most exp(O(√n)) users. The construction complexity of codes is polynomial in n. We also present versions of these constructions that afford identification algorithms of complexity poly(n) = polylog(M), improving over the best previously known complexity of Ω(M). For the case t=2, we construct codes of exponential size with even stronger performance, namely, for which the distributor can either recover both users from the coalition with probability 1 - exp(-Ω(n)), or identify one traitor with probability 1. --- paper_title: Behavior forensics for scalable multiuser collusion: fairness versus effectiveness paper_content: Multimedia security systems involve many users with different objectives and users influence each other's performance. To have a better understanding of multimedia security systems and offer stronger protection of multimedia, behavior forensics formulate the dynamics among users and investigate how they interact with and respond to each other. This paper analyzes the behavior forensics in multimedia fingerprinting and formulates the dynamics among attackers during multi-user collusion. In particular, this paper focuses on how colluders achieve the fair play of collusion and guarantee that all attackers share the same risk (i.e., the probability of being detected). We first analyze how to distribute the risk evenly among colluders when they receive fingerprinted copies of scalable resolutions due to network and device heterogeneity. We show that generating a colluded copy of higher resolution puts more severe constraints on achieving fairness. We then analyze the effectiveness of fair collusion. Our results indicate that the attackers take a larger risk of being captured when the colluded copy has higher resolution, and they have to take this tradeoff into consideration during collusion. Finally, we analyze the collusion resistance of the scalable fingerprinting systems in various scenarios with different system requirements, and evaluate the maximum number of colluders that the fingerprinting systems can withstand. --- paper_title: Game-theoretic strategies and equilibriums in multimedia fingerprinting social networks paper_content: A multimedia social network is a network infrastructure in which the social network users share multimedia contents with all different purposes. Analyzing user behavior in multimedia social networks helps design more secure and efficient multimedia and networking systems.
Multimedia fingerprinting protects multimedia from illegal alterations, and multiuser collusion is a cost-effective attack. The colluder social network is naturally formed during multiuser collusion, through which colluders gain reward by redistributing the colluded multimedia contents. Since the colluders have conflicting interests, the maximal-payoff collusion for one colluder may not be the maximal-payoff collusion for others. Hence, before a collusion can be successful, the colluders must bargain with each other to reach agreements. We first model the bargaining behavior among colluders as a noncooperative game and study four different bargaining solutions of this game. Moreover, the market value of the redistributed multimedia content is often time-sensitive. The earlier the colluded copy is released, the more people are willing to pay for it. Thus, the colluders have to reach agreements on how to distribute reward and risk among themselves as soon as possible. This paper further incorporates this time-sensitiveness of the colluders' reward and studies the time-sensitive bargaining equilibrium. The study in this paper reveals the strategies that are optimal for the colluders; thus, all the colluders have no incentive to disagree. Such understanding reduces the possible types of collusion to a small finite set. --- paper_title: Multi-User Collusion Behavior Forensics: Game Theoretic Formulation of Fairness Dynamics paper_content: Multi-user collusion is a cost-effective attack against digital fingerprinting, in which a group of attackers collectively undermine the traitor tracing capability of digital fingerprints. However, during multi-user collusion, each colluder wishes to minimize his/her own risk and maximize his/her own profit, and different colluders have different objectives. Thus, an important issue during collusion is to agree on how to distribute the risk/profit among colluders and ensure fairness of the attack. To have a better understanding of the attackers' behavior during collusion to achieve fairness, this paper models the dynamics among colluders as a non-cooperative game. We then study the Pareto-optimal set, where no colluder can further increase his/her own payoff without decreasing others', and analyze the Nash bargaining solution of this game. --- paper_title: Steganography in Digital Media: Principles, Algorithms, and Applications paper_content: Steganography, the art of hiding information in apparently innocuous objects or images, is a field with a rich heritage, and an area of rapid current development. This clear, self-contained guide shows you how to understand the building blocks of covert communication in digital media files and how to apply the techniques in practice, including those of steganalysis, the detection of steganography. Assuming only a basic knowledge in calculus and statistics, the book blends the various strands of steganography, including information theory, coding, signal estimation and detection, and statistical signal processing. Experiments on real media files demonstrate the performance of the techniques in real life, and most techniques are supplied with pseudo-code, making it easy to implement the algorithms. The book is ideal for students taking courses on steganography and information hiding, and is also a useful reference for engineers and practitioners working in media security and information assurance.
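As a companion to the steganography overview above, the following is a minimal ±1 (LSB matching) embedding and extraction sketch, one of the elementary building blocks such texts discuss before moving on to content-adaptive and coding-based schemes; the grayscale cover, payload size, and function names are illustrative assumptions.

```python
import numpy as np

def embed_lsb_matching(cover, bits, rng):
    """±1 embedding: if a pixel's LSB already equals the message bit, leave it;
    otherwise randomly add or subtract 1 (less detectable than LSB replacement)."""
    stego = cover.astype(np.int16).ravel().copy()
    for k, bit in enumerate(bits):
        if stego[k] % 2 != bit:
            step = rng.choice([-1, 1])
            if stego[k] == 0:
                step = 1                      # stay inside the 8-bit range
            elif stego[k] == 255:
                step = -1
            stego[k] += step
    return stego.reshape(cover.shape).astype(np.uint8)

def extract_lsb(stego, n_bits):
    """The message is simply the LSBs of the first n_bits pixels."""
    return (stego.ravel()[:n_bits] % 2).astype(int)

rng = np.random.default_rng(8)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
message = rng.integers(0, 2, 200)
stego = embed_lsb_matching(cover, message, rng)
print("message recovered:", bool(np.array_equal(extract_lsb(stego, 200), message)))
print("pixels changed   :", int(np.count_nonzero(stego != cover)))
```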
--- paper_title: Behavior modeling and forensics for multimedia social networks paper_content: Factors influencing human behavior have seldom appeared in signal processing disciplines. Therefore, the goals of the article are to illustrate why human factors are important, identify emerging issues strongly related to signal processing, and to demonstrate that signal processing can be effectively used to model, analyze, and perform behavior forensics for multimedia social networks. Since media security and content protection is a major issue, the article illustrates various aspects of issues and problems in multimedia social networks via a case study of human behavior in traitor-tracing multimedia fingerprinting. We focus on the understanding of behavior forensics from signal processing perspective and present a framework to model and analyze user dynamics. The objective is to provide a broad overview of recent advances in behavior modeling and forensics for multimedia social networks. --- paper_title: Behavior forensics with side information for multimedia fingerprinting social networks paper_content: In multimedia social networks, there exists complicated dynamics among users who share and exchange multimedia content. Using multimedia fingerprinting as an example, this paper investigates the human behavior dynamics in the multimedia social networks with side information. Side information is the information other than the colluded multimedia content that can help increase the probability of detection. We study the impact of side information in multimedia fingerprinting and show that the statistical means of the detection statistics can help the fingerprint detector significantly improve the collusion resistance. We then investigate how to probe the side information and model the dynamics between the fingerprint detector and the colluders as a two-stage extensive game with perfect information. We model the colluder-detector behavior dynamics as a two-stage game and find the equilibrium of the colluder-detector game using backward induction and show that the min-max solution is a Nash equilibrium, which gives no incentive for everyone in the multimedia fingerprint social network to deviate. This paper demonstrates that the proposed side information can significantly help improve the system performance to almost the same as the optimal correlation-based detector. Such result opens up a new scope in the research of fingerprinting system that given any fingerprint code, leveraging side information can improve the collusion resistance. Also, we provide the solutions to how to reach optimal collusion strategy and the corresponding detection, thus lead to a better protection of the multimedia content. --- paper_title: Impact of Social Network Structure on Multimedia Fingerprinting Misbehavior Detection and Identification paper_content: Users in video-sharing social networks actively interact with each other, and it is of critical importance to model user behavior and analyze the impact of human factors on video sharing systems. In video-sharing social networks, users have access to extra resources from their peers, and they also contribute their own resources to help others. Each user wants to maximize his/her own payoff, and they negotiate with each other to achieve fairness and address this conflict. 
However, some selfish users may cheat to their peers and manipulate the system to maximize their own payoffs, and cheat prevention is a critical requirement in many social networks to stimulate user cooperation. It is of ample importance to design monitoring mechanisms to detect and identify misbehaving users, and to design cheat-proof cooperation stimulation strategies. Using video fingerprinting as an example, this paper analyzes the complex dynamics among colluders during multiuser collusion, and explores possible monitoring mechanisms to detect and identify misbehaving colluders in multiuser collusion. We consider two types of colluder networks: one has a centralized structure with a trusted ringleader, and the other is a distributed peer-structured network. We investigate the impact of network structures on misbehavior detection and identification, propose different selfish colluder identification schemes for different colluder networks, and analyze their performance. We show that the proposed schemes can accurately identify selfish colluders without falsely accusing others even under attacks. We also evaluate their robustness against framing attacks and quantify the maximum number of framing colluders that they can resist. --- paper_title: Traitor-Within-Traitor Behavior Forensics: Strategy and Risk Minimization paper_content: Multimedia security systems have many users with different objectives and they influence each other's performance and decisions. Behavior forensics analyzes how users with conflicting interests interact with and respond to each other. Such investigation enables a thorough understanding of multimedia security systems and helps the digital rights enforcer offer stronger protection of multimedia. This paper analyzes the dynamics among attackers during multiuser collusion. The colluders share not only the profit from the redistribution of multimedia but also the risk of being detected by the content owner, and an important issue in collusion is fairness of the attack (i.e., whether all attackers share the same risk) (e.g., whether they have the same probability of being detected). While they might agree so, some selfish colluders may break their fair-play agreement in order to further lower their risk. This paper investigates the problem of "traitors within traitors" in multimedia forensics, in an effort to formulate the dynamics among attackers and understand their behavior to minimize their own risk and protect their own interests. As the first work on the analysis of this colluder dynamics, this paper explores some possible strategies that a selfish colluder can use to minimize his or her probability of being caught. We show that processing his or her fingerprinted copy before multiuser collusion helps a selfish colluder further lower his or her risk, especially when the colluded copy has high resolution and good quality. This paper also investigates the optimal precollusion processing strategies for selfish colluders to minimize their risk under the quality constraints --- paper_title: Forensic hash for multimedia information paper_content: Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information.
In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information. --- paper_title: Multimedia forensic hash based on visual words paper_content: In recent years, digital images and videos have become increasingly popular over the internet and bring great social impact to a wide audience. In the meanwhile, technology advancement allows people to easily alter the content of digital multimedia and brings serious concern on the trustworthiness of online multimedia information. Forensic hash is a short signature attached to an image before transmission and acts as side information for analyzing the processing history and trustworthiness of the received image. In this paper, we propose a new construction of forensic hash based on visual words representation. We encode SIFT features into a compact visual words representation for robust estimation of geometric transformations and propose a hybrid construction using both SIFT and block-based features to detect and localize image tampering. The proposed hash construction achieves more robust and accurate forensic analysis than prior work. --- paper_title: Seam carving estimation using forensic hash paper_content: Seam carving is an adaptive multimedia retargeting technique to resize multimedia data for different display sizes. This technique has found promising applications in media consumption on mobile devices such as tablets and smartphones. However, seam carving can also be used to maliciously alter image content and when combined with other tampering operations, makes tampering detection very difficult by traditional multimedia forensic techniques. In this paper, we study the problem of seam carving estimation and tampering localization using very compact side information called forensic hash. The forensic hash technique bridges two related areas, namely robust image hashing and blind multimedia forensics, to answer a broader scope of forensic questions in a more efficient and accurate manner. We show that our recently proposed forensic hash construction can be extended to accurately estimate seam carving and detect local tampering. --- paper_title: Protection against reverse engineering in digital cameras paper_content: Over the past decade, a number of digital forensic techniques have been developed to authenticate digital signals. One important set of forensic techniques operates by estimating signal processing components of a digital camera's signal processing pipeline, then using these estimates to perform forensic tasks such as camera identification or forgery detection. However, because these techniques are capable of estimating a camera's internal signal processing components, these forensic techniques can be used for reverse engineering.
In this paper, we propose integrating an anti-forensic module into a digital camera's processing pipeline to protect against forensic reverse engineering. Our proposed technique operates by removing linear dependencies amongst an output images interpolated color values and by disrupting the color sampling grid. Experimental results show that our proposed technique can be effectively used to protect against the forensic reverse engineering of key components of a digital camera's processing pipeline. --- paper_title: A Component Estimation Framework for Information Forensics paper_content: With a rapid growth of imaging technologies and an increasingly widespread usage of digital images and videos for a large number of high security and forensic applications, there is a strong need for techniques to verify the source and integrity of digital data. Component forensics is new approach for forensic analysis that aims to estimate the algorithms and parameters in each component of the digital device. In this paper, we develop a novel theoretical foundation to understand the fundamental performance limits of component forensics. We define formal notions of identifiability of components in the information processing chain, and present methods to quantify the accuracies at which the component parameters can be estimated. Building upon the proposed theoretical framework, we devise methods to improve the accuracies of component parameter estimation for a wide range of forensic applications. --- paper_title: A pattern classification framework for theoretical analysis of component forensics paper_content: Component forensics is an emerging methodology for forensic analysis that aims at estimating the algorithms and parameters in each component of a digital device. This paper proposes a theoretical foundation to examine the performance limits of component forensics. Using ideas from pattern classification theory, we define formal notions of identifiability of components in the information processing chain. We show that the parameters of certain device components can be accurately identified only in controlled settings through semi non-intrusive forensics, while the parameters of some others can be computed directly from the available sample data via complete non-intrusive analysis. We then extend the proposed theoretical framework to quantify and improve the accuracies and confidence in component parameter identification for several forensic applications. --- paper_title: Robustness of color interpolation identification against anti-forensic operations paper_content: Color interpolation identification using digital images has been shown to be a powerful tool for addressing a range of digital forensic questions. However, due to the existence of adversaries who have the incentive to counter the identification, it is necessary to understand how color interpolation identification performs against anti-forensic operations that intentionally manipulate identification results. This paper proposes two anti-forensic techniques against which the robustness of color interpolation identification is investigated. The first technique employs parameter perturbation to circumvent identification. Various options that achieve different trade-offs between image quality and identification manipulation are examined. The second technique involves algorithm mixing and demonstrates that one can not only circumvent but also mislead the identification system while preserving the image quality. 
Additional discussions are also provided to enhance the understanding of anti-forensics and its implications to the design of identification systems. ---
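Several of the component-forensics and anti-forensics abstracts above hinge on the fact that in-camera color interpolation leaves linear dependencies among neighboring pixels that can be estimated from the output image. The sketch below shows a basic least-squares version of that estimation for the green channel of a Bayer pattern; it is a simplified stand-in for the richer models and classifiers used in the cited work, and the function name and the purely linear, four-neighbor model are assumptions made for illustration.

```python
import numpy as np

def estimate_green_interp_weights(green, cfa_green_mask):
    """Least-squares estimate of the (hypothetical) linear weights used by a
    camera to interpolate the missing green samples of a Bayer pattern from
    their four captured neighbours.

    green          : 2-D float array, green channel of the demosaicked image
    cfa_green_mask : 2-D bool array, True where green was actually captured
    """
    rows, cols = green.shape
    neighbours, targets = [], []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if cfa_green_mask[r, c]:
                continue  # captured sample, not interpolated
            # up, down, left, right neighbours are captured on a Bayer grid
            neighbours.append([green[r - 1, c], green[r + 1, c],
                               green[r, c - 1], green[r, c + 1]])
            targets.append(green[r, c])
    A = np.asarray(neighbours)
    b = np.asarray(targets)
    weights, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights, residual
```

An anti-forensic module of the kind described above would aim to perturb precisely these estimated weights and the residual pattern they leave behind.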
Title: Information Forensics: An Overview of the First Decade
Section 1: INTRODUCTION
Description 1: Provide an overview of the development of information forensics and highlight the need for forensic technologies to ensure the authenticity and trustworthiness of digital content.
Section 2: DETECTION OF TAMPERING AND PROCESSING OPERATIONS
Description 2: Discuss various forensic techniques used to identify manipulations and processing operations within multimedia content.
Section 3: STATISTICAL CLASSIFIERS
Description 3: Explain how statistical classifiers use machine learning to detect changes in multimedia files and identify manipulations.
Section 4: DEVICE FINGERPRINTS
Description 4: Elaborate on how intrinsic fingerprints left by acquisition devices can be used to authenticate multimedia files.
Section 5: MANIPULATION FINGERPRINTS
Description 5: Describe specific fingerprints such as from copy-move forgeries, resampling, contrast enhancement, and median filtering that can be detected in manipulated multimedia content.
Section 6: COMPRESSION AND CODING FINGERPRINTS
Description 6: Detail how compression techniques like JPEG can leave forensic fingerprints that can be used for various forensic tasks.
Section 7: PHYSICAL INCONSISTENCIES
Description 7: Discuss techniques that identify inconsistencies in multimedia content using physics-based models rather than fingerprints.
Section 8: DEVICE FORENSICS
Description 8: Highlight forensic techniques to trace multimedia content back to the acquisition devices and differentiate between individual device units.
Section 9: ANTI-FORENSICS AND COUNTERMEASURES
Description 9: Explore anti-forensic techniques that can disguise manipulations, and discuss countermeasures developed to detect anti-forensic activities.
Section 10: EMERGING FORENSIC USE OF ENVIRONMENTAL SIGNATURES
Description 10: Introduce environmental fingerprints such as Electric Network Frequency (ENF) and their applications in determining the time and location of recordings.
Section 11: EMBEDDED FINGERPRINTING AND FORENSIC WATERMARKING
Description 11: Review techniques involving embedded fingerprints and forensic watermarks used for tracing and anti-collusion to protect multimedia content.
Section 12: BEHAVIOR/HUMAN/SOCIAL FACTORS AND DYNAMICS IN FORENSICS
Description 12: Discuss the social and behavioral aspects of forensic analysis and the adversarial dynamics between users, attackers, and investigators.
Section 13: FINAL THOUGHTS AND FUTURE DIRECTIONS
Description 13: Provide concluding thoughts on the current state of information forensics and discuss potential future directions and theoretical foundations for advancing the field.
Scene Representation Technologies for 3DTV—A Survey
9
--- paper_title: Layered depth images paper_content: In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan’s warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alphacompositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem. --- paper_title: Interactive 3-D Video Representation and Coding Technologies paper_content: Interactivity in the sense of being able to explore and navigate audio-visual scenes by freely choosing viewpoint and viewing direction, is an important key feature of new and emerging audio-visual media. This paper gives an overview of suitable technology for such applications, with a focus on international standards, which are beneficial for consumers, service providers, and manufacturers. We first give a general classification and overview of interactive scene representation formats as commonly used in computer graphics literature. Then, we describe popular standard formats for interactive three-dimensional (3-D) scene representation and creation of virtual environments, the virtual reality modeling language (VRML), and the MPEG-4 BInary Format for Scenes (BIFS) with some examples. Recent extensions to MPEG-4 BIFS, the Animation Framework eXtension (AFX), providing advanced computer graphics tools, are explained and illustrated. New technologies mainly targeted at reconstruction, modeling, and representation of dynamic real world scenes are further studied. The user shall be able to navigate photorealistic scenes within certain restrictions, which can be roughly defined as 3-D video. Omnidirectional video is an extension of the planar two-dimensional (2-D) image plane to a spherical or cylindrical image plane. Any 2-D view in any direction can be rendered from this overall recording to give the user the impression of looking around. In interactive stereo two views, one for each eye, are synthesized to provide the user with an adequate depth cue of the observed scene. Head motion parallax viewing can be supported in a certain operating range if sufficient depth or disparity data are delivered with the video data. In free viewpoint video, a dynamic scene is captured by a number of cameras. The input data are transformed into a special data representation that enables interactive navigation through the dynamic scene environment. --- paper_title: Depth image-based representations for static and animated 3D objects paper_content: We describe a novel depth image-based representation (DIBR) that has been adopted into the MPEG-4 animation framework extension (AFX). The idea of this approach is to build a compact representation of a 3D object or scene without storing the geometry information in traditional polygonal form. 
The main formats of the DIBR family are simple texture (an image together with depth array), point texture (a view of a scene from a single input camera but with multiple pixels along each line of sight), and octree image (octree data structure together with a set of images and their viewport parameters). The designed node specifications and rendering algorithms are addressed. The experimental results show the efficacy and fidelity of the proposed approach. --- paper_title: An Evolutionary and Optimised Approach on 3D-TV paper_content: In this paper we will present the concept of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project “Advanced Three-Dimensional Television System Technologies” (ATTEST), an activity where industries, research centers and universities have joined forces to design a backwardscompatible, flexible and modular broadcast 3D-TV system, where all parts of the 3D processing chain are optimised to one another. This includes content creation, coding and transmission, display and research in human 3D perception, which be will used to guide the development process. The goals of the project comprise the development of a novel broadcast 3D camera, algorithms to convert existing 2D-video material into 3D, a 2Dcompatible coding and transmission scheme for 3D-video using MPEG2/4/7 technologies and the design of two new autostereoscopic displays. --- paper_title: High-quality video view interpolation using a layered representation paper_content: The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation.In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates. --- paper_title: Estimation of Depth Fields Suitable for Video Compression Based on 3-D Structure and Motion of Objects paper_content: Intensity prediction along motion trajectories removes temporal redundancy considerably in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction. The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences-two-dimensional (2-D) motion field-between the frames and segmentation of the scene into objects are achieved simultaneously by minimizing a Gibbs energy. The depth field is estimated by jointly minimizing a defined distortion and bit-rate criterion using the 3-D motion parameters. The resulting depth field is efficient in the rate-distortion sense. 
Bit-rate values corresponding to the lossless encoding of the resultant depth fields are obtained using predictive coding; prediction errors are encoded by a Lempel-Ziv algorithm. The results are satisfactory for real-life video scenes. --- paper_title: Hierarchical stochastic diffusion for disparity estimation paper_content: This paper proposes a stochastic approach to estimate the disparity field combined with line field. In the maximum a posteriori (MAP) method based on Markov random field (MRF) model, it is important to optimize and converge the Gibbs potential function corresponding to the perturbed disparity field. The proposed optimization method, stochastic diffusion, takes advantage of the probabilistic distribution of the neighborhood fields, and diffuses the Gibbs potential space to be stable iteratively. By using the neighborhood distribution in the non-random and non-deterministic diffusion, the stochastic diffusion improves both the estimation accuracy and the convergence speed. In the paper, the hierarchical stochastic diffusion is also applied to the disparity field. The hierarchical approach reduces the memory and computational load, and increases the convergence of the potential space. The line field is the discontinuity model of the disparity field. The paper also proposes an effective configuration of the neighborhood to be suitable for the hierarchical disparity structure. According to the experiments, the stochastic diffusion shows good estimation performance. The line field improves the estimation at the object boundary, and the estimated line field coincides with the object boundary with the useful contours. Furthermore, the stochastic diffusion with line field embeds the occlusion detection and compensation. And, the stochastic diffusion converges the estimated fields very fast in the hierarchical scheme. The stochastic diffusion is applicable to any kind of field estimation given the appropriate definition of the field and MRF models. --- paper_title: Interactive 3-D Video Representation and Coding Technologies paper_content: Interactivity in the sense of being able to explore and navigate audio-visual scenes by freely choosing viewpoint and viewing direction, is an important key feature of new and emerging audio-visual media. This paper gives an overview of suitable technology for such applications, with a focus on international standards, which are beneficial for consumers, service providers, and manufacturers. We first give a general classification and overview of interactive scene representation formats as commonly used in computer graphics literature. Then, we describe popular standard formats for interactive three-dimensional (3-D) scene representation and creation of virtual environments, the virtual reality modeling language (VRML), and the MPEG-4 BInary Format for Scenes (BIFS) with some examples. Recent extensions to MPEG-4 BIFS, the Animation Framework eXtension (AFX), providing advanced computer graphics tools, are explained and illustrated. New technologies mainly targeted at reconstruction, modeling, and representation of dynamic real world scenes are further studied. The user shall be able to navigate photorealistic scenes within certain restrictions, which can be roughly defined as 3-D video. Omnidirectional video is an extension of the planar two-dimensional (2-D) image plane to a spherical or cylindrical image plane. Any 2-D view in any direction can be rendered from this overall recording to give the user the impression of looking around. 
In interactive stereo two views, one for each eye, are synthesized to provide the user with an adequate depth cue of the observed scene. Head motion parallax viewing can be supported in a certain operating range if sufficient depth or disparity data are delivered with the video data. In free viewpoint video, a dynamic scene is captured by a number of cameras. The input data are transformed into a special data representation that enables interactive navigation through the dynamic scene environment. --- paper_title: Dense matching of multiple wide-baseline views paper_content: This paper describes a PDE-based method for dense depth extraction from multiple wide-baseline images. Emphasis lies on the usage of only a small amount of images. The integration of these multiple wide-baseline views is guided by the relative confidence that the system has in the matching to different views. This weighting is fine-grained in that it is determined for every pixel at every iteration. Reliable information spreads fast at the expense of less reliable data, both in terms of spatial communications within a view and in terms of information exchange between the views. Changes in intensity between images can be handled in a similar fine grained fashion. --- paper_title: Dense Disparity Map Estimation Respecting Image Discontinuities: A PDE and Scale-Space Based Approach paper_content: We present an energy based approach to estimate a dense disparity map from a set of two weakly calibrated stereoscopic images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. The resulting parabolic problem has a unique solution. In order to reduce the risk to be trapped within some irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. Experimental results on both synthetic and real images are presented to illustrate the capabilities of this PDE and scale-space based method. --- paper_title: View-dependent refinement of progressive meshes paper_content: Level-of-detail (LOD) representations are an important tool for realtime rendering of complex geometric environments. The previously introduced progressive mesh representation defines for an arbitrary triangle mesh a sequence of approximating meshes optimized for view-independent LOD. In this paper, we introduce a framework for selectively refining an arbitrary progressive mesh according to changing view parameters. We define efficient refinement criteria based on the view frustum, surface orientation, and screen-space geometric error, and develop a real-time algorithm for incrementally refining and coarsening the mesh according to these criteria. The algorithm exploits view coherence, supports frame rate regulation, and is found to require less than 15% of total frame time on a graphics workstation. Moreover, for continuous motions this work can be amortized over consecutive frames.
In addition, smooth visual transitions (geomorphs) can be constructed between any two selectively refined meshes. A number of previous schemes create view-dependent LOD meshes for height fields (e.g. terrains) and parametric surfaces (e.g. NURBS). Our framework also performs well for these special cases. Notably, the absence of a rigid subdivision structure allows more accurate approximations than with existing schemes. We include results for these cases as well as for general meshes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation Display algorithms; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional --- paper_title: Progressive simplicial complexes paper_content: In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PM’s. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional --- paper_title: Progressive multiresolution meshes for deforming surfaces paper_content: Time-varying surfaces are ubiquitous in movies, games, and scientific applications. For reasons of efficiency and simplicity of formulation, these surfaces are often generated and represented as dense polygonal meshes with static connectivity. As a result, such deforming meshes often have a tremendous surplus of detail, with many more vertices and polygons than necessary for any given frame. An extensive amount of work has addressed the issue of simplifying a static mesh: however, these methods are inadequate for time-varying surfaces when there is a high degree of non-rigid deformation. We thus propose a new multiresolution representation for deforming surfaces that, together with our dynamic improvement scheme, provides high quality surface approximations at any level-of-detail, for all frames of an animation. Our algorithm also gives rise to a new progressive representation for time-varying multiresolution hierarchies, consisting of a base hierarchy for the initial frame and a sequence of update operations for subsequent frames. 
We demonstrate that this provides a very effective means of extracting static or view-dependent approximations for a deforming mesh over all frames of an animation. --- paper_title: Efficient compression of non-manifold polygonal meshes paper_content: We present a method for compressing non-manifold polygonal meshes, i.e. polygonal meshes with singularities, which occur very frequently in the real-world. Most efficient polygonal compression methods currently available are restricted to a manifold mesh: they require a conversion process, and fail to retrieve the original model connectivity after decompression. The present method works by converting the original model to a manifold model, encoding the manifold model using an existing mesh compression technique, and clustering, or stitching together during the decompression process vertices that were duplicated earlier to faithfully recover the original connectivity. This paper focuses on efficiently encoding and decoding the stitching information. By separating connectivity from geometry and properties, the method avoids encoding vertices (and properties bound to vertices) multiple times; thus a reduction of the size of the bit-stream of about 10% is obtained compared with encoding the model as a manifold. --- paper_title: Geometry Coding and VRML paper_content: The virtual-reality modeling language (VRML) is rapidly becoming the standard file format for transmitting three-dimensional (3-D) virtual worlds across the Internet. Static and dynamic descriptions of 3-D objects, multimedia content, and a variety of hyperlinks can be represented in VRML files. Both VRML browsers and authoring tools for the creations of VRML files are widely available for several different platforms. In this paper, we describe the topologically assisted geometric compression technology included in our proposal for the VRML compressed binary format. This technology produces significant reduction of file sizes and, subsequently, of the time required for transmission of such filed across the Internet. Compression ratios of 50:1 or more are achieved for large models. The proposal also includes a binary encoding to create compact, rapidly parsable binary VRML files. The proposal is currently being evaluated by the Compressed Binary Format Working Group of the VRML consortium as a possible extension of the VRML standard. In the topologically assisted compression scheme, a polyhedron is represented using two interlocking trees: a spanning tree of vertices and a spanning tree of triangles. The connectivity information represented in other compact schemes, such as triangular strips and generalized triangular meshes, can be directly derived from this representation. Connectivity information for large models is compressed with storage requirements approaching one bit per triangle. A variable-length, optionally lossy compression technique is used for vertex positions, normals, colors, and texture coordinates. The format supports all VRML property binding conventions. --- paper_title: Progressive forest split compression paper_content: In this paper we introduce the Progressive Forest Split (PFS) representation, a new adaptive refinement scheme for storing and transmitting manifold triangular meshes in progressive and highly compressed form. 
As in the Progressive Mesh (PM) method of Hoppe, a triangular mesh is represented as a low resolution polygonal model followed by a sequence of refinement operations, each one specifying how to add triangles and vertices to the previous level of detail to obtain a new level. The PFS format shares with PM and other refinement schemes the ability to smoothly interpolate between consecutive levels of detail. However, it achieves much higher compression ratios than PM by using a more complex refinement operation which can, at the expense of reduced granularity, be encoded more efficiently. A forest split operation doubling the number n of triangles of a mesh requires a maximum of approximately 3:5n bits to represent the connectivity changes, as opposed to approximately (5 + log2(n))n bits in PM. We describe algorithms to efficiently encode and decode the PFS format. We also show how any surface simplification algorithm based on edge collapses can be modified to convert single resolution triangular meshes to the PFS format. The modifications are simple and only require two additional topological tests on each candidate edge collapse. We show results obtained by applying these modifications to the Variable Tolerance method of Gueziec. CR --- paper_title: Compressed Progressive Meshes paper_content: Most systems that support visual interaction with 3D models use shape representations based on triangle meshes. The size of these representations imposes limits on applications for which complex 3D models must be accessed remotely. Techniques for simplifying and compressing 3D models reduce the transmission time. Multiresolution formats provide quick access to a crude model and then refine it progressively. Unfortunately, compared to the best nonprogressive compression methods, previously proposed progressive refinement techniques impose a significant overhead when the full resolution model must be downloaded. The CPM (compressed progressive meshes) approach proposed here eliminates this overhead. It uses a new technique, which refines the topology of the mesh in batches, which each increase the number of vertices by up to 50 percent. Less than an amortized total of 4 bits per triangle encode where and how the topological refinements should be applied. We estimate the position of new vertices from the positions of their topological neighbors in the less refined mesh using a new estimator that leads to representations of vertex coordinates that are 50 percent more compact than previously reported progressive geometry compression techniques. --- paper_title: Layered depth images paper_content: In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan’s warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alphacompositing can be done efficiently without depth sorting. 
This makes splatting an efficient solution to the resampling problem. --- paper_title: Fitting smooth surfaces to dense polygon meshes paper_content: Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. While dense polygon meshes are an adequate representation for some applications, many users prefer smooth surface representations for reasons of compactness, control, manufacturability, or appearance. In this thesis, we present algorithms and an end-to-end software system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for interactive modification and animation and a fine but more expensive model suitable for rendering. ::: The first step in our process consists of interactively painting patch boundaries onto the polygonal surface. In many applications, the placement of patch boundaries is considered part of the creative process and is not amenable to automation. We present efficient techniques for representing, creating and editing curves on dense polygonal surfaces. ::: The second step in our process consists of finding a gridded resampling of each bounded section of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Our parameterization algorithm is automatic, efficient, and robust, even for complex polygonal surfaces. Prior algorithms have lacked one or more of these properties, making them unusable for dense meshes. Our strategy also provides the user a flexible method to design parameterizations--an ability that previous literature in surface approximation does not address. ::: The third and final step of our process consists of fitting a hybrid of B-spline surfaces and displacement maps to our gridded re-sampling. The displacement map is an image representation of the error between the fitted B-spline surfaces and our spring grid. Since displacement maps are just images our hybrid representation facilitates the use of image processing operators for manipulating the geometric detail of an object. ::: Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes--important for an interactive system. --- paper_title: Automatic reconstruction of B-spline surfaces of arbitrary topological type paper_content: Creating freeform surfaces is a challenging task even with advanced geometric modeling systems. Laser range scanners offer a promising alternative for model acquisition—the 3D scanning of existing objects or clay maquettes. The problem of converting the dense point sets produced by laser scanners into useful geometric models is referred to as surface reconstruction. In this paper, we present a procedure for reconstructing a tensor product B-spline surface from a set of scanned 3D points. Unlike previous work which considers primarily the problem of fitting a single B-spline patch, our goal is to directly reconstruct a surface of arbitrary topological type. We must therefore define the surface as a network of B-spline patches. A key ingredient in our solution is a scheme for automatically constructing both a network of patches and a parametrization of the data points over these patches. 
In addition, we define the B-spline surface using a surface spline construction, and demonstrate that such an approach leads to an efficient procedure for fitting the surface while maintaining tangent plane continuity. We explore adaptive refinement of the patch network in order to satisfy user-specified error tolerances, and demonstrate our method on both synthetic and real data. CR --- paper_title: Fast Dynamic Tessellation of Trimmed NURBS Surfaces paper_content: Trimmed NURBS (non-uniform rational B-splines) surfaces are being increasingly used and standardized in geometric modeling applications. Fast graphical processing of trimmed NURBS at interactive speeds is absolutely essential to enable these applications, which poses some unique challenges in software, hardware, and algorithm design. This paper presents a technique that uses graphical compilation to enable fast dynamic tessellation of trimmed NURBS surfaces under highly varying transforms. We use the concept of graphical data compilation, through which we preprocess the NURBS surface into a compact, view-independent form amenable for fast per-frame extraction of triangles. Much of the complexity of processing is absorbed during compilation. Arbitrarily complex trimming regions are broken down into simple regions that are specially designed to facilitate tessellation before rendering. Potentially troublesome cases of degeneracies in the surface are detected and dealt with during compilation. Compilation enables a clean separation of algorithm-intensive and compute-intensive operations, and provides for parallel implementations of the latter. Also, we exercise a classification technique while processing trimming loops, which robustly takes care of geometric ambiguities and deals with special cases while keeping the compilation code simple and concise. --- paper_title: The NURBS Book paper_content: The book covers all aspects of non-uniform rational B-splines necessary to design geometry in a computer-aided environment. Basic B-spline features, curve and surface algorithms, and state-of-the-art geometry tools are all discussed. Detailed code for design algorithms and computational tricks are covered too, in a lucid, easy-to-understand style, with a minimum of mathematics and using numerous worked examples. The book will be a must for students, researchers, and implementors whose work involves the use of splines. --- paper_title: Surface fitting with hierarchical splines paper_content: We consider the fitting of tensor product parametric spline surfaces to gridded data. The continuity of the surface is provided by the basis chosen. When tensor product splines are used with gridded data, the surface fitting problem decomposes into a sequence of curve fitting processes, making the computations particularly efficient. The use of a hierarchical representation for the surface adds further efficiency by adaptively decomposing the fitting process into subproblems involving only a portion of the data. Hierarchy also provides a means of storing the resulting surface in a compressed format. Our approach is compared to multiresolution analysis and the use of wavelets. --- paper_title: Interpolating Subdivision for meshes with arbitrary topology paper_content: Subdivision is a powerful paradigm for the generation of surfaces of arbitrary topology. Given an initial triangular mesh the goal is to produce a smooth and visually pleasing surface whose shape is controlled by the initial mesh.
Of particular interest are interpolating schemes since they match the original data exactly, and play an important role in fast multiresolution and wavelet techniques. Dyn, Gregory, and Levin introduced the Butterfly scheme, which yields C^1 surfaces in the topologically regular setting. Unfortunately it exhibits undesirable artifacts in the case of an irregular topology. We examine these failures and derive an improved scheme, which retains the simplicity of the Butterfly scheme, is interpolating, and results in smoother surfaces. --- paper_title: Multiresolution analysis of arbitrary meshes paper_content: In computer graphics and geometric modeling, shapes are often represented by triangular meshes. With the advent of laser scanning systems, meshes of extreme complexity are rapidly becoming commonplace. Such meshes are notoriously expensive to store, transmit, render, and are awkward to edit. Multiresolution analysis offers a simple, unified, and theoretically sound approach to dealing with these problems. Lounsbery et al. have recently developed a technique for creating multiresolution representations for a restricted class of meshes with subdivision connectivity. Unfortunately, meshes encountered in practice typically do not meet this requirement. In this paper we present a method for overcoming the subdivision connectivity restriction, meaning that completely arbitrary meshes can now be converted to multiresolution form. The method is based on the approximation of an arbitrary initial mesh M by a mesh M^J that has subdivision connectivity and is guaranteed to be within a specified tolerance. The key ingredient of our algorithm is the construction of a parametrization of M over a simple domain. We expect this parametrization to be of use in other contexts, such as texture mapping or the approximation of complex meshes by NURBS patches. CR --- paper_title: Multiresolution signal processing for meshes paper_content: We generalize basic signal processing tools such as downsampling, upsampling, and filters to irregular connectivity triangle meshes. This is accomplished through the design of a non-uniform relaxation procedure whose weights depend on the geometry and we show its superiority over existing schemes whose weights depend only on connectivity. This is combined with known mesh simplification methods to build subdivision and pyramid algorithms. We demonstrate the power of these algorithms through a number of application examples including smoothing, enhancement, editing, and texture mapping. --- paper_title: Interpolatory Subdivision on Open Quadrilateral Nets with Arbitrary Topology paper_content: A simple interpolatory subdivision scheme for quadrilateral nets with arbitrary topology is presented which generates C^1 surfaces in the limit. The scheme satisfies important requirements for practical applications in computer graphics and engineering. These requirements include the necessity to generate smooth surfaces with local creases and cusps. The scheme can be applied to open nets in which case it generates boundary curves that allow a C^0-join of several subdivision patches. Due to the local support of the scheme, adaptive refinement strategies can be applied. We present a simple device to preserve the consistency of such adaptively refined nets.
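Since several abstracts above build on or modify the Dyn-Levin-Gregory butterfly scheme, a minimal sketch of the underlying rule may help: for each interior edge of a triangle mesh, one new vertex is computed from the classic eight-point stencil below, while all old vertices are kept unchanged (the scheme is interpolating). Boundary rules and the special treatment of extraordinary vertices, which are exactly what the improved schemes above contribute, are omitted here; the function name and argument layout are illustrative.

```python
import numpy as np

def butterfly_edge_point(v1, v2, v3, v4, v5, v6, v7, v8):
    """New vertex inserted on an interior edge (v1, v2) of a triangle mesh by
    the classic eight-point butterfly stencil (tension parameter w = 1/16).

    v1, v2 : endpoints of the edge being split
    v3, v4 : vertices opposite the edge in its two incident triangles
    v5-v8  : the remaining "wing" vertices of the four outer triangles
    """
    p = [np.asarray(v, dtype=float) for v in (v1, v2, v3, v4, v5, v6, v7, v8)]
    return (0.5 * (p[0] + p[1])
            + 0.125 * (p[2] + p[3])
            - 0.0625 * (p[4] + p[5] + p[6] + p[7]))
```

Applying this rule to every interior edge and splitting each old triangle into four gives one subdivision step on a regular region of the mesh.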
--- paper_title: Multiresolution analysis for surfaces of arbitrary topological type paper_content: Multiresolution analysis and wavelets provide useful and efficient tools for representing functions at multiple levels of detail. Wavelet representations have been used in a broad range of applications, including image compression, physical simulation, and numerical analysis. In this article, we present a new class of wavelets, based on subdivision surfaces, that radically extends the class of representable functions. Whereas previous two-dimensional methods were restricted to functions difined on R 2 , the subdivision wavelets developed here may be applied to functions defined on compact surfaces of arbitrary topological type. We envision many applications of this work, including continuous level-of-detail control for graphics rendering, compression of geometric models, and acceleration of global illumination algorithms. Level-of-detail control for spherical domains is illustrated using two examples: shape approximation of a polyhedral model, and color approximation of global terrain data. --- paper_title: Globally smooth parameterizations with low distortion paper_content: Good parameterizations are of central importance in many digital geometry processing tasks. Typically the behavior of such processing algorithms is related to the smoothness of the parameterization and how much distortion it contains. Since a parameterization maps a bounded region of the plane to the surface, a parameterization for a surface which is not homeomorphic to a disc must be made up of multiple pieces. We present a novel parameterization algorithm for arbitrary topology surface meshes which computes a globally smooth parameterization with low distortion. We optimize the patch layout subject to criteria such as shape quality and metric distortion, which are used to steer a mesh simplification approach for base complex construction. Global smoothness is achieved through simultaneous relaxation over all patches, with suitable transition functions between patches incorporated into the relaxation procedure. We demonstrate the quality of our parameterizations through numerical evaluation of distortion measures and the excellent rate distortion performance of semi-regular remeshes produced with these parameterizations. The numerical algorithms required to compute the parameterizations are robust and run on the order of minutes even for large meshes. --- paper_title: Progressive geometry compression paper_content: We propose a new progressive compression scheme for arbitrary topology, highly detailed and densely sampled meshes arising from geometry scanning. We observe that meshes consist of three distinct components: geometry, parameter, and connectivity information. The latter two do not contribute to the reduction of error in a compression setting. Using semi-regular meshes, parameter and connectivity information can be virtually eliminated. Coupled with semi-regular wavelet transforms, zerotree coding, and subdivision based reconstruction we see improvements in error by a factor four (12dB) compared to other progressive coding schemes. --- paper_title: Interactive multiresolution mesh editing paper_content: We describe a multiresolution representation for meshes based on subdivision, which is a natural extension of the existing patch-based surface representations. 
Combining subdivision and the smoothing algorithms of Taubin [26] allows us to construct a set of algorithms for interactive multiresolution editing of complex hierarchical meshes of arbitrary topology. The simplicity of the underlying algorithms for refinement and coarsification enables us to make them local and adaptive, thereby considerably improving their efficiency. We have built a scalable interactive multiresolution editing system based on such algorithms. --- paper_title: Recursively generated B-spline surfaces on arbitrary topological meshes paper_content: Abstract This paper describes a method for recursively generating surfaces that approximate points lying-on a mesh of arbitrary topology. The method is presented as a generalization of a recursive bicubic B-spline patch subdivision algorithm. For rectangular control-point meshes, the method generates a standard B-spline surface. For non-rectangular meshes, it generates surfaces that are shown to reduce to a standard B-spline surface except at a small number of points, called extraordinary points. Therefore, everywhere except at these points the surface is continuous in tangent and curvature. At the extraordinary points, the pictures of the surface indicate that the surface is at least continuous in tangent, but no proof of continuity is given. A similar algorithm for biquadratic B-splines is also presented. --- paper_title: Exact evaluation of Catmull-Clark subdivision surfaces at arbitrary parameter values paper_content: In this paper we disprove the belief widespread within the computer graphics community that Catmull-Clark subdivision surfaces cannot be evaluated directly without explicitly subdividing. We show that the surface and all its derivatives can be evaluated in terms of a set of eigenbasis functions which depend only on the subdivision scheme and we derive analytical expressions for these basis functions. In particular, on the regular part of the control mesh where Catmull-Clark surfaces are bi-cubic B-splines, the eigenbasis is equal to the power basis. Also, our technique is both easy to implement and efficient. We have used our implementation to compute high quality curvature plots of subdivision surfaces. The cost of our evaluation scheme is comparable to that of a bi-cubic spline. Therefore, our method allows many algorithms developed for parametric surfaces to be applied to Catmull-Clark subdivision surfaces. This makes subdivision surfaces an even more attractive tool for free-form surface modeling. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Curve, Surface, Solid, and Object Representations J.6 [Computer Applications]: Computer-Aided Engineering—Computer Aided Design (CAD) --- paper_title: Subdivision surfaces in character animation paper_content: The creation of believable and endearing characters in computer graphics presents a number of technical challenges, including the modeling, animation and rendering of complex shapes such as heads, hands, and clothing. Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment. Subdivision surfaces are not new, but their use in high-end CG production has been limited. Here we describe a series of developments that were required in order for subdivision surfaces to meet the demands of high-end production. 
First, we devised a practical technique for constructing provably smooth variable-radius fillets and blends. Second, we developed methods for using subdivision surfaces in clothing simulation including a new algorithm for efficient collision detection. Third, we developed a method for constructing smooth scalar fields on subdivision surfaces, thereby enabling the use of a wider class of programmable shaders. These developments, which were used extensively in our recently completed short film Geri’s game, have become a highly valued feature of our production environment. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.3 [Computer Graphics]: Picture/Image Generation. --- paper_title: Displaced subdivision surfaces paper_content: In this paper we introduce a new surface representation, the displaced subdivision surface. It represents a detailed surface model as a scalar-valued displacement over a smooth domain surface. Our representation defines both the domain surface and the displacement function using a unified subdivision framework, allowing for simple and efficient evaluation of analytic surface properties. We present a simple, automatic scheme for converting detailed geometric models into such a representation. The challenge in this conversion process is to find a simple subdivision surface that still faithfully expresses the detailed model as its offset. We demonstrate that displaced subdivision surfaces offer a number of benefits, including geometry compression, editing, animation, scalability, and adaptive rendering. In particular, the encoding of fine detail as a scalar function makes the representation extremely compact. --- paper_title: Multilevel representation and transmission of real objects with progressive octree particles paper_content: We present a multilevel representation scheme adapted to storage, progressive transmission, and rendering of dense data sampled on the surface of real objects. Geometry and object attributes, such as color and normal, are encoded in terms of surface particles associated to a hierarchical space partitioning based on an octree. Appropriate ordering of surface particles results in a compact multilevel representation without increasing the size of the uniresolution model corresponding to the highest level of detail. This compact representation can progressively be decoded by the viewer and transformed by a fast direct triangulation technique into a sequence of triangle meshes with increasing levels of detail. The representation requires approximately 5 bits per particle (2.5 bits per triangle) to encode the basic geometrical structure. The vertex positions can then be refined by means of additional precision bits, resulting in 5 to 9 bits per triangle for representing a 12-bit quantized geometry. The proposed representation scheme is demonstrated with the surface data of various real objects. --- paper_title: QSplat: a multiresolution point rendering system for large meshes paper_content: Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points.
A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples. --- paper_title: Efficient High Quality Rendering of Point Sampled Geometry paper_content: We propose a highly efficient hierarchical representation for point sampled geometry that automatically balances sampling density and point coordinate quantization. The representation is very compact with a memory consumption of far less than 2 bits per point position which does not depend on the quantization precision. We present an efficient rendering algorithm that exploits the hierarchical structure of the representation to perform fast 3D transformations and shading. The algorithm is extended to surface splatting which yields high quality anti-aliased and water tight surface renderings. Our pure software implementation renders up to 14 million Phong shaded and textured samples per second and about 4 million anti-aliased surface splats on a commodity PC. This is more than a factor 10 times faster than previous algorithms. --- paper_title: Surface splatting paper_content: Modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. This paper describes a point rendering and texture filtering technique called surface splatting which directly renders opaque and transparent surfaces from point clouds without connectivity. It is based on a novel screen space formulation of the Elliptical Weighted Average (EWA) filter. Our rigorous mathematical analysis extends the texture resampling framework of Heckbert to irregularly spaced point samples. To render the points, we develop a surface splat primitive that implements the screen space EWA filter. Moreover, we show how to optimally sample image and procedural textures to irregular point data during pre-processing. We also compare the optimal algorithm with a more efficient view-independent EWA pre-filter. Surface splatting makes the benefits of EWA texture filtering available to point-based rendering. It provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order-independent transparency. --- paper_title: Progressive point set surfaces paper_content: Progressive point set surfaces (PPSS) are a multilevel point-based surface representation. They combine the usability of multilevel scalar displacement maps (e.g., compression, filtering, geometric modeling) with the generality of point-based surface representations (i.e., no fixed homology group or continuity class). The multiscale nature of PPSS fosters the idea of point-based modeling. The basic building block for the construction of PPSS is a projection operator, which maps points in the proximity of the shape onto local polynomial surface approximations. The projection operator allows the computing of displacements from smoother to more detailed levels. Based on the properties of the projection operator we derive an algorithm to construct a base point set. 
Starting from this base point set, a refinement rule using the projection operator constructs a PPSS from any given manifold surface. --- paper_title: Progressive multilevel meshes from octree particles paper_content: We present a multilevel object modeling scheme adapted to storage and progressive transmission of complex 3D objects. Geometry and texture are encoded in terms of surface particles associated to a hierarchical space partitioning via an octree. Proper ordering of surface particles results in a compact multilevel representation without increasing the size of the uni-resolution model which provides the highest level of detail. This compact representation can be progressively decoded by the viewer and transformed by a fast direct triangulation technique into a sequence of triangle meshes of increasing levels of detail. The proposed scheme is demonstrated with 3D models of real objects constructed by a shape from silhouette technique. --- paper_title: Pointshop 3D: an interactive system for point-based surface editing paper_content: We present a system for interactive shape and appearance editing of 3D point-sampled geometry. By generalizing conventional 2D pixel editors, our system supports a great variety of different interaction techniques to alter shape and appearance of 3D point models, including cleaning, texturing, sculpting, carving, filtering, and resampling. One key ingredient of our framework is a novel concept for interactive point cloud parameterization allowing for distortion minimal and aliasing-free texture mapping. A second one is a dynamic, adaptive resampling method which builds upon a continuous reconstruction of the model surface and its attributes. These techniques allow us to transfer the full functionality of 2D image editing operations to the irregular 3D point setting. Our system reads, processes, and writes point-sampled models without intermediate tesselation. It is intended to complement existing low cost 3D scanners and point rendering pipelines for efficient 3D content creation. --- paper_title: The digital Michelangelo project: 3D scanning of large statues paper_content: We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset is of the David - 2 billion polygons and 7,000 color images. In this paper, we discuss the challenges we faced in building this system, the solutions we employed, and the lessons we learned. We focus in particular on the unusual design of our laser triangulation scanner and on the algorithms and software we developed for handling very large scanned models. --- paper_title: Confetti: object-space point blending and splatting paper_content: We present Confetti, a novel point-based rendering approach based on object-space point interpolation of densely sampled surfaces. We introduce the concept of a transformation-invariant covariance matrix of a set of points which can efficiently be used to determine splat sizes in a multiresolution point hierarchy. 
We also analyze continuous point interpolation in object-space and we define a new class of parameterized blending kernels as well as a normalization procedure to achieve smooth blending. Furthermore, we present a hardware accelerated rendering algorithm based on texture mapping and alpha-blending as well as programmable vertex and pixel-shaders. --- paper_title: Surface modeling with oriented particle systems paper_content: Splines and deformable surface models are widely used in computer graphics to describe free-form surfaces. These methods require manual preprocessing to discretize the surface into patches and to specify their connectivity. We present a new model of elastic surfaces based on interacting particle systems, which, unlike previous techniques, can be used to split, join, or extend surfaces without the need for manual intervention. The particles we use have long-range attraction forces and short-range repulsion forces and follow Newtonian dynamics, much like recent computational models of fluids and solids. To enable our particles to model surface elements instead of point masses or volume elements, we add an orientation to each particle’s state. We devise new interaction potentials for our oriented particles which favor locally planar or spherical arrangements. We also develop techniques for adding new particles automatically, which enables our surfaces to stretch and grow. We demonstrate the application of our new particle system to modeling surfaces in 3-D and the interpolation of 3-D point sets. --- paper_title: Streaming QSplat: a viewer for networked visualization of large, dense models paper_content: Steady growth in the speeds of network links and graphics accelerator cards has brought increasing interest in streaming transmission of three-dimensional data sets. We demonstrate how streaming visualization can be made practical for data sets containing hundreds of millions of samples. Our system is based on QSplat, a multiresolution rendering system for dense polygon meshes that employs a bounding sphere hierarchy data structure and splat rendering. We show how to incorporate view-dependent progressive transmission into QSplat, by having the client request visible portions of the model in order from coarse to fine resolution. In addition, we investigate interaction techniques for improving the effectiveness of streaming data visualization. In particular, we explore color-coding streamed data by resolution, examine the order in which data should be transmitted in order to minimize visual distraction, and propose tools for giving the user fine control over download order. --- paper_title: Point based animation of elastic, plastic and melting objects paper_content: We present a method for modeling and animating a wide spectrum of volumetric objects, with material properties anywhere in the range from stiff elastic to highly plastic. Both the volume and the surface representation are point based, which allows arbitrarily large deviations from the original shape. In contrast to previous point based elasticity in computer graphics, our physical model is derived from continuum mechanics, which allows the specification of common material properties such as Young's Modulus and Poisson's Ratio. In each step, we compute the spatial derivatives of the discrete displacement field using a Moving Least Squares (MLS) procedure. From these derivatives we obtain strains, stresses and elastic forces at each simulated point.
We demonstrate how to solve the equations of motion based on these forces, with both explicit and implicit integration schemes. In addition, we propose techniques for modeling and animating a point-sampled surface that dynamically adapts to deformations of the underlying volumetric model. --- paper_title: Surfels: surface elements as rendering primitives paper_content: Surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a pre-process, an octree-based surfel representation of a geometric object is computed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using per-surfel normals. Several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. Due to the simplicity of the operations, the surfel rendering pipeline is amenable for hardware implementation. Surfel objects offer complex shape, low rendering cost and high image quality, which makes them specifically suited for low-cost, real-time graphics, such as games. --- paper_title: Layered depth images paper_content: In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan’s warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alphacompositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem. --- paper_title: Generalized Voxel Coloring paper_content: Image-based reconstruction from randomly scattered views is a challenging problem. We present a new algorithm that extends Seitz and Dyer's Voxel Coloring algorithm. Unlike their algorithm, ours can use images from arbitrary camera locations. The key problem in this class of algorithms is that of identifying the images from which a voxel is visible. Unlike Kutulakos and Seitz's Space Carving technique, our algorithm solves this problem exactly and the resulting reconstructions yield better results in our application, which is synthesizing new views. One variation of our algorithm minimizes color consistency comparisons; another uses less memory and can be accelerated with graphics hardware. We present efficiency measurements and, for comparison, we present images synthesized using our algorithm and Space Carving. 
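The voxel coloring methods summarized in the preceding entries all rest on a per-voxel photo-consistency test: a voxel is retained only if the pixels it projects to, in the images from which it is visible, have sufficiently similar colors. The Python sketch below illustrates one common form of that test; the camera `project` method, the nearest-pixel sampling, and the standard-deviation threshold are illustrative assumptions rather than the exact criterion of the cited papers.

```python
import numpy as np

def photo_consistent(voxel_center, cameras, images, visibility, tau=15.0):
    """Check whether a voxel is photo-consistent across the images that see it.

    voxel_center : (3,) world-space position of the voxel
    cameras      : camera objects with a .project(point) -> (u, v) method (assumed API)
    images       : list of HxWx3 arrays of RGB values
    visibility   : list of booleans, True if the voxel is unoccluded in that view
    tau          : threshold on the per-channel standard deviation of the samples
    """
    samples = []
    for cam, img, visible in zip(cameras, images, visibility):
        if not visible:
            continue
        u, v = cam.project(voxel_center)
        if 0 <= int(v) < img.shape[0] and 0 <= int(u) < img.shape[1]:
            samples.append(img[int(v), int(u)].astype(float))
    if len(samples) < 2:
        return True  # too few observations to reject the voxel
    samples = np.stack(samples)
    return float(samples.std(axis=0).max()) < tau
```

Generalized Voxel Coloring and Space Carving differ mainly in how the visibility flags are maintained for arbitrarily placed cameras; the consistency check itself remains essentially of this form.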
--- paper_title: On visible surface generation by a priori tree structures paper_content: This paper describes a new algorithm for solving the hidden surface (or line) problem, to more rapidly generate realistic images of 3-D scenes composed of polygons, and presents the development of theoretical foundations in the area as well as additional related algorithms. As in many applications the environment to be displayed consists of polygons many of whose relative geometric relations are static, we attempt to capitalize on this by preprocessing the environment's database so as to decrease the run-time computations required to generate a scene. This preprocessing is based on generating a “binary space partitioning” tree whose in order traversal of visibility priority at run-time will produce a linear order, dependent upon the viewing position, on (parts of) the polygons, which can then be used to easily solve the hidden surface problem. In the application where the entire environment is static with only the viewing-position changing, as is common in simulation, the results presented will be sufficient to solve completely the hidden surface problem. --- paper_title: Modelling with implicit surfaces that interpolate paper_content: We introduce new techniques for modelling with interpolating implicit surfaces. This form of implicit surface was first used for problems of surface reconstruction and shape transformation, but the emphasis of our work is on model creation. These implicit surfaces are described by specifying locations in 3D through which the surface should pass, and also identifying locations that are interior or exterior to the surface. A 3D implicit function is created from these constraints using a variational scattered data interpolation approach, and the iso-surface of this function describes a surface. Like other implicit surface descriptions, these surfaces can be used for CSG and interference detection, may be interactively manipulated, are readily approximated by polygonal tilings, and are easy to ray trace. A key strength for model creation is that interpolating implicit surfaces allow the direct specification of both the location of points on the surface and the surface normals. These are two important manipulation techniques that are difficult to achieve using other implicit surface representations such as sums of spherical or ellipsoidal Gaussian functions ("blobbies"). We show that these properties make this form of implicit surface particularly attractive for interactive sculpting using the particle sampling technique introduced by Witkin and Heckbert. Our formulation also yields a simple method for converting a polygonal model to a smooth implicit model, as well as a new way to form blends between objects. --- paper_title: A theory of shape by space carving paper_content: In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a special member of this class, the photo hull, that (1) can be computed directly from photographs of the scene, and (2) subsumes all other members of this class. We then give a provably-correct algorithm called Space Carving, for computing this shape and present experimental results on complex real-world scenes. 
The approach is designed to (1) build photorealistic shapes that accurately model scene appearance from a wide range of viewpoints, and (2) account for the complex interactions between occlusion, parallax, shading, and their effects on arbitrary views of a 3D scene. --- paper_title: A Hierarchical Data Structure for Representing the Spatial Decomposition of 3-D Objects paper_content: The polytree, a generalization of the octree data structure, retains most of the desirable features of the octree structure while offering several advantages. --- paper_title: Reconstruction and representation of 3D objects with radial basis functions paper_content: We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF — previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a "smoothest" interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world rangefinder data. --- paper_title: Multidimensional binary search trees used for associative searching paper_content: This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n-record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.
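Several of the point-based rendering and surface-reconstruction methods above depend on exactly the kind of nearest-neighbor queries the k-d tree was designed for. The sketch below is a minimal, generic k-d tree (median split on one coordinate per level, with a standard pruned nearest-neighbor search); names and the recursion style are illustrative rather than taken from the cited paper.

```python
import math

class KDNode:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree by splitting on the median along one axis per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(node, query, best=None):
    """Return the stored point closest to `query` in Euclidean distance."""
    if node is None:
        return best
    if best is None or _dist(query, node.point) < _dist(query, best):
        best = node.point
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < _dist(query, best):  # the far half-space may still hold a closer point
        best = nearest(far, query, best)
    return best
```

For example, nearest(build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]), (9, 2)) returns (8, 1).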
--- paper_title: A Simple Method for Color Quantization: Octree Quantization paper_content: A new method for filling a color table is presented that produces pictures of similar quality as existing methods, but requires less memory and execution time. All colors of an image are inserted in an octree, and this octree is reduced from the leaves to the root in such a way that every pixel has a well defined maximum error. The algorithm is described in PASCAL notation. --- paper_title: Efficient, Precise, and Accurate Utilization of the Uniqueness Constraint in Multi-View Stereo paper_content: In this paper, the depth cue due to the assumption of texture uniqueness is reviewed. The spatial direction over which a similarity measure is optimized, in order to establish a stereo correspondence, is considered and methods to increase the precision and accuracy of stereo reconstructions are presented. An efficient implementation of the above methods is offered, based on optimizations that evaluate potential correspondences hierarchically, in the spatial and angular dimensions. Furthermore, the expansion of the above techniques in a multi-view framework where calibration errors cause the misregistration of individually obtained reconstructions are considered, and a treatment of the data is proposed for the elimination of duplicate reconstructions of a single surface point. Finally, a processing step is proposed for the increase of reconstruction precision and post-processing of the final result. The above contributions are integrated in a generic and parallelizable implementation of the uniqueness constraint to observe speedup and increase in the fidelity of surface reconstruction. --- paper_title: Topology and geometry of unorganized point clouds paper_content: We present a new method for defining neighborhoods, and assigning principal curvature frames, and mean and Gauss curvatures to the points of an unorganized oriented point-cloud. The neighborhoods are estimated by measuring implicitly the surface distance between points. The 3D shape recovery is based on conformal geometry, works directly on the cloud, does not rely on the generation of polygonal, or smooth models. Test results on publicly available synthetic data, as ground truth, demonstrate that the method compares favorably to the established approaches for quantitative 3D shape recovery. The proposed method is developed to serve applications involving point based rendering and reliable extraction of differential properties from noisy unorganized point-clouds. --- paper_title: A volumetric method for building complex models from range images paper_content: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. 
To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles. --- paper_title: Hierarchical and adaptive visualization on nested grids paper_content: Modern numerical methods are capable to resolve fine structures in solutions of partial differential equations. Thereby they produce large amounts of data. The user wants to explore them interactively by applying visualization tools in order to understand the simulated physical process. Here we present a multiresolution approach for a large class of hierarchical and nested grids. It is based on a hierarchical traversal of mesh elements combined with an adaptive selection of the hierarchical depth. The adaptation depends on an error indicator which is closely related to the visual impression of the smoothness of isosurfaces or isolines, which are typically used to visualize data. Significant examples illustrate the applicability and efficiency on different types of meshes. --- paper_title: Modeling in Volume Graphics paper_content: In traditional CAD and solid modeling, 3D objects are represented in terms of their geometric components. In contrast, in volume graphics 3D objects are represented by a discrete digital model, which is stored as a large 3D array of unit volume elements (voxels). The rapid progress in hardware, primarily in memory subsystems, has been recently transforming the field of volume graphics into a major trend which offers an alternative to traditional 3D surface graphics. This paper discusses volume graphics and several related modeling techniques. --- paper_title: Light field mapping: efficient representation and hardware rendering of surface light fields paper_content: A light field parameterized on the surface offers a natural and intuitive description of the view-dependent appearance of scenes with complex reflectance properties. To enable the use of surface light fields in real-time rendering we develop a compact representation suitable for an accelerated graphics pipeline. We propose to approximate the light field data by partitioning it over elementary surface primitives and factorizing each part into a small set of lower-dimensional functions. We show that our representation can be further compressed using standard image compression techniques leading to extremely compact data sets that are up to four orders of magnitude smaller than the input data. Finally, we develop an image-based rendering method, light field mapping, that can visualize surface light fields directly from this compact representation at interactive frame rates on a personal computer. We also implement a new method of approximating the light field data that produces positive only factors allowing for faster rendering using simpler graphics hardware than earlier methods. We demonstrate the results for a variety of non-trivial synthetic scenes and physical objects scanned through 3D photography. 
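The factorization at the heart of light field mapping can be seen by arranging the radiance samples of a single surface patch into a matrix indexed by surface position and view direction and truncating its SVD; each retained term becomes a surface map paired with a view map. The sketch below shows only this core idea and is not the paper's actual approximation pipeline (which also handles partitioning over triangles, resampling, positivity, and hardware texture formats); all names are illustrative.

```python
import numpy as np

def factor_light_field(samples, num_terms=2):
    """Factor a per-patch light field into surface-map/view-map pairs via truncated SVD.

    samples   : (S, V) matrix of radiance values, rows = surface sample points,
                columns = discretized view directions
    num_terms : number of surface-map/view-map pairs to keep
    Returns (surface_maps, view_maps) with shapes (num_terms, S) and (num_terms, V).
    """
    U, s, Vt = np.linalg.svd(samples, full_matrices=False)
    k = min(num_terms, len(s))
    surface_maps = (U[:, :k] * np.sqrt(s[:k])).T      # fold singular values into both factors
    view_maps = np.sqrt(s[:k])[:, None] * Vt[:k]
    return surface_maps, view_maps

def reconstruct(surface_maps, view_maps):
    """Sum of outer products: the rank-k approximation of the original sample matrix."""
    return surface_maps.T @ view_maps
```

As the abstract notes, the resulting factor maps can then be compressed further with standard image compression, which is where the very large reported reductions come from.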
--- paper_title: Free viewpoint video extraction, representation, coding, and rendering paper_content: Free viewpoint video provides the possibility to freely navigate within dynamic real world video scenes by choosing arbitrary viewpoints and view directions. So far, related work only considered free viewpoint video extraction, representation, and rendering methods. Compression and transmission have not yet been studied in detail and combined with the other components into one complete system. In this paper, we present such a complete system for efficient free viewpoint video extraction, representation, coding, and interactive rendering. Data representation is based on 3D mesh models and view-dependent texture mapping using video textures. The geometry extraction is based on a shape-from-silhouette algorithm. The resulting voxel models are converted into 3D meshes that are coded using MPEG-4 SNHC tools. The corresponding video textures are coded using an H.264/AVC codec. Our algorithms for view-dependent texture mapping have been adopted as an extension of MPEG-4 AFX. The presented results illustrate that based on the proposed methods a complete transmission system for efficient free viewpoint video can be built. --- paper_title: Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach paper_content: We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach’s ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Color, shading, shadowing, and texture I.4.8 [Image Processing]: Scene Analysis Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD). --- paper_title: Opacity light fields: interactive rendering of surface light fields with view-dependent opacity paper_content: We present new hardware-accelerated techniques for rendering surface light fields with opacity hulls that allow for interactive visualization of objects that have complex reflectance properties and elaborate geometrical details. The opacity hull is a shape enclosing the object with view-dependent opacity parameterized onto that shape.
We call the combination of opacity hulls and surface light fields the opacity light field. Opacity light fields are ideally suited for rendering of the visually complex objects and scenes obtained with 3D photography. We show how to implement opacity light fields in the framework of three surface light field rendering methods: view-dependent texture mapping, unstructured lumigraph rendering, and light field mapping. The modified algorithms can be effectively supported on modern graphics hardware. Our results show that all three implementations are able to achieve interactive or real-time frame rates. --- paper_title: Unstructured lumigraph rendering paper_content: We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples. --- paper_title: Light field rendering paper_content: A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis --- paper_title: A Real-Time Distributed Light Field Camera paper_content: We present the design and implementation of a real-time, distributed light field camera.
Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples. --- paper_title: Interpolating view and scene motion by dynamic view morphing paper_content: We introduce the problem of view interpolation for dynamic scenes. Our solution to this problem extends the concept of view morphing and retains the practical advantages of that method. We are specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken. The synthetic interpolations produced by our algorithm portray one possible physically-valid version of what transpired in the scene during the missing time. It is assumed that each object in the original scene underwent a series of rigid translations. Dynamic view morphing can work with widely-spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the synthetic interpolation will portray scene objects moving along straight-line, constant-velocity trajectories in world space. --- paper_title: The Plenoptic Function and the Elements of Early Vision paper_content: This chapter contains sections titled: The Plenoptic Function, Plenoptic Structures, The Plenoptic Function and Elemental Measurements in Early Vision, Plenoptic Measurements in the Human Visual System, Periodic Tables for Early Vision, Further Computations, Conclusion, Appendix, References --- paper_title: Novel View Synthesis by Cascading Trilinear Tensors paper_content: Presents a new method for synthesizing novel views of a 3D scene from two or three reference images in full correspondence. The core of this work is the use and manipulation of an algebraic entity, termed the "trilinear tensor", that links point correspondences across three images. For a given virtual camera position and orientation, a new trilinear tensor can be computed based on the original tensor of the reference images. The desired view can then be created using this new trilinear tensor and point correspondences across two of the reference images.
--- paper_title: View Interpolation for Dynamic Scenes paper_content: This paper describes a novel technique for synthesizing a dynamic scene from two images without the use of a 3D model. A scene containing rigid or non-rigid objects, in which each object can move in any orientation or direction, is considered. It is shown that such a scene can be converted into several equivalent static scenes, where each scene only includes one rigid object. Our method can generate a series of continuous and realistic intermediate views from only two reference images without 3D knowledge. The procedure consists of three main steps: segmentation, morphing and postwarping. The key frames are first segmented into several layers. Each layer can be realistically morphed after determining its fundamental matrix. Based on the decomposition of the 3D rotation matrix, an optimized and unique postwarping path is automatically determined by the least distortion method and boundary connection constraint. Finally, four experiments, which include morphing of a rotating rigid object in presence of occlusion and morphing of non-rigid objects (human), are demonstrated. --- paper_title: Free viewpoint video synthesis and presentation from multiple sporting videos paper_content: This paper introduces two kinds of free viewpoint observation systems for sporting events captured with uncalibrated multiple cameras in a stadium. In the first system (viewpoint on demand system), a user can watch the realistic sporting scenes with the original stadium. In the second system (mixed reality presentation system), a user can virtually watch the scenes overlaid on a desktop stadium model via a video see-through head mounted display (HMD). In both systems, the user can observe sporting events from his/her favorite viewpoints, where the virtual view images are synthesized and presented by performing view interpolation. As outdoor scenes often have variations in lighting, we develop the systems to handle the changes of lighting condition. If the captured scene contains shadows, we synthesize the virtual view image of the shadows of the background and the foreground independently from real camera images using projective geometry between cameras.
The shadows of the foreground objects are then overlaid on the synthesized background of the original stadium or the stadium model in front of the user respectively. The results indicate that the appearance of shadows can produce a realistic mixed reality presentation. --- paper_title: View interpolation for image synthesis paper_content: Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which images may themselves be used as display primitives. The work reported by this paper endeavors to carry this concept to its logical extreme by using interpolated images to portray three-dimensional scenes. The special-effects technique of morphing, which combines interpolation of texture maps and their shape, is applied to computing arbitrary intermediate frames from an array of prestored images. If the images are a structured set of views of a 3D object or scene, intermediate frames derived by morphing can be used to approximate intermediate 3D transformations of the object or scene. Using the view interpolation approach to synthesize 3D scenes has two main advantages. First, the 3D representation of the scene may be replaced with images. Second, the image synthesis time is independent of the scene complexity. The correspondence between images, required for the morphing method, can be predetermined automatically using the range data associated with the images. The method is further accelerated by a quadtree decomposition and a view-independent visible priority. Our experiments have shown that the morphing can be performed at interactive rates on today’s high-end personal computers. Potential applications of the method include virtual holograms, a walkthrough in a virtual environment, image-based primitives and incremental rendering. The method also can be used to greatly accelerate the computation of motion blur and soft shadows cast by area light sources. CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. Additional Keywords: image morphing, interpolation, virtual reality, motion blur, shadow, incremental rendering, real-time display, virtual holography, motion compensation. --- paper_title: Survey of image-based representations and compression techniques paper_content: We survey the techniques for image-based rendering (IBR) and for compressing image-based representations. Unlike traditional three-dimensional (3-D) computer graphics, in which 3-D geometry of the scene is known, IBR techniques render novel views directly from input images. IBR techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative techniques. IBR techniques demonstrate a surprising diverse range in their extent of use of images and geometry in representing 3-D scenes. We explore the issues in trading off the use of images and geometry by revisiting plenoptic-sampling analysis and the notions of view dependency and geometric proxies. Finally, we highlight compression techniques specifically designed for image-based representations. 
Such compression techniques are important in making IBR techniques practical. --- paper_title: Automatic extraction of Irregular Network digital terrain models paper_content: For representation of terrain, an efficient alternative to dense grids is the Triangulated Irregular Network (TIN), which represents a surface as a set of non-overlapping contiguous triangular facets, of irregular size and shape. The source of digital terrain data is increasingly dense raster models produced by automated orthophoto machines or by direct sensors such as synthetic aperture radar. A method is described for automatically extracting a TIN model from dense raster data. An initial approximation is constructed by automatically triangulating a set of feature points derived from the raster model. The method works by local incremental refinement of this model by the addition of new points until a uniform approximation of specified tolerance is obtained. Empirical results show that substantial savings in storage can be obtained. --- paper_title: Procedural modeling of cities paper_content: Modeling a city poses a number of problems to computer graphics. Every urban area has a transportation network that follows population and environmental influences, and often a superimposed pattern plan. The buildings appearances follow historical, aesthetic and statutory rules. To create a virtual city, a roadmap has to be designed and a large number of buildings need to be generated. We propose a system using a procedural approach based on L-systems to model cities. From various image maps given as input, such as land-water boundaries and population density, our system generates a system of highways and streets, divides the land into lots, and creates the appropriate geometry for the buildings on the respective allotments. For the creation of a city street map, L-systems have been extended with methods that allow the consideration of global goals and local constraints and reduce the complexity of the production rules. An L-system that generates geometry and a texturing system based on texture elements and procedural methods compose the buildings. --- paper_title: The Algorithmic Beauty of Plants-The Virtual Laboratory paper_content: A bipartite eyeglass temple including a front section attached to an eyeglass frame and a rear section terminating in an ear engaging portion. Said front and rear sections are adjustably connected together so that the length of the temple can be varied to most comfortably accommodate the temple to a particular wearer. In addition, the rear section can be adjusted to vary the tension of a resilient means which yieldably urges said rear section forwardly for holding its rear part in engagement with the ear. The rear section may also be rotatably adjusted for conformably and most comfortably engaging the head behind the ear. --- paper_title: The Fractal Geometry of Nature paper_content: "...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right." Nature --- paper_title: Instant architecture paper_content: This paper presents a new method for the automatic modeling of architecture. Building designs are derived using split grammars, a new type of parametric set grammar based on the concept of shape. 
The paper also introduces an attribute matching system and a separate control grammar, which offer the flexibility required to model buildings using a large variety of different styles and design ideas. Through the adaptive nature of the design grammar used, the created building designs can either be generic or adhere closely to a specified goal, depending on the amount of data available. --- paper_title: Spacetime constraints paper_content: Spacetime constraints are a new method for creating character animation. The animator specifies what the character has to do, for instance, "jump from here to there, clearing a hurdle in between;" how the motion should be performed, for instance "don't waste energy," or "come down hard enough to splatter whatever you land on;" the character's physical structure: the geometry, mass, connectivity, etc. of the parts; and the physical resources available to the character to accomplish the motion, for instance the character's muscles, a floor to push off from, etc. The requirements contained in this description, together with Newton's laws, comprise a problem of constrained optimization. The solution to this problem is a physically valid motion satisfying the "what" constraints and optimizing the "how" criteria. We present as examples a Luxo lamp performing a variety of coordinated motions. These realistic motions conform to such principles of traditional animation as anticipation, squash-and-stretch, follow-through, and timing. --- paper_title: Computer Animation: Algorithms and Techniques paper_content: Driven by demand from the entertainment industry for better and more realistic animation, technology continues to evolve and improve. The algorithms and techniques behind this technology are the foundation of this comprehensive book, which is written to teach you the fundamentals of animation programming. In this third edition, the most current techniques are covered along with the theory and high-level computation that have earned the book a reputation as the best technically-oriented animation resource. Key topics such as fluids, hair, and crowd animation have been expanded, and extensive new coverage of clothes and cloth has been added. New material on simulation provides a more diverse look at this important area and more example animations and chapter projects and exercises are included. Additionally, spline coverage has been expanded and new video compression and formats (e.g., iTunes) are covered.
* Includes a companion site with contemporary animation examples drawn from research and entertainment, sample animations, and example code * Describes the key mathematical and algorithmic foundations of animation that provide you with a deep understanding and control of technique * Expanded and new coverage of key topics including: fluids and clouds, cloth and clothes, hair, and crowd animation * Explains the algorithms used for path following, hierarchical kinematic modelling, rigid body dynamics, flocking behaviour, particle systems, collision detection, and more. Table of Contents: Chapter 1: Introduction; Chapter 2: Technical Background; Chapter 3: Interpolating Values; Chapter 4: Interpolation-Based Animation; Chapter 5: Kinematic Linkages; Chapter 6: Motion Capture; Chapter 7: Physically Based Animation; Chapter 8: Fluids: Liquids & Gases; Chapter 9: Modeling and Animating Human Figures; Chapter 10: Facial Animation; Chapter 11: Behavioral Animation; Chapter 12: Special Models for Animation; Appendix A: Rendering Issues; Appendix B: Background Information and Techniques; Index. --- paper_title: Elastically deformable models paper_content: The theory of elasticity describes deformable materials such as rubber, cloth, paper, and flexible metals. We employ elasticity theory to construct differential equations that model the behavior of non-rigid curves, surfaces, and solids as a function of time. Elastically deformable models are active: they respond in a natural way to applied forces, constraints, ambient media, and impenetrable obstacles. The models are fundamentally dynamic, and realistic animation is created by numerically solving their underlying differential equations. Thus, the description of shape and the description of motion are unified. --- paper_title: Fast animation and control of nonrigid structures paper_content: We describe a fast method for creating physically based animation of non-rigid objects. Rapid simulation of non-rigid behavior is based on global deformations. Constraints are used to connect non-rigid pieces to each other, forming complex models. Constraints also provide motion control, allowing model points to be moved accurately along specified trajectories. The use of deformations that are linear in the state of the system causes the constraint matrices to be constant. Pre-inverting these matrices therefore yields an enormous benefit in performance, allowing reasonably complex models to be manipulated at interactive speed. --- paper_title: Physically based models with rigid and deformable components paper_content: A class of physically based models suitable for animating flexible objects in simulated physical environments was proposed earlier by the authors (1987). The original formulation works well in practice for models whose shapes are moderately to highly deformable, but it tends to become numerically ill-conditioned as the rigidity of the models is increased. An alternative formulation of deformable models is presented in which deformations are decomposed into a reference component, which may represent an arbitrary shape, and a displacement component, allowing deformation away from this reference shape. The application of the deformable models to a physically based computer animation project is illustrated.
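All three deformable-model papers above ultimately integrate Newtonian dynamics for point masses coupled by elastic forces. As a point of reference, a deliberately simplified mass-spring time step is sketched below; it stands in for, but does not reproduce, the continuum and constraint formulations described in the abstracts, and the stiffness, damping, and step-size values are illustrative.

```python
import numpy as np

def step_mass_spring(x, v, springs, masses, dt=1e-3, k=500.0, damping=0.02,
                     gravity=np.array([0.0, -9.81, 0.0])):
    """Advance a mass-spring system one semi-implicit (symplectic) Euler step.

    x, v    : (N, 3) positions and velocities
    springs : list of (i, j, rest_length) tuples
    masses  : (N,) array of point masses
    """
    f = gravity * masses[:, None]             # external force: gravity on every particle
    for i, j, rest in springs:
        d = x[j] - x[i]
        length = np.linalg.norm(d) + 1e-12
        # Hooke's law along the spring direction, applied equal and opposite
        fs = k * (length - rest) * (d / length)
        f[i] += fs
        f[j] -= fs
    f -= damping * v                          # simple velocity damping
    v_new = v + dt * f / masses[:, None]
    x_new = x + dt * v_new                    # update positions with the new velocities
    return x_new, v_new
```

Techniques such as implicit integration or the pre-inverted constraint matrices mentioned above are what make this kind of simulation stable and fast enough for interactive manipulation.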
--- paper_title: Dynamic deformation of solid primitives with constraints paper_content: This paper develops a systematic approach to deriving dynamic models from parametrically defined solid primitives, global geometric deformations and local finite-element deformations. Even though their kinematics is stylized by the particular solid primitive used, the models behave in a physically correct way with prescribed mass distributions and elasticities. We also propose efficient constraint methods for connecting these new dynamic primitives together to make articulated models. Our techniques make it possible to build and animate constrained, nonrigid, unibody or multibody objects in simulated physical environments at interactive rates. 1 Introduction The graphics literature is replete with solid object representations. Unfortunately, it is not particularly easy to synthesize realistic animation through direct application of the geometric representations of solid modeling (5), and the problems are exacerbated when animate objects can deform. Physics-based animation has begun to overcome some of the difficulties. We propose a systematic approach for creating dynamic solid models capable of realistic physical behaviors starting from common solid primitives such as spheres, cylinders, cones, or superquadrics. Such primitives can "deform" kinematically in simple ways; for example, a cylinder deforms as its radius or length is changed. To gain additional modeling power we allow the primitives to undergo parameterized global deformations (bends, tapers, twists, shears, etc.) of the sort proposed in (2). To further enhance the geometric flexibility, we permit local free-form deformations. Our local deformations are similar in spirit to the FFDs of (12), but rather than being ambient space warps (12, 10), they are incorporated directly into the solid primitive as finite element shape functions. Through the application of Lagrangian mechanics and the finite element method our models inherit generalized coordinates that comprise the geometric parameters of the solid primitive, the global and local deformation parameters, and the six degrees of freedom of rigid-body motion. Lagrange equations govern the dynamics, dictating the evolution of the --- paper_title: A modeling system based on dynamic constraints paper_content: We present "dynamic constraints," a physically-based technique for constraint-based control of computer graphics models. Using dynamic constraints, we build objects by specifying geometric constraints; the models assemble themselves as the elements move to satisfy the constraints. The individual elements are rigid bodies which act in accordance with the rules of physics, and can thus exhibit physically realistic behavior. To implement the constraints, a set of "constraint forces" is found, which causes the bodies to act in accordance with the constraints; finding these "constraint forces" is an inverse dynamics problem. --- paper_title: The rendering equation paper_content: We present an integral equation which generalizes a variety of known rendering algorithms. In the course of discussing a monte carlo solution we also present a new form of variance reduction, called Hierarchical sampling and give a number of elaborations shows that it may be an efficient new technique for a wide variety of monte carlo procedures. The resulting rendering algorithm extends the range of optical phenomena which can be effectively simulated.
--- paper_title: A Ray tracing algorithm for progressive radiosity paper_content: A new method for computing form-factors within a progressive radiosity approach is presented. Previously, the progressive radiosity approach has depended on the use of the hemi-cube algorithm to determine form-factors. However, sampling problems inherent in the hemi-cube algorithm limit its usefulness for complex images. A more robust approach is described in which ray tracing is used to perform the numerical integration of the form-factor equation. The approach is tailored to provide good, approximate results for a low number of rays, while still providing a smooth continuum of increasing accuracy for higher numbers of rays. Quantitative comparisons between analytically derived form-factors and ray traced form-factors are presented. --- paper_title: Realistic Image Synthesis Using Photon Mapping paper_content: The creation of photorealistic images of three-dimensional models is central to computer graphics. Photon mapping, an extension of ray tracing, makes it possible to efficiently simulate global illumination in complex scenes. Photon mapping can simulate caustics (focused light, like shimmering waves at the bottom of a swimming pool), diffuse inter-reflections (e.g., the bleeding of colored light from a red wall onto a white floor, giving the floor a reddish tint), and participating media (such as clouds or smoke). This book is a practical guide to photon mapping; it provides both the theory and the practical insight necessary to implement photon mapping and simulate all types of direct and indirect illumination efficiently. --- paper_title: Environment Mapping and Other Applications of World Projections paper_content: Various techniques have been developed that employ projections of the world as seen from a particular viewpoint. Blinn and Newell introduced reflection mapping for simulating mirror reflections on curved surfaces. Miller and Hoffman have presented a general illumination model based on environment mapping. World projections have also been used to model distant objects and to produce pictures with the fish-eye distortion required for Omnimax frames. This article proposes a uniform framework for representing and using world projections and argues that the best general-purpose representation is the projection onto a cube. Surface shading and texture filtering are discussed in the context of environment mapping, and methods are presented for obtaining diffuse and specular surface illumination from prefiltered environment maps. Comparisons are made with ray tracing, noting that two problems with ray tracing--obtaining diffuse reflection and antialiasing specular reflection--can be handled effectively by environment mapping. --- paper_title: Continuous shading of curved surfaces paper_content: A procedure for computing shaded pictures of curved surfaces is presented. The surface is approximated by small polygons in order to solve easily the hidden-parts problem, but the shading of each polygon is computed so that the discontinuities of shade are eliminated across the surface and a smooth appearance is obtained. In order to achieve speed efficiency, the technique developed by Watkins is used which makes possible a hardware implementation of this algorithm. --- paper_title: Illumination for computer generated pictures paper_content: The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen.
The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images. --- paper_title: Simulation of wrinkled surfaces paper_content: Computer generated shaded images have reached an impressive degree of realism with the current state of the art. They are not so realistic, however, that they would fool many people into believing they are real. One problem is that the surfaces tend to look artificial due to their extreme smoothness. What is needed is a means of simulating the surface irregularities that are on real surfaces. In 1973 Ed Catmull introduced the idea of using the parameter values of parametrically defined surfaces to index into a texture definition function which scales the intensity of the reflected light. By tying the texture pattern to the parameter values, the texture is guaranteed to rotate and move with the object. This is good for showing patterns painted on the surface, but attempts to simulate rough surfaces in this way are unconvincing. This paper presents a method of using a texturing function to perform a small perturbation on the direction of the surface normal before using it in the intensity calculations. This process yields images with realistic looking surface wrinkles without the need to model each wrinkle as a separate surface element. Several samples of images made with this technique are included. --- paper_title: Layered construction for deformable animated characters paper_content: A methodology is proposed for creating and animating computer generated characters which combines recent research advances in robotics, physically based modeling and geometric modeling. The control points of geometric modeling deformations are constrained by an underlying articulated robotics skeleton. These deformations are tailored by the animator and act as a muscle layer to provide automatic squash and stretch behavior of the surface geometry. A hierarchy of composite deformations provides the animator with a multi-layered approach to defining both local and global transition of the character's shape. The muscle deformations determine the resulting geometric surface of the character. This approach provides independent representation of articulation from surface geometry, supports higher level motion control based on various computational models, as well as a consistent, uniform character representation which can be tuned and tweaked by the animator to meet very precise expressive qualities. A prototype system (Critter) currently under development demonstrates research results towards layered construction of deformable animated characters. --- paper_title: Digital Representations of Human Movement paper_content: There are many different approaches to the representation, within a digital computer, of information describing the movement of the human body. The general issue of movement representation is approached from two points of view: notation systems designed for recording movement and animation systems designed for the display of movement.
The interpretation of one particular notation system, Labanotation, is examined to extract a set of "primitive movement concepts" which can be used to animate a realistic human body on a graphics display. The body is represented computationally as a network of special-purpose processors--one processor situated at each joint of the body--each with an instruction set designed around the movement concepts derived from Labanotation. Movement is achieved by simulating the behavior of these processors as they interpret their respective programs. --- paper_title: Real time muscle deformations using mass-spring systems paper_content: We propose a method to simulate muscle deformation in real-time, still aiming at satisfying visual results; that is, we are not attempting perfect simulation but building a useful tool for interactive applications. Muscles are represented at 2 levels: the action lines and the muscle shape. The action line represents the force produced by a muscle on the bones, while the muscle shapes used in the simulation consist of a surface based model fitted to the boundary of medical image data. The algorithm to model muscle shapes is described. To physically simulate deformations, we used a mass-spring system with a new kind of springs called "angular springs" which were developed to control the muscle volume during simulation. Results are presented as examples at the end of the paper. --- paper_title: 3-D model-based tracking of humans in action: a multi-view approach paper_content: We present a vision system for the 3-D model-based tracking of unconstrained human movement. Using image sequences acquired simultaneously from multiple views, we recover the 3-D body pose at each time instant without the use of markers. The pose-recovery problem is formulated as a search problem and entails finding the pose parameters of a graphical human model whose synthesized appearance is most similar to the actual appearance of the real human in the multi-view images. The models used for this purpose are acquired from the images. We use a decomposition approach and a best-first technique to search through the high dimensional pose parameter space. A robust variant of chamfer matching is used as a fast similarity measure between synthesized and real edge images. We present initial tracking results from a large new Humans-in-Action (HIA) database containing more than 2500 frames in each of four orthogonal views. They contain subjects involved in a variety of activities, of various degrees of complexity, ranging from the more simple one-person hand waving to the challenging two-person close interaction in the Argentine Tango. --- paper_title: Ballerinas generated by a personal computer paper_content: Beautiful 3-D figures of ballerinas have been generated by a non-professional artist using a personal computer with commercially available ray-tracing software and metaballs. This paper describes how these beautiful artistic creations have been drawn using very limited computational tools. --- paper_title: Human skin model capable of natural shape variation paper_content: In the production of character animation that treats living things moving at will, as in the case of humans and animals, it is important to express natural action and realistic body shape. If we can express these freely and easily, we will be able to apply character animation to many fields, such as the simulation of dances and sports, and electronic stand-ins.
The computer graphics technique may be one of the most effective means to achieve such goals. We have developed human skin model capable of natural shape variation. This model has a skeleton structure, and free form surfaces cover the skeleton just like skin. The model permits continuous motion of every components of the skeleton according to actions. During such movements, the skin retains smoothness and naturalness. We are verifying the human skin model by producing several short animation pieces. --- paper_title: Real-time Display of Virtual Humans: Levels of Details and Impostors paper_content: Rendering and animating in real-time a multitude of articulated characters presents a real challenge, and few hardware systems are up to the task. Up to now, little research has been conducted to tackle the issue of real-time rendering of numerous virtual humans. This paper presents a hardware-independent technique that improves the display rate of animated characters by acting on the sole geometric and rendering information. We first review the acceleration techniques traditionally in use in computer graphics and highlight their suitability to articulated characters. We then show how impostors can be used to render virtual humans. We introduce concrete case studies that demonstrate the effectiveness of our approach. Finally, we tackle the visibility issue. --- paper_title: Subdivision surfaces in character animation paper_content: The creation of believable and endearing characters in computer graphics presents a number of technical challenges, including the modeling, animation and rendering of complex shapes such as heads, hands, and clothing. Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment. Subdivision surfaces are not new, but their use in high-end CG production has been limited. Here we describe a series of developments that were required in order for subdivision surfaces to meet the demands of high-end production. First, we devised a practical technique for constructing provably smooth variable-radius fillets and blends. Second, we developed methods for using subdivision surfaces in clothing simulation including a new algorithm for efficient collision detection. Third, we developed a method for constructing smooth scalar fields on subdivision surfaces, thereby enabling the use of a wider class of programmable shaders. These developments, which were used extensively in our recently completed short film Geri’s game, have become a highly valued feature of our production environment. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.3 [Computer Graphics]: Picture/Image Generation. --- paper_title: Layered construction for deformable animated characters paper_content: A methodology is proposed for creating and animating computer generated characters which combines recent research advances in robotics, physically based modeling and geometric modeling. The control points of geometric modeling deformations are constrained by an underlying articulated robotics skeleton. These deformations are tailored by the animator and act as a muscle layer to provide automatic squash and stretch behavior of the surface geometry. 
A hierarchy of composite deformations provides the animator with a multi-layered approach to defining both local and global transition of the character's shape. The muscle deformations determine the resulting geometric surface of the character. This approach provides independent representation of articulation from surface geometry, supports higher level motion control based on various computational models, as well as a consistent, uniform character representation which can be tuned and tweaked by the animator to meet very precise expressive qualities. A prototype system (Critter) currently under development demonstrates research results towards layered construction of deformable animated characters. --- paper_title: Understanding Motion Capture for Computer Animation and Video Games paper_content: From the Publisher: ::: In Understanding Motion Capture for Computer Animation and Video Games, industry insider Alberto Menache tells the complete story of motion capture, examining its technical details as well as its growth as an industry. Menache's narrative voice and in-depth technical discussions allow the reader not only to learn motion capture, but also to understand the reasons behind its successes, failures, and increasing role in such blockbuster films as Batman Forever and Batman and Robin. With its careful balance between technical analysis and industry trends, Understanding Motion Capture for Computer Animation and Video Games is the first book to explore the controversial art and practice of modern character animation using motion capture. --- paper_title: Articulated Figure Positioning by Multiple Constraints paper_content: A problem that arises in positioning an articulated figure is the solution of 3D joint positions (kinematics) when goal positions, rather than joint angles, are given. If more than one such goal is to be achieved, the problem is often solved interactively by positioning or solving one component of the linkage, then adjusting another, then redoing the first, and so on. This iterative process is slow and tedious. We present a method that automatically solves multiple simultaneous joint position goals. The user interface offers a six-degree-of-freedom input device to specify joint angles and goal positions interactively. Examples are used to demonstrate the power and efficiency of this method for key position animation. --- paper_title: Making Them Move: Mechanics, Control & Animation of Articulated Figures paper_content: Making Them Move: Mechanics, Control, and Animation of Articulated Figures Edited by Norman I. Badler, Brian A. Barsky, and David Zeltzer PART ONE -- INTERACTING WITH ARTICULATED FIGURES Chapter 1 Task-level Graphical Simulation: Abstraction, Representation, and Control David Zeltzer Chapter 2 Composition of Realistic Animation Sequences for Multiple Human Figures Tom Calvert Chapter 3 Animation from Instructions Norman I. Badler, Bonnie L. Webber, Jugal Kalita, and Jeffrey Esakov PART TWO -- ARTIFICIAL AND BIOLOGICAL MECHANISMS FOR MOTOR CONTROL ARTIFICIAL MOTOR PROGRAMS Chapter 4 A Robot that Walks: Emergent Behaviors from a Carefully Evolved Network Rodney A. Brooks BIOLOGICAL MOTOR PROGRAMS Chapter 5 Sensory Elements in Pattern-Generating Networks K.G. Pearson Chapter 6 Motor Programs as Units of Movement Control Douglas E. Young and Richard A. Schmidt Chapter 7 Dynamics and Task-specific Coordinations M.T. Turvey, Elliot Saltzman, and R.C. Schmidt Chapter 8 Dynamic Pattern Generation and Recognition J.A.S. Kelso and A.S.
Pandya LEARNING MOTOR PROGRAMS Chapter 9 A Computer System for Movement Schemas Peter H. Greene and Dan Solomon PART THREE -- MOTION CONTROL ALGORITHMS Chapter 10 Constrained Optimization of Articulated Animal Movement in Computer Animation Michael Girard Chapter 11 Goal-directed Animation of Tubular Articulated Figures or How Snakes Play Golf Gavin Miller Chapter 12 Human Body Deformations Using Joint-dependent Local Operators and Finite-Element Theory Nadia Magnenat-Thalmann and Daniel Thalmann PART FOUR -- COMPUTING THE DYNAMICS OF MOTION Chapter 13 Dynamic Experiences Jane Wilhelms Chapter 14 Using Dynamics in Computer Animation: Control and Solution Issues Mark Green Chapter 15 Teleological Modeling Alan H. Barr Appendix A: Video Notes Appendix B: About the Authors Index --- paper_title: Interactive animation of parametric models paper_content: This paper describes a program which allows parametric models of three-dimensional characters and scenes to be interactively controlled for computer animation. The system attempts to span the two most common approaches to animation: language-driven or programmed and visually-driven or interactive. Models are designed in a geometry language which supports vector and matrix arithmetic, transformations and instancing of primitive parts. As a result, constraints and functional dependencies between different parts can be programmed. Control is achieved by parameterizing the model. Subsets of parameters can be connected to different logical input devices, establishing an input mode to control the model's shape. Parameter sets can be stored to form a database of positions. Positions then can be mapped to frames and interpolated to animate the model. --- paper_title: Parametric keyframe interpolation incorporating kinetic adjustment and phrasing control paper_content: Parametric keyframing is a popular animation technique where values for parameters which control the position, orientation, size, and shape of modeled objects are determined at key times, then interpolated for smooth animation. Typically the parameter values defined by the keyframes are interpolated by spline techniques with the result that the parameter change kinetics are implicitly defined by the given keyframe times and data points. Existing interpolation systems for animation are examined and found to lack certain desirable features such as continuity of acceleration or convenient kinetic control. The requirements of interpolation for animation are analyzed in order to determine the characteristics of a satisfactory system. A new interpolation system is developed and implemented which incorporates second-derivative continuity (continuity of acceleration), local control, convenient kinetic control, and joining and phrasing of successive motions. Phrasing control includes the ability to parametrically control the degree and extent of smooth motion flow between separately defined motions. --- paper_title: A Real Time Face Tracking And Animation System paper_content: In this paper, a novel system for real time face tracking and animation is presented. The system is composed of two major components: (1) real time infra-red (IR) based active facial feature tracking, and (2) real time facial expression generation based on a 3D face avatar. Twenty-two feature points, head pose orientation and eye close-open status are effectively extracted through a video input. 
Based on the detected facial features, a 3D face model is animated by a dynamic inference algorithm and a transformation from facial motion parameters to facial animation parameters. The work can be extended to the fields of real time facial expression analysis and synthesis for applications of human-computer interaction, model-based video conferencing and low bit rate avatar communication. The performance of the developed system is evaluated by its real time implementation for facial expression generation. --- paper_title: Facial image synthesis using skin texture recording paper_content: A significant issue in synthesizing realistic faces is the representation of skin grain and wrinkles. This paper describes a new approach based on the registered body of a real person. This recording allows simultaneously registering of the 3-D coordinates of a point and the corresponding reflected intensity. Using a 4-D B-spline surface to reconstruct the face, we come close to achieving a photographic result. --- paper_title: Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models paper_content: An approach to the analysis of dynamic facial images for the purposes of estimating and resynthesizing dynamic facial expressions is presented. The approach exploits a sophisticated generative model of the human face originally developed for realistic facial animation. The face model which may be simulated and rendered at interactive rates on a graphics workstation, incorporates a physics-based synthetic facial tissue and a set of anatomically motivated facial muscle actuators. The estimation of dynamical facial muscle contractions from video sequences of expressive human faces is considered. An estimation technique that uses deformable contour models (snakes) to track the nonrigid motions of facial features in video images is developed. The technique estimates muscle actuator controls with sufficient accuracy to permit the face model to resynthesize transient expressions. > --- paper_title: Physically-Based Facial Modeling , Analysis , and Animation paper_content: We develop a new 3D hierarchical model of the human face. The model incorporates a physically-based approximation to facial tissue and a set of anatomically-motivated facial muscle actuators. Despite its sophistication, the model is efficient enough to produce facial animation at interactive rates on a high-end graphics workstation. A second contribution of this paper is a technique for estimating muscle contractions from video sequences of human faces performing expressive articulations. These estimates may be input as dynamic control parameters to the face model in order to produce realistic animation. Using an example, we demonstrate that our technique yields sufficiently accurate muscle contraction estimates for the model to reproduce expressions from dynamic images of faces. --- paper_title: MPEG-4 Facial Animation: The Standard, Implementation and Applications paper_content: From the Publisher: ::: This book concentrates on the animation of faces. The Editors put the MPEG-4 FA standard against the historical background of research on facial animation and model-based coding, and provide a brief history of the development of the standard itself. In part 2 there is a comprehensive overview of the FA specification with the goal of helping the reader understand how the standard really works. Part 3, forms the bulk of the book and covers the implementations of the standard on both encoding and decoding side. 
Several face animation techniques for MPEG-4 FA decoders are presented. While the standard itself actually specifies only the decoder, for applications it is interesting to look at the wide range of technologies for producing and encoding the FA contents. These include video analysis, speech analysis and synthesis, as well as keyframe animation. The last part of the book provides several examples of applications using the MPEG-4 FA standard. ::: It will be useful for companies implementing products and services related to the new standard, researchers in several domains related to facial animation, as well as wider technical audience interested in new technologies and applications.· ::: The main people involved in the standardization process are contributors to the book Provides several examples of applications using the MPEG-4 Facial Animation standard, including video and speech analysis This will become THE reference for the MPEG-4 Facial Animation Aids the understanding of the reasoning behind the standard specification, how it works and what the potential applications are Gives an overview of the technologies directly related to the standard and its implementation ::: Essential reading for the Industry and research community especially engineers, researchers and developers. --- paper_title: Simulation of Facial Muscle Actions Based on Rational Free Form Deformations paper_content: This paper describes interactive facilities for simulating abstract muscle actions using rational free form deformations (RFFD). The particular muscle action is simulated as the displacement of the control points of the control-unit for an RFFD defined on a region of interest. One or several simulated muscle actions constitute a minimum perceptible action (MPA), which is defined as the atomic action unit, similar to action unit (AU) of the facial action coding system (FACS), to build an expression --- paper_title: Face and 2-d mesh animation in MPEG-4 paper_content: This paper presents an overview of some of the synthetic visual objects supported by MPEG-4 version-1, namely animated faces and animated arbitrary 2D uniform and Delaunay meshes. We discuss both specification and compression of face animation and 2D-mesh animation in MPEG-4. Face animation allows to animate a proprietary face model or a face model downloaded to the decoder. We also address integration of the face animation tool with the text-to-speech interface (TTSI), so that face animation can be driven by text input. --- paper_title: Langwidere: a new facial animation system paper_content: This paper presents Langwidere, a facial animation system that integrates hierarchical spline modeling with simulated muscles based upon local area surface deformations. The multi-level shape representation of a hierarchical spline provides control over the extent of deformations, while at the same time reducing the amount of data needed to define the surface. The facial model is constructed from a single closed surface defining the entire head and internal structures of the mouth cavity. Simulated muscles are attached to various levels of the surface with more rudimentary levels substituting for bone such as the skull and jaw. The combination of a hierarchical model and simulated muscles provides a smooth surface with precise and flexible control over surface shape. > --- paper_title: 3DAV exploration of video-based rendering technology in MPEG paper_content: New kinds of media are emerging that extend the functionality of available technology. 
The growth of immersive recording technologies has led to video-based rendering systems for photographing and reproducing environments in motion. This lends itself to new forms of interactivity for the viewer, including the ability to explore a photographic scene and interact with its features. The three-dimensional (3-D) qualities of objects in the scene can be extracted by analysis techniques and displayed by the use of stereo vision. The data types and image bandwidth needed for this type of media experience may require especially efficient formats for representation, coding, and transmission. An overview is presented here of the MPEG activity exploring the need for standardization in this area to support these new applications, under the name of 3DAV (for 3-D audio-visual). As an example, a detailed solution for omnidirectional video is presented as one of the application scenarios in 3DAV. --- paper_title: Interactive 3-D Video Representation and Coding Technologies paper_content: Interactivity in the sense of being able to explore and navigate audio-visual scenes by freely choosing viewpoint and viewing direction, is an important key feature of new and emerging audio-visual media. This paper gives an overview of suitable technology for such applications, with a focus on international standards, which are beneficial for consumers, service providers, and manufacturers. We first give a general classification and overview of interactive scene representation formats as commonly used in computer graphics literature. Then, we describe popular standard formats for interactive three-dimensional (3-D) scene representation and creation of virtual environments, the virtual reality modeling language (VRML), and the MPEG-4 BInary Format for Scenes (BIFS) with some examples. Recent extensions to MPEG-4 BIFS, the Animation Framework eXtension (AFX), providing advanced computer graphics tools, are explained and illustrated. New technologies mainly targeted at reconstruction, modeling, and representation of dynamic real world scenes are further studied. The user shall be able to navigate photorealistic scenes within certain restrictions, which can be roughly defined as 3-D video. Omnidirectional video is an extension of the planar two-dimensional (2-D) image plane to a spherical or cylindrical image plane. Any 2-D view in any direction can be rendered from this overall recording to give the user the impression of looking around. In interactive stereo two views, one for each eye, are synthesized to provide the user with an adequate depth cue of the observed scene. Head motion parallax viewing can be supported in a certain operating range if sufficient depth or disparity data are delivered with the video data. In free viewpoint video, a dynamic scene is captured by a number of cameras. The input data are transformed into a special data representation that enables interactive navigation through the dynamic scene environment. --- paper_title: An introduction to the MPEG-4 animation framework eXtension paper_content: This paper presents the MPEG-4 Animation Framework eXtension (AFX) standard, ISO/IEC 14496-16. Initiated by the MPEG Synthetic/Natural Hybrid Coding group in 2000, MPEG-4 AFX proposes an advanced framework for interactive multimedia applications using both natural and synthetic objects. Following this model, new synthetic objects have been specified, increasing content realism over existing MPEG-4 synthetic objects. 
The general overview of MPEG-4 AFX is provided on top of the review of MPEG-4 standards to explain the relationship between MPEG-4 and MPEG-4 AFX. Then we give a bird's-eye view of new tools available in this standard. ---
Title: Scene Representation Technologies for 3DTV—A Survey Section 1: INTRODUCTION Description 1: Provide an overview of the 3DTV system, its inputs, and the essential role of 3-D scene representation. Discuss the various requirements for scene representation and introduce the organization of the paper. Section 2: DENSE DEPTH REPRESENTATIONS Description 2: Discuss the fundamentals and key methods for dense depth representations, including layered depth images, depth image-based rendering, and rate-distortion optimal dense depth representations. Section 3: SURFACE-BASED REPRESENTATIONS Description 3: Provide a comparative survey of techniques for representing 3-D surface geometry, including polygonal meshes, NURBS, subdivision surfaces, and point-based modeling. Section 4: VOLUMETRIC REPRESENTATIONS Description 4: Explain the notion of volumetric representations and the associated data structures, such as voxels and octrees. Discuss their applications in multiview stereo and space-carving approaches. Section 5: TEXTURE REPRESENTATIONS Description 5: Examine the importance of texturing for realistic scene rendering in 3DTV, detailing single texture representation and multitexture representation methods. Section 6: PSEUDO-3-D REPRESENTATIONS Description 6: Describe image-based representations that avoid explicit 3-D geometry, such as image interpolation, image warping, and light field representations. Section 7: OBJECT-BASED 3-D SCENE MODELING Description 7: Discuss the three basic tasks involved in dynamic 3-D scene modeling: representation, animation, and rendering. Highlight techniques for outdoor environments, complex phenomena, and natural objects. Section 8: HEAD AND BODY SPECIFIC REPRESENTATIONS Description 8: Focus on 3-D representation techniques for human faces and bodies, detailing methods for modeling the skeleton, body appearance, and facial animation. Section 9: STANDARDS FOR 3-D SCENE REPRESENTATIONS Description 9: Review the importance of interoperability and standards for 3-D scene representations, covering ISO standards, VRML, X3D, and MPEG-4.
A Survey of Body Sensor Networks
12
--- paper_title: Body Area Networks: A Survey paper_content: Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications. --- paper_title: Wireless Non-contact EEG/ECG Electrodes for Body Sensor Networks paper_content: A wireless EEG/ECG system using non-contact sensors is presented. The system consists of a set of simple capacitive electrodes manufactured on a standard printed circuit board that can operate through fabric or other insulation. Each electrode provides 46dB of gain over a .7-100Hz bandwidth with a noise level of 3.80uV RMS for high quality brain and cardiac recordings. Signals are digitized directly on top of the electrode and transmitted in a digital serial daisy chain, minimizing the number of wires required on the body. A small wireless base unit transmits EEG/ECG telemetry to a computer for storage and processing. --- paper_title: EcoIMU: A Dual Triaxial-Accelerometer Inertial Measurement Unit for Wearable Applications paper_content: This paper describes EcoIMU, a gyro-free, wearable inertial measurement unit (IMU) built with a pair of triaxial accelerometers that are spatially separated and wirelessly connected on the body. It can output the translation and rotation of the body for the purpose of motion tracking and dead reckoning applications. It mitigates error accumulation and drift problems using domain knowledge including pause identification and geometric constraints on the two nodes. Experimental results show EcoIMU to have less drift error while consuming significantly lower power than comparable IMUs built with MEMS gyroscopes and accelerometers. --- paper_title: An active electrode for biopotential recording from small localized bio-sources paper_content: BackgroundLaser bio-stimulation is a well-established procedure in Medical Acupuncture. Nevertheless there is still a confusion as to whether it works or the effect is just placebo. Although a plethora of scientific papers published, showing positive clinical results, there is still a lack of objective scientific proofs about the bio-stimulation effect of lasers used in Acupuncture. The objective of this work was to design and build a body surface electrode and an amplifier for biopotential recording from acupuncture points, considered here as small localized bio-sources (SLB). 
The design is aimed for studying SLB potentials provoked by laser stimulus, in search for objective proofs of the bio-stimulation effect of lasers used in Medical Acupuncture.MethodsThe active electrode presented features a new adjustable anchoring system and fractionation of the biopotential amplifier between the electrode and the cabinet's location. The new adjustable electrode anchoring system is designed to reduce the electrode-skin contact impedance, its variation and motion artifacts. That is achieved by increasing the electrode-skin tension and decreasing its relative movement. Additionally the sensing element provides local constant skin stretching thus eliminating the contribution of the skin potential artifact. The electrode is attached to the skin by a double-sided adhesive pad, where the sensing element is a stainless steel, 4 mm in diameter. The fractionation of the biopotential amplifier is done by incorporating the amplifier's front-end op-amps at the electrodes, thus avoiding the use of extra buffers. The biopotential amplifier features two selectable modes of operation: semi-AC-mode with a -3 dB bandwidth of 0.32–1000 Hz and AC-mode with a bandwidth of 0.16–1000 Hz.ResultsThe average measured DC electrode-skin contact impedance of the proposed electrode was 450 kΩ, with electrode tension of 0.3 kg/cm2 on an unprepared skin of the inner forearm. The peak-to-peak noise voltage measured at the amplifier output, with input terminals connected to common, was 10 mVp-p, or 2 μVp-p referred to the input. The common-mode rejection ratio of the amplifier was 96 dB at 50 Hz, measured with imbalanced electrodes' impedances. The prototype was also tested practically and sample records were obtained after a low intensity SLB laser stimulation. All measurements showed almost a complete absence of 50 Hz interference, although no electrolyte gel or skin preparation was applied.ConclusionThe results showed that the new active electrode presented significantly reduced the electrode-skin impedance, its variation and motion artifact influences. This allowed SLB signals with relatively high quality to be recorded without skin preparation. The design offers low noise and major reduction in parts, size and power consumption. The active electrode specifications were found to be better or at least comparable to those of other existing designs. --- paper_title: Recognition of hand movements using wearable accelerometers paper_content: Accelerometer based activity recognition systems have typically focused on recognizing simple ambulatory activities of daily life, such as walking, sitting, standing, climbing stairs, etc. In this work, we developed and evaluated algorithms for detecting and recognizing short duration hand movements (lift to mouth, scoop, stir, pour, unscrew cap). These actions are a part of the larger and complex Instrumental Activities of Daily Life (IADL) making a drink and drinking. We collected data using small wireless tri-axial accelerometers worn simultaneously on different parts of the hand. Acceleration data for training was collected from 5 subjects, who also performed the two IADLs without being given specific instructions on how to complete them. Feature vectors (mean, variance, correlation, spectral entropy and spectral energy) were calculated and tested on three classifiers (AdaBoost, HMM, k-NN). AdaBoost showed the best performance, with an overall accuracy of 86% for detecting each of these hand actions. 
The results show that although some actions are recognized well with the generalized classifer trained on the subject-independent data, other actions require some amount of subject-specific training. We also observed an improvement in the performance of the system when multiple accelerometers placed on the right hand were used. --- paper_title: Power and Area Efficient Wavelet-Based On-chip ECG Processor for WBAN paper_content: This paper proposes a power and area efficient electrocardiogram (ECG) signal processing application specific integrated circuits (ASIC) for wireless body area networks (WBAN). This signal processing ASIC can accurately detect the QRS peak with high frequency noise suppression. The proposed ECG signal processor is implemented in 0.18μm CMOS technology. It occupies only 1.2mm2 in area and 9μW in power consumption. Therefore, this ECG processor is convenient for long-term monitoring of cardio-vascular condition of patients, and is very suitable for on-body WBAN applications. --- paper_title: Sensor Placement for Activity Detection Using Wearable Accelerometers paper_content: Activities of daily living are important for assessing changes in physical and behavioural profiles of the general population over time, particularly for the elderly and patients with chronic diseases. Although accelerometers are widely integrated with wearable sensors for activity classification, the positioning of the sensors and the selection of relevant features for different activity groups still pose interesting research challenges. This paper investigates wearable sensor placement at different body positions and aims to provide a framework that can answer the following questions: (i) What is the ideal sensor location for a given group of activities? (ii) Of the different time-frequency features that can be extracted from wearable accelerometers, which ones are most relevant for discriminating different activity types? --- paper_title: Clubfoot Pattern Recognition towards Personalized Insole Design paper_content: personalized insole design is a novel approach for better quality of daily life. In this study we developed a low-cost foot pressure measurement system elaborated for primary care and community hospitals, subsequently the feature extraction and pattern recognition were carried out in order to assist the clubfoot diagnosis. The original data were obtained from 20 adults with normal feet and 30 patients with diagnosed clubfeet. Features such as peak pressure and regional contact area were deduced from 10 anatomically significant areas. It was indicated that, comparing with normal feet, flat feet exhibited larger contact area in midfoot (p --- paper_title: Respiratory Rate and Flow Waveform Estimation from Tri-axial Accelerometer Data paper_content: There is a strong medical need for continuous, unobstrusive respiratory monitoring, and many shortcomings to existing methods. Previous work shows that MEMS accelerometers worn on the torso can measure inclination changes due to breathing, from which a respiratory rate can be obtained. There has been limited validation of these methods. The problem of practical continuous monitoring, in which patient movement disrupts the measurements and the axis of interest changes, has also not been addressed. We demonstrate a method based on tri-axial accelerometer data from a wireless sensor device, which tracks the axis of rotation and obtains angular rates of breathing motion. 
The resulting rates are validated against gyroscope measurements and show high correlation to flow rate measurements using a nasal cannula. We use a movement detection method to classify periods in which the patient is static and breathing signals can be observed accurately. Within these periods we obtain a close match to cannula measurements, for both the flow rate waveform and derived respiratory rates, over multi-hour datasets obtained from wireless sensor devices on hospital patients. We discuss future directions for improvement and potential methods for estimating absolute airflow rate and tidal volume. --- paper_title: Breathing Feedback System with Wearable Textile Sensors paper_content: Breathing exercises form an essential part of the treatment for respiratory illnesses such as cystic fibrosis. Ideally these exercises should be performed on a daily basis. This paper presents an interactive system using a wearable textile sensor to monitor breathing patterns. A graphical user interface provides visual real-time feedback to patients. The aim of the system is to encourage the correct performance of prescribed breathing exercises by monitoring the rate and the depth of breathing. The system is straight forward to use, low-cost and can be installed easily within a clinical setting or in the home. Monitoring the user with a wearable sensor gives real-time feedback to the user as they perform the exercise, allowing them to perform the exercises independently. There is also potential for remote monitoring where the user’s overall performance over time can be assessed by a clinician. --- paper_title: On distributed fault-tolerant detection in wireless sensor networks paper_content: In this paper, we consider two important problems for distributed fault-tolerant detection in wireless sensor networks: 1) how to address both the noise-related measurement error and sensor fault simultaneously in fault-tolerant detection and 2) how to choose a proper neighborhood size n for a sensor node in fault correction such that the energy could be conserved. We propose a fault-tolerant detection scheme that explicitly introduces the sensor fault probability into the optimal event detection process. We mathematically show that the optimal detection error decreases exponentially with the increase of the neighborhood size. Experiments with both Bayesian and Neyman-Pearson approaches in simulated sensor networks demonstrate that the proposed algorithm is able to achieve better detection and better balance between detection accuracy and energy usage. Our work makes it possible to perform energy-efficient fault-tolerant detection in a wireless sensor network. --- paper_title: A 20 µW contact impedance sensor for wireless body-area-network transceiver paper_content: A low power contact impedance sensor (CIS) is presented for an energy efficient wireless body-area-network (WBAN) transceiver using a human body as a transmission medium. The proposed CIS adopts the capacitive sensing technique based on the LC resonance for detecting the parasitic capacitance between the electrodes and the human body to automatically turn on or off the transceiver. The 3'b resistive sensor, combined with a reconfigurable output driver in the transmitter, is proposed to compensate the channel quality degradation due to the contact impedance variation. It can reduce both the linearity and sensitivity requirements of the receiver front-end by 7 dB. 
It leads to significantly reduce the power of the LNA more than 70 % (from 2.0 mW to 0.6 mW). The proposed CIS of 1.0 mm · 1.4 mm is fabricated in 0.18 µm CMOS technology, and dissipates only 20 µW from 1.0 V. --- paper_title: A Low Power Wake-Up Circuitry Based on Dynamic Time Warping for Body Sensor Networks paper_content: Enhancing the wear ability and reducing the form factor often are among the major objectives in design of wearable platforms. Power optimization techniques will significantly reduce the form factor and/or will prolong the time intervals between recharges. In this paper, we propose an ultra low power programmable architecture based on Dynamic Time Warping specifically designed for wearable inertial sensors. The low power architecture performs the signal processing merely as fast as the production rate for the inertial sensors, and further considers the minimum bit resolution and the number of samples that are just enough to detect the movement of interest. Our results show that the power consumption for inertial based monitoring systems can be reduced by at least three orders of magnitude using our proposed architecture compared to the state-of-the-art low power microcontrollers. --- paper_title: Enzyme catalysed biofuel cells paper_content: Biofuel cells are energy conversion devices that use biocatalysts to convert chemical energy to electrical energy. Over the last decade, research in this area has intensified, especially in the area of direct electron transfer between enzymes and electrodes. This review details the fundamentals of enzymatic biofuel cells and reviews characterization techniques that can be used to evaluate and optimize biofuel cells. The review details the advantages and disadvantages of mediated and direct bioelectrocatalysis at electrodes for biofuel cells. It also compares and contrasts different enzyme immobilization techniques and different electrode structures. --- paper_title: A 65 nm Sub-$V_{t}$ Microcontroller With Integrated SRAM and Switched Capacitor DC-DC Converter paper_content: Aggressive supply voltage scaling to below the device threshold voltage provides significant energy and leakage power reduction in logic and SRAM circuits. Consequently, it is a compelling strategy for energy-constrained systems with relaxed performance requirements. However, effects of process variation become more prominent at low voltages, particularly in deeply scaled technologies. This paper presents a 65 nm system-on-a-chip which demonstrates techniques to mitigate variation, enabling sub-threshold operation down to 300 mV. A 16-bit microcontroller core is designed with a custom sub-threshold cell library and timing methodology to address output voltage failures and propagation delays in logic gates. A 128 kb SRAM employs an 8 T bit-cell to ensure read stability, and peripheral assist circuitry to allow sub-Vt reading and writing. The logic and SRAM function in the range of 300 mV to 600 mV, consume 27.2 pJ/cycle at the optimal V DD of 500 mV, and 1 muW standby power at 300 mV. To supply variable voltages at these low power levels, a switched capacitor DC-DC converter is integrated on-chip and achieves above 75% efficiency while delivering between 10 muW to 250 muW of load power. 
--- paper_title: Physical Movement Monitoring Using Body Sensor Networks: A Phonological Approach to Construct Spatial Decision Trees paper_content: Monitoring human activities using wearable sensor nodes has the potential to enable many useful applications for everyday situations. Limited computation, battery lifetime and communication bandwidth make efficient use of these platforms crucial. In this paper, we introduce a novel classification model that identifies physical movements from body-worn inertial sensors while taking collaborative nature and limited resources of the system into consideration. Our action recognition model uses a decision tree structure to minimize the number of nodes involved in classification of each action. The decision tree is constructed based on the quality of action recognition in individual nodes. A clustering technique is employed to group similar actions and measure quality of per-node identifications. We pose an optimization problem for finding a minimal set of sensor nodes contributing to the action recognition. We then prove that this problem is NP-hard and provide fast greedy algorithms to approximate the solution. Finally, we demonstrate the effectiveness of our distributed algorithm on data collected from five healthy subjects. In particular, our system achieves a 72.4% reduction in the number of active nodes while maintaining 93.3% classification accuracy. --- paper_title: Sensor Placement for Activity Detection Using Wearable Accelerometers paper_content: Activities of daily living are important for assessing changes in physical and behavioural profiles of the general population over time, particularly for the elderly and patients with chronic diseases. Although accelerometers are widely integrated with wearable sensors for activity classification, the positioning of the sensors and the selection of relevant features for different activity groups still pose interesting research challenges. This paper investigates wearable sensor placement at different body positions and aims to provide a framework that can answer the following questions: (i) What is the ideal sensor location for a given group of activities? (ii) Of the different time-frequency features that can be extracted from wearable accelerometers, which ones are most relevant for discriminating different activity types? --- paper_title: Using Multi-modal Sensing for Human Activity Modeling in the Real World paper_content: Traditionally smart environments have been understood to represent those (often physical) spaces where computation is embedded into the users’ surrounding infrastructure, buildings, homes, and workplaces. Users of this “smartness” move in and out of these spaces. Ambient intelligence assumes that users are automatically and seamlessly provided with context-aware, adaptive information, applications and even sensing – though this remains a significant challenge even when limited to these specialized, instrumented locales. Since not all environments are “smart” the experience is not a pervasive one; rather, users move between these intelligent islands of computationally enhanced space while we still aspire to achieve a more ideal anytime, anywhere experience. Two key technological trends are helping to bridge the gap between these smart environments and make the associated experience more persistent and pervasive. Smaller and more computationally sophisticated mobile devices allow sensing, communication, and services to be more directly and continuously experienced by user. 
Improved infrastructure and the availability of uninterrupted data streams, for instance location-based data, enable new services and applications to persist across environments. --- paper_title: Wireless sensor networks: a survey paper_content: This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed. --- paper_title: Multimodal analysis of body sensor network data streams for real-time healthcare paper_content: Fundamental advances in low power circuits, wireless communication, physiological sensor design, and multimedia stream processing, have led to the deployment of body sensor networks for the real-time monitoring of individual health in diverse settings. In this paper we will present a summary of the state-of-the-art in the design and development of aggregation, processing, analysis, and retrieval techniques for body sensor network data streams. In particular, we will focus on multi-modal stream analysis techniques, in distributed and resource constrained environments, for effective real-time healthcare applications. We will describe the associated research challenges ranging from designing novel applications and mining algorithms to systems issues of resource-adaptation, reliability etc., and the intersection of these. We will also present practical deployments and emerging applications of body sensor networks both in individual healthcare as well as in applications for large-scale public health tracking of communities. We will conclude with a summary of open challenges in the field. --- paper_title: Characterization of implantable biosensor membrane biofouling. paper_content: The material-tissue interaction that results from sensor implantation is one of the major obstacles in developing viable, long-term implantable biosensors. Strategies useful for the characterization and modification of sensor biocompatibility are widely scattered in the literature, and there are many peripheral studies from which useful information can be gleaned. The current paper reviews strategies suitable for addressing biofouling, one aspect of biosensor biocompatibility. Specifically, this paper addresses the effect of membrane biofouling on sensor sensitivity from the standpoint of glucose transport limitations. Part I discusses the in vivo and in vitro methods used to characterize biofouling and the effects of biofouling on sensor performance, while Part II presents techniques intended to improve biosensor biocompatibility. --- paper_title: Portable Preimpact Fall Detector With Inertial Sensors paper_content: Falls and the resulting hip fractures in the elderly are a major health and economic problem. The goal of this study was to investigate the feasibility of a portable preimpact fall detector in detecting impending falls before the body impacts on the ground. 
It was hypothesized that a single sensor with the appropriate kinematics measurements and detection algorithms, located near the body center of gravity, would be able to distinguish an in-progress and unrecoverable fall from nonfalling activities. The apparatus was tested in an array of daily nonfall activities of young (n = 10) and elderly (n = 14) subjects, and simulated fall activities of young subjects. A threshold detection method was used with the magnitude of inertial frame vertical velocity as the main variable to separate the nonfall and fall activities. The algorithm was able to detect all fall events at least 70 ms before the impact. With the threshold adapted to each individual subject, all falls were detected successfully, and no false alarms occurred. This portable preimpact fall detection apparatus will lead to the development of a new generation inflatable hip pad for preventing fall-related hip fractures. --- paper_title: Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey paper_content: This paper surveys the current research directions of activity recognition using inertial sensors, with potential application in healthcare, wellbeing and sports. The analysis of related work is organized according to the five main steps involved in the activity recognition process: preprocessing, segmentation, feature extraction, dimensionality reduction and classification. For each step, we present the main techniques utilized, their advantages and drawbacks, performance metrics and usage examples. We also discuss the research challenges, such as user behavior and technical limitations, as well as the remaining open research questions. --- paper_title: ECG-Cryptography and Authentication in Body Area Networks paper_content: Wireless body area networks (BANs) have drawn much attention from research community and industry in recent years. Multimedia healthcare services provided by BANs can be available to anyone, anywhere, and anytime seamlessly. A critical issue in BANs is how to preserve the integrity and privacy of a person's medical data over wireless environments in a resource efficient manner. This paper presents a novel key agreement scheme that allows neighboring nodes in BANs to share a common key generated by electrocardiogram (ECG) signals. The improved Jules Sudan (IJS) algorithm is proposed to set up the key agreement for the message authentication. The proposed ECG-IJS key agreement can secure data commnications over BANs in a plug-n-play manner without any key distribution overheads. Both the simulation and experimental results are presented, which demonstrate that the proposed ECG-IJS scheme can achieve better security performance in terms of serval performance metrics such as false acceptance rate (FAR) and false rejection rate (FRR) than other existing approaches. In addition, the power consumption analysis also shows that the proposed ECG-IJS scheme can achieve energy efficiency for BANs. --- paper_title: Addressing Context Awareness Techniques in Body Sensor Networks paper_content: Context awareness in Body Sensor Networks (BSNs) has the significance of associating physiological user activity and environment to the sensed signals of the user. The context information derived in a BSN can be used in pervasive healthcare monitoring for relating importance to events and specifically for accurate episode detection. 
In this paper, we address the issues of context-aware sensing in BSNs, along with a comparison of different techniques for deducing context awareness, namely, Artificial Neural Networks, Bayesian Networks, and Hidden Markov Models. --- paper_title: Real-time physical activity monitoring by data fusion in body sensor networks paper_content: A physical activity monitoring system by data fusion in body sensor networks is presented in this paper, which targets at providing body status information in real time and identifying body activities. By fusion of data collected from several accelerometer sensors placed on different parts of the body, the activities can be identified and tracked Mathematical approaches employed in the system include Kalman filter and hidden Markov model. With the proposed system architecture, these algorithms are distributed to different components of the system. The proposed system is applied to monitoring and identifying daily activities in laboratory and comparatively intensive activities in a gym room. Video-based approach is used as the benchmark to evaluate its performance. Comparative results indicate that, by using the proposed system, body status of daily activities can be estimated with good accuracy in real time, and body activity is identified with high accuracy within short system latency. --- paper_title: COORDINATED SENSING OF NETWORKED BODY SENSORS USING MARKOV DECISION PROCESSES paper_content: This article describes a Markov decision process MDP framework for coordinated sensing between correlated sensors in a body-area network. The technique is designed to extend the life of mobile continuous health-monitoring systems based on energy-constrained wearable sensors. The technique enables distributed sensors in a body-area network to adapt their sampling rates in response to changing criticality of the detected data and the limited energy reserve at each sensor node. The relationship between energy consumption, sampling rates, and utility of coordinated measurements is formulated as an MDP. This MDP is solved to generate a globally optimal policy that specifies the sampling rates for each sensor for all possible states of the system. This policy is computed offline before deployment and only the resulting policy is stored within each sensor node. We also present a method of executing the global policy without requiring continuous communication between the sensors. Each sensor node maintains a local estimate of the global state. Communication occurs only when an information-theoretic model of the uncertainty in the local state estimates exceeds a predefined threshold. We show results on simulated data that demonstrate the efficacy of this distributed-control framework and compare the performance of the proposed controller with other policies. --- paper_title: Protein Engineering for Biosensors paper_content: In Chapter 2, we introduced the basic concept of electrochemical sensors and biosensors. In this chapter, we will focus on the biological aspects of biosensors in two important regards; the first being the biological molecules involved in the molecular recognition process that gives the biosensors their specificity and sensitivity as illustrated in Figure 3.1 [1]. We will discuss how these proteins can be engineered to improve sensor performance. The second aspect is concerned with biocompatibility, which is the mutual interaction between the sensor and the tissue within which it is located. 
Although progress has been made in making implantable biosensors reliable and robust over a period of days, there are still significant technical issues associated with long-term (weeks to months) implantation. This reflects in part the response of the tissue to trauma and in part the inherent robustness of the biological molecules used in the sensor. This implies that the solution to long-term implantation will come from a combination of factors including minimally invasive implantation, understanding and modulating tissue response to implantation, and modifying the properties of the biomolecules. Molecular recognition occurring at or near the sensor surface can be transduced through a variety of different physical sensing modalities and this leads to a sensor classification shown below. --- paper_title: Information fusion for wireless sensor networks: Methods, models, and classifications paper_content: Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. The way these data are manipulated by the sensor nodes is a fundamental issue. Information fusion arises as a response to process data gathered by sensor nodes and benefits from their processing capability. By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity. In this work, we survey the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models of information fusion, and discuss their applicability in the context of wireless sensor networks. --- paper_title: Handbook of Multisensor Data Fusion paper_content: Multisensor Data Fusion D. L. Hall and J. Llinas Data Fusion Perspectives and Its Role in Information Processing O. Kessler and F. White Revisions to the JDL Data Fusion Model A. N. Steinberg and C. L. Bowman Introduction to the Algorithms of Data Association in Multiple-Target Tracking J. K. Uhlmann The Principles and Practice of Image and Spatial Data Fusion E. Waltz and T. Waltz Data Registration R. R. Brooks and L. Grewe Data Fusion Automation: A Top-Down Perspective R. Antony Overview of Distributed Decision Fusion M. E. Liggins Introduction to Particle Filtering: The Next Stage in Tracking M. E. Liggins and K-C. Chang Target Tracking Using Probabilistic Data Association-Based Techniques with Applications to Sonar, Radar, and EO Sensors T. Kirubarajan and Y. Bar-Shalom An Introduction to the Combinatorics of Optimal and Approximate Data Association J. K. Uhlmann A Bayesian Approach to Multiple-Target Tracking L. D. Stone Data Association Using Multiple-Frame Assignments A. B. Poore, S. Lu, and B. J. Suchomel General Decentralized Data Fusion with Covariance Intersection S. Julier and J. K. Uhlmann Data Fusion in Nonlinear Systems S. Julier and J. K. Uhlmann Random Set Theory for Multisource-Multitarget Information Fusion R. Mahler Distributed Fusion Architectures, Algorithms, and Performance within a Network-Centric Architecture M. E. Liggins and K-C. Chang Foundations of Situation and Threat Assessment A. N. Steinberg An Introduction to Level 5 Fusion: The Role of the User E. Blasch Perspectives on the Human Side of Data Fusion: Prospects for Improved Effectiveness using Advanced Human-Computer Interfaces D. L. Hall, C. M. Hall, and S. A. H. McMullen Requirements Derivation for Data Fusion Systems E. Waltz and D. L. 
Hall A Systems Engineering Approach for Implementing Data Fusion Systems C. L. Bowman and A. N. Steinberg Studies and Analyses within Project Correlation: An In-Depth Assessment of Correlation Problems and Solution Techniques J. Llinas, Capt. L. McConnell, C. L. Bowman, D. L. Hall, and P. Applegate Data Management Support to Tactical Data Fusion R. Antony Assessing the Performance of Multisensor Fusion Processes J. Llinas A Survey of COTS Software for Multisensor Data Fusion S. A. Hall McMullen, R. R. Sherry, and S. Miglani A Survey of Multisensor Data Fusion Systems M. L. Nichols Data Fusion for Developing Predictive Diagnostics for Electromechanical Systems C. S. Byington and A. K. Garga Adapting Data Fusion to Chemical and Biological Sensors D. C. Swanson Fusion of Ground and Satellite Data via the Army Battle Command System S. Aungst, M. Campbell, J. Kuhns, D. Beyerle, T. Bacastow, and J. Knox Developing Information Fusion Methods for Combat Identification T. M. Schuck, J. Bockett Hunter, and D. D. Wilson --- paper_title: Combination of body sensor networks and on-body signal processing algorithms: the practical case of MyHeart project paper_content: Smart clothes increase the efficiency of long-term non-invasive monitoring systems by facilitating the placement of sensors and increasing the number of measurement locations. Since the sensors are either garment-integrated or embedded in an unobtrusive way in the garment, the impact on the subject's comfort is minimized. However, the main challenge of smart clothing lies in the enhancement of signal quality and the management of the huge data volume resulting from the variable contact with the skin, movement artifacts, non-accurate location of sensors and the large number of acquired signals. This paper exposes the strategies and solutions adopted in the European 1ST project MyHeart to address these problems, from the definition of the body sensor network to the description of two embedded signal processing techniques performing on-body ECG enhancement and motion activity classification. --- paper_title: A context-aware service platform to support continuous care networks for home-based assistance paper_content: Efficient and effective treatment of chronic disease conditions requires the implementation of emerging continuous care models. These models pose several technology-oriented challenges for home-based continuous care, requiring assistance services based on collaboration among different stakeholders: health operators, patient relatives, as well as social community members. This work describes a context-aware service platform designed for improving patient quality of life by supporting care team activity, intervention and cooperation. Leveraging on an ontology-based context management middleware, the proposed architecture exploits information coming from biomedical and environmental sensing devices and from patient social context in order to automate context-aware patient case management, especially for alarm detection and management purposes. --- paper_title: Activity identification using body-mounted sensors—a review of classification techniques paper_content: With the advent of miniaturized sensing technology, which can be body-worn, it is now possible to collect and store data on different aspects of human movement under the conditions of free living. This technology has the potential to be used in automated activity profiling systems which produce a continuous record of activity patterns over extended periods of time. 
Such activity profiling systems are dependent on classification algorithms which can effectively interpret body-worn sensor data and identify different activities. This article reviews the different techniques which have been used to classify normal activities and/or identify falls from body-worn sensor data. The review is structured according to the different analytical techniques and illustrates the variety of approaches which have previously been applied in this field. Although significant progress has been made in this important area, there is still significant scope for further work, particularly in the application of advanced classification techniques to problems involving many different activities. --- paper_title: Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers paper_content: This paper presents a systematic design approach for constructing neural classifiers that are capable of classifying human activities using a triaxial accelerometer. The philosophy of our design approach is to apply a divide-and-conquer strategy that separates dynamic activities from static activities preliminarily and recognizes these two different types of activities separately. Since multilayer neural networks can generate complex discriminating surfaces for recognition problems, we adopt neural networks as the classifiers for activity recognition. An effective feature subset selection approach has been developed to determine significant feature subsets and compact classifier structures with satisfactory accuracy. Experimental results have successfully validated the effectiveness of the proposed recognition scheme. --- paper_title: Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey paper_content: This paper surveys the current research directions of activity recognition using inertial sensors, with potential application in healthcare, wellbeing and sports. The analysis of related work is organized according to the five main steps involved in the activity recognition process: preprocessing, segmentation, feature extraction, dimensionality reduction and classification. For each step, we present the main techniques utilized, their advantages and drawbacks, performance metrics and usage examples. We also discuss the research challenges, such as user behavior and technical limitations, as well as the remaining open research questions. --- paper_title: Data Compression by Temporal and Spatial Correlations in a Body-Area Sensor Network: A Case Study in Pilates Motion Recognition paper_content: We consider a body-area sensor network (BSN) consisting of multiple small, wearable sensor nodes deployed on a human body to track body motions. Concerning that human bodies are relatively small and wireless packets are subject to more serious contention and collision, this paper addresses the data compression problem in a BSN. We observe that, when body parts move, although sensor nodes in vicinity may compete strongly with each other, the transmitted data usually exist some levels of redundancy and even strong temporal and spatial correlations. Unlike traditional data compression approaches for large-scale and multihop sensor networks, our scheme is specifically designed for BSNs, where nodes are likely fully connected and overhearing among sensor nodes is possible. In our scheme, an offline phase is conducted in advance to learn the temporal and spatial correlations of sensing data. 
Then, a partial ordering of sensor nodes is determined to represent their transmission priorities so as to facilitate data compression during the online phase. We present algorithms to determine such partial ordering and discuss the design of the underlying MAC protocol to support our compression model. An experimental case study in Pilates exercises for patient rehabilitation is reported. The results show that our schemes reduce more than 70 percent of overall transmitted data compared with previous approaches. --- paper_title: A combined method to reduce motion artifact and power line interference for wearable healthcare systems paper_content: A combined method to reduce motion artifact (MA) and power line interference (PLI) for wearable healthcare system is introduced. The proposed method has a block for the reduction of MA using measured electrode-skin impedance as a reference signal and a block for the cancellation of PLI using mixed-signal feedback to relax the dynamic range requirements of components in the forward path. The measured electrode-skin impedance is used as a reference signal of an adaptive filter for reducing MA. The 60Hz notch frequency which has the same amplitude as PLI's is made digitally and uses feedback to the input stage to reduce PLI. This method was simulated using MATLAB/Simulink for high-level verification of the method's validity. --- paper_title: QRS Detection Based on Morphological Filter and Energy Envelope for Applications in Body Sensor Networks paper_content: Emerging body sensor networks (BSN) provide solutions for continuous health monitoring at anytime and from anywhere. The implementation of these monitoring solutions requires wearable sensor devices and thus creates new technology challenges in both software and hardware. This paper presents a QRS detection method for wearable Electrocardiogram (ECG) sensor in body sensor networks. The success of proposed method is based on the combination of two computationally efficient procedures, i.e., single-scale mathematical morphological (MM) filter and approximated envelope. The MM filter removes baseline wandering, impulsive noise and the offset of DC component while the approximated envelope enhances the QRS complexes. The performance of the algorithm is verified with standard MIT-BIH arrhythmia database as well as exercise ECG data. It achieves a low detection error rate of 0.42% based on the MIT-BIH database. --- paper_title: Fuzzy Soft Mathematical Morphology paper_content: A new framework which extends the concepts of soft mathematical morphology into fuzzy sets is presented. Images can be considered as arrays of fuzzy singletons on the Cartesian grid. Based on this notion the definitions for the basic fuzzy soft morphological operations are derived. Compatibility with binary soft mathematical morphology as well as the algebraic properties of fuzzy soft operations are studied. Explanation of the defined operations is also provided through several examples and experimental results. --- paper_title: An Effective QRS Detection Algorithm for Wearable ECG in Body Area Network paper_content: A novel QRS detection algorithm for wearable ECG devices and its FPGA implementation are presented in this paper. The proposed algorithm utilizes the hybrid opening- closing mathematical morphology filtering to suppress the impulsive noise and remove the baseline drift and uses modulus accumulation to enhance the signal. 
The proposed algorithm achieves an average QRS detection rate of 99.53%, a sensitivity of 99.82% and a positive prediction of 99.71% against the MIT/BIH Arrhythmia Database. It compares favorably to published methods. --- paper_title: Remote Activity Monitoring of the Elderly Using a Two-Axis Accelerometer paper_content: This paper presents a design and experimental study of a remote posture monitoring system. The monitoring system aims for applications in activity analysis of the elderly. We propose a wearable sensor system, which consists of a two-axis accelerometer and RF wireless communication modules. The acquired body motion signals from accelerometers are transmitted to a host computer via RF link for feature extraction and pattern recognition. Wavelet transform techniques are adopted for feature extraction of human body postures. The signal is decomposed into five levels and low-frequency components are extracted to obtain useful features. Pattern recognition techniques are then applied to distinguish five basic postures: upstairs, downstairs, walking, standing, and sitting. Experimental results are presented to show the effectiveness of the proposed method. --- paper_title: A QRS Complex Detection Algorithm Based on Mathematical Morphology and Envelope paper_content: QRS complex detection is very important to ECG analysis. This paper presents an algorithm for QRS complex detection based on mathematical morphology and envelope. Baseline wandering is removed from the ECG signal by a morphological method. The signal envelope is then obtained through a low-pass filter, improving the signal-to-noise ratio. The performance of the algorithm is evaluated with the MIT-BIH database. The algorithm achieves a high detection rate (99.79%) and high processing speed. --- paper_title: Power and Area Efficient Wavelet-Based On-chip ECG Processor for WBAN paper_content: This paper proposes a power- and area-efficient electrocardiogram (ECG) signal processing application-specific integrated circuit (ASIC) for wireless body area networks (WBAN). This signal processing ASIC can accurately detect the QRS peak with high-frequency noise suppression. The proposed ECG signal processor is implemented in 0.18 μm CMOS technology. It occupies only 1.2 mm² in area and consumes 9 μW of power. Therefore, this ECG processor is convenient for long-term monitoring of the cardiovascular condition of patients, and is very suitable for on-body WBAN applications. --- paper_title: Detecting and tracking dim moving point target in IR image sequence paper_content: This paper presents a novel algorithm for detecting and tracking a dim moving point target in an IR image sequence with low SNR. Original images are preprocessed using temperature non-linear elimination and the Top-hat operator, and then a composite frame is obtained by reducing the three-dimensional (3D) spatio-temporal scanning for the target to 2D spatial hunting. Finally, the target trajectory is tracked under the condition of a constant false-alarm probability (CFAR). Based on the experimental results, the algorithm can successfully detect a dim moving point target and accurately estimate its trajectory. The algorithm, insensitive to velocity mismatch and changes in the statistical distribution of background or noise, is adaptable to real-time target detection and tracking.
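Several of the QRS-detection entries above combine morphological filtering for baseline-wander removal with an energy envelope for peak enhancement. The sketch below is a minimal, generic illustration of that pipeline, not the published algorithms: the structuring-element length, smoothing window, and threshold ratio are assumed values chosen only for the toy example.

import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def remove_baseline(ecg, fs, struct_sec=0.2):
    """Suppress baseline wander with an opening-closing morphological filter.

    The structuring-element length (~0.2 s here) is an assumption; it only
    needs to be wider than the QRS complex so that opening/closing estimates
    the slowly varying baseline, which is then subtracted.
    """
    size = max(3, int(struct_sec * fs))
    baseline = grey_closing(grey_opening(ecg, size=size), size=size)
    return ecg - baseline

def detect_qrs(ecg, fs, smooth_sec=0.08, thresh_ratio=0.4):
    """Crude QRS detector: energy envelope plus a fixed-ratio threshold (illustrative)."""
    clean = remove_baseline(np.asarray(ecg, dtype=float), fs)
    energy = clean ** 2
    win = max(1, int(smooth_sec * fs))
    envelope = np.convolve(energy, np.ones(win) / win, mode="same")
    thresh = thresh_ratio * envelope.max()
    above = envelope > thresh
    # Rising edges of the thresholded envelope mark candidate QRS onsets.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

if __name__ == "__main__":
    fs = 250  # Hz, assumed sampling rate
    t = np.arange(0, 10, 1.0 / fs)
    drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)          # synthetic baseline wander
    beats = (np.mod(t, 1.0) < 0.04).astype(float)      # toy 1 Hz "QRS" spikes
    print(detect_qrs(beats + drift, fs)[:5])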
--- paper_title: Extraction of Gait Features Using a Wireless Body Sensor Network (BSN) paper_content: A body sensor network (BSN) is presented in this paper, which targets at integrating physical activity signal and physiologically related features to provide detailed body context information. Issues concerning the structure of the proposed BSN, functions and interactions between its components are stressed. The BSN has been applied to conduct gait analysis in this study. A gait tracker based on Kalman filter is proposed to estimate hip angle and swing velocity of the lower limb segment using the sensor measurements. Experimental result shows that the movement of the legs during gait cycles is well revealed with the proposed method --- paper_title: Activity identification using body-mounted sensors—a review of classification techniques paper_content: With the advent of miniaturized sensing technology, which can be body-worn, it is now possible to collect and store data on different aspects of human movement under the conditions of free living. This technology has the potential to be used in automated activity profiling systems which produce a continuous record of activity patterns over extended periods of time. Such activity profiling systems are dependent on classification algorithms which can effectively interpret body-worn sensor data and identify different activities. This article reviews the different techniques which have been used to classify normal activities and/or identify falls from body-worn sensor data. The review is structured according to the different analytical techniques and illustrates the variety of approaches which have previously been applied in this field. Although significant progress has been made in this important area, there is still significant scope for further work, particularly in the application of advanced classification techniques to problems involving many different activities. --- paper_title: An Effective QRS Detection Algorithm for Wearable ECG in Body Area Network paper_content: A novel QRS detection algorithm for wearable ECG devices and its FPGA implementation are presented in this paper. The proposed algorithm utilizes the hybrid opening- closing mathematical morphology filtering to suppress the impulsive noise and remove the baseline drift and uses modulus accumulation to enhance the signal. The proposed algorithm achieves an average QRS detection rate of 99.53%, a sensitivity of 99.82% and a positive prediction of 99.71% against the MIT/BIH Arrhythmia Database. It compares favorably to published methods. --- paper_title: Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey paper_content: This paper surveys the current research directions of activity recognition using inertial sensors, with potential application in healthcare, wellbeing and sports. The analysis of related work is organized according to the five main steps involved in the activity recognition process: preprocessing, segmentation, feature extraction, dimensionality reduction and classification. For each step, we present the main techniques utilized, their advantages and drawbacks, performance metrics and usage examples. We also discuss the research challenges, such as user behavior and technical limitations, as well as the remaining open research questions. 
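The gait-feature entry above describes a Kalman-filter tracker that estimates hip angle and swing velocity from body sensor measurements. The following sketch illustrates only the general idea with a minimal constant-velocity Kalman filter on noisy angle readings; the state model, noise covariances, and sampling rate are assumptions, not the cited design.

import numpy as np

def kalman_track_angle(angle_meas, dt, q=0.05, r=2.0):
    """Track a joint angle and its angular velocity from noisy angle readings.

    Minimal constant-velocity Kalman filter with state x = [angle, angular_rate].
    Process noise q and measurement noise r are illustrative values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # only the angle is measured
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])      # random-acceleration process noise
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in angle_meas:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new angle measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append((float(x[0, 0]), float(x[1, 0])))  # (angle, swing velocity)
    return estimates

# Hypothetical usage: 50 Hz hip-angle samples corrupted by sensor noise.
dt = 0.02
t = np.arange(0, 4, dt)
true_angle = 25 * np.sin(2 * np.pi * 0.8 * t)              # degrees
noisy = true_angle + np.random.normal(0, 2.0, t.size)
print(kalman_track_angle(noisy, dt)[-1])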
--- paper_title: Sensor Based Time Series Classification of Body Movement paper_content: Advances in sensing and monitoring technology are being incorporated into today’s healthcare practice. As a result, the concept of Body Sensor Networks (BSN) has been proposed to describe the wearable/wireless devices for healthcare applications. One of the major application scenarios for BSN is to detect and classify the body movements for long-term lifestyle and healthcare monitoring. This paper introduces a new approach for analyzing the time series obtained from BSN. In our research, the BSN record the acceleration data of the volunteer’s movement while performing a set of activities such as jogging, walking, resting, and transitional activities. The main contribution of this paper is the proposed time series approximation and feature extraction algorithm that can convert the sensor-based time series data into a density map. We have performed extensive experiments to compare the accuracy in classifying the time series into different activities. It is concluded that the proposed approach would aid greatly the development of efficient health monitoring systems in the future. To the best of our knowledge, no similar research has been reported in the BSN field and we expect our research could provide useful insights for further investigation. --- paper_title: Integration of Sensing and Feedback Components for Human Motion Replication paper_content: Replication of human body motion is a very important means to maintain a subject’s emotion, knowledge and experience. The replication process requires accurate motion capturing system with sensor technologies to measure postures of human bodies and posture transmission, as well as feedback systems to adjust postures to fit into targeted ones. The sensing and feedback technologies are fundamental building blocks of motion capturing systems, work training system and rehabilitation systems. In particular, the construction of sensing and feedback systems for dynamic postures is much more complicated than that for static postures in terms of the time evolution and non-ridge body. We believe that dynamic postures can be represented by a set of blueprint or code like trajectories of particle of human body movement. Furthermore, we derive that the sensing and feedback systems should be able to establish to directly measure those critical particles without relying on external infrastructures. In the paper, some of those kinds of sensing and feedback devices are presented and some evidences such as feature contours are obtained through analysis of captured data by those devices in order to prove our estimation. We confess that our work is preliminary for this new field, but we hope that the work presented here can lead more efforts to bring out systematic approaches of feature detection and extraction of human postures whose characteristics are different from those of video and audio. --- paper_title: Information fusion for wireless sensor networks: Methods, models, and classifications paper_content: Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. The way these data are manipulated by the sensor nodes is a fundamental issue. Information fusion arises as a response to process data gathered by sensor nodes and benefits from their processing capability. 
By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity. In this work, we survey the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models of information fusion, and discuss their applicability in the context of wireless sensor networks. --- paper_title: Stair climbing detection during daily physical activity using a miniature gyroscope paper_content: A new method of monitoring physical activity that is able to detect walking upstairs using a miniature gyroscope attached to the shank is presented. Wavelet transformation, in conjunction with a simple kinematics model, was used to detect toe-off, heel-strike and foot-flat, as well cycles corresponding to stair ascent. To evaluate the system, three studies were performed. The method was first tested on 10 healthy young volunteer subjects in a gait laboratory where an ultrasonic motion system was used as a reference system. In the second study, the system was tested on three hospitalized elderly people to classify walking upstairs from walking downstairs and flat walking. In the third study, monitoring was performed on seven patients with peripheral vascular disease for 60 min during their daily physical activity. The first study revealed a close relationship between the ambulatory and the reference systems. Compared to the reference system, the ambulatory system had an overall sensitivity and specificity of 98% and 97%, respectively. In the second study, the ambulatory system also showed a very high sensitivity (>94%) in identifying a 50 stairs ascent from walking on the flat and walking downstairs. Finally, compared with visual surveillance, we observed a relatively high accuracy in identifying 196 walking upstairs cycles through daily physical activity in the third study. Our results demonstrated a reliable technique of measuring walking upstairs during physical activity. --- paper_title: Human Postures Recognition Based on D-S Evidence Theory and Multi-sensor Data Fusion paper_content: Body Sensor Networks (BSNs) are conveying notable attention due to their capabilities in supporting humans in their daily life. In particular, real-time and noninvasive monitoring of assisted livings is having great potential in many application domains, such as health care, sport/fitness, e-entertainment, social interaction and e-factory. And the basic as well as crucial feature characterizing such systems is the ability of detecting human actions and behaviors. In this paper, a novel approach for human posture recognition is proposed. Our BSN system relies on an information fusion method based on the D-S Evidence Theory, which is applied on the accelerometer data coming from multiple wearable sensors. Experimental results demonstrate that the developed prototype system is able to achieve a recognition accuracy between 98.5% and 100% for basic postures (standing, sitting, lying, squatting). --- paper_title: Activity identification using body-mounted sensors—a review of classification techniques paper_content: With the advent of miniaturized sensing technology, which can be body-worn, it is now possible to collect and store data on different aspects of human movement under the conditions of free living. 
This technology has the potential to be used in automated activity profiling systems which produce a continuous record of activity patterns over extended periods of time. Such activity profiling systems are dependent on classification algorithms which can effectively interpret body-worn sensor data and identify different activities. This article reviews the different techniques which have been used to classify normal activities and/or identify falls from body-worn sensor data. The review is structured according to the different analytical techniques and illustrates the variety of approaches which have previously been applied in this field. Although significant progress has been made in this important area, there is still significant scope for further work, particularly in the application of advanced classification techniques to problems involving many different activities. --- paper_title: A spatio-temporal architecture for context aware sensing paper_content: Context-aware sensing is an integral part of the body sensor network (BSN) design and it allows the understanding of intrinsic characteristics of the sensed signal and determination of how BSNs should react to different events and adapt its monitoring behaviour. The purpose of this paper is to propose a novel spatio-temporal self-organising map that minimises the number of neurons involved whilst maintaining a high accuracy in class separation for both static and dynamic activities. --- paper_title: Graph-Coupled HMMs for Modeling the Spread of Infection paper_content: We develop Graph-Coupled Hidden Markov Models (GCHMMs) for modeling the spread of infectious disease locally within a social network. Unlike most previous research in epidemiology, which typically models the spread of infection at the level of entire populations, we successfully leverage mobile phone data collected from 84 people over an extended period of time to model the spread of infection on an individual level. Our model, the GCHMM, is an extension of widely-used Coupled Hidden Markov Models (CHMMs), which allow dependencies between state transitions across multiple Hidden Markov Models (HMMs), to situations in which those dependencies are captured through the structure of a graph, or to social networks that may change over time. The benefit of making infection predictions on an individual level is enormous, as it allows people to receive more personalized and relevant health advice. --- paper_title: Recognition of hand movements using wearable accelerometers paper_content: Accelerometer based activity recognition systems have typically focused on recognizing simple ambulatory activities of daily life, such as walking, sitting, standing, climbing stairs, etc. In this work, we developed and evaluated algorithms for detecting and recognizing short duration hand movements (lift to mouth, scoop, stir, pour, unscrew cap). These actions are a part of the larger and complex Instrumental Activities of Daily Life (IADL) making a drink and drinking. We collected data using small wireless tri-axial accelerometers worn simultaneously on different parts of the hand. Acceleration data for training was collected from 5 subjects, who also performed the two IADLs without being given specific instructions on how to complete them. Feature vectors (mean, variance, correlation, spectral entropy and spectral energy) were calculated and tested on three classifiers (AdaBoost, HMM, k-NN). 
AdaBoost showed the best performance, with an overall accuracy of 86% for detecting each of these hand actions. The results show that although some actions are recognized well with the generalized classifer trained on the subject-independent data, other actions require some amount of subject-specific training. We also observed an improvement in the performance of the system when multiple accelerometers placed on the right hand were used. --- paper_title: Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy paper_content: Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy. --- paper_title: Design and analysis of low-power body area networks based on biomedical signals paper_content: A body area network (BAN) as one branch of Sensor Networks, is an inter-disciplinary area which holds great promises for revolutionising the current health care systems. BAN combines the real-time updating of biomedical data with the continuous and dynamic health care monitoring closely. A number of intelligence biomedical sensors can be integrated into a wireless BAN system, and the system can be used for prevention, diagnosis and timely treatment of various medical conditions. In this article, we propose a data fusion technique for a BAN based on biomedical signals. This proposed solution is of much lower complexity than conventional techniques and hence can significantly reduce the power consumption in the BAN. The technology is carried out by removing redundant and unnecessary sample information and shifting a large portion of processing and control loads to the remote control centre in an asymmetric manner. This approach not only reduces the power consumption of biosensor nodes in a BAN, but also ens... --- paper_title: Compressive Sensing of Neural Action Potentials Using a Learned Union of Supports paper_content: Wireless neural recording systems are subject to stringent power consumption constraints to support long-term recordings and to allow for implantation inside the brain. In this paper, we propose using a combination of on-chip detection of action potentials ("spikes") and compressive sensing (CS) techniques to reduce the power consumption of the neural recording system by reducing the power required for wireless transmission. We empirically verify that spikes are compressible in the wavelet domain and show that spikes from different neurons acquired from the same electrode have subtly different sparsity patterns or supports. 
We exploit the latter fact to further enhance the sparsity by incorporating a union of these supports learned over time into the spike recovery procedure. We show, using extra cellular recordings from human subjects, that this mechanism improves the SNDR of the recovered spikes over conventional basis pursuit recovery by up to 9.5 dB (6 dB mean) for the same number of CS measurements. Though the compression ratio in our system is contingent on the spike rate at the electrode, for the datasets considered here, the mean ratio achieved for 20-dB SNDR recovery is improved from 26:1 to 43:1 using the learned union of supports. --- paper_title: Wireless sensor networks: a survey paper_content: This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed. --- paper_title: Compressed data aggregation for energy efficient wireless sensor networks paper_content: As a burgeoning technique for signal processing, compressed sensing (CS) is being increasingly applied to wireless communications. However, little work is done to apply CS to multihop networking scenarios. In this paper, we investigate the application of CS to data collection in wireless sensor networks, and we aim at minimizing the network energy consumption through joint routing and compressed aggregation. We first characterize the optimal solution to this optimization problem, then we prove its NP-completeness. We further propose a mixed-integer programming formulation along with a greedy heuristic, from which both the optimal (for small scale problems) and the near-optimal (for large scale problems) aggregation trees are obtained. Our results validate the efficacy of the greedy heuristics, as well as the great improvement in energy efficiency through our joint routing and aggregation scheme. --- paper_title: Data Compression by Temporal and Spatial Correlations in a Body-Area Sensor Network: A Case Study in Pilates Motion Recognition paper_content: We consider a body-area sensor network (BSN) consisting of multiple small, wearable sensor nodes deployed on a human body to track body motions. Concerning that human bodies are relatively small and wireless packets are subject to more serious contention and collision, this paper addresses the data compression problem in a BSN. We observe that, when body parts move, although sensor nodes in vicinity may compete strongly with each other, the transmitted data usually exist some levels of redundancy and even strong temporal and spatial correlations. Unlike traditional data compression approaches for large-scale and multihop sensor networks, our scheme is specifically designed for BSNs, where nodes are likely fully connected and overhearing among sensor nodes is possible. In our scheme, an offline phase is conducted in advance to learn the temporal and spatial correlations of sensing data. Then, a partial ordering of sensor nodes is determined to represent their transmission priorities so as to facilitate data compression during the online phase. 
We present algorithms to determine such partial ordering and discuss the design of the underlying MAC protocol to support our compression model. An experimental case study in Pilates exercises for patient rehabilitation is reported. The results show that our schemes reduce more than 70 percent of overall transmitted data compared with previous approaches. --- paper_title: Wavelet-based low-delay ECG compression algorithm for continuous ECG transmission paper_content: The delay performance of compression algorithms is particularly important when time-critical data transmission is required. In this paper, we propose a wavelet-based electrocardiogram (ECG) compression algorithm with a low delay property for instantaneous, continuous ECG transmission suitable for telecardiology applications over a wireless network. The proposed algorithm reduces the frame size as much as possible to achieve a low delay, while maintaining reconstructed signal quality. To attain both low delay and high quality, it employs waveform partitioning, adaptive frame size adjustment, wavelet compression, flexible bit allocation, and header compression. The performances of the proposed algorithm in terms of reconstructed signal quality, processing delay, and error resilience were evaluated using the Massachusetts Institute of Technology University and Beth Israel Hospital (MIT-BIH) and Creighton University Ventricular Tachyarrhythmia (CU) databases and a code division multiple access-based simulation model with mobile channel noise --- paper_title: Probabilistic routing in on-body sensor networks with postural disconnections paper_content: This paper presents a novel store-and-forward packet routing algorithm for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. A probabilistic packet routing protocol is then developed using a stochastic link cost, reflecting the body postural trends. The performance of the proposed protocol is evaluated experimentally and via simulation, and is compared with a generic probabilistic routing protocol and a specialized on-body packet flooding mechanism that provides the routing delay lower-bounds. It is shown that via successful modeling of the spatio-temporal locality of link disconnection patterns, the proposed algorithm can provide better routing delay performance compared to the existing probabilistic routing protocols in the literature. --- paper_title: Joint active queue management and congestion control protocol for healthcare applications in wireless body sensor networks paper_content: Wireless Body Sensor Network (WBSN) consists of a large number of distributed sensor nodes. Wireless sensor networks are offering the next evolution in biometrics and healthcare monitoring applications. The present paper proposes a congestion control protocol based on the learning automata which prevents the congestion by controlling the source rate. Furthermore, a new active queue management mechanism is developed. The main objective of the proposed active queue management mechanism is to control and manage the entry of each packet to sensor nodes based on learning automata. The proposed system is able to discriminate different physiological signals and assign them different priorities. 
Thus, it would be possible to provide better quality of service for transmitting highly important vital signs. The simulation results confirm that the proposed protocol improves system throughput and reduces delay and packet dropping. --- paper_title: Using relay network to increase life time in wireless body area sensor networks paper_content: Wireless body area sensor networks will revolutionize health care services by remote, continuous and non-invasive monitoring. Body area sensor networks (BASN) should monitor various physiological parameters of a person for a long period of time. Thus, efficient energy usage in sensor nodes is essential in order to provide a long life time for the network. This paper investigates the effect of adding a relay network to the network of body sensors to reduce energy consumption of sensor nodes when transmitting data to the sink. --- paper_title: Probabilistic routing in on-body sensor networks with postural disconnections paper_content: This paper presents a novel store-and-forward packet routing algorithm for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. A probabilistic packet routing protocol is then developed using a stochastic link cost, reflecting the body postural trends. The performance of the proposed protocol is evaluated experimentally and via simulation, and is compared with a generic probabilistic routing protocol and a specialized on-body packet flooding mechanism that provides the routing delay lower-bounds. It is shown that via successful modeling of the spatio-temporal locality of link disconnection patterns, the proposed algorithm can provide better routing delay performance compared to the existing probabilistic routing protocols in the literature. --- paper_title: A 5.2 mW Self-Configured Wearable Body Sensor Network Controller and a 12 $\mu$ W Wirelessly Powered Sensor for a Continuous Health Monitoring System paper_content: A self-configured body sensor network controller and a high efficiency wirelessly powered sensor are presented for a wearable, continuous health monitoring system. The sensor chip harvests its power from the surrounding health monitoring band using an Adaptive Threshold Rectifier (ATR) with 54.9% efficiency, and it consumes 12 μW to implement an electrocardiogram (ECG) analog front-end and an ADC. The ATR is implemented with a standard CMOS process for low cost. The adhesive bandage type sensor patch is composed of the sensor chip, a Planar-Fashionable Circuit Board (P-FCB) inductor, and a pair of dry P-FCB electrodes. The dry P-FCB electrodes enable long term monitoring without skin irritation. The network controller automatically locates the sensor position, configures the sensor type (self-configuration), wirelessly provides power to the configured sensors, and transacts data with only the selected sensors while dissipating 5.2 mW at a single 1.8 V supply. Both the sensor and the health monitoring band are implemented using P-FCB for enhanced wearability and for lower production cost. The sensor chip and the network controller chip occupy 4.8 mm 2 and 15.0 mm 2 , respectively, including pads, in standard 0.18 μm 1P6M CMOS technology. 
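The queue-management entry above assigns different priorities to different physiological signals so that critical vital signs are served first. The toy sketch below illustrates only that priority-dropping idea with a small in-memory queue; the signal-to-priority map, capacity, and drop policy are assumptions, and the learning-automata rate control of the cited protocol is not modeled.

import heapq
import itertools

# Illustrative (assumed) priority map: lower number = more critical signal.
SIGNAL_PRIORITY = {"ecg": 0, "spo2": 1, "temperature": 2, "accelerometer": 3}

class PriorityPacketQueue:
    """Toy queue that drops the least critical packet when full."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self._heap = []                 # (priority, seq, packet)
        self._seq = itertools.count()   # FIFO tie-break within a priority

    def enqueue(self, signal_type, payload):
        prio = SIGNAL_PRIORITY.get(signal_type, len(SIGNAL_PRIORITY))
        heapq.heappush(self._heap, (prio, next(self._seq), (signal_type, payload)))
        if len(self._heap) > self.capacity:
            # Drop the entry with the worst (largest) priority value.
            victim = max(self._heap)
            self._heap.remove(victim)
            heapq.heapify(self._heap)
            return victim[2]            # report what was dropped
        return None

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PriorityPacketQueue(capacity=3)
for pkt in [("accelerometer", 1), ("ecg", 2), ("temperature", 3), ("ecg", 4)]:
    dropped = q.enqueue(*pkt)
    if dropped:
        print("dropped low-priority packet:", dropped)
print("forwarded first:", q.dequeue())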
--- paper_title: Ultra-low Power Sensors with Near Field Communication for Mobile Applications paper_content: In this paper, we study the applicability of an emerging RFID-based communication technology, NFC (Near Field Communication), to ultra-low power wireless sensors. We present potential application examples of passive and semi-passive NFC-enabled sensors. We compare their NFC-based implementations to corresponding implementations based on short-range radios such as Bluetooth and Wibree. The comparison addresses both technical properties and usability. We also introduce Smart NFC Interface, which is our general-purpose prototype platform for rapid prototyping of NFC and Bluetooth implementations. Two pilot sensor implementations and an NFC-Bluetooth gateway implementation based on our platform are presented and evaluated. Finally, needs and possibilities for technical improvements of available NFC technology are discussed. --- paper_title: Temperature-aware routing for telemedicine applications in embedded biomedical sensor networks paper_content: Biomedical sensors, called invivo sensors, are implanted in human bodies, and cause some harmful effects on surrounding body tissues. Particularly, temperature rise of the invivo sensors is dangerous for surrounding tissues, and a high temperature may damage them from a long term monitoring. In this paper, we propose a thermal-aware routing algorithm, called least total-route-temperature (LTRT) protocol, in which nodes temperatures are converted into graph weights, and minimum temperature routes are obtained. Furthermore, we provide an extensive simulation evaluation for comparing several other related schemes. Simulation results show the advantages of the proposed scheme. --- paper_title: A Distributed Trust Evaluation Model and Its Application Scenarios for Medical Sensor Networks paper_content: The development of medical sensor networks (MSNs) is imperative for e-healthcare, but security remains a formidable challenge yet to be resolved. Traditional cryptographic mechanisms do not suffice given the unique characteristics of MSNs, and the fact that MSNs are susceptible to a variety of node misbehaviors. In such situations, the security and performance of MSNs depend on the cooperative and trust nature of the distributed nodes, and it is important for each node to evaluate the trustworthiness of other nodes. In this paper, we identify the unique features of MSNs and introduce relevant node behaviors, such as transmission rate and leaving time, into trust evaluation to detect malicious nodes. We then propose an application-independent and distributed trust evaluation model for MSNs. The trust management is carried out through the use of simple cryptographic techniques. Simulation results demonstrate that the proposed model can be used to effectively identify malicious behaviors and thereby exclude malicious nodes. This paper also reports the experimental results of the Collection Tree Protocol with the addition of our proposed model in a network of TelosB motes, which show that the network performance can be significantly improved in practice. Further, some suggestions are given on how to employ such a trust evaluation model in some application scenarios. --- paper_title: A Wireless-Interface SoC Powered by Energy Harvesting for Short-range Data Communication paper_content: This paper describes the design and estimation of a wireless-interface SoC for wireless battery-less mouse with short-range data communication capability. 
It comprises an RF transmitter and microcontroller. An SoC, which is powered by an electric generator that exploits gyration energy by dragging the mouse, was fabricated using the TSMC 0.18-μm CMOS process. Features of the SoC are the adoption of a simple FSK modulation scheme, a single-end configuration on the RF transmitter, and the specific microcontroller design for mouse operation. We verified that the RF transmitter can perform data communication with a 1-m range at 2.17 mW, and the microcontroller consumes 0.03 mW at 1 MHz, which shows that the total power consumption in the SoC is 2.2 mW. This is sufficiently low for the SoC to operate with energy harvesting. --- paper_title: Plug 'n Play Simplicity for Wireless Medical Body Sensors paper_content: Wireless medical body sensors are a key technology for unobtrusive health monitoring. The easy setup of such wireless body area networks is crucial to protect the user from the complexity of these systems. But automatically forming a wireless network comprising all sensors attached to the same body is challenging. We present a method for making wireless body-worn medical sensors aware of the persons they belong to by combining body-coupled communication with wireless communication. This enables a user to create a wireless body sensor network by just sticking the sensors to her body. A personal identifier allows sensors to annotate their readings with a user ID, thereby ensuring safety in personal healthcare environments with multiple users. --- paper_title: ReTrust: Attack-Resistant and Lightweight Trust Management for Medical Sensor Networks paper_content: Wireless medical sensor networks (MSNs) enable ubiquitous health monitoring of users during their everyday lives, at health sites, without restricting their freedom. Establishing trust among distributed network entities has been recognized as a powerful tool to improve the security and performance of distributed networks such as mobile ad hoc networks and sensor networks. However, most existing trust systems are not well suited for MSNs due to the unique operational and security requirements of MSNs. Moreover, similar to most security schemes, trust management methods themselves can be vulnerable to attacks. Unfortunately, this issue is often ignored in existing trust systems. In this paper, we identify the security and performance challenges facing a sensor network for wireless medical monitoring and suggest it should follow a two-tier architecture. Based on such an architecture, we develop an attack-resistant and lightweight trust management scheme named ReTrust. This paper also reports the experimental results of the Collection Tree Protocol using our proposed system in a network of TelosB motes, which show that ReTrust not only can efficiently detect malicious/faulty behaviors, but can also significantly improve the network performance in practice. --- paper_title: Accelerometer-based fall detection using optimized ZigBee data streaming paper_content: Wireless body-area sensor networks (WBSNs) are key components of e-health solutions. Wearable wireless sensors can monitor and collect many different physiological parameters accurately, economically and efficiently. In this work we focus on WBSN for fall detection applications, where the real-time nature of I/O data streams is of critical importance. Additionally, the on-node generation of alarms promises to maximize system lifetime. Throughput and energy efficiency of the communication protocol must also be carefully optimized.
In this article we investigate ZigBee's ability to meet WBSN requirements, with higher communication efficiency and lower power consumption than a Bluetooth serial port profile (SPP) based solution. As a case study we implemented an accelerometer-based fall detection algorithm, able to detect eight different fall typologies by means of a single sensor worn on the subjects' waist. This algorithm has a low computational complexity and can be processed on an embedded platform. Fall simulations were performed by three voluntary subjects. Preliminary results are promising and show excellent values for both sensitivity and specificity. This case study showed how a ZigBee-based network can be used for high throughput WBSN scenarios. --- paper_title: ECG-Cryptography and Authentication in Body Area Networks paper_content: Wireless body area networks (BANs) have drawn much attention from research community and industry in recent years. Multimedia healthcare services provided by BANs can be available to anyone, anywhere, and anytime seamlessly. A critical issue in BANs is how to preserve the integrity and privacy of a person's medical data over wireless environments in a resource efficient manner. This paper presents a novel key agreement scheme that allows neighboring nodes in BANs to share a common key generated by electrocardiogram (ECG) signals. The improved Jules Sudan (IJS) algorithm is proposed to set up the key agreement for the message authentication. The proposed ECG-IJS key agreement can secure data commnications over BANs in a plug-n-play manner without any key distribution overheads. Both the simulation and experimental results are presented, which demonstrate that the proposed ECG-IJS scheme can achieve better security performance in terms of serval performance metrics such as false acceptance rate (FAR) and false rejection rate (FRR) than other existing approaches. In addition, the power consumption analysis also shows that the proposed ECG-IJS scheme can achieve energy efficiency for BANs. --- paper_title: Investigating network architectures for body sensor networks paper_content: The choice of network architecture for body sensor networks is an important one because it significantly affects overall system design and performance. Current approaches use propagation models or specific medium access control protocols to study architectural choices. The issue with the first approach is that the models do not capture the effects of interference and fading. Further, the question of architecture can be raised without imposing a specific MAC protocol. In this paper, we first evaluate the star and multihop network topologies against design goals, such as power and delay efficiency. We then design experiments to investigate the behavior of electromagnetic propagation at 2.4 GHz through and around the human body. Along the way, we develop a novel visualization tool to aid in summarizing information across all pairs of nodes, thus providing a way to discern patterns in large data sets visually. Our results suggest that while a star architecture with nodes operating at low power levels might suffice in a cluttered indoor environment, nodes in an outdoor setting will have to operate at higher power levels or change to a multihop architecture to support acceptable packet delivery ratios. Through simple analysis, the potential increase in packet delivery ratio by switching to a multihop architecture is evaluated. 
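The star-versus-multihop trade-off discussed above can be illustrated with a back-of-the-envelope calculation: a rough estimate of the energy spent per successfully delivered packet for a direct link versus a two-hop chain. This is a simplified model (independent links, transmit energy only, no reception or MAC overhead), and every number in it is hypothetical rather than taken from the cited measurements.

```python
def delivered_fraction(per_link_pdr, retries=0):
    """Probability a packet survives one link, allowing a fixed number of retries."""
    loss = (1.0 - per_link_pdr) ** (retries + 1)
    return 1.0 - loss

def end_to_end(pdrs, retries=0):
    """End-to-end delivery probability over a chain of independent links."""
    p = 1.0
    for link_pdr in pdrs:
        p *= delivered_fraction(link_pdr, retries)
    return p

def energy_per_delivered(tx_energy_per_hop, pdrs, retries=0):
    """Rough energy cost per *delivered* packet: total hop energy / end-to-end PDR."""
    return sum(tx_energy_per_hop) / end_to_end(pdrs, retries)

# hypothetical numbers, only to illustrate the trade-off discussed above
star     = energy_per_delivered([3.0],      [0.70])        # one long, lossy link at high power
two_hop  = energy_per_delivered([1.0, 1.0], [0.95, 0.95])  # two short, reliable links at low power
print(f"star: {star:.2f} units/packet, two-hop: {two_hop:.2f} units/packet")
```

Under these assumptions the two-hop chain delivers packets at roughly half the energy cost of the lossy direct link, which mirrors the qualitative conclusion that high-loss settings may favour higher transmit power or a switch to multihop operation.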
--- paper_title: Heartbeat driven medium access control for body sensor networks paper_content: H-MAC is a novel Time Division Multiple Access (TDMA) based MAC protocol designed for Body Sensor Networks (BSNs). It improves energy efficiency by exploiting human heartbeat rhythm information to perform time synchronization for TDMA. Heartbeat rhythm is inherent in every human body and can be detected in a variety of biosignals. Therefore, biosensors in BSNs can extract the heartbeat rhythm from their sensory data. Moreover, all the rhythms represented by peak sequences are naturally synchronized since they are driven by the same source, the heartbeat. By following the rhythm, wireless biosensors can achieve time synchronization without having to turn on their radio to receive periodic timing information from a central controller, so that energy cost for time synchronization can be completely avoided and the lifetime of network can be prolonged. An active synchronization recovery scheme is also developed, in which two resynchronization procedures are implemented. The algorithms are verified using real world data from MIT-BIH multi-parameter database MIMIC. --- paper_title: PNP-MAC: Preemptive Slot Allocation and Non-Preemptive Transmission for Providing QoS in Body Area Networks paper_content: One of the most important and yet most challenging issues in Body Area Networks (BANs) is to provide diverse Quality of Service (QoS). Most physiological data monitoring applications require low rate periodic reporting while real-time entertainment applications require high rate continuous streaming. Emergency alarm, the most time-critical but unpredictable data, must be delivered instantaneously. A BAN should satisfy these diverse requirements since applications each with distinctive QoS requirement may run simultaneously. We propose PNP-MAC protocol that can flexibly handle variety of applications with diverse requirements through fast, preemptive slot allocation, non-preemptive transmission in the allocated slots, and flexible superframe adjustments. Performance evaluation using OPNET network simulator shows that PNP-MAC can satisfy diverse delay and throughput requirements of various applications such as continuous streaming, routine periodic monitoring, and time-critical emergency alarm. --- paper_title: An ultra-low-power medium access control protocol for body sensor network paper_content: In this paper, BSN-MAC, a medium access control (MAC) protocol designed for Body Sensor Networks (BSNs) is proposed. Due to the traffic coupling and sensor diversity characteristics of BSNs, common MAC protocols can not satisfy the unique requirements of the biomedical sensors in BSNs. BSN-MAC exploits the feedback information from the deployed sensors to form a closed-loop control of the MAC parameters. A control algorithm is proposed to enable the BSN coordinator to adjust parameters of the IEEE 802.15.4 superframe to achieve both energy efficiency and low latency on energy critical nodes. We evaluate the performance of BSN-MAC by comparing it with the IEEE 802.15.4 MAC protocol using energy efficiency as the primary metric. --- paper_title: Energy-Efficiency Analysis of a Distributed Queuing Medium Access Control Protocol for Biomedical Wireless Sensor Networks in Saturation Conditions paper_content: The aging population and the high quality of life expectations in our society lead to the need of more efficient and affordable healthcare solutions. 
For this reason, this paper aims for the optimization of Medium Access Control (MAC) protocols for biomedical wireless sensor networks or wireless Body Sensor Networks (BSNs). The schemes presented here focus on the efficient management of channel resources and the overall minimization of sensors' energy consumption in order to prolong battery life. The fact that the IEEE 802.15.4 MAC does not fully satisfy BSN requirements highlights the need for the design of new scalable MAC solutions, which guarantee low-power consumption to the maximum number of body sensors in high density areas (i.e., in saturation conditions). In order to emphasize IEEE 802.15.4 MAC limitations, this article presents a detailed overview of this de facto standard for Wireless Sensor Networks (WSNs), which serves as a link to the introduction and initial description of the proposed Distributed Queuing (DQ) MAC protocol for BSN scenarios. Within this framework, an extensive DQ MAC energy-consumption analysis in saturation conditions is presented to be able to evaluate its performance in relation to the IEEE 802.15.4 MAC in highly dense BSNs. The obtained results show that the proposed scheme outperforms the IEEE 802.15.4 MAC in average energy consumption per information bit, thus providing a better overall performance that scales appropriately to BSNs under high traffic conditions. These benefits are obtained by eliminating back-off periods and collisions in data packet transmissions, while minimizing the control overhead. --- paper_title: An Attempt to Model the Human Body as a Communication Channel paper_content: Using the human body as a transmission medium for electrical signals offers novel data communication in biomedical monitoring systems. In this paper, galvanic coupling is presented as a promising approach for wireless intra-body communication between on-body sensors. The human body is characterized as a transmission medium for electrical current by means of numerical simulations and measurements. Properties of dedicated tissue layers and geometrical body variations are investigated, and different electrodes are compared. The new intra-body communication technology has shown its feasibility in clinical trials. Excellent transmission was achieved between locations on the thorax with a typical signal-to-noise ratio (SNR) of 20 dB, while the attenuation increased along the extremities. --- paper_title: Characterization of ultra-wide bandwidth wireless indoor channels: a communication-theoretic view paper_content: An ultra-wide bandwidth (UWB) signal propagation experiment is performed in a typical modern laboratory/office building. The bandwidth of the signal used in this experiment is in excess of 1 GHz, which results in a differential path delay resolution of less than a nanosecond, without special processing. Based on the experimental results, a characterization of the propagation channel from a communication-theoretic viewpoint is described, and its implications for the design of a UWB radio receiver are presented. Robustness of the UWB signal to multipath fading is quantified through histograms and cumulative distributions. The all-RAKE (ARAKE) receiver and the maximum-energy-capture selective RAKE (SRAKE) receiver are introduced. The ARAKE receiver serves as the best case (benchmark) for RAKE receiver design and lower-bounds the performance degradation caused by multipath. Multipath components of measured waveforms are detected using a maximum-likelihood detector.
Energy capture as a function of the number of single-path signal correlators used in UWB SRAKE receiver provides a complexity versus performance tradeoff. Bit-error-probability performance of a UWB SRAKE receiver, based on measured channels, is given as a function of the signal-to-noise ratio and the number of correlators implemented in the receiver. --- paper_title: Theoretical Limits for Estimation of Vital Signal Parameters Using Impulse Radio UWB paper_content: In this paper, Cramer-Rao lower bounds (CRLBs) for estimation of vital signal parameters, such as respiration and heart-beat rates, using ultra-wideband (UWB) pulses are derived. In addition, a simple closed-form CRLB expression is obtained for sinusoidal displacement functions under certain conditions. Moreover, a two-step suboptimal solution is proposed, which is based on time-delay estimation via matched filtering followed by least-squares (LS) estimation. It is shown that the proposed solution is asymptotically optimal in the limit of certain system parameters. Simulation studies are performed to evaluate the lower bounds and performance of the proposed solution for realistic system parameters. --- paper_title: Development and performance analysis of an intra-body communication device paper_content: Personal area networks would benefit from a wireless communication system in which a variety of information could be exchanged through wearable electronic devices and sensors. Intra-body communication using the human body as the transmission medium enables wireless communication without transmitting radio waves through the air. A human arm phantom is designed and used to reduce uncertainty in experiments with the human body. The phantom exhibits transmission characteristics similar to those of the human body at frequencies between 1 MHz and 10 MHz. A 10.7 MHz frequency modulation (FM) intra-body transmitter and receiver are developed which allow transmission of analog sine waves even in the presence of external noise. Digital data transmission at 9600 bps was also achieved using newly fabricated 10.7 MHz frequency shift keying (FSK) transmitter and receiver devices. The carrier frequency of 10.7 MHz, which is the intermediate frequency in FM radio receivers, means that a wide selection of commercial radio frequency (RF) devices is available. --- paper_title: A 0.2-mW 2-Mb/s Digital Transceiver Based on Wideband Signaling for Human Body Communications paper_content: This paper presents a low-power wideband signaling (WBS) digital transceiver for data transmission through a human body for body area network applications. The low-power and highspeed human body communication (HBC) utilizes a digital transceiver chip based on WBS and adopts a direct-coupled interface (DCI) which uses an electrode of 50-Omega impedance. The channel investigation with the DCI identities an optimum channel bandwidth of 10 kHz to 100 MHz. The WBS digital transceiver exploits a direct digital transmitter and an all-digital clock and data recovery (CDR) circuit. To further reduce power consumption, the proposed CDR circuit incorporates a low-voltage digitally-controlled oscillator and a quadratic sampling technique. The WBS digital transceiver chip with a 0.25-mum standard CMOS technology has 2-Mb/s data rate at a bit error rate of 1.1 times 10-7, dissipating only 0.2 mW from a 1-V supply generated by a 1.5-V battery. 
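The complexity-versus-performance trade-off of the selective RAKE receiver mentioned above, i.e. how much channel energy is captured as correlators (fingers) are added, can be sketched in a few lines. The power-delay profile below is a hypothetical exponentially decaying channel, not one of the measured UWB channels from the cited experiment, so the printed percentages are illustrative only.

```python
import numpy as np

def srake_energy_capture(tap_energies, max_fingers):
    """Fraction of total multipath energy captured when the selective RAKE
    combines only the L strongest resolvable paths, for L = 1..max_fingers."""
    e = np.sort(np.asarray(tap_energies, dtype=float))[::-1]  # strongest taps first
    capture = np.cumsum(e) / e.sum()
    return capture[:max_fingers]

# hypothetical exponentially decaying power-delay profile (not a measured channel)
rng = np.random.default_rng(0)
taps = np.exp(-np.arange(60) / 15.0) * rng.exponential(1.0, 60)

for L, frac in enumerate(srake_energy_capture(taps, 10), start=1):
    print(f"{L:2d} fingers -> {100 * frac:5.1f}% of channel energy")
```

Because the taps are combined in decreasing order of energy, the capture curve is concave: the first few fingers collect most of the energy, which is why a small SRAKE can approach ARAKE performance at far lower complexity.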
--- paper_title: Coexistence of IEEE802.15.4 with other Systems in the 2.4 GHz-ISM-Band paper_content: Wireless systems continue to rapidly gain popularity. This is especially true for data networks in the local and personal area, which are called WLAN and WPAN, respectively. However, most of those systems are working in the license-free industrial scientific medical (ISM) frequency bands, where neither resource planning nor bandwidth allocation can be guaranteed. To date, the most widespread systems in the 2.4 GHz ISM band are IEEE802.11 as stated in IEEE Std. 802.11 (1997) and Bluetooth, with ZigBee, based on IEEE Std. 802.15.4 (2003), and IEEE802.15.4 as upcoming standards for short range wireless networks. In this paper we examine the mutual effects of these different communication standards. Measurements are performed with real-life equipment in order to quantify coexistence issues. --- paper_title: Experimental Studies on Human Body Communication Characteristics Based Upon Capacitive Coupling paper_content: Human Body Communication (HBC) is regarded as a burgeoning transmission technology for short-range body sensor network applications. However, there are currently few full-scale on-body measurements describing the principle of body channel propagation characteristics upon capacitive coupling. This paper focuses on comprehensive experiments on different body parts to investigate body channel characteristics. Using the capacitive coupling technique, the body channel characteristics were measured both in the frequency domain and in the time domain. Based on the whole-body measurement results, it was found that the body maintained stable attenuation characteristics: the lowest attenuation is approximately -15 dB at 28 MHz. Joints such as the elbow, knee and wrist affected the channel attenuation characteristic by about 2 dB. Furthermore, the experimental results illustrate that the fat content in the body also affects the channel characteristic by 4 dB. --- paper_title: A Novel Technique Enabling the Realisation of 60 GHz Body Area Networks paper_content: This paper presents a novel technique to enable over-body propagation at 60 GHz. A flexible material has been created that enables the propagation of surface waves around the body without the need of repeaters, high powers or high gain antennas. The solution is wireless and self-redundant, and will facilitate the development of lightweight, high-bandwidth, and low-power wireless body area networks that could offer improvements for mobile health monitoring applications as well as utility in sports and entertainment industries. --- paper_title: Characterization of On-Body Communication Channel and Energy Efficient Topology Design for Wireless Body Area Networks paper_content: Wireless body area networks (WBANs) offer many promising new applications in the area of remote health monitoring. An important element in the development of a WBAN is the characterization of the physical layer of the network, including an estimation of the delay spread and the path loss between two nodes on the body. This paper discusses the propagation channel between two half-wavelength dipoles at 2.45 GHz, placed near a human body, and presents an application for cross-layer design in order to optimize the energy consumption of different topologies. Propagation measurements are performed on real humans in a multipath environment, considering different parts of the body separately.
In addition, path loss has been numerically investigated with an anatomically correct model of the human body in free space using a 3-D electromagnetic solver. Path loss parameters and time-domain channel characteristics are extracted from the measurement and simulation data. A semi-empirical path loss model is presented for an antenna height above the body of 5 mm and antenna separations from 5 cm up to 40 cm. A time-domain analysis is performed and models are presented for the mean excess delay and the delay spread. As a cross-layer application, the proposed path loss models are used to evaluate the energy efficiency of single-hop and multihop network topologies. --- paper_title: Energy-Efficient TDMA-Based MAC Protocol for Wireless Body Area Networks paper_content: Body Area Networks (BAN) are a specific type of Network structure. They are spread over a very small area and their available power is heavily constrained. Hence it is useful to have gateway points in the network, such as nodes carried around the belt, that are less power constrained and can be used for network coordination. This network structure can result in very low transmission power/range for the sensors and effective TDMA timing control. This paper presents an energy-efficient MAC protocol for communication within the Wireless Body Area Network. The protocol takes advantage of the fixed nature of the Body Area Network to implement a TDMA strategy with very little communication overhead, long sleep times for the sensor transceivers and robustness to communication errors. The protocol is implemented on the Analog Devices ADF7020 RF transceivers. --- paper_title: Ultra Low Power Signal Oriented Approach for Wireless Health Monitoring paper_content: In recent years there is growing pressure on the medical sector to reduce costs while maintaining or even improving the quality of care. A potential solution to this problem is real time and/or remote patient monitoring by using mobile devices. To achieve this, medical sensors with wireless communication, computational and energy harvesting capabilities are networked on, or in, the human body forming what is commonly called a Wireless Body Area Network (WBAN). We present the implementation of a novel Wake Up Receiver (WUR) in the context of standardised wireless protocols, in a signal-oriented WBAN environment and present a novel protocol intended for wireless health monitoring (WhMAC). WhMAC is a TDMA-based protocol with very low power consumption. It utilises WBAN-specific features and a novel ultra low power wake up receiver technology, to achieve flexible and at the same time very low power wireless data transfer of physiological signals. As the main application is in the medical domain, or personal health monitoring, the protocol caters for different types of medical sensors. We define four sensor modes, in which the sensors can transmit data, depending on the sensor type and emergency level. A full power dissipation model is provided for the protocol, with individual hardware and application parameters. Finally, an example application shows the reduction in the power consumption for different data monitoring scenarios. --- paper_title: Numerical Analysis of CSMA/CA for Pattern-Based WBAN System paper_content: This paper presents a numerical analysis of a CSMA/CA mechanism for a pattern-based WBAN system. Several equations are derived to observe the number of back-off periods required by high, medium, and low traffic BAN Nodes (BNs). 
Numerical results show that CSMA/CA mechanism is not suitable for medium and low traffic BNs, and high traffic BNs encounter heavy collision problems. This suggests the use of a TDMA-based solution that can solve a range of problems including traffic heterogeneity and correlation problems. --- paper_title: Heartbeat driven medium access control for body sensor networks paper_content: H-MAC is a novel Time Division Multiple Access (TDMA) based MAC protocol designed for Body Sensor Networks (BSNs). It improves energy efficiency by exploiting human heartbeat rhythm information to perform time synchronization for TDMA. Heartbeat rhythm is inherent in every human body and can be detected in a variety of biosignals. Therefore, biosensors in BSNs can extract the heartbeat rhythm from their sensory data. Moreover, all the rhythms represented by peak sequences are naturally synchronized since they are driven by the same source, the heartbeat. By following the rhythm, wireless biosensors can achieve time synchronization without having to turn on their radio to receive periodic timing information from a central controller, so that energy cost for time synchronization can be completely avoided and the lifetime of network can be prolonged. An active synchronization recovery scheme is also developed, in which two resynchronization procedures are implemented. The algorithms are verified using real world data from MIT-BIH multi-parameter database MIMIC. --- paper_title: ATLAS: A Traffic Load Aware Sensor MAC Design for Collaborative Body Area Sensor Networks paper_content: In collaborative body sensor networks, namely wireless body area networks (WBANs), each of the physical sensor applications is used to collaboratively monitor the health status of the human body. The applications of WBANs comprise diverse and dynamic traffic loads such as very low-rate periodic monitoring (i.e., observation) data and high-rate traffic including event-triggered bursts. Therefore, in designing a medium access control (MAC) protocol for WBANs, energy conservation should be the primary concern during low-traffic periods, whereas a balance between satisfying high-throughput demand and efficient energy usage is necessary during high-traffic times. In this paper, we design a traffic load-aware innovative MAC solution for WBANs, called ATLAS. The design exploits the superframe structure of the IEEE 802.15.4 standard, and it adaptively uses the contention access period (CAP), contention free period (CFP) and inactive period (IP) of the superframe based on estimated traffic load, by applying a dynamic "wh" (whenever which is required) approach. Unlike earlier work, the proposed MAC design includes load estimation for network load-status awareness and a multi-hop communication pattern in order to prevent energy loss associated with long range transmission. Finally, ATLAS is evaluated through extensive simulations in ns-2 and the results demonstrate the effectiveness of the protocol. --- paper_title: An ultra-low-power medium access control protocol for body sensor network paper_content: In this paper, BSN-MAC, a medium access control (MAC) protocol designed for Body Sensor Networks (BSNs) is proposed. Due to the traffic coupling and sensor diversity characteristics of BSNs, common MAC protocols can not satisfy the unique requirements of the biomedical sensors in BSNs. BSN-MAC exploits the feedback information from the deployed sensors to form a closed-loop control of the MAC parameters. 
A control algorithm is proposed to enable the BSN coordinator to adjust parameters of the IEEE 802.15.4 superframe to achieve both energy efficiency and low latency on energy critical nodes. We evaluate the performance of BSN-MAC by comparing it with the IEEE 802.15.4 MAC protocol using energy efficiency as the primary metric. --- paper_title: Energy-Efficiency Analysis of a Distributed Queuing Medium Access Control Protocol for Biomedical Wireless Sensor Networks in Saturation Conditions paper_content: The aging population and the high quality of life expectations in our society lead to the need of more efficient and affordable healthcare solutions. For this reason, this paper aims for the optimization of Medium Access Control (MAC) protocols for biomedical wireless sensor networks or wireless Body Sensor Networks (BSNs). The hereby presented schemes always have in mind the efficient management of channel resources and the overall minimization of sensors’ energy consumption in order to prolong sensors’ battery life. The fact that the IEEE 802.15.4 MAC does not fully satisfy BSN requirements highlights the need for the design of new scalable MAC solutions, which guarantee low-power consumption to the maximum number of body sensors in high density areas (i.e., in saturation conditions). In order to emphasize IEEE 802.15.4 MAC limitations, this article presents a detailed overview of this de facto standard for Wireless Sensor Networks (WSNs), which serves as a link for the introduction and initial description of our here proposed Distributed Queuing (DQ) MAC protocol for BSN scenarios. Within this framework, an extensive DQ MAC energy-consumption analysis in saturation conditions is presented to be able to evaluate its performance in relation to IEEE 802.5.4 MAC in highly dense BSNs. The obtained results show that the proposed scheme outperforms IEEE 802.15.4 MAC in average energy consumption per information bit, thus providing a better overall performance that scales appropriately to BSNs under high traffic conditions. These benefits are obtained by eliminating back-off periods and collisions in data packet transmissions, while minimizing the control overhead. --- paper_title: Probabilistic routing in on-body sensor networks with postural disconnections paper_content: This paper presents a novel store-and-forward packet routing algorithm for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. A probabilistic packet routing protocol is then developed using a stochastic link cost, reflecting the body postural trends. The performance of the proposed protocol is evaluated experimentally and via simulation, and is compared with a generic probabilistic routing protocol and a specialized on-body packet flooding mechanism that provides the routing delay lower-bounds. It is shown that via successful modeling of the spatio-temporal locality of link disconnection patterns, the proposed algorithm can provide better routing delay performance compared to the existing probabilistic routing protocols in the literature. --- paper_title: Communication scheduling to minimize thermal effects of implanted biosensor networks in homogeneous tissue paper_content: A network of biosensors can be implanted in a human body for health monitoring, diagnostics, or as a prosthetic device. 
Biosensors can be organized into clusters where most of the communication takes place within the clusters, and long range transmissions to the base station are performed by the cluster leader to reduce the energy cost. In some applications, the tissues are sensitive to temperature increase and may be damaged by the heat resulting from normal operations and the recharging of sensor nodes. Our work is the first to consider rotating the cluster leadership to minimize the heating effects on human tissues. We explore the factors that lead to temperature increase, and the process for calculating the specific absorption rate (SAR) and temperature increase of implanted biosensors by using the finite-difference time-domain (FDTD) method. We improve performance by rotating the cluster leader based on the leadership history and the sensor locations. We propose a simplified scheme, temperature increase potential, to efficiently predict the temperature increase in tissues surrounding implanted sensors. Finally, a genetic algorithm is proposed to exploit the search for an optimal temperature increase sequence. --- paper_title: Temperature-aware routing for telemedicine applications in embedded biomedical sensor networks paper_content: Biomedical sensors, called invivo sensors, are implanted in human bodies, and cause some harmful effects on surrounding body tissues. Particularly, temperature rise of the invivo sensors is dangerous for surrounding tissues, and a high temperature may damage them from a long term monitoring. In this paper, we propose a thermal-aware routing algorithm, called least total-route-temperature (LTRT) protocol, in which nodes temperatures are converted into graph weights, and minimum temperature routes are obtained. Furthermore, we provide an extensive simulation evaluation for comparing several other related schemes. Simulation results show the advantages of the proposed scheme. --- paper_title: An energy-efficient configuration management for multi-hop wireless body area networks paper_content: In this paper, a heuristic adaptive routing algorithm for an energy-efficient configuration management has been suggested which can reduce energy consumption while guaranteeing QoS for the emergency data in wireless body area networks (WBANs). The priority and vicinity of the nodes are taken into account for the selection of reachable parent nodes, when the nodes are disconnected due to the mobile nature of human body. We derive a mathematical model and presented algorithm for maintaining balanced power consumption with guaranteed QoS. A simulation has been performed to evaluate the performance of the proposed heuristic algorithm. --- paper_title: A Robust Protocol Stack for Multi-hop Wireless Body Area Networks with Transmit Power Adaptation ∗ paper_content: Wireless Body Area Networks (WBANs) have characteristic properties that should be considered for designing a proper network architecture. Movement of on-body sensors, low quality and time-variant wireless links, and the demand for a reliable and fast data transmission at low energy cost are some challenging issues in WBANs. Using ultra low power wireless transceivers to reduce power consumption causes a limited transmission range. This implies that a multi-hop protocol is a promising design choice. This paper proposes a multi-hop protocol for human body health monitoring. The protocol is robust against frequent changes of the network topology due to posture changes, and variation of wireless link quality. 
A technique for adapting the transmit power of sensor nodes at run-time allows to optimize power consumption while ensuring a reliable outgoing link for every node in the network and avoiding network disconnection. --- paper_title: Energy Efficient Thermal Aware Routing Algorithms for Embedded Biomedical Sensor Networks paper_content: One of the major applications of sensor networks in near future will be in the area of biomedical research. Implanted biosensor nodes are already being used for various medical applications. These in-vivo sensor networks collect different biometric data and communicate the data to the base station wirelessly. These sensor networks produce heat, as the nodes have to communicate among themselves wirelessly. The rise in temperature of the nodes due to communication should not be very high. A high temperature of the in-vivo nodes for a prolonged period might damage the surrounding tissues. In this paper, we propose a new routing algorithm that reduces the amount of heat produced in the network. In the simple form, the algorithm routes packets to the coolest neighbor without inducing routing loops. In the adaptive form, the algorithm uses mechanisms to adapt to topologies with low degree of connectivity and to switch to shortest path routing if a time threshold is exceeded. The proposed algorithm performs much better in terms of reducing the amount of heat produced, delay and power consumption compared to the shortest hop routing algorithm and a previously proposed Thermal Aware Routing Algorithm (TARA). --- paper_title: Adapting radio transmit power in wireless body area sensor networks paper_content: Emerging body-wearable devices for continuous health monitoring are severely energy constrained and yet required to offer high communication reliability under fluctuating channel conditions. This paper investigates the dynamic adaptation of radio transmit power as a means of addressing this challenge. Our contributions are three-fold: we present empirical evidence that wireless link quality in body area networks changes rapidly when patients move; fixed radio transmit power therefore leads to either high loss (when link quality is bad), or wasted energy (when link quality is good). This motivates dynamic transmit power control, and our second contribution characterises the off-line optimal transmit power control that minimises energy usage subject to lower-bounds on reliability. Though not suited to practical implementation, the optimal gives insight into the feasibility of adaptive power control for body area networks, and provides a benchmark against which practical strategies can be compared. Our third contribution is to develop simple and practical on-line schemes that trade-off reliability for energy savings by changing transmit power based on feedback information from the receiver. Our schemes offer on average 9--25% savings in energy compared to using maximum transmit power, with little sacrifice in reliability, and demonstrate adaptive transmission power control as an effective technique for extending the lifetime of wireless body area sensor networks. --- paper_title: Cross-Layer Support for Energy Efficient Routing in Wireless Sensor Networks paper_content: The Dynamic Source Routing (DSR) algorithm computes a new route when packet loss occurs. DSR does not have an in-built mechanism to determine whether the packet loss was the result of congestion or node failure causing DSR to compute a new route. 
This leads to inefficient energy utilization when DSR is used in wireless sensor networks. In this work, we exploit cross-layer optimization techniques that extend DSR to improve its routing energy efficiency by minimizing the frequency of recomputed routes. Our proposed approach enables DSR to initiate a route discovery only when link failure occurs. We conducted extensive simulations to evaluate the performance of our proposed cross-layer DSR routing protocol. The simulation results obtained with our extended DSR routing protocol show that the frequency with which new routes are recomputed is 50% lower compared with the traditional DSR protocol. This improvement is attributed to the fact that, with our proposed cross-layer DSR, we distinguish between congestion and link failure conditions, and new routes are recalculated only for the latter. --- paper_title: A Low-delay Protocol for Multihop Wireless Body Area Networks paper_content: Wireless body area networks (WBANs) form a new and interesting area in the world of remote health monitoring. An important concern in such networks is the communication between the sensors. This communication needs to be energy efficient and highly reliable while keeping delays low. Mobility also has to be supported as the nodes are positioned on different parts of the body that move with regard to each other. In this paper, we present a new cross-layer communication protocol for WBANs: CICADA or Cascading Information retrieval by Controlling Access with Distributed slot Assignment. The protocol sets up a network tree in a distributed manner. This tree structure is subsequently used to guarantee collision free access to the medium and to route data towards the sink. The paper analyzes CICADA and shows simulation results. The protocol offers low delay and good resilience to mobility. The energy usage is low as the nodes can sleep in slots where they are not transmitting or receiving. --- paper_title: An ultra-low-power medium access control protocol for body sensor network paper_content: In this paper, BSN-MAC, a medium access control (MAC) protocol designed for Body Sensor Networks (BSNs) is proposed. Due to the traffic coupling and sensor diversity characteristics of BSNs, common MAC protocols can not satisfy the unique requirements of the biomedical sensors in BSNs. BSN-MAC exploits the feedback information from the deployed sensors to form a closed-loop control of the MAC parameters. A control algorithm is proposed to enable the BSN coordinator to adjust parameters of the IEEE 802.15.4 superframe to achieve both energy efficiency and low latency on energy critical nodes. We evaluate the performance of BSN-MAC by comparing it with the IEEE 802.15.4 MAC protocol using energy efficiency as the primary metric. --- paper_title: Cross-layer optimization protocol for guaranteed data streaming over Wireless Body Area Networks paper_content: In this paper, we study the problem of routing, bandwidth and flow assignment in Wireless Body Area Networks (BANs). Our solution considers BAN for real-time data streaming applications, where the real-time nature of data streams is of critical importance for providing a useful and efficient sensorial feedback for the user while system lifetime should be maximized. Thus, bandwidth and energy efficiency of the communication between energy constrained sensor nodes must be carefully optimized. 
The proposed solution takes into account nodes' residual energy during the establishment of the routing paths and adaptively allocates bandwidth to the nodes in the network. We also formulate the joint routing tree construction and bandwidth allocation problem as an Integer Linear Program that maximizes the network utility while satisfying the QoS requirements. We compare the resulting performance of our protocol with the optimal solution, and show that it closes a considerable portion of the gap from the theoretical optimal solution. --- paper_title: MOFBAN: a Lightweight Modular Framework for Body Area Networks paper_content: The increasing use of wireless networks, the constant miniaturization of electrical devices and the growing interest for remote health monitoring has led to the development of wireless on-body networks or WBANs. The research on communication in this type of network is still at it's infancy. The first communication protocols are being proposed, but a general architecture that can be used to integrate the protocols easily is still lacking. However, such an architecture could trigger the development of new protocols and ease the use of WBANs. In this paper, we present a lightweight modular framework for body area networks (MOFBAN). A modular structure is used which allows for a higher flexibility and improved energy efficiency. The paper first investigates the challenges and requirements needed for sending messages in a WBAN. Further, we discuss how this framework can be used when designing new protocols by defining the different components of the framework. --- paper_title: A smart poultice with reconfigurable sensor array for wearable cardiac healthcare paper_content: A smart poultice with reconfigurable sensor array is implemented by using planar fashionable circuit board (P-FCB) technology for wearable cardiac healthcare. It contains 1) 25-electrode array either for vital signal sensing or for external data communication, 2) a thin flexible battery, 3) a fabric inductor, and 4) a fabric circuit board on which a low power silicon chip is directly integrated. Start/stop operation of the smart poultice is realized by inductive-coupled remote controller with 8b ID verification function and external low power data transaction is achieved by using duty cycled body channel communication. --- paper_title: Accurate Activity Recognition Using a Mobile Phone Regardless of Device Orientation and Location paper_content: This paper investigates two major issues in using a tri-axial accelerometer-embedded mobile phone for continuous activity monitoring, i.e. the difference in orientations and locations of the device. Two experiments with a total of ten test subjects performed six daily activities were conducted in this study: one with a device fixed on the waist in sixteen different orientations and another with three different device locations (i.e., shirt-pocket, trouser-pocket and waist) in two different device orientations. For handling with varying device orientations, a projection-based method for device coordinate system estimation has been proposed. Based on the dataset with sixteen different device orientations, the experimental results have illustrated that the proposed method is efficient for rectifying the acceleration signals into the same coordinate system, yielding significantly improved activity recognition accuracy. After signal transformation, the recognition results of signals acquired from different device locations are compared. 
The experimental results show that when the sensor is placed on different rigid body, different models are required for certain activities. --- paper_title: An Attachable ECG Sensor Bandage with Planar-Fashionable Circuit Board paper_content: An attachable ECG sensor adhesive bandage is implemented for continuous ECG monitoring system by using Planar-Fashionable Circuit Board (P-FCB) technology. The sensor patch improves convenience at low cost: it is composed of dry electrodes and an inductor directly screen printed on fabric, and the sensor chip is also directly wire bonded on fabric. The sensor patch is wirelessly powered to remove battery for safety. Dry electrodes minimize skin irritation to enable long term monitoring. The implemented sensor patch successfully demonstrates capturing of ECG signal while dissipating only 12uW power. --- paper_title: Analysis of the severity of dyskinesia in patients with Parkinson's disease via wearable sensors paper_content: The aim of this study is to identify movement characteristics associated with motor fluctuations in patients with Parkinson's disease by relying on wearable sensors. Improved methods of assessing longitudinal changes in Parkinson's disease would enable optimization of treatment and maximization of patient function. We used eight accelerometers on the upper and lower limbs to monitor patients while they performed a set of standardized motor tasks. A video of the subjects was used by an expert to assign clinical scores. We focused on a motor complication referred to as dyskinesia, which is observed in association with medication intake. The sensor data were processed to extract a feature set responsive to the motor fluctuations. To assess the ability of accelerometers to capture the motor fluctuation patterns, the feature space was visualized using PCA and Sammon's mapping. Clustering analysis revealed the existence of intermediate clusters that were observed when changes occurred in the severity of dyskinesia. We present quantitative evidence that these intermediate clusters are the result of the high sensitivity of the proposed technique to changes in the severity of dyskinesia observed during motor fluctuation cycles. --- paper_title: Respiratory Rate and Flow Waveform Estimation from Tri-axial Accelerometer Data paper_content: There is a strong medical need for continuous, unobstrusive respiratory monitoring, and many shortcomings to existing methods. Previous work shows that MEMS accelerometers worn on the torso can measure inclination changes due to breathing, from which a respiratory rate can be obtained. There has been limited validation of these methods. The problem of practical continuous monitoring, in which patient movement disrupts the measurements and the axis of interest changes, has also not been addressed. We demonstrate a method based on tri-axial accelerometer data from a wireless sensor device, which tracks the axis of rotation and obtains angular rates of breathing motion. The resulting rates are validated against gyroscope measurements and show high correlation to flow rate measurements using a nasal cannula. We use a movement detection method to classify periods in which the patient is static and breathing signals can be observed accurately. Within these periods we obtain a close match to cannula measurements, for both the flow rate waveform and derived respiratory rates, over multi-hour datasets obtained from wireless sensor devices on hospital patients. 
We discuss future directions for improvement and potential methods for estimating absolute airflow rate and tidal volume. --- paper_title: Intelligent Mobile Health Monitoring System (IMHMS) paper_content: Health monitoring is repeatedly mentioned as one of the main application areas for Pervasive computing. Mobile Health Care is the integration of mobile computing and health monitoring. It is the application of mobile computing technologies for improving communication among patients, physicians, and other health care workers. As mobile devices have become an inseparable part of our life it can integrate health care more seamlessly to our everyday life. It enables the delivery of accurate medical information anytime anywhere by means of mobile devices. Recent technological advances in sensors, low-power integrated circuits, and wireless communications have enabled the design of low-cost, miniature, lightweight and intelligent bio-sensor nodes. These nodes, capable of sensing, processing, and communicating one or more vital signs, can be seamlessly integrated into wireless personal or body area networks for mobile health monitoring. In this paper we present Intelligent Mobile Health Monitoring System (IMHMS), which can provide medical feedback to the patients through mobile devices based on the biomedical and environmental data collected by deployed sensors. --- paper_title: A Wearable ECG Acquisition System With Compact Planar-Fashionable Circuit Board-Based Shirt paper_content: A wearable electrocardiogram (ECG) acquisition system implemented with planar-fashionable circuit board (P-FCB)-based shirt is presented. The proposed system removes cumbersome wires from conventional Holter monitor system for convenience. Dry electrodes screen-printed directly on fabric enables long-term monitoring without skin irritation. The ECG monitoring shirt exploits a monitoring chip with a group of electrodes around the body, and both the electrodes and the interconnection are implemented using P-FCB to enhance wearability and to lower production cost. The characteristics of P-FCB electrode are shown, and the prototype hardware is implemented to successfully verify the proposed concept. --- paper_title: Mobile sensor data collector using Android smartphone paper_content: In this paper, we present a system using an Android smartphone that collects, displays sensor data on the screen and streams to the central server simultaneously. Bluetooth and wireless Internet connections are used for data transmissions among the devices. Also, using Near Field Communication (NFC) technology, we have constructed a more efficient and convenient mechanism to achieve an automatic Bluetooth connection and application execution. This system is beneficial on body sensor networks (BSN) developed for medical healthcare applications. For demonstration purposes, an accelerometer, a temperature sensor and electrocardiography (ECG) signal data are used to perform the experiments. Raw sensor data are interpreted to either graphical or text notations to be presented on the smartphone and the central server. Furthermore, a Java-based central server application is used to demonstrate communication with the Android system for data storage and analysis. --- paper_title: Pervasive body sensor network: an approach to monitoring the post-operative surgical patient paper_content: Patients recovering from abdominal surgery are at risk of complications due to reduced mobility as a result of post-operative pain. 
The ability to pervasively monitor the recovery of this group of patients and identify those at risk of developing complications is therefore clinically desirable, which may result in an early intervention to prevent adverse outcomes. This paper describes the development and evaluation of a pervasive network of body sensors developed for monitoring the recovery of post-operative patients both in the hospital and homecare settings. --- paper_title: On the Road to a Textile Integrated Bioimpedance Early Warning System for Lung Edema paper_content: Early detection of lung edema for patients suffering from chronic heart disease improves the medical treatment and can avoid committal of the patient to an intensive care unit. Therefore, an early warning system monitoring the amount of fluid in the lungs by measuring trans-thoracic bioimpedance outside the body has been developed. The proposed system(TiBIS) consists of a textile integrated measurement module and a Personal Digital Assistant for signal processing and user interaction. --- paper_title: Novel Speech Signal Processing Algorithms for High-Accuracy Classification of Parkinson's Disease paper_content: There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD. --- paper_title: Breathing Feedback System with Wearable Textile Sensors paper_content: Breathing exercises form an essential part of the treatment for respiratory illnesses such as cystic fibrosis. Ideally these exercises should be performed on a daily basis. This paper presents an interactive system using a wearable textile sensor to monitor breathing patterns. A graphical user interface provides visual real-time feedback to patients. The aim of the system is to encourage the correct performance of prescribed breathing exercises by monitoring the rate and the depth of breathing. The system is straight forward to use, low-cost and can be installed easily within a clinical setting or in the home. Monitoring the user with a wearable sensor gives real-time feedback to the user as they perform the exercise, allowing them to perform the exercises independently. There is also potential for remote monitoring where the user’s overall performance over time can be assessed by a clinician. 
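Both the accelerometer-based respiration work and the textile breathing-feedback system above ultimately reduce to estimating a breathing rate from a quasi-periodic signal. The sketch below shows one generic way to do that, picking the dominant spectral peak inside a plausible breathing band; it is not the algorithm of either cited paper, and the sampling rate, band limits (0.1-0.7 Hz) and synthetic test signal are assumptions made for illustration.

```python
import numpy as np

def breathing_rate_bpm(signal, fs, f_lo=0.1, f_hi=0.7):
    """Estimate breaths per minute as the dominant spectral peak of a
    respiration-related signal within an assumed breathing band (0.1-0.7 Hz)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                      # remove DC / posture offset
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# synthetic 60 s test signal: 0.25 Hz breathing (15 breaths/min) plus noise
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
chest_expansion = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.random.randn(t.size)
print(f"estimated rate: {breathing_rate_bpm(chest_expansion, fs):.1f} breaths/min")
```

Time-domain peak detection would additionally give breath-by-breath amplitude estimates, which is closer to what a depth-of-breathing feedback display needs.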
--- paper_title: Toward Machine Emotional Intelligence: Analysis of Affective Physiological State paper_content: The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal intelligences. This paper proposes that machine intelligence needs to include emotional intelligence and demonstrates results toward this goal: developing a machine's ability to recognize the human affective state given four physiological signals. We describe difficult issues unique to obtaining reliable affective data and collect a large set of data from a subject trying to elicit and experience each of eight emotional states, daily, over multiple weeks. This paper presents and compares multiple algorithms for feature-based recognition of emotional state from this data. We analyze four physiological signals that exhibit problematic day-to-day variations: The features of different emotions on the same day tend to cluster more tightly than do the features of the same emotion on different days. To handle the daily variations, we propose new features and algorithms and compare their performance. We find that the technique of seeding a Fisher Projection with the results of sequential floating forward search improves the performance of the Fisher Projection and provides the highest recognition rates reported to date for classification of affect from physiology: 81 percent recognition accuracy on eight classes of emotion, including neutral. --- paper_title: Elderly Risk Assessment of Falls with BSN paper_content: Due to the natural aging process, the risks associated with falling can increase significantly. For the elderly, this usually marks a rapid deterioration of their health. While there are identified strategies that can be adopted to reduce the number of falls, it is still not possible to prevent all falls. Clinically, the Tinetti Gait and Balance Assessment has been widely used to assess the risk of falls in elderly by examining balance and gait. This paper presents our initial results of using an ear-worn BSN sensor to detect aspects of the Tinetti Gait and Balance Assessment to predict the risk of falls compared to a healthy control cohort. For this study, data was collected from a control cohort of 12 healthy volunteers and a cohort of 16 elderly fallers of varying degrees of risk. The results derived have shown that it is possible to directly detect some aspects of the Tinetti Gait and Balance Assessment and the Timed Up and Go test, demonstrating the potential value of using the platform for continuous assessment in a home environment. --- paper_title: Portable Preimpact Fall Detector With Inertial Sensors paper_content: Falls and the resulting hip fractures in the elderly are a major health and economic problem. The goal of this study was to investigate the feasibility of a portable preimpact fall detector in detecting impending falls before the body impacts on the ground. It was hypothesized that a single sensor with the appropriate kinematics measurements and detection algorithms, located near the body center of gravity, would be able to distinguish an in-progress and unrecoverable fall from nonfalling activities. The apparatus was tested in an array of daily nonfall activities of young (n = 10) and elderly (n = 14) subjects, and simulated fall activities of young subjects. 
A threshold detection method was used with the magnitude of inertial frame vertical velocity as the main variable to separate the nonfall and fall activities. The algorithm was able to detect all fall events at least 70 ms before the impact. With the threshold adapted to each individual subject, all falls were detected successfully, and no false alarms occurred. This portable preimpact fall detection apparatus will lead to the development of a new generation inflatable hip pad for preventing fall-related hip fractures. --- paper_title: A Wearable Airbag to Prevent Fall Injuries paper_content: We have developed a wearable airbag that incorporates a fall-detection system that uses both acceleration and angular velocity signals to trigger inflation of the airbag. The fall-detection algorithm was devised using a thresholding technique with an accelerometer and gyro sensor. Sixteen subjects mimicked falls, and their acceleration waveforms were monitored. Then, we developed a fall-detection algorithm that could detect signals 300 ms before the fall. This signal was used as a trigger to inflate the airbag to a capacity of 2.4 L. Although the proposed system can help to prevent fall-related injuries, further development is needed to miniaturize the inflation system. --- paper_title: Self-organising object networks using context zones for distributed activity recognition paper_content: Activity recognition has a high applicability scope in patient monitoring since it has the potential to observe patients' actions and recognise erratic behaviour. Our activity recognition architecture described in this paper is particularly suited for this task due to the fact that collaboration of constituent components, namely Object Networks, Activity Map and Activity Inference Engine create a flexible and scalable platform taking into consideration needs of individual users. We utilise information generated from sensors that observe user interaction with the objects in the environment and also information from body-worn sensors. This information is processed in a distributed manner through the object network hierarchy which we formally define. The object network has the effect of increasing the level of abstraction of information such that this high-level information is utilised by the Activity Inference Engine. This engine also takes into consideration information from the user's profiles in order to deduce the most probable activity and at the same time observe any erratic or potentially unsafe behaviour. We also present a scenario and show the results of our study. --- paper_title: Implementation of Context-Aware Distributed Sensor Network System for Managing Incontinence Among Patients with Dementia paper_content: Incontinence is highly prevalent in Patients with Dementia (PWD) due to a decline in their physical and mental abilities. Those PWD may lie in soiled diaper for prolonged periods if timely diaper change is not in place. Current manual care practices may not be able to immediately detect soiled diaper, although costly and labor intensive scheduled checks are performed. Delays in diaper change can cause serious social and medical issues. So, timely and effective continence management is important to potentially avoid the undesirable consequences. By developing assistive system leveraging on sensors, wireless sensor network, ambient intelligence and reminders, it is feasible to detect soiled diaper and remind the carers for timely intervention. 
With around the clock sensing, distributed monitoring and context-aware intervention, timely diaper change is possible anywhere anytime without wasting unnecessary care-giving resources and without causing annoyances to the elderly. --- paper_title: Swimming Stroke Kinematic Analysis with BSN paper_content: The recent maturity of body sensor networks has enabled a wide range of applications in sports, well-being and healthcare. In this paper, we hypothesise that a single unobtrusive head-worn inertial sensor can be used to infer certain biomotion details of specific swimming techniques. The sensor, weighing only seven grams is mounted on the swimmer's goggles, limiting the disturbance to a minimum. Features extracted from the recorded acceleration such as the pitch and roll angles allow to recognise the type of stroke, as well as basic biomotion indices. The system proposed represents a non-intrusive, practical deployment of wearable sensors for swimming performance monitoring. --- paper_title: A Framework for Golf Training Using Low-Cost Inertial Sensors paper_content: Body Sensor Networks are rapidly expanding to everyday applications due to recent advancements in Micro-Electro-Mechanical Systems (MEMS) sensing, wireless communication and power management technologies. We leverage these advancements to develop a framework for the use of MEMS inertial sensors as a low-cost putting coach for golf. Accurate putting requires substantial control and precision that is acquired via significant practice. Unfortunately, many golfers are not aware that they are practicing flawed mechanics. An electronic coach has the capability to point out these flawed movements before they become the norm. Our framework is the first step to an electronic coach and consists of a model for a putting swing, the design of a custom sensor platform and the implementation of signal processing functions to accurately estimate the trajectory of the golf club. Based upon our model we propose the use of sensor fusion algorithms to increase accuracy without increasing hardware demands. The accuracy of the system is experimentally evaluated using a controlled test platform. --- paper_title: Simple Barcode System Based on Inonogels for Real Time pH-Sweat Monitoring paper_content: This paper presents the fabrication, characterization and the performance of a wearable, robust, flexible and disposable barcode system based on novel ionic liquid polymer gels (ionogels) for monitoring in real time mode the pH of the sweat generated during an exercise period. Up to now sweat analysis has been carried out using awkward methods of collecting sweat followed by laboratory analysis. The approach presented here can provide immediate feedback regarding sweat composition. The great advantage of sweat analysis is the fact that it is a completely non-invasive means of analyzing the wearer’s physiological state and ensuring their health and well-being. --- paper_title: TENNISSENSE: A MULTI-SENSORY APPROACH TO PERFORMANCE ANALYSIS IN TENNIS paper_content: There is sufficient evidence in the current literature that the ability to accurately capture and model the accelerations, angular velocities and orientations involved in the tennis stroke could facilitate a major step forward in the application of biomechanics to tennis coaching (Tanabe & Ito, 2007; Gordon & Dapena, 2006). 
The TennisSense Project, run in collaboration with Tennis Ireland, aims to create the infrastructure required to digitally capture physical, tactical and physiological data from tennis players in order to assist in their coaching and improve performance. This study examined the potential for using Wireless Inertial Monitoring Units (WIMUs) to model the biomechanical aspects of the tennis stroke and for developing coaching tools that utilise this information. --- paper_title: Sound Based Heart Rate Monitoring for Wearable Systems paper_content: This paper presents an alternative approach for heart rate measurement. Instead of using an ECG sensor, the proposed design uses sound signals received from a microphone which does not require skin-contact. Specifically, the design uses an air conductive microphone and an efficient algorithm to estimate the heart beat parameters of the wearer. The estimates are obtained for different activities undertaken by the wearer. The activities studied include sitting, jumping, reading, laughing, singing, and coughing. Data are collected from researchers and students working in the laboratory, and in the presence of lung sound, and other environmental sounds and noise. Preliminary results show that, the method can be an effective alternative means of monitoring cardiac (heart) sounds in a natural environment. --- paper_title: An Assistive Body Sensor Network Glove for Speech- and Hearing-Impaired Disabilities paper_content: This paper presents a hand-gesture based interface for facilitating communication among speech- and hearing-impaired disabilities. In the system, a wireless sensor glove equipped with five flex sensors and a 3D accelerometer is used as the input device. By integrating the speech synthesizer onto an automatic gesture recognition system, user's hand gestures can be translated into sounds. In this study, we proposed a hierarchical gesture recognition framework based on the combined use of multivariate Gaussian distribution, bigram and a set of rules for model and feature set selection, deriving from a detailed analysis of misclassified gestures in the confusion matrix. To illustrate the practical use of the framework, a gesture recognition experiment has been conducted on American Sign Language (ASL) finger spelling gestures with two additional gestures representing space and full stop. The recognition model has been validated on the pangram "The quick brown fox jumps over the lazy dog.". --- paper_title: Implantation and Explantation of a Wireless Epiretinal Retina Implant Device: Observations during the EPIRET3 Prospective Clinical Trial paper_content: PURPOSE ::: Visual sensations in patients with blindness and retinal degenerations may be restored by electrical stimulation of retinal neurons with implantable microelectrode arrays. A prospective trial was initiated to evaluate the safety and efficacy of a wireless intraocular retinal implant (EPIRET3) in six volunteers with blindness and RP. ::: ::: ::: METHODS ::: The implant is a remotely controlled, fully intraocular wireless device consisting of a receiver and a stimulator module. The stimulator is placed on the retinal surface. Data and energy are transmitted via an inductive link from outside the eye to the implant. Surgery included removal of the lens, vitrectomy, and implantation of the EPIRET3 device through a corneal incision. The clinical outcome after implantation and explantation of the device was determined. 
The implant was removed after 4 weeks, according to the study protocol. ::: ::: ::: RESULTS ::: Implantation was successful in all six patients. While the anterior part was fixed with transscleral sutures, the stimulating foil was placed onto the posterior pole and fixed with retinal tacks. The implant was well tolerated, causing temporary moderate postoperative inflammation, whereas the position of the implant remained stable until surgical removal. In all cases explantation of the device was performed successfully. Adverse events were a sterile hypopyon effectively treated with steroids and antibiotics in one case and a retinal break in a second case during explantation requiring silicone oil surgery. ::: ::: ::: CONCLUSIONS ::: The EPIRET3 system can be successfully implanted and explanted in patients with blindness and RP. The surgical steps are feasible, and the postoperative follow-up disclosed an acceptable range of adverse events. --- paper_title: Novel communication services based on human body and environment interaction: applications inside trains and applications for handicapped people paper_content: We believe that near-field radio intra-body communications, wherein human body is used as the transmission medium, will be a very suitable solution for body area networks with many interesting applications towards a ubiquitous communication world. This paper presents some potential new applications of great usefulness for handicapped people. Moreover, we have also envisioned applications inside train coaches where the use of mobile phones is restricted in countries such as Japan. On the other hand, we present the system model designed to carry out these applications by using intra-body communications. Finally, in order to investigate the feasibility of this kind of communication, we analyze experimentally the characteristics of the intra-body propagation channels and we evaluate the performance of several digital modulation schemes ---
Title: A Survey of Body Sensor Networks
Section 1: Sensors
Description 1: Describe various types of sensors and their roles in BSNs, summarizing their functionalities, key components, and practical applications.
Section 2: State-of-the-Art Research on BSN Status
Description 2: Discuss recent advancements and current research trends in BSN sensor technology, focusing on wearability, data processing, energy control, and fault diagnosis.
Section 3: Classification of Sensors
Description 3: Classify BSN sensors based on different criteria like type of signals measured, data transmission media, deployment positions, and automatic adjustment capabilities.
Section 4: Main Researched Sensors
Description 4: Provide an in-depth analysis of the most commonly used sensors in BSNs, including ECG sensors, accelerometers, pressure sensors, and respiration sensors, outlining their specific attributes and use cases.
Section 5: Design of Sensor Nodes
Description 5: Discuss the design considerations of sensor nodes, focusing on energy control, fault diagnosis, and the reduction of sensor nodes to enhance system efficiency and performance.
Section 6: Trends and Challenges
Description 6: Analyze the development trends and technical challenges in BSN sensor research, covering aspects like power consumption, wearability, accuracy, and new material usage.
Section 7: Data Fusion
Description 7: Explain the process of data fusion in BSNs, including techniques for preprocessing, feature extraction, data fusion computing, and data compression, as well as state-of-the-art research in these areas.
Section 8: State-of-the-Art Research on Data Fusion
Description 8: Review major research achievements and latest developments in data fusion techniques applied within BSNs, with examples of prominent methods and algorithms.
Section 9: Network Communication
Description 9: Discuss BSN network communication, addressing network architecture, communication protocols, energy control, security issues, and comparison with WSNs.
Section 10: State-of-the-Art Research on Network Communication
Description 10: Review the latest research on BSN network communication, covering aspects like network topology, physical layer, MAC layer, and routing layer design factors.
Section 11: Applications of BSNs
Description 11: Explore various applications of BSNs across different fields such as medicine, social welfare, sports, and man-machine interfaces, outlining specific use cases and examples.
Section 12: Conclusions
Description 12: Summarize the key points covered in the paper, emphasizing the current status, challenges, and future directions of BSN research and applications.
Fluorescent Chemosensors for Toxic Organophosphorus Pesticides: A Review
8
--- paper_title: Ecological perspective on water quality goals paper_content: The central assumption of nonpoint source pollution control efforts in agricultural watersheds is that traditional erosion control programs are sufficient to insure high quality water resources. We outline the inadequacies of that assumption, especially as they relate to the goal of attaining ecological integrity. The declining biotic integrity of our water resources over the past two decades is not exclusively due to water quality (physical/chemical) degradation. Improvement in many aspects of the quality of our water resources must be approached with a much broader perspective than improvement of physical/chemical conditions. Other deficiencies in nonpoint pollution control programs are discussed and a new approach to the problem is outlined. --- paper_title: Natural and synthetic organic compounds in the environment-a symposium report. paper_content: Abstract In March 2000, an international two-day symposium was organized in Noordwijkerhout, The Netherlands, on ‘Natural and synthetic organic compounds in the environment’. The emphasis of the symposium was on the following classes of compounds: polycyclic aromatic hydrocarbons, xeno-estrogens, phyto-estrogens, and veterinary drugs. Sources, environmental distribution, uptake, biotransformation and toxic effects from the molecular to the population level were discussed. Other important aspects were the development of biomarkers, analytical methods, bioassays, molecular modelling and other research tools. Finally, the implications of the findings for government policies were discussed. In this paper, a summary is given of the most important facts and views presented at the symposium. --- paper_title: TESTING WATER QUALITY FOR PESTICIDE POLLUTION paper_content: Information from the first phase of the USA's National Water Quality Assessment (NAWQA) Program shows that pesticides are widespread in streams and ground water, and occur in geographical and seasonal patterns that follow land use and related pesticide use. The most heavily used compounds account for most detections, and most pesticides found in the environment usually occur as mixtures. Almost every stream sample collected contained at least one pesticide. For drinking water, the NAWQA results usually provide good news about individual pesticides in relation to current regulations and criteria. However, the extent of risk to humans here is not yet known, because the criteria cover limited numbers of pesticides and potential effects. The NAWQA results show a high potential for pesticide impacts on aquatic life in some streams, especially those where concentrations of more than one pesticide approach or exceed aquatic-life criteria for long time periods. This feature describes the NAWQA Programs's scope and structure, discusses the widespread occurrence of pesticides in natural waters, gives examples of use-detection relationships and considers the environmental significance of the Programs's findings and their implications for NAWQA. The NAWQA design may be modified to improve assessment in future studies. --- paper_title: Attention-Deficit/Hyperactivity Disorder and Urinary Metabolites of Organophosphate Pesticides paper_content: OBJECTIVE ::: The goal was to examine the association between urinary concentrations of dialkyl phosphate metabolites of organophosphates and attention-deficit/hyperactivity disorder (ADHD) in children 8 to 15 years of age. 
::: ::: ::: METHODS ::: Cross-sectional data from the National Health and Nutrition Examination Survey (2000-2004) were available for 1139 children, who were representative of the general US population. A structured interview with a parent was used to ascertain ADHD diagnostic status, on the basis of slightly modified criteria from the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. ::: ::: ::: RESULTS ::: One hundred nineteen children met the diagnostic criteria for ADHD. Children with higher urinary dialkyl phosphate concentrations, especially dimethyl alkylphosphate (DMAP) concentrations, were more likely to be diagnosed as having ADHD. A 10-fold increase in DMAP concentration was associated with an odds ratio of 1.55 (95% confidence interval: 1.14-2.10), with adjustment for gender, age, race/ethnicity, poverty/income ratio, fasting duration, and urinary creatinine concentration. For the most-commonly detected DMAP metabolite, dimethyl thiophosphate, children with levels higher than the median of detectable concentrations had twice the odds of ADHD (adjusted odds ratio: 1.93 [95% confidence interval: 1.23-3.02]), compared with children with undetectable levels. ::: ::: ::: CONCLUSIONS ::: These findings support the hypothesis that organophosphate exposure, at levels common among US children, may contribute to ADHD prevalence. Prospective studies are needed to establish whether this association is causal. --- paper_title: Integrated Human and Ecological Risk Assessment: A Case Study of Organophosphorous Pesticides in the Environment paper_content: This study was chosen as an example of integrated risk assessment because organophosphorous esters (OPs) share exposure characteristics for different species, including human beings and because a common mechanism of action can be identified. The “Framework for the integration of health and ecological risk assessment” is being tested against a deterministic integrated environmental health risk assessment for OPs used in a typical farming community. It is argued that the integrated approach helps both the risk manager and the risk assessor in formulating a more holistic approach toward the risk of the use of OP-esters. It avoids conclusions based on incomplete assessments or on separate assessments. The database available can be expanded and results can be expressed in a more coherent manner. In the integrated exposure assessment of OPs, the risk assessments for human beings and the environment share many communalities with regards to sources and emissions, distribution routes and exposure scenarios. The site of action of OPs, acetylcholinesterase, has been established in a vast array of species, including humans. It follows that in the integrated approach the effects assessment for various species will show communalities in reported effects and standard setting approaches. In the risk characterization, a common set of evidence, common criteria, and common interpretations of those criteria are used to determine the cause of human and ecological effects that co-occur or are apparently associated with exposure to OPs. Results of health and ecological risk assessments are presented in a common format that facilitates comparison of results. It avoids acceptable risk conclusions with regard to the environment, which are unacceptable with regard to human risk and vice versa. Risk managers will be prompted to a more balanced judgement and understanding and acceptance of risk reduction measures will be facilitated. 
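Illustrative note (not part of the cited studies): the ADHD entry above reports an odds ratio of 1.55 per 10-fold increase in urinary DMAP concentration. Assuming the usual logistic-regression model with log10-transformed exposure (an assumption about the analysis, not a detail given in the abstract), that figure can be rescaled to other fold-changes as in the small Python sketch below; the fold-changes shown are arbitrary examples.

```python
# Illustrative arithmetic only: rescaling an odds ratio reported "per 10-fold
# increase" in exposure to other fold-changes. This assumes a logistic model with
# log10(exposure) as the covariate -- an assumption, not a stated study detail.
import math

OR_PER_10_FOLD = 1.55  # reported association for DMAP metabolites and ADHD

def odds_ratio_for_fold_change(fold_change, or_per_10_fold=OR_PER_10_FOLD):
    """Odds ratio implied for an arbitrary fold-change in exposure."""
    # beta = ln(OR) per unit of log10(exposure); a k-fold change is log10(k) units.
    return math.exp(math.log(or_per_10_fold) * math.log10(fold_change))

for k in (2, 5, 10, 100):
    print(f"{k:>3}-fold increase -> OR ~ {odds_ratio_for_fold_change(k):.2f}")
# e.g. a 2-fold increase corresponds to an OR of roughly 1.14 under this model.
```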
--- paper_title: Fluorescent sensors for organophosphorus nerve agent mimics. paper_content: We present a small molecule sensor that provides an optical response to the presence of an organophosphorus (OP)-containing nerve agent mimic. The design contains three key features: a primary alcohol, a tertiary amine in close proximity to the alcohol, and a fluorescent group used as the optical readout. In the sensor's rest state, the lone pair of electrons of the basic amine quenches the fluorescence of the nearby fluorophore through photoinduced electron transfer (PET). Exposure to an OP nerve agent mimic triggers phosphorylation of the primary alcohol followed rapidly by an intramolecular substitution reaction as the amine displaces the created phosphate. The quaternized ammonium salt produced by this cyclization reaction no longer possesses a lone pair of electrons, and a fluorescence readout is observed as the nonradiative PET quenching pathway of the fluorophore is shut down. --- paper_title: Electrochemical sensor for organophosphate pesticides and nerve agents using zirconia nanoparticles as selective sorbents. paper_content: An electrochemical sensor for detection of organophosphate (OP) pesticides and nerve agents using zirconia (ZrO2) nanoparticles as selective sorbents is presented. Zirconia nanoparticles were electrodynamically deposited onto the polycrystalline gold electrode by cyclic voltammetry. Because of the strong affinity of zirconia for the phosphoric group, nitroaromatic OPs strongly bind to the ZrO2 nanoparticle surface. The electrochemical characterization and anodic stripping voltammetric performance of bound OPs were evaluated using cyclic voltammetric and square-wave voltammetric (SWV) analysis. SWV was used to monitor the amount of bound OPs and provide simple, fast, and facile quantitative methods for nitroaromatic OP compounds. The sensor surface can be regenerated by successively running SWV scanning. Operational parameters, including the amount of nanoparticles, adsorption time, and pH of the reaction medium have been optimized. The stripping voltammetric response is highly linear over the 5-100 ng/mL (ppb) methyl parathion range examined (2-min adsorption), with a detection limit of 3 ng/mL and good precision (RSD = 5.3%, n = 10). The detection limit was improved to 1 ng/mL by using 10-min adsorption time. The promising stripping voltammetric performances open new opportunities for fast, simple, and sensitive analysis of OPs in environmental and biological samples. These findings can lead to a widespread use of electrochemical sensors to detect OP contaminates. --- paper_title: Degradation of some pesticides in the field and effect of processing paper_content: Standard methods have been applied for the analysis of pesticides, namely diazinon, azinphos-methyl, pirimicarb, methidathion, ethion and phosalone, used most frequently in the Elazig region. In order to find out the rates of degradation of pesticides on vegetables and fruit, the residues were analysed under native field conditions over a period of time. The effects of washing and peeling on the pesticide residue levels in fruits and vegetables were investigated. In addition, the effects of sunlight, volatilization and micro-organisms on the disappearance and fate of pesticide residues in the products were also studied in relation to time. 
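Illustrative note (not part of the cited work): the field-degradation entry above tracks how residues of diazinon, azinphos-methyl and other organophosphates decline over time. Assuming first-order decay kinetics, a common working model that the abstract itself does not state, and using hypothetical residue values, the sketch below shows how a rate constant and half-life can be estimated from time-series residue data by a log-linear fit.

```python
# Illustrative sketch: estimating a first-order degradation rate constant and
# half-life from pesticide residue measurements taken over time. First-order
# kinetics is an assumed model, and the sample data are hypothetical -- neither
# comes from the field study cited above.
import numpy as np

days    = np.array([0.0, 1.0, 3.0, 7.0, 14.0])    # time after application
residue = np.array([2.0, 1.6, 1.1, 0.55, 0.16])   # mg/kg, hypothetical values

def first_order_fit(t, c):
    """Fit ln(C) = ln(C0) - k*t and return (k, half_life_days, C0)."""
    slope, intercept = np.polyfit(t, np.log(c), 1)
    k = -slope
    return k, np.log(2) / k, np.exp(intercept)

k, t_half, c0 = first_order_fit(days, residue)
print(f"k ~ {k:.3f} 1/day, half-life ~ {t_half:.1f} days, fitted C0 ~ {c0:.2f} mg/kg")
# With the hypothetical numbers above this gives a half-life of roughly 4 days.
```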
--- paper_title: Biomaterials for Mediation of Chemical and Biological Warfare Agents paper_content: Recent events have emphasized the threat from chemical and biological warfare agents. Within the efforts to counter this threat, the biocatalytic destruction and sensing of chemical and biological weapons has become an important area of focus. The specificity and high catalytic rates of biological catalysts make them appropriate for decommissioning nerve agent stockpiles, counteracting nerve agent attacks, and remediation of organophosphate spills. A number of materials have been prepared containing enzymes for the destruction of and protection against organophosphate nerve agents and biological warfare agents. This review discusses the major chemical and biological warfare agents, decontamination methods, and biomaterials that have potential for the preparation of decontamination wipes, gas filters, column packings, protective wear, and self-decontaminating paints and coatings. --- paper_title: Colorimetric detection of chemical warfare simulants paper_content: Two simple chromogenic indicators (4) and (5), containing different supernucleophilic moieties, have been synthesized. Upon phosphorylation with two chemical warfare agent (CWA) simulants, a hypsochromic shift of approximately 50 nm is observed in an NaOH–DMSO solution. The oximate supernucleophile was found to be a better supernucleophile than the hydrazone moiety. Two X-ray crystal structures were obtained from unexpected synthetic side products obtained in this study. These will also be discussed. --- paper_title: A supramolecular microfluidic optical chemosensor. paper_content: A supramolecular microfluidic optical chemosensor (muFOC) has been fabricated. A serpentine channel has been patterned with a sol-gel film that incorporates a cyclodextrin supramolecule modified with a Tb(3+) macrocycle. Bright emission from the Tb(3+) ion is observed upon exposure of the (mu)FOC to biphenyl in aqueous solution. The signal transduction mechanism was elucidated by undertaking steady-state and time-resolved spectroscopic measurements directly on the optical chemosensor patterned within the microfluidic network. The presence of biphenyl in the cyclodextrin receptor site triggers Tb(3+) emission by an absorption-energy transfer-emission process. These results demonstrate that the intricate signal transduction mechanisms of supramolecular optical chemosensors are successfully preserved in microfluidic environments. --- paper_title: Influence of surface-active compounds on the response and sensitivity of cholinesterase biosensors for inhibitor determination paper_content: The influence of non-ionogenic surfactants, i.e., Tween-20, Triton X-100 and PEG-10 000, on the response of cholinesterase-based potentiometric biosensors and their sensitivity towards reversible and irreversible inhibitors were investigated. Acetyl-and butyrylcholinesterases were immobilized on nylon, cellulose nitrate films and tracing paper and were introduced into an assembly of potentiometric biosensors. The effect of surface-active compounds depends on the hydrophilic properties and porosity of the enzyme support material and the inhibition mechanism. In the range 0.002–0.3% m/v the surfactants show a reversible inhibiting effect on biosensor response. At lower concentrations (down to 10–4% m/v) the surfactants alter the analytical characteristics of reversible and irreversible inhibitor determination. 
The use of surface-active additives improves the biosensor selectivity in multi-component media. --- paper_title: Fluorescent Detection of Chemical Warfare Agents: Functional Group Specific Ratiometric Chemosensors paper_content: Indicators providing highly sensitive and functional group specific fluorescent response to diisopropyl fluorophosphate (DFP, a nerve gas (G-agent) simulant) are reported. Nonemissive indicator 2 reacts with DFP to give a cyclized compound 2+A- that shows a high emission due to its highly planar and rigid structure. Very weak emission was observed by the addition of HCl. Another indicator based on pyridyl naphthalene exhibits a large shift in its emission spectrum after reaction with DFP, which provides for quantitative ratiometric detection. --- paper_title: Molecularly imprinted polymer sensors for pesticide and insecticide detection in water. paper_content: Antibodies, peptides, and enzymes are often used as molecular recognition elements in chemical and biological sensors. However, their lack of stability and signal transduction mechanisms limits their use as sensing devices. Recent advances in the field of molecularly imprinted polymers (MIPs) have created synthetic materials that can mimic the function of biological receptors but with less stability constraints. These polymers can provide high sensitivity and selectivity while maintaining excellent thermal and mechanical stability. To further enhance the advantages of the traditional imprinted polymer approach, an additional fluorescent component has been introduced into these polymers. Such a component provides enhanced chemical affinity as well as a method for signal transduction. In this type of imprinted polymer, binding of the target analyte invokes a specific spectral signature from the reporter molecule. Previous work has provided molecularly imprinted polymers that are selective for the hydrolysis products of organophosphorus species such as the nerve agents sarin and soman. (A. L. Jenkins, O. M. Uy and G. M. Murray, Anal. Chem., 1999, 71, 373). In this paper the direct imprinting of non-hydrolyzed organophosphates including pesticides and insecticides is described. Detection limits for these newly developed MIP sensors are less than 10 parts per trillion (ppt) with long linear dynamic ranges (ppt to ppm) and response times of less than 15 min. --- paper_title: Polymer-based lanthanide luminescent sensor for detection of the hydrolysis product of the nerve agent Soman in water. paper_content: The techniques of molecular imprinting and sensitized lanthanide luminescence have been combined to create the basis for a sensor that can selectively measure the hydrolysis product of the nerve agent Soman in water. The sensor functions by selectively and reversibly binding the phosphonate hydrolysis product of this agent to a functionality-imprinted copolymer possessing a coordinatively bound luminescent lanthanide ion, Eu3+. Instrumental support for this device is designed to monitor the appearance of a narrow luminescence band in the 610-nm region of the Eu3+ spectrum that results when the analyte is coordinated to the copolymer. The ligand field shifted luminescence was excited using 1 mW of the 465.8-nm line of an argon ion laser and monitored via an optical fiber using a miniature spectrometer. For this configuration, the limit of detection for the hydrolysis product is 7 parts per trillion (ppt) in solution with a linear range from 10 ppt to 10 ppm. 
Chemical and spectroscopic selectivities have been combined to reduce the likelihood of false positive analyses. Chemically analogous organophosphorus pesticides tested against the sensor have been shown to not interfere with determination. --- paper_title: Polymer based lanthanide luminescent sensors for the detection of nerve agents paper_content: Several devices are being constructed to measure and detect the nerve agents Sarin and Soman. The devices function by selectively binding the phosphonate hydrolysis products to a luminescent functionality-imprinted copolymer. The copolymers possess a securely bound luminescent lanthanide ion, such as Eu3+, in a coordination complex that has been templated for the chemical functionality resulting from the hydrolysis of Sarin and Soman but has had a weakly bound anion substituted by mass action. The instrumental support for the device is being designed to monitor the change that occurs in the luminescence spectrum of the lanthanide when the analyte is coordinated. The ligand field shifted luminescence of the lanthanide is excited by a compact laser and monitored via optical fiber by either a filter photometer or a monochromator. Miniaturization will be applied to each of the lab bench components to produce a field portable device that will potentially be comparable in size to a pH meter. Initial results using an Ar ion laser excitation source providing 0.3 mW at 465.7 nm yield a limit of detection of 125 ppt. The chemical and spectroscopic selectivity of this device are being combined to reduce the likelihood of false positive analyses. --- paper_title: Studies on a surface acoustic wave (SAW) dosimeter sensor for organophosphorous nerve agents paper_content: As a follow-up of previous work on a Surface Acoustic Wave (SAW) sensor for nerve agents, irreversible response effects have been studied in more detail. Surface analytical studies indicated that degradation products are responsible for the effects observed. In addition it was tried to explore these effects for the development of a nerve agent dosimeter. Experiments were conducted to test the performance of a SAW sensor coated with La(III) 2-bis(carboxymethyl)amino hexadecanoic acid. The experiments revealed that many improvements must be made especially with respect to sensitivity and linear response behaviour. © 1997 Elsevier Science S.A. --- paper_title: Detection of a Chemical Warfare Agent Simulant in Various Aerosol Matrixes by Ion Mobility Time-of-Flight Mass Spectrometry paper_content: For the first time, a traditional radioactive nickel (63Ni) beta emission ionization source for ion mobility spectrometry was employed with an atmospheric pressure ion mobility orthogonal reflector time-of-flight mass spectrometer (IM(tof)MS) to detect a chemical warfare agent (CWA) simulant from aerosol samples. Aerosol-phase sampling employed a quartz cyclonic chamber for sample introduction. The simulant reference material, which closely mimicked the characteristic chemical structure of CWAs as defined and described by Schedule 1, 2, or 3 of the Chemical Warfare Convention treaty verification, was used in this study. An overall elevation in arbitrary signal intensity of approximately 1.0 orders of magnitude was obtained by the progressive increase of the thermal AP-IMS temperature from 75 to 275 degrees C.
A mixture of one G-type nerve simulant (dimethyl methylphosphonate (DMMP)) in four (water, kerosene, gasoline, diesel) matrixes was found in each case (AP-IMS temperature 75-275 degrees C) to be clearly resolved in less than 2.20 x 10(4) micros using the IM(tof)MS instrument. Corresponding ions, masses, drift times, K(o) values, and arbitrary signal intensities for each of the sample matrixes are reported for the CWA simulant DMMP. --- paper_title: Determination of Pesticides Using Electrochemical Enzymatic Biosensors paper_content: In the recent decade numerous biosensing methods for detection of pesticides have been developed using integrated enzymatic biosensors and immunosensors. Enzymatic determination of pesticides is most often based on inhibition of the activity of selected enzymes such as cholinesterases, organophosphate hydrolase, alkaline and acid phosphatase, ascorbate oxidase, acetolactate synthase and aldehyde dehydrogenase. Enzymatic biosensors were developed using various electrochemical signal transducers, different methods of enzyme immobilization and various measuring methodologies. Application of single-use screen-printed biosensors in batch measurements and flow-injection analysis with enzyme biosensors are most intensively developed procedures. An improvement of detectability level can be achieved by the use of recombinant enzyme mutants, while multi-component determinations by the use of biosensor matrices and data processing with artificial neural networks. In some cases the determined pesticide can be also a substrate of enzymatic reaction. Another area of development of biosensors for determination of pesticides is in the design of microbial biosensors and photosystem-based biosensors with electrochemical biosensors. --- paper_title: Cooperation between Artificial Receptors and Supramolecular Hydrogels for Sensing and Discriminating Phosphate Derivatives paper_content: This study has successfully demonstrated that the cooperative action of artificial receptors with semi-wet supramolecular hydrogels may produce a unique and efficient molecular recognition device not only for the simple sensing of phosphate derivatives, but also for discriminating among phosphate derivatives. We directly observed by confocal laser scanning microscopy that fluorescent artificial receptors can dynamically change the location between the aqueous cavity and the hydrophobic fibers upon guest-binding under semi-wet conditions provided by the supramolecular hydrogel. On the basis of such a guest-dependent dynamic redistribution of the receptor molecules, a sophisticated means for molecular recognition of phosphate derivatives can be rationally designed in the hydrogel matrix. That is, the elaborate utilization of the hydrophobic fibrous domains, as well as the water-rich hydrophilic cavities, enables us to establish three distinct signal transduction modes for phosphate sensing: the use of (i) ... --- paper_title: Highly Sensitive and Selective Amperometric Microbial Biosensor for Direct Determination ofp-Nitrophenyl-Substituted Organophosphate Nerve Agents paper_content: We report herein a whole cell-based amperometric biosensor for highly selective, highly sensitive, direct, single-step, rapid, and cost-effective determination of organophosphate pesticides with a p-nitrophenyl substituent. 
The biosensor was comprised of a p-nitrophenol degrader, Pseudomonas putida JS444, genetically engineered to express organophosphorus hydrolase (OPH) on the cell surface immobilized on the carbon paste electrode. Surface-expressed OPH catalyzed hydrolysis of the p-nitrophenyl substituent organophosphorus pesticides such as paraoxon, parathion, and methyl parathion to release p-nitrophenol, which was subsequently degraded by the enzymatic machinery of P. putida JS444. The electrooxidization current of the intermediates was measured and correlated to the concentration of organophosphates. The best sensitivity and response time were obtained using a sensor constructed with 0.086 mg dry weight of cells operating at 600 mV applied potential (vs Ag/AgCl reference) in 50 mM citrate--phosphate pH 7.5 buffer with 50 microM CoCl2 at room temperature. Under optimum operating conditions the biosensor measured as low as 0.28 ppb of paraoxon, 0.26 ppb of methyl parathion, and 0.29 ppb parathion. These detection limits are comparable to cholinesterase inhibition-based biosensors. Unlike the inhibition-based format, this biosensor manifests a selective response to organophosphate pesticides with a p-nitrophenyl substituent only, has a simplified single-step protocol with short response time, and can be used for repetitive/multiple and on-line analysis. The service life of the microbial amperometric biosensor was 5 days when stored in the operating buffer at 4 degrees C. The new biosensor offers great promise for rapid environmental monitoring of OP pesticides with nitrophenyl substituent. --- paper_title: Spectrophotometric detection of organophosphate diazinon by porphyrin solution and porphyrin-dyed cotton fabric paper_content: Abstract The absorbance spectrum of porphyrin meso-tetraphenylporphine (TPP) shifts to a shorter wavelength when interacting with the organophosphate diazinon. This spectral shift in the presence of diazinon is more obvious in the difference spectra (TPP + diazinon) − TPP, and can be observed in porphyrin–DMF solution and porphyrin-dyed cotton fabric. In solution, the difference spectrum has a peak at 412 nm and a trough at 421 nm. For TPP dyed cotton fabric, the difference spectrum has a peak at 415 nm and a trough at 430 nm. The absorbance difference (Δ A ) between peak and trough in the difference spectra has a linear relationship with diazinon concentration. This spectral property of porphyrin can be used to detect diazinon in the environment. Diazinon can be detected at 0.5 ppm level by TPP in solution, and at 11 ppm level by TPP dyed cotton fabric. The solid state detection capability of TPP dyed cotton fabric implies that textiles can serve as the platform for chemical sensors. --- paper_title: Acetylcholinesterase in organic solvents for the detection of pesticides: Biosensor application paper_content: Abstract Acetylcholinesterase (AChE) from electric eel, in a free or immobilized state, can be used for the detection of insecticides. This system is convenient because of the selectivity and specificity of the inhibition of AChE by organophosphorus and carbamate insecticides. However, these pesticides are highly soluble only in organic solvents. This article deals with the determination of the activity of AChE in organic solvents. Firstly, the effect of different organic solvents on activity was determined using the free enzyme. The AChE activity was maintained using the free enzyme. The AChE activity was maintained depending on the nature of the solvent. 
The results were applied to the biosensing system and a new method for the detection of organophosphorus and carbamate insecticides was developed. A correlation between the AChE activity and a physico-chemical parameter, was found in order to predict the effect of the solvent on enzyme activity. Upon comparison of the correlation curve obtained with free and immobilized enzyme, it appeared that immobilization enhanced the stability of the enzyme and increased the number of usable organic solvents. The inhibition of AChE by organophosphorus and carbamate insecticides was tested in organic solvents and the limit of detection determined. The inhibitory capacity of AChE was maintained in most organic solvents. The reactivation of immobilized enzyme with 2-pyridine aldoxime methiodide (2-PAM) allowed the repeated use of the same enzyme electrode. The application of the biosensor for the detection of organophosphorus and carbamate insecticides in organic solvents using chemical knowledge will be useful for the detection of pesticide residues present in water and food at very low levels. --- paper_title: Biomaterials for Mediation of Chemical and Biological Warfare Agents paper_content: Recent events have emphasized the threat from chemical and biological warfare agents. Within the efforts to counter this threat, the biocatalytic destruction and sensing of chemical and biological weapons has become an important area of focus. The specificity and high catalytic rates of biological catalysts make them appropriate for decommissioning nerve agent stockpiles, counteracting nerve agent attacks, and remediation of organophosphate spills. A number of materials have been prepared containing enzymes for the destruction of and protection against organophosphate nerve agents and biological warfare agents. This review discusses the major chemical and biological warfare agents, decontamination methods, and biomaterials that have potential for the preparation of decontamination wipes, gas filters, column packings, protective wear, and self-decontaminating paints and coatings. --- paper_title: A supramolecular microfluidic optical chemosensor. paper_content: A supramolecular microfluidic optical chemosensor (muFOC) has been fabricated. A serpentine channel has been patterned with a sol-gel film that incorporates a cyclodextrin supramolecule modified with a Tb(3+) macrocycle. Bright emission from the Tb(3+) ion is observed upon exposure of the (mu)FOC to biphenyl in aqueous solution. The signal transduction mechanism was elucidated by undertaking steady-state and time-resolved spectroscopic measurements directly on the optical chemosensor patterned within the microfluidic network. The presence of biphenyl in the cyclodextrin receptor site triggers Tb(3+) emission by an absorption-energy transfer-emission process. These results demonstrate that the intricate signal transduction mechanisms of supramolecular optical chemosensors are successfully preserved in microfluidic environments. --- paper_title: Fluorescent Sensors for the Detection of Chemical Warfare Agents paper_content: Along with biological and nuclear threats, chemical warfare agents are some of the most feared weapons of mass destruction. Compared to nuclear weapons they are relatively easy to access and deploy, which makes them in some aspects a greater threat to national and global security. A particularly hazardous class of chemical warfare agents are the nerve agents. 
Their rapid and severe effects on human health originate in their ability to block the function of acetylcholinesterase, an enzyme that is vital to the central nervous system. This article outlines recent activities regarding the development of molecular sensors that can visualize the presence of nerve agents (and related pesticides) through changes of their fluorescence properties. Three different sensing principles are discussed: enzyme-based sensors, chemically reactive sensors, and supramolecular sensors. Typical examples are presented for each class and different fluorescent sensors for the detection of chemical warfare agents are summarized and compared. --- paper_title: Characterization of organophosphorus hydrolases and the genetic manipulation of the phosphotriesterase from Pseudomonas diminuta paper_content: Abstract There are a variety of enzymes which are specifically capable of hydrolyzing organophosphorus esters with different phosphoryl bonds from the typical phosphotriester bonds of common insecticidal neurotoxins (e.g. paraoxon or coumaphos) to the phosphonate-fluoride bonds of chemical warfare agents (e.g. soman or sarin). These enzymes comprise a diverse set of enzymes whose basic architecture and substrate specificities vary dramatically, yet they appear to be ubiquitous throughout nature. The most thoroughly studied of these enzymes is the organophosphate hydrolase (opd gene product) of Pseudomonas diminuta and Flavobacterium sp. ATCC 27551, and the heterologous expression, post-translational modification, and genetic engineering studies undertaken with this enzyme are described. --- paper_title: Acetylcholinesterase Fiber-Optic Biosensor for Detection of Anticholinesterases paper_content: An optical sensor for anticholinesterases (AntiChEs) was constructed by immobilizing fluorescein isothiocyanate (FITC)-tagged eel electric organ acetylcholinesterase (AChE) on quartz fibers and monitoring enzyme activity. The pH-dependent fluorescent signal generated by FITC-AChE, present in the evanescent zone on the fiber surface, was quenched by the protons produced during acetylcholine (ACh) hydrolysis. Analysis of the fluorescence response showed Michaelis-Menten kinetics with a Kapp value of 420 microM for ACh hydrolysis. The reversible inhibitor edrophonium (0.1 mM) inhibited AChE and consequently reduced fluorescence quenching. The biosensor response immediately recovered upon its removal. The carbamate neostigmine (0.1 mM) also inhibited the biosensor response but recovery was much slower. In the presence of ACh, the organophosphate (OP) diisopropylfluorophosphate (DFP) at 0.1 mM did not interfere with the ACh-dependent fluorescent signal quenching, but preexposure of the biosensor to DFP in absence of ACh inhibited totally and irreversibly the biosensor response. However, the DFP-treated AChE biosensor recovered fully after a 10-min perfusion with pralidoxime (2-PAM). Echothiophate, a quaternary ammonium OP, inhibited the ACh-induced fluorescence quenching in the presence of ACh and the phosphorylated biosensor was reactivated with 2-PAM. These effects reflected the mechanism of action of the inhibitors with AChE and the inhibition constants obtained were comparable to those from colorimetric methods. The biosensor detected concentrations of the carbamate insecticides bendiocarb and methomyl and the OPs echothiophate and paraoxon in the nanomolar to micromolar range. 
Malathion, parathion, and dicrotophos were not detected even at millimolar concentrations; however, longer exposure or prior modification of these compounds (i.e., to malaoxon, paraoxon) may increase the biosensor detection limits. This AChE biosensor is fast, sensitive, reusable, and relatively easy to operate. Since the instrument is portable and can be self-contained, it shows potential adaptability to field use. --- paper_title: Detection of paraoxon by immobilized organophosphorus hydrolase in a Langmuir–Blodgett film paper_content: Abstract Langmuir–Blodgett (LB) film deposition technique was employed for the immobilization of organophosphorus hydrolase (OPH). OPH enzyme was covalently bonded to a fluorescent probe, fluorescein isothiocyanate (FITC), and used as a biological recognition element. Under optimal experimental conditions, OPH monolayers were deposited onto the surface of silanized quartz slides as LB film and utilized as a bioassay for the detection of paraoxon. Two different methods were employed for detection of paraoxon: the fluorescence quenching of the fluorescence probe (FITC) covalently bonded to OPH and the UV–vis absorption spectrum of the paraoxon hydrolysis product. The UV–vis absorption measurement demonstrated a linear relationship between the absorbance at 400 nm and the concentration of paraoxon solutions over the range of 1.0 × 10 −7 –1.0 × 10 −5 M (0.27–27 ppm). By observing the FITC fluorescence quenching, the concentration of paraoxon can be detected as low as 10 −9 M (S/N = 3). The research described herein showed that the LB film bioassay had high sensitivity, rapid response time and good reproducibility. --- paper_title: Influence of surface-active compounds on the response and sensitivity of cholinesterase biosensors for inhibitor determination paper_content: The influence of non-ionogenic surfactants, i.e., Tween-20, Triton X-100 and PEG-10 000, on the response of cholinesterase-based potentiometric biosensors and their sensitivity towards reversible and irreversible inhibitors were investigated. Acetyl-and butyrylcholinesterases were immobilized on nylon, cellulose nitrate films and tracing paper and were introduced into an assembly of potentiometric biosensors. The effect of surface-active compounds depends on the hydrophilic properties and porosity of the enzyme support material and the inhibition mechanism. In the range 0.002–0.3% m/v the surfactants show a reversible inhibiting effect on biosensor response. At lower concentrations (down to 10–4% m/v) the surfactants alter the analytical characteristics of reversible and irreversible inhibitor determination. The use of surface-active additives improves the biosensor selectivity in multi-component media. --- paper_title: Fluorescence polarization immunoassay based on a monoclonal antibody for the detection of the organophosphorus pesticide parathion-methyl. paper_content: A fluorescence polarization immunoassay (FPIA) based on a monoclonal antibody for the detection of parathion-methyl (PM) was developed and optimized. Fluorescein-labeled PM derivatives (tracers) with different structures were synthesized and purified by thin-layer chromatography. The influence of immunogen and tracer structures on the assay characteristics was investigated. PM concentration determinable by the FPIA ranged from 25 to 10000 ppb. The detection limit was 15 ppb. Methanol extracts of vegetable, fruit, and soil samples were diluted 1/10 for the analysis. Recovery in spiked samples averaged between 85 and 110%. 
The method developed is characterized by high specificity and reproducibility (CV ranged from 1.5 to 9.1% for interassay and from 1.8 to 14.1% for intra-assay). The FPIA method can be applied to the screening of food and environmental samples for PM residues without complicated cleanup. --- paper_title: Development of fluorescence polarization immunoassay for the detection of organophosphorus pesticides parathion and azinphos-methyl. paper_content: Organophosphorus Pesticides (OPPs) are a group of artificially synthesized substances used in farms to control pests and to enhance agricultural production. Although these compounds show preferential toxicity to insects, they are also toxic to humans and mammals by the same mode of action. ELISA now is an alternative method to detect OPPs. But, it must bear heterogenous properties, since several separation steps are needed during the ELISA method protocols. The FPIAs, which belong to homogenous assay, for determination of OPPs parathion and zainphos-methyl have been developed. The characteristics of Dep-EDF and PM-B-EDF tracers binding with antibodies A and D were investigated in the antibodies dilution experiments. The PM-B-EDF tracer combination with antibody D was selected to construct the standard curve for parathion detection. The IC50 value and the detection of limit were 1.96 mg/L and 0.179 mg/L, respectively, as shown in the standard curve. The tracers of PBM-EDF 2 and 3, which were chased from 4 PBM-EDF tracers, exhibited the good standard curves based on the MAb AZI-110. The FPIA constructed to analyze the azinphos-methyl showed the IC50 1.003 mg/L and detection limit 0.955 mg/L when PBM-EDF 2 was employed and the IC50 0.1487 mg/L and detection limit 0.150 mg/L were obtained when PBM-EDF 3 was used. --- paper_title: Determination of organophosphorus insecticides with a choline electrochemical biosensor paper_content: Abstract Organophosphorus insecticides have been determined with an amperometric hydrogen peroxide based choline biosensor. This class of pesticides inhibits cholinesterase enzymes which in the presence of their substrates produce choline. The decrease in activity of these enzymes is monitored by the choline sensor and is correlated to the concentration of pesticide present in solution. Pesticides such as paraoxon, parathion, methyl paraoxon and methyl parathion have been coupled with acetylcholine and butyrylcholine esterases. Results showed that although the enzyme activity of acetylcholinesterase (AChE) was higher than butyrylcholinesterase (BuChE), the lower specificity of BuChE resulted in a higher enzyme inhibition. The detection limit was less than 1 ppb with an incubation time of 120 min and 2 ppb when the incubation time was 30 min. This method has been applied to the analysis of pesticides in water samples. Results, compared with those obtained with a different analytical procedure (liquid/liquid extraction and GLC determination), demonstrated that our method is a good analytical choice to measure the total anticholinesterase activity in water samples. --- paper_title: Anticholinesterase activity of a new carbamate, heptylphysostigmine, in view of its use in patients with Alzheimer-type dementia paper_content: The anticholinesterase activity of a new carbamate, heptylphysostigmine, was studied in vitro. This compound is a competitive inhibitor of acetylcholinesterase (or true cholinesterase) having Ki = (1 +/- 0.5) X 10(-7) M. 
The inhibition was instantaneous at the onset and did not diminish with prolonged incubation of the drug and enzyme. --- paper_title: Design of fluorescent self-assembled multilayers and interfacial sensing for organophosphorus pesticides. paper_content: Abstract This paper details the fabrication of indole (ID) self-assembled multilayers (SAMs) and fluorescence interfacial sensing for organophosphorus (OP) pesticides. Quartz/APES/AuNP/ l -Cys/ID film was constructed on l -cysteine modified Quartz/APES/AuNP surface via electrostatic attraction between ID and l -cysteine. Cyclic voltammetry indicates that ID is immobilized successfully on the gold surface. Fluorescence of the Quartz/APES/AuNP/ l -Cys/ID film shows sensitive response toward OPs. The fluorescent sensing conditions of the SAMs are optimized that allow linear fluorescence response for methylparathion and monocrotophos over 5.97 × 10−7 to 3.51 × 10−6 g L−1 and 3.98 × 10−6 to 3.47 × 10−5 g L−1, with detection limit of 6.1 × 10−8 gL−1 and 3.28 × 10−6 gL−1, respectively. Compared to bulk phase detection, interfacial fluorescence sensing based on the SAMs technology shows higher sensitivity by at least 2 order of magnitude. --- paper_title: Use a Fluorescent Molecular Sensor for the Detection of Pesticides and Herbicides in Water paper_content: A fluorescent chemosensor based on modified �-cyclodextrin was used for the detection of pesticides and her- bicides in water. Binding constants, thermodynamic parameters and sensitivity factors were calculated and supported by AM1 semi-empirical method calculation. The results show clearly that a fluorescent chemical sensor based on modified � - cyclodextrin detects quantitatively the presence of pesticides or herbicides and allow a direct analysis of these pollutants as a new method of detection with limits of the order of ppm for the whole of the analytes. --- paper_title: Fluorescent sensors for organophosphorus nerve agent mimics. paper_content: We present a small molecule sensor that provides an optical response to the presence of an organophosphorus (OP)-containing nerve agent mimic. The design contains three key features: a primary alcohol, a tertiary amine in close proximity to the alcohol, and a fluorescent group used as the optical readout. In the sensor's rest state, the lone pair of electrons of the basic amine quenches the fluorescence of the nearby fluorophore through photoinduced electron transfer (PET). Exposure to an OP nerve agent mimic triggers phosphorylation of the primary alcohol followed rapidly by an intramolecular substitution reaction as the amine displaces the created phosphate. The quaternized ammonium salt produced by this cyclization reaction no longer possesses a lone pair of electrons, and a fluorescence readout is observed as the nonradiative PET quenching pathway of the fluorophore is shut down. --- paper_title: Fluorescent Detection of Chemical Warfare Agents: Functional Group Specific Ratiometric Chemosensors paper_content: Indicators providing highly sensitive and functional group specific fluorescent response to diisopropyl fluorophosphate (DFP, a nerve gas (G-agent) simulant) are reported. Nonemissive indicator 2 reacts with DFP to give a cyclized compound 2+A- that shows a high emission due to its highly planar and rigid structure. Very weak emission was observed by the addition of HCl. 
Another indicator based on pyridyl naphthalene exhibits a large shift in its emission spectrum after reaction with DFP, which provides for quantitative ratiometric detection. --- paper_title: Phosphoryl group transfer : evolution of a catalytic scaffold paper_content: It is proposed that enzymic phosphoryl-transfer reactions occur by concerted, step-wise, associative (phosphorane-intermediate) or dissociative (metaphosphate-intermediate) mechanisms, as dictated by the catalytic scaffold and the reactants. During the evolution of a phosphotransferase family, the mechanism of the phosphoryl-transfer reaction is in constant flux, potentially changing with each adaptation of the catalytic scaffold to a new phosphoryl-donor-acceptor pair. Phosphotransferases of the recently discovered haloacid dehalogenase superfamily of enzymes, one of the largest and most ubiquitous of the phosphotransferase families characterized to date, are described in the context of the co-evolution of the catalytic scaffold and mechanism. --- paper_title: Dual colorimetric and electrochemical sensing of organothiophosphorus pesticides by an azastilbene derivative paper_content: Abstract We have investigated the optical and electrochemical changes of the azastilbene, dimethyl-[4-(2-quinolin-2-yl-vinyl)-phenyl]-amine (DQA), with four organothiophosphorus (OTP) pesticides: ethion, malathion, parathion, and fenthion. Significant changes in UV–visible absorbance wavelength and in electrochemical signals indicate the effectiveness of DQA as an OTP sensor. --- paper_title: Dual optical and electrochemical saccharide detection based on a dipyrido[3,2-a:2'3'-c]phenazine (DPPZ) ligand paper_content: We report a new molecular sensor based on dipyrido[3,2-a:2′3′-c]phenazine (DPPZ) functionalized with boronic acid groups. The sensor binds to saccharides and upon binding results in changes in fluorescence intensity as well as cathodic shifts in the sensor's formal potential. The ability of the new DPPZ-based sensor to provide both electrochemical and optical signal outputs demonstrates the viability of this family of molecules to be developed as dual-signal detectors. Measurements to determine the stability constants with four saccharides are shown. --- paper_title: Dual signal transduction mediated by a single type of olfactory receptor expressed in a heterologous system paper_content: Controversy exists over the relationship between the cAMP and IP3 pathways in vertebrate olfactory signal transduction, as this process is known to occur by either of the two pathways. Recent studies have shown that a single olfactory neuron responds to both cAMP- and IP3-producing odorants, suggesting the existence of an olfactory receptor protein that can recognize both ligands. In this study we found that the rat olfactory receptor I7, stably expressed in HEK-293 cells, triggers the cAMP pathway upon stimulation by a specific odorant (octanal) at concentrations lower than 10⁻⁴ M; however, the receptor triggers both pathways at higher concentrations. This indicates that a single olfactory receptor, stimulated by a single pathway-inducing odorant, can evoke both pathways at high odorant concentrations. Using this heterologous system, both the dose-dependent response and receptor I7 specificity were analyzed.
The dose-dependent Ca²⁺ response curve, which also includes the release of Ca²⁺ ions from internal stores at high odorant concentrations, was not monotonic, but had a local maximum and minimum at 10⁻¹⁰ and 10⁻⁷ M octanal, respectively, and reached a plateau at 10⁻² M octanal. The specificity of the I7 receptor was lower when exposed to higher concentrations of odorants. --- paper_title: Protonation dependent electron transfer in 2-styrylquinolines paper_content: Abstract The absorption and emission spectra for 4′-substituted-2-trans-styrylquinoline (X = NMe₂, 1; H, 2; CN, 3; NO₂, 4) and 4′-N,N-dimethylamino-2-trans-styrylnaphthalene 5 were studied in various solvents and at various acid concentrations. Monoprotonated or doubly protonated forms of 1 are present depending on the acid concentration. Excited state deprotonation of the doubly protonated form of 1 is observed in aprotic dichloromethane solvent. This excited state deprotonation process can be prevented by introducing protic methanol to the aprotic solvent media. ---
Title: Fluorescent Chemosensors for Toxic Organophosphorus Pesticides: A Review Section 1: Introduction Description 1: Provide an overview of the environmental impact of organophosphorus pesticides, the necessity for monitoring these compounds, and the advantages of using fluorescent chemosensors. Section 2: Structure of Organophosphorus Compounds Description 2: Detail the chemical structure of organophosphorus pesticides, including the different types and their basic structural components. Section 3: OP Compounds and Their Toxicity Description 3: Discuss the toxic effects of organophosphorus compounds on human health and the environment, including their mechanisms of action and routes of exposure. Section 4: Advances in Detection of OP compounds Description 4: Describe the various advanced techniques developed for the detection of organophosphorus compounds, emphasizing traditional and newer methods. Section 5: Fluorescence-based Biosensors for OP Compounds Description 5: Review the development and applications of fluorescence-based biosensors specifically designed for detecting organophosphorus compounds. Section 6: Fluorescence-based Chemosensor Detection Methods Description 6: Discuss the innovative methods for the detection of organophosphorus compounds using fluorescent chemosensors, detailing specific examples and their mechanisms. Section 7: Sensors with Multiple Modes of Signal Transduction Description 7: Present multimodal sensing technologies and their benefits in minimizing false positives in the detection of organophosphorus pesticides. Section 8: Future Perspectives Description 8: Outline the future directions and improvements needed for developing more efficient, selective, and robust fluorescent chemosensors for organophosphorus pesticides.
A survey of open source tools for machine learning with big data in the Hadoop ecosystem
10
--- paper_title: The R project for statistical computing paper_content: A car seat for a young child which may be oriented in either a sitting or reclining position. The orientation of the car seat can be changed without disturbing the occupant or the secured position of the supporting frame. The car seat includes a seat structure, a support frame and linkage therebetween. The seat structure is designed to enclose the occupant for protection during severe maneuvering and collisions and includes a restrainer positioned across the front of the occupant which advantageously distributes the impact force on the occupant during a collision. The restrainer is held in place by a secondary seat belt system which does not require unbuckling when the seat orientation is changed. The linkage between the seat structure and the support frame provides a high seating position for comfort and visability and a reclining position for resting. --- paper_title: Addressing big data issues in Scientific Data Infrastructure paper_content: Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses a nature and definition of Big Data that include such features as Volume, Velocity, Variety, Value and Veracity. The paper refers to different scientific communities to define requirements on data management, access control and security. The paper introduces the Scientific Data Lifecycle Management (SDLM) model that includes all the major stages and reflects specifics in data management in modern e-Science. The paper proposes the SDI generic architecture model that provides a basis for building interoperable data or project centric SDI using modern technologies and best practices. The paper explains how the proposed models SDLM and SDI can be naturally implemented using modern cloud based infrastructure services provisioning model and suggests the major infrastructure components for Big Data. --- paper_title: Scaling up machine learning: parallel and distributed approaches paper_content: This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision). The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production. Visit the tutorial website at http://hunch.net/~large_scale_survey/ --- paper_title: Managing Big Data for Scientific Visualization paper_content: Many areas of endeavor have problems with big data. 
Some classical business applications have faced big data for some time (e.g. airline reservation systems), and newer business applications to exploit big data are under construction (e.g. data warehouses, federations of databases). While engineering and scientific visualization have also faced the problem for some time, solutions are less well developed, and common techniques are less well understood. In this section we offer some structure to understand what has been done to manage big data for engineering and scientific visualization, and to understand and go forward in areas that may prove fruitful. With this structure as backdrop, we discuss the work that has been done in management of big data, as well as our own work on demand-paged segments for fluid flow visualization. --- paper_title: Apache Hadoop YARN: yet another resource negotiator paper_content: The initial design of Apache Hadoop [1] was tightly focused on running massive, MapReduce jobs to process a web crawl. For increasingly diverse companies, Hadoop has become the data and computational agora---the de facto place where data and computational resources are shared and accessed. This broad adoption and ubiquitous usage has stretched the initial design well beyond its intended target, exposing two key shortcomings: 1) tight coupling of a specific programming model with the resource management infrastructure, forcing developers to abuse the MapReduce programming model, and 2) centralized handling of jobs' control flow, which resulted in endless scalability concerns for the scheduler. In this paper, we summarize the design, development, and current state of deployment of the next generation of Hadoop's compute platform: YARN. The new architecture we introduced decouples the programming model from the resource management infrastructure, and delegates many scheduling functions (e.g., task fault-tolerance) to per-application components. We provide experimental evidence demonstrating the improvements we made, confirm improved efficiency by reporting the experience of running YARN on production environments (including 100% of Yahoo! grids), and confirm the flexibility claims by discussing the porting of several programming frameworks onto YARN viz. Dryad, Giraph, Hoya, Hadoop MapReduce, REEF, Spark, Storm, Tez. --- paper_title: Large-scale machine learning at twitter paper_content: The success of data-driven solutions to difficult problems, along with the dropping costs of storing and processing massive amounts of data, has led to growing interest in large-scale machine learning. This paper presents a case study of Twitter's integration of machine learning tools into its existing Hadoop-based, Pig-centric analytics platform. We begin with an overview of this platform, which handles "traditional" data warehousing and business intelligence tasks for the organization. The core of this work lies in recent Pig extensions to provide predictive analytics capabilities that incorporate machine learning, focused specifically on supervised classification. In particular, we have identified stochastic gradient descent techniques for online learning and ensemble methods as being highly amenable to scaling out to large amounts of data. In our deployed solution, common machine learning tasks such as data sampling, feature generation, training, and testing can be accomplished directly in Pig, via carefully crafted loaders, storage functions, and user-defined functions. 
This means that machine learning is just another Pig script, which allows seamless integration with existing infrastructure for data management, scheduling, and monitoring in a production environment, as well as access to rich libraries of user-defined functions and the materialized output of other scripts. --- paper_title: Big Data with Cloud Computing: an insight on the computing environment, MapReduce, and programming frameworks paper_content: The term 'Big Data' has spread rapidly in the framework of Data Mining and Business Intelligence. This new scenario can be defined by means of those problems that cannot be effectively or efficiently addressed using the standard computing resources that we currently have. We must emphasize that Big Data does not just imply large volumes of data but also the necessity for scalability, i.e., to ensure a response in an acceptable elapsed time. When the scalability term is considered, usually traditional parallel-type solutions are contemplated, such as the Message Passing Interface or high performance and distributed Database Management Systems. Nowadays there is a new paradigm that has gained popularity over the latter due to the number of benefits it offers. This model is Cloud Computing, and among its main features we has to stress its elasticity in the use of computing resources and space, less management effort, and flexible costs. In this article, we provide an overview on the topic of Big Data, and how the current problem can be addressed from the perspective of Cloud Computing and its programming frameworks. In particular, we focus on those systems for large-scale analytics based on the MapReduce scheme and Hadoop, its open-source implementation. We identify several libraries and software projects that have been developed for aiding practitioners to address this new programming model. We also analyze the advantages and disadvantages of MapReduce, in contrast to the classical solutions in this field. Finally, we present a number of programming frameworks that have been proposed as an alternative to MapReduce, developed under the premise of solving the shortcomings of this model in certain scenarios and platforms. WIREs Data Mining Knowl Discov 2014, 4:380-409. doi: 10.1002/widm.1134 --- paper_title: Pregel: a system for large-scale graph processing paper_content: Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program. 
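To make the MapReduce programming model that the surveys above keep referring to concrete, the following is a minimal sketch of the canonical word-count job written against the standard Hadoop Java API (org.apache.hadoop.mapreduce). It illustrates the map/shuffle/reduce pattern only and is not code taken from any of the cited papers; the class names are arbitrary and the input/output paths are command-line placeholders.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (token, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts that the shuffle grouped under each token.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation to cut shuffle volume
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In the classic Hadoop setup, higher-level layers such as Pig scripts or Mahout's older distributed algorithms are ultimately executed as chains of jobs with this same mapper/reducer structure.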
--- paper_title: Large-scale machine learning at twitter paper_content: The success of data-driven solutions to difficult problems, along with the dropping costs of storing and processing massive amounts of data, has led to growing interest in large-scale machine learning. This paper presents a case study of Twitter's integration of machine learning tools into its existing Hadoop-based, Pig-centric analytics platform. We begin with an overview of this platform, which handles "traditional" data warehousing and business intelligence tasks for the organization. The core of this work lies in recent Pig extensions to provide predictive analytics capabilities that incorporate machine learning, focused specifically on supervised classification. In particular, we have identified stochastic gradient descent techniques for online learning and ensemble methods as being highly amenable to scaling out to large amounts of data. In our deployed solution, common machine learning tasks such as data sampling, feature generation, training, and testing can be accomplished directly in Pig, via carefully crafted loaders, storage functions, and user-defined functions. This means that machine learning is just another Pig script, which allows seamless integration with existing infrastructure for data management, scheduling, and monitoring in a production environment, as well as access to rich libraries of user-defined functions and the materialized output of other scripts. --- paper_title: HaLoop: Efficient Iterative Data Processing on Large Clusters paper_content: The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, model fitting, and so on. This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, it also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by 1.85, and shuffles only 4% of the data between mappers and reducers. --- paper_title: Evaluating MapReduce frameworks for iterative Scientific Computing applications paper_content: Scientific Computing deals with solving complex scientific problems by applying resource-hungry computer simulation and modeling tasks on-top of supercomputers, grids and clusters. Typical scientific computing applications can take months to create and debug when applying de facto parallelization solutions like Message Passing Interface (MPI), in which the bulk of the parallelization details have to be handled by the users. Frameworks based on the MapReduce model, like Hadoop, can greatly simplify creating distributed applications by handling most of the parallelization and fault recovery details automatically for the user. However, Hadoop is strictly designed for simple, embarrassingly parallel algorithms and is not suitable for complex and especially iterative algorithms often used in scientific computing. 
The goal of this work is to analyze alternative MapReduce frameworks to evaluate how well they suit for solving resource hungry scientific computing problems in comparison to the assumed worst (Hadoop MapReduce) and best case (MPI) implementations for iterative algorithms. --- paper_title: Making sense of performance in data analytics frameworks paper_content: There has been much research devoted to improving the performance of data analytics frameworks, but comparatively little effort has been spent systematically identifying the performance bottlenecks of these systems. In this paper, we develop blocked time analysis, a methodology for quantifying performance bottlenecks in distributed computation frameworks, and use it to analyze the Spark framework's performance on two SQL benchmarks and a production workload. Contrary to our expectations, we find that (i) CPU (and not I/O) is often the bottleneck, (ii) improving network performance can improve job completion time by a median of at most 2%, and (iii) the causes of most stragglers can be identified. --- paper_title: Beyond Batch Processing: Towards Real-Time and Streaming Big Data paper_content: Today, big data are generated from many sources, and there is a huge demand for storing, managing, processing, and querying on big data. The MapReduce model and its counterpart open source implementation Hadoop, has proven itself as the de facto solution to big data processing, and is inherently designed for batch and high throughput processing jobs. Although Hadoop is very suitable for batch jobs, there is an increasing demand for non-batch requirements like: interactive jobs, real-time queries, and big data streams. Since Hadoop is not suitable for these non-batch workloads, new solutions are proposed to these new challenges. In this article, we discussed two categories of these solutions: real-time processing, and stream processing of big data. For each category, we discussed paradigms, strengths and differences to Hadoop. We also introduced some practical systems and frameworks for each category. Finally, some simple experiments were performed to approve effectiveness of new solutions compared to available Hadoop-based solutions. --- paper_title: Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing paper_content: Abstract : Many big data applications need to act on data arriving in real time. However, current programming models for distributed stream processing are relatively low-level often leaving the user to worry about consistency of state across the system and fault recovery. Furthermore, the models that provide fault recovery do so in an expensive manner, requiring either hot replication or long recovery times. We propose a new programming model discretized streams (D-Streams), that offers a high-level functional API, strong consistency, and efficient fault recovery. D-Streams support a new recovery mechanism that improves efficiency over the traditional replication and upstream backup schemes in streaming databases-parallel recovery of lost state-and unlike previous systems also mitigate stragglers. We implement D-Streams as an extension to the Spark cluster computing engine that lets users seamlessly intermix streaming, batch and interactive queries. Our system can process over 60 million records/second at sub-second latency on 100 nodes. 
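As a minimal, illustrative counterpart to the discretized streams (D-Streams) abstract above, the sketch below uses the Spark Streaming Java API (Spark 2.x style) to count words arriving on a TCP socket in two-second micro-batches. The host, port, batch interval, and local master setting are placeholder values chosen for the example, not anything prescribed by the cited work.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class StreamingWordCount {
  public static void main(String[] args) throws InterruptedException {
    // local[2]: at least two threads, since the socket receiver occupies one of them
    SparkConf conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]");

    // Discretize the incoming stream into 2-second micro-batches (D-Streams)
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(2));

    // Placeholder source: lines of text arriving on a TCP socket
    JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);

    JavaDStream<String> words =
        lines.flatMap(line -> Arrays.asList(line.split("\\s+")).iterator());

    JavaPairDStream<String, Integer> counts =
        words.mapToPair(w -> new Tuple2<>(w, 1))
             .reduceByKey((a, b) -> a + b); // per-batch word counts

    counts.print(); // print a sample of each micro-batch's result

    ssc.start();
    ssc.awaitTermination();
  }
}
```

Because each micro-batch is manipulated with the same operations used on ordinary RDDs, the same code style carries over to batch and interactive jobs, which is the mixing of workloads the D-Streams paper emphasizes.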
--- paper_title: Efficient In-memory Data Management: An Analysis paper_content: This paper analyzes the performance of three systems for in-memory data management: Memcached, Redis and the Resilient Distributed Datasets (RDD) implemented by Spark. By performing a thorough performance analysis of both analytics operations and fine-grained object operations such as set/get, we show that neither system handles efficiently both types of workloads. For Memcached and Redis the CPU and I/O performance of the TCP stack are the bottlenecks -- even when serving in-memory objects within a single server node. RDD does not support efficient get operation for random objects, due to a large startup cost of the get job. Our analysis reveals a set of features that a system must support in order to achieve efficient in-memory data management. --- paper_title: A comparison of platforms for implementing and running very large scale machine learning algorithms paper_content: We describe an extensive benchmark of platforms available to a user who wants to run a machine learning (ML) inference algorithm over a very large data set, but cannot find an existing implementation and thus must "roll her own" ML code. We have carefully chosen a set of five ML implementation tasks that involve learning relatively complex, hierarchical models. We completed those tasks on four different computational platforms, and using 70,000 hours of Amazon EC2 compute time, we carefully compared running times, tuning requirements, and ease-of-programming of each. --- paper_title: Evaluating MapReduce frameworks for iterative Scientific Computing applications paper_content: Scientific Computing deals with solving complex scientific problems by applying resource-hungry computer simulation and modeling tasks on-top of supercomputers, grids and clusters. Typical scientific computing applications can take months to create and debug when applying de facto parallelization solutions like Message Passing Interface (MPI), in which the bulk of the parallelization details have to be handled by the users. Frameworks based on the MapReduce model, like Hadoop, can greatly simplify creating distributed applications by handling most of the parallelization and fault recovery details automatically for the user. However, Hadoop is strictly designed for simple, embarrassingly parallel algorithms and is not suitable for complex and especially iterative algorithms often used in scientific computing. The goal of this work is to analyze alternative MapReduce frameworks to evaluate how well they suit for solving resource hungry scientific computing problems in comparison to the assumed worst (Hadoop MapReduce) and best case (MPI) implementations for iterative algorithms. --- paper_title: Comparing Distributed Online Stream Processing Systems Considering Fault Tolerance Issues paper_content: This paper presents an analysis of four online stream processing systems (MillWheel, S4, Spark Streaming and Storm) regarding the strategies they use for fault tolerance. We use this sort of system for processing of data streams that can come from different sources such as web sites, sensors, mobile phones or any set of devices that provide real-time high-speed data. Typically, these systems are concerned more with the throughput in data processing than on fault tolerance. However, depending on the type of application, we should consider fault tolerance as an important a feature. 
The work describes some of the main strategies for fault tolerance - replication components, upstream backup, checkpoint and recovery - and shows how each of the four systems uses these strategies. In the end, the paper discusses the advantages and disadvantages of the combination of the strategies for fault tolerance in these systems. --- paper_title: Big Data: Principles and best practices of scalable realtime data systems paper_content: Services like social networks, web analytics, and intelligent e-commerce often need to manage data at a scale too big for a traditional database. As scale and demand increase, so does Complexity. Fortunately, scalability and simplicity are not mutually exclusiverather than using some trendy technology, a different approach is needed. Big data systems use many machines working in parallel to store and process data, which introduces fundamental challenges unfamiliar to most developers. Big Data shows how to build these systems using an architecture that takes advantage of clustered hardware along with new tools designed specifically to capture and analyze web-scale data. It describes a scalable, easy to understand approach to big data systems that can be built and run by a small team. Following a realistic example, this book guides readers through the theory of big data systems, how to use them in practice, and how to deploy and operate them once they're built. Purchase of the print book comes with an offer of a free PDF, ePub, and Kindle eBook from Manning. Also available is all code from the book. --- paper_title: Iterative parallel data processing with stratosphere: an inside look paper_content: Iterative algorithms occur in many domains of data analysis, such as machine learning or graph analysis. With increasing interest to run those algorithms on very large data sets, we see a need for new techniques to execute iterations in a massively parallel fashion. In prior work, we have shown how to extend and use a parallel data flow system to efficiently run iterative algorithms in a shared-nothing environment. Our approach supports the incremental processing nature of many of those algorithms. In this demonstration proposal we illustrate the process of implementing, compiling, optimizing, and executing iterative algorithms on Stratosphere using examples from graph analysis and machine learning. For the first step, we show the algorithm's code and a visualization of the produced data flow programs. The second step shows the optimizer's execution plan choices, while the last phase monitors the execution of the program, visualizing the state of the operators and additional metrics, such as per-iteration runtime and number of updates. To show that the data flow abstraction supports easy creation of custom programming APIs, we also present programs written against a lightweight Pregel API that is layered on top of our system with a small programming effort. --- paper_title: Big Data - State of the Art paper_content: This report is an investigation into the current state of the art with respect to "Big Data" frameworks and libraries. The primary purpose of this report is to investigate some of the available processing and analytical frameworks and/or libraries, identify some of their strengths and weaknesses through the application of a set of criteria. This criteria can then be used to compare other frameworks, systems, or libraries that are not present here to enable rapid and objective comparison. 
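The iterative workloads targeted by HaLoop, Stratosphere, and Spark in the abstracts above share a common shape: a large dataset is materialized once and re-scanned on every pass while a small model is updated. The sketch below illustrates that shape with a toy one-parameter gradient descent over a cached RDD using Spark's Java API; the data points, learning rate, and iteration count are made-up values for illustration only.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class IterativeGradientDescent {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("IterativeGD").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Toy (x, y) points lying roughly on y = 3x; a real job would load these from HDFS.
    List<double[]> data = Arrays.asList(
        new double[]{1.0, 3.1}, new double[]{2.0, 5.9},
        new double[]{3.0, 9.2}, new double[]{4.0, 11.8});

    // cache(): the dataset stays in memory and is re-scanned by every iteration
    JavaRDD<double[]> points = sc.parallelize(data).cache();
    long n = points.count();

    double theta = 0.0;         // single model parameter for y ~ theta * x
    double learningRate = 0.05;

    for (int iter = 0; iter < 20; iter++) {
      final double t = theta;   // captured by the distributed map function
      // Mean gradient of the squared error, computed as a distributed map + reduce
      double gradient = points
          .map(p -> (t * p[0] - p[1]) * p[0])
          .reduce((a, b) -> a + b) / n;
      theta -= learningRate * gradient;
    }

    System.out.println("Estimated slope: " + theta);
    sc.stop();
  }
}
```

Expressing the same loop as a sequence of independent Hadoop MapReduce jobs would re-read the input from HDFS on every iteration, which is precisely the overhead that loop-aware scheduling and in-memory caching are meant to remove.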
--- paper_title: A methodology for evaluating and selecting data mining software paper_content: As data mining evolves and matures more and more businesses are incorporating this technology into their business practices. However, currently data mining and decision support software is expensive and selection of the wrong tools can be costly in many ways. This paper provides direction and decision-making information to the practicing professional. A framework for evaluating data mining tools is presented and a methodology for applying this framework is described. Finally a case study to demonstrate the methods effectiveness is presented. This methodology represents the first-hand experience using many of the leading data mining tools against real business data at the Center for Data Insight (CDI) at Northern Arizona University (NAU). This is not a comprehensive review of commercial tools but instead provides a method and a point-of-reference for selecting the best software tool for a particular problem. Experience has shown that there is not one best data-mining tool for all purposes. This instrument is designed to accommodate differences in environments and problem domains. It is expected that this methodology will be used to publish tool comparisons and benchmarking results. --- paper_title: A Review of Ensemble Classification for DNA Microarrays Data paper_content: Ensemble classification has been a frequent topic of research in recent years, especially in bioinformatics. The benefits of ensemble classification (less prone to overfitting, increased classification performance, and reduced bias) are a perfect match for a number of issues that plague bioinformatics experiments. This is especially true for DNA microarray data experiments, due to the large amount of data (results from potentially tens of thousands of gene probes per sample) and large levels of noise inherent in the data. This work is a review of the current state of research regarding the applications of ensemble classification for DNA microarrays. We discuss what research thus far has demonstrated, as well as identify the areas where more research is required. --- paper_title: CLUSTERING-BASED NETWORK INTRUSION DETECTION paper_content: Recently data mining methods have gained importance in addressing network security issues, including network intrusion detection — a challenging task in network security. Intrusion detection systems aim to identify attacks with a high detection rate and a low false alarm rate. Classification-based data mining models for intrusion detection are often ineffective in dealing with dynamic changes in intrusion patterns and characteristics. Consequently, unsupervised learning methods have been given a closer look for network intrusion detection. We investigate multiple centroid-based unsupervised clustering algorithms for intrusion detection, and propose a simple yet effective self-labeling heuristic for detecting attack and normal clusters of network traffic audit data. The clustering algorithms investigated include, k-means, Mixture-Of-Spherical Gaussians, Self-Organizing Map, and Neural-Gas. The network traffic datasets provided by the DARPA 1998 offline intrusion detection project are used in our empirical investigation, which demonstrates the feasibility and promise of unsupervised learning methods for network intrusion detection. 
In addition, a comparative analysis shows the advantage of clustering-based methods over supervised classification techniques in identifying new or unseen attack types. --- paper_title: Large-scale machine learning at twitter paper_content: The success of data-driven solutions to difficult problems, along with the dropping costs of storing and processing massive amounts of data, has led to growing interest in large-scale machine learning. This paper presents a case study of Twitter's integration of machine learning tools into its existing Hadoop-based, Pig-centric analytics platform. We begin with an overview of this platform, which handles "traditional" data warehousing and business intelligence tasks for the organization. The core of this work lies in recent Pig extensions to provide predictive analytics capabilities that incorporate machine learning, focused specifically on supervised classification. In particular, we have identified stochastic gradient descent techniques for online learning and ensemble methods as being highly amenable to scaling out to large amounts of data. In our deployed solution, common machine learning tasks such as data sampling, feature generation, training, and testing can be accomplished directly in Pig, via carefully crafted loaders, storage functions, and user-defined functions. This means that machine learning is just another Pig script, which allows seamless integration with existing infrastructure for data management, scheduling, and monitoring in a production environment, as well as access to rich libraries of user-defined functions and the materialized output of other scripts. --- paper_title: The big data ecosystem at LinkedIn paper_content: The use of large-scale data mining and machine learning has proliferated through the adoption of technologies such as Hadoop, with its simple programming semantics and rich and active ecosystem. This paper presents LinkedIn's Hadoop-based analytics stack, which allows data scientists and machine learning researchers to extract insights and build product features from massive amounts of data. In particular, we present our solutions to the ``last mile'' issues in providing a rich developer ecosystem. This includes easy ingress from and egress to online systems, and managing workflows as production processes. A key characteristic of our solution is that these distributed system concerns are completely abstracted away from researchers. For example, deploying data back into the online system is simply a 1-line Pig command that a data scientist can add to the end of their script. We also present case studies on how this ecosystem is used to solve problems ranging from recommendations to news feed updates to email digesting to descriptive analytical dashboards for our members. --- paper_title: Big Data - State of the Art paper_content: This report is an investigation into the current state of the art with respect to "Big Data" frameworks and libraries. The primary purpose of this report is to investigate some of the available processing and analytical frameworks and/or libraries, identify some of their strengths and weaknesses through the application of a set of criteria. This criteria can then be used to compare other frameworks, systems, or libraries that are not present here to enable rapid and objective comparison. 
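Several of the abstracts in this section (the clustering-based intrusion detection study above and the Mahout clustering experiments later on) build on centroid-based k-means. As a reference point, here is a plain single-machine Java sketch of Lloyd's two alternating steps, assigning each point to its nearest centroid and then recomputing each centroid as the mean of its points; distributed implementations such as Mahout's essentially parallelize these same two steps across mappers and reducers. The toy "network flow" feature vectors, k, seed, and iteration count are invented for the example.

```java
import java.util.Arrays;
import java.util.Random;

public class SimpleKMeans {

  static double sqDist(double[] a, double[] b) {
    double s = 0.0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      s += d * d;
    }
    return s;
  }

  static double[][] cluster(double[][] points, int k, int maxIter, long seed) {
    Random rnd = new Random(seed);
    int dim = points[0].length;

    // Initialise centroids with k randomly chosen points (repeats possible; fine for a sketch)
    double[][] centroids = new double[k][];
    for (int j = 0; j < k; j++) {
      centroids[j] = points[rnd.nextInt(points.length)].clone();
    }

    for (int iter = 0; iter < maxIter; iter++) {
      double[][] sums = new double[k][dim];
      int[] counts = new int[k];

      // Assignment step: each point contributes to the running sum of its nearest centroid
      for (double[] p : points) {
        int best = 0;
        for (int j = 1; j < k; j++) {
          if (sqDist(p, centroids[j]) < sqDist(p, centroids[best])) {
            best = j;
          }
        }
        counts[best]++;
        for (int d = 0; d < dim; d++) {
          sums[best][d] += p[d];
        }
      }

      // Update step: each centroid becomes the mean of the points assigned to it
      for (int j = 0; j < k; j++) {
        if (counts[j] == 0) {
          continue; // leave an empty cluster's centroid where it is
        }
        for (int d = 0; d < dim; d++) {
          centroids[j][d] = sums[j][d] / counts[j];
        }
      }
    }
    return centroids;
  }

  public static void main(String[] args) {
    // Invented 2-D "network flow" feature vectors: two obvious groups plus an outlier
    double[][] flows = {
        {0.10, 0.20}, {0.15, 0.22}, {0.12, 0.18},
        {0.90, 0.80}, {0.88, 0.85}, {0.50, 0.10}
    };
    for (double[] c : cluster(flows, 2, 10, 42L)) {
      System.out.println(Arrays.toString(c));
    }
  }
}
```

In a clustering-based IDS, the resulting clusters would then still need to be labeled as normal or attack traffic, for example with a self-labeling heuristic of the kind described in the intrusion detection abstract above.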
--- paper_title: Cloud Based Predictive Analytics: Text Classification, Recommender Systems and Decision Support paper_content: This paper presents a detailed study of technologies based on Hadoop and MapReduce available over the cloud for large-scale data mining and predictive analytics. Although some studies may have shown that cloud technologies relying on the MapReduce framework do not perform as well as parallel database management systems, e.g., with ad hoc queries and interactive applications, MapReduce has still been widely used by many organizations for big data storage and analytics. A number of MapReduce based tools are broadly available over the cloud. In this work we explore the Apache Hive data warehousing solution and particularly its Mahout data mining libraries for predictive analytics. We present results in the context of text classification, recommender systems and decision support. We develop prototype tools in these areas and discuss our outcomes from the study useful to researchers and other professionals in cloud computing and application domains. To the best of our knowledge, ours is among the first few in-depth studies on Mahout with application prototypes available for use. --- paper_title: Case Study Evaluation of Mahout as a Recommender Platform paper_content: Various libraries have been released to support the development of recommender systems for some time, but it is only relatively recently that larger scale, open-source platforms have become readily available. In the context of such platforms, evaluation tools are important both to verify and validate baseline platform functionality, as well as to provide support for testing new techniques and approaches developed on top of the platform. We have adopted Apache Mahout as an enabling platform for our research and have faced both of these issues in employing it as part of our work in collaborative ltering. This paper presents a case study of evaluation focusing on accuracy and coverage evaluation metrics in Apache Mahout, a recent platform tool that provides support for recommender system application development. As part of this case study, we developed a new metric combining accuracy and coverage in order to evaluate functional changes made to Mahout’s collaborative ltering algorithms. --- paper_title: Open Source Recommendation Systems for Mobile Application paper_content: The aim of Recommender Systems is to suggest useful items to users. Three major techniques can be highlighted in these systems: Collaborative Filtering, Content-Based Filtering and Hybrid Filtering. The collaborative method proposes recommendations based on what a group of users have enjoyed and it is widely used in Open Source Recommender Systems. The work presented in this paper takes place in the context of SoliMobile Project that aims to design, build and implement a package of innovative services focused on the individual in unstable situation (unemployment, homeless, etc.). In this paper, we present a study of open source recommender systems and their usefulness for SoliMobile. The paper also presents how our recommender system is fed by extracting implicit ratings using the techniques of Web Usage Mining. --- paper_title: A Distributed Methodology for Imbalanced Classification Problems paper_content: Current important challenges in data mining research are triggered by the need to address various particularities of real-world problems, such as imbalanced data and error cost distributions. 
This paper presents Distributed Evolutionary Cost-Sensitive Balancing, a distributed methodology for dealing with imbalanced data and -- if necessary -- cost distributions. The method employs a genetic algorithm to search for an optimal cost matrix and base classifier settings, which are then employed by a cost-sensitive classifier, wrapped around the base classifier. Individual fitness computation is the most intensive task in the algorithm, but it also presents a high parallelization potential. Two different parallelization alternatives have been explored: a computation-driven approach, and a data-driven approach. Both have been developed within the Apache Watchmaker framework and deployed on Hadoop-based infrastructures. Experimental evaluations performed up to this point have indicated that the computation-driven approach achieves a good classification performance, but does not reduce the running time significantly, the data-driven approach reduces the running time for slow algorithms, such as the kNN and the SVM, while still yielding important performance improvements. --- paper_title: K-means Clustering in the Cloud -- A Mahout Test paper_content: The K-Means is a well known clustering algorithm that has been successfully applied to a wide variety of problems. However, its application has usually been restricted to small datasets. Mahout is a cloud computing approach to K-Means that runs on a Hadoop system. Both Mahout and Hadoop are free and open source. Due to their inexpensive and scalable characteristics, these platforms can be a promising technology to solve data intensive problems which were not trivial in the past. In this work we studied the performance of Mahout using a large data set. The tests were running on Amazon EC2 instances and allowed to compare the gain in runtime when running on a multi node cluster. This paper presents some results of ongoing research. --- paper_title: Toolkit-Based High-Performance Data Mining of Large Data on MapReduce Clusters paper_content: The enormous growth of data in a variety of applications has increased the need for high performance data mining based on distributed environments. However, standard data mining toolkits per se do not allow the usage of computing clusters. The success of MapReduce for analyzing large data has raised a general interest in applying this model to other, data intensive applications. Unfortunately current research has not lead to an integration of GUI based data mining toolkits with distributed file system based MapReduce systems. This paper defines novel principles for modeling and design of the user interface, the storage model and the computational model necessary for the integration of such systems. Additionally, it introduces a novel system architecture for interactive GUI based data mining of large data on clusters based on MapReduce that overcomes the limitations of data mining toolkits. As an empirical demonstration we show an implementation based on Weka and Hadoop. --- paper_title: FIU-Miner: a fast, integrated, and user-friendly system for data mining in distributed environment paper_content: The advent of Big Data era drives data analysts from different domains to use data mining techniques for data analysis. 
However, performing data analysis in a specific domain is not trivial; it often requires complex task configuration, onerous integration of algorithms, and efficient execution in distributed environments.Few efforts have been paid on developing effective tools to facilitate data analysts in conducting complex data analysis tasks. In this paper, we design and implement FIU-Miner, a Fast, Integrated, and User-friendly system to ease data analysis. FIU-Miner allows users to rapidly configure a complex data analysis task without writing a single line of code. It also helps users conveniently import and integrate different analysis programs. Further, it significantly balances resource utilization and task execution in heterogeneous environments. A case study of a real-world application demonstrates the efficacy and effectiveness of our proposed system. --- paper_title: Risk adjustment of patient expenditures: A big data analytics approach paper_content: For healthcare applications, voluminous patient data contain rich and meaningful insights that can be revealed using advanced machine learning algorithms. However, the volume and velocity of such high dimensional data requires new big data analytics framework where traditional machine learning tools cannot be applied directly. In this paper, we introduce our proof-of-concept big data analytics framework for developing risk adjustment model of patient expenditures, which uses the “divide and conquer” strategy to exploit the big-yet-rich data to improve the model accuracy. We leverage the distributed computing platform, e.g., MapReduce, to implement advanced machine learning algorithms on our data set. In specific, random forest regression algorithm, which is suitable for high dimensional healthcare data, is applied to improve the accuracy of our predictive model. Our proof-of-concept framework demonstrates the effectiveness of predictive analytics using random forest algorithm as well as the efficiency of the distributed computing platform. --- paper_title: IntegrityMR: Integrity assurance framework for big data analytics and management applications paper_content: Big data analytics and knowledge management is becoming a hot topic with the emerging techniques of cloud computing and big data computing model such as MapReduce. However, large-scale adoption of MapReduce applications on public clouds is hindered by the lack of trust on the participating virtual machines deployed on the public cloud. In this paper, we extend the existing hybrid cloud MapReduce architecture to multiple public clouds. Based on such architecture, we propose IntegrityMR, an integrity assurance framework for big data analytics and management applications. We explore the result integrity check techniques at two alternative software layers: the MapReduce task layer and the applications layer. We design and implement the system at both layers based on Apache Hadoop MapReduce and Pig Latin, and perform a series of experiments with popular big data analytics and management applications such as Apache Mahout and Pig on commercial public clouds (Amazon EC2 and Microsoft Azure) and local cluster environment. The experimental result of the task layer approach shows high integrity (98% with a credit threshold of 5) with non-negligible performance overhead (18% to 82% extra running time compared to original MapReduce). 
The experimental result of the application layer approach shows better performance compared with the task layer approach (less than 35% extra running time compared with the original MapReduce). --- paper_title: MLI: An API for Distributed Machine Learning paper_content: MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing. Its primary goal is to simplify the development of high-performance, scalable, distributed algorithms. Our initial results show that, relative to existing systems, this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability. --- paper_title: Predicting the severity of motor neuron disease progression using electronic health record data with a cloud computing Big Data approach paper_content: Motor neuron diseases (MNDs) are a class of progressive neurological diseases that damage the motor neurons. An accurate diagnosis is important for the treatment of patients with MNDs because there is no standard cure for the MNDs. However, the rates of false positive and false negative diagnoses are still very high in this class of diseases. In the case of Amyotrophic Lateral Sclerosis (ALS), current estimates indicate 10% of diagnoses are false-positives, while 44% appear to be false negatives. In this study, we developed a new methodology to profile specific medical information from patient medical records for predicting the progression of motor neuron diseases. We implemented a system using HBase and the random forest classifier of Apache Mahout to profile medical records provided by the Pooled Resource Open-Access ALS Clinical Trials Database (PRO-ACT) site, and we achieved 66% accuracy in the prediction of ALS progress. --- paper_title: Mahout in Action paper_content: Summary: Mahout in Action is a hands-on introduction to machine learning with Apache Mahout. Following real-world examples, the book presents practical use cases and then illustrates how Mahout can be applied to solve them. Includes a free audio- and video-enhanced ebook. About the Technology: A computer system that learns and adapts as it collects data can be really powerful. Mahout, Apache's open source machine learning project, captures the core algorithms of recommendation systems, classification, and clustering in ready-to-use, scalable libraries. With Mahout, you can immediately apply to your own projects the machine learning techniques that drive Amazon, Netflix, and others. About this Book: This book covers machine learning using Apache Mahout. Based on experience with real-world applications, it introduces practical use cases and illustrates how Mahout can be applied to solve them. It places particular focus on issues of scalability and how to apply these techniques against large data sets using the Apache Hadoop framework. This book is written for developers familiar with Java - no prior experience with Mahout is assumed.
What's Inside: Use group data to make individual recommendations; find logical clusters within your data; filter and refine with on-the-fly classification; free audio and video extras. Table of Contents: Meet Apache Mahout. Part 1, Recommendations: Introducing recommenders; Representing recommender data; Making recommendations; Taking recommenders to production; Distributing recommendation computations. Part 2, Clustering: Introduction to clustering; Representing data; Clustering algorithms in Mahout; Evaluating and improving clustering quality; Taking clustering to production; Real-world applications of clustering. Part 3, Classification: Introduction to classification; Training a classifier; Evaluating and tuning a classifier; Deploying a classifier; Case study: Shop It To Me. --- paper_title: Tackling the Poor Assumptions of Naive Bayes Text Classifiers paper_content: Naive Bayes is often used as a baseline in text classification because it is fast and easy to implement. Its severe assumptions make such efficiency possible but also adversely affect the quality of its results. In this paper we propose simple, heuristic solutions to some of the problems with Naive Bayes classifiers, addressing both systemic issues as well as problems that arise because text is not actually generated according to a multinomial model. We find that our simple corrections result in a fast algorithm that is competitive with state-of-the-art text classification algorithms such as the Support Vector Machine. --- paper_title: Big data solutions for predicting risk-of-readmission for congestive heart failure patients paper_content: Developing holistic predictive modeling solutions for risk prediction is extremely challenging in healthcare informatics. Risk prediction involves integration of clinical factors with socio-demographic factors, health conditions, disease parameters, hospital care quality parameters, and a variety of variables specific to each health care provider making the task increasingly complex. Unsurprisingly, many of such factors need to be extracted independently from different sources, and integrated back to improve the quality of predictive modeling. Such sources are typically voluminous, diverse, and vary significantly over time. Therefore, distributed and parallel computing tools collectively termed big data have to be developed. In this work, we study big data driven solutions to predict the 30-day risk of readmission for congestive heart failure (CHF) incidents. First, we extract useful factors from the National Inpatient Dataset (NIS) and augment it with our patient dataset from Multicare Health System (MHS). Then, we develop scalable data mining models to predict risk of readmission using the integrated dataset. We demonstrate the effectiveness and efficiency of the open-source predictive modeling framework we used, describe the results from various modeling algorithms we tested, and compare the performance against baseline non-distributed, non-parallel, non-integrated small data results previously published to demonstrate comparable accuracy over millions of records. --- paper_title: Evaluating parallel logistic regression models paper_content: Logistic regression (LR) has been widely used in applications of machine learning, thanks to its linear model. However, when the size of training data is very large, even such a linear model can consume excessive memory and computation time.
To tackle both resource and computation scalability in a big-data setting, we evaluate and compare different approaches in distributed platform, parallel algorithm, and sublinear approximation. Our empirical study provides design guidelines for choosing the most effective combination for the performance requirement of a given application. --- paper_title: Improving situational awareness for humanitarian logistics through predictive modeling paper_content: Humanitarian aid efforts in response to natural and man-made disasters often involve complicated logistical challenges. Problems such as communication failures, damaged infrastructure, violence, looting, and corrupt officials are examples of obstacles that aid organizations face. The inability to plan relief operations during disaster situations leads to greater human suffering and wasted resources. Our team used the Global Database of Events, Location, and Tone (GDELT), a machine-coded database of international events, for all of the models described in this paper. We produced a range of predictive models for the occurrence of violence in Sudan, including time series, general logistic regression, and random forest models using both R and Apache Mahout. We also undertook a validation of the data within GDELT to confirm the event, actor, and location fields according to specific, pre-determined criteria. Our team found that, on average, 81.2 percent of the event codes in the database accurately reflected the nature of the articles. The best regression models had a mean square error (MSE) of 316.6 and the area under the receiver operating characteristic curve (AUC) was 0.868. The final random forest models had a MSE of 339.6 and AUC of 0.861. Using Mahout did not provide any significant advantages over R in the creation of these models. --- paper_title: B-dids: Mining anomalies in a Big-distributed Intrusion Detection System paper_content: The focus of this paper is to present the architecture of a Big-distributed Intrusion Detection System (B-dIDS) to discover multi-pronged attacks which are anomalies existing across multiple subnets in a distributed network. The B-dIDS is composed of two key components: a big data processing engine and an analytics engine. The big data processing is done through HAMR, which is a next generation in-memory MapReduce engine. HAMR has reported high speedups over existing big data solutions across several analytics algorithms. The analytics engine comprises a novel ensemble algorithm, which extracts training data from clusters of the multiple IDS alarms. The clustering is utilized as a preprocessing step to re-label the datasets based on their high similarity to known potential attacks. The overall aim is to predict multi-pronged attacks that are spread across multiple subnets but can be missed if not evaluated in an integrated manner. --- paper_title: Big Data Analytics framework for Peer-to-Peer Botnet detection using Random Forests paper_content: Abstract Network traffic monitoring and analysis-related research has struggled to scale for massive amounts of data in real time. Some of the vertical scaling solutions provide good implementation of signature based detection. Unfortunately these approaches treat network flows across different subnets and cannot apply anomaly-based classification if attacks originate from multiple machines at a lower speed, like the scenario of Peer-to-Peer Botnets. 
In this paper the authors build up on the progress of open source tools like Hadoop, Hive and Mahout to provide a scalable implementation of quasi-real-time intrusion detection system. The implementation is used to detect Peer-to-Peer Botnet attacks using machine learning approach. The contributions of this paper are as follows: (1) Building a distributed framework using Hive for sniffing and processing network traces enabling extraction of dynamic network features; (2) Using the parallel processing power of Mahout to build Random Forest based Decision Tree model which is applied to the problem of Peer-to-Peer Botnet detection in quasi-real-time. The implementation setup and performance metrics are presented as initial observations and future extensions are proposed. --- paper_title: Using Mahout for Clustering Wikipedia's Latest Articles: A Comparison between K-means and Fuzzy C-means in the Cloud paper_content: This paper compares k-means and fuzzy c-means for clustering a noisy realistic and big dataset. We made the comparison using a free cloud computing solution Apache Mahout/ Hadoop and Wikipedia's latest articles. In the past the usage of these two algorithms was restricted to small datasets. As so, studies were based on artificial datasets that do not represent a real document clustering situation. With this ongoing research we found that in a noisy dataset, fuzzy c-means can lead to worse cluster quality than k-means. The convergence speed of k-means is not always faster. We found as well that Mahout is a promise clustering technology but the preprocessing tools are not developed enough for an efficient dimensionality reduction. From our experience the use of the Apache Mahout is premature. --- paper_title: Distributed approximate spectral clustering for large-scale datasets paper_content: Data-intensive applications are becoming important in many science and engineering fields, because of the high rates in which data are being generated and the numerous opportunities offered by the sheer amount of these data. Large-scale datasets, however, are challenging to process using many of the current machine learning algorithms due to their high time and space complexities. In this paper, we propose a novel approximation algorithm that enables kernel-based machine learning algorithms to efficiently process very large-scale datasets. While important in many applications, current kernel-based algorithms suffer from a scalability problem as they require computing a kernel matrix which takes O(N2) in time and space to compute and store. The proposed algorithm yields substantial reduction in computation and memory overhead required to compute the kernel matrix, and it does not significantly impact the accuracy of the results. In addition, the level of approximation can be controlled to tradeoff some accuracy of the results with the required computing resources. The algorithm is designed such that it is independent of the subsequently used kernel-based machine learning algorithm, and thus can be used with many of them. To illustrate the effect of the approximation algorithm, we developed a variant of the spectral clustering algorithm on top of it. Furthermore, we present the design of a MapReduce-based implementation of the proposed algorithm. We have implemented this design and run it on our own Hadoop cluster as well as on the Amazon Elastic MapReduce service. 
Experimental results on synthetic and real datasets demonstrate that significant time and memory savings can be achieved using our algorithm. --- paper_title: K-means Clustering in the Cloud -- A Mahout Test paper_content: The K-Means is a well known clustering algorithm that has been successfully applied to a wide variety of problems. However, its application has usually been restricted to small datasets. Mahout is a cloud computing approach to K-Means that runs on a Hadoop system. Both Mahout and Hadoop are free and open source. Due to their inexpensive and scalable characteristics, these platforms can be a promising technology to solve data intensive problems which were not trivial in the past. In this work we studied the performance of Mahout using a large data set. The tests were running on Amazon EC2 instances and allowed to compare the gain in runtime when running on a multi node cluster. This paper presents some results of ongoing research. --- paper_title: Comparative recommender system evaluation: benchmarking recommendation frameworks paper_content: Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy. Additionally, algorithmic implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results. --- paper_title: MLI: An API for Distributed Machine Learning paper_content: MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing. Its primary goal is to simplify the development of high-performance, scalable, distributed algorithms. Our initial results show that, relative to existing systems, this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability. --- paper_title: Case Study Evaluation of Mahout as a Recommender Platform paper_content: Various libraries have been released to support the development of recommender systems for some time, but it is only relatively recently that larger scale, open-source platforms have become readily available. In the context of such platforms, evaluation tools are important both to verify and validate baseline platform functionality, as well as to provide support for testing new techniques and approaches developed on top of the platform. We have adopted Apache Mahout as an enabling platform for our research and have faced both of these issues in employing it as part of our work in collaborative filtering. This paper presents a case study of evaluation focusing on accuracy and coverage evaluation metrics in Apache Mahout, a recent platform tool that provides support for recommender system application development. As part of this case study, we developed a new metric combining accuracy and coverage in order to evaluate functional changes made to Mahout's collaborative filtering algorithms. --- paper_title: Parallel matrix factorization for recommender systems paper_content: Matrix factorization, when the matrix has missing values, has become one of the leading techniques for recommender systems. To handle web-scale datasets with millions of users and billions of ratings, scalability becomes an important issue. Alternating least squares (ALS) and stochastic gradient descent (SGD) are two popular approaches to compute matrix factorization, and there has been a recent flurry of activity to parallelize these algorithms. However, due to the cubic time complexity in the target rank, ALS is not scalable to large-scale datasets. On the other hand, SGD conducts efficient updates but usually suffers from slow convergence that is sensitive to the parameters. Coordinate descent, a classical optimization approach, has been used for many other large-scale problems, but its application to matrix factorization for recommender systems has not been thoroughly explored.
In this paper, we show that coordinate descent-based methods have a more efficient update rule compared to ALS and have faster and more stable convergence than SGD. We study different update sequences and propose the CCD++ algorithm, which updates rank-one factors one by one. In addition, CCD++ can be easily parallelized on both multi-core and distributed systems. We empirically show that CCD++ is much faster than ALS and SGD in both settings. As an example, with a synthetic dataset containing 14.6 billion ratings, on a distributed memory cluster with 64 processors, to deliver the desired test RMSE, CCD++ is 49 times faster than SGD and 20 times faster than ALS. When the number of processors is increased to 256, CCD++ takes only 16 s and is still 40 times faster than SGD and 20 times faster than ALS. --- paper_title: Hybreed: A software framework for developing context-aware hybrid recommender systems paper_content: This article introduces Hybreed, a software framework for building complex context-aware applications, together with a set of components that are specifically targeted at developing hybrid, context-aware recommender systems. Hybreed is based on a concept for processing context that we call dynamic contextualization. The underlying notion of context is very generic, enabling application developers to exploit sensor-based physical factors as well as factors derived from user models or user interaction. This approach is well aligned with context definitions that emphasize the dynamic and activity-oriented nature of context. As an extension of the generic framework, we describe Hybreed RecViews, a set of components facilitating the development of context-aware and hybrid recommender systems. With Hybreed and RecViews, developers can rapidly develop context-aware applications that generate recommendations for both individual users and groups. The framework provides a range of recommendation algorithms and strategies for producing group recommendations as well as templates for combining different methods into hybrid recommenders. Hybreed also provides means for integrating existing user or product data from external sources such as social networks. It combines aspects known from context processing frameworks with features of state-of-the-art recommender system frameworks, aspects that have been addressed only separately in previous research. To our knowledge, Hybreed is the first framework to cover all these aspects in an integrated manner. To evaluate the framework and its conceptual foundation, we verified its capabilities in three different use cases. The evaluation also comprises a comparative assessment of Hybreed's functional features, a comparison to existing frameworks, and a user study assessing its usability for developers. The results of this study indicate that Hybreed is intuitive to use and extend by developers. --- paper_title: An initial study of predictive machine learning analytics on large volumes of historical data for power system applications paper_content: Nowadays large volumes of industrial data are being actively generated and collected in various power system applications. Industrial Analytics in the power system field requires more powerful and intelligent machine learning tools, strategies, and environments to properly analyze the historical data and extract predictive knowledge. 
This paper discusses the situation and limitations of current approaches, analytic models, and tools utilized to conduct predictive machine learning analytics for very large volumes of data where the data processing causes the processor to run out of memory. Two industrial analytics cases in the power systems field are presented. Our results indicated the feasibility of forecasting substations fault events and power load using machine learning algorithm written in MapReduce paradigm or machine learning tools specific for Big Data. --- paper_title: DimmWitted: A Study of Main-Memory Statistical Analytics paper_content: We perform the first study of the tradeoff space of access methods and replication to support statistical analytics using first-order methods executed in the main memory of a Non-Uniform Memory Access (NUMA) machine. Statistical analytics systems differ from conventional SQL-analytics in the amount and types of memory incoherence that they can tolerate. Our goal is to understand tradeoffs in accessing the data in row- or column-order and at what granularity one should share the model and data for a statistical task. We study this new tradeoff space and discover that there are tradeoffs between hardware and statistical efficiency. We argue that our tradeoff study may provide valuable information for designers of analytics engines: for each system we consider, our prototype engine can run at least one popular task at least 100× faster. We conduct our study across five architectures using popular models, including SVMs, logistic regression, Gibbs sampling, and neural networks. --- paper_title: Large-scale logistic regression and linear support vector machines using spark paper_content: Logistic regression and linear SVM are useful methods for large-scale classification. However, their distributed implementations have not been well studied. Recently, because of the inefficiency of the MapReduce framework on iterative algorithms, Spark, an in-memory cluster-computing platform, has been proposed. It has emerged as a popular framework for large-scale data processing and analytics. In this work, we consider a distributed Newton method for solving logistic regression as well linear SVM and implement it on Spark. We carefully examine many implementation issues significantly affecting the running time and propose our solutions. After conducting thorough empirical investigations, we release an efficient and easy-to-use tool for the Spark community. --- paper_title: A Generic Solution to Integrate SQL and Analytics for Big Data paper_content: There is a need to integrate SQL processing with more advanced machine learning (ML) analytics to drive actionable insights from large volumes of data. As a first step towards this integration, we study how to efficiently connect big SQL systems (either MPP databases or new-generation SQL-on-Hadoop systems) with distributed big ML systems. We identify two important challenges to address in the integrated data analytics pipeline: data transformation, how to efficiently transform SQL data into a form suitable for ML, and data transfer, how to efficiently handover SQL data to ML systems. For the data transformation problem, we propose an In-SQL approach to incorporate common data transformations for ML inside SQL systems through extended user-defined functions (UDFs), by exploiting the massive parallelism of the big SQL systems. 
We propose and study a general method for transferring data between big SQL and big ML systems in a parallel streaming fashion. Furthermore, we explore caching intermediate or final results of data transformation to improve the performance. Our techniques are generic: they apply to any big SQL system that supports UDFs and any big ML system that uses Hadoop InputFormats to ingest input data. --- paper_title: Deep learning applications and challenges in big data analytics paper_content: Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning. --- paper_title: MLbase: A Distributed Machine-learning System paper_content: Machine learning (ML) and statistical techniques are key to transforming big data into actionable knowledge. In spite of the modern primacy of data, the complexity of existing ML algorithms is often overwhelming|many users do not understand the trade-os and challenges of parameterizing and choosing between dierent learning techniques. Furthermore, existing scalable systems that support machine learning are typically not accessible to ML researchers without a strong background in distributed systems and low-level primitives. In this work, we present our vision for MLbase, a novel system harnessing the power of machine learning for both end-users and ML researchers. MLbase provides (1) a simple declarative way to specify ML tasks, (2) a novel optimizer to select and dynamically adapt the choice of learning algorithm, (3) a set of high-level operators to enable ML researchers to scalably implement a wide range of ML methods without deep systems knowledge, and (4) a new run-time optimized for the data-access patterns of these high-level operators. 
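Several of the references above (the parallel logistic regression evaluation, the Spark-based logistic regression and linear SVM solver, and the MLbase/MLI systems) build on the same data-parallel pattern: each worker computes a partial gradient over its partition of the training data, the partial results are summed, and the shared model is updated. The sketch below illustrates that pattern in plain Python/NumPy under simplified assumptions (batch gradient descent, synthetic data, in-process "partitions"); it is not the implementation used by any of the cited systems.

```python
# Illustrative sketch, not the cited systems' code: data-parallel batch
# gradient descent for L2-regularized logistic regression. Each partition
# computes a partial gradient ("map"); the driver sums them ("reduce") and
# updates the shared weight vector.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def partial_gradient(w, X_part, y_part):
    """Gradient contribution of one data partition (labels y in {0, 1})."""
    p = sigmoid(X_part @ w)
    return X_part.T @ (p - y_part)

def fit_logreg(partitions, dim, lam=0.1, lr=0.5, iters=200):
    w = np.zeros(dim)
    n = sum(len(y) for _, y in partitions)
    for _ in range(iters):
        # "Map" step: one partial gradient per partition (parallelizable).
        grads = [partial_gradient(w, X, y) for X, y in partitions]
        # "Reduce" step: average the partial gradients, add regularization.
        g = sum(grads) / n + lam * w
        w -= lr * g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.5, -2.0, 0.5])
    X = rng.normal(size=(1200, 3))
    y = (sigmoid(X @ w_true) > rng.uniform(size=1200)).astype(float)
    # Split the data into 4 "partitions" to mimic a distributed layout.
    parts = [(X[i::4], y[i::4]) for i in range(4)]
    print("estimated weights:", np.round(fit_logreg(parts, dim=3), 2))
```

In a real deployment the map step would run on cluster workers and only the low-dimensional partial gradients would travel over the network, which is what makes the pattern attractive for large training sets.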
--- paper_title: Predictive Analytics of Sensor Data Using Distributed Machine Learning Techniques paper_content: This work is based on a real-life data-set collected from sensors that monitor drilling processes and equipment in an oil and gas company. The sensor data stream-in at an interval of one second, which is equivalent to 86400 rows of data per day. After studying state-of-the-art Big Data analytics tools including Mahout, RHadoop and Spark, we chose Ox data's H2O for this particular problem because of its fast in-memory processing, strong machine learning engine, and ease of use. Accurate predictive analytics of big sensor data can be used to estimate missed values, or to replace incorrect readings due malfunctioning sensors or broken communication channel. It can also be used to anticipate situations that help in various decision makings, including maintenance planning and operation. --- paper_title: TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries paper_content: The proliferation of massive datasets combined with the development of sophisticated analytical techniques have enabled a wide variety of novel applications such as improved product recommendations, automatic image tagging, and improved speech-driven interfaces. These and many other applications can be supported by Predictive Analytic Queries (PAQs). A major obstacle to supporting PAQs is the challenging and expensive process of identifying and training an appropriate predictive model. Recent efforts aiming to automate this process have focused on single node implementations and have assumed that model training itself is a black box, thus limiting the effectiveness of such approaches on large-scale problems. In this work, we build upon these recent efforts and propose an integrated PAQ planning architecture that combines advanced model search techniques, bandit resource allocation via runtime algorithm introspection, and physical optimization via batching. The result is TuPAQ, a component of the MLbase system, which solves the PAQ planning problem with comparable quality to exhaustive strategies but an order of magnitude more efficiently than the standard baseline approach, and can scale to models trained on terabytes of data across hundreds of machines. --- paper_title: Scalable Automated Model Search paper_content: Abstract : Model search is a crucial component of data analytics pipelines and this laborious process of choosing an appropriate learning algorithm and tuning its parameters remains a major obstacle in the widespread adoption of machine learning techniques. Recent efforts aiming to automate this process have assumed model training itself to be a black-box, thus limiting the effectiveness of such approaches on large-scale problems. In this work, we build upon these recent efforts. By inspecting the inner workings of model training and framing model search as bandit-like resource allocation problem, we present an integrated distributed system for model search that targets large-scale learning applications. We study the impact of our approach on a variety of datasets and demonstrate that our system, named GHOSTFACE, solves the model search problem with comparable accuracy as basic strategies but an order of magnitude faster. We further demonstrate that GHOSTFACE can scale to models trained on terabytes of data across hundreds of machines. 
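The TuPAQ and "Scalable Automated Model Search" abstracts above describe bandit-style resource allocation for model search: many candidate configurations receive a small training budget, poor performers are pruned, and the survivors receive more resources. The following sketch shows one simple instance of that idea (successive halving) with a hypothetical configuration grid and a stand-in scoring function; it is illustrative only and does not reproduce the cited systems' planners.

```python
# Illustrative successive-halving sketch for bandit-style model search.
# `candidates` and `evaluate` are placeholders: in practice evaluate() would
# train the corresponding model with the given budget and return a
# validation score.
import math
import random

def successive_halving(candidates, evaluate, total_budget=256):
    """candidates: list of configs; evaluate(config, budget) -> score."""
    rounds = max(1, int(math.log2(len(candidates))))
    budget = max(1, total_budget // (len(candidates) * rounds))
    survivors = list(candidates)
    while len(survivors) > 1:
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda t: t[0], reverse=True)
        # Keep the better half and double the budget for the next round.
        survivors = [c for _, c in scored[: max(1, len(scored) // 2)]]
        budget *= 2
    return survivors[0]

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical hyperparameter grid for some learner.
    grid = [{"lr": lr, "depth": d}
            for lr in (0.01, 0.1, 0.3, 1.0) for d in (2, 4, 6, 8)]

    def noisy_score(cfg, budget):
        # Pretend score: peaks at lr=0.1, depth=6; more budget -> less noise.
        ideal = -abs(cfg["lr"] - 0.1) - 0.05 * abs(cfg["depth"] - 6)
        return ideal + random.gauss(0, 0.2 / math.sqrt(budget))

    print("selected configuration:", successive_halving(grid, noisy_score))
```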
--- paper_title: Automatic extraction of topics on big data streams through scalable advanced analysis paper_content: Extracting words, data patterns and topic models from streaming big data by way of real-time processing is a challenging job. Currently, many of applied machine learning techniques in data mining aim to utilize online feedbacks by making model updates faster and quicker. However, Mahout and Massive Online Analysis (MOA) existing solutions are not supported for streaming machine learning, and consequently, not suitable for scalable multiple machines. In this paper enhanced the machine learning algorithms for extracting the words and generating topic models based on the continuing data which was initially proposed. One of the great advantages of the proposed algorithm was the capability to be scaled into multiple machines, in which made it very suitable for real-time processing of streaming data. In general, the algorithm includes two main methods: (a) the first method introduces a principle approach to pre-process documents in an associated time sequence. It implements a class to detect identical files from input files so as to reduce computation time. (b) The second method suits real time monitoring and control of the process from multiple asynchronous text streams. In the experiment, these two methods were alternatively executed, and subsequently after iterations a monotonic convergence was guaranteed. The study conducts the experiments based on a real-world dataset collected from TREC KBA Stream Corpus in 2012. Finally, the accuracy of the proposed method resulted in greater robustness towards the ability to deal with noise and reduce the computation. --- paper_title: Efficiently Finding Top-K Items from Evolving Distributed Data Streams paper_content: The problem of efficiently finding top-k frequent items has attracted much attention in recent yeras. Storage constraints in the processing node and intrinsic evloving feature of the data streams are two main challenges. In this paper, we propose a method to tackle these two challenges based on space-saving and gossip-based algorithms respectively. Our method is implemented on SAMOA, a scalable advanced massive online analysis machine learning framework. The experimental results show its effectiveness and scalability. --- paper_title: SAMOA: scalable advanced massive online analysis paper_content: SAMOA (SCALABLE ADVANCED MASSIVE ONLINE ANALYSIS) is a platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. samoa is written in Java, is open source, and is available at http://samoa-project.net under the Apache Software License version 2.0. --- paper_title: PARMA: a parallel randomized algorithm for approximate association rules mining in MapReduce paper_content: Frequent Itemsets and Association Rules Mining (FIM) is a key task in knowledge discovery from data. As the dataset grows, the cost of solving this task is dominated by the component that depends on the number of transactions in the dataset. 
We address this issue by proposing PARMA, a parallel algorithm for the MapReduce framework, which scales well with the size of the dataset (as number of transactions) while minimizing data replication and communication cost. PARMA cuts down the dataset-size-dependent part of the cost by using a random sampling approach to FIM. Each machine mines a small random sample of the dataset, of size independent from the dataset size. The results from each machine are then filtered and aggregated to produce a single output collection. The output will be a very close approximation of the collection of Frequent Itemsets (FI's) or Association Rules (AR's) with their frequencies and confidence levels. The quality of the output is probabilistically guaranteed by our analysis to be within the user-specified accuracy and error probability parameters. The sizes of the random samples are independent from the size of the dataset, as is the number of samples. They depend on the user-chosen accuracy and error probability parameters and on the parallel computational model. We implemented PARMA in Hadoop MapReduce and show experimentally that it runs faster than previously introduced FIM algorithms for the same platform, while 1) scaling almost linearly, and 2) offering even higher accuracy and confidence than what is guaranteed by the analysis. --- paper_title: Scalable online betweenness centrality in evolving graphs paper_content: Betweenness centrality measures the importance of an element of a graph, either a vertex or an edge, by the fraction of shortest paths that pass through it [1]. This measure is notoriously expensive to compute, and the best known algorithm, proposed by Brandes [2], runs in O(nm) time. The problems of efficiency and scalability are exacerbated in a dynamic setting, where the input is an evolving graph seen edge by edge, and the goal is to keep the betweenness centrality up to date. In this paper [8] we propose the first truly scalable and practical framework for computing vertex and edge betweenness centrality of large evolving graphs, incrementally and online. --- paper_title: Big Data Stream Learning with SAMOA paper_content: Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0. --- paper_title: Knowledge discovery from imbalanced and noisy data paper_content: Class imbalance and labeling errors present significant challenges to data mining and knowledge discovery applications. Some previous work has discussed these important topics, however the relationship between these two issues has not received enough attention. Further, much of the previous work in this domain is fragmented and contradictory, leading to serious questions regarding the reliability and validity of the empirical conclusions. In response to these issues, we present a comprehensive suite of experiments carefully designed to provide conclusive, reliable, and significant results on the problem of learning from noisy and imbalanced data. Noise is shown to significantly impact all of the learners considered in this work, and a particularly important factor is the class in which the noise is located (which, as discussed throughout this work, has very important implications to noise handling). The impacts of noise, however, vary dramatically depending on the learning algorithm and simple algorithms such as naive Bayes and nearest neighbor learners are often more robust than more complex learners such as support vector machines or random forests. Sampling techniques, which are often used to alleviate the adverse impacts of imbalanced data, are shown to improve the performance of learners built from noisy and imbalanced data. In particular, simple sampling techniques such as random undersampling are generally the most effective. --- paper_title: Experimental perspectives on learning from imbalanced data paper_content: We present a comprehensive suite of experimentation on the subject of learning from imbalanced data. When classes are imbalanced, many learning algorithms can suffer from the perspective of reduced performance. Can data sampling be used to improve the performance of learners built from imbalanced data? Is the effectiveness of sampling related to the type of learner?
Do the results change if the objective is to optimize different performance metrics? We address these and other issues in this work, showing that sampling in many cases will improve classifier performance. --- paper_title: Feature Selection with High-Dimensional Imbalanced Data paper_content: Feature selection is an important topic in data mining, especially for high dimensional datasets. Filtering techniques in particular have received much attention, but detailed comparisons of their performance is lacking. This work considers three filters using classifier performance metrics and six commonly-used filters. All nine filtering techniques are compared and contrasted using five different microarray expression datasets. In addition, given that these datasets exhibit an imbalance between the number of positive and negative examples, the utilization of sampling techniques in the context of feature selection is examined. --- paper_title: Enterprise Data Analysis and Visualization: An Interview Study paper_content: Organizations rely on data analysts to model customer engagement, streamline operations, improve production, inform business decisions, and combat fraud. Though numerous analysis and visualization tools have been built to improve the scale and efficiency at which analysts can work, there has been little research on how analysis takes place within the social and organizational context of companies. To better understand the enterprise analysts' ecosystem, we conducted semi-structured interviews with 35 data analysts from 25 organizations across a variety of sectors, including healthcare, retail, marketing and finance. Based on our interview data, we characterize the process of industrial data analysis and document how organizational features of an enterprise impact it. We describe recurring pain points, outstanding challenges, and barriers to adoption for visual analytic tools. Finally, we discuss design implications and opportunities for visual analysis research. --- paper_title: Imputation techniques for multivariate missingness in software measurement data paper_content: The problem of missing values in software measurement data used in empirical analysis has led to the proposal of numerous potential solutions. Imputation procedures, for example, have been proposed to `fill-in' the missing values with plausible alternatives. We present a comprehensive study of imputation techniques using real-world software measurement datasets. Two different datasets with dramatically different properties were utilized in this study, with the injection of missing values according to three different missingness mechanisms (MCAR, MAR, and NI). We consider the occurrence of missing values in multiple attributes, and compare three procedures, Bayesian multiple imputation, k Nearest Neighbor imputation, and Mean imputation. We also examine the relationship between noise in the dataset and the performance of the imputation techniques, which has not been addressed previously. Our comprehensive experiments demonstrate conclusively that Bayesian multiple imputation is an extremely effective imputation technique. --- paper_title: A Distributed Methodology for Imbalanced Classification Problems paper_content: Current important challenges in data mining research are triggered by the need to address various particularities of real-world problems, such as imbalanced data and error cost distributions. 
This paper presents Distributed Evolutionary Cost-Sensitive Balancing, a distributed methodology for dealing with imbalanced data and -- if necessary -- cost distributions. The method employs a genetic algorithm to search for an optimal cost matrix and base classifier settings, which are then employed by a cost-sensitive classifier, wrapped around the base classifier. Individual fitness computation is the most intensive task in the algorithm, but it also presents a high parallelization potential. Two different parallelization alternatives have been explored: a computation-driven approach, and a data-driven approach. Both have been developed within the Apache Watchmaker framework and deployed on Hadoop-based infrastructures. Experimental evaluations performed up to this point have indicated that the computation-driven approach achieves a good classification performance, but does not reduce the running time significantly, the data-driven approach reduces the running time for slow algorithms, such as the kNN and the SVM, while still yielding important performance improvements. --- paper_title: A Practical Evaluation of Information Processing and Abstraction Techniques for the Internet of Things paper_content: The term Internet of Things (IoT) refers to the interaction and communication between billions of devices that produce and exchange data related to real-world objects (i.e. things). Extracting higher level information from the raw sensory data captured by the devices and representing this data as machine-interpretable or human-understandable information has several interesting applications. Deriving raw data into higher level information representations demands mechanisms to find, extract, and characterize meaningful abstractions from the raw data. This meaningful abstractions then have to be presented in a human and/or machine-understandable representation. However, the heterogeneity of the data originated from different sensor devices and application scenarios such as e-health, environmental monitoring, and smart home applications, and the dynamic nature of sensor data make it difficult to apply only one particular information processing technique to the underlying data. A considerable amount of methods from machine-learning, the semantic web, as well as pattern and data mining have been used to abstract from sensor observations to information representations. This paper provides a survey of the requirements and solutions and describes challenges in the area of information abstraction and presents an efficient workflow to extract meaningful information from raw sensor data based on the current state-of-the-art in this area. This paper also identifies research directions at the edge of information abstraction for sensor data. To ease the understanding of the abstraction workflow process, we introduce a software toolkit that implements the introduced techniques and motivates to apply them on various data sets. ---
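The imbalanced-learning references above ("Experimental perspectives on learning from imbalanced data", "Knowledge discovery from imbalanced and noisy data") report that simple sampling techniques such as random undersampling are often the most effective way to rebalance training data before learning. A minimal sketch of random undersampling on a synthetic two-class dataset follows; the dataset, labels, and the 1:1 target ratio are illustrative assumptions.

```python
# Minimal random-undersampling sketch for class imbalance (illustrative).
import random

def random_undersample(X, y, ratio=1.0, seed=42):
    """Keep all minority examples and a random subset of the majority class
    so that the majority:minority ratio is at most `ratio`:1."""
    rng = random.Random(seed)
    counts = {c: y.count(c) for c in set(y)}
    minority = min(counts, key=counts.get)
    keep_majority = int(counts[minority] * ratio)

    minority_idx = [i for i, label in enumerate(y) if label == minority]
    majority_idx = [i for i, label in enumerate(y) if label != minority]
    sampled = rng.sample(majority_idx, min(keep_majority, len(majority_idx)))

    keep = sorted(minority_idx + sampled)
    return [X[i] for i in keep], [y[i] for i in keep]

if __name__ == "__main__":
    # 950 negative vs. 50 positive examples -> 19:1 imbalance.
    X = [[i] for i in range(1000)]
    y = [1 if i < 50 else 0 for i in range(1000)]
    Xb, yb = random_undersample(X, y, ratio=1.0)
    print("balanced class counts:", {c: yb.count(c) for c in set(yb)})
```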
Title: A survey of open source tools for machine learning with big data in the Hadoop ecosystem Section 1: Understanding big data Description 1: This section provides definitions of big data and discusses the challenges associated with it. Section 2: Hadoop ecosystem Description 2: This section serves as an explanation and overview of the Hadoop ecosystem with a focus on tools that can help solve big data problems. Section 3: Storage layer Description 3: This section describes the storage solutions within the Hadoop ecosystem, including HDFS and various NoSQL databases. Section 4: Processing layer Description 4: This section examines different data processing tools and engines within the Hadoop ecosystem. Section 5: Management layer Description 5: This section describes the tools for user interaction, scheduling, monitoring, and coordination within the Hadoop ecosystem. Section 6: Data processing engines Description 6: This section discusses different data processing paradigms like batch, streaming, and iterative processing engines. Section 7: Machine learning toolkits Description 7: This section discusses criteria for evaluating various machine learning tools and libraries. Section 8: Evaluation of machine learning tools Description 8: This section provides an in-depth analysis of specific machine learning frameworks that can be used with the processing platforms. Section 9: Suggestions for future work Description 9: This section contains a discussion of key elements missing among the major toolkits. Section 10: Conclusion Description 10: This section presents the conclusions from this survey, highlighting current trends and future directions.
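The storage/processing/management layering in the outline above rests on the batch map-shuffle-reduce execution model popularized by Hadoop MapReduce (see the "Processing layer" and "Data processing engines" sections). The toy, single-process sketch below walks through the three phases for a word count; real engines distribute each phase across a cluster and persist intermediate data, which this illustration deliberately omits.

```python
# Toy in-process illustration of the map/shuffle/reduce pattern (word count).
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs, as a mapper would."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group intermediate values by key (the framework's shuffle step)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the grouped counts, as a reducer would."""
    return {word: sum(values) for word, values in grouped.items()}

if __name__ == "__main__":
    docs = ["big data needs scalable tools",
            "scalable machine learning on big data"]
    print(reduce_phase(shuffle(map_phase(docs))))
```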
Intrastream Synchronization for Continuous Media Streams: A Survey of Playout Schedulers
8
--- paper_title: Synchronization of multimedia data for a multimedia news-on-demand application paper_content: We present a complete software control architecture for synchronizing multiple data streams generated from distributed media-storing database servers without the use of a global clock. Independent network connections are set up to remote workstations for multimedia presentations. Based on the document scenario and traffic predictions, stream delivery scheduling is performed in a centralized manner. Certain compensation mechanisms at the receiver are also necessary due to the presence of random network delays. A stream synchronization protocol (SSP) allows for synchronization recovery, ensuring a high quality multimedia display at the receiver. SSP uses synchronization quality of service parameters to guarantee the simultaneous delivery of the different types of data streams. In the proposed architecture, a priority-based synchronization control mechanism for MPEG-2 coded data streams is also provided. A performance modeling of the SSP is presented using the DSPN models. Relevant results such as the effect of the SSP, the number of synchronization errors, etc., are obtained. --- paper_title: Flow synchronization protocol paper_content: Presents an adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites. Applications include inter-stream synchronization, synchronized delivery of information in a multisite conference, and synchronization for concurrency control in distributed computations. The contributions of this protocol in the area of flow synchronization are the ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements. The authors take advantage of network protocols capable of maintaining network clock synchronization in the millisecond range. > --- paper_title: Multipoint multimedia teleconference system with adaptive synchronization paper_content: This paper discusses major issues involved in the design and implementation of a multipoint multimedia conference system, such as system architecture, conference management, session control, and intramedium and intermedia synchronization. In particular, emphasis is given to conference management and adaptive synchronization algorithms. The management of multiparticipants is based upon a distributed architecture for greater flexibility. The proposed synchronization algorithm is adaptive to network changes, eliminates the need for a global clock, and is immune to the clock frequency drift, while its realization is very simple and the involved overhead is minimal. The essence of the algorithm is partitioning the vicinity of the arrival epochs of multimedia objects into three regions and counting arrivals at each region. The function of the synchronizer is to shift the playback clock (PBC) according to the individual counter contents. The ideas proposed are implemented within a teleconference system on the Ethernet/FDDI using the TCP/UDP. Experimental results show that the proposed synchronization algorithm performs well in our network testbed environment. --- paper_title: The use of network delay estimation for multimedia data retrieval paper_content: Multimedia data have specific temporal presentation requirements. 
For example in video conferencing applications the voice and images of the participants must be delivered and presented synchronously. These requirements can be achieved by scheduling or managing system resources. We present a technique called limited a priori scheduling (LAP) to manage the delivery channel from source to destination for digital multimedia data. By using delay estimation a LAP scheduler can retrieve stored digital media spanning arbitrary networks with unspecified delays. The use of delay estimation also facilitates selective degradation of service in bandwidth and buffer limited situations. Such degradation enables the continuous real-time playout and synchronization of various media arriving from different sources. The performance of the LAP scheduler is described based on implementation and experimentation using Ethernet. --- paper_title: Analysis and Optimal Design of a Packet-Voice Receiver paper_content: The analysis problems arising from a digital packet switched speech network are outlined in a wide statistical environment. The statistical models assumed are discussed with respect to practical applications. Particular attention is devoted to a formal description of the influence of the packet voice receiver on the system behavior in terms of the delay pdf. The optimization of the system performance, in order to obtain the best voice quality, is stated as an optimal control problem, and the analysis results play a central role in the construction of the objective functions. The problem is solved in a particular analytical environment. The outlined analytical results are supported by critical comments on their comparison with practical implementations, allowing the designer a better capability in handling any problems. --- paper_title: Flow synchronization protocol paper_content: Presents an adaptive flow synchronization protocol that permits synchronized delivery of data to and from geographically distributed sites. Applications include inter-stream synchronization, synchronized delivery of information in a multisite conference, and synchronization for concurrency control in distributed computations. The contributions of this protocol in the area of flow synchronization are the ability to synchronize over arbitrary topologies, the introduction of an adaptive synchronization delay, the flexibility to maintain multiple synchronization groups, and the use of a modular architecture that permits the application to tailor synchronization calculations to its service requirements. The authors take advantage of network protocols capable of maintaining network clock synchronization in the millisecond range. > --- paper_title: Techniques for Packet Voice Synchronization paper_content: Packet switching has been proposed as an effective technology for integrating voice and data in a single network. An important aspect of packet-switched voice is the reconstruction of a continuous stream of speech from the set of packets that arrive at the destination terminal, each of which may encounter a different amount of buffering delay in the packet network. The magnitude of the variation in delay may range from a few milliseconds in a local area network to hundreds of milliseconds in a long-haul packet voice and data network. This paper discusses several aspects of the packet voice synchronization problem, and techniques that can be used to address it. 
These techniques estimate in some way the delay encountered by each packet and use the delay estimate to determine how speech is reconstructed. The delay estimates produced by these techniques can be used in managing the flow of information in the packet network to improve overall performance. Interactions of packet voice synchronization techniques with other network design issues are also discussed. --- paper_title: QoS impact on user perception and understanding of multimedia video clips paper_content: The widespread and increasing advent of multimedia technologies means that there must be a departure from the viewpoint that users expect a Quality of Service (QoS) which will only satisfy them perceptually. What should be expected of multimedia clips is that the QoS with which they are shown is such that it will enable the users to assimilate and understand the informational content of such clips. In this paper we present experimental results linking users' understanding and perception of multimedia clips with the presentation QoS. Results show that the quality of video clips can be severely degraded without the user having to perceive any significant loss of informational content. --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: Human perception of jitter and media synchronization paper_content: Multimedia synchronization comprises both the definition and the establishment of temporal relationships among media types. The presentation of 'in sync' data streams is essential to achieve a natural impression, data that is 'out of sync' is perceived as being somewhat artificial, strange, or even annoying. Therefore, the goal of any multimedia system is to enable an application to present data without no or little synchronization errors. The achievement of this goal requires a detailed knowledge of the synchronization requirements at the user interface. The paper presents the results of a series of experiments about human media perception that may be used as 'quality of service' guidelines. The results show that skews between related data streams may still give the effect that the data is 'in sync' and gives some constraints under which jitter may be tolerated.
The author uses the findings to develop a scheme for the processing of nontrivial synchronization skew between more than two data streams. --- paper_title: Multipoint multimedia teleconference system with adaptive synchronization paper_content: This paper discusses major issues involved in the design and implementation of a multipoint multimedia conference system, such as system architecture, conference management, session control, and intramedium and intermedia synchronization. In particular, emphasis is given to conference management and adaptive synchronization algorithms. The management of multiparticipants is based upon a distributed architecture for greater flexibility. The proposed synchronization algorithm is adaptive to network changes, eliminates the need for a global clock, and is immune to the clock frequency drift, while its realization is very simple and the involved overhead is minimal. The essence of the algorithm is partitioning the vicinity of the arrival epochs of multimedia objects into three regions and counting arrivals at each region. The function of the synchronizer is to shift the playback clock (PBC) according to the individual counter contents. The ideas proposed are implemented within a teleconference system on the Ethernet/FDDI using the TCP/UDP. Experimental results show that the proposed synchronization algorithm performs well in our network testbed environment. --- paper_title: The Concord algorithm for synchronization of networked multimedia streams paper_content: Synchronizing different data streams from multiple sources simultaneously at a receiver is one of the basic problems involved in multimedia distributed systems. This requirement stems from the nature of packet based networks which can introduce end-to-end delays that vary both within and across streams. We present a new algorithm called Concord, which provides an integrated solution for these single and multiple stream synchronization problems. It is notable because it defines a single framework to deal with both problems, and operates under the influence of parameters which can be supplied by the application involved. In particular these parameters are used to allow a trade-off between the packet loss rates, total end-to-end delay and skew for each of the streams. For applications like conferencing this is used to reduce delay by determining the minimum buffer delay/size required. --- paper_title: End-to-end delay analysis of videoconferencing over packet-switched networks paper_content: Videoconferencing is an important global application-it enables people around the globe to interact when distance separates them. In order for the participants in a videoconference call to interact naturally, the end-to-end delay should be below human perception; even though an objective and unique figure cannot be set, 100 ms is widely recognized as the desired one-way delay requirement for interaction. Since the global propagation delay can be about 100 ms, the actual end-to-end delay budget available to the system designer (excluding propagation delay) can be no more than 10 ms. We identify the components of the end-to-end delay in various configurations with the objective of understanding how it can be kept below the desired 10-ms bound. We analyze these components step-by-step through six system configurations obtained by combining three generic network architectures with two video encoding schemes. 
We study the transmission of raw video and variable bit rate (VBR) MPEG video encoding over (1) circuit switching; (2) synchronous packet switching; and (3) asynchronous packet switching. In addition, we show that constant bit rate (CBR) MPEG encoding delivers unacceptable delay-on the order of the group of pictures (GOP) time interval-when maximizing the quality for static scenes. This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is (1) below 10 ms; (2) independent of the network instant load; and (3) independent of the connection rate. The resulting end-to-end delay (excluding propagation delay) can be smaller than the video frame period, which is better than what can be obtained with circuit switching. --- paper_title: Preserving temporal signature: a way to convey time constrained flows paper_content: Addresses the problem of a temporal signature conservation, arising in multimedia communications. Specifically, the authors are interested in the problem of the flow reconstitution and variable end-to-end delay compensation at the receiver end-system. Requirements and algorithm associated with this design are described. The proposed algorithm is quickly understood, it may exploit either the globally synchronized or locally available sender/receiver clocks. The authors also present the simple formal model of the algorithm. This formal model allows to make an elegant and short proof of correctness of the protocol. > --- paper_title: A Synchronization Scheme for Stored Multimedia Streams paper_content: Multimedia streams such as audio and video impose tight temporal constraints due to their continuous nature. Often, different multimedia streams must be played out in a synchronized way. We present a scheme to ensure the continuous and synchronous playout of stored multimedia streams. We propose a protocol for the synchronized playback and we compute the buffer required to achieve both, the continuity within a single substream and the synchronization between related substreams. The scheme is very general because it only makes a single assumption, namely that the jitter is bounded. --- paper_title: Surveyor: An Infrastructure For Internet Performance Measurements paper_content: A digital memory system employing a rectangular array of known MNOS variable threshold insulated gate field effect transistor memory cells is actuated by auxiliary circuits which provide a four-step operating sequence. The memory cells are arranged in word rows in which the gate electrodes of all memory cells in a given row are connected together and in bit columns having common source and common drain connections. The auxiliary circuits provide intermediate gate voltages to a selected row of memory cells in the first step of the operating cycle so as to read the information stored in the memory cells into a register. In the second step of the operating sequence, a large negative gate voltage is applied to the selected row to circumvent the cumulative effect that might arise from a succession of positive WRITE pulses. In the third operating step, the memory cells in the selected row are set to their least negative threshold value by an appropriate "clear" pulse, and in the fourth operating step information is written back into the selected memory cells from the register. 
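"A Synchronization Scheme for Stored Multimedia Streams" above dimensions the playout buffer from a single assumption, namely that network jitter is bounded: delaying the first playout by the jitter bound guarantees that every later frame has arrived by its scheduled playout time, and the buffer only has to hold the frames that can accumulate during that wait. The sketch below works through that computation with illustrative numbers (25 fps video, a 120 ms jitter bound, a 30 ms minimum transit delay); it is a back-of-the-envelope aid, not the cited paper's protocol.

```python
# Back-of-the-envelope playout buffer sizing under a bounded-jitter
# assumption (illustrative values only).
import math

def playout_buffer(frame_period_ms, jitter_bound_ms, min_transit_ms):
    """Return (playout offset after sending, frames to buffer) for one stream."""
    # Schedule the first frame at the earliest time by which it is guaranteed
    # to have arrived: minimum transit delay plus the worst-case jitter.
    playout_offset_ms = min_transit_ms + jitter_bound_ms
    # While waiting out the jitter bound, up to this many frames can arrive
    # and must be held in the playout buffer.
    frames_buffered = math.ceil(jitter_bound_ms / frame_period_ms) + 1
    return playout_offset_ms, frames_buffered

if __name__ == "__main__":
    # Assumed example: 40 ms frame period (25 fps), 120 ms jitter bound,
    # 30 ms minimum one-way transit delay.
    offset, frames = playout_buffer(frame_period_ms=40,
                                    jitter_bound_ms=120,
                                    min_transit_ms=30)
    print(f"start playout {offset} ms after sending; buffer {frames} frames")
```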
--- paper_title: The Concord algorithm for synchronization of networked multimedia streams paper_content: Synchronizing different data streams from multiple sources simultaneously at a receiver is one of the basic problems involved in multimedia distributed systems. This requirement stems from the nature of packet based networks which can introduce end-to-end delays that vary both within and across streams. We present a new algorithm called Concord, which provides an integrated solution for these single and multiple stream synchronization problems. It is notable because it defines a single framework to deal with both problems, and operates under the influence of parameters which can be supplied by the application involved. In particular these parameters are used to allow a trade-off between the packet loss rates, total end-to-end delay and skew for each of the streams. For applications like conferencing this is used to reduce delay by determining the minimum buffer delay/size required. --- paper_title: Delay Reduction Techniques for Playout Buffering paper_content: Receiver synchronization of continuous media streams is required to deal with delay differences and variations resulting from delivery over packet networks such as the Internet. This function is commonly provided using per-stream playout buffers which introduce additional delay in order to produce a playout schedule which meets the synchronization requirements. Packets which arrive after their scheduled playout time are considered late and are discarded. In this paper, we present the Concord algorithm, which provides a delay-sensitive solution for playout buffering. It records historical information and uses it to make short-term predictions about network delay with the aim of not reacting too quickly to short-lived delay variations. This allows an application-controlled tradeoff of packet lateness against buffering delay, suitable for applications which demand low delay but can tolerate or conceal a small amount of late packets. We present a selection of results from an extensive evaluation of Concord using Internet traffic traces. We explore the use of aging techniques to improve the effectiveness of the historical information and hence, the delay predictions. The results show that Concord can produce significant reductions in buffering delay and delay variations at the expense of packet lateness values of less than 1%. --- paper_title: End-to-end Internet packet dynamics paper_content: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. 
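To make the playout-buffering idea in the Concord and Delay Reduction entries above concrete, the following minimal Python sketch picks a playout delay from a sliding window of recent one-way delay samples so that only a small, configurable fraction of packets would arrive after their scheduled playout time. It illustrates the general percentile approach only; the class name, window size, and late-packet fraction are assumptions, not values taken from the cited papers.

```python
from collections import deque

class PercentilePlayoutEstimator:
    """Illustrative sketch: choose a playout delay from recent one-way delay
    samples so that roughly `late_fraction` of packets would arrive late."""

    def __init__(self, window=500, late_fraction=0.01):
        self.samples = deque(maxlen=window)   # recent delay samples (seconds)
        self.late_fraction = late_fraction    # tolerated share of late packets

    def record(self, one_way_delay):
        """Feed the measured network delay of each received packet."""
        self.samples.append(one_way_delay)

    def playout_delay(self):
        """Delay value below which (1 - late_fraction) of recent samples fall."""
        if not self.samples:
            return 0.0
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1,
                  int((1.0 - self.late_fraction) * len(ordered)))
        return ordered[idx]
```

In use, each received packet's measured delay would be fed to record(), and playout_delay() would be consulted when a new talkspurt or presentation unit begins, trading buffering delay against the tolerated lateness rate.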
--- paper_title: Internet time synchronization: The Network Time Protocol paper_content: The network time protocol (NTP), which is designed to distribute time information in a large, diverse system, is described. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks. > --- paper_title: End-to-end delay analysis of videoconferencing over packet-switched networks paper_content: Videoconferencing is an important global application-it enables people around the globe to interact when distance separates them. In order for the participants in a videoconference call to interact naturally, the end-to-end delay should be below human perception; even though an objective and unique figure cannot be set, 100 ms is widely recognized as the desired one-way delay requirement for interaction. Since the global propagation delay can be about 100 ms, the actual end-to-end delay budget available to the system designer (excluding propagation delay) can be no more than 10 ms. We identify the components of the end-to-end delay in various configurations with the objective of understanding how it can be kept below the desired 10-ms bound. We analyze these components step-by-step through six system configurations obtained by combining three generic network architectures with two video encoding schemes. We study the transmission of raw video and variable bit rate (VBR) MPEG video encoding over (1) circuit switching; (2) synchronous packet switching; and (3) asynchronous packet switching. In addition, we show that constant bit rate (CBR) MPEG encoding delivers unacceptable delay-on the order of the group of pictures (GOP) time interval-when maximizing the quality for static scenes. This study aims at showing that having a global common time reference, together with time-driven priority (TDP) and VBR MPEG video encoding, provides adequate end-to-end delay, which is (1) below 10 ms; (2) independent of the network instant load; and (3) independent of the connection rate. The resulting end-to-end delay (excluding propagation delay) can be smaller than the video frame period, which is better than what can be obtained with circuit switching. --- paper_title: Synchronization of multimedia data for a multimedia news-on-demand application paper_content: We present a complete software control architecture for synchronizing multiple data streams generated from distributed media-storing database servers without the use of a global clock. Independent network connections are set up to remote workstations for multimedia presentations. Based on the document scenario and traffic predictions, stream delivery scheduling is performed in a centralized manner. Certain compensation mechanisms at the receiver are also necessary due to the presence of random network delays. 
A stream synchronization protocol (SSP) allows for synchronization recovery, ensuring a high quality multimedia display at the receiver. SSP uses synchronization quality of service parameters to guarantee the simultaneous delivery of the different types of data streams. In the proposed architecture, a priority-based synchronization control mechanism for MPEG-2 coded data streams is also provided. A performance modeling of the SSP is presented using the DSPN models. Relevant results such as the effect of the SSP, the number of synchronization errors, etc., are obtained. --- paper_title: Packet audio playout delay adjustment: performance bounds and algorithms paper_content: In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to com- pensate for variable network delays. In this paper, we con- sider the problem of adaptively adjusting the playout delay in order to keep this delay as small as possible, while at the same time avoiding excessive "loss" due to the arrival of packets at the receiver after their playout time has al- ready passed. The contributions of this paper are twofold. First, given a trace of packet audio receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses (due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks the network delay of recently received packets and efficiently maintains delay percentile information. This in- formation, together with a "delay spike" detection algorithm based on (but extending) our earlier work, is used to dy- namically adjust talkspurt playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio delay traces and performs close to the theoretical optimum over a range of parameter values of interest. --- paper_title: Adaptive playout mechanisms for packetized audio applications in wide-area networks paper_content: Recent interest in supporting packet-audio applications over wide area networks has been fueled by the availability of low-cost, toll-quality workstation audio and the demonstration that limited amounts of interactive audio can be supported by today's Internet. In such applications, received audio packets are buffered, and their playout delayed at the destination host in order to compensate for the variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, in the face of such varying network delays. They evaluate the playout algorithms using experimentally-obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay which were observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size. 
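The adaptive playout algorithms described in the two entries above adjust a running delay estimate packet by packet and add a safety margin proportional to the observed delay variation, applying the result at talkspurt boundaries. The sketch below illustrates that exponential-averaging style; the smoothing constant and the margin factor are illustrative assumptions rather than the exact published parameters.

```python
class EwmaPlayoutEstimator:
    """Sketch of an exponentially weighted playout-delay estimator in the
    spirit of the adaptive algorithms surveyed above (constants illustrative)."""

    def __init__(self, alpha=0.998, beta=4.0):
        self.alpha = alpha      # smoothing factor for the delay estimate
        self.beta = beta        # safety margin in units of delay variation
        self.d_hat = None       # smoothed network delay estimate
        self.v_hat = 0.0        # smoothed delay variation estimate

    def update(self, n_i):
        """n_i: measured network delay of the latest packet (seconds)."""
        if self.d_hat is None:
            self.d_hat = n_i
        else:
            self.d_hat = self.alpha * self.d_hat + (1.0 - self.alpha) * n_i
            self.v_hat = (self.alpha * self.v_hat
                          + (1.0 - self.alpha) * abs(self.d_hat - n_i))

    def talkspurt_playout_delay(self):
        """Playout delay applied to the first packet of a new talkspurt."""
        return (self.d_hat or 0.0) + self.beta * self.v_hat
```

A receiver would call update() for every packet whose delay can be measured and apply talkspurt_playout_delay() only at talkspurt boundaries, so that silence periods absorb the adjustment.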
> --- paper_title: End-to-end packet delay and loss behavior in the internet paper_content: We use the measured round trip delays of small UDP probe packets sent at regular time intervals to analyze the end-to-end packet delay and loss behavior in the Internet. By varying the interval between probe packets, it is possible to study the structure of the Internet load over different time scales. In this paper, the time scales of interest range from a few milliseconds to a few minutes. Our observations agree with results obtained by others using simulation and experimental approaches. For example, our estimates of Internet workload are consistent with the hypothesis of a mix of bulk traffic with larger packet size, and interactive traffic with smaller packet size. We observe compression (or clustering) of the probe packets, rapid fluctuations of queueing delays over small intervals, etc. Our results also show interesting and less expected behavior. For example, we find that the losses of probe packets are essentially random unless the probe traffic uses a large fraction of the available bandwidth. We discuss the implications of these results on the design of control mechanisms for the Internet. --- paper_title: Design and Experimental Evaluation of an Adaptive Playout Delay Control Mechanism for Packetized Audio for Use over the Internet paper_content: We describe the design and the experimental evaluation of a playout delay control mechanism we have developed in order to support unicast, voice-based audio communications over the Internet. The proposed mechanism was designed to dynamically adjust the talkspurt playout delays to the traffic conditions of the underlying network without assuming either the existence of an external mechanism for maintaining an accurate clock synchronization between the sender and the receiver during the audio communication, or a specific distribution of the audio packet transmission delays. Performance figures derived from several experiments are reported that illustrate the adequacy of the proposed mechanism in dynamically adjusting the audio packet playout delay to the network traffic conditions while maintaining a small percentage of packet loss. --- paper_title: The Concord algorithm for synchronization of networked multimedia streams paper_content: Synchronizing different data streams from multiple sources simultaneously at a receiver is one of the basic problems involved in multimedia distributed systems. This requirement stems from the nature of packet based networks which can introduce end-to-end delays that vary both within and across streams. We present a new algorithm called Concord, which provides an integrated solution for these single and multiple stream synchronization problems. It is notable because it defines a single framework to deal with both problems, and operates under the influence of parameters which can be supplied by the application involved. In particular these parameters are used to allow a trade-off between the packet loss rates, total end-to-end delay and skew for each of the streams. For applications like conferencing this is used to reduce delay by determining the minimum buffer delay/size required. --- paper_title: Delay Reduction Techniques for Playout Buffering paper_content: Receiver synchronization of continuous media streams is required to deal with delay differences and variations resulting from delivery over packet networks such as the Internet. 
This function is commonly provided using per-stream playout buffers which introduce additional delay in order to produce a playout schedule which meets the synchronization requirements. Packets which arrive after their scheduled playout time are considered late and are discarded. In this paper, we present the Concord algorithm, which provides a delay-sensitive solution for playout buffering. It records historical information and uses it to make short-term predictions about network delay with the aim of not reacting too quickly to short-lived delay variations. This allows an application-controlled tradeoff of packet lateness against buffering delay, suitable for applications which demand low delay but can tolerate or conceal a small amount of late packets. We present a selection of results from an extensive evaluation of Concord using Internet traffic traces. We explore the use of aging techniques to improve the effectiveness of the historical information and hence, the delay predictions. The results show that Concord can produce significant reductions in buffering delay and delay variations at the expense of packet lateness values of less than 1%. --- paper_title: Voice synchronization in packet switching networks paper_content: An algorithm for voice synchronization for packet switching networks is presented. The algorithm has been tested both in simulation and on a real network. The algorithm runs on the TRAME packet switching network for both the Vocoder and CELP DoD voice coding standards. Some results of these tests are presented. Some details of the algorithm development and implementation are given as well. > --- paper_title: Packet audio playout delay adjustment: performance bounds and algorithms paper_content: In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to com- pensate for variable network delays. In this paper, we con- sider the problem of adaptively adjusting the playout delay in order to keep this delay as small as possible, while at the same time avoiding excessive "loss" due to the arrival of packets at the receiver after their playout time has al- ready passed. The contributions of this paper are twofold. First, given a trace of packet audio receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses (due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks the network delay of recently received packets and efficiently maintains delay percentile information. This in- formation, together with a "delay spike" detection algorithm based on (but extending) our earlier work, is used to dy- namically adjust talkspurt playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio delay traces and performs close to the theoretical optimum over a range of parameter values of interest. 
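Several of the entries above pair a slowly adapting delay estimate with a "delay spike" detector that reacts quickly to sudden increases in network delay. The following sketch shows one way such a detector could be structured; the entry and exit thresholds and the adaptation rate are assumptions for illustration, not the rules used in the cited algorithms.

```python
class SpikeAwareDelayTracker:
    """Illustrative delay-spike detector in the spirit of the spike-detection
    components described above; thresholds are assumed, not published values."""

    def __init__(self, spike_enter=2.0, spike_exit=1.2):
        self.spike_enter = spike_enter  # enter spike mode on a jump by this factor
        self.spike_exit = spike_exit    # leave spike mode near the old baseline
        self.baseline = None            # slowly adapting delay estimate (seconds)
        self.in_spike = False

    def observe(self, delay):
        """Return the delay estimate to use after seeing a new measurement."""
        if self.baseline is None:
            self.baseline = delay
            return self.baseline
        if not self.in_spike and delay > self.spike_enter * self.baseline:
            self.in_spike = True          # sudden jump: start following the spike
        if self.in_spike:
            if delay < self.spike_exit * self.baseline:
                self.in_spike = False     # spike over: resume slow adaptation
            else:
                return delay              # track the spike directly
        # normal mode: adapt the baseline slowly
        self.baseline = 0.95 * self.baseline + 0.05 * delay
        return self.baseline
```

During a spike the tracker simply follows the measured delays, and it falls back to slow adaptation once delays return close to the pre-spike baseline.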
--- paper_title: Adaptive playout mechanisms for packetized audio applications in wide-area networks paper_content: Recent interest in supporting packet-audio applications over wide area networks has been fueled by the availability of low-cost, toll-quality workstation audio and the demonstration that limited amounts of interactive audio can be supported by today's Internet. In such applications, received audio packets are buffered, and their playout delayed at the destination host in order to compensate for the variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, in the face of such varying network delays. They evaluate the playout algorithms using experimentally-obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay which were observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size. > --- paper_title: Techniques for Packet Voice Synchronization paper_content: Packet switching has been proposed as an effective technology for integrating voice and data in a single network. An important aspect of packet-switched voice is the reconstruction of a continuous stream of speech from the set of packets that arrive at the destination terminal, each of which may encounter a different amount of buffering delay in the packet network. The magnitude of the variation in delay may range from a few milliseconds in a local area network to hundreds of milliseconds in a long-haul packet voice and data network. This paper discusses several aspects of the packet voice synchronization problem, and techniques that can be used to address it. These techniques estimate in some way the delay encountered by each packet and use the delay estimate to determine how speech is reconstructed. The delay estimates produced by these techniques can be used in managing the flow of information in the packet network to improve overall performance. Interactions of packet voice synchronization techniques with other network design issues are also discussed. --- paper_title: Multipoint multimedia teleconference system with adaptive synchronization paper_content: This paper discusses major issues involved in the design and implementation of a multipoint multimedia conference system, such as system architecture, conference management, session control, and intramedium and intermedia synchronization. In particular, emphasis is given to conference management and adaptive synchronization algorithms. The management of multiparticipants is based upon a distributed architecture for greater flexibility. The proposed synchronization algorithm is adaptive to network changes, eliminates the need for a global clock, and is immune to the clock frequency drift, while its realization is very simple and the involved overhead is minimal. The essence of the algorithm is partitioning the vicinity of the arrival epochs of multimedia objects into three regions and counting arrivals at each region. The function of the synchronizer is to shift the playback clock (PBC) according to the individual counter contents. The ideas proposed are implemented within a teleconference system on the Ethernet/FDDI using the TCP/UDP. 
Experimental results show that the proposed synchronization algorithm performs well in our network testbed environment. --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: Delay and Synchronization Control Middleware to Support Real-Time Multimedia Services over Wireless PCS Networks paper_content: This paper presents a discussion of several middleware design issues related to the support of real-time multimedia communications over wireless personal communication services (PCS) networks. Specific interests are given to error recovery and synchronization mechanisms. A hybrid automatic repeat request (ARQ) scheme is employed for error control in the proposed system because it can efficiently adapt to nonstationary wireless channels and yield high throughput and reliability. In particular, delay and delay jitter control related to retransmissions in the error control module are addressed. An adaptive source rate control mechanism is used to handle the fluctuation of the effective channel data rate due to retransmissions. An adaptive synchronization scheme is developed to compensate for long-term delay variation caused by large-scale fading so that synchronization is preserved and end-to-end delay is kept low. Simulation results from the performance evaluation of the system are presented. --- paper_title: A Synchronization Scheme for Stored Multimedia Streams paper_content: Multimedia streams such as audio and video impose tight temporal constraints due to their continuous nature. Often, different multimedia streams must be played out in a synchronized way. We present a scheme to ensure the continuous and synchronous playout of stored multimedia streams. We propose a protocol for the synchronized playback and we compute the buffer required to achieve both, the continuity within a single substream and the synchronization between related substreams. The scheme is very general because it only makes a single assumption, namely that the jitter is bounded. 
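The stored-stream synchronization scheme summarized in the entry above assumes only that network jitter is bounded and computes the buffer needed for continuous, synchronized playout. The snippet below gives a back-of-the-envelope version of that reasoning (it is not the paper's exact derivation): delaying the start of playout by the jitter bound guarantees that every frame is available on time, and the buffer must hold roughly the jitter bound divided by the frame period, plus the frame being displayed.

```python
import math

def required_buffer_frames(jitter_bound_s, frame_period_s):
    """Rough buffer estimate under a bounded-jitter assumption:
    frames arriving up to `jitter_bound_s` early can accumulate while
    playout drains one frame every `frame_period_s` seconds."""
    return math.ceil(jitter_bound_s / frame_period_s) + 1

# Example: 25 fps video (40 ms frame period) with a 120 ms jitter bound
# needs on the order of ceil(0.120 / 0.040) + 1 = 4 frame slots.
print(required_buffer_frames(0.120, 0.040))
```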
--- paper_title: Adaptive playout strategies for packet video receivers with finite buffer capacity paper_content: Due to random delay variations in current best effort networks, packet video applications rely on end-system buffering and playout adaptation to reduce the effects of disruptions on the required smooth stream presentation. To study the effect of buffering and playout adaptation, we present an analytical model based on the M/G/1 queueing system with finite buffer capacity, and traffic intensity equal to or greater than unity. This model fits well a range of new applications that have limited buffer resources for the reception of incoming frames. We introduce the variance of distortion of playout (VDoP), a new metric that accounts for the overall presentation disruption caused by buffer underflows, intentionally introduced gaps during slowdown periods and data loss from overflows. VDoP is an elegant and fair metric for the estimation of playout quality and will hopefully assist the development of better adaptation algorithms. Furthermore, the effect of finite buffer capacity is examined in relation to stream continuity revealing a system behavior not previously accounted for. The sensitivity of the system to the variance of the arrival process is also examined by means of simulation. Finally, an online algorithm is presented for the exploitation of our study on implemented systems. --- paper_title: Intelligent video smoother for multimedia communications paper_content: Multimedia communications often require intramedia synchronization for video data to prevent potential playout discontinuity resulting from network delay variation (jitter) while still achieving satisfactory playout throughput. In this paper, we propose a neural network (NN) based intravideo synchronization mechanism, called the intelligent video smoother (IVS), operating at the application layer of the receiving end system. The IVS is composed of an NN traffic predictor, an NN window determinator, and a window-based playout smoothing algorithm. The NN traffic predictor employs an on-line-trained back-propagation neural network (BPNN) to periodically predict the characteristics of traffic modeled by a generic interrupted Bernoulli process (IBP) over a future fixed time period. With the predicted traffic characteristics, the NN window determinator determines the corresponding optimal window by means of an off-line-trained BPNN in an effort to achieve a maximum of the playout quality (Q) value. The window-based playout smoothing algorithm then dynamically adopts various playout rates according to the window and the number of packets in the buffer. Finally, we show that via simulation results and live video scenes, compared to two other playout approaches, IVS achieves high-throughput and low-discontinuity playout under a mixture of IBP arrivals. --- paper_title: Intra- and inter-stream synchronisation for stored multimedia streams paper_content: Multimedia streams such as audio and video impose tight temporal constraints due to their continuous nature. Often, different multimedia streams must be presented in a synchronized way. We introduce a scheme for the continuous and synchronous delivery of distributed stored multimedia streams across a communications network. We propose a protocol for the synchronized playback, compute the buffer requirement, and describe the experimental results of our implementation. 
The scheme is very general and does not require bounded jitter or synchronized clocks and is able to cope with clock drifts and server drop outs. --- paper_title: An Adaptive Stream Synchronization Protocol paper_content: Protocols for synchronizing data streams should be highly adaptive with regard to both changing network conditions as well as to individual user needs. The Adaptive Synchronization Protocol we are going to describe in this paper supports any type of distribution of the stream group to be synchronized. It incorporates buffer level control mechanisms allowing an immediate reaction on overflow or underflow situations. Moreover, the proposed mechanism is flexible enough to support a variety of synchronization policies and allows to switch them dynamically during presentation. Since control messages are only exchanged when the network conditions actually change, the message overhead of the protocol is very low. --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: Adaptive playout strategies for packet video receivers with finite buffer capacity paper_content: Due to random delay variations in current best effort networks, packet video applications rely on end-system buffering and playout adaptation to reduce the effects of disruptions on the required smooth stream presentation. To study the effect of buffering and playout adaptation, we present an analytical model based on the M/G/1 queueing system with finite buffer capacity, and traffic intensity equal to or greater than unity. This model fits well a range of new applications that have limited buffer resources for the reception of incoming frames. We introduce the variance of distortion of playout (VDoP), a new metric that accounts for the overall presentation disruption caused by buffer underflows, intentionally introduced gaps during slowdown periods and data loss from overflows. VDoP is an elegant and fair metric for the estimation of playout quality and will hopefully assist the development of better adaptation algorithms. Furthermore, the effect of finite buffer capacity is examined in relation to stream continuity revealing a system behavior not previously accounted for. 
The sensitivity of the system to the variance of the arrival process is also examined by means of simulation. Finally, an online algorithm is presented for the exploitation of our study on implemented systems. --- paper_title: Intelligent video smoother for multimedia communications paper_content: Multimedia communications often require intramedia synchronization for video data to prevent potential playout discontinuity resulting from network delay variation (jitter) while still achieving satisfactory playout throughput. In this paper, we propose a neural network (NN) based intravideo synchronization mechanism, called the intelligent video smoother (IVS), operating at the application layer of the receiving end system. The IVS is composed of an NN traffic predictor, an NN window determinator, and a window-based playout smoothing algorithm. The NN traffic predictor employs an on-line-trained back-propagation neural network (BPNN) to periodically predict the characteristics of traffic modeled by a generic interrupted Bernoulli process (IBP) over a future fixed time period. With the predicted traffic characteristics, the NN window determinator determines the corresponding optimal window by means of an off-line-trained BPNN in an effort to achieve a maximum of the playout quality (Q) value. The window-based playout smoothing algorithm then dynamically adopts various playout rates according to the window and the number of packets in the buffer. Finally, we show that via simulation results and live video scenes, compared to two other playout approaches, IVS achieves high-throughput and low-discontinuity playout under a mixture of IBP arrivals. --- paper_title: Intra- and inter-stream synchronisation for stored multimedia streams paper_content: Multimedia streams such as audio and video impose tight temporal constraints due to their continuous nature. Often, different multimedia streams must be presented in a synchronized way. We introduce a scheme for the continuous and synchronous delivery of distributed stored multimedia streams across a communications network. We propose a protocol for the synchronized playback, compute the buffer requirement, and describe the experimental results of our implementation. The scheme is very general and does not require bounded jitter or synchronized clocks and is able to cope with clock drifts and server drop outs. --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. 
Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: A Reliable, Adaptive Network Protocol for Video Transport paper_content: We present an adaptive network layer protocol for VBR video transport. It: (1) minimizes the buffer requirement in the network while guaranteeing that packets of VBR encoded video flows will not be lost, and (2) minimizes the end-to-end delay and jitter of frames. To achieve the former objective, we utilize a receiver-oriented adaptive credit-based flow control algorithm, and derive the necessary and sufficient number of buffers that should be reserved for ensuring its reliability. To minimize the end-to-end delay and jitter for VBR encoded video streams, we: (1) present bandwidth estimation techniques which exploit the structure of the video traffic, and (2) define a new fairness criteria for buffer allocation and then present a fair buffer/bandwidth allocation algorithm. We experimentally evaluate this protocol for a wide range of parameters and many network configurations, and demonstrate its adaptability. We also compare the performance of the protocol with numerous other schemes and demonstrate its suitability for video transport. --- paper_title: Adaptive playout strategies for packet video receivers with finite buffer capacity paper_content: Due to random delay variations in current best effort networks, packet video applications rely on end-system buffering and playout adaptation to reduce the effects of disruptions on the required smooth stream presentation. To study the effect of buffering and playout adaptation, we present an analytical model based on the M/G/1 queueing system with finite buffer capacity, and traffic intensity equal to or greater than unity. This model fits well a range of new applications that have limited buffer resources for the reception of incoming frames. We introduce the variance of distortion of playout (VDoP), a new metric that accounts for the overall presentation disruption caused by buffer underflows, intentionally introduced gaps during slowdown periods and data loss from overflows. VDoP is an elegant and fair metric for the estimation of playout quality and will hopefully assist the development of better adaptation algorithms. Furthermore, the effect of finite buffer capacity is examined in relation to stream continuity revealing a system behavior not previously accounted for. The sensitivity of the system to the variance of the arrival process is also examined by means of simulation. Finally, an online algorithm is presented for the exploitation of our study on implemented systems. --- paper_title: Intelligent video smoother for multimedia communications paper_content: Multimedia communications often require intramedia synchronization for video data to prevent potential playout discontinuity resulting from network delay variation (jitter) while still achieving satisfactory playout throughput. In this paper, we propose a neural network (NN) based intravideo synchronization mechanism, called the intelligent video smoother (IVS), operating at the application layer of the receiving end system. The IVS is composed of an NN traffic predictor, an NN window determinator, and a window-based playout smoothing algorithm. 
The NN traffic predictor employs an on-line-trained back-propagation neural network (BPNN) to periodically predict the characteristics of traffic modeled by a generic interrupted Bernoulli process (IBP) over a future fixed time period. With the predicted traffic characteristics, the NN window determinator determines the corresponding optimal window by means of an off-line-trained BPNN in an effort to achieve a maximum of the playout quality (Q) value. The window-based playout smoothing algorithm then dynamically adopts various playout rates according to the window and the number of packets in the buffer. Finally, we show that via simulation results and live video scenes, compared to two other playout approaches, IVS achieves high-throughput and low-discontinuity playout under a mixture of IBP arrivals. --- paper_title: Intra- and inter-stream synchronisation for stored multimedia streams paper_content: Multimedia streams such as audio and video impose tight temporal constraints due to their continuous nature. Often, different multimedia streams must be presented in a synchronized way. We introduce a scheme for the continuous and synchronous delivery of distributed stored multimedia streams across a communications network. We propose a protocol for the synchronized playback, compute the buffer requirement, and describe the experimental results of our implementation. The scheme is very general and does not require bounded jitter or synchronized clocks and is able to cope with clock drifts and server drop outs. --- paper_title: An Adaptive Stream Synchronization Protocol paper_content: Protocols for synchronizing data streams should be highly adaptive with regard to both changing network conditions as well as to individual user needs. The Adaptive Synchronization Protocol we are going to describe in this paper supports any type of distribution of the stream group to be synchronized. It incorporates buffer level control mechanisms allowing an immediate reaction on overflow or underflow situations. Moreover, the proposed mechanism is flexible enough to support a variety of synchronization policies and allows to switch them dynamically during presentation. Since control messages are only exchanged when the network conditions actually change, the message overhead of the protocol is very low. --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. 
The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: Human perception of jitter and media synchronization paper_content: Multimedia synchronization comprises both the definition and the establishment of temporal relationships among media types. The presentation of 'in sync' data streams is essential to achieve a natural impression, data that is 'out of sync' is perceived as being somewhat artificial, strange, or even annoying. Therefore, the goal of any multimedia system is to enable an application to present data without no or little synchronization errors. The achievement of this goal requires a detailed knowledge of the synchronization requirements at the user interface. The paper presents the results of a series of experiments about human media perception that may be used as 'quality of service' guidelines. The results show that skews between related data streams may still give the effect that the data is 'in sync' and gives some constraints under which jitter may be tolerated. The author uses the findings to develop a scheme for the processing of nontrivial synchronization skew between more than two data streams. --- paper_title: Adaptive playout strategies for packet video receivers with finite buffer capacity paper_content: Due to random delay variations in current best effort networks, packet video applications rely on end-system buffering and playout adaptation to reduce the effects of disruptions on the required smooth stream presentation. To study the effect of buffering and playout adaptation, we present an analytical model based on the M/G/1 queueing system with finite buffer capacity, and traffic intensity equal to or greater than unity. This model fits well a range of new applications that have limited buffer resources for the reception of incoming frames. We introduce the variance of distortion of playout (VDoP), a new metric that accounts for the overall presentation disruption caused by buffer underflows, intentionally introduced gaps during slowdown periods and data loss from overflows. VDoP is an elegant and fair metric for the estimation of playout quality and will hopefully assist the development of better adaptation algorithms. Furthermore, the effect of finite buffer capacity is examined in relation to stream continuity revealing a system behavior not previously accounted for. The sensitivity of the system to the variance of the arrival process is also examined by means of simulation. Finally, an online algorithm is presented for the exploitation of our study on implemented systems. --- paper_title: vic: A Flexible Framework for Packet Video paper_content: The deployment of IP Multicast has fostered the development of a suite of applications, collectively known as the MBone tools, for real-time multimedia conferencing over the Internet. Two of these tools — nv from Xerox PARC and ivs from INRIA — provide video transmission using software-based codecs. We describe a new video tool, vic , that extends the groundbreaking work of nv and ivs with a more flexible system architecture. This flexibility is characterized by network layer independence, support for hardware-based codecs, a conference coordination model, an extensible user interface, and support for diverse compression algorithms. We also propose a novel compression scheme called “Intra-H.261”. 
Created as a hybrid of the nv and ivs codecs, Intra-H.261 provides a factor of 2–3 improvement in compression gain over the nv encoder (6 dB of PSNR) as well as a substantial improvement in run-time performance over the ivs H.261 coder. --- paper_title: Intelligent video smoother for multimedia communications paper_content: Multimedia communications often require intramedia synchronization for video data to prevent potential playout discontinuity resulting from network delay variation (jitter) while still achieving satisfactory playout throughput. In this paper, we propose a neural network (NN) based intravideo synchronization mechanism, called the intelligent video smoother (IVS), operating at the application layer of the receiving end system. The IVS is composed of an NN traffic predictor, an NN window determinator, and a window-based playout smoothing algorithm. The NN traffic predictor employs an on-line-trained back-propagation neural network (BPNN) to periodically predict the characteristics of traffic modeled by a generic interrupted Bernoulli process (IBP) over a future fixed time period. With the predicted traffic characteristics, the NN window determinator determines the corresponding optimal window by means of an off-line-trained BPNN in an effort to achieve a maximum of the playout quality (Q) value. The window-based playout smoothing algorithm then dynamically adopts various playout rates according to the window and the number of packets in the buffer. Finally, we show that via simulation results and live video scenes, compared to two other playout approaches, IVS achieves high-throughput and low-discontinuity playout under a mixture of IBP arrivals. --- paper_title: Overcoming Workstation Scheduling Problems in a Real-Time Audio Tool paper_content: The recent interest in multimedia conferencing is a result of the incorporation of cheap audio and video hard-ware in today's workstations, and also as a result of the development of a global infrastructure capable of supporting multimedia traffic - the Mbone. Audio quality is impaired by packet loss and variable delay in the network, and by lack of support for real-time applications in today's general purpose workstations. A considerable amount of research effort has focused on solving the network side of the problem by providing packet loss robustness techniques, and network conscious adaptive applications. Effort to solve the operating system induced problems has concentrated on kernel modifications. This paper presents an architecture for a real-time audio media agent that copes with the problems presented by the UNIX operating system at the application level. The mechanism produces a continuous audio signal, despite the variable allocation of processing time a real-time application is given under UNIX. Continuity of audio is ensured during scheduling hiccups by using the buffering capabilities of workstation audio devices drivers. Our solution also tries to restrict the amount of audio stored in the device buffers to a minimum, to reduce the perceived end-to-end delay of the audio signal. A comparison between the method presented here (adaptive cushion algorithm), and that used by all other audio tools shows substantial reductions in both the average end-to-end delay, and the audio sample loss caused by the operating system. 
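The last entry above describes an application-level "adaptive cushion" that keeps just enough audio queued in the workstation's audio device to survive operating-system scheduling gaps while keeping end-to-end delay low. The sketch below captures that policy in a simplified form; the history length, the floor value, and the method names are illustrative assumptions, not details taken from the paper.

```python
from collections import deque

class AdaptiveCushion:
    """Sketch of a cushion-style policy: size the audio queued in the device
    buffer to cover recent scheduling gaps (all constants are illustrative)."""

    def __init__(self, history=50, floor_s=0.020):
        self.gaps = deque(maxlen=history)  # recent inter-wakeup gaps (seconds)
        self.floor_s = floor_s             # never let the cushion drop below this

    def note_wakeup_gap(self, gap_s):
        """Record the time elapsed since the audio process last ran."""
        self.gaps.append(gap_s)

    def target_cushion_s(self):
        # cover the worst scheduling gap seen recently, but no less than the floor
        return max(self.floor_s, max(self.gaps, default=0.0))

    def samples_to_enqueue(self, queued_s, sample_rate):
        """How many samples to push so the device buffer holds the cushion."""
        deficit = max(0.0, self.target_cushion_s() - queued_s)
        return int(deficit * sample_rate)
```

Each time the audio process wakes up, it would record the observed gap, query the target cushion, and top the device buffer up to that level, so continuity is preserved without holding more audio (and hence delay) than recent scheduling behavior requires.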
--- paper_title: Packet audio playout delay adjustment: performance bounds and algorithms paper_content: In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to com- pensate for variable network delays. In this paper, we con- sider the problem of adaptively adjusting the playout delay in order to keep this delay as small as possible, while at the same time avoiding excessive "loss" due to the arrival of packets at the receiver after their playout time has al- ready passed. The contributions of this paper are twofold. First, given a trace of packet audio receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses (due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks the network delay of recently received packets and efficiently maintains delay percentile information. This in- formation, together with a "delay spike" detection algorithm based on (but extending) our earlier work, is used to dy- namically adjust talkspurt playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio delay traces and performs close to the theoretical optimum over a range of parameter values of interest. --- paper_title: Adaptive playout mechanisms for packetized audio applications in wide-area networks paper_content: Recent interest in supporting packet-audio applications over wide area networks has been fueled by the availability of low-cost, toll-quality workstation audio and the demonstration that limited amounts of interactive audio can be supported by today's Internet. In such applications, received audio packets are buffered, and their playout delayed at the destination host in order to compensate for the variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, in the face of such varying network delays. They evaluate the playout algorithms using experimentally-obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay which were observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size. > --- paper_title: Dynamic Video Playout Smoothing Method for Multimedia Applications paper_content: Multimedia applications including video data require the smoothing of video playout to prevent potential discontinuity. In this paper, we propose a dynamic video playout smoothing method, called the Video Smoother, which dynamically adopts various playout rates in an attempt to compensate for high delay variance of networks. Specifically, if the number of frames in the buffer exceeds a given threshold (TH), the Smoother employs a maximum playout rate. Otherwise, the Smoother uses proportionally reduced rates in an effort to eliminate playout pauses resulting from the emptiness of the playout buffer. 
To determine THs under various loads, we present an analytic model assuming the Interrupted Poisson Process (IPP) arrival. Based on the analytic results, we establish a paradigm of determining THs and playout rates for achieving different playout qualities under various loads of networks. Finally, to demonstrate the viability of the Video Smoother, we have implemented a prototyping system including a multimedia teleconferencing application and the Video Smoother performing as part of the transport layer. The prototyping results show that the Video Smoother achieves smooth playout incurring only unnoticeable delays. --- paper_title: A survey of packet loss recovery techniques for streaming audio paper_content: We survey a number of packet loss recovery techniques for streaming audio applications operating using IP multicast. We begin with a discussion of the loss and delay characteristics of an IP multicast channel, and from this show the need for packet loss recovery. Recovery techniques may be divided into two classes: sender- and receiver-based. We compare and contrast several sender-based recovery schemes: forward error correction (both media-specific and media-independent), interleaving, and retransmission. In addition, a number of error concealment schemes are discussed. We conclude with a series of recommendations for repair schemes to be used based on application requirements and network conditions. --- paper_title: Survey of error recovery techniques for IP-based audio-visual multicast applications paper_content: IP-based audio-visual multicast applications are gaining increasing interest since they can be realized using inexpensive network services that offer no guarantees for loss or delay. When using network services that do not guarantee the quality of service (QoS) required by audio-visual applications, recovery from losses due to congestion in the network is a key problem that must be solved. This survey gives an overview of existing transport-layer error control mechanisms and discusses their suitability for use in IP-based networks. Additionally, the impact of IP over ATM on the requirements of error control mechanisms is discussed. Different network scenarios are used to assess the performance of retransmission-based error correction and forward error correction. --- paper_title: Integrating packet FEC into adaptive voice playout buffer algorithms on the Internet paper_content: Transport of real-time voice traffic on the Internet is difficult due to packet loss and jitter. Packet loss is handled primarily through a variety of different forward error correction (FEC) algorithms and local repair at the receiver. Jitter is compensated for by means of adaptive playout buffer algorithms at the receiver. Traditionally, these two mechanisms have been investigated in isolation. In this paper, we show the interactions between adaptive playout buffer algorithms and FEC, and demonstrate the need for coupling. We propose a number of novel playout buffer algorithms which provide this coupling, and demonstrate their effectiveness through simulations based on both network models and real network traces. --- paper_title: Effects of interaction between error control and media synchronization on application-level performances paper_content: We study the interaction between error control and media synchronization. 
We approach the problem by considering a virtual application data unit (ADU) network with error control within the network boundary and media synchronization outside the network boundary. We introduce the concept of cumulative inter-ADU jitter, which is the running total of the differences between inter-ADU spacings at the receiver and at the sender, to model the ADU departure process from the network observed at the entry of the playback buffer. We investigate how different error control schemes, network and source characteristics influence the process on one hand and how the process affects the choice of playback delay in order to minimize ADU losses on the other hand. Our numerical results, obtained through simulation, verify the analysis and reveals that jitter control within the network is not as important as controlling the bandwidth and the delay, especially in the presence of error control and variable bit rate source. --- paper_title: Error Control Schemes for Networks: An Overview paper_content: In this paper, we investigate the issue of error control in wireless communication networks. We review the alternative error control schemes available for providing reliable end-to-end communication in wireless environments. Through case studies, the performance and tradeoffs of these schemes are shown. Based on the application environments and QoS requirements, the design issues of error control are discussed to achieve the best solution. --- paper_title: Packet audio playout delay adjustment: performance bounds and algorithms paper_content: In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to com- pensate for variable network delays. In this paper, we con- sider the problem of adaptively adjusting the playout delay in order to keep this delay as small as possible, while at the same time avoiding excessive "loss" due to the arrival of packets at the receiver after their playout time has al- ready passed. The contributions of this paper are twofold. First, given a trace of packet audio receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses (due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks the network delay of recently received packets and efficiently maintains delay percentile information. This in- formation, together with a "delay spike" detection algorithm based on (but extending) our earlier work, is used to dy- namically adjust talkspurt playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio delay traces and performs close to the theoretical optimum over a range of parameter values of interest. --- paper_title: Adaptive playout mechanisms for packetized audio applications in wide-area networks paper_content: Recent interest in supporting packet-audio applications over wide area networks has been fueled by the availability of low-cost, toll-quality workstation audio and the demonstration that limited amounts of interactive audio can be supported by today's Internet. 
In such applications, received audio packets are buffered, and their playout delayed at the destination host in order to compensate for the variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, in the face of such varying network delays. They evaluate the playout algorithms using experimentally-obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay which were observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size. --- paper_title: QoS impact on user perception and understanding of multimedia video clips paper_content: The widespread and increasing advent of multimedia technologies means that there must be a departure from the viewpoint that users expect a Quality of Service (QoS) which will only satisfy them perceptually. What should be expected of multimedia clips is that the QoS with which they are shown is such that it will enable the users to assimilate and understand the informational content of such clips. In this paper we present experimental results linking users' understanding and perception of multimedia clips with the presentation QoS. Results show that the quality of video clips can be severely degraded without the user having to perceive any significant loss of informational content. --- paper_title: Adaptive FEC-Based Error Control for Interactive Audio in the Internet paper_content: Excessive packet loss rates can dramatically decrease the audio quality perceived by users of Internet telephony applications. Recent results suggest that error control schemes using forward error correction (FEC) are good candidates for decreasing the impact of packet loss on audio quality. With FEC schemes, redundant information is transmitted along with the original information so that the lost original data can be recovered at least in part from the redundant information. Clearly, sending additional redundancy increases the probability of recovering lost packets, but it also increases the bandwidth requirements and thus the loss rate of the audio stream. This means that the FEC scheme must be coupled to a rate control scheme. Furthermore, the amount of redundant information used at any given point in time should also depend on the characteristics of the loss process at that time (it would make no sense to send much redundant information when the channel is loss free), on the end-to-end delay constraints (destinations typically have to wait longer to decode the FEC as more FEC information is used), on the quality of the redundant information, etc. However, it is not clear how to choose the "best" possible redundant information given all the constraints mentioned above. We address this issue in the paper, and illustrate our approach using a FEC scheme for packet audio recently standardized in the IETF. We find that the problem of choosing the best redundant information can be expressed mathematically as a constrained optimization problem for which we give explicit solutions.
We obtain from these solutions a simple algorithm with very interesting features: i) it optimizes a subjective measure of quality (such as the perceived audio quality at a destination) as opposed to a non subjective measure (such as the packet loss rate at a destination), ii) it incorporates the constraints of rate control and playout delay adjustment schemes, and iii) it adapts to varying (and estimated on line with RTCP) loss conditions in the network. We have been using the algorithm, together with a TCP-friendly rate control scheme, for a few months now and we have found it to provide very good audio quality even with high and varying loss rates. We present simulation and experimental results to illustrate its performance. Submitted to IEEE Infocom ’99. --- paper_title: Delay and Synchronization Control Middleware to Support Real-Time Multimedia Services over Wireless PCS Networks paper_content: This paper presents a discussion of several middleware design issues related to the support of real-time multimedia communications over wireless personal communication services (PCS) networks. Specific interests are given to error recovery and synchronization mechanisms.
A hybrid automatic repeat request (ARQ) scheme is employed for error control in the proposed system because it can efficiently adapt to nonstationary wireless channels and yield high throughput and reliability. In particular, delay and delay jitter control related to retransmissions in the error control module are addressed. An adaptive source rate control mechanism is used to handle the fluctuation of the effective channel data rate due to retransmissions. An adaptive synchronization scheme is developed to compensate for long-term delay variation caused by large-scale fading so that synchronization is preserved and end-to-end delay is kept low. Simulation results from the performance evaluation of the system are presented. --- paper_title: Distributed Network Storage Service with Quality-of-Service Guarantees paper_content: This paper envisions a distributed network storage service with Quality-of-Service (QoS) guarantees and describes its architecture and key mechanisms. When fully realized, this service architecture would be able to support, in one integrated framework, network storage services ranging from best-effort caching to replication with performance guarantees. Content owners could, through the use of standardized protocols, reserve network storage resources to satisfy their application-specific performance requirements. They would be able to specify either the number and/or placement of the replicas, or higher-level performance goals based on access latency, bandwidth usage or data availability. The network storage provider would then optimally allocate storage resources to meet the service commitments, using leftover capacity for best-effort caching. Content consumers would then retrieve the nearest copy of the data object, be it from a replica, cache, or the original source, in a completely transparent manner. Furthermore, a distributed network storage infrastructure should be integrated with the existing transmission-based QoS framework so that applications can select the optimal combination of storage and transmission resources to satisfy their performance requirements.This work identifies and discusses key research areas and problems that need to be tackled, including those in service specification, resource mapping, admission control, resource reservation and real-time storage management. The paper establishes a QoS framework upon which community discussion on this vision can proceed. --- paper_title: Proxy-based distribution of streaming video over unicast/multicast connections paper_content: Multimedia streaming applications consume a significant amount of server and network bandwidth. In this paper we develop transmission schemes (SBatch, UPatch and MPatch) that exploit proxy-based prefix caching to reduce the transmission cost when part or all of the end-end path from the server to the clients is only unicast capable. We consider the problem of allocating a limited proxy buffer to a set of videos with different sizes and popularities. For a fractional caching policy which allows prefix caching, we develop a technique to analytically determine, for a given transmission scheme, the optimal proxy buffer allocation to each video that minimizes the bandwidth cost for the set. Our evaluations show that even a relatively small prefix cache can result in dramatic cost reductions, and that prefix caching significantly outperform policies that cache only entire objects at the proxy. 
Also proxy cache allocations that are tuned to the cost characteristics of a particular transmission scheme can significantly outperform transmission-scheme agnostic allocations. We find that carefully designed transmission schemes in conjunction with an optimal prefix caching scheme are the key to significant cost reductions over the case of transmitting a video separately to every client, when the end-end delivery path is either entirely unicast capable, or offers multicast only on the proxy-client path. --- paper_title: Video staging: a proxy-server-based approach to end-to-end video delivery over wide-area networks paper_content: Real-time distribution of stored video over wide-area networks (WANs) is a crucial component of many emerging distributed multimedia applications. The heterogeneity in the underlying network environments is an important factor that must be taken into consideration when designing an end-to-end video delivery system. We present a novel approach to the problem of end-to-end video delivery over WANs using proxy servers situated between local-area networks (LANs) and a backbone WAN. A major objective of our approach is to reduce the backbone WAN bandwidth requirement. Toward this end, we develop an effective video delivery technique called video staging via intelligent utilization of the disk bandwidth and storage space available at proxy servers. Using this video staging technique, only part of a video stream is retrieved directly from the central video server across the backbone WAN whereas the rest of the video stream is delivered to users locally from proxy servers attached to the LANs. In this manner, the WAN bandwidth requirement can be significantly reduced, particularly when a large number of users from the same LAN access the video data. We design several video staging methods and evaluate their effectiveness in trading the disk bandwidth of a proxy server for the backbone WAN bandwidth. We also develop two heuristic algorithms to solve the problem of designing a multiple video staging scheme for a proxy server with a given video access profile of a LAN. Our results demonstrate that the proposed proxy-server-based approach provides an effective and scalable solution to the problem of the end-to-end video delivery over WANs. --- paper_title: Distributing layered encoded video through caches paper_content: The efficient distribution of stored information has become a major concern in the Internet which has increasingly become a vehicle for the transport of stored video. Because of the highly heterogeneous access to the Internet, researchers and engineers have argued for layered encoded video. We investigate delivering layered encoded video using caches. Based on a stochastic knapsack model we develop a model for the layered video caching problem. We propose heuristics to determine which videos and which layers in the videos should be cached. We evaluate the performance of our heuristics through extensive numerical experiments. We also consider two intuitive extensions to the initial problem. --- paper_title: Proxy prefix caching for multimedia streams paper_content: High latency and loss rates in the Internet make it difficult to stream audio and video without introducing a large playback delay. To address these problems, we propose a prefix caching technique whereby a proxy stores the initial frames of popular clips. 
Upon receiving a request for the stream, the proxy initiates transmission to the client and simultaneously requests the remaining frames from the server. In addition to hiding the delay, throughput, and loss effects of a weaker service model between the server and the proxy, this novel yet simple prefix caching technique aids the proxy in performing workahead smoothing into the client playback buffer. By transmitting large frames in advance of each burst, workahead smoothing substantially reduces the peak and variability of the network resource requirements along the path from the proxy to the client. We describe how to construct a smooth transmission schedule, based on the size of the prefix, smoothing, and playback buffers, without increasing client playback delay. Experiments with MPEG traces show how a few megabytes of buffer space at the proxy can substantially reduce the bandwidth requirements of variable-bit-rate video. Drawing on these results, we present guidelines for allocating buffer space for each stream, and how to effectively share buffer and bandwidth resources among multiple clients and streams. ---
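Several of the playout-adjustment abstracts collected above revolve around the same per-packet delay estimator, so a compact illustration may help tie them together. The following Python sketch implements the commonly cited exponentially weighted variant (a smoothed delay estimate plus a multiple of the smoothed delay variation, with the playout point moved only at talkspurt boundaries); the smoothing factor and the safety multiplier are typical textbook values, not parameters taken from any single paper cited here.

class AdaptivePlayout:
    """Exponentially weighted playout-delay estimator (illustrative sketch)."""

    def __init__(self, alpha=0.998002, margin=4.0):
        self.alpha = alpha      # smoothing factor (assumed typical value)
        self.margin = margin    # safety factor applied to the delay variation
        self.d = 0.0            # smoothed one-way network delay estimate
        self.v = 0.0            # smoothed delay variation estimate

    def update(self, send_ts, recv_ts):
        """Update the estimates with one packet's observed network delay."""
        n = recv_ts - send_ts
        self.d = self.alpha * self.d + (1.0 - self.alpha) * n
        self.v = self.alpha * self.v + (1.0 - self.alpha) * abs(self.d - n)

    def playout_time(self, send_ts):
        """Playout instant for the first packet of a new talkspurt."""
        return send_ts + self.d + self.margin * self.v

In use, update() is called for every received packet, while playout_time() is consulted only when a new talkspurt begins, which is how the surveyed schedulers avoid audible glitches inside a talkspurt.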
Title: Intrastream Synchronization for Continuous Media Streams: A Survey of Playout Schedulers
Section 1: INTRODUCTION
Description 1: This section introduces the concept of intrastream synchronization, the significance of preserving temporal relationships in continuous media streams, and outlines the key focus areas and structure of the paper.
Section 2: ISSUES OF INTEREST AND OUTLINE
Description 2: This section discusses various metrics used for assessing intrastream synchronization quality, categorizes existing schedulers, and provides an outline of the topics covered in the remaining sections of the paper.
Section 3: MEDIA TYPE AND PLAYOUT ADAPTATION
Description 3: This section elaborates on the applicability of different playout schemes for various media types, distinguishing between continuous and semi-continuous media, and discusses how playout schedulers address the unique characteristics of each type.
Section 4: TIME-ORIENTED PLAYOUT SCHEDULERS
Description 4: This section presents a detailed overview of time-oriented playout schedulers, including methods that use timestamps and clocks to manage media unit presentation times.
Section 5: BUFFER-ORIENTED PLAYOUT SCHEDULERS
Description 5: This section explores buffer-oriented playout schedulers that manage delay jitter by observing buffer occupancy and adjusting the playout point accordingly.
Section 6: DISCUSSION AND IMPLEMENTED SYSTEMS
Description 6: This section summarizes key concepts discussed, addresses the appropriateness of various playout schedulers for different applications, and cites examples of implemented systems.
Section 7: RELATED RESEARCH ISSUES
Description 7: This section briefly covers additional research areas relevant to playout adaptation, such as forward error correction and video caching.
Section 8: CONCLUSIONS
Description 8: This section provides a summary of the survey, highlighting the importance of playout schedulers in maintaining synchronization quality in stream communications and the major contributions of the paper.
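Section 5 of the outline above covers buffer-oriented schedulers, which steer the playout point from buffer occupancy alone rather than from timestamps. As a rough illustration of that idea, the sketch below nudges the playout rate between two occupancy watermarks; the watermark and step values are assumptions for this sketch, not parameters taken from the surveyed schedulers.

def adjust_playout_rate(buffer_occupancy, nominal_rate,
                        low_watermark=5, high_watermark=20, step=0.05):
    """Return a playout rate nudged by buffer occupancy (illustrative)."""
    if buffer_occupancy < low_watermark:
        return nominal_rate * (1.0 - step)   # stretch playout so the buffer can refill
    if buffer_occupancy > high_watermark:
        return nominal_rate * (1.0 + step)   # speed up to shed accumulated latency
    return nominal_rate                      # occupancy inside the comfort zone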
A Survey of Routing Protocols in WBAN for Healthcare Applications
13
--- paper_title: A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks paper_content: Network Lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constrains specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs through formulating and solving an optimization problem where relay selection of each node acts as optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only energy consumption rate but also energy difference among sensor nodes into account to improve the network lifetime performance. Since it is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is then designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme has better performance in network lifetime compared with existing algorithms and that the heuristic solution has low time complexity with only a negligible performance degradation gap from optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion. --- paper_title: The Emerging Wireless Body Area Network on Android Smartphones: A Review paper_content: Our society now has driven us into an era where almost everything can be digitally monitored and controlled including the human body. The growth of wireless body area network (WBAN), as a specific scope of sensor networks which mounted or attached to human body also developing rapidly. It allows people to monitor their health and several daily activities. This study is intended to review the trend of WBAN especially on Android, one of the most popular smartphone platforms. A systematic literature review is concerned to the following parameters: the purpose of the device and/or application, the type of sensors, the type of Android device, and its connectivity. Most of the studies were more concern to healthcare or medical monitoring systems: blood pressure, electro cardiograph, tremor detection, etc. On the other hand, the rest of them aimed for activity tracker, environment sensing, and epidemic control. After all, those studies shown that not only Android can be a powerful platform to process data from various sensors but also smartphones can be a good alternative to develop WBANs for medical and other daily applications. --- paper_title: The state-of-the-art wireless body area sensor networks: A survey paper_content: Wireless body area sensor network is a sub-field of wireless sensor network. Wireless body area sensor network has come into existence after the development of wireless sensor network reached some ... --- paper_title: RE-ATTEMPT: A New Energy-Efficient Routing Protocol for Wireless Body Area Sensor Networks paper_content: Modern health care system is one of the most popular Wireless Body Area Sensor Network (WBASN) applications and a hot area of research subject to present work. In this paper, we present Reliability Enhanced-Adaptive Threshold based Thermal-unaware Energy-efficient Multi-hop ProTocol (RE-ATTEMPT) for WBASNs. The proposed routing protocol uses fixed deployment of wireless sensors (nodes) such that these are placed according to energy levels. 
Moreover, we use direct communication for the delivery of emergency data and multihop communication for the delivery of normal data. RE-ATTEMPT selects route with minimum hop count to deliver data which downplays the delay factor. Furthermore, we conduct a comprehensive analysis supported by MATLAB simulations to provide an estimation of path loss, and problem formulation with its solution via linear programming model for network lifetime maximization is also provided. In simulations, we analyze our protocol in terms of network lifetime, packet drops, and throughput. Results show better performance for the proposed protocol as compared to the existing one. --- paper_title: PERA: Priority-Based Energy-Efficient Routing Algorithm for WBANs paper_content: Wireless body area network (WBAN) is designed to provide a variety of services for both medical and entertainment applications. This paper proposes a priority-based energy-efficient routing algorithm (PERA) for WBAN. In PERA, first priority is assigned for emergency data, second priority is used for on demand data traffic and third priority is utilized for periodic data delivery between sink and root nodes. We develop two non-linear programming based optimization models for WBAN. We use these computational models to analyze the effect of define constraints on network performance. We also use this problem formulation frame work to estimate the best possible performance of WBAN subjected to additionally design parameters such as the energy levels, topology, source rates, reception power, and number of nodes. We introduce these models for a fixed deployment strategy and examined that these optimization models show prickly changes in network throughput and network lifetime. MATLAB simulations of proposed routing protocol are performed for five different evaluation criterion in comparison with SIMPLE and M-ATTEMPT. The simulation results show that proposed routing algorithm achieves longer lifetime and more reliable than opponent routing protocols. --- paper_title: Trust and Thermal Aware Routing Protocol (TTRP) for Wireless Body Area Networks paper_content: Recent advancements in wireless communication have made it possible to use low-power invasive or non-invasive sensor nodes to remotely monitor patients through wireless body area networks (WBANs). However, reliable and secure data transmission in WBANs is a challenging task. The issues such as privacy and confidentiality of patient related data require the reliable and secure data routing so that data can be kept safe from the malicious nodes. On the other hand, conventional biometric and cryptographic algorithms cannot be used in WBANs as they are, most of the time, complex and do not consider the misbehaving nature of nodes. Moreover, wireless nature of in vivo biomedical sensor nodes produces electromagnetic radiations which result in increased temperature; that could be harmful particularly for sensitive tissues. Therefore, trust and temperature based lightweight and resource efficient solutions are ultimate pre-requisite for WBAN. In this paper, we propose a trust and thermal-aware routing protocol for WBANs that considers trust among and temperature of nodes to isolate misbehaving nodes. This multi-facet routing strategy helps in providing a trusted and balanced network environment and restricts hotspot/misbehaving nodes to be part of trusted routes. 
Simulation results demonstrate that the proposed protocol performs better in terms of packet drop ratio, packet delay, throughput and temperature under varying traffic conditions as compared other state-of-art schemes. --- paper_title: Green Communication for Wireless Body Area Networks: Energy Aware Link Efficient Routing Approach paper_content: Recent technological advancement in wireless communication has led to the invention of wireless body area networks (WBANs), a cutting-edge technology in healthcare applications. WBANs interconnect with intelligent and miniaturized biomedical sensor nodes placed on human body to an unattended monitoring of physiological parameters of the patient. These sensors are equipped with limited resources in terms of computation, storage, and battery power. The data communication in WBANs is a resource hungry process, especially in terms of energy. One of the most significant challenges in this network is to design energy efficient next-hop node selection framework. Therefore, this paper presents a green communication framework focusing on an energy aware link efficient routing approach for WBANs (ELR-W). Firstly, a link efficiency-oriented network model is presented considering beaconing information and network initialization process. Secondly, a path cost calculation model is derived focusing on energy aware link efficiency. A complete operational framework ELR-W is developed considering energy aware next-hop link selection by utilizing the network and path cost model. The comparative performance evaluation attests the energy-oriented benefit of the proposed framework as compared to the state-of-the-art techniques. It reveals a significant enhancement in body area networking in terms of various energy-oriented metrics under medical environments. --- paper_title: A new routing protocol for WBAN to enhance energy consumption and network lifetime paper_content: Wireless Body Area Network (WBAN) comprises wireless sensor nodes that can be either on, implanted or around the body of a person. Routing in WBAN network is not the same as routing in Wireless Sensor Network (WSN) because WBAN protocols have to be thermal aware and data delivery has to be more reliable. Though different WBAN routing schemes have already been proposed, they are not energy efficient or they are unable to relay critical health data reliably. In this paper, a Multi hop routing protocol for WBAN has been proposed that is efficient in terms of transmitting power, Packet delivery Rate (PDR) and network lifetime. Fixed nodes are added to WBAN that are used as forwarder nodes. Proper subset of fixed nodes and sensor nodes can be selected as forwarder based on a cost function that includes metrics like transmission power of the node, velocity vector of the receiver, residual energy and the current location of the node with respect to the coordinator. Simulation studies show that the proposed protocol maximizes network life time and increases packet delivery. --- paper_title: The Emerging Wireless Body Area Network on Android Smartphones: A Review paper_content: Our society now has driven us into an era where almost everything can be digitally monitored and controlled including the human body. The growth of wireless body area network (WBAN), as a specific scope of sensor networks which mounted or attached to human body also developing rapidly. It allows people to monitor their health and several daily activities. 
This study is intended to review the trend of WBAN especially on Android, one of the most popular smartphone platforms. A systematic literature review is concerned to the following parameters: the purpose of the device and/or application, the type of sensors, the type of Android device, and its connectivity. Most of the studies were more concern to healthcare or medical monitoring systems: blood pressure, electro cardiograph, tremor detection, etc. On the other hand, the rest of them aimed for activity tracker, environment sensing, and epidemic control. After all, those studies shown that not only Android can be a powerful platform to process data from various sensors but also smartphones can be a good alternative to develop WBANs for medical and other daily applications. --- paper_title: Hybrid data-centric routing protocol of wireless body area network paper_content: Wireless Body Area Network (WBAN) plays an important role in many health related applications by transmitting patients data to central database of hospital. So, routing of data is an important part of WBAN. As WBAN has unique features and requirements, there are crucial issues like Quality of service (QoS), path loss, temperature rise, energy efficiency and network lifetime etc. which are needed to be considering while designing routing protocol. To overcome those issues, we propose hybrid routing protocol which support all WBAN data packets such as delay sensitive data (DSD), normal data (ND), critical data (CD) and reliability sensitive data (RSD) packets. For simplicity in classifying these data packets, we design new data classifier which takes minimum time for classification in order to improve network lifetime. While selecting the next hop we also consider route which have high link quality and low path temperature to solve issue of path loss and temperature rise respectively. NS-2 Simulator has been used for simulation of the proposed protocol. --- paper_title: MHRP: A novel mobility handling routing protocol in Wireless Body Area Network paper_content: For the last few years a lot of research work has been carried out on the applications and other aspects of Wireless Body Area Networks (WBAN). Majority of this work are found on healthcare domain. In WBAN, sensor nodes are used either as wearable or as implant devices. These devices collect information from the human body and transmit them wirelessly to remote server which is viewed and analyzed by the concerned persons sitting in their chambers. But, the link between the WBAN and the remote server changes frequently due to the movement of human body. As such maintaining seamless connectivity in such environment is a challenge of current research work. In this paper, a novel Mobility Handling Routing Protocol (MHRP) for WBAN has been proposed. The proposed protocol is faster, reliable, takes care of human mobility through posture detection and ensures seamless connectivity. In support of better performance of the proposed algorithm, analytical comparison with related existing works is given. --- paper_title: A survey on energy efficient routing protocols in wireless body area networks (WBAN) paper_content: Wireless body area network is a special purpose wireless sensor network that employs wireless sensor nodes in, on, or around the human body, and makes it possible to measure biological parameters of a person for special applications. One of the most fundamental concerns in wireless body area networks is accurate routing in order to send data promptly and properly. 
Routing protocols used in wireless body area networks are affected by a large number of factors including energy, topology, temperature, posture, radio range of sensors and appropriate quality of service in sensor nodes. Among these, achieving energy efficiency in wireless body area networks (WBANs) is a major challenge, because energy efficiency ultimately affect the network lifetime. Several routing techniques have been adopted to increase the energy efficiency. This paper aims to study wireless body area sensor networks and the study of few energy efficient routing protocols in WBAN. --- paper_title: In-Body Routing Protocols for Wireless Body Sensor Networks paper_content: Recent advances in wireless communication have led to the introduction of a novel network of miniaturized, low power, intelligent sensors that can be placed in, on, or around the body. This network is referred to as Wireless Body Area Network (WBAN). The main purpose of WBAN is to physiologically monitor patient's vital signs and consequently route the related data towards a base station. Since the environment of such a network is principally the human body, data routing mechanisms used in traditional wireless networks (e.g. WSN, WANET) need to be revised, and more restrictions have to be addressed in order to adapt it to WBAN routing challenges. Compared to those dedicated to on-body WBAN, in-body WBAN routing protocols have more constrains and restrictions and are expected to be efficient and robust. As better as we know, only few routing protocols have been proposed in literature and the research field stills underexplored. Therefore, in this paper we present an overview of the main existing routing protocols proposed for wireless in-body sensor networks (WIBSN). --- paper_title: Dynamic Connectivity Establishment and Cooperative Scheduling for QoS-Aware Wireless Body Area Networks paper_content: In a hospital environment, the total number of Wireless Body Area Network (WBAN) equipped patients requesting ubiquitous healthcare services in an area increases significantly. Therefore, increased traffic load and group-based mobility of WBANs degrades the performance of each WBAN significantly, concerning service delay and network throughput. In addition, the mobility of WBANs affects connectivity between a WBAN and an Access Point (AP) dynamically, which affects the variation in link quality significantly. To address the connectivity problem and provide Quality of Services (QoS) in the network, we propose a dynamic connectivity establishment and cooperative scheduling scheme, which minimizes the packet delivery delay and maximizes the network throughput. First, to secure the reliable connectivity among WBANs and APs dynamically, we formulate a selection parameter using a price-based approach. Thereafter, we formulate a utility function for the WBANs to offer QoS using a coalition game-theoretic approach. We study the performance of the proposed approach holistically, based on different network parameters. We also compare the performance of the proposed scheme with the existing state-of-the-art. --- paper_title: A Survey of Thermal-Aware Routing Protocols in Wireless Body Area Networks paper_content: Wireless body area networks (WBANs) have risen great concern among the society in the fields such as health care and disease control and prevention. However, the radio frequency radiation produced by frequent communication and utilization of nodes leads to temperature rise, which does great harm to human bodies. 
For that reason, the thermal-aware technology is of great value in research, which also provides a guarantee for the further development of WBANs. This paper provides brief overview, analysis and comparison of a variety of important thermal-aware routing protocols in WBANs. A detailed statement of the advantages and shortcomings of the routing protocols will be given according to their parameters and performance metrics. This paper aims to find the most suitable thermal-aware routing protocol in common environment. --- paper_title: Designing an energy efficient WBAN routing protocol paper_content: Advancement of medical science brings together new trend of proactive health care which gives rise to the era of Wireless Body Area Networks (WBAN). A number of issues including energy efficiency, reliability, optimal use of network bandwidth need to be considered for designing any multi-hop communication protocol for WBANs. Energy consumption depends on many factors like amount and frequency of forwarding traffic, node activity, distance from sink etc. Energy consumption gives rise to other issues like heated nodes. Existing routing protocols are mostly single hop or multi-hop, and generally focus on one issue ignoring the others. In this paper, we first identify the sources of energy drain, and then propose a 2-hop cost based energy efficient routing protocol for WBAN that formulates the energy drain of a node due to various reasons and incorporates it in the routing decision. Relative node mobility due to posture change is also considered here. The protocol is simulated in Castalia simulator and compared with state of the art protocols. It is found to outperform state of the art protocols in terms of packet delivery ratio for a given transmission power level. Moreover, only a small number of relays are found to be sufficient to stabilize packet delivery ratio. --- paper_title: QoS-based routing in Wireless Body Area Networks: a survey and taxonomy paper_content: Wireless Body Area Network (WBAN) constitutes a set of sensor nodes responsible for monitoring human physiological activities and actions. The increasing demand for real time applications in such networks stimulates many research activities in quality-of-service (QoS) based routing for data delivery. Designing such scheme of critical events while preserving the energy efficiency is a challenging task due to the dynamic of the network topology, severe constraints on power supply and limited in computation power and communication bandwidth. The design of QoS-based routing protocols becomes an essential part of WBANs and plays an important role in the communication stacks and has significant impact on the network performance. In this paper, we classify, survey, model and compare the most relevant and recent QoS-based routing protocols proposed in the framework of WBAN. A novel taxonomy of solutions is proposed, in which the comparison is performed with respect to relevant criteria. An analytical model is proposed in order to compare the performances of all the solutions. Furthermore, we provide a study of adaptability of the surveyed protocols related to the healthcare sector. --- paper_title: Performance Analysis of IEEE 802.15.6-Based Coexisting Mobile WBANs With Prioritized Traffic and Dynamic Interference paper_content: Intelligent wireless body area networks (WBANs) have entered into an incredible explosive popularization stage. 
WBAN technologies facilitate real-time and reliable health monitoring in e-healthcare and creative applications in other fields. However, due to the limited space and medical resources, deeply deployed WBANs are suffering severe interference problems. The interference affects the reliability and timeliness of data transmissions, and the impacts of interference become more serious in mobile WBANs because of the uncertainty of human movement. In this paper, we analyze the dynamic interference taking human mobility into consideration. The dynamic interference is investigated in different situations for WBANs coexistence. To guarantee the performance of different traffic types, a health critical index is proposed to ensure the transmission privilege of emergency data for intra- and inter-WBANs. Furthermore, the performance of the target WBAN, i.e., normalized throughput and average access delay, under different interference intensity are evaluated using a developed three-dimensional Markov chain model. Extensive numerical results show that the interference generated by mobile neighbor WBANs results in 70% throughput decrease for general medical data and doubles the packet delay experienced by the target WBAN for emergency data compared with single WBAN. The evaluation results greatly benefit the network design and management as well as the interference mitigation protocols design. --- paper_title: A Survey of Thermal-Aware Routing Protocols in Wireless Body Area Networks paper_content: Wireless body area networks (WBANs) have risen great concern among the society in the fields such as health care and disease control and prevention. However, the radio frequency radiation produced by frequent communication and utilization of nodes leads to temperature rise, which does great harm to human bodies. For that reason, the thermal-aware technology is of great value in research, which also provides a guarantee for the further development of WBANs. This paper provides brief overview, analysis and comparison of a variety of important thermal-aware routing protocols in WBANs. A detailed statement of the advantages and shortcomings of the routing protocols will be given according to their parameters and performance metrics. This paper aims to find the most suitable thermal-aware routing protocol in common environment. --- paper_title: Designing an energy efficient WBAN routing protocol paper_content: Advancement of medical science brings together new trend of proactive health care which gives rise to the era of Wireless Body Area Networks (WBAN). A number of issues including energy efficiency, reliability, optimal use of network bandwidth need to be considered for designing any multi-hop communication protocol for WBANs. Energy consumption depends on many factors like amount and frequency of forwarding traffic, node activity, distance from sink etc. Energy consumption gives rise to other issues like heated nodes. Existing routing protocols are mostly single hop or multi-hop, and generally focus on one issue ignoring the others. In this paper, we first identify the sources of energy drain, and then propose a 2-hop cost based energy efficient routing protocol for WBAN that formulates the energy drain of a node due to various reasons and incorporates it in the routing decision. Relative node mobility due to posture change is also considered here. The protocol is simulated in Castalia simulator and compared with state of the art protocols. 
It is found to outperform state of the art protocols in terms of packet delivery ratio for a given transmission power level. Moreover, only a small number of relays are found to be sufficient to stabilize packet delivery ratio. --- paper_title: A Survey of Routing Protocols in Wireless Body Sensor Networks paper_content: Wireless Body Sensor Networks (WBSNs) constitute a subset of Wireless Sensor Networks (WSNs) responsible for monitoring vital sign-related data of patients and accordingly route this data towards a sink. In routing sensed data towards sinks, WBSNs face some of the same routing challenges as general WSNs, but the unique requirements of WBSNs impose some more constraints that need to be addressed by the routing mechanisms. This paper identifies various issues and challenges in pursuit of effective routing in WBSNs. Furthermore, it provides a detailed literature review of the various existing routing protocols used in the WBSN domain by discussing their strengths and weaknesses. --- paper_title: Comparative analysis of energy efficient routing in WBAN paper_content: In this paper, we have discussed various routing techniques of wireless body area network (WBAN), its challenges and comparison has been done based on various parameters. Due to advancement in wireless technology miniature sensor nodes with low power, lightweight, invasive or non-invasive are placed in, on or around the human body to monitor the health condition. Routing protocols plays an important role to improve the overall performance of network in terms of delay, throughput, network lifetime etc. and to improve the quality of services (QoS) in WBAN. Network lifetime and successfully transmission of data to the sink node are two main factors to design a routing protocol. Routing in WBAN is classified as QoS based, temperature aware, clusters based routing, postural based, cross layer network based routing etc. These categories are further divided into various protocols. This paper provides a review of some of the existing routing protocols of WBAN by describing their techniques, advantages and disadvantages. Also, the comparison of ATTEMPT (Adaptive Threshold based Thermal unaware Energy-efficient Multi-hop Protocol), SIMPLE (Stable Increased-throughput Multihop Protocol for Link Efficiency) and EERDT (Energy Efficient and Reliable Data Transfer) protocol has been done by simulating in MATLAB. --- paper_title: MHRP: A novel mobility handling routing protocol in Wireless Body Area Network paper_content: For the last few years a lot of research work has been carried out on the applications and other aspects of Wireless Body Area Networks (WBAN). Majority of this work are found on healthcare domain. In WBAN, sensor nodes are used either as wearable or as implant devices. These devices collect information from the human body and transmit them wirelessly to remote server which is viewed and analyzed by the concerned persons sitting in their chambers. But, the link between the WBAN and the remote server changes frequently due to the movement of human body. As such maintaining seamless connectivity in such environment is a challenge of current research work. In this paper, a novel Mobility Handling Routing Protocol (MHRP) for WBAN has been proposed. The proposed protocol is faster, reliable, takes care of human mobility through posture detection and ensures seamless connectivity. In support of better performance of the proposed algorithm, analytical comparison with related existing works is given. 
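Several of the energy-efficient WBAN protocols summarized in this reference block (for example, the cost-function-based forwarder selection and the 2-hop cost-based scheme) choose the next hop by minimizing a composite cost over candidate relays. A minimal, hypothetical sketch of that pattern follows; the cost terms, field names, and weights are illustrative assumptions rather than any protocol's published formula.

def select_next_hop(neighbors, w_energy=0.4, w_link=0.4, w_hops=0.2):
    """Pick the neighbor with the lowest composite routing cost (sketch).

    `neighbors` is a list of dicts with illustrative fields:
    residual_energy (J), link_quality (0..1, higher is better),
    hops_to_sink (int). The weights are assumed values.
    """
    def cost(n):
        energy_cost = 1.0 / max(n["residual_energy"], 1e-9)  # penalize depleted relays
        link_cost = 1.0 - n["link_quality"]                   # penalize weak links
        hop_cost = n["hops_to_sink"]                          # penalize long detours
        return w_energy * energy_cost + w_link * link_cost + w_hops * hop_cost

    return min(neighbors, key=cost) if neighbors else None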
--- paper_title: Energy-Efficient and Distributed Network Management Cost Minimization in Opportunistic Wireless Body Area Networks paper_content: Mobility induced by limb/body movements in Wireless Body Area Networks (WBANs) significantly affects the link-quality of intra-BAN and inter-BAN communication units, which, in turn, affects the Quality-of-Service (QoS) of each WBAN, in terms of reliability, efficient data transmission and network throughput guarantees. Further, the variation in link-quality between WBANs and Access Points (APs) makes the WBAN-equipped patients more resource-constrained in nature, which also increases the data dissemination delay. Therefore, to minimize the data dissemination delay of the network, WBANs send patients’ physiological data to local servers using the proposed opportunistic transient connectivity establishment algorithm. Additionally, limb/body movements induce dynamic changes to the on-body network topology, which, in turn, increases the network management cost and decreases the life-time of the sensor nodes periodically. Also, mutual and cross technology interference among coexisting WBANs and other radio technologies increases the energy consumption rate of the sensor nodes and also the energy management cost. To address the problem of increased network management cost and data dissemination delay, we propose a network management cost minimization framework to optimize the network throughput and QoS of each WBAN. The proposed framework attempts to minimize the dynamic connectivity, interference management, and data dissemination costs for opportunistic WBAN. We have, theoretically, analyzed the performance of the proposed framework to provide reliable data transmission in opportunistic WBANs. Simulation results show significant improvement in the network performance compared to the existing solutions. --- paper_title: RE-ATTEMPT: A New Energy-Efficient Routing Protocol for Wireless Body Area Sensor Networks paper_content: Modern health care system is one of the most popular Wireless Body Area Sensor Network (WBASN) applications and a hot area of research subject to present work. In this paper, we present Reliability Enhanced-Adaptive Threshold based Thermal-unaware Energy-efficient Multi-hop ProTocol (RE-ATTEMPT) for WBASNs.
The proposed routing protocol uses fixed deployment of wireless sensors (nodes) such that these are placed according to energy levels. Moreover, we use direct communication for the delivery of emergency data and multihop communication for the delivery of normal data. RE-ATTEMPT selects route with minimum hop count to deliver data which downplays the delay factor. Furthermore, we conduct a comprehensive analysis supported by MATLAB simulations to provide an estimation of path loss, and problem formulation with its solution via linear programming model for network lifetime maximization is also provided. In simulations, we analyze our protocol in terms of network lifetime, packet drops, and throughput. Results show better performance for the proposed protocol as compared to the existing one. --- paper_title: Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks paper_content: High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs. However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. --- paper_title: Trust and Thermal Aware Routing Protocol (TTRP) for Wireless Body Area Networks paper_content: Recent advancements in wireless communication have made it possible to use low-power invasive or non-invasive sensor nodes to remotely monitor patients through wireless body area networks (WBANs). However, reliable and secure data transmission in WBANs is a challenging task. The issues such as privacy and confidentiality of patient related data require the reliable and secure data routing so that data can be kept safe from the malicious nodes. 
On the other hand, conventional biometric and cryptographic algorithms cannot be used in WBANs as they are, most of the time, complex and do not consider the misbehaving nature of nodes. Moreover, wireless nature of in vivo biomedical sensor nodes produces electromagnetic radiations which result in increased temperature; that could be harmful particularly for sensitive tissues. Therefore, trust and temperature based lightweight and resource efficient solutions are ultimate pre-requisite for WBAN. In this paper, we propose a trust and thermal-aware routing protocol for WBANs that considers trust among and temperature of nodes to isolate misbehaving nodes. This multi-facet routing strategy helps in providing a trusted and balanced network environment and restricts hotspot/misbehaving nodes to be part of trusted routes. Simulation results demonstrate that the proposed protocol performs better in terms of packet drop ratio, packet delay, throughput and temperature under varying traffic conditions as compared other state-of-art schemes. --- paper_title: On designing lightweight qos routing protocol for delay-sensitive wireless body area networks paper_content: Quality of Service (QoS) provisioning is critical and mandatory in any Wireless Body Area Network (WBAN). Routing protocols for this kind of networks need to be very efficient and lightweight algorithms need to be developed as WBANs are wireless interconnection of resource constrained miniaturized sensors either implanted or worn in/on the body. Optimization of Delay QoS parameter is very important requirement of many WBAN applications, particularly those demanding real-time data delivery service. In this paper, a lightweight routing protocol called LRPD is proposed, which operates hop-by-hop to optimize end-to-end latency. The protocol is realized, and evaluated for performance comparison against existing protocols of same category. Comparative study demonstrates satisfactory performance improvements. --- paper_title: Joint Transmission Power Control and Relay Cooperation for WBAN Systems paper_content: Improving transmission reliability is a crucial challenge for Wireless Body Area Networks (WBANs) because of the instability of channel conditions and the stringent Packet Loss Ratio (PLR) requirement for many WBANs applications. On the other hand, limited by the size of WBAN nodes, the energy consumption of WBAN nodes should be minimized. In this paper, we jointly consider transmission power control, dynamic slot scheduling and two-hop cooperative mechanism and propose an Autocorrelation-based Adaptive Transmission (AAT) scheme that achieves a better trade-off between transmission reliability and energy consumption for WBAN systems. The new scheme is designed to be compatible with IEEE 802.15.6. We evaluated the performance of the newly proposed scheme by importing the real channel datasets into our simulation model. Simulation results demonstrate that the AAT method can effectively improve the transmission reliability while reducing the energy consumption. We also provide the performance evaluation from three perspectives, namely packet error ratio, energy consumption and energy efficiency, and provide recommendations on the application of the two-hop cooperative mechanism associated with the proposed AAT in the contexts of WBANs. 
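The transmission power control work summarized just above adapts the transmit power of on-body nodes to channel feedback so that reliability is maintained without wasting energy. The snippet below is a generic window-based controller in that spirit; the RSSI target, hysteresis, step, and power limits are assumed values for illustration and are not taken from the cited AAT scheme.

def adjust_tx_power(current_dbm, rssi_dbm, target_dbm=-85.0,
                    hysteresis_db=3.0, step_db=1.0,
                    min_dbm=-25.0, max_dbm=0.0):
    """One step of a window-based transmit power controller (illustrative).

    Raises power when the reported RSSI falls below the target window and
    lowers it when the link has margin to spare. All numeric defaults are
    assumptions, not values from any specific WBAN paper.
    """
    if rssi_dbm < target_dbm - hysteresis_db:
        current_dbm += step_db          # link too weak: boost power
    elif rssi_dbm > target_dbm + hysteresis_db:
        current_dbm -= step_db          # link has margin: save energy
    return max(min_dbm, min(max_dbm, current_dbm))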
--- paper_title: A mobility-based temperature-aware routing protocol for Wireless Body Sensor Networks paper_content: A large number of temperature-aware routing algorithms have been proposed to prevent the damage of surrounding tissues caused by temperature rise in wireless body sensor network. But, since the existing temperature-aware routing algorithms for WBSN have not taken account of human postural mobility, large of packet are lost due to topological partitioning.
To solve this problem, in this paper, we propose a new temperature-aware routing algorithm by employing a store-and-carry scheme. Simulation results are given to demonstrate the suitability of the proposed routing algorithm, showing its performance in terms of reducing the number of hot spots, packet delivery delay and packet loss. --- paper_title: Thermal-aware routing algorithm for implanted sensor networks paper_content: Implanted biological sensors are a special class of wireless sensor networks that are used in-vivo for various medical applications. One of the major challenges of continuous in-vivo sensing is the heat generated by the implanted sensors due to communication radiation and circuitry power consumption. This paper addresses the issues of routing in implanted sensor networks. We propose a thermal-aware routing protocol that routes the data away from high temperature areas (hot spots). With this protocol, each node estimates the temperature change of its neighbors and routes packets around the hot spot area by a withdrawal strategy. The proposed protocol can achieve a better balance of temperature rise and experiences only a modestly increased delay compared with shortest-hop routing, while its thermal-awareness also provides a degree of load balancing, which leads to less packet loss in high-load situations. --- paper_title: RE-ATTEMPT: A New Energy-Efficient Routing Protocol for Wireless Body Area Sensor Networks paper_content: The modern health care system is one of the most popular Wireless Body Area Sensor Network (WBASN) applications and a hot area of research, which is the subject of the present work. In this paper, we present Reliability Enhanced-Adaptive Threshold based Thermal-unaware Energy-efficient Multi-hop ProTocol (RE-ATTEMPT) for WBASNs. The proposed routing protocol uses a fixed deployment of wireless sensors (nodes) such that they are placed according to their energy levels. Moreover, we use direct communication for the delivery of emergency data and multihop communication for the delivery of normal data. RE-ATTEMPT selects the route with the minimum hop count to deliver data, which reduces delay. Furthermore, we conduct a comprehensive analysis supported by MATLAB simulations to provide an estimation of path loss, and a problem formulation for network lifetime maximization, with its solution via a linear programming model, is also provided. In simulations, we analyze our protocol in terms of network lifetime, packet drops, and throughput. Results show better performance for the proposed protocol as compared to the existing one. --- paper_title: Thermal-aware routing algorithm for implanted sensor networks paper_content: Implanted biological sensors are a special class of wireless sensor networks that are used in-vivo for various medical applications. One of the major challenges of continuous in-vivo sensing is the heat generated by the implanted sensors due to communication radiation and circuitry power consumption. This paper addresses the issues of routing in implanted sensor networks. We propose a thermal-aware routing protocol that routes the data away from high temperature areas (hot spots). With this protocol, each node estimates the temperature change of its neighbors and routes packets around the hot spot area by a withdrawal strategy. The proposed protocol can achieve a better balance of temperature rise and experiences only a modestly increased delay compared with shortest-hop routing, while its thermal-awareness also provides a degree of load balancing, which leads to less packet loss in high-load situations.
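The hot-spot avoidance and withdrawal idea described in the thermal-aware routing abstract above can be sketched in a few lines. The temperature model, threshold, and tie-breaking rule below are illustrative assumptions, not the exact protocol from the cited paper.

```python
# Minimal sketch of hot-spot-avoiding next-hop selection with a withdrawal fallback.
# Thresholds and the temperature estimates are assumed for illustration only.

def select_next_hop(neighbors, est_temp, hop_count, prev_hop, temp_threshold=1.5):
    """neighbors: candidate node ids; est_temp: node -> estimated temperature rise (C);
    hop_count: node -> hops to the sink. Returns the chosen next hop, or prev_hop (withdraw)."""
    cool = [n for n in neighbors if est_temp.get(n, 0.0) < temp_threshold]
    if cool:
        # Among sufficiently cool neighbors, prefer the one closest to the sink.
        return min(cool, key=lambda n: hop_count.get(n, float("inf")))
    # All neighbors form a hot spot: hand the packet back to the previous hop
    # so it can try an alternative route around the heated region.
    return prev_hop

# Example topology: neighbor temperatures and hop counts to the sink.
neighbors = ["B", "C", "D"]
est_temp = {"B": 2.1, "C": 0.4, "D": 0.9}
hop_count = {"B": 2, "C": 3, "D": 2}
print(select_next_hop(neighbors, est_temp, hop_count, prev_hop="A"))  # -> "D"
```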
--- paper_title: Trust and Thermal Aware Routing Protocol (TTRP) for Wireless Body Area Networks paper_content: Recent advancements in wireless communication have made it possible to use low-power invasive or non-invasive sensor nodes to remotely monitor patients through wireless body area networks (WBANs). However, reliable and secure data transmission in WBANs is a challenging task. The issues such as privacy and confidentiality of patient related data require the reliable and secure data routing so that data can be kept safe from the malicious nodes. On the other hand, conventional biometric and cryptographic algorithms cannot be used in WBANs as they are, most of the time, complex and do not consider the misbehaving nature of nodes. Moreover, wireless nature of in vivo biomedical sensor nodes produces electromagnetic radiations which result in increased temperature; that could be harmful particularly for sensitive tissues. Therefore, trust and temperature based lightweight and resource efficient solutions are ultimate pre-requisite for WBAN. In this paper, we propose a trust and thermal-aware routing protocol for WBANs that considers trust among and temperature of nodes to isolate misbehaving nodes. This multi-facet routing strategy helps in providing a trusted and balanced network environment and restricts hotspot/misbehaving nodes to be part of trusted routes. Simulation results demonstrate that the proposed protocol performs better in terms of packet drop ratio, packet delay, throughput and temperature under varying traffic conditions as compared other state-of-art schemes. --- paper_title: A mobility-based temperature-aware routing protocol for Wireless Body Sensor Networks paper_content: A large number of temperature-aware routing algorithms have been proposed to prevent the damage of surrounding tissues caused by temperature rise in wireless body sensor network. But, since the existing temperature-aware routing algorithms for WBSN have not taken account of human postural mobility, large of packet are lost due to topological partitioning. To solve this problem, in this paper, we propose a new temperature-aware routing algorithm by employing store and carry scheme. Simulation results are given to provide the suitability of the proposed routing algorithm by demonstrating its performance in terms of reducing the number of hot spots, packet delivery delay and packet loss. --- paper_title: Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks paper_content: High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs. However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. 
Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. --- paper_title: Joint Transmission Power Control and Relay Cooperation for WBAN Systems paper_content: Improving transmission reliability is a crucial challenge for Wireless Body Area Networks (WBANs) because of the instability of channel conditions and the stringent Packet Loss Ratio (PLR) requirement for many WBANs applications. On the other hand, limited by the size of WBAN nodes, the energy consumption of WBAN nodes should be minimized. In this paper, we jointly consider transmission power control, dynamic slot scheduling and two-hop cooperative mechanism and propose an Autocorrelation-based Adaptive Transmission (AAT) scheme that achieves a better trade-off between transmission reliability and energy consumption for WBAN systems. The new scheme is designed to be compatible with IEEE 802.15.6. We evaluated the performance of the newly proposed scheme by importing the real channel datasets into our simulation model. Simulation results demonstrate that the AAT method can effectively improve the transmission reliability while reducing the energy consumption. We also provide the performance evaluation from three perspectives, namely packet error ratio, energy consumption and energy efficiency, and provide recommendations on the application of the two-hop cooperative mechanism associated with the proposed AAT in the contexts of WBANs. --- paper_title: On designing lightweight qos routing protocol for delay-sensitive wireless body area networks paper_content: Quality of Service (QoS) provisioning is critical and mandatory in any Wireless Body Area Network (WBAN). Routing protocols for this kind of networks need to be very efficient and lightweight algorithms need to be developed as WBANs are wireless interconnection of resource constrained miniaturized sensors either implanted or worn in/on the body. Optimization of Delay QoS parameter is very important requirement of many WBAN applications, particularly those demanding real-time data delivery service. In this paper, a lightweight routing protocol called LRPD is proposed, which operates hop-by-hop to optimize end-to-end latency. The protocol is realized, and evaluated for performance comparison against existing protocols of same category. Comparative study demonstrates satisfactory performance improvements. 
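The hop-by-hop, delay-aware forwarding discussed in the LRPD and CLDO abstracts above can be pictured with a simple cost comparison. The sketch below is a generic illustration under an assumed per-hop delay model (queue length, service time, link loss, and a neighbor-advertised delay to the sink); it is not the actual LRPD or CLDO decision rule.

```python
# Illustrative hop-by-hop, delay-aware next-hop choice under an assumed cost model.

def estimated_hop_delay(queue_len, service_time_ms, link_loss):
    """Rough expected time to push one packet across a lossy link."""
    retransmissions = 1.0 / max(1e-6, 1.0 - link_loss)   # expected transmission attempts
    return (queue_len + 1) * service_time_ms * retransmissions

def choose_next_hop(neighbors):
    """neighbors: dicts with queue_len, service_time_ms, link_loss,
    and advertised_delay_ms (the neighbor's own estimate toward the sink)."""
    def total_cost(n):
        return estimated_hop_delay(n["queue_len"], n["service_time_ms"],
                                   n["link_loss"]) + n["advertised_delay_ms"]
    return min(neighbors, key=total_cost)

candidates = [
    {"id": "relay1", "queue_len": 3, "service_time_ms": 4.0, "link_loss": 0.05, "advertised_delay_ms": 20.0},
    {"id": "relay2", "queue_len": 0, "service_time_ms": 6.0, "link_loss": 0.20, "advertised_delay_ms": 18.0},
]
print(choose_next_hop(candidates)["id"])  # -> "relay2" under this assumed model
```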
--- paper_title: Hybrid data-centric routing protocol of wireless body area network paper_content: Wireless Body Area Network (WBAN) plays an important role in many health related applications by transmitting patients data to central database of hospital. So, routing of data is an important part of WBAN. As WBAN has unique features and requirements, there are crucial issues like Quality of service (QoS), path loss, temperature rise, energy efficiency and network lifetime etc. which are needed to be considering while designing routing protocol. To overcome those issues, we propose hybrid routing protocol which support all WBAN data packets such as delay sensitive data (DSD), normal data (ND), critical data (CD) and reliability sensitive data (RSD) packets. For simplicity in classifying these data packets, we design new data classifier which takes minimum time for classification in order to improve network lifetime. While selecting the next hop we also consider route which have high link quality and low path temperature to solve issue of path loss and temperature rise respectively. NS-2 Simulator has been used for simulation of the proposed protocol. --- paper_title: A Distributed Game Methodology for Crowdsensing in Uncertain Wireless Scenario paper_content: With the exponentially increasing number of mobile devices, crowdsensing has been a hot topic to use the available resource of neighbor mobile devices to perform sensing tasks cooperatively. However, there still remain three main obstacles to be solved in the practical system. First, since mobile devices are selfish and rational, it is natural to provide cooperation for sensing with a reasonable payment. Meanwhile, due to the arrival and departure of sensing tasks, resource should be allocated and released dynamically when sensing task comes or leaves. To this end, this paper designs a game theoretic approach based incentive mechanism to encourage the “best” neighbor mobile devices to share their own resource for sensing. Next, in order to adjust resource among mobile devices for the better crowdsensing response, an auction based task migration algorithm is proposed, which can guarantee the truthfulness of announced price of auctioneer, individual rationality, profitability, and computational efficiency. Moreover, taking into account the random movement of mobile devices resulting in the stochastic connection, we also use multi-stage stochastic decision to take posterior resource allocation to compensate for inaccurate prediction. The numerical results show the effectiveness and improvement of the proposed multi-stage stochastic programming based distributed game theoretic methodology ( SPG ) for crowdsensing. --- paper_title: An Energy Centric Cluster-Based Routing Protocol for Wireless Sensor Networks paper_content: Clustering is an effective way to prolong the lifetime of a wireless sensor network (WSN). The common approach is to elect cluster heads to take routing and controlling duty, and to periodically rotate each cluster head’s role to distribute energy consumption among nodes. However, a significant amount of energy dissipates due to control messages overhead, which results in a shorter network lifetime. This paper proposes an energy-centric cluster-based routing mechanism in WSNs. To begin with, cluster heads are elected based on the higher ranks of the nodes. The rank is defined by residual energy and average distance from the member nodes. 
With the role of data aggregation and data forwarding, a cluster head acts as a caretaker for cluster-head election in the next round, where the ranks’ information are piggybacked along with the local data sending during intra-cluster communication. This reduces the number of control messages for the cluster-head election as well as the cluster formation in detail. Simulation results show that our proposed protocol saves the energy consumption among nodes and achieves a significant improvement in the network lifetime. --- paper_title: Joint optimal relay location and power allocation for ultra-wideband-based wireless body area networks paper_content: In this paper, we study the joint optimal relay location and power allocation problem for single-relay-assisted ultra-wideband (UWB)-based wireless body area networks (WBANs). Specifically, to optimize spectral efficiency (SE) for single-relay cooperative communication in UWB-based WBANs, we seek the relay with the optimal location together with the corresponding optimal power allocation. With proposed relay-location-based network models, the SE maximization problems are mathematically formulated by considering three practical scenarios, namely, along-torso scenario, around-torso scenario, and in-body scenario. Taking into account realistic power considerations for each scenario, the optimal relay location and power allocation are jointly derived and analyzed. Numerical results show the necessity of utilization of relay node for the spectral and energy-efficient transmission in UWB-based WBANs and demonstrate the effectiveness of the proposed scheme in particular for the around-torso and in-body scenarios. With the joint optimal relay location and power allocation, the proposed scheme is able to prolong the network lifetime and extend the transmission range in WBANs significantly compared to direct transmission. --- paper_title: Two-Stage Preamble Detection Scheme for IR-UWB Coexistence paper_content: Ultra-wideband (UWB) is a promising physical layer technology which potentially enables low power and low cost devices with applications for a wireless body area network (WBAN). A WBAN is a network with its communications devices in very close proximity to the human body and consists of a number of different sensor and actuator nodes. In this paper, a two-stage preamble detection algorithm is proposed in order to improve the coexistence of WBAN nodes in an impulse radio UWB based network. In the multi-stage process, a tentative detection during the first stage is taken into consideration to refine the overall detection performance at the next stage. A method for selecting the decision threshold parameters in each detection stage is also highlighted. We show via simulations that the proposed detection algorithm endowed with adaptive threshold calibration is capable of robustly detecting new incoming nodes against the variation of SNR. --- paper_title: Performance Improvement of Clustered Wireless Sensor Networks Using Swarm Based Algorithm paper_content: The sensor nodes in a wireless sensor network (WSN) have limited energy resources which adversely affect the long term performance of the network. So, the current research focus has been the designing of energy efficient algorithms for WSNs to improve network lifetime. This paper proposes a distributed swarm artificial bee colony (DSABC) algorithm with a clustering evaluation model to improve the energy capability of the interference aware network. 
The DSABC algorithm can optimize the dynamics of the cluster heads and sensor nodes in the WSN. The proposed algorithm can minimize the energy dissipation of nodes, balance the energy consumption across nodes and improve the lifetime of the network. The proposed algorithm has fewer control parameters in its objective function compared to other algorithms, so it is simple to implement in a clustered sensor network. The simulation results prove the superiority of the proposed DSABC algorithm compared to other recent algorithms in improving the energy efficiency and longevity of the network. --- paper_title: A channel diversity path metric for dual channel Wireless Body Area networks paper_content: Wireless Body Area networks (WBANs) are a subset of wireless sensor networks that interconnect miniaturized nodes with sensor or actuator capabilities in, on, or around a human body. WBANs can operate over a number of different frequency bands such as MICS (Medical Implant Communications system), 2.4 GHz ISM (Industrial Scientific and Medical), UWB (Ultra-Wideband), and HBC (Human Body Communications) bands. Use of dual bands can improve connectivity, throughput and reliability. In this paper we propose a metric called Weighted Multichannel Hop Count (WMHC) to provide better channel diversity and reduced interference for inclusion in a multi-channel extension to the Ad Hoc on Demand Distance Vector (AODV) routing protocol for use in WBANs. We used the Castalia simulator to evaluate the metric. We showed that WMHC is a simple method of reducing interference and improving throughput. --- paper_title: An Optimal Clustering Mechanism Based on K-Means for Wireless Sensor Networks paper_content: Energy-efficient protocols are heavily involved, especially for low-power, multi-functional wireless sensor networks (WSNs). Therefore, many studies have been presented to find a solution to increase the lifetime of WSNs. With the application of routing information, various metaheuristics have been proposed for energy-efficient clustering to ensure reliability and connectivity in WSNs even in large-scale environments. In this paper, a new method for optimal clustering with the K-means technique is proposed which aims to mitigate energy consumption and prolong the lifetime of WSNs. The obtained results show that the proposed protocol outperforms other existing clustering protocols on the basis of energy consumption, throughput, network lifetime and packet delivery as performance metrics. --- paper_title: An Optimal Trust Aware Cluster Based Routing Protocol Using Fuzzy Based Trust Inference Model and Improved Evolutionary Particle Swarm Optimization in WBANs paper_content: The wireless body sensor network (WBSN), an extension of the WSN, is in charge of the detection of patients' health-related data. This monitored health data must be routed to the sink (base station) in an effective way by an appropriate routing technique. Routing of the tremendous amount of sensed data to the base station reduces the lifetime of the network due to heavy traffic. The major concern of this work is to increase the lifespan of the network, which is considered a serious problem in wireless network operation. In order to resolve this issue, we propose an optimal trust aware cluster based routing technique in WBSN. The human body is equipped with sensor nodes for the detection of health status.
In this paper, three novel schemes, namely improved evolutionary particle swarm optimization (IEPSO), a fuzzy based trust inference model, and a self-adaptive greedy buffer allocation and scheduling algorithm (SGBAS), are proposed for the secure transmission of data. The sensor nodes are gathered to form a cluster and, from the cluster, it is necessary to select the cluster head (CH) for the effective transmission of data to nearby nodes without accumulation. The CH is chosen using the IEPSO algorithm. For secure routing, we employ the fuzzy based trust inference model to select a trusted path. Finally, to reduce traffic occurrence in the network, we introduce the SGBAS algorithm. Experimental results demonstrate that our proposed method attains better results when compared with conventional clustering protocols in terms of some distinctive QoS parameters. --- paper_title: Cluster head selection for energy efficient and delay-less routing in wireless sensor network paper_content: A wireless sensor network (WSN) is comprised of tiny, cheap and power-efficient sensor nodes which effectively transmit data to the base station. The main challenges of a WSN are distance, energy and time delay. The power resource of the sensor node is a non-rechargeable battery. Here, the greater the distance between the nodes, the higher the energy consumption. For effective transmission of data with less energy, the cluster-head approach is used. It is well known that the time delay is directly proportional to the distance between the nodes and the base station. The cluster head is selected in such a way that it is spatially close enough to the base station as well as the sensor nodes. So, the time delay can be substantially reduced. This, in turn, increases the transmission speed of the data packets. A firefly algorithm is developed for maximizing the energy efficiency of the network and the lifetime of nodes by selecting the cluster head optimally. In this paper, a firefly algorithm with cyclic randomization is proposed for selecting the best cluster head. The network performance is increased in this method when compared to the other conventional algorithms. --- paper_title: A Comparative Study of Interference and Mitigation Techniques in Wireless Body Area Networks paper_content: The wireless body area network (WBAN) is a remarkable, reliable and highly beneficial trend in the e-health monitoring system. WBANs are operated using low power wireless technology to link minute sensors with invasive or non-invasive technology for examining the patients. When WBANs are used in a dense environment or operated along with other wireless sensor networks, communication interference occurs, which may reduce system performance due to unstable signal integrity. So interference mitigation should be considered in the design. The interference is basically of two types: inter-network and intra-network interference. This paper focuses on a comparative study of mitigation techniques of inter-network interference and also discusses the open issues in WBAN. --- paper_title: Cluster Based Energy Efficient Routing Protocol Using ANT Colony Optimization and Breadth First Search paper_content: This paper presents an algorithm to choose an optimal path for data delivery for continuous monitoring of vital signs of patients in a Body Area Network (BAN) in hospital indoor environments where a large number of patients exist and the traffic generated on the path rapidly changes over time.
The methodology for finding the optimal path includes a meta-heuristic that combines Ant Colony Optimization (ACO) with clustering. We propose ACOBAN clustering for monitoring BAN data and propose a method to improve network lifetime, energy efficiency and load balancing on the overall network. Since the traffic generated by BANs on the network changes with time, the optimal path is important in a Wireless Body Area Network. Our algorithm ensures network connectivity by using a modified Cluster Head rotation process level by level and a breadth first search algorithm which avoids trapping during exploration. In the current work, we implemented the ACO method and conducted experiments on OMNeT++ to show that the proposed method can find a better solution than conventional methods. --- paper_title: Energy-Aware Data Aggregation Techniques in Wireless Sensor Network paper_content: A Wireless Sensor Network (WSN) is an exigent technology and it has a huge number of applications in disaster management, health monitoring, military, security, and so on. This network faces some critical barriers like fault tolerance, energy consumption due to heterogeneous traffic loads and redundant data transmission. Its nodes are minuscule and have restricted processing capability and reduced battery power. This limitation of reduced battery power makes the sensor network prone to failure. Data aggregation is a vital technique for active data processing in WSN. With the support of data aggregation, the energy depletion is minimized by eliminating redundant data or by decreasing the number of sent packets. This study reviews various data aggregation techniques such as clustered aggregation, tree-based aggregation, in-network aggregation, and centralized data aggregation with a focus on the energy consumption of sensor nodes. --- paper_title: An Artificial Bee Colony-Based Green Routing Mechanism in WBANs for Sensor-Based E-Healthcare Systems paper_content: At present, sensor-based E-Healthcare systems are attracting more and more attention from academia and industry. E-Healthcare systems are usually Wireless Body Area Networks (WBANs), which can monitor or diagnose human health by placing miniaturized, low-power sensor nodes in or on patients' bodies to measure various physiological parameters. However, in this process, WBAN nodes usually use batteries, and especially for implantable flexible nodes, it is difficult to accomplish battery replacement, so the energy that the node can carry is very limited, making the efficient use of energy the most important problem to consider when designing WBAN routing algorithms. By considering factors such as the residual energy of nodes, the importance level of nodes, path cost and path energy difference ratios, this paper gives a definition of the Optimal Path of Energy Consumption (OPEC) in WBANs, and designs the Optimal Energy Consumption routing based on Artificial Bee Colony (ABC) for WBANs (OEABC). A performance simulation is carried out to verify the effectiveness of the OEABC. Simulation results demonstrate that compared with the genetic algorithm and ant colony algorithm, the proposed OEABC has a better energy efficiency and faster convergence rate. --- paper_title: Optimization of Routing Algorithm for WBAN Using Genetic Approach paper_content: Wireless Body Area Networks are emerging as a technology of great importance in the field of health-care, sports, military and position tracking.
It has a broad area of application because of its characteristics such as portability, real-time monitoring, low cost and real-time feedback. Efficient data communication and limited energy resources are some of the major issues of WBANs. In this paper, an attempt is made to optimize the routing algorithm using genetic heuristics. Low Energy Adaptive Clustering Hierarchy (LEACH) and Distributed Energy-Efficient Clustering (DEEC) are clustering protocols which attempt to maintain the energy efficiency of the sensor nodes. This paper proposes a scheme to optimize the clustering based routing protocol using the genetic approach to determine the energy consumption and therefore extend the lifetime of the network. The proposed protocol has been compared with LEACH and DEEC. It is observed that the proposed protocol shows much better results. --- paper_title: Proposed Energy Efficient Algorithm for Clustering and Routing in WSN paper_content: Clustering in WSNs has recently become a big challenge and attracts much research. Clustering is a way of grouping sensor nodes into clusters, with the CH responsible for receiving data from its members and sending it to the base station (BS); efficient CH selection prolongs the network lifetime and stability region. So, improper CH selection and distribution in the sensing field will affect the performance of clustering. In a WSN environment with N sensor nodes and K CHs, there are \(N^K\) different ways to create clusters, so it is difficult to identify the optimal set of CHs without using a search optimization algorithm. In this paper, the process of CH selection is formulated as a single-objective optimization problem to find the optimal set of CHs to form one-hop clusters, in order to balance energy consumption and enhance stability and scalability using the gravitational search algorithm (GSA). The problem has been solved using particle swarm optimization and GSA, and the results are compared against the LEACH protocol. In this paper, several simulations have been done to demonstrate the efficiency of the proposed algorithm under different positions for the BS. Furthermore, a new cost function has been proposed for hierarchical clustering. The objective of hierarchical clustering is to increase network lifetime and prolong network stability; several simulations have been done to compare the efficiency of the multi-hop versus the one-hop approach. --- paper_title: An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization paper_content: Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as the energy constraints of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and maintaining the network topology. The WSN model and the proposed algorithm have been implemented using C++.
Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime. --- paper_title: Proactive data routing using controlled mobility of a mobile sink in Wireless Sensor Networks paper_content: Abstract Stationary sink based Wireless Sensor Networks (WSN) have issues like sink neighborhood problem, end-to-end delay, data delivery ratio, network lifetime, etc. Although several routing approaches with mobile sink have been introduced to mitigate those issues across the network, very few, have considered delay requirements of applications. In this paper, we propose an efficient virtual grid based hierarchical routing approach suitable for delay bound applications, which judiciously selects a mobile sink’s path by considering both hop counts and data generation rates of the sensor nodes, which reduces the overall energy consumption for multi-hop data communication. Data aggregation at each level of the hierarchy aims to reduce data traffic and increase throughput. The performance of the proposed protocol has been evaluated using simulation based on different metrics and compared with an existing routing protocol. Results have demonstrated that it performs better than the existing one while it still meets delay constraints of applications. --- paper_title: Dynamic Path Planning Design for Mobile Sink with Burst Traffic in a Region of WSN paper_content: In mobile wireless sensor networks, priori-trail planning for the mobile sink is a commonly used solution to data collection from the whole network, for its low protocol overhead. However, these trail-based approaches lack efficient load balance mechanism to handle burst WSN traffic, which needs to be sent to the base station correctly with low delay. This paper proposed a dynamic path planning for mobile sink to balance load and avoid traffic bottleneck. It contains grid partition of the network, priori-trail creation, burst-traffic awareness and estimation, resources collaborative strategy, and dynamic routing adjustment. Experiments on NS-2 platform show that the proposed algorithm can efficiently balance the regular and burst data traffic with a low-delay and low loss rate performance of the network. --- paper_title: An Improved Routing Schema with Special Clustering Using PSO Algorithm for Heterogeneous Wireless Sensor Network paper_content: Energy efficiency and energy balancing are crucial research issues as per routing protocol designing for self-organized wireless sensor networks (WSNs). Many literatures used the clustering algorithm to achieve energy efficiency and energy balancing, however, there are usually energy holes near the cluster heads (CHs) because of the heavy burden of forwarding. As the clustering problem in lossy WSNs is proved to be a NP-hard problem, many metaheuristic algorithms are utilized to solve the problem. In this paper, a special clustering method called Energy Centers Searching using Particle Swarm Optimization (EC-PSO) is presented to avoid these energy holes and search energy centers for CHs selection. During the first period, the CHs are elected using geometric method. After the energy of the network is heterogeneous, EC-PSO is adopted for clustering. Energy centers are searched using an improved PSO algorithm and nodes close to the energy center are elected as CHs. 
Additionally, a protection mechanism is also used to prevent low energy nodes from being the forwarder, and a mobile data collector is introduced to gather the data. We conduct numerous simulations to illustrate that our presented EC-PSO outperforms some similar works in terms of network lifetime enhancement and energy utilization ratio. --- paper_title: Trust-Based Intrusion Detection and Clustering Approach for Wireless Body Area Networks paper_content: For most tele-health applications, body area networks (BANs) have become a favoured and significant technology. This application domain is demanding, so assuring security and obtaining trustworthy details of the patients’ physiological signs is difficult. To rectify this issue, an attack-resilient malicious node detection scheme (BAN-Trust) was introduced in the existing system, which can identify malicious attacks on BANs. In the BAN-Trust scheme, malicious nodes are identified according to the behaviour observed by the nodes themselves and the recommendations shared by other nodes. Nevertheless, BAN-Trust considers only the common behaviour among the nodes; it does not consider the energy of the nodes or gather the information needed for measuring trust. So, here, a trust-based intrusion detection and clustering approach is proposed in order to identify the malicious nodes and transmit data in an energy-efficient manner. In our work, a trust-based intrusion detection model is introduced for identifying the malicious nodes. Different varieties of trust are considered, namely energy, data and communication trust, which can be developed between two sensor nodes. After identifying the malicious nodes, the remaining nodes in the network are grouped in order to create clusters. Every cluster has one cluster head (CH) that is chosen by utilizing a multi-objective firefly algorithm. The objective function of this system is to reduce the delay and to increase the broadcast energy and throughput. The multiple body sensor nodes are in charge of gathering different varieties of data, which are sent to the CH. The CH then forwards the gathered data to the sink and sends the details to the system via a gateway. By utilizing a hybrid encryption algorithm, the system's data is encrypted and forwarded to the hospital server. Decryption is done on the server side to recover the exact data. The proposed methodology is implemented using the NS-2 simulator. The experimental results show that the proposed system achieves good performance compared with the existing system in terms of precision, recall, throughput, packet delivery ratio and end-to-end delay. --- paper_title: Energy Efficient Clustering Scheme (EECS) for Wireless Sensor Network with Mobile Sink paper_content: The participants in a Wireless Sensor Network (WSN) are highly resource-constrained in nature. The clustering approach in the WSN supports large-scale monitoring with ease for the user. The nodes near the sink deplete their energy, forming energy holes in the network. The mobility of the sink creates a major challenge in reliable and energy efficient data communication towards the sink. Hence, a new energy-efficient routing protocol is needed to serve networks with a mobile sink. The primary objective of the proposed work is to enhance the lifetime of the network and to increase the number of packets delivered to the mobile sink in the network.
The residual energy of the node, distance, and the data overhead are taken into account for the selection of the cluster head in the proposed Energy Efficient Clustering Scheme (EECS). The waiting time of the mobile sink is estimated. Based on the mobility model, the role of the sensor node is modelled as a finite state machine and the state transitions are modelled through a Markov model. The proposed EECS algorithm has also been compared with the Modified-Low Energy Adaptive Clustering Hierarchy (MOD-LEACH) and Gateway-based Energy-Aware multi-hop Routing protocol (M-GEAR) algorithms. The proposed EECS algorithm outperforms the MOD-LEACH algorithm by 1.78 times in terms of lifetime and 1.103 times in terms of throughput. The EECS algorithm promotes unequal clustering by avoiding the energy hole and the HOT SPOT issues. --- paper_title: Energy efficient cooperative transmission in single-relay UWB based body area networks paper_content: Energy efficiency is one of the most critical parameters in ultra-wideband (UWB) based wireless body area networks (WBANs). In this paper, the energy efficiency optimization problem is investigated for cooperative transmission with a single relay in UWB based WBANs. Two practical on-body transmission scenarios are taken into account, namely, the along-torso scenario and the around-torso scenario. With a proposed single-relay WBAN model, a joint optimal scheme for the energy efficiency optimization is developed, which not only derives the optimal power allocation but also seeks the corresponding optimal relay location for each scenario. Simulation results show that the utilization of a relay node is necessary for the energy efficient transmission in particular for the around-torso scenario and the relay location is an important parameter. With the joint optimal relay location and power allocation, the proposed scheme is able to achieve up to 30 times improvement compared to direct transmission in terms of the energy efficiency when the battery of the sensor node is very limited, which indicates that it is an effective way to prolong the network lifetime in WBANs. ---
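The weighted cluster-head election described in the EECS abstract above can be pictured with a short scoring sketch. The weights, normalization, and node attributes below are assumptions for illustration and do not reproduce the exact EECS formulation.

```python
# Illustrative cluster-head scoring from residual energy, member distance, and data overhead.
# Weights and normalized attributes are assumed values, not the cited EECS parameters.

def ch_score(node, w_energy=0.5, w_dist=0.3, w_overhead=0.2):
    """Higher is better: favour high residual energy, short average distance
    to cluster members, and low pending data overhead (all normalized to [0, 1])."""
    return (w_energy * node["residual_energy"]
            - w_dist * node["avg_member_distance"]
            - w_overhead * node["data_overhead"])

def elect_cluster_head(cluster_nodes):
    return max(cluster_nodes, key=ch_score)

cluster = [
    {"id": 1, "residual_energy": 0.9, "avg_member_distance": 0.6, "data_overhead": 0.2},
    {"id": 2, "residual_energy": 0.7, "avg_member_distance": 0.3, "data_overhead": 0.1},
    {"id": 3, "residual_energy": 0.5, "avg_member_distance": 0.2, "data_overhead": 0.4},
]
print(elect_cluster_head(cluster)["id"])  # node 2 wins under these assumed weights
```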
Title: A Survey of Routing Protocols in WBAN for Healthcare Applications Section 1: Introduction Description 1: Introduce the significance of WBAN technology, its potential applications in healthcare, and the current challenges in routing protocol design. Section 2: Application Field of a Wireless Body Area Network Description 2: Detail the various applications of WBAN in healthcare, including medical care services, chronic disease monitoring, assistance for the disabled and elderly, and other applications in athletics and military. Section 3: Characteristics of the Wireless Body Area Network Description 3: Describe the unique characteristics of WBAN, its architecture, and how it differs from traditional wireless sensor networks (WSNs). Section 4: Challenges in WBAN Routing Protocol Design Description 4: Discuss the major challenges in designing routing protocols for WBANs, such as dynamic topologies, energy efficiency, node temperature, and quality of service (QoS) requirements. Section 5: Classification of Routing Protocols for WBAN Description 5: Present the classification of WBAN routing protocols, describing the different categories and their specific objectives. Section 5.1: Posture-Based Routing Description 5.1: Explain how posture-based routing protocols account for body movements and improve routing efficiency. Section 5.2: Temperature-Based Routing Description 5.2: Describe routing protocols that use node temperature as a critical factor to ensure safety and operational efficiency. Section 5.3: Cross-Layer Routing Description 5.3: Outline cross-layer routing protocols that integrate multiple protocol layers to optimize network performance. Section 5.4: Cluster-Based Routing Description 5.4: Discuss cluster-based routing protocols and their effectiveness in managing energy consumption and network management. Section 5.5: QoS-Based Routing Description 5.5: Describe QoS-based routing protocols that prioritize different types of data to meet various quality of service requirements. Section 6: Comparative Analysis Description 6: Compare and analyze different routing protocols based on their objectives, performance, and suitability for various healthcare applications. Section 7: Prospects for WBAN and Suggestions for Routing Design Description 7: Present future prospects and suggestions for routing protocol design in WBANs, emphasizing innovative methods and new ideas. Section 8: Conclusions Description 8: Summarize the key findings of the survey, the importance of routing protocols in WBANs, and their roles in enabling efficient and reliable healthcare applications.
Image and Video Compression with Neural Networks: A Review
17
--- paper_title: Motion Vector Coding in the HEVC Standard paper_content: High Efficiency Video Coding (HEVC) is an emerging international video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). Compared to H.264/AVC, HEVC has achieved substantial compression performance improvement. During the HEVC standardization, we proposed several motion vector coding techniques, which were crosschecked by other experts and then adopted into the standard. In this paper, an overview of the motion vector coding techniques in HEVC is firstly provided. Next, the proposed motion vector coding techniques including a priority-based derivation algorithm for spatial motion candidates, a priority-based derivation algorithm for temporal motion candidates, a surrounding-based candidate list, and a parallel derivation of the candidate list, are also presented. Based on HEVC test model 9 (HM9), experimental results show that the combination of the proposed techniques achieves on average 3.1% bit-rate saving under the common test conditions used for HEVC development. --- paper_title: Nonlocal In-Loop Filter: The Way Toward Next-Generation Video Coding? paper_content: In-loop filtering has emerged as an essential coding tool since H.264/AVC, due to its delicate design, which reduces different kinds of compression artifacts. However, existing in-loop filters rely only on local image correlations, largely ignoring nonlocal similarities. In this article, the authors explore the design philosophy of in-loop filters and discuss their vision for the future of in-loop filter research by examining the potential of nonlocal similarities. Specifically, the group-based sparse representation, which jointly exploits an image's local and nonlocal self-similarities, lays a novel and meaningful groundwork for in-loop filter design. Hard- and soft-thresholding filtering operations are applied to derive the sparse parameters that are appropriate for compression artifact reduction. Experimental results show that this in-loop filter design can significantly improve the compression performance of the High Efficiency Video Coding (HEVC) standard, leading us in a new direction for improving compression efficiency. --- paper_title: Arithmetic coding for data compression paper_content: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding. --- paper_title: Low-Rank-Based Nonlocal Adaptive Loop Filter for High-Efficiency Video Compression paper_content: In video coding, the in-loop filtering has emerged as a key module due to its significant improvement on compression performance since H.264/Advanced Video Coding. Existing incorporated in-loop filters in video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing image nonlocal prior knowledge by imposing the low-rank constraint on similar image patches for compression noise reduction. In the filtering process, the reconstructed frame is first divided into image patch groups according to image patch similarity. The proposed in-loop filtering is formulated as an optimization problem with low-rank constraint for every group of image patches independently. It can be efficiently solved by soft-thresholding singular values of the matrix composed of image patches in the same group. 
To adapt to the properties of the input sequences and the bit budget, an adaptive threshold derivation model is established for every group of image patches according to the characteristics of compressed image patches, quantization parameters, and coding modes. Moreover, frame-level and largest coding unit-level control flags are signaled to further improve the adaptability in the sense of rate-distortion optimization. The performance of the proposed in-loop filter is analyzed when it collaborates with the existing in-loop filters in High Efficiency Video Coding. Extensive experimental results show that our proposed in-loop filter can further improve the performance of the state-of-the-art video coding standard significantly, with up to 16% bit-rate savings. --- paper_title: Hadamard transform image coding paper_content: The introduction of the fast Fourier transform algorithm has led to the development of the Fourier transform image coding technique whereby the two-dimensional Fourier transform of an image is transmitted over a channel rather than the image itself. This development has further led to a related image coding technique in which an image is transformed by a Hadamard matrix operator. The Hadamard matrix is a square array of plus and minus ones whose rows and columns are orthogonal to one another. A high-speed computational algorithm, similar to the fast Fourier transform algorithm, which performs the Hadamard transformation has been developed. Since only real number additions and subtractions are required with the Hadamard transform, an order of magnitude speed advantage is possible compared to the complex number Fourier transform. Transmitting the Hadamard transform of an image rather than the spatial representation of the image provides a potential tolerance to channel errors and the possibility of reduced bandwidth transmission. --- paper_title: Discrete Cosine Transform paper_content: A discrete cosine transform (DCT) is defined and an algorithm to compute it using the fast Fourier transform is developed. It is shown that the discrete cosine transform can be used in the area of digital processing for the purposes of pattern recognition and Wiener filtering. Its performance is compared with that of a class of orthogonal transforms and is found to compare closely to that of the Karhunen-Loeve transform, which is known to be optimal. The performances of the Karhunen-Loeve and discrete cosine transforms are also found to compare closely with respect to the rate-distortion criterion. --- paper_title: Motion-compensated transform coding paper_content: Interframe hybrid transform/DPCM coders encode television signals by taking a spatial transform of a block of picture elements in a frame and predictively coding the resulting coefficients using the corresponding coefficients of the spatial block at the same location in the previous frame. These coders can be made more efficient for scenes containing objects in translational motion by first estimating the translational displacement of objects and then using coefficients of a spatially displaced block in the previous frame for prediction. This paper presents simulation results for such motion-compensated transform coders using two algorithms for estimating displacements. The first algorithm, which is developed in a companion paper, recursively estimates the displacements from the previously transmitted transform coefficients, thereby eliminating the need to transmit the displacement estimates.
The second algorithm, due to Limb and Murphy, estimates displacements by taking ratios of accumulated frame difference and spatial difference signals in a block. In this scheme, the displacement estimates are transmitted to the receiver. Computer simulations on two typical real-life sequences of frames show that motion-compensated coefficient prediction results in coder bit rates that are 20 to 40 percent lower than conventional interframe transform coders using “frame difference of coefficients.” Comparisons of bit rates for approximately the same picture quality show that the two methods of displacement estimation are quite similar in performance with a slight preference for the scheme with recursive displacement estimation. --- paper_title: Adaptive loop filter with temporal prediction paper_content: In this paper, we propose a method to improve adaptive loop filter (ALF) efficiency with temporal prediction. For one frame, two sets of adaptive loop filter parameters are adaptively selected by rate distortion optimization. The first set of ALF parameters is estimated by minimizing the mean square error between the original frame and the current reconstructed frame. The second set of filter parameters is the one that is used in the latest prior frame. The proposed algorithm is implemented in HM3.0 software. Compared with the HM3.0 anchor, the proposed method achieves 0.4%, 0.3% and 0.3% BD bitrate reduction on average for the high efficiency low delay B, high efficiency low delay P and high efficiency random access configurations, respectively. The encoding and decoding times increase by 1% and 2% on average, respectively. --- paper_title: Adaptive bilateral filter for improved in-loop filtering in the emerging high efficiency video coding standard paper_content: To face the still-growing video compression needs, ITU and MPEG have jointly started a standardization project called High Efficiency Video Coding (HEVC) which aims to improve the compression efficiency of the state-of-the-art H.264/AVC standard for high and ultra high definition video. The HEVC codec still relies on the usual motion compensated, predictive, block based transform coding architecture with block sizes higher than 8×8 being now considered. Furthermore, the codec is also equipped with a Wiener in-loop filter which, together with the H.264/AVC deblocking filter, further reduces the distortion between the original and decoded frames introduced by lossy coding. While this filter indeed reduces the sum of squared differences between the original and reconstructed frames, it is not specifically designed to reduce the ringing artifacts resulting from the use of large transform block sizes. In this context, this paper proposes to combine an adaptive bilateral filter together with the HEVC Wiener filter. The proposed combined filter allows an average bitrate reduction of about 7% relative to the HEVC codec without the Wiener filter and a 1.5% reduction relative to the HEVC codec with only the Wiener filter, always for the same quality. Moreover, the combined filter also reduces the ringing artifacts according to an objective metric specifically designed to quantify this type of coding artifact. --- paper_title: HEVC Deblocking Filter paper_content: This paper describes the in-loop deblocking filter used in the upcoming High Efficiency Video Coding (HEVC) standard to reduce visible artifacts at block boundaries.
The deblocking filter performs detection of the artifacts at the coded block boundaries and attenuates them by applying a selected filter. Compared to the H.264/AVC deblocking filter, the HEVC deblocking filter has lower computational complexity and better parallel processing capabilities while still achieving significant reduction of the visual artifacts. --- paper_title: A comparison of fractional-pel interpolation filters in HEVC and H.264/AVC paper_content: The fractional-pel interpolation filter adopted in H.264/AVC improves motion compensation greatly. Recently, a new DCT-based fractional-pel interpolation filter has been adopted in the upcoming HEVC standard. We are interested in the differences between these two types of fractional-pel interpolation filters. In this paper we describe the derivations of fractional-pel interpolation filters in HEVC and H.264/AVC in detail, and compare them in terms of their frequency responses. We find that the half-pel interpolation filters in HEVC and H.264/AVC are very similar, but the low-pass properties of the quarter-pel interpolation filters in HEVC are much better than those in H.264/AVC. Experimental results validate this phenomenon: the fractional-pel interpolation in H.264/AVC tends to increase BD-rates by more than 10% compared with that in HEVC, and this performance loss mainly comes from the quarter-pel interpolation filters. On the other hand, the complexity of fractional-pel interpolation filtering in HEVC is greatly increased compared with that in H.264/AVC. --- paper_title: Experiments with linear prediction in television paper_content: The correlation present in a signal makes possible the prediction of the future of the signal in terms of the past and present. If the method used for prediction makes full use of the entire pertinent past, then the error signal — the difference between the actual and the predicted signal — will be a completely random wave of lower power than the original signal but containing all the information of the original. One method of prediction, which does not make full use of the past, but which is nevertheless remarkably effective with certain signals and also appealing because of its relative simplicity, is linear prediction. Here the prediction for the next signal sample is simply the sum of previous signal samples each multiplied by an appropriate weighting factor. The best values for the weighting coefficients depend upon the statistics of the signal, but once they have been determined the prediction may be done with relatively simple apparatus. This paper describes the apparatus used for some experiments on linear prediction of television signals, and describes the results obtained to date. --- paper_title: Sample Adaptive Offset in the HEVC Standard paper_content: This paper provides a technical overview of a newly added in-loop filtering technique, sample adaptive offset (SAO), in High Efficiency Video Coding (HEVC). The key idea of SAO is to reduce sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category. The offset of each category is properly calculated at the encoder and explicitly signaled to the decoder for reducing sample distortion effectively, while the classification of each sample is performed at both the encoder and the decoder for saving side information significantly.
To achieve a low latency of only one coding tree unit (CTU), a CTU-based syntax design is specified to adapt SAO parameters for each CTU. A CTU-based optimization algorithm can be used to derive the SAO parameters of each CTU, and the SAO parameters of the CTU are interleaved into the slice data. It is reported that SAO achieves on average 3.5% BD-rate reduction and up to 23.5% BD-rate reduction with less than 1% encoding time increase and about 2.5% decoding time increase under common test conditions of HEVC reference software version 8.0. --- paper_title: Adaptive deblocking filter paper_content: This paper describes the adaptive deblocking filter used in the H.264/MPEG-4 AVC video coding standard. The filter performs simple operations to detect and analyze artifacts on coded block boundaries and attenuates those by applying a selected filter. --- paper_title: Adaptive Loop Filtering for Video Coding paper_content: Adaptive loop filtering for video coding aims to minimize the mean square error between original samples and decoded samples by using a Wiener-based adaptive filter. The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high resolution videos, local adaptation is used for luma signals by applying different filters to different regions or blocks in a picture. In addition to filter adaptation, filter on/off control at coding tree unit (CTU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture level header called adaptation parameter set, and filter on/off flags of CTUs are interleaved at CTU level in the slice data. This syntax design not only supports picture level optimization but also achieves a low encoding latency. Simulation results show that the ALF can achieve on average 7% bit rate reduction for 25 HD sequences. The run time increases are 1% and 10% for encoders and decoders, respectively, without special attention to optimization in C++ code. --- paper_title: High performance scalable image compression with EBCOT paper_content: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich feature set, including resolution and SNR scalability together with a random access property. The algorithm has modest complexity and is extremely well suited to applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon. --- paper_title: Overview of the High Efficiency Video Coding (HEVC) Standard paper_content: High Efficiency Video Coding (HEVC) is currently being prepared as the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of the HEVC standardization effort is to enable significantly improved compression performance relative to existing standards, in the range of 50% bit-rate reduction for equal perceptual video quality.
This paper provides an overview of the technical features and characteristics of the HEVC standard. --- paper_title: Intra Coding of the HEVC Standard paper_content: This paper provides an overview of the intra coding techniques in the High Efficiency Video Coding (HEVC) standard being developed by the Joint Collaborative Team on Video Coding (JCT-VC). The intra coding framework of HEVC follows that of traditional hybrid codecs and is built on spatial sample prediction followed by transform coding and postprocessing steps. Novel features contributing to the increased compression efficiency include a quadtree-based variable block size coding structure, block-size agnostic angular and planar prediction, adaptive pre- and postfiltering, and prediction direction-based transform coefficient scanning. This paper discusses the design principles applied during the development of the new intra coding methods and analyzes the compression performance of the individual tools. Computational complexity of the introduced intra prediction algorithms is analyzed both by deriving operational cycle counts and benchmarking an optimized implementation. Using objective metrics, the bitrate reduction provided by the HEVC intra coding over the H.264/advanced video coding reference is reported to be 22% on average and up to 36%. Significant subjective picture quality improvements are also reported when comparing the resulting pictures at fixed bitrate. --- paper_title: Learning translation invariant recognition in a massively parallel network paper_content: One major goal of research on massively parallel networks of neuron-like processing elements is to discover efficient methods for recognizing patterns. Another goal is to discover general learning procedures that allow networks to construct the internal representations that are required for complex tasks. This paper describes a recently developed procedure that can learn to perform a recognition task. The network is trained on examples in which the input vector represents an instance of a pattern in a particular position and the required output vector represents its name. After prolonged training, the network develops canonical internal representations of the patterns and it uses these canonical representations to identify familiar patterns in novel positions. --- paper_title: Handwritten zip code recognition with multilayer networks paper_content: An application of back-propagation networks to handwritten zip code recognition is presented. Minimal preprocessing of the data is required, but the architecture of the network is highly constrained and specifically designed for the task. The input of the network consists of size-normalized images of isolated digits. The performance on zip code digits provided by the US Postal Service is 92% recognition, 1% substitution, and 7% rejects. Structured neural networks can be viewed as statistical methods with structure which bridge the gap between purely statistical and purely structural methods. --- paper_title: The JPEG still picture compression standard paper_content: For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images.
To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method. --- paper_title: A neural network approach to transform image coding paper_content: A neural network approach is presented for transform image coding. It is shown that the three steps in the conventional transform image coding, i.e. the unitary transform of spatial domain image data, the quantization of the transform domain data and the binary coding of the quantized data, can be unified into a one-step optimization problem. Then, the optimization problem is solved by an appropriately constructed Hopfield neural network whose input is the spatial domain image data and whose output is binary codes. A practical circuit implementation is given to perform the transform image coding. The circuit has rM² neurons, where r is the bit-rate, in bit/pixel, of the coding and M² is the size of the images. Each neuron consists of only a non-linear voltage amplifier, a linear voltage-controlled current source, a d.c. current source, a linear passive resistor, a linear passive capacitor, and a weighted voltage summer which can be made of a single op amp with some linear passive resistors. Moreover, each neuron is locally connected with no more than b - 1 other neurons by wires, where b is the maximum bit allocated to a transform domain coefficient. Therefore, our proposed approach is particularly suitable for low-bit-rate image coding and VLSI implementation. Furthermore, the analogue and parallel nature of our approach matches perfectly the high-speed requirement of real-time image coding. --- paper_title: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression paper_content: A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D Gabor representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations, which provide a complete image description in terms of locally windowed 2-D spectral coordinates embedded within global 2-D spatial coordinates. In the present neural network approach, based on interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights, the network finds coefficients for complete conjoint 2-D Gabor transforms without restrictive conditions. In wavelet expansions based on a biologically inspired log-polar ensemble of dilations, rotations, and translations of a single underlying 2-D Gabor wavelet template, image compression is illustrated with ratios up to 20:1. Also demonstrated is image segmentation based on the clustering of coefficients in the complete 2-D Gabor transform.
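The JPEG and transform-coding entries above all follow the same three-step pipeline (transform, quantization, binary coding). The sketch below, an editor's illustration rather than any cited author's implementation, shows the DCT-based lossy path on one 8x8 block with a hypothetical uniform quantizer step standing in for JPEG's quantization tables and entropy coder.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform_code_block(block, step=16):
    """Transform coding of one 8x8 block: DCT, uniform quantization,
    dequantization, inverse DCT. 'step' is a hypothetical quantizer step size."""
    c = dct_matrix(block.shape[0])
    coeff = c @ (block.astype(np.float64) - 128.0) @ c.T   # forward 2-D DCT (level shifted)
    q = np.round(coeff / step)                             # these integers would be entropy coded
    rec = c.T @ (q * step) @ c + 128.0                     # decoder-side reconstruction
    return q, np.clip(np.round(rec), 0, 255).astype(np.uint8)

q, rec = transform_code_block(np.full((8, 8), 100, dtype=np.uint8))
```

Larger step sizes discard more high-frequency coefficients, which is where the rate-distortion trade-off of all the DCT-based entries in this list comes from.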
--- paper_title: Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences paper_content: Artificial neural networks are appearing as useful alternatives to traditional statistical modelling techniques in many scientific disciplines. This paper presents a general introduction and discussion of recent applications of the multilayer perceptron, one type of artificial neural network, in the atmospheric sciences. --- paper_title: Neural Network Approaches to Image Compression paper_content: This paper presents a tutorial overview of neural networks as signal processing tools for image compression. They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features of our own visual system, which allow us to process visual information with much ease. For example, multilayer perceptrons can be used as nonlinear predictors in differential pulse-code modulation (DPCM). Such predictors have been shown to increase the predictive gain relative to a linear predictor. Another active area of research is in the application of Hebbian learning to the extraction of principal components, which are the basis vectors for the optimal linear Karhunen-Loeve transform (KLT). These learning algorithms are iterative, have some computational advantages over standard eigendecomposition techniques, and can be made to adapt to changes in the input signal. Yet another model, the self-organizing feature map (SOFM), has been used with a great deal of success in the design of codebooks for vector quantization (VQ). The resulting codebooks are less sensitive to initial conditions than the standard LBG algorithm, and the topological ordering of the entries can be exploited to further increase the coding efficiency and reduce the computational complexity. --- paper_title: A non-linear predictor for differential pulse-code encoder (DPCM) using artificial neural networks paper_content: A nonlinear predictor is designed for a DPCM encoder using artificial neural networks (ANN). The predictor is based on a multilayer perceptron with three input nodes, 30 hidden nodes and one output node. The back-propagation learning algorithm is used for the training of the network. Simulation results are presented to evaluate and compare the performance of the neural net based predictor (nonlinear) with that of an optimized linear predictor. Success in the use of the nonlinear predictor is demonstrated through the reduction in the entropy of the differential error signal as compared to that of a linear predictor. Also it is shown that the ANN predictor is much more robust for encoding noisy images compared to that of a linear predictor. --- paper_title: Neural network approach to DPCM system design for image coding paper_content: This paper presents a neural network approach to differential pulse code modulation (DPCM) design for the encoding of images. Instead of traditional algorithms for the computation of the relevant coefficients, such as the autocovariance and autocorrelation methods, the predictor is designed by supervised training of a neural network on examples, i.e. on a typical sequence of pixel values. This allows the use of nonlinear as well as linear correlations. Efficient and fast neural net architectures, for nonlinear one-dimensional DPCM (NNDPCM) as well as two-dimensional adaptive DPCM (NNADPCM), have been designed and applied to still image coding. Computer simulation experiments have shown that the resulting encoders work very well. At a transmission rate of 1 bit/pixel, the 1-D NNDPCM offers an advantage of about 4 dB in peak signal-to-noise ratio over the standard linear DPCM system. At a bit rate of 0.525 bit/pixel, the 2-D NNADPCM achieves 29.5 dB for the 512 × 512 Lena image, while there is little visible distortion in the reconstructed image. This performance is comparable to that of the best schemes known to date, whether DPCM based or not, while maintaining a lower encoding complexity. Furthermore, this establishes that there is a substantial amount of nonlinear content available for 1-D and 2-D prediction in DPCM image coding.
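The DPCM entries above share one loop: predict each sample from already-reconstructed causal neighbours, then quantize and transmit only the prediction error. The sketch below assumes a three-sample causal context and a tiny untrained MLP with hypothetical weights as the nonlinear predictor; it is not the network of either cited paper.

```python
import numpy as np

def mlp_predict(context, w1, b1, w2, b2):
    """Tiny MLP predictor: 3 causal samples in, 1 predicted sample out."""
    h = np.tanh(w1 @ context + b1)
    return float(w2 @ h + b2)

def dpcm_encode_decode(signal, predictor, step=4):
    """Closed-loop DPCM: predict from already-reconstructed samples,
    quantize the residual, and reconstruct exactly as a decoder would."""
    rec = np.zeros_like(signal, dtype=np.float64)
    rec[:3] = signal[:3]                              # transmit the first samples directly
    residues = []
    for i in range(3, len(signal)):
        pred = predictor(rec[i - 3:i])
        e = np.round((signal[i] - pred) / step)       # quantized residual (to be entropy coded)
        residues.append(int(e))
        rec[i] = pred + e * step                      # decoder reconstruction
    return residues, rec

# hypothetical, untrained weights just to make the sketch runnable
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(8, 3)) * 0.1, np.zeros(8)
w2, b2 = rng.normal(size=8) * 0.1, 100.0              # output bias near the signal mean
signal = 100 + 10 * np.sin(np.arange(64) / 5.0)
residues, rec = dpcm_encode_decode(signal, lambda c: mlp_predict(c, w1, b1, w2, b2))
```

A trained predictor lowers the entropy of the residual sequence, which is the gain the cited abstracts report over the optimized linear predictor.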
--- paper_title: Neural model for Karhunen-Loeve transform with application to adaptive image compression paper_content: A neural model approach to perform adaptive calculation of the principal components (eigenvectors) of the covariance matrix of an input sequence is proposed. The algorithm is based on the successive application of the modified Hebbian learning rule proposed by Oja on every new covariance matrix that results after calculating the previous eigenvectors. The approach is shown to converge to the next dominant component that is linearly independent of all previously determined eigenvectors. The optimal learning rate is calculated by minimising an error function of the learning rate along the gradient descent direction. The approach is applied to encode grey-level images adaptively, by calculating a limited number of the KLT coefficients that meet a specified performance criterion. The effect of changing the size of the input sequence (number of image subimages), the maximum number of coding coefficients on the bit-rate values, the compression ratio, the signal-to-noise ratio, and the generalisation capability of the model to encode new images are investigated. --- paper_title: Image compression with a hierarchical neural network paper_content: A neural network data compression method is presented. This network accepts a large amount of image or text data, compresses it for storage or transmission, and subsequently restores it when desired. A new training method, referred to as the Nested Training Algorithm (NTA), that reduces the training time considerably is presented. Analytical results are provided for the specification of the optimal learning rates and the size of the training data for a given image of specified dimensions. Performance of the network has been evaluated using both synthetic and real-world data. It is shown that the developed architecture and training algorithm provide high compression ratio and low distortion while maintaining the ability to generalize, and is very robust as well.
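For the Karhunen-Loeve entry above, the Hebbian rule attributed to Oja can be sketched in a few lines; this is an illustrative simplification (first component only), and the learning rate, epoch count and toy patch data are hypothetical.

```python
import numpy as np

def oja_first_component(samples, lr=0.01, epochs=50, seed=0):
    """Estimate the first principal component (leading KLT basis vector)
    with Oja's Hebbian rule: w += lr * y * (x - y * w), where y = w.x."""
    rng = np.random.default_rng(seed)
    x0 = samples - samples.mean(axis=0)          # KLT assumes zero-mean data
    w = rng.normal(size=x0.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in x0:
            y = w @ x
            w += lr * y * (x - y * w)            # Hebbian term with built-in normalization
    return w / np.linalg.norm(w)

# toy data: 500 random 16-dimensional "patch" vectors with decaying variances
patches = np.random.default_rng(1).normal(size=(500, 16)) @ np.diag(np.linspace(3, 0.5, 16))
w = oja_first_component(patches)   # approximates the leading eigenvector of the patch covariance
```

Subsequent components are obtained the same way after deflating the data, which is the "successive application" the abstract refers to.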
--- paper_title: Artificial Neural Networks paper_content: From the Publisher: This new text has been designed to present the concepts of artificial neural networks in a concise and logical manner for computer engineering students. --- paper_title: Artificial neural network for image compression paper_content: An artificial neural network is proposed which is able to compress an image by computing a nonlinear nonorthogonal transform and its inverse. The network is trained with small blocks extracted from an image; after the learning phase, it proves to give satisfactory performances both for the learned and for different unlearned images. --- paper_title: Image data compression using a neural network model paper_content: Data compression and generalization capabilities are important for neural network models as learning machines. From this point of view, the image data compression characteristics of a neural network model are examined. The applied network model is a feedforward-type, three-layered network with the backpropagation learning algorithm. The implementation of this model on a hypercube parallel computer and its computation performance are described. Image data compression, generalization, and quantization characteristics are examined experimentally. Effects of learning using the discrete cosine transformation coefficients as initial connection weights are shown experimentally. --- paper_title: Random network learning and image compression paper_content: Digital image compression serves a wide range of applications. Encoding an image into fewer bits can be useful in reducing the storage requirements in image archival systems, or in decreasing the bandwidth for image transmission for applications such as teleconferencing and HDTV. Although some applications (e.g. medical imaging) require lossless compression, image compression usually introduces some loss in the original image. Another issue is the speed of compression and/or decompression, especially in real-time applications. In this paper the authors use a learning random neural network to achieve fast lossy image compression for gray level images. --- paper_title: Random Neural Networks with Negative and Positive Signals and Product Form Solution paper_content: We introduce a new class of random neural networks in which signals are either negative or positive. A positive signal arriving at a neuron increases its total signal count or potential by one; a negative signal reduces it by one if the potential is positive, and has no effect if it is zero. When its potential is positive, a neuron fires, sending positive or negative signals at random intervals to neurons or to the outside. Positive signals represent excitatory signals and negative signals represent inhibition.
We show that this model, with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, has a product form leading to simple analytical expressions for the system state. --- paper_title: Video compression with random neural networks paper_content: We summarize a novel neural network technique for video compression, using a "point-process" type neural network model we have developed, which is closer to biophysical reality and is mathematically much more tractable than standard models. Our algorithm uses an adaptive approach based upon the users' desired video quality Q, and achieves compression ratios of up to 500:1 for moving gray-scale images, based on a combination of motion detection, compression and temporal subsampling of frames. This leads to a compression ratio of over 1000:1 for full-color video sequences with the addition of the standard 4:1:1 spatial subsampling ratios in the chrominance images. The signal-to-noise-ratio obtained varies with the compression level and ranges from 29 dB to over 34 dB. Our method is computationally fast so that compression and decompression could possibly be performed in real-time software. --- paper_title: Video compression with wavelets and random neural network approximations paper_content: Modern video encoding techniques generate variable bit rates, because they take advantage of different rates of motion in scenes, in addition to using lossy compression within individual frames. We have introduced a novel method for video compression based on temporal subsampling of video frames, and for video frame reconstruction using neural network based function approximations. In this paper we describe another method using wavelets for still image compression of frames, and function approximations for the reconstruction of subsampled frames. We evaluated the performance of the method in terms of observed traffic characteristics for the resulting compressed and subsampled frames, and in terms of quality versus compression ratio curves with real video image sequences. Comparisons are presented with other standard methods. --- paper_title: Joint Autoregressive and Hierarchical Priors for Learned Image Compression paper_content: Recent models for learned image compression are based on autoencoders, learning approximately invertible mappings from pixels to a quantized latent representation. These are combined with an entropy model, a prior on the latent representation that can be used with standard arithmetic coding algorithms to yield a compressed bitstream. Recently, hierarchical entropy models have been introduced as a way to exploit more structure in the latents than simple fully factorized priors, improving compression performance while maintaining end-to-end optimization. Inspired by the success of autoregressive priors in probabilistic generative models, we examine autoregressive, hierarchical, as well as combined priors as alternatives, weighing their costs and benefits in the context of image compression. While it is well known that autoregressive models come with a significant computational penalty, we find that in terms of compression performance, autoregressive and hierarchical priors are complementary and, together, exploit the probabilistic structure in the latents better than all previous learned models. 
The combined model yields state-of-the-art rate--distortion performance, providing a 15.8% average reduction in file size over the previous state-of-the-art method based on deep learning, which corresponds to a 59.8% size reduction over JPEG, more than 35% reduction compared to WebP and JPEG2000, and bitstreams 8.4% smaller than BPG, the current state-of-the-art image codec. To the best of our knowledge, our model is the first learning-based method to outperform BPG on both PSNR and MS-SSIM distortion metrics. --- paper_title: Lossless Image Compression Using Reversible Integer Wavelet Transforms and Convolutional Neural Networks paper_content: In this work we introduce a lossless compression framework which incorporates convolutional neural networks (CNN) for wavelet subband prediction. A CNN is trained to predict detail coefficients from corresponding approximation coefficients, prediction error is then coded in place of wavelet coefficients. At decompression an identical CNN is used to reproduce the prediction and combine with the decoded residuals for perfect reconstruction of wavelet subbands --- paper_title: Variational image compression with a scale hyperprior paper_content: We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics. --- paper_title: Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations paper_content: We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both. --- paper_title: Deep learning paper_content: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. 
The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. --- paper_title: Lossy Image Compression with Compressive Autoencoders paper_content: We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images. --- paper_title: Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling paper_content: In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
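The gating described in the LSTM entry above reduces to a short cell update. The sketch below uses the modern formulation with a forget gate (the original 1997 cell had none) and untrained, hypothetical weights; it only illustrates the data flow that the RNN-based compression entries in this list build on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a (modern) LSTM cell: input, forget and output gates
    control what enters, stays in and leaves the memory state c."""
    z = W @ np.concatenate([h_prev, x]) + b          # all four gate pre-activations at once
    n = h_prev.size
    i = sigmoid(z[0:n])                              # input gate
    f = sigmoid(z[n:2 * n])                          # forget gate (not in the original 1997 cell)
    o = sigmoid(z[2 * n:3 * n])                      # output gate
    g = np.tanh(z[3 * n:4 * n])                      # candidate cell update
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# hypothetical sizes and untrained weights, just to show the recurrence
rng = np.random.default_rng(0)
n_hidden, n_input = 8, 4
W = rng.normal(size=(4 * n_hidden, n_hidden + n_input)) * 0.1
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(5, n_input)):              # run a short sequence through the cell
    h, c = lstm_step(x, h, c, W, b)
```

The additive update of c is the "constant error carousel" mentioned in the abstract: gradients can pass through it over many steps without vanishing.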
--- paper_title: Full Resolution Image Compression with Recurrent Neural Networks paper_content: This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%-8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding. --- paper_title: Spatially adaptive image compression using a tiled deep network paper_content: Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images. Existing image compression algorithms based on neural networks learn quantized representations with a constant spatial bit rate across each image. While entropy coding introduces some spatial variation, traditional codecs have benefited significantly by explicitly adapting the bit rate based on local image complexity and visual saliency. This paper introduces an algorithm that combines deep neural networks with quality-sensitive bit rate adaptation using a tiled network. We demonstrate the importance of spatial context prediction and show improved quantitative (PSNR) and qualitative (subjective rater assessment) results compared to a non-adaptive baseline and a recently published image compression model based on fully-convolutional neural networks. --- paper_title: Real-Time Adaptive Image Compression paper_content: We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength.
We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates. --- paper_title: DRAW: A Recurrent Neural Network For Image Generation paper_content: This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye. --- paper_title: Towards Conceptual Compression paper_content: We introduce a simple recurrent variational auto-encoder architecture that significantly improves image modeling. The system represents the state-of-the-art in latent variable models for both the ImageNet and Omniglot datasets. We show that it naturally separates global conceptual information from lower level details, thus addressing one of the fundamentally desired properties of unsupervised learning. Furthermore, the possibility of restricting ourselves to storing only global information about an image allows us to achieve high quality 'conceptual compression'. --- paper_title: Light Field Image Compression Using Generative Adversarial Network-Based View Synthesis paper_content: Light field (LF) has become an attractive representation of immersive multimedia content for simultaneously capturing both the spatial and angular information of the light rays. In this paper, we present a LF image compression framework driven by a generative adversarial network (GAN)-based sub-aperture image (SAI) generation and a cascaded hierarchical coding structure. Specifically, we sparsely sample the SAIs in LF and propose the GAN of LF (LF-GAN) to generate the unsampled SAIs by analogy with adversarial learning conditioned on its surrounding contexts. In particular, the LF-GAN learns to interpret both the angular and spatial context of the LF structure and, meanwhile, generates intermediate hypothesis for the unsampled SAIs in a certain position. Subsequently, the sampled SAIs and the residues of the generated-unsampled SAIs are re-organized as pseudo-sequences and compressed by standard video codecs. Finally, the hierarchical coding structure is adopted for the sampled SAI to effectively remove the inter-view redundancies. During the training process of LF-GAN, the pixel-wise Euclidean loss and the adversarial loss are chosen as the optimization objective, such that sharp textures with less blurring in details can be produced. Extensive experimental results show that the proposed LF-GAN-based LF image compression framework outperforms the state-of-the-art learning-based LF image compression approach with on average 4.9% BD-rate reductions over multiple LF datasets. --- paper_title: A Hybrid Neural Network for Chroma Intra Prediction paper_content: For chroma intra prediction, previous methods exemplified by the Linear Model method (LM) usually assume a linear correlation between the luma and chroma components in a coding block. This assumption is inaccurate for complex image content or large blocks, and restricts the prediction accuracy. 
In this paper, we propose a chroma intra prediction method by exploiting both spatial and cross-channel correlations using a hybrid neural network. Specifically, we utilize a convolutional neural network to extract features from the reconstructed luma samples of the current block, as well as utilize a fully connected network to extract features from the neighboring reconstructed luma and chroma samples. The extracted twofold features are then fused to predict the chroma samples–Cb and Cr simultaneously. The proposed chroma intra prediction method is integrated into HEVC. Preliminary results show that, compared with HEVC plus LM, the proposed method achieves on average 0.2%, 3.1% and 2.0% BD-rate reduction on Y, Cb and Cr components, respectively, under All-Intra configuration. --- paper_title: A Dual-Network Based Super-Resolution for Compressed High Definition Video paper_content: Convolutional neural network (CNN) based super-resolution (SR) has achieved superior performance compared with traditional methods for uncompressed images/videos, but its performance degenerates dramatically for compressed content especially at low bit-rate scenario due to the mixture distortions during sampling and compressing. This is critical because images/videos are always compressed with degraded quality in practical scenarios. In this paper, we propose a novel dual-network structure to improve the CNN-based SR performance for compressed high definition video especially at low bit-rate. To alleviate the impact of compression, an enhancement network is proposed to remove the compression artifacts which is located ahead of the SR network. The two networks, enhancement network and SR network, are optimized stepwise for different tasks of compression artifact reduction and SR respectively. Moreover, an improved geometric self-ensemble strategy is proposed to further improve the SR performance. Extensive experimental results demonstrate that the dual-network scheme can significantly improve the quality of super-resolved images/videos compared with those reconstructed from single SR network for compressed content. It achieves around 31.5% bit-rate saving for 4 K video compression compared with HEVC when applying the proposed method in a SR-based video coding framework, which proves the potential of our method in practical scenarios, e.g., video coding and SR. --- paper_title: Neural network based intra prediction for video coding paper_content: Today’s hybrid video coding systems typically perform an intra-picture prediction whereby blocks of samples are predicted from previously decoded samples of the same picture. For example, HEVC uses a set of angular prediction patterns to exploit directional sample correlations. In this paper, we propose new intra-picture prediction modes whose construction consists of two steps: First, a set of features is extracted from the decoded samples. Second, these features are used to select a predefined image pattern as the prediction signal. Since several intra prediction modes are proposed for each block-shape, a specific signalization scheme is also proposed. Our intra prediction modes lead to significant coding gains over state of the art video coding technologies. --- paper_title: Toward a new video compression scheme using super-resolution paper_content: The term super-resolution is typically used in the literature to describe the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. 
This term has been applied primarily to spatial and temporal resolution enhancement. However, intentional pre-processing and downsampling can be applied during encoding, and super-resolution techniques to upsample the image can be applied during decoding, when video compression is the main objective. In this paper we consider the following three video compression models. The first one simply compresses the sequence using any of the available standard compression methods; the second one pre-processes (without downsampling) the image sequence before compression, so that post-processing (without upsampling) is applied to the compressed sequence. The third model includes downsampling in the pre-processing stage and the application of a super resolution technique during decoding. In this paper we describe these three models but concentrate on the application of super-resolution techniques as a way to post-process and upsample compressed video sequences. Experimental results are provided on a wide range of bitrates for two very important applications: format conversion between different platforms and scalable video coding. --- paper_title: Efficient CTU-based intra frame coding for HEVC based on deep learning paper_content: To further improve the compression efficiency of HEVC intra frame coding, in this paper, a deep learning-based framework is proposed. Inspired by recently developed deep learning models for image super-resolution (SR), we propose to train a CNN (convolutional neural network) model to precisely predict the residual information of each CTU (coding tree unit) at the HEVC encoder. As a result, better CTU reconstruction and better prediction for the compression of subsequent CTUs can be achieved. To reduce computational complexity, different from current CNN-based SR works, we propose to skip the non-linear mapping layer, and incorporate the residual learning to obtain better predicted residual for CTU encoding. Experimental results have shown that the proposed method achieves 3.2% bitrate reduction in average BDBR (Bjøntegaard delta bit rate) with only a 37% increase in encoding complexity. --- paper_title: Enhanced Intra Prediction with Recurrent Neural Network in Video Coding paper_content: Intra prediction is one of the important parts in video/image codec. With intra prediction mechanism, spatial redundancy can be largely removed for further bit saving. However, current state-of-the-art intra prediction method does not produce satisfactory prediction result due to its limits in reference samples and modeling ability. To enhance the intra prediction in HEVC, in this paper, a deep neural network featuring spatial RNN, which models the spatial dependency of pixels as sequential dynamics, is proposed to generate better prediction signals. Experimental results show improvement in BD-Rate for the proposed method compared with the original HEVC prediction scheme. --- paper_title: Learning a Convolutional Neural Network for Image Compact-Resolution paper_content: We study the dual problem of image super-resolution (SR), which we term image compact-resolution (CR). Opposite to image SR that hallucinates a visually plausible high-resolution image given a low-resolution input, image CR provides a low-resolution version of a high-resolution image, such that the low-resolution version is both visually pleasing and as informative as possible compared to the high-resolution image.
We propose a convolutional neural network (CNN) for image CR, namely, CNN-CR, inspired by the great success of CNN for image SR. Specifically, we translate the requirements of image CR into operable optimization targets for training CNN-CR: the visual quality of the compact resolved image is ensured by constraining its difference from a naively downsampled version and the information loss of image CR is measured by upsampling/super-resolving the compact-resolved image and comparing that to the original image. Accordingly, CNN-CR can be trained either separately or jointly with a CNN for image SR. We explore different training strategies as well as different network structures for CNN-CR. Our experimental results show that the proposed CNN-CR clearly outperforms simple bicubic downsampling and achieves on average 2.25 dB improvement in terms of the reconstruction quality on a large collection of natural images. We further investigate two applications of image CR, i.e., low-bit-rate image compression and image retargeting. Experimental results show that the proposed CNN-CR helps achieve significant bits saving than High Efficiency Video Coding when applied to image compression and produce visually pleasing results when applied to image retargeting. --- paper_title: Convolutional Neural Network-Based Block Up-Sampling for Intra Frame Coding paper_content: Inspired by the recent advances of image super-resolution using convolutional neural network (CNN), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNN instead of hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks for pursuing the frame-level rate-distortion optimization. Our proposed scheme is implemented into the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments have been performed to evaluate our methods. Experimental results show that our scheme achieves significant bits saving compared with the HEVC anchor, especially at low bit rates, leading to on average 5.5% BD-rate reduction on common test sequences and on average 9.0% BD-rate reduction on ultrahigh definition test sequences. --- paper_title: Convolutional Neural Network-Based Block Up-Sampling for HEVC paper_content: Recently, convolutional neural network (CNN)-based methods have achieved remarkable progress in image and video super-resolution, which inspires research on down-/up-sampling-based image and video coding using CNN. Instead of hand-crafted filters for up-sampling, trained CNN models are believed to be more capable of improving image quality, thus leading to coding gain. 
However, previous studies either concentrated on intra-frame coding or performed down- and up-sampling of entire frame. In this paper, we introduce block-level down- and up-sampling into inter-frame coding with the help of CNN. Specifically, each block in the P or B frame can either be compressed at the original resolution or down-sampled and compressed at low resolution and then, up-sampled by the trained CNN models. Such block-level adaptivity is flexible to cope with the spatially variant texture and motion characteristics. We further investigate how to enhance the capability of CNN-based up-sampling by utilizing reference frames and study how to train the CNN models by using encoded video sequences. We implement the proposed scheme onto the high efficiency video coding (HEVC) reference software and perform a comprehensive set of experiments to evaluate our methods. The experimental results show that our scheme achieves superior performance to the HEVC anchor, especially at low bit rates, leading to an average 3.8%, 2.6%, and 3.5% BD-rate reduction on the HEVC common test sequences under random-access, low-delay B, and low-delay P configurations, respectively. When tested on high-definition and ultrahigh-definition sequences, the average BD-rate exceeds 5%. --- paper_title: Fully Connected Network-Based Intra Prediction for Image Coding paper_content: This paper proposes a deep learning method for intra prediction. Different from traditional methods utilizing some fixed rules, we propose using a fully connected network to learn an end-to-end mapping from neighboring reconstructed pixels to the current block. In the proposed method, the network is fed by multiple reference lines. Compared with traditional single line-based methods, more contextual information of the current block is utilized. For this reason, the proposed network has the potential to generate better prediction. In addition, the proposed network has good generalization ability on different bitrate settings. The model trained from a specified bitrate setting also works well on other bitrate settings. Experimental results demonstrate the effectiveness of the proposed method. When compared with high efficiency video coding reference software HM-16.9, our network can achieve an average of 3.4% bitrate saving. In particular, the average result of 4K sequences is 4.5% bitrate saving, where the maximum one is 7.4%. --- paper_title: Down-Sampling Based Video Coding Using Super-Resolution Technique paper_content: It has been reported that oversampling a still image before compression does not guarantee a good image quality. Similarly, down-sampling before video compression in low bit rate video coding may alleviate the blocking effect and improve peak signal-to-noise ratio of the decoded frames. When the number of discrete cosine transform coefficients is reduced in such a down-sampling based coding (DBC), the bit budget of each coefficient will increase, thus reduce the quantization error. A DBC video coding scheme is proposed in this paper, where a super-resolution technique is employed to restore the down-sampled frames to their original resolutions. The performance improvement of the proposed DBC scheme is analyzed at low bit rates, and verified by experiments. --- paper_title: Video Frame Interpolation via Adaptive Separable Convolution paper_content: Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. 
Recent approaches merge these two steps into a single convolution process by convolving input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. To address this problem, this paper formulates frame interpolation as local separable convolution over input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesizes the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation. --- paper_title: Adaptive Interpolation Filter for H.264/AVC paper_content: In order to reduce the bit-rate of video signals, current coding standards apply hybrid coding with motion-compensated prediction and transform coding of the prediction error. In former publications, it has been shown that aliasing components contained in an image signal, as well as motion blur are limiting the prediction efficiency obtained by motion compensation. In this paper, we show that the analytical calculation of an optimal interpolation filter at particular constraints is possible, resulting in total coding improvements of 20% at broadcast quality compared to the H.264/AVC High Profile. Furthermore, the spatial adaptation to local image characteristics enables further improvements of 0.15 dB for CIF sequences compared to globally adaptive filter or up to 0.6 dB, compared to the standard H.264/AVC. Additionally, we show that the presented approach is generally applicable, i.e., also motion blur can be exactly compensated, if particular constraints are fulfilled. --- paper_title: Convolutional Neural Network-Based Motion Compensation Refinement for Video Coding paper_content: Inspired by the great success of convolutional neural network (CNN) in computer vision, we propose a CNN-based method to refine the motion compensation in video coding. First, we study a simple CNN-based motion compensation refinement (CNNMCR) scheme, where we train a CNN to refine the motion compensated prediction directly. Second, we consider to exploit the contextual information for the refinement, and propose a more powerful CNNMCR scheme, where the CNN utilizes not only the motion compensated prediction, but also the neighboring reconstructed region to refine the prediction. We integrate the simple CNNMCR and the CNNMCR schemes into the High Efficiency Video Coding (HEVC) framework. Experimental results show that both schemes achieve better compression performance than the HEVC anchor, leading to on average 1.8% and 2.3% BD-rate reduction, respectively, under low-delay P configuration. Furthermore, the combination of our proposed CNNMCR and the overlapped block motion compensation (OBMC) technique provides as high as 5.2% BD-rate reduction. 
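The BD-rate figures quoted throughout these entries come from the Bjøntegaard metric. A common way to compute it is sketched below (cubic fit of log-rate against PSNR, integrated over the overlapping quality range); this is an illustrative sketch with hypothetical rate-distortion points, not any cited author's evaluation script.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bitrate difference (in %) between two
    rate-distortion curves, from a cubic fit of log-rate as a function of PSNR."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))       # overlapping PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100                # negative values mean bitrate savings

# hypothetical RD points for an anchor codec and a test codec (kbps, dB)
anchor_rate, anchor_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rate, test_psnr = [950, 1900, 3800, 7600], [34.2, 36.8, 39.3, 41.8]
print(bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr))
```

A BD-rate of, say, -3% therefore means the test codec needs about 3% fewer bits for the same objective quality, which is how the percentages in the abstracts above should be read.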
--- paper_title: Learning a convolutional neural network for fractional interpolation in HEVC inter coding paper_content: Motion compensated prediction (MCP) is an effective technology for video coding to improve compression efficiency. Fractional sample precision prediction is utilized in HEVC to further remove temporal redundancy, and finite impulse response (FIR) filters designed using decomposition of the discrete cosine transform are applied to generate samples that do not fall on the integer positions. However, the coefficients of these DCT-based interpolation filters are fixed, which may not be able to adapt to varied video content. Inspired by the remarkable success of convolutional neural network (CNN) in the single image super-resolution task, we propose to learn a convolutional neural network for fractional interpolation in HEVC inter prediction. Compared with super-resolution, there is one big difference in fractional interpolation — fractional interpolation needs to maintain samples at integer positions while super-resolution generates a whole high-resolution image. Another difference is that no real ground truth is available in the fractional interpolation process. To overcome these two challenges, we introduce a constraint strategy to the training phase of the original super-resolution network as well as a specially designed preprocessing step which reuses the DCTIF interpolation process. Unlike other previous work, our proposed approach simultaneously generates the fractional positions from one network, and experimental results show our proposed approach achieves 0.45% BD-Rate reduction under the low-delay-P configuration on average. --- paper_title: A Convolutional Neural Network Approach for Post-Processing in HEVC Intra Coding paper_content: Lossy image and video compression algorithms yield visually annoying artifacts including blocking, blurring, and ringing, especially at low bit-rates. To reduce these artifacts, post-processing techniques have been extensively studied. Recently, inspired by the great success of convolutional neural network (CNN) in computer vision, some research was performed on adopting CNN in post-processing, mostly for JPEG compressed images. In this paper, we present a CNN-based post-processing algorithm for High Efficiency Video Coding (HEVC), the state-of-the-art video coding standard. We redesign a Variable-filter-size Residue-learning CNN (VRCNN) to improve the performance and to accelerate network training. Experimental results show that using our VRCNN as post-processing leads to on average 4.6% bit-rate reduction compared to HEVC baseline. The VRCNN outperforms previously studied networks in achieving higher bit-rate reduction, lower memory cost, and multiplied computational speedup. --- paper_title: Convolutional Neural Network-Based Fractional-Pixel Motion Compensation paper_content: Fractional-pixel motion compensation (MC) improves the efficiency of inter prediction and has been utilized extensively in video coding standards. The traditional methods of fractional-pixel MC usually follow the approach of interpolation, i.e., they adopt different kinds of filters, either fixed or adaptive, to interpolate fractional-pixel values from integer-pixel values in a reference picture. Different from the interpolation approach, in this paper, we formulate the fractional-pixel MC as an inter-picture regression problem, which is to predict the pixel values of the current to-be-coded picture from the integer-pixel values of a reference picture, given a fractional-pixel motion vector that relates the two pictures. We then propose to adopt convolutional neural network (CNN) models to approach the regression problem, inspired by the recent advances of CNN. Accordingly, we propose fractional-pixel reference generation CNN (FRCNN) for both uni-directional and bi-directional MC in video coding. We further investigate how to train FRCNN by using encoded video sequences, and empirically study the effect of different training data and different CNN structures. Moreover, we propose to integrate FRCNN into the high efficiency video coding (HEVC) scheme, and perform a comprehensive set of experiments to evaluate the effectiveness of FRCNN. The experimental results show that our proposed FRCNN achieves on average 3.9%, 2.7%, and 1.3% bits saving compared with HEVC, under low-delay P, low-delay B, and random-access configurations, respectively.
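The fixed DCT-based interpolation filters that the CNN approaches above compete with can be illustrated with the 8-tap HEVC half-sample luma filter. The sketch below applies the well-known taps (-1, 4, -11, 40, 40, -11, 4, -1)/64 to one row and, as a simplification, omits the border padding, rounding offsets and clipping used in the actual standard.

```python
import numpy as np

# 8-tap DCT-based half-sample luma interpolation filter used in HEVC (normalized by 64)
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.float64)

def interpolate_half_pel_row(row):
    """Interpolate the horizontal half-sample positions of one row of integer
    samples; padding, rounding offsets and clipping are omitted in this sketch."""
    row = np.asarray(row, dtype=np.float64)
    half = np.convolve(row, HALF_PEL_TAPS[::-1], mode="valid") / 64.0
    # half[i] sits between row[i + 3] and row[i + 4]
    return half

print(interpolate_half_pel_row([10, 10, 10, 10, 20, 20, 20, 20, 20]))
```

On the flat part of this toy row the filter returns the exact midpoint value, while near the step it overshoots slightly, which is the kind of content dependence the learned interpolation filters above try to adapt to.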
--- paper_title: Enhanced Ctu-Level Inter Prediction with Deep Frame Rate Up-Conversion for High Efficiency Video Coding paper_content: Inter prediction serves as the foundation of the prediction based hybrid video coding framework. The state-of-the-art video coding standards employ the reconstructed frames as the references, and the motion vectors which convey the relative position shift between the current block and the prediction block are explicitly signalled in the bitstream. In this paper, we propose a highly efficient inter prediction scheme by introducing a new methodology based on the virtual reference frame, which is effectively generated with a deep neural network such that the motion data does not need to be explicitly signalled. In particular, the high quality virtual reference frame is generated with the deep learning based frame rate up-conversion (FRUC) algorithm from two reconstructed bi-prediction frames. Subsequently, a novel CTU level coding mode, termed as direct virtual reference frame (DVRF) mode, is proposed to adaptively compensate for the current to-be-coded block in the sense of rate-distortion optimization (RDO).
The proposed scheme is integrated into the HM-16.6 software, and experimental results demonstrate significant superiority of the proposed method, which provides more than 3% coding gains on average for HEVC test sequences. --- paper_title: Convolutional Neural Network-Based Invertible Half-Pixel Interpolation Filter for Video Coding paper_content: Fractional-pixel interpolation has been widely used in the modern video coding standards to improve the accuracy of motion compensated prediction. Traditional interpolation filters are designed based on the signal processing theory. However, video signal is non-stationary, making the traditional methods less effective. In this paper, we reveal that the interpolation filter can not only generate the fractional pixels from the integer pixels, but also reconstruct the integer pixels from the fractional ones. This property is called invertibility. Inspired by the invertibility of fractional-pixel interpolation, we propose an end-to-end scheme based on convolutional neural network (CNN) to derive the invertible interpolation filter, termed CNNInvIF. CNNlnvIF does not need the “ground-truth” of fractional pixels for training. Experimental results show that the proposed CNNInvIF can achieve up to 4.6% and on average 2.2% BD-rate reduction than HEVC under the low-delay P configuration. --- paper_title: Enhanced Bi-Prediction With Convolutional Neural Network for High-Efficiency Video Coding paper_content: In this paper, we propose an enhanced bi-prediction scheme based on the convolutional neural network (CNN) to improve the rate-distortion performance in video compression. In contrast to the traditional bi-prediction strategy which computes the linear superposition as the predictive signals with pixel-to-pixel correspondence, the proposed scheme employs CNN to directly infer the predictive signals in a data-driven manner. As such, the predicted blocks are fused in a nonlinear fashion to improve the coding performance. Moreover, the patch-to-patch inference strategy with CNN also improves the prediction accuracy since the patch-level information for the prediction of each individual pixel can be exploited. The proposed enhanced bi-prediction scheme is further incorporated into the high-efficiency video coding standard, and the experimental results exhibit a significant performance improvement under different coding configurations. --- paper_title: Enhanced Motion-Compensated Video Coding With Deep Virtual Reference Frame Generation paper_content: In this paper, we propose an efficient inter prediction scheme by introducing the deep virtual reference frame (VRF), which serves better reference in the temporal redundancy removal process of video coding. In particular, the high quality VRF is generated with the deep learning-based frame rate up conversion (FRUC) algorithm from two reconstructed bi-directional frames, which is subsequently incorporated into the reference list serving as the high quality reference. Moreover, to alleviate the compression artifacts of VRF, we develop a convolutional neural network (CNN)-based enhancement model to further improve its quality. To facilitate better utilization of the VRF, a CTU level coding mode termed as direct virtual reference frame (DVRF) is devised, which achieves better trade-off between compression performance and complexity. 
The proposed scheme is integrated into HM-16.6 and JEM-7.1 software platforms, and the simulation results under random access (RA) configuration demonstrate significant superiority of the proposed method. When adding VRF to RPS, more than 6% average BD-rate gain is achieved for HEVC test sequences on HM-16.6, and 0.8% BD-rate gain is observed based on JEM-7.1 software. Regarding the DVRF mode, 3.6% bitrate saving is achieved on HM-16.6 with the computational complexity effectively reduced. --- paper_title: Gradient-Based Learning Applied to Document Recognition paper_content: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day. --- paper_title: Image quality assessment: from error visibility to structural similarity paper_content: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. --- paper_title: A perceptual quantization strategy for HEVC based on a convolutional neural network trained on natural images paper_content: Fast prediction models of local distortion visibility and local quality can potentially make modern spatiotemporally adaptive coding schemes feasible for real-time applications. In this paper, a fast convolutional-neural-network-based quantization strategy for HEVC is proposed.
Local artifact visibility is predicted via a network trained on data derived from our improved contrast gain control model. The contrast gain control model was trained on our recent database of local distortion visibility in natural scenes [Alam et al. JOV 2014]. Furthermore, a structural facilitation model was proposed to capture effects of recognizable structures on distortion visibility via the contrast gain control model. Our results provide on average 11% improvements in compression efficiency for spatial luma channel of HEVC while requiring almost one hundredth of the computational time of an equivalent gain control model. Our work opens the doors for similar techniques which may work for different forthcoming compression standards. --- paper_title: Neural network-based arithmetic coding of intra prediction modes in HEVC paper_content: In both H.264 and HEVC, context-adaptive binary arithmetic coding (CABAC) is adopted as the entropy coding method. CABAC relies on manually designed binarization processes as well as handcrafted context models, which may restrict the compression efficiency. In this paper, we propose an arithmetic coding strategy by training neural networks, and make preliminary studies on coding of the intra prediction modes in HEVC. Instead of binarization, we propose to directly estimate the probability distribution of the 35 intra prediction modes with the adoption of a multi-level arithmetic codec. Instead of handcrafted context models, we utilize convolutional neural network (CNN) to perform the probability estimation. Simulation results show that our proposed arithmetic coding leads to as high as 9.9% bits saving compared with CABAC. --- paper_title: CNN-based transform index prediction in multiple transforms framework to assist entropy coding paper_content: Recent work in video compression has shown that using multiple 2D transforms instead of a single transform in order to de-correlate residuals provides better compression efficiency. These transforms are tested competitively inside a video encoder and the optimal transform is selected based on the Rate Distortion Optimization (RDO) cost. However, one needs to encode a syntax to indicate the chosen transform per residual block to the decoder for successful reconstruction of the pixels. Conventionally, the transform index is binarized using fixed length coding and a CABAC context model is attached to it. In this work, we provide a novel method that utilizes Convolutional Neural Network to predict the chosen transform index from the quantized coefficient block. The prediction probabilities are used to binarize the index by employing a variable length coding instead of a fixed length coding. Results show that by employing this modified transform index coding scheme inside HEVC, one can achieve up to 0.59% BD-rate gain. --- paper_title: Convolutional Neural Network-Based Synthesized View Quality Enhancement for 3D Video Coding paper_content: The quality of synthesized view plays an important role in the three dimensional (3D) video system. In this paper, to further improve the coding efficiency, a convolutional neural network (CNN) based synthesized view quality enhancement method for 3D High Efficiency Video Coding (HEVC) is proposed. Firstly, the distortion elimination in synthesized view is formulated as an image restoration task with the aim to reconstruct the latent distortion-free synthesized image.
Secondly, the learned CNN models are incorporated into 3D HEVC codec to improve the view synthesis performance for both view synthesis optimization (VSO) and the final synthesized view, where the geometric and compression distortions are considered according to the specific characteristics of synthesized view. Thirdly, a new Lagrange multiplier in the rate-distortion (RD) cost function is derived to adapt the CNN based VSO process to embrace a better 3D video coding performance. Extensive experimental results show that the proposed scheme can efficiently eliminate the artifacts in the synthesized image, and reduce 25.9% and 11.7% bit rate in terms of peak-signal-to-noise ratio (PSNR) and structural similarity (SSIM) index, which significantly outperforms the state-of-the-art methods. --- paper_title: Content-Aware Convolutional Neural Network for In-Loop Filtering in High Efficiency Video Coding paper_content: Recently, convolutional neural network (CNN) has attracted tremendous attention and has achieved great success in many image processing tasks. In this paper, we focus on CNN technology combined with image restoration to facilitate video coding performance and propose the content-aware CNN based in-loop filtering for high-efficiency video coding (HEVC). In particular, we quantitatively analyze the structure of the proposed CNN model from multiple dimensions to make the model interpretable and optimal for CNN-based loop filtering. More specifically, each coding tree unit (CTU) is treated as an independent region for processing, such that the proposed content-aware multimodel filtering mechanism is realized by the restoration of different regions with different CNN models under the guidance of the discriminative network. To adapt the image content, the discriminative neural network is learned to analyze the content characteristics of each region for the adaptive selection of the deep learning model. The CTU level control is also enabled in the sense of rate-distortion optimization. To learn the CNN model, an iterative training method is proposed by simultaneously labeling filter categories at the CTU level and fine-tuning the CNN model parameters. The CNN based in-loop filter is implemented after sample adaptive offset in HEVC, and extensive experiments show that the proposed approach significantly improves the coding performance and achieves up to 10.0% bit-rate reduction. On average, 4.1%, 6.0%, 4.7%, and 6.0% bit-rate reduction can be obtained under all intra, low delay, low delay P, and random access configurations, respectively. --- paper_title: CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression paper_content: Lossy image compression algorithms are pervasively used to reduce the size of images transmitted over the web and recorded on data storage media. However, we pay for their high compression rate with visual artifacts degrading the user experience. Deep convolutional neural networks have become a widespread tool to address high-level computer vision tasks very successfully. Recently, they have found their way into the areas of low-level computer vision and image processing to solve regression problems mostly with relatively shallow networks. We present a novel 12-layer deep convolutional network for image compression artifact suppression with hierarchical skip connections and a multi-scale loss function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an improvement of up to 0.36 dB over the best previous ConvNet result.
We show that a network trained for a specific quality factor (QF) is resilient to the QF used to compress the input image — a single network trained for QF 60 provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76. --- paper_title: Just-Noticeable Difference-Based Perceptual Optimization for JPEG Compression paper_content: The Quantization table in JPEG, which specifies the quantization scale for each discrete cosine transform (DCT) coefficient, plays an important role in image codec optimization. However, the generic quantization table design that is based on the characteristics of human visual system (HVS) cannot adapt to the variations of image content. In this letter, we propose a just-noticeable difference (JND) based quantization table derivation method for JPEG by optimizing the rate-distortion costs for all the frequency bands. To achieve better perceptual quality, the DCT domain JND-based distortion metric is utilized to model the stair distortion perceived by HVS. The rate-distortion cost for each band is derived by estimating the rate with the first-order entropy of quantized coefficients. Subsequently, the optimal quantization table is obtained by minimizing the total rate-distortion costs of all the bands. Extensive experimental results show that the quantization table generated by the proposed method achieves significant bit-rate savings compared with JPEG recommended quantization table and specifically developed quantization tables in terms of both objective and subjective evaluations. --- paper_title: CNN-based in-loop filtering for coding efficiency improvement paper_content: A recent video coding standard, called High Efficiency Video Coding (HEVC), adopts two in-loop filters for coding efficiency improvement where the in-loop filtering is done by a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF helps improve both coding efficiency and subjective quality without signaling any bit to decoder sides while SAO filtering corrects the quantization errors by sending offset values to decoders. In this paper, we first present a new in-loop filtering technique using convolutional neural networks (CNN), called IFCNN, for coding efficiency and subjective visual quality improvement. The IFCNN does not require signaling bits by using the same trained weights in both encoders and decoder. The proposed IFCNN is trained in two different QP ranges: QR1 from QP = 20 to QP = 29; and QR2 from QP = 30 to QP = 39. In testing, the IFCNN trained in QR1 is applied for the encoding/decoding with QP values less than 30 while the IFCNN trained in QR2 is applied for the case of QP values greater than 29. The experiment results show that the proposed IFCNN outperforms the HEVC reference mode (HM) with average 1.9%–2.8% gain in BD-rate for Low Delay configuration, and average 1.6%–2.6% gain in BD-rate for Random Access configuration with IDR period 16. --- paper_title: Nonlocal In-Loop Filter: The Way Toward Next-Generation Video Coding? paper_content: In-loop filtering has emerged as an essential coding tool since H.264/AVC, due to its delicate design, which reduces different kinds of compression artifacts. However, existing in-loop filters rely only on local image correlations, largely ignoring nonlocal similarities. In this article, the authors explore the design philosophy of in-loop filters and discuss their vision for the future of in-loop filter research by examining the potential of nonlocal similarities. 
Specifically, the group-based sparse representation, which jointly exploits an image's local and nonlocal self-similarities, lays a novel and meaningful groundwork for in-loop filter design. Hard- and soft-thresholding filtering operations are applied to derive the sparse parameters that are appropriate for compression artifact reduction. Experimental results show that this in-loop filter design can significantly improve the compression performance of the High Efficiency Video Coding (HEVC) standard, leading us in a new direction for improving compression efficiency. --- paper_title: An efficient deep convolutional neural networks model for compressed image deblocking paper_content: Convolutional neural networks (CNNs) have been widely used in image processing community. Image deblocking is a post-processing strategy, which aims to reduce the visually annoying blocking artifacts that are caused by block-based transform coding at low bit rates. In recent years, CNNs based methods have been proposed to solve this classic image processing problem. In this paper, we present an efficient deep CNNs model for image deblocking. Our model can well alleviate the conflict between bit reduction and quality preservation by taking local small patches into consideration. Our trained model can be used to deblock lossy compressed images with different quality factors. The proposed model can be easily integrated into the existing codecs as a post-processing procedure without changing the codec. Experimental results verify that our proposed model outperforms the state-of-the-art methods in both the objective quality and the perceptual quality. --- paper_title: Low-Rank-Based Nonlocal Adaptive Loop Filter for High-Efficiency Video Compression paper_content: In video coding, the in-loop filtering has emerged as a key module due to its significant improvement on compression performance since H.264/Advanced Video Coding. Existing incorporated in-loop filters in video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing image nonlocal prior knowledge by imposing the low-rank constraint on similar image patches for compression noise reduction. In the filtering process, the reconstructed frame is first divided into image patch groups according to image patch similarity. The proposed in-loop filtering is formulated as an optimization problem with low-rank constraint for every group of image patches independently. It can be efficiently solved by soft-thresholding singular values of the matrix composed of image patches in the same group. To adapt the properties of the input sequences and bit budget, an adaptive threshold derivation model is established for every group of image patches according to the characteristics of compressed image patches, quantization parameters, and coding modes. Moreover, frame-level and largest coding unit-level control flags are signaled to further improve the adaptability from the sense of rate-distortion optimization. The performance of the proposed in-loop filter is analyzed when it collaborates with the existing in-loop filters in High Efficiency Video Coding. Extensive experimental results show that our proposed in-loop filter can further improve the performance of state-of-the-art video coding standard significantly, with up to 16% bit-rate savings.
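The low-rank nonlocal loop filter summarized just above reduces, at its core, to one operation: stack similar patches as columns of a matrix and soft-threshold the singular values of that matrix. The snippet below is an illustrative NumPy sketch of only that step; patch grouping, the QP-adaptive threshold derivation, and the frame/LCU control flags described in the abstract are omitted, and the function name and fixed threshold are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def soft_threshold_singular_values(patch_group: np.ndarray, tau: float) -> np.ndarray:
    """Low-rank filtering of a group of similar patches.

    patch_group: (d, n) matrix whose columns are n vectorized patches.
    tau: soft-threshold on singular values (in practice derived from the
         quantization parameter / an estimate of the compression noise).
    """
    u, s, vt = np.linalg.svd(patch_group, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-thresholding of singular values
    return (u * s_shrunk) @ vt               # low-rank reconstruction of the group

# Toy usage: 16 noisy 8x8 patches assumed to be mutually similar.
rng = np.random.default_rng(0)
clean = np.tile(rng.standard_normal((64, 1)), (1, 16))   # rank-1 "content"
noisy = clean + 0.3 * rng.standard_normal((64, 16))      # stand-in for coding noise
filtered = soft_threshold_singular_values(noisy, tau=4.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(filtered - clean))
```

In a real in-loop filter the filtered patches would be aggregated back into the reconstructed frame and the step would be enabled or disabled per CTU under rate-distortion control, as the abstract describes.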
--- paper_title: S-Net: a scalable convolutional neural network for JPEG compression artifact reduction paper_content: Recent studies have used deep residual convolutional neural networks (CNNs) for JPEG compression artifact reduction. This study proposes a scalable CNN called S-Net. Our approach effectively adjusts the network scale dynamically in a multitask system for real-time operation with little performance loss. It offers a simple and direct technique to evaluate the performance gains obtained with increasing network depth, and it is helpful for removing redundant network layers to maximize the network efficiency. We implement our architecture using the Keras framework with the TensorFlow backend on an NVIDIA K80 GPU server. We train our models on the DIV2K dataset and evaluate their performance on public benchmark datasets. To validate the generality and universality of the proposed method, we created and utilized a new dataset, called WIN143, for over-processed images evaluation. Experimental results indicate that our proposed approach outperforms other CNN-based methods and achieves state-of-the-art performance. --- paper_title: Compression Artifacts Reduction by a Deep Convolutional Network paper_content: Lossy compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restores sharpened images that are accompanied with ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar"easy to hard"idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance than the state-of-the-arts both on the benchmark datasets and the real-world use case (i.e. Twitter). In addition, we show that our method can be applied as pre-processing to facilitate other low-level vision routines when they take compressed images as input. --- paper_title: Spatial-Temporal Residue Network Based In-Loop Filter for Video Coding paper_content: Deep learning has demonstrated tremendous break through in the area of image/video processing. In this paper, a spatial-temporal residue network (STResNet) based in-loop filter is proposed to suppress visual artifacts such as blocking, ringing in video coding. Specifically, the spatial and temporal information is jointly exploited by taking both current block and co-located block in reference frame into consideration during the processing of in-loop filter. The architecture of STResNet only consists of four convolution layers which shows hospitality to memory and coding complexity. Moreover, to fully adapt the input content and improve the performance of the proposed in-loop filter, coding tree unit (CTU) level control flag is applied in the sense of rate-distortion optimization. Extensive experimental results show that our scheme provides up to 5.1% bit-rate reduction compared to the state-of-the-art video coding standard. --- paper_title: Multi-Frame Quality Enhancement for Compressed Video paper_content: The past few years have witnessed great success in applying deep learning to enhance the quality of compressed image/video. 
The existing approaches mainly focus on enhancing the quality of a single frame, ignoring the similarity between consecutive frames. In this paper, we investigate that heavy quality fluctuation exists across compressed video frames, and thus low quality frames can be enhanced using the neighboring high quality frames, seen as Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as a first attempt in this direction. In our approach, we firstly develop a Support Vector Machine (SVM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, in which the non-PQF and its nearest two PQFs are as the input. The MF-CNN compensates motion between the non-PQF and PQFs through the Motion Compensation subnet (MC-subnet). Subsequently, the Quality Enhancement subnet (QE-subnet) reduces compression artifacts of the non-PQF with the help of its nearest PQFs. Finally, the experiments validate the effectiveness and generality of our MFQE approach in advancing the state-of-the-art quality enhancement of compressed video. The code of our MFQE approach is available at https://github.com/ryangBUAA/MFQE.git. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: HEVC Deblocking Filter paper_content: This paper describes the in-loop deblocking filter used in the upcoming High Efficiency Video Coding (HEVC) standard to reduce visible artifacts at block boundaries. The deblocking filter performs detection of the artifacts at the coded block boundaries and attenuates them by applying a selected filter. Compared to the H.264/AVC deblocking filter, the HEVC deblocking filter has lower computational complexity and better parallel processing capabilities while still achieving significant reduction of the visual artifacts. --- paper_title: H.263+: Video Coding at Low Bit Rates paper_content: We discuss the ITU-T H.263+ (or H.263 Version 2) low-bit-rate video coding standard. We first describe, briefly, the H.263 standard including its optional modes. We then address the 12 new negotiable modes of H.263+. Next, we present experimental results for these modes, based on our public-domain implementation (http:ilspmg.ece.ubc.ca). Tradeoffs among compression performance, complexity, and memory requirements for the H.263+ optional modes are discussed. Finally, results for mode combinations are presented. --- paper_title: A Practical Convolutional Neural Network as Loop Filter for Intra Frame paper_content: Loop filters are used in video coding to remove artifacts or improve performance. 
Recent advances in deploying convolutional neural network (CNN) to replace traditional loop filters show large gains but with problems for practical application. First, different model is used for frames encoded with different quantization parameter (QP), respectively. It is expensive for hardware. Second, float points operation in CNN leads to inconsistency between encoding and decoding across different platforms. Third, redundancy within CNN model consumes precious computational resources. This paper proposes a CNN as the loop filter for intra frames and proposes a scheme to solve the above problems. It aims to design a single CNN model with low redundancy to adapt to decoded frames with different qualities and ensure consistency. To adapt to reconstructions with different qualities, both reconstruction and QP are taken as inputs. After training, the obtained model is compressed to reduce redundancy. To ensure consistency, dynamic fixed points (DFP) are adopted in testing CNN. Parameters in the compressed model are first quantized to DFP and then used for inference of CNN. Outputs of each layer in CNN are computed by DFP operations. Experimental results on JEM 7.0 report 3.14%, 5.21%, 6.28% BD-rate savings for luma and two chroma components with all intra configuration when replacing all traditional filters. --- paper_title: Multi-modal/multi-scale convolutional neural network based in-loop filter design for next generation video codec paper_content: In this paper, we propose a novel in-loop filter design for video compression. Our approach aims to replace existing deblocking filter and SAO (Sample Adaptive Offset) of HEVC standard with multi-modal/multi-scale convolutional neural network (MMS-net). The proposed CNN architecture consists of two sub-networks of different scales. An input image is down-sampled first and restored through the lower scale network, then the output image from it is fed into higher scale network concatenated with the original input image. Moreover, to boost the restoration performance, the proposed architecture utilizes information resides in the coded sequence. Specifically, the compression parameters from coding tree units (CTU) are exploited as input to CNN, which helps to alleviate blocking artifacts on the reconstructed images. In the experiments, our method reduces the average BD-rate by 4.55% and 8.5%, respectively, compared with the conventional neural network based approach [1] and HEVC reference software HM16.7 [2] in ‘All Intra — Main’ configuration. --- paper_title: Sample Adaptive Offset in the HEVC Standard paper_content: This paper provides a technical overview of a newly added in-loop filtering technique, sample adaptive offset (SAO), in High Efficiency Video Coding (HEVC). The key idea of SAO is to reduce sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category. The offset of each category is properly calculated at the encoder and explicitly signaled to the decoder for reducing sample distortion effectively, while the classification of each sample is performed at both the encoder and the decoder for saving side information significantly. To achieve low latency of only one coding tree unit (CTU), a CTU-based syntax design is specified to adapt SAO parameters for each CTU. 
A CTU-based optimization algorithm can be used to derive SAO parameters of each CTU, and the SAO parameters of the CTU are interleaved into the slice data. It is reported that SAO achieves on average 3.5% BD-rate reduction and up to 23.5% BD-rate reduction with less than 1% encoding time increase and about 2.5% decoding time increase under common test conditions of HEVC reference software version 8.0. --- paper_title: Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC paper_content: High efficiency video coding (HEVC) standard achieves half bit-rate reduction while keeping the same quality compared with AVC. However, it still cannot satisfy the demand of higher quality in real applications, especially at low bit rates. To further improve the quality of reconstructed frame while reducing the bitrates, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, there are some paths that could allow unimpeded information across several layers. Moreover, there also exists one identity skip connection (shortcut) from the beginning to the end, which is followed by one small convolutional layer. Without conflicting with deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate the real application, we apply the proposed method to I frame, P frame, and B frame, respectively. For obtaining better performance, the entire quantization parameter (QP) range is divided into several QP bands, where a dedicated RHCNN is trained for each QP band. Furthermore, we adopt a progressive training scheme for the RHCNN where the QP band with lower value is used for early training and their weights are used as initial weights for QP band of higher values in a progressive manner. Experimental results demonstrate that the proposed method is able to not only raise the PSNR of reconstructed frame but also prominently reduce the bit-rate compared with HEVC reference software. --- paper_title: Adaptive Loop Filtering for Video Coding paper_content: Adaptive loop filtering for video coding is to minimize the mean square error between original samples and decoded samples by using Wiener-based adaptive filter. The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high resolution videos, local adaptation is used for luma signals by applying different filters to different regions or blocks in a picture. In addition to filter adaptation, filter on/off control at coding tree unit (CTU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture level header called adaptation parameter set, and filter on/off flags of CTUs are interleaved at CTU level in the slice data. This syntax design not only supports picture level optimization but also achieves a low encoding latency. Simulation results show that the ALF can achieve on average 7% bit rate reduction for 25 HD sequences.
The run time increases are 1% and 10% for encoders and decoders, respectively, without special attention to optimization in C++ code. --- paper_title: Learning for Video Compression paper_content: One key challenge to learning-based video compression is that motion predictive coding, a very effective tool for video compression, can hardly be trained into a neural network. In this paper, we propose the concept of PixelMotionCNN (PMCNN) which includes motion extension and hybrid prediction networks. PMCNN can model spatiotemporal coherence to effectively perform predictive coding inside the learning network. On the basis of PMCNN, we further explore a learning-based framework for video compression with additional components of iterative analysis/synthesis and binarization. The experimental results demonstrate the effectiveness of the proposed scheme. Although entropy coding and complex configurations are not employed in this paper, we still demonstrate superior performance compared with MPEG-2 and achieve comparable results with H.264 codec. The proposed learning-based scheme provides a possible new direction to further improve compression efficiency and functionalities of future video coding. --- paper_title: Unsupervised Learning of Video Representations using LSTMs paper_content: We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance. --- paper_title: Full Resolution Image Compression with Recurrent Neural Networks paper_content: This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%-8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. 
As far as we know, this is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding. --- paper_title: Video (language) modeling: a baseline for generative models of natural videos paper_content: We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences. --- paper_title: DeepCoder: A deep neural network based video compression paper_content: Inspired by recent advances in deep learning, we present the DeepCoder — a Convolutional Neural Network (CNN) based video compression framework. We apply separate CNN nets for predictive and residual signals respectively. Scalar quantization and Huffman coding are employed to encode the quantized feature maps (fMaps) into binary stream. We use the fixed 32 × 32 block in this work to demonstrate our ideas, and performance comparison is conducted with the well-known H.264/AVC video coding standard with comparable rate-distortion performance. Here distortion is measured using Structural Similarity (SSIM) because it is more close to perceptual response. --- paper_title: CNN oriented fast PU mode decision for HEVC hardwired intra encoder paper_content: The number of intra prediction modes in High Efficiency Video Coding (HEVC) has been increased up to 35. To the end of alleviating the complexity of intra coding, we bring in the convolution neural network (CNN) to obtain the candidate modes of current PU and adopt the corner detection algorithm to further reduce the candidate modes. The virtues of proposed algorithm include: Firstly, our algorithm skip the rough PU mode decision (RMD) process and get the candidate list from CNN directly. In other words, the computations are relaxed in our algorithm. Secondly, the inputs of CNN in proposed algorithm merely contain the source image pixels and quantization parameter (QP), this feature makes it friendly to high parallel hardwired encoder. As compared with HM-15.0, experiments show that our algorithm decreases the intra coding time by 27.92% while the corresponding BDBR augment is 1.15%. At last but not least, our algorithm possesses a stable coding performance. In specific, for the most sensitive sequence (Class F), our algorithm could save 27.10% intra coding time with 2.01% BDBR increase. --- paper_title: Reducing Complexity of HEVC: A Deep Learning Approach paper_content: High Efficiency Video Coding (HEVC) significantly reduces bit-rates over the preceding H.264 standard but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of coding unit (CU) consumes a large proportion of the HEVC encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). 
Therefore, this paper proposes a deep learning approach to predict the CU partition for reducing the HEVC complexity at both intra- and inter-modes, which is based on convolutional neural network (CNN) and long- and short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for HEVC intra- and inter-modes. This enables deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit (CTU) in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an early-terminated hierarchical LSTM (ETH-LSTM) is proposed to learn the temporal correlation of the CU partition. Then, we combine ETH-LSTM and ETH-CNN to predict the CU partition for reducing the HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing the HEVC complexity at both intra- and inter-modes. --- paper_title: CU Partition Mode Decision for HEVC Hardwired Intra Encoder Using Convolution Neural Network paper_content: The intensive computation of High Efficiency Video Coding (HEVC) engenders challenges for the hardwired encoder in terms of the hardware overhead and the power dissipation. On the other hand, the constraints in hardwired encoder design seriously degrade the efficiency of software oriented fast coding unit (CU) partition mode decision algorithms. A fast algorithm is attributed as VLSI friendly, when it possesses the following properties. First, the maximum complexity of encoding a coding tree unit (CTU) could be reduced. Second, the parallelism of the hardwired encoder should not be deteriorated. Third, the process engine of the fast algorithm must be of low hardware- and power-overhead. In this paper, we devise the convolution neural network based fast algorithm to decrease no less than two CU partition modes in each CTU for full rate-distortion optimization (RDO) processing, thereby reducing the encoder’s hardware complexity. As our algorithm does not depend on the correlations among CU depths or spatially nearby CUs, it is friendly to the parallel processing and does not deteriorate the rhythm of RDO pipelining. Experiments illustrated that, an averaged 61.1% intraencoding time was saved, whereas the Bjontegaard-Delta bit-rate augment is 2.67%. Capitalizing on the optimal arithmetic representation, we developed the high-speed [714 MHz in the worst conditions (125 °C, 0.9 V)] and low-cost (42.5k gate) accelerator for our fast algorithm by using TSMC 65-nm CMOS technology. One accelerator could support HD1080p at 55 frames/s real-time encoding. The corresponding power dissipation was 16.2 mW at 714 MHz. Finally, our accelerator is provided with good scalability. Four accelerators fulfill the throughput requirements of UltraHD-4K at 55 frames/s. --- paper_title: A Joint Compression Scheme of Video Feature Descriptors and Visual Content. paper_content: High-efficiency compression of visual feature descriptors has recently emerged as an active topic due to the rapidly increasing demand in mobile visual retrieval over bandwidth-limited networks. However, transmitting only those feature descriptors may largely restrict its application scale due to the lack of necessary visual content.
To facilitate the wide spread of feature descriptors, a hybrid framework of jointly compressing the feature descriptors and visual content is highly desirable. In this paper, such a content-plus-feature coding scheme is investigated, aiming to shape the next generation of video compression system toward visual retrieval, where the high-efficiency coding of both feature descriptors and visual content can be achieved by exploiting the interactions between each other. On the one hand, visual feature descriptors can achieve compact and efficient representation by taking advantages of the structure and motion information in the compressed video stream. To optimize the retrieval performance, a novel rate-accuracy optimization technique is proposed to accurately estimate the retrieval performance degradation in feature coding. On the other hand, the already compressed feature data can be utilized to further improve the video coding efficiency by applying feature matching-based affine motion compensation. Extensive simulations have shown that the proposed joint compression framework can offer significant bitrate reduction in representing both feature descriptors and video frames, while simultaneously maintaining the state-of-the-art visual retrieval performance. --- paper_title: Joint Rate-Distortion Optimization for Simultaneous Texture and Deep Feature Compression of Facial Images paper_content: The explosion of surveillance cameras in smart cites and the increasing demand of low latency visual analysis have pushed the horizon from the traditional image/video compression to feature compression. Due to the recent advances of face recognition, we investigate the simultaneous compression of facial images and deep features, which is demonstrated to be beneficial in terms of the whole system performance including visual quality and recognition accuracy. Herein, we propose the Texture-Feature-Quality-Index (TFQI) to measure the ultimate utility of the facial images based on automatic visual analysis and monitoring. Furthermore, based on TFQI, a bit allocation scheme is proposed to optimally allocate the given bits for images and features, such that the overall coding performance can be optimized. The proposed scheme is validated using the standard face verification benchmark, Labeled Faces in the Wild (LFW). Better rate-TFQI and rate-Accuracy performance compared to the traditional texture coding can be achieved, especially in the scenario of low bit-rate coding. --- paper_title: Complexity-Constrained H.264 Video Encoding paper_content: In this paper, a joint complexity-distortion optimization approach is proposed for real-time H.264 video encoding under the power-constrained environment. The power consumption is first translated to the encoding computation costs measured by the number of scaled computation units consumed by basic operations. The solved problem is then specified to be the allocation and utilization of the computational resources. A computation allocation model (CAM) with virtual computation buffers is proposed to optimally allocate the computational resources to each video frame. In particular, the proposed CAM and the traditional hypothetical reference decoder model have the same temporal phase in operations. Further, to fully utilize the allocated computational resources, complexity-configurable motion estimation (CAME) and complexity-configurable mode decision (CAMD) algorithms are proposed for H.264 video encoding. 
In particular, the CAME is performed to select the path of motion search at the frame level, and the CAMD is performed to select the order of mode search at the macroblock level. Based on the hierarchical adjusting approach, the adaptive allocation of computational resources and the fine scalability of complexity control can be achieved. ---
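The complexity-constrained encoding abstract directly above describes a computation allocation model (CAM) with virtual computation buffers that operates analogously to the hypothetical reference decoder. The paper's own model is not reproduced here, so the snippet below is only a generic leaky-bucket-style sketch of the idea: a per-frame computation budget drawn from a buffer refilled at a fixed rate, with a coarser motion-search configuration chosen whenever the buffer runs low. All names, thresholds, and unit costs are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ComputationBuffer:
    """Leaky-bucket style budget of 'computation units' for frame encoding."""
    capacity: float          # maximum buffered computation units
    refill: float            # units added per frame (average power budget)
    level: float = 0.0

    def allocate(self) -> float:
        """Refill, then grant a per-frame budget proportional to fullness."""
        self.level = min(self.capacity, self.level + self.refill)
        return 0.5 * self.level          # keep slack for harder frames

    def consume(self, used: float) -> None:
        self.level = max(0.0, self.level - used)

def pick_search_mode(budget: float) -> str:
    """Choose a motion-search configuration that fits the granted budget."""
    if budget > 80:
        return "full_search"
    if budget > 40:
        return "diamond_search"
    return "small_diamond_search"

buf = ComputationBuffer(capacity=200, refill=60)
cost = {"full_search": 100, "diamond_search": 45, "small_diamond_search": 20}
for frame in range(5):
    budget = buf.allocate()
    mode = pick_search_mode(budget)
    used = min(budget, cost[mode])
    buf.consume(used)
    print(f"frame {frame}: budget={budget:.0f}, mode={mode}, used={used}")
```

A hierarchical scheme like the one in the abstract would make a similar decision again at the macroblock level (mode-search order), which this sketch does not attempt to model.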
Title: Image and Video Compression with Neural Networks: A Review
Section 1: INTRODUCTION OF NEURAL NETWORK AND IMAGE/VIDEO COMPRESSION
Description 1: Introduction to the basic concepts and development history of neural networks, as well as the frameworks and basic technique development for block-based image coding and hybrid video coding framework.
Section 2: Neural Network
Description 2: Discussion on the invention, structure, and functionality of neural networks, focusing on interdisciplinary research between neuroscience and mathematics.
Section 3: Image and Video Compression
Description 3: Core techniques in image and video compression, including transform and prediction, and comparison of various coding frameworks such as JPEG, MPEG-2, H.264/AVC, and HEVC.
Section 4: PROGRESS OF NEURAL NETWORK BASED IMAGE COMPRESSION
Description 4: Review on the development of neural network-based image compression techniques, including detailed descriptions of methods based on Multilayer Perceptron (MLP), Random Neural Networks, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Generative Adversarial Networks (GAN).
Section 5: Multi-layer Perceptron based Image Coding
Description 5: Exploration of the historical development and application of MLP in image compression, focusing on representative methods and structures.
Section 6: Random Neural Network based Image Coding
Description 6: Description of Random Neural Networks and their specific applications and adaptations for image compression.
Section 7: Convolutional Neural Network based Coding
Description 7: Detailed discussion on the application of CNNs in lossy image compression, and the techniques employed to overcome the challenges related to quantization and rate-distortion optimization.
Section 8: Recurrent Neural Network based Coding
Description 8: Review of RNN architectures with memory for image compression, highlighting progressive and spatially adaptive compression methods.
Section 9: Generative Adversarial Network based Coding
Description 9: Examination of GAN-based methods for image compression, focusing on how adversarial loss improves the subjective quality of decoded images.
Section 10: ADVANCEMENT OF VIDEO CODING WITH NEURAL NETWORKS
Description 10: An overview of improvements in video coding modules, such as intra prediction, inter-prediction, quantization, entropy coding, and loop filtering, through the integration of various deep learning models.
Section 11: Intra Prediction Techniques using Neural Networks
Description 11: Focused discussion on network-based enhancements for intra prediction in HEVC, including specific techniques and performance results.
Section 12: Neural Network based Inter Prediction
Description 12: Exploration of deep learning-based improvements for inter prediction, including motion estimation refinement and fractional-pixel reference generation.
Section 13: Neural Network based Quantization and Entropy Coding for Video Coding
Description 13: Discussion on neural network enhancements for quantization and entropy coding processes in video compression.
Section 14: Neural Network based Loop Filtering
Description 14: Review of CNN-based loop filtering techniques aimed at artifact reduction, including specific model architectures and their coding gains.
Section 15: New Video Coding Frameworks Based on Neural Network
Description 15: Examination of novel video coding frameworks leveraging neural network models, diverging from traditional hybrid frameworks.
Section 16: OPTIMIZATION TECHNIQUES FOR IMAGE AND VIDEO COMPRESSION
Description 16: Overview of optimization techniques for image and video compression leveraging neural networks, including fast mode-decision algorithms.
Section 17: CONCLUSIONS AND OUTLOOK
Description 17: Summarizes the reviewed neural network-based image and video compression techniques, identifies advantages and challenges, and discusses future research directions.
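Nearly every result quoted in the reference list above is reported as a BD-rate (Bjontegaard delta rate) difference between two rate-distortion curves. As a companion to those numbers, here is a small, self-contained sketch of the usual BD-rate computation: fit log-rate as a cubic polynomial of PSNR for both codecs, then integrate the horizontal gap over the overlapping PSNR range. The function name and the sample R-D points are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Approximate Bjontegaard delta rate (%) of 'test' versus 'anchor'.

    Each argument is a list of four R-D points (rate in kbps, PSNR in dB).
    A negative result means the test codec needs fewer bits on average.
    """
    log_ra, log_rt = np.log10(rate_anchor), np.log10(rate_test)

    # Cubic fit of log-rate as a function of PSNR for both curves.
    p_a = np.polyfit(psnr_anchor, log_ra, 3)
    p_t = np.polyfit(psnr_test, log_rt, 3)

    # Integrate over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)

    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_log_diff - 1.0) * 100.0

# Hypothetical R-D points for an anchor codec and an improved codec.
anchor_rate, anchor_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rate, test_psnr = [950, 1900, 3800, 7600], [34.2, 36.7, 39.2, 41.7]
print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.2f}%")
```

The same fit with the roles of rate and PSNR exchanged yields BD-PSNR; both are reported throughout the abstracts above under common HEVC test conditions.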
Exploring the Attack Surface of Blockchain: A Systematic Overview
9
--- paper_title: Bitcoin Transaction Graph Analysis paper_content: Bitcoins have recently become an increasingly popular cryptocurrency through which users trade electronically and more anonymously than via traditional electronic transfers. Bitcoin's design keeps all transactions in a public ledger. The sender and receiver for each transaction are identified only by cryptographic public-key ids. This leads to a common misconception that it inherently provides anonymous use. While Bitcoin's presumed anonymity offers new avenues for commerce, several recent studies raise user-privacy concerns. We explore the level of anonymity in the Bitcoin system. Our approach is two-fold: (i) We annotate the public transaction graph by linking bitcoin public keys to "real" people - either definitively or statistically. (ii) We run the annotated graph through our graph-analysis framework to find and summarize activity of both known and unknown users. --- paper_title: Information propagation in the Bitcoin network paper_content: Bitcoin is a digital currency that unlike traditional currencies does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic of inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior. --- paper_title: On the Preliminary Investigation of Selfish Mining Strategy with Multiple Selfish Miners paper_content: Eyal and Sirer's selfish mining strategy has demonstrated that the Bitcoin system is not secure even if 50% of total mining power is held by altruistic miners. Since then, researchers have been investigating either how to improve the efficiency of selfish mining, or how to defend against it, typically in a single selfish miner setting. Yet there is no research on selfish mining strategies concurrently used by multiple miners in the system. The effectiveness of such selfish mining strategies and their required mining power under such a multiple-selfish-miner setting remains unknown. In this paper, a preliminary investigation and our findings on the selfish mining strategy used by multiple miners are reported. In addition, the conventional model of the Bitcoin system is slightly redesigned to tackle its shortcoming: namely, the concurrency of individual mining processes. Although a theoretical analysis of selfish mining under this setting is yet to be established, the current findings based on simulations are promising and of great interest. In particular, our work shows that the lower bound of the power threshold required for selfish mining decreases in proportion to the number of selfish miners. Moreover, there exist Nash equilibria where all selfish miners in the system do not change to an honest mining strategy and simultaneously earn their unfair amount of mining reward, given that they equally possess sufficiently large mining power. Lastly, our new model yields a power threshold for mounting selfish mining slightly greater than that of the conventional model.
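To make the power-threshold discussion above concrete, the sketch below evaluates the single-attacker selfish-mining model of Eyal and Sirer that the abstract refers to: the attacker's relative revenue as a function of its hash-power share alpha and the tie-breaking parameter gamma, and the profitability threshold alpha > (1 - gamma) / (3 - 2*gamma). The closed-form expressions reflect our reading of that model, and the code is only an illustrative calculator, not the multi-attacker simulation studied in the paper.

```python
def selfish_revenue(alpha: float, gamma: float) -> float:
    """Relative revenue of a single selfish miner (Eyal-Sirer model).

    alpha: selfish miner's fraction of total hash power (0 < alpha < 0.5)
    gamma: fraction of honest power mining on the selfish branch during a tie
    """
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

def profit_threshold(gamma: float) -> float:
    """Smallest hash share for which selfish mining beats honest mining."""
    return (1 - gamma) / (3 - 2 * gamma)

if __name__ == "__main__":
    for gamma in (0.0, 0.5, 1.0):
        print(f"gamma={gamma:.1f}  threshold={profit_threshold(gamma):.3f}")
    # Above the threshold, relative revenue exceeds the honest share alpha.
    alpha, gamma = 0.40, 0.5
    print(f"alpha={alpha}, gamma={gamma}: revenue={selfish_revenue(alpha, gamma):.3f}")
```

The abstract's observation that the threshold drops further when several selfish miners act concurrently would require extending this to a multi-attacker state machine, which is exactly what the cited simulation study does.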
--- paper_title: Decentralizing Privacy: Using Blockchain to Protect Personal Data paper_content: The recent increase in reported incidents of surveillance and security breaches compromising users' privacy calls into question the current model, in which third-parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society. --- paper_title: Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts paper_content: Emerging smart contract systems over decentralized cryptocurrencies allow mutually distrustful parties to transact safely without trusted third parties. In the event of contractual breaches or aborts, the decentralized blockchain ensures that honest parties obtain commensurate compensation. Existing systems, however, lack transactional privacy. All transactions, including flow of money between pseudonyms and amount transacted, are exposed on the blockchain. We present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public's view. A Hawk programmer can write a private smart contract in an intuitive manner without having to implement cryptography, and our compiler automatically generates an efficient cryptographic protocol where contractual parties interact with the blockchain, using cryptographic primitives such as zero-knowledge proofs. To formally define and reason about the security of our protocols, we are the first to formalize the blockchain model of cryptography. The formal modeling is of independent interest. We advocate the community to adopt such a formal model when designing applications atop decentralized blockchains. --- paper_title: Double-spending fast payments in bitcoin paper_content: Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (in the order of few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost.
We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments. --- paper_title: A Blockchain-Based Approach Towards Overcoming Financial Fraud in Public Sector Services paper_content: Abstract In financial markets it is common for companies and individuals to invest into foreign companies. To avoid the double taxation of investors on dividend payment – both in the country where the profit is generated as well as the country of residence – most governments have entered into bilateral double taxation treaties, whereby investors can claim a tax refund in the country where the profit is generated. Due to easily forgeable documents and insufficient international exchange of information between tax authorities, investors illegitimately apply for these tax returns causing an estimated damage of 1.8 billion USD, for example, in Denmark alone. This paper assesses the potential of a blockchain database to provide a feasible solution for overcoming this problem against the backdrop of recent advances in the public sector and the unique set of blockchain capacities. Towards this end, we develop and evaluate a blockchain-based prototype system aimed at eliminating this type of tax fraud and increasing transparency regarding the flow of dividends. While the prototype is based on the specific context of the Danish tax authority, we discuss how it can be generalized for tracking international and interorganizational transactions. --- paper_title: Towards Blockchain-Driven, Secure and Transparent Audit Logs paper_content: Audit logs serve as a critical component in the enterprise business systems that are used for auditing, storing, and tracking changes made to the data. However, audit logs are vulnerable to a series of attacks, which enable adversaries to tamper data and corresponding audit logs. In this paper, we present BlockAudit: a scalable and tamper-proof system that leverages the design properties of audit logs and security guarantees of blockchains to enable secure and trustworthy audit logs. Towards that, we construct the design schema of BlockAudit, and outline its operational procedures. We implement our design on Hyperledger and evaluate its performance in terms of latency, network size, and payload size. Our results show that conventional audit logs can seamlessly transition into BlockAudit to achieve higher security, integrity, and fault tolerance. --- paper_title: Enabling Blockchain Innovations with Pegged Sidechains paper_content: Since the introduction of Bitcoin[Nak09] in 2009, and the multiple computer science and electronic cash innovations it brought, there has been great interest in the potential of decentralised cryptocurrencies. At the same time, implementation changes to the consensuscritical parts of Bitcoin must necessarily be handled very conservatively. As a result, Bitcoin has greater difficulty than other Internet protocols in adapting to new demands and accommodating new innovation. We propose a new technology, pegged sidechains, which enables bitcoins and other ledger assets to be transferred between multiple blockchains. 
This gives users access to new and innovative cryptocurrency systems using the assets they already own. By reusing Bitcoin's currency, these systems can more easily interoperate with each other and with Bitcoin, avoiding the liquidity shortages and market fluctuations associated with new currencies. Since sidechains are separate systems, technical and economic innovation is not hindered. Despite bidirectional transferability between Bitcoin and pegged sidechains, they are isolated: in the case of a cryptographic break (or malicious design) in a sidechain, the damage is entirely confined to the sidechain itself. This paper lays out pegged sidechains, their implementation requirements, and the work needed to fully benefit from the future of interconnected blockchains. --- paper_title: A Survey of Attacks on Ethereum Smart Contracts SoK paper_content: Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage. --- paper_title: Formal Verification of Smart Contracts: Short Paper paper_content: Ethereum is a framework for cryptocurrencies which uses blockchain technology to provide an open global computing platform, called the Ethereum Virtual Machine (EVM). EVM executes bytecode on a simple stack machine. Programmers do not usually write EVM code; instead, they can program in a JavaScript-like language, called Solidity, that compiles to bytecode. Since the main purpose of EVM is to execute smart contracts that manage and transfer digital assets (called Ether), security is of paramount importance. However, writing secure smart contracts can be extremely difficult: due to the openness of Ethereum, both programs and pseudonymous users can call into the public methods of other programs, leading to potentially dangerous compositions of trusted and untrusted code. This risk was recently illustrated by an attack on TheDAO contract that exploited subtle details of the EVM semantics to transfer roughly $50M worth of Ether into the control of an attacker. In this paper, we outline a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification. --- paper_title: A Survey of Blockchain Security Issues and Challenges paper_content: Blockchain technology has been one of the most popular topics of recent years; it has already changed lifestyles in some areas through its influence on many businesses and industries, and its impact will continue to be felt in many more. Although blockchain technologies may bring us more reliable and convenient services, the security issues and challenges behind this innovative technique are also an important topic that we need to address.
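Several of the security discussions collected above ultimately rest on the basic tamper-evidence of a hash-linked chain of blocks. The following minimal Python sketch is a toy data structure, not any real wire or consensus format; it only shows why modifying one buried block breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON encoding of the block (illustrative, not a real format).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(tx_batches):
    chain, prev = [], "0" * 64                      # all-zero hash stands in for genesis
    for txs in tx_batches:
        block = {"prev_hash": prev, "transactions": txs}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:              # link to the previous block is broken
            return False
        prev = block_hash(block)
    return True

chain = build_chain([["A->B:5"], ["B->C:2"], ["C->A:1"]])
assert verify_chain(chain)
chain[1]["transactions"][0] = "B->C:200"            # tamper with an already-buried block
assert not verify_chain(chain)                      # every later prev_hash link now fails
```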
--- paper_title: Blindly Signed Contracts: Anonymous On-Blockchain and Off-Blockchain Bitcoin Transactions paper_content: Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user's Bitcoin transactions can be linked to compromise the user's anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin's blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous. --- paper_title: DistArch-SCNet: Blockchain-Based Distributed Architecture with Li-Fi Communication for a Scalable Smart City Network paper_content: The winds of change are blowing toward the multibillion dollar global consumer electronics (CE) industry, which includes companies that are engaged in the manufacturing of smart devices to enable smart connected vehicles, transportation, health-care systems, home automation, and smart industry in the smart-city network. The rapid increase in the number and diversity of smart devices connected to the Internet has given rise to concerns about scalability, efficiency, flexibility, and availability in the existing smart-city network. The imminent energy crisis, radiofrequency spectrum, and constraints on the lifetime of smart devices have also emerged as critical challenges. To address these challenges, this article presents DistArch-SCNet, the efficient, scalable, blockchain-based distributed smart-city network architecture enabled by the light-fidelity (Li-Fi) communication technique. --- paper_title: Blockchain beyond bitcoin paper_content: Blockchain technology has the potential to revolutionize applications and redefine the digital economy. --- paper_title: BlueWallet: The Secure Bitcoin Wallet paper_content: With the increasing popularity of Bitcoin, a digital decentralized currency and payment system, the number of malicious third parties attempting to steal bitcoins has grown substantially. Attackers have stolen bitcoins worth millions of dollars from victims by using malware to gain access to the private keys stored on the victims' computers or smart phones. In order to protect the Bitcoin private keys, we propose the use of a hardware token for the authorization of transactions. We created a proof-of-concept Bitcoin hardware token: BlueWallet. The device communicates using Bluetooth Low Energy and is able to securely sign Bitcoin transactions. The device can also be used as an electronic wallet in combination with a point of sale and serves as an alternative to cash and credit cards. --- paper_title: SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies paper_content: Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions.
We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison. --- paper_title: Hijacking Bitcoin: Routing Attacks on Cryptocurrencies paper_content: As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack few (<100) BGP prefixes to isolate ~50% of the mining power, even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately. --- paper_title: Blockchain technology in healthcare: The revolution starts here paper_content: Blockchain technology has shown its considerable adaptability in recent years as a variety of market sectors sought ways of incorporating its abilities into their operations. While so far most of the focus has been on the financial services industry, several projects in other service related areas such as healthcare show this is beginning to change. Numerous starting points for Blockchain technology in the healthcare industry are the focus of this report.
With examples for public healthcare management, user-oriented medical research and drug counterfeiting in the pharmaceutical sector, this report aims to illustrate possible influences, goals and potentials connected to this disruptive technology. --- paper_title: DistBlockNet: A Distributed Blockchains-Based Secure SDN Architecture for IoT Networks paper_content: The rapid increase in the number and diversity of smart devices connected to the Internet has raised the issues of flexibility, efficiency, availability, security, and scalability within the current IoT network. These issues are caused by key mechanisms being distributed to the IoT network on a large scale, which is why a distributed secure SDN architecture for IoT using the blockchain technique (DistBlockNet) is proposed in this research. It follows the principles required for designing a secure, scalable, and efficient network architecture. The DistBlockNet model of IoT architecture combines the advantages of two emerging technologies: SDN and blockchains technology. In a verifiable manner, blockchains allow us to have a distributed peer-to-peer network where non-confident members can interact with each other without a trusted intermediary. A new scheme for updating a flow rule table using a blockchains technique is proposed to securely verify a version of the flow rule table, validate the flow rule table, and download the latest flow rules table for the IoT forwarding devices. In our proposed architecture, security must automatically adapt to the threat landscape, without administrator needs to review and apply thousands of recommendations and opinions manually. We have evaluated the performance of our proposed model architecture and compared it to the existing model with respect to various metrics. The results of our evaluation show that DistBlockNet is capable of detecting attacks in the IoT network in real time with low performance overheads and satisfying the design principles required for the future IoT network. --- paper_title: Secure Wallet-Assisted Offline Bitcoin Payments with Double-Spender Revocation paper_content: Bitcoin seems to be the most successful cryptocurrency so far given the growing real life deployment and popularity. While Bitcoin requires clients to be online to perform transactions and a certain amount of time to verify them, there are many real life scenarios that demand for offline and immediate payments (e.g., mobile ticketing, vending machines, etc). However, offline payments in Bitcoin raise non-trivial security challenges, as the payee has no means to verify the received coins without having access to the Bitcoin network. Moreover, even online immediate payments are shown to be vulnerable to double-spending attacks. In this paper, we propose the first solution for Bitcoin payments, which enables secure payments with Bitcoin in offline settings and in scenarios where payments need to be immediately accepted. Our approach relies on an offline wallet and deploys several novel security mechanisms to prevent double-spending and to verify the coin validity in offline setting. These mechanisms achieve probabilistic security to guarantee that the attack probability is lower than the desired threshold. We provide a security and risk analysis as well as model security parameters for various adversaries. We further eliminate remaining risks by detection of misbehaving wallets and their revocation. 
We implemented our solution for mobile Android clients and instantiated an offline wallet using a microSD security card. Our implementation demonstrates that smooth integration over a very prevalent platform (Android) is possible, and that offline and online payments can practically co-exist. We also discuss alternative deployment approach for the offline wallet which does not leverage secure hardware, but instead relies on a deposit system managed by the Bitcoin network. --- paper_title: Centrally Banked Cryptocurrencies paper_content: Current cryptocurrencies, starting with Bitcoin, build a decentralized blockchain-based transaction ledger, main- tained through proofs-of-work that also serve to generate a monetary supply. Such decentralization has benefits, such as independence from national political control, but also significant limitations in terms of computational costs and scalability. We introduce RSCoin, a cryptocurrency framework in which central banks maintain complete control over the monetary supply, but rely on a distributed set of authorities, or mintettes, to prevent double-spending. While monetary policy is centralized, RSCoin still provides strong transparency and auditability guarantees. We demonstrate, both theoretically and experimentally, the benefits of a modest degree of centralization, such as the elimination of wasteful hashing and a scalable system for avoiding double- spending attacks. --- paper_title: Decentralized name-based security for content distribution using blockchains paper_content: User, content, and device names as a security primitive have been an attractive approach especially in the context of Information-Centric Networking (ICN) architectures. We leverage Hierarchical Identity Based Encryption (HIBE) to build (content) name-based security mechanisms used for securely distributing content. In contrast to similar approaches, in our system each user maintains his own Private Key Generator used for generating the master secret key and the public system parameters required by the HIBE algorithm. This way our system does not suffer from the key escrow problem, which is inherent in many similar solutions. In order to disseminate the system parameters of a content owner in a fully distributed way, we use blockchains, a distributed, community managed, global list of transactions. --- paper_title: BLOCKBENCH: A Framework for Analyzing Private Blockchains paper_content: Blockchain technologies are taking the world by storm. Public blockchains, such as Bitcoin and Ethereum, enable secure peer-to-peer applications like crypto-currency or smart contracts. Their security and performance are well studied. This paper concerns recent private blockchain systems designed with stronger security (trust) assumption and performance requirement. These systems target and aim to disrupt applications which have so far been implemented on top of database systems, for example banking, finance and trading applications. Multiple platforms for private blockchains are being actively developed and fine tuned. However, there is a clear lack of a systematic framework with which different systems can be analyzed and compared against each other. Such a framework can be used to assess blockchains' viability as another distributed data processing platform, while helping developers to identify bottlenecks and accordingly improve their platforms. In this paper, we first describe BLOCKBENCH, the first evaluation framework for analyzing private blockchains. 
It serves as a fair means of comparison for different platforms and enables deeper understanding of different system design choices. Any private blockchain can be integrated to BLOCKBENCH via simple APIs and benchmarked against workloads that are based on real and synthetic smart contracts. BLOCKBENCH measures overall and component-wise performance in terms of throughput, latency, scalability and fault-tolerance. Next, we use BLOCKBENCH to conduct comprehensive evaluation of three major private blockchains: Ethereum, Parity and Hyperledger Fabric. The results demonstrate that these systems are still far from displacing current database systems in traditional data processing workloads. Furthermore, there are gaps in performance among the three systems which are attributed to the design choices at different layers of the blockchain's software stack. We have released BLOCKBENCH for public use. --- paper_title: Preventing the 51%-Attack: a Stochastic Analysis of Two Phase Proof of Work in Bitcoin paper_content: The security of Bitcoin (a relatively new form of a distributed ledger) is threatened by the formation of large public pools, which form naturally in order to reduce reward variance for individual miners. By introducing a second cryptographic challenge (two phase proof-of-work or 2P-PoW for short), pool operators are forced to either give up their private keys or provide a substantial part of their pool’s mining hashrate which potentially forces pools to become smaller. This document provides a stochastic analysis of the Bitcoin mining protocol extended with 2PPoW, modelled using CTMCs (continuous-time Markov chains). 2P-PoW indeed holds its promises, according to these models. A plot is provided for dierent "strengths" of the second cryptographic challenge, which can be used to select proper values for future implementers. --- paper_title: Blockchain-based efficient privacy preserving and data sharing scheme of content-centric network in 5G paper_content: Now, the authors' life is full of vast amount of information, the era of information has arrived. So the content-centric networks face severe challenges in dealing with a huge range of content requests, bringing protection and sharing concerns of the content. How to protect information in the network efficiently and securely for the upcoming 5G era has become a problem. The authors propose a scheme based on a blockchain to solve the privacy issues in content-centric mobile networks for 5G. The authors implement the mutual trust between content providers and users. Besides, the openness and tamper-resistant of the blockchain ledger ensure the access control and privacy of the provider. With the help of a miner, selected from users, the authors can maintain the public ledger expediently. Also, in return, the authors share the interesting data with low overhead, network delay and congestion, and then achieve green communication. --- paper_title: Double-spending prevention for Bitcoin zero-confirmation transactions paper_content: Zero-confirmation transactions, i.e. transactions that have been broadcast but are still pending to be included in the blockchain, have gained attention in order to enable fast payments in Bitcoin, shortening the time for performing payments. Fast payments are desirable in certain scenarios, for instance, when buying in vending machines, fast food restaurants, or withdrawing from an ATM. 
Despite being quickly propagated through the network, zero-confirmation transactions are not protected against double-spending attacks, since the double-spending protection Bitcoin offers relies on the blockchain and, by definition, such transactions are not yet included in it. In this paper, we propose a double-spending prevention mechanism for Bitcoin zero-confirmation transactions. Our proposal is based on exploiting the flexibility of the Bitcoin scripting language together with a well-known vulnerability of the ECDSA signature scheme to discourage attackers from performing such an attack. --- paper_title: Mining on Someone Else’s Dime: Mitigating Covert Mining Operations in Clouds and Enterprises paper_content: Covert cryptocurrency mining operations are causing notable losses to both cloud providers and enterprises. Increased power consumption resulting from constant CPU and GPU usage from mining, inflated cooling and electricity costs, and wastage of resources that could otherwise benefit legitimate users are some of the factors that contribute to these incurred losses. Affected organizations currently have no way of detecting these covert, and at times illegal miners and often discover the abuse when attackers have already fled and the damage is done. --- paper_title: E-Voting With Blockchain: An E-Voting Protocol with Decentralisation and Voter Privacy paper_content: Technology has positive impacts on many aspects of our social life. Designing a 24 hour globally connected architecture enables ease of access to a variety of resources and services. Furthermore, technology like the Internet has been a fertile ground for innovation and creativity. One such disruptive innovation is blockchain - a keystone of cryptocurrencies. The blockchain technology is presented as a game changer for many of the existing and emerging technologies/services. With its immutability property and decentralised architecture, it is taking centre stage in many services as an equalisation factor to the current parity between consumers and large corporations/governments. One potential application of the blockchain is in e-voting schemes. The objective of such a scheme would be to provide a decentralised architecture to run and support a voting scheme that is open, fair, and independently verifiable. In this paper, we propose a potential new e-voting protocol that utilises the blockchain as a transparent ballot box. The protocol has been designed to adhere to fundamental e-voting properties as well as offer a degree of decentralisation and allow for the voter to change/update their vote (within the permissible voting period). This paper highlights the pros and cons of using blockchain for such a proposal from a practical point view in both development/deployment and usage contexts. Concluding the paper is a potential roadmap for blockchain technology to be able to support complex applications. --- paper_title: A Survey on the Security of Blockchain Systems paper_content: Since its inception, the blockchain technology has shown promising application prospects. From the initial cryptocurrency to the current smart contract, blockchain has been applied to many fields. Although there are some studies on the security and privacy issues of blockchain, there lacks a systematic examination on the security of blockchain systems. In this paper, we conduct a systematic study on the security threats to blockchain and survey the corresponding real attacks by examining popular blockchain systems. 
We also review the security enhancement solutions for blockchain, which could be used in the development of various blockchain systems, and suggest some future directions to stir research efforts into this area. --- paper_title: Chains in Chains - Logic and Challenges of Blockchains in Supply Chains paper_content: Due to the disruptive role of the Bitcoin in the financial sector, both scholars and practitioners are increasingly wondering whether it is possible to replicate the impact of the Blockchain technology in the supply chain context. As a distributed ledger technology characterized by the decentralized consensus, Blockchain is touted by many as the proper platform to collect all the information about supply chains from the producer to the consumer. However, the current technology immaturity and the lack of successful supply chain implementations pave the way for doubt about the disruptive role of this technology in supply chains. To the authors’ knowledge, this work is one of the very first attempts to link the blockchain technology to supply chain and logistics. This paper investigates the state-of-the-art application of blockchain in supply chains, exploring both the literature and the industry initiatives, contributing to the increase of the managerial insight and providing a future research agenda. (Less) --- paper_title: Secure Attribute-Based Signature Scheme With Multiple Authorities for Blockchain in Electronic Health Records Systems paper_content: Electronic Health Records (EHRs) are entirely controlled by hospitals instead of patients, which complicates seeking medical advices from different hospitals. Patients face a critical need to focus on the details of their own healthcare and restore management of their own medical data. The rapid development of blockchain technology promotes population healthcare, including medical records as well as patient-related data. This technology provides patients with comprehensive, immutable records, and access to EHRs free from service providers and treatment websites. In this paper, to guarantee the validity of EHRs encapsulated in blockchain, we present an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there are multiple authorities without a trusted single or central one to generate and distribute public/private keys of the patient, which avoids the escrow problem and conforms to the mode of distributed data storage in the blockchain. By sharing the secret pseudorandom function seeds among authorities, this protocol resists collusion attack out of $N$ from $N-1$ corrupted authorities. Under the assumption of the computational bilinear Diffie-Hellman, we also formally demonstrate that, in terms of the unforgeability and perfect privacy of the attribute-signer, this attribute-based signature scheme is secure in the random oracle model. The comparison shows the efficiency and properties between the proposed method and methods proposed in other studies. --- paper_title: Security Concerns and Issues for Bitcoin paper_content: This paper focuses on the unique characteristics of Bitcoin as a cryptocurrency and the major security issues regarding the mining process and transaction process of Bitcoin. Nowadays, Bitcoin is emerging as the most successful implementation of the concept known as cryptocurrency. 
Bitcoin records its transactions in a public log called the blockchain, and the distributed protocols that maintain the blockchain are responsible for Bitcoin's security. The blockchain is run by participants known as miners. The Bitcoin technology, that is, the protocol and the cryptography, has a strong security track record, and the Bitcoin network is one of the largest distributed computing projects in the world. The security of Bitcoin is therefore a major area of research. The currency may be vulnerable during transactions, and it can also be attacked through its online storage pools and exchanges. Recent research, mainly focused on the Bitcoin protocol, shows that the currency is not fully secure against colluding groups of users who use different attacks to defraud the honest miners of Bitcoin. --- paper_title: Blockchain Technology: Principles and Applications paper_content: This paper expounds the main principles behind blockchain technology and some of its cutting-edge applications. Firstly, we present the core concepts at the heart of the blockchain, and we discuss the potential risks and drawbacks of public distributed ledgers, and the shift toward hybrid solutions. Secondly, we expose the main features of decentralized public ledger platforms. Thirdly, we show why the blockchain is a disruptive and foundational technology, and fourthly, we sketch out a list of important applications, bearing in mind the most recent evolutions. --- paper_title: Bitcoin-NG: A Scalable Blockchain Protocol paper_content: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network. --- paper_title: Merging supply chain and blockchain technologies paper_content: Technology has been playing a major role in our lives. One definition for technology is all the knowledge, products, processes, tools, methods and systems employed in the creation of goods or in providing services. This makes technological innovations raise the competitiveness between organizations that depend on supply chain and logistics in the global market. With increasing competitiveness, new challenges arise due to lack of information and asset traceability. This paper introduces three scenarios for solving these challenges using the Blockchain technology. In this work, Blockchain technology targets two main issues within the supply chain, namely, data transparency and resource sharing.
These issues are reflected into the organizations strategies and plans. --- paper_title: A Blockchain Based System to Ensure Transparency and Reliability in Food Supply Chain paper_content: We propose a blockchain oriented platform to secure storage origin provenance for food data. By exploiting the blockchain distributed and immutable nature the proposed system ensures the supply chain transparency with a view to encourage local region by promoting the smart food tourism and by increasing local economy. Thanks to the decentralized application platforms that makes us able to develop smart contracts, we define and implement a system that works inside the blockchain and guarantees transparency, reliability to all actors of the food supply chain. Food, in fact, is the most direct way to get in touch with a place. The touristic activities related to wine and food consumption and sale in fact influence the choice of a destination and may encourage the purchase of typical food also once tourists are back home to the country of origin. Touristic destinations must therefore be equipped with innovative tools that, in a context of Smart Tourism, guarantee the originality of the products and their traceability. --- paper_title: POSTER: Deterring DDoS Attacks on Blockchain-based Cryptocurrencies through Mempool Optimization paper_content: In this paper, we highlight a new form of distributed denial of service (DDoS) attack that impacts the memory pools of cryptocurrency systems causing massive transaction backlog and higher mining fees. Towards that, we study such an attack on Bitcoin mempools and explore its effects on the mempool size and transaction fees paid by the legitimate users. We also propose countermeasures to contain such an attack. Our countermeasures include fee-based and age-based designs, which optimize the mempool size and help to counter the effects of DDoS attacks. We evaluate our designs using simulations in diverse attack conditions. --- paper_title: Blockchains and Smart Contracts for the Internet of Things paper_content: Motivated by the recent explosion of interest around blockchains, we examine whether they make a good fit for the Internet of Things (IoT) sector. Blockchains allow us to have a distributed peer-to-peer network where non-trusting members can interact with each other without a trusted intermediary, in a verifiable manner. We review how this mechanism works and also look into smart contracts—scripts that reside on the blockchain that allow for the automation of multi-step processes. We then move into the IoT domain, and describe how a blockchain-IoT combination: 1) facilitates the sharing of services and resources leading to the creation of a marketplace of services between devices and 2) allows us to automate in a cryptographically verifiable manner several existing, time-consuming workflows. We also point out certain issues that should be considered before the deployment of a blockchain network in an IoT setting: from transactional privacy to the expected value of the digitized assets traded on the network. Wherever applicable, we identify solutions and workarounds. Our conclusion is that the blockchain-IoT combination is powerful and can cause significant transformations across several industries, paving the way for new business models and novel, distributed applications. 
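The mempool DDoS countermeasures summarized a few entries above combine fee-based and age-based admission filters to keep spam transactions from inflating the transaction backlog. The sketch below is one loose, hypothetical reading of such a policy, not the paper's actual design: the thresholds (MIN_FEE_RATE, MIN_INPUT_AGE, MEMPOOL_CAP) are illustrative placeholders, and "age" is interpreted here as the minimum confirmation count of a transaction's inputs.

```python
import heapq
import time

# Illustrative, hypothetical policy thresholds -- not values taken from the paper above.
MIN_FEE_RATE = 2.0      # minimum fee per byte required for admission
MIN_INPUT_AGE = 3       # minimum confirmations each spent output must have
MEMPOOL_CAP = 50_000    # maximum number of queued transactions

class Mempool:
    """Toy mempool with fee-based admission/eviction and an input-age filter."""

    def __init__(self):
        self._heap = []     # min-heap keyed by fee rate: the cheapest tx is evicted first

    def accept(self, tx):
        """tx: dict with 'txid', 'fee', 'size' and 'input_confirmations' (list of ints)."""
        fee_rate = tx["fee"] / tx["size"]
        if fee_rate < MIN_FEE_RATE:
            return False                            # fee filter: cheap spam is rejected
        if min(tx["input_confirmations"]) < MIN_INPUT_AGE:
            return False                            # age filter: freshly created inputs wait
        heapq.heappush(self._heap, (fee_rate, time.time(), tx["txid"]))
        if len(self._heap) > MEMPOOL_CAP:
            heapq.heappop(self._heap)               # under pressure, drop the cheapest tx
        return True

pool = Mempool()
print(pool.accept({"txid": "a1", "fee": 500, "size": 250, "input_confirmations": [10, 4]}))  # True
print(pool.accept({"txid": "b2", "fee": 100, "size": 250, "input_confirmations": [10]}))     # False
```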
--- paper_title: The Impact of Uncle Rewards on Selfish Mining in Ethereum paper_content: Many of today's crypto currencies use blockchains as decentralized ledgers and secure them with proof of work. In case of a fork of the chain, Bitcoin's rule for achieving consensus is selecting the longest chain and discarding the other chain as stale. It has been demonstrated that this consensus rule has a weakness against selfish mining in which the selfish miner exploits the variance in block generation by partially withholding blocks. In Ethereum, however, under certain conditions stale blocks don't have to be discarded but can be referenced from the main chain as uncle blocks yielding a partial reward. This concept limits the impact of network delays on the expected revenue for miners. But the concept also reduces the risk for a selfish miner to gain no rewards from withholding a freshly minted block. This paper uses a Monte Carlo simulation to quantify the effect of uncle blocks both to the profitability of selfish mining and the blockchain's security in Ethereum (ETH). A brief outlook about a recent Ethereum Classic (ETC) improvement proposal that weighs uncle blocks during the selection of the main chain will be given. --- paper_title: Optimizing Governed Blockchains for Financial Process Authentications paper_content: We propose the formal study of governed blockchains that are owned and controlled by organizations and that neither create cryptocurrencies nor provide any incentives to solvers of cryptographic puzzles. We view such approaches as frameworks in which system parts, such as the cryptographic puzzle, may be instantiated with different technology. Owners of such a blockchain procure puzzle solvers as resources they control, and use a mathematical model to compute optimal parameters for the cryptographic puzzle mechanism or other parts of the blockchain. We illustrate this approach with a use case in which blockchains record hashes of financial process transactions to increase their trustworthiness and that of their audits. For Proof of Work as cryptographic puzzle, we develop a detailed mathematical model to derive MINLP optimization problems for computing optimal Proof of Work configuration parameters that trade off potentially conflicting aspects such as availability, resiliency, security, and cost in this governed setting. We demonstrate the utility of such a mining calculus by solving some instances of this problem. This experimental validation is strengthened by statistical experiments that confirm the validity of random variables used in formulating our mathematical model. We hope that our work may facilitate the creation of domain-specific blockchains for a wide range of applications such as trustworthy information in Internet of Things systems and bespoke improvements of legacy financial services. --- paper_title: FairAccess: a new Blockchain-based access control framework for the Internet of Things paper_content: Security and privacy are huge challenges in Internet of Things (IoT) environments, but unfortunately, the harmonization of the IoT-related standards and protocols is hardly and slowly widespread. In this paper, we propose a new framework for access control in IoT based on the blockchain technology. Our first contribution consists in providing a reference model for our proposed framework within the Objectives, Models, Architecture and Mechanism specification in IoT. 
In addition, we introduce FairAccess as a fully decentralized pseudonymous and privacy preserving authorization management framework that enables users to own and control their data. To implement our model, we use and adapt the blockchain into a decentralized access control manager. Unlike financial bitcoin transactions, FairAccess introduces new types of transactions that are used to grant, get, delegate, and revoke access. As a proof of concept, we establish an initial implementation with a Raspberry PI device and local blockchain. Finally, we discuss some limitations and propose further opportunities. Copyright © 2017 John Wiley & Sons, Ltd. --- paper_title: An Analysis of Attacks on Blockchain Consensus paper_content: We present and validate a novel mathematical model of the blockchain mining process and use it to conduct an economic evaluation of the double-spend attack, which is fundamental to all blockchain systems. Our analysis focuses on the value of transactions that can be secured under a conventional double-spend attack, both with and without a concurrent eclipse attack. Our model quantifies the importance of several factors that determine the attack's success, including confirmation depth, attacker mining power, and any confirmation deadline set by the merchant. In general, the security of a transaction against a double-spend attack increases roughly logarithmically with the depth of the block, made easier by the increasing sum of coin turned-over (between individuals) in the blocks, but more difficult by the increasing proof of work required. In recent blockchain data, we observed a median block turnover value of 6 BTC. Based on this value, a merchant requiring a single confirmation is protected against only attackers that can increase the current mining power by 1% or less. However, similar analysis shows that a merchant that requires a much longer 72 confirmations (~12 hours) will eliminate all potential profit for any double-spend attacker adding mining power less than 40% of the current mining power. --- paper_title: Sprites: Payment Channels that Go Faster than Lightning paper_content: It is well known that Bitcoin, Ethereum, and other blockchain-based cryptocurrencies are facing hurdles in scaling to meet user demand. One of the most promising approaches is to form a network of "off-chain payment channels," which are backed by on-chain currency but support rapid, optimistic transactions and use the blockchain only in case of disputes. We develop a novel construction for payment channels that reduces the worst-case "collateral cost" for off-chain payments. In existing proposals, particularly the Lightning Network, a payment across a path of $\ell$ channels requires locking up collateral for $O(\ell \Delta)$ time, where $\Delta$ is the time to commit an on-chain transaction. Our construction reduces this cost to $O(\ell + \Delta)$. We formalize our construction in the simulation-based security model, and provide an implementation as an Ethereum smart contract. Our construction relies on a general purpose primitive called a "state channel," which is of independent interest.
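The double-spend evaluation cited above ("An Analysis of Attacks on Blockchain Consensus") ties the attack's success to confirmation depth and attacker mining power. For intuition only, the well-known Nakamoto-style catch-up estimate can be computed directly; this is the textbook approximation, not the authors' own economic model, and it ignores the turnover-value and deadline factors the paper adds.

```python
from math import exp, factorial

def double_spend_success(q, z):
    """Nakamoto-style estimate that an attacker with hash share q outruns z confirmations."""
    p = 1.0 - q
    if q >= p:
        return 1.0                                  # a majority attacker eventually wins
    lam = z * q / p                                 # expected attacker blocks while z are mined
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

if __name__ == "__main__":
    for z in (1, 6, 12, 72):
        print(f"q=0.10, z={z:>2}: success probability ~ {double_spend_success(0.10, z):.6f}")
```

Increasing the confirmation depth z drives the success probability down roughly exponentially for a minority attacker, which matches the qualitative conclusion of the analysis above.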
--- paper_title: PBFT vs Proof-of-Authority: Applying the CAP Theorem to Permissioned Blockchain paper_content: Permissioned blockchains are arising as a solution to federate companies prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of their effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experimenting Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performances. First, we derive their functioning including how messages are exchanged, then we weight, by relying on the CAP theorem, consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that PBFT can fit better such scenarios, despite a limited loss in terms of performance. --- paper_title: Blockchain and Scalability paper_content: Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues.
We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry. --- paper_title: Analysis of Difficulty Control in Bitcoin and Proof-of-Work Blockchains paper_content: This paper presents a stochastic model for block arrival times based on the difficulty retargeting rule used in Bitcoin, as well as other proof-of-work blockchains. Unlike some previous work, this paper explicitly models the difficulty target as a random variable which is a function of the previous block arrival times and affecting the block times in the next retargeting period. An explicit marginal distribution is derived for the time between successive blocks (the blocktime), while allowing for randomly changing difficulty. This paper also aims to serve as an introduction to Bitcoin and proof-of-work blockchains for the controls community, focusing on the difficulty retargeting procedure used in Bitcoin. --- paper_title: A Memo on the Proof-of-Stake Mechanism paper_content: We analyze the economic incentives generated by the proof-of-stake mechanism discussed in the Ethereum Casper upgrade proposal. Compared with proof-of-work, proof-of-stake has a different cost structure for attackers. In Budish (2018), three equations characterize the limits of Bitcoin, which has a proof-of-work mechanism. We investigate their counterparts and evaluate the risk of double-spending attack and sabotage attack. We argue that PoS is safer than PoW agaisnt double-spending attack because of the tractability of attackers, which implies a large "stock" cost for the attacker. Compared to a PoW system whose mining equipments are repurposable, PoS is also safer against a sabotage attack. --- paper_title: Consensus in the Age of Blockchains paper_content: The blockchain initially gained traction in 2008 as the technology underlying bitcoin, but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. ::: The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: first protocols based on proof-of-work (PoW), second proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and third hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. 
This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours. --- paper_title: A Survey of Scalability Solutions on Blockchain paper_content: The blockchain is a distributed ledger technology whose main feature is immutability. However, this very feature makes scalability hard to achieve. Nowadays, there are more and more attempts to put applications on blockchains, but practical problems arise, for example, the low number of transactions processed per second (tps). Since solving these problems can increase the performance of the blockchain and reduce its cost, scalability is one of the most important issues in blockchain research; it is known as the scalability issue. In this paper, we analyze the various methods that attempt to solve it, grouped into the following categories: On-chain, Off-chain, Side-chain, Child-chain and Inter-chain. We analyze how the methods in each category work, what their pros and cons are, and finally compare which scalability issues they resolve and what their distinguishing features are. --- paper_title: A Proof-of-Stake protocol for consensus on Bitcoin subchains paper_content: Although the transactions on the Bitcoin blockchain have the main purpose of recording currency transfers, they can also carry a few bytes of metadata. A sequence of transaction metadata forms a subchain of the Bitcoin blockchain, and it can be used to store a tamper-proof execution trace of a smart contract. Except for the trivial case of contracts which admit any trace, in general there may exist inconsistent subchains which represent incorrect contract executions. A crucial issue is how to make it difficult, for an adversary, to subvert the execution of a contract by making its subchain inconsistent. Existing approaches either postulate that subchains are always consistent, or give weak guarantees about their security (for instance, they are susceptible to Sybil attacks). We propose a consensus protocol, based on Proof-of-Stake, that incentivizes nodes to consistently extend the subchain. We empirically evaluate the security of our protocol, and we show how to exploit it as the basis for smart contracts on Bitcoin. --- paper_title: TwinsCoin: A Cryptocurrency via Proof-of-Work and Proof-of-Stake paper_content: We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power. Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain [11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically. We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex.
It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available. --- paper_title: Performance Modeling of PBFT Consensus Process for Permissioned Blockchain Network (Hyperledger Fabric) paper_content: While Blockchain network brings tremendous benefits, there are concerns whether their performance would match up with the mainstream IT systems. This paper aims to investigate whether the consensus process using Practical Byzantine Fault Tolerance (PBFT) could be a performance bottleneck for networks with a large number of peers. We model the PBFT consensus process using Stochastic Reward Nets (SRN) to compute the mean time to complete consensus for networks up to 100 peers. We create a blockchain network using IBM Bluemix service, running a production-grade IoT application and use the data to parameterize and validate our models. We also conduct sensitivity analysis over a variety of system parameters and examine the performance of larger networks --- paper_title: Towards characterizing blockchain-based cryptocurrencies for highly-accurate predictions paper_content: In 2017, the Blockchain-based crypto currency market witnessed enormous growth. Bitcoin, the leading crypto currency, reached all-time highs many times over the year leading to speculations to explain the trend in its growth. In this paper, we study Bitcoin and explore features in its network that explain its price hikes. We gather data and analyze user and network activity that highly impact Bitcoin price. We monitor the change in the activities over time and relate them to economic theories. We identify key network features that determine the demand and supply dynamics of a crypto currency. Finally, we use machine learning methods to construct models that predict Bitcoin price. Our regression model predicts Bitcoin price with 99.4% accuracy and 0.0113 root mean squared error (RMSE). --- paper_title: Is Blockchain Hashing an Effective Method for Electronic Governance? paper_content: Governments across the world are testing different uses of the blockchain for the delivery of their public services. Blockchain hashing - or the insertion of data in the blockchain - is one of the potential applications of the blockchain in this space. With this method, users can apply special scripts to add their data to blockchain transactions, ensuring both immutability and publicity. Blockchain hashing also secures the integrity of the original data stored on central governmental databases. The paper starts by analysing possible scenarios of hashing on the blockchain and assesses in which cases it may work and in which it is less likely to add value to a public administration. Second, the paper also compares this method with traditional digital signatures using PKI (Public Key Infrastructure) and discusses standardisation in each domain. Third, it also addresses issues related to concepts such as distributed ledger technology and permissioned blockchains. Finally, it raises the question of whether blockchain hashing is an effective solution for electronic governance, and concludes that its value is controversial, even if it is improved by PKI and other security measures. In this regard, we claim that governments need first to identify pain points in governance, and then consider the trade-offs of the blockchain as a potential solution versus other alternatives. 
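Blockchain hashing for electronic governance, as discussed in the entry directly above, amounts to anchoring a document digest in a transaction and re-checking it later against the original file. A minimal sketch of the client-side steps is given below; the JSON payload format and the OP_RETURN-style embedding are assumptions made for illustration, and actually broadcasting the payload to a blockchain is outside the scope of this sketch.

```python
import hashlib
import json
import time

def digest(path):
    """SHA-256 of a file's contents -- the value a registry would anchor on-chain."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def anchor_payload(path):
    """Small record that would be embedded in a transaction's data field
    (e.g., an OP_RETURN-style output); broadcasting it is not shown here."""
    return json.dumps({"sha256": digest(path), "anchored_at": int(time.time())})

def verify(path, payload):
    """Re-hash the document and compare it with the digest recovered from the chain."""
    return json.loads(payload)["sha256"] == digest(path)
```

As the entry above argues, this only proves that a given file existed unchanged at anchoring time; whether that property alone justifies a blockchain over a signed, PKI-backed registry is exactly the trade-off the paper questions.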
--- paper_title: Secure Scheme Against Compromised Hash in Proof-of-Work Blockchain paper_content: Blockchain is built on the basis of peer-to-peer network, cryptography and consensus mechanism over a distributed environment. The underlying cryptography in blockchain, such as hash algorithm and digital signature scheme, is used to guarantee the security of blockchain. However, past experience showed that cryptographic primitives do not last forever along with increasing computational power and advanced cryptanalysis. Therefore, it is crucial to investigate the issue that the underlying cryptography in blockchain is compromised. --- paper_title: Blockchain Consensus Protocols in the Wild (Keynote Talk) paper_content: A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. --- paper_title: AsicBoost - A Speedup for Bitcoin Mining paper_content: AsicBoost is a method to speed up Bitcoin mining by a factor of approximately 20%. The performance gain is achieved through a high-level optimization of the Bitcoin mining algorithm which allows for drastic reduction in gate count on the mining chip. AsicBoost is applicable to all types of mining hardware and chip designs. This paper presents the idea behind the method and describes the information flow in implementations of AsicBoost. --- paper_title: Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin paper_content: In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack. The reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and it is usable up to four times more often per pool than in BWH attack. 
When considering multiple pools - the current state of the Bitcoin network - the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from practicality issues, unlike selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools. --- paper_title: From Bitcoin to Bitcoin Cash: a network analysis paper_content: Bitcoins and Blockchain technologies are attracting the attention of different scientific communities. In addition, their widespread industrial applications and the continuous introduction of cryptocurrencies are also stimulating the attention of the public opinion. The underlying structure of these technologies constitutes one of their core concepts. In particular, they are based on peer-to-peer networks. Accordingly, all nodes lie at the same level, so that there is no place for privileged actors as, for instance, banking institutions in classical financial networks. In this work, we perform a preliminary investigation on two kinds of network, i.e. the Bitcoin network and the Bitcoin Cash network. Notably, we analyze their global structure and we try to evaluate whether they exhibit a small-world behavior. Results suggest that the principle known as 'fittest-gets-richer', combined with a continuous increase in connections, might constitute the mechanism leading these networks to reach their current structure. Moreover, further observations open the way to new investigations in this direction. --- paper_title: PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake paper_content: A peer-to-peer crypto-currency design derived from Satoshi Nakamoto's Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design, proof-of-work mainly provides initial minting and is largely non-essential in the long run. The security level of the network is not dependent on energy consumption in the long term, thus providing an energy-efficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and generated by each node via a hashing scheme bearing similarity to Bitcoin's but over a limited search space. Block chain history and transaction settlement are further protected by a centrally broadcasted checkpoint mechanism. --- paper_title: Thunderella: Blockchains with Optimistic Instant Confirmation paper_content: State machine replication, or "consensus", is a central abstraction for distributed systems where a set of nodes seek to agree on an ever-growing, linearly-ordered log. In this paper, we propose a practical new paradigm called Thunderella for achieving state machine replication by combining a fast, asynchronous path with a (slow) synchronous "fall-back" path (which only gets executed if something goes wrong); as a consequence, we get simple state machine replications that essentially are as robust as the best synchronous protocols, yet "optimistically" (if a super majority of the players are honest), the protocol "instantly" confirms transactions.
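The PPCoin entry above describes a proof-of-stake condition in which security comes from coin age rather than hashing power. The following sketch illustrates one way such a coin-age-weighted target check can be written; it deliberately ignores Peercoin's real kernel format, stake modifiers, and checkpointing, and all field names and constants are assumptions made for illustration:

```python
# Hedged sketch of a coin-age-weighted proof-of-stake check in the spirit of the
# PPCoin entry above: the difficulty target is scaled by stake * coin_age, so a
# node holding older and larger stake satisfies the inequality more easily.
# This is NOT Peercoin's actual kernel protocol; fields and constants are made up.
import hashlib

BASE_TARGET = 2 ** 240  # illustrative base difficulty target

def stake_kernel_hash(prev_block_hash: bytes, utxo_id: bytes, timestamp: int) -> int:
    data = prev_block_hash + utxo_id + timestamp.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def is_valid_stake(prev_block_hash: bytes, utxo_id: bytes, timestamp: int,
                   stake_value: int, coin_age_days: int) -> bool:
    # Larger stake and older coins enlarge the effective target.
    effective_target = BASE_TARGET * stake_value * max(coin_age_days, 1)
    return stake_kernel_hash(prev_block_hash, utxo_id, timestamp) < effective_target

# A staker scans timestamps (a limited search space, unlike PoW's open-ended nonce search).
print(any(is_valid_stake(b"\x00" * 32, b"utxo-1", t, stake_value=500, coin_age_days=30)
          for t in range(1_700_000_000, 1_700_000_600)))
```

Because the target is scaled by stake and coin age, a qualifying node needs only a small, bounded search over timestamps rather than an open-ended proof-of-work search, which is what makes the scheme energy-efficient.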
--- paper_title: LinBFT: Linear-Communication Byzantine Fault Tolerance for Public Blockchains paper_content: This paper presents LinBFT, a novel Byzantine fault tolerance (BFT) protocol for blockchain systems that achieves amortized O(n) communication volume per block under reasonable conditions (where n is the number of participants), while satisfying deterministic guarantees on safety and liveness. This significantly improves previous results, which either incur quadratic communication complexity or only satisfy safety in a probabilistic sense. LinBFT is based on the popular PBFT protocol, and cuts down its $O(n^4)$ complexity with three tricks, each by $O(n)$: linear view change, threshold signatures, and verifiable random functions. All three are known, i.e., the solutions are right in front of our eyes, and yet LinBFT is the first $O(n)$ solution with deterministic security guarantees. Further, LinBFT also addresses issues that are specific to permission-less, public blockchain systems, such as anonymous participants without a public-key infrastructure, proof-of-stake with slashing, rotating leader, and a dynamic participant set. In addition, LinBFT contains no proof-of-work module, reaches consensus for every block, and tolerates changing honesty of the participants for different blocks. --- paper_title: Bitcoin's Growing Energy Problem paper_content: The electricity that is expended in the process of mining Bitcoin has become a topic of heavy debate over the past few years. It is a process that makes Bitcoin extremely energy-hungry by design, as the currency requires a huge amount of hash calculations for its ultimate goal of processing financial transactions without intermediaries (peer-to-peer). The primary fuel for each of these calculations is electricity. The Bitcoin network can be estimated to consume at least 2.55 gigawatts of electricity currently, and potentially 7.67 gigawatts in the future, making it comparable with countries such as Ireland (3.1 gigawatts) and Austria (8.2 gigawatts). Economic models tell us that Bitcoin's electricity consumption will gravitate toward the latter number. A look at Bitcoin miner production estimates suggests that this number could already be reached in 2018. --- paper_title: A Blockchain-Based Traceable Certification System paper_content: In recent years, product records have become more common for merchandise sold in retail stores, but the record systems used today cannot assure a product's quality after it has been transported through the whole supply chain. During transportation, merchandise may be damaged accidentally or its condition may change, yet such events are not recorded because records are maintained predominantly by manufacturers. In the second-hand market, product records may also be tampered with and verification is weak, so inexperienced buyers cannot distinguish counterfeits because the records are untrustworthy and outdated. By borrowing concepts from Bitcoin, the advantages of the blockchain can be applied to the product record system: its characteristics of decentralization, openness, and immutability can improve the system. To achieve this goal, product ownership is introduced and smart contracts are embedded to further enhance the product record system. --- paper_title: Scaling Byzantine Consensus: A Broad Analysis paper_content: Blockchains and distributed ledger technology (DLT) that rely on Proof-of-Work (PoW) typically show limited performance.
Several recent approaches incorporate Byzantine fault-tolerant (BFT) consensus protocols in their DLT design as Byzantine consensus allows for increased performance and energy efficiency, as well as it offers proven liveness and safety properties. While there has been a broad variety of research on BFT consensus protocols over the last decades, those protocols were originally not intended to scale for a large number of nodes. Thus, the quest for scalable BFT consensus was initiated with the emerging research interest in DLT. In this paper, we first provide a broad analysis of various optimization techniques and approaches used in recent protocols to scale Byzantine consensus for large environments such as BFT blockchain infrastructures. We then present an overview of both efforts and assumptions made by existing protocols and compare their solutions. --- paper_title: Stake-Bleeding Attacks on Proof-of-Stake Blockchains paper_content: We describe a general attack on proof-of-stake (PoS) blockchains without checkpointing. Our attack leverages transaction fees, the ability to treat transactions "out of context," and the standard longest chain rule to completely dominate a blockchain. The attack grows in power with the number of honest transactions and the stake held by the adversary, and can be launched by an adversary controlling any constant fraction of the stake. With the present statistical profile of blockchain protocols, the attack can be launched given a few years of prior blockchain operation; hence it is within the realm of feasibility for PoS protocols. Most importantly, it demonstrates how closely transaction fees and rewards are coupled with the security properties of PoS protocols. More broadly, our attack must be reflected and countered in any future PoS design that avoids checkpointing, as well as any effort to remove checkpointing from existing protocols. We describe several mechanisms for protecting against the attack that include context-sensitivity of transactions and chain density statistics. --- paper_title: Can We Afford Integrity by Proof-of-Work? Scenarios Inspired by the Bitcoin Currency paper_content: Proof-of-Work (PoW), a well-known principle to ration resource access in client-server relations, is about to experience a renaissance as a mechanism to protect the integrity of a global state in distributed transaction systems under decentralized control. Most prominently, the Bitcoin cryptographic currency protocol leverages PoW to (1) prevent double spending and (2) establish scarcity, two essential properties of any electronic currency. This chapter asks the important question whether this approach is generally viable. Citing actual data, it provides a first cut of an answer by estimating the resource requirements, in terms of operating cost and ecological footprint, of a suitably dimensioned PoW infrastructure and comparing them to three attack scenarios. The analysis is inspired by Bitcoin, but generalizes to potential successors, which fix Bitcoin’s technical and economic teething troubles discussed in the literature. --- paper_title: Consensus in the Age of Blockchains paper_content: The blockchain initially gained traction in 2008 as the technology underlying bitcoin, but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. 
As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: first protocols based on proof-of-work (PoW), second proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and third hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours. --- paper_title: Centrally Banked Cryptocurrencies paper_content: Current cryptocurrencies, starting with Bitcoin, build a decentralized blockchain-based transaction ledger, maintained through proofs-of-work that also serve to generate a monetary supply. Such decentralization has benefits, such as independence from national political control, but also significant limitations in terms of computational costs and scalability. We introduce RSCoin, a cryptocurrency framework in which central banks maintain complete control over the monetary supply, but rely on a distributed set of authorities, or mintettes, to prevent double-spending. While monetary policy is centralized, RSCoin still provides strong transparency and auditability guarantees. We demonstrate, both theoretically and experimentally, the benefits of a modest degree of centralization, such as the elimination of wasteful hashing and a scalable system for avoiding double-spending attacks. --- paper_title: Smart Contracts Make Bitcoin Mining Pools Vulnerable paper_content: Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin's computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. Smart contracts enforce the malicious party's payments, and therefore miners need neither trust the attacker's intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit). --- paper_title: Majority Is Not Enough: Bitcoin Mining Is Vulnerable paper_content: The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed.
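The "Majority Is Not Enough" entry above (and the selfish-mining entries that follow) argue that a colluding pool can earn more than its fair share of rewards by withholding blocks. As a hedged illustration, the snippet below evaluates the closed-form relative-revenue expression and profitability threshold commonly attributed to Eyal and Sirer's SM1 analysis; the formulas are reproduced from that literature as an assumption, not quoted from the abstracts here:

```python
# Sketch of the selfish-mining (SM1) revenue analysis referenced above.
# alpha: selfish pool's share of total hash power.
# gamma: fraction of honest miners that mine on the selfish block during a tie.
def selfish_relative_revenue(alpha: float, gamma: float) -> float:
    """Selfish pool's share of all block rewards under the SM1 strategy."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

def profitability_threshold(gamma: float) -> float:
    """Smallest alpha for which selfish mining beats honest mining."""
    return (1 - gamma) / (3 - 2 * gamma)

for gamma in (0.0, 0.5, 1.0):
    a = 0.33
    print(f"gamma={gamma}: threshold={profitability_threshold(gamma):.3f}, "
          f"revenue share at alpha={a}: {selfish_relative_revenue(a, gamma):.3f}")
```

With gamma = 0 the threshold evaluates to 1/3, and with gamma = 1 it drops to 0, which matches the qualitative claim in the entries above that even a minority pool may profit from deviating.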
--- paper_title: On the Preliminary Investigation of Selfish Mining Strategy with Multiple Selfish Miners paper_content: Eyal and Sirer's selfish mining strategy has demonstrated that the Bitcoin system is not secure even if 50% of the total mining power is held by altruistic miners. Since then, researchers have been investigating either how to improve the efficiency of selfish mining or how to defend against it, typically in a single-selfish-miner setting. Yet there is no research on selfish mining strategies concurrently used by multiple miners in the system. The effectiveness of such selfish mining strategies and the mining power they require in such a multiple-selfish-miner setting remain unknown. In this paper, we report a preliminary investigation and our findings on selfish mining strategies used by multiple miners. In addition, the conventional model of the Bitcoin system is slightly redesigned to tackle one of its shortcomings: namely, the concurrency of individual mining processes. Although a theoretical analysis of selfish mining under this setting is yet to be established, the current findings based on simulations are promising and of great interest. In particular, our work shows that the lower bound on the power threshold required for a selfish mining strategy decreases in proportion to the number of selfish miners. Moreover, there exist Nash equilibria in which all selfish miners in the system do not switch to an honest mining strategy and simultaneously earn an unfair amount of mining reward, given that they equally possess sufficiently large mining power. Lastly, our new model yields a power threshold for mounting a selfish mining strategy slightly greater than the one from the conventional model. --- paper_title: Stubborn Mining: Generalizing Selfish Mining and Combining with an Eclipse Attack paper_content: Selfish mining, originally discovered by Eyal et al. [9], is a well-known attack where a selfish miner, under certain conditions, can gain a disproportionate share of reward by deviating from the honest behavior. In this paper, we expand the mining strategy space to include novel "stubborn" strategies that, for a large range of parameters, earn the miner more revenue. Consequently, we show that the selfish mining attack is not (in general) optimal. Further, we show how a miner can further amplify its gain by non-trivially composing mining attacks with network-level eclipse attacks. We show, surprisingly, that given the attacker's best strategy, in some cases victims of an eclipse attack can actually benefit from being eclipsed! --- paper_title: Majority Is Not Enough: Bitcoin Mining Is Vulnerable paper_content: The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. --- paper_title: Theoretical Bitcoin Attacks with less than Half of the Computational Power (draft) paper_content: A widespread security claim of the Bitcoin system, presented in the original Bitcoin white-paper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically flawed assumptions.
In the paper we analyze two kinds of attacks based on two theoretical flaws: the Block Discarding Attack and the Difficulty Raising Attack. We argue that the current theoretical limit of the attacker's fraction of total computational power essential for the security of the system is in a sense not $\frac{1}{2}$ but a bit less than $\frac{1}{4}$, and outline proposals for protocol change that can raise this limit to be as close to $\frac{1}{2}$ as we want. The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently thought of and analyzed by both the author of this paper and the authors of a recently published pre-print. We thus focus on the major differences in our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the first time. --- paper_title: Preventing the 51%-Attack: a Stochastic Analysis of Two Phase Proof of Work in Bitcoin paper_content: The security of Bitcoin (a relatively new form of a distributed ledger) is threatened by the formation of large public pools, which form naturally in order to reduce reward variance for individual miners. By introducing a second cryptographic challenge (two phase proof-of-work or 2P-PoW for short), pool operators are forced to either give up their private keys or provide a substantial part of their pool's mining hashrate, which potentially forces pools to become smaller. This document provides a stochastic analysis of the Bitcoin mining protocol extended with 2P-PoW, modelled using CTMCs (continuous-time Markov chains). 2P-PoW indeed holds its promises, according to these models. A plot is provided for different "strengths" of the second cryptographic challenge, which can be used to select proper values for future implementers. --- paper_title: Eclipse attacks on Bitcoin's peer-to-peer network paper_content: We present eclipse attacks on bitcoin's peer-to-peer network. Our attack allows an adversary controlling a sufficient number of IP addresses to monopolize all connections to and from a victim bitcoin node. The attacker can then exploit the victim for attacks on bitcoin's mining and consensus system, including N-confirmation double spending, selfish mining, and adversarial forks in the blockchain. We take a detailed look at bitcoin's peer-to-peer network, and quantify the resources involved in our attack via probabilistic analysis, Monte Carlo simulations, measurements and experiments with live bitcoin nodes. Finally, we present countermeasures, inspired by botnet architectures, that are designed to raise the bar for eclipse attacks while preserving the openness and decentralization of bitcoin's current network architecture. --- paper_title: On inferring autonomous system relationships in the internet paper_content: The Internet consists of a rapidly increasing number of hosts interconnected by constantly evolving networks of links and routers. Interdomain routing in the Internet is coordinated by the Border Gateway Protocol (BGP). The BGP allows each autonomous system (AS) to choose its own administrative policy in selecting routes and propagating reachability information to others. These routing policies are constrained by the contractual commercial agreements between administrative domains. For example, an AS sets its policy so that it does not provide transit services between its providers. Such policies imply that AS relationships are an important aspect of the Internet structure.
We propose an augmented AS graph representation that classifies AS relationships into customer-provider, peering, and sibling relationships. We classify the types of routes that can appear in BGP routing tables based on the relationships between the ASs in the path and present heuristic algorithms that infer AS relationships from BGP routing tables. The algorithms are tested on publicly available BGP routing tables. We verify our inference results with AT&T internal information on its relationship with neighboring ASs. As much as 99.1% of our inference results are confirmed by the AT&T internal information. We also verify our inferred sibling relationships with the information acquired from the WHOIS lookup service. More than half of our inferred sibling-to-sibling relationships are confirmed by the WHOIS lookup service. To the best of our knowledge, there has been no publicly available information about AS relationships and this is the first attempt in understanding and inferring AS relationships in the Internet. We show evidence that some routing table entries stem from router misconfigurations. --- paper_title: Hijacking Bitcoin: Routing Attacks on Cryptocurrencies paper_content: As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack few (<100) BGP prefixes to isolate ~50% of the mining power---even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately. --- paper_title: Stubborn Mining: Generalizing Selfish Mining and Combining with an Eclipse Attack paper_content: Selfish mining, originally discovered by Eyal et al. [9], is a well-known attack where a selfish miner, under certain conditions, can gain a disproportionate share of reward by deviating from the honest behavior. In this paper, we expand the mining strategy space to include novel "stubborn" strategies that, for a large range of parameters, earn the miner more revenue.
Consequently, we show that the selfish mining attack is not (in general) optimal. Further, we show how a miner can further amplify its gain by non-trivially composing mining attacks with network-level eclipse attacks. We show, surprisingly, that given the attacker's best strategy, in some cases victims of an eclipse attack can actually benefit from being eclipsed! --- paper_title: Improving routing in large networks inside autonomous system paper_content: The Internet can be seen as the interconnection of many different large networks. Administrators use an interior gateway protocol for routing inside these large networks. Managing routing issues in the network is a challenging task for administrators, although they have a choice of protocols ranging from a primitive routing protocol such as the Routing Information Protocol (version 1) to advanced routing protocols such as Open Shortest Path First or Intermediate System-to-Intermediate System (IS-IS). Irrespective of the above-mentioned protocols, the issue of scalability troubles administrators. They let their networks grow to the point where they become unmanageable. Our analytic study, supported by simulations, shows that these large networks become manageable and that the overall performance of routing and forwarding improves if route reflectors are used to segment large networks inside autonomous systems. --- paper_title: An Adversary-Centric Behavior Modeling of DDoS Attacks paper_content: Distributed Denial of Service (DDoS) attacks are some of the most persistent threats on the Internet today. The evolution of DDoS attacks calls for an in-depth analysis of those attacks. A better understanding of the attackers' behavior can provide insights to unveil patterns and strategies utilized by attackers. The prior art on the attackers' behavior analysis often falls short in two aspects: it assumes that adversaries are static, and makes certain simplifying assumptions on their behavior, which often are not supported by real attack data. In this paper, we take a data-driven approach to designing and validating three DDoS attack models from temporal (e.g., attack magnitudes), spatial (e.g., attacker origin), and spatiotemporal (e.g., attack inter-launching time) perspectives. We design these models based on the analysis of traces consisting of more than 50,000 verified DDoS attacks from industrial mitigation operations. Each model is also validated by testing its effectiveness in accurately predicting future DDoS attacks. Comparisons against simple intuitive models further show that our models can more accurately capture the essential features of DDoS attacks. --- paper_title: Empirical Analysis of Denial-of-Service Attacks in the Bitcoin Ecosystem paper_content: We present an empirical investigation into the prevalence and impact of distributed denial-of-service (DDoS) attacks on operators in the Bitcoin economy. To that end, we gather and analyze posts mentioning "DDoS" on the popular Bitcoin forum bitcointalk.org. Starting from around 3 000 different posts made between May 2011 and October 2013, we document 142 unique DDoS attacks on 40 Bitcoin services. We find that 7 % of all known operators have been attacked, but that currency exchanges, mining pools, gambling operators, eWallets, and financial services are much more likely to be attacked than other services. Not coincidentally, we find currency exchanges and mining pools are much more likely to have DDoS protection such as CloudFlare, Incapsula, or Amazon Cloud.
We show that those services that have been attacked are more than three times as likely to buy anti-DDoS services as operators who have not been attacked. We find that big mining pools (those with historical hashrate shares of at least 5 %) are much more likely to be DDoSed than small pools. We investigate Mt. Gox as a case study for DDoS attacks on currency exchanges and find a disproportionate amount of DDoS reports made during the large spike in trading volume and exchange rates in spring 2013. We conclude by outlining future opportunities for researching DDoS attacks on Bitcoin. --- paper_title: POSTER: Deterring DDoS Attacks on Blockchain-based Cryptocurrencies through Mempool Optimization paper_content: In this paper, we highlight a new form of distributed denial of service (DDoS) attack that impacts the memory pools of cryptocurrency systems causing massive transaction backlog and higher mining fees. Towards that, we study such an attack on Bitcoin mempools and explore its effects on the mempool size and transaction fees paid by the legitimate users. We also propose countermeasures to contain such an attack. Our countermeasures include fee-based and age-based designs, which optimize the mempool size and help to counter the effects of DDoS attacks. We evaluate our designs using simulations in diverse attack conditions. --- paper_title: Practical byzantine fault tolerance and proactive recovery paper_content: Our growing reliance on online services accessible on the Internet demands highly available systems that provide correct service without interruptions. Software bugs, operator mistakes, and malicious attacks are a major cause of service interruptions and they can cause arbitrary behavior, that is, Byzantine faults. This article describes a new replication algorithm, BFT, that can be used to build highly available systems that tolerate Byzantine faults. BFT can be used in practice to implement real services: it performs well, it is safe in asynchronous environments such as the Internet, it incorporates mechanisms to defend against Byzantine-faulty clients, and it recovers replicas proactively. The recovery mechanism allows the algorithm to tolerate any number of faults over the lifetime of the system provided fewer than 1/3 of the replicas become faulty within a small window of vulnerability. BFT has been implemented as a generic program library with a simple interface. We used the library to implement the first Byzantine-fault-tolerant NFS file system, BFS. The BFT library and BFS perform well because the library incorporates several important optimizations, the most important of which is the use of symmetric cryptography to authenticate messages. The performance results show that BFS performs 2% faster to 24% slower than production implementations of the NFS protocol that are not replicated. This supports our claim that the BFT library can be used to build practical systems that tolerate Byzantine faults. --- paper_title: Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin paper_content: In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools.
The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack. The reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and it is usable up to four times more often per pool than in BWH attack. When considering multiple pools - the current state of the Bitcoin network - the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from practicality issues, unlike selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools. --- paper_title: Security Implications of Blockchain Cloud with Analysis of Block Withholding Attack paper_content: The blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public and distributed peer-to-peer ledger capability benefits cloud computing services which require functions such as, assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism allows to build a tamper-proof environment, where transactions on any digital assets are verified by set of authentic participants or miners. With use of strong cryptographic methods, blocks of transactions are chained together to enable immutability on the records. However, achieving consensus demands computational power from the miners in exchange of handsome reward. Therefore, greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability in providing assured data provenance in cloud and present vulnerabilities in blockchain cloud. We model the block withholding (BWH) attack in a blockchain cloud considering distinct pool reward mechanisms. BWH attack provides rogue miner ample resources in the blockchain cloud for disrupting honest miners' mining efforts, which was verified through simulations. --- paper_title: On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining paper_content: Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. 
As a case study, we analyze the Bitcoin crypto currency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the "block withholding attack". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy; that is, in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete. --- paper_title: Dynamic Practical Byzantine Fault Tolerance paper_content: This paper describes a novel Byzantine fault tolerant protocol that allows replicas to join and exit dynamically. With the astonishing success of cryptocurrencies, people attach great importance to "blockchain" and robust Byzantine fault tolerant (BFT) protocols for consensus. Among the conventional wisdom, the Practical Byzantine Fault Tolerance (PBFT) protocol, proposed by Castro and Liskov in 1999, occupies an important position. Although PBFT has many advantages, it has fatal disadvantages. Firstly, it works in a completely enclosed environment, where users who want to add or remove any node must stop the whole system. Secondly, although PBFT guarantees liveness and safety if at most $\lfloor \frac{n-1}{3} \rfloor$ out of a total of $n$ replicas are faulty, it takes no measures to deal with these ineffective or malicious replicas, which is harmful to the system and will eventually cause it to crash. These drawbacks are unbearable in practice. In order to solve them, we present an alternative, Dynamic PBFT. --- paper_title: Tampering with the Delivery of Blocks and Transactions in Bitcoin paper_content: Given the increasing adoption of Bitcoin, the number of transactions and the block sizes within the system are only expected to increase. To sustain its correct operation in spite of its ever-increasing use, Bitcoin implements a number of necessary optimizations and scalability measures. These measures limit the amount of information broadcast in the system to the minimum necessary. In this paper, we show that current scalability measures adopted by Bitcoin come at odds with the security of the system. More specifically, we show that an adversary can exploit these measures in order to effectively delay the propagation of transactions and blocks to specific nodes for a considerable amount of time---without causing a network partitioning in the system. Notice that this attack alters the information received by Bitcoin nodes, and modifies their views of the ledger state. Namely, we show that this allows the adversary to considerably increase its mining advantage in the network, and to double-spend transactions in spite of the current countermeasures adopted by Bitcoin. Based on our results, we propose a number of countermeasures in order to enhance the security of Bitcoin without deteriorating its scalability.
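The Dynamic PBFT entry above quotes the classical bound that at most ⌊(n−1)/3⌋ of n replicas may be faulty, and the PBFT entries that follow rely on the same arithmetic. A small sketch of that quorum arithmetic (the 2f+1 quorum size is standard PBFT background, not a detail taken from these abstracts):

```python
# Quorum arithmetic behind the floor((n-1)/3) bound quoted in the Dynamic PBFT
# entry above. The 2f+1 quorum size is the standard PBFT figure, stated here as
# background knowledge rather than drawn from that abstract.
def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

def pbft_quorum(n: int) -> int:
    """Matching prepare/commit messages a replica needs: 2f + 1."""
    return 2 * max_byzantine_faults(n) + 1

for n in (4, 7, 10, 100):
    f = max_byzantine_faults(n)
    print(f"n={n:>3}: tolerates f={f} faults, quorum size={pbft_quorum(n)}")
# e.g., n=4 -> f=1, quorum=3; n=100 -> f=33, quorum=67
```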
--- paper_title: Practical byzantine fault tolerance paper_content: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantinefault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS. --- paper_title: Resource-Efficient Byzantine Fault Tolerance paper_content: One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: $3f+1$ replicas are required to tolerate only $f$ faults. Recent works have been able to reduce the minimum number of replicas to $2f+1$ by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents Resource-efficient Byzantine Fault Tolerance ( ReBFT ), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping $f$ replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply ReBFT to two existing BFT systems: PBFT and MinBFT. --- paper_title: Analysis of Bitcoin Pooled Mining Reward Systems paper_content: In this paper we describe the various scoring systems used to calculate rewards of participants in Bitcoin pooled mining, explain the problems each were designed to solve and analyze their respective advantages and disadvantages. --- paper_title: Increased block size and Bitcoin blockchain dynamics paper_content: Bitcoin is a peer to peer electronic payment system where payment transactions are stored in a data structure named the blockchain which is maintained by a community of participants. The Bitcoin Core protocol limits blocks to 1 MB in size. Each block contains at most some 4,000 transactions. Blocks are added to the blockchain on average every 10 minutes, therefore the transaction rate is limited to some 7 transactions per second (TPS). This is much less than the transaction rate offered by competing financial transaction processing systems. The Bitcoin TPS can be increased by increasing the block size and/or by decreasing the block discovery interval. 
Both of these interventions will increase the end-to-end block transmission delay, which in turn will increase the probability that different participants momentarily record different versions of the blockchain, so that the consensus protocol will discard an increasing number of blocks. The net effect is that the real increase in the TPS is not proportional to the increase (decrease) in the block size (block discovery rate). Our simulation experiments show that large block sizes, if accompanied by large end-to-end block transmission delays, give rise to the frequent appearance of inconsistent blockchain copies, to the detriment of the TPS. We present a simulation analysis of Bitcoin-Next Generation where blocks (keyblocks) stripped of transactions propagate rapidly through the peer-to-peer network. Once a keyblock is mined, only the miner of the keyblock is entitled to broadcast small microblocks of transactions until the next keyblock is mined and another miner is selected to broadcast microblocks. Initial simulation experiments show that Bitcoin-NG can sustain substantially larger transaction rates than Bitcoin Core. --- paper_title: Majority Is Not Enough: Bitcoin Mining Is Vulnerable paper_content: The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. --- paper_title: Efficient Byzantine Fault-Tolerance paper_content: We present two asynchronous Byzantine fault-tolerant state machine replication (BFT) algorithms, which improve previous algorithms in terms of several metrics. First, they require only 2f+1 replicas, instead of the usual 3f+1. Second, the trusted service in which this reduction of replicas is based is quite simple, making a verified implementation straightforward (and even feasible using commercial trusted hardware). Third, in nice executions the two algorithms run in the minimum number of communication steps for nonspeculative and speculative algorithms, respectively, four and three steps. Besides the obvious benefits in terms of cost, resilience and management complexity-fewer replicas to tolerate a certain number of faults-our algorithms are simpler than previous ones, being closer to crash fault-tolerant replication algorithms. The performance evaluation shows that, even with the trusted component access overhead, they can have better throughput than Castro and Liskov's PBFT, and better latency in networks with nonnegligible communication delays. --- paper_title: Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin paper_content: In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. 
In another attack called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack. The reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and it is usable up to four times more often per pool than a BWH attack. When considering multiple pools - the current state of the Bitcoin network - the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from practicality issues, unlike selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools. --- paper_title: ZeroBlock: Preventing Selfish Mining in Bitcoin paper_content: Bitcoin was recently introduced as a peer-to-peer electronic currency in order to facilitate transactions outside the traditional financial system. The core of Bitcoin, the Blockchain, is the history of the transactions in the system maintained by all nodes as a distributed shared register. New blocks in the Blockchain contain the last transactions in the system and are added by nodes (miners) after a block mining process that consists in solving a resource-consuming proof-of-work (cryptographic puzzle). The reward is a motivation for the mining process but could also be an incentive for attacks such as selfish mining. In this paper we propose a solution for one of the major problems in Bitcoin: selfish mining, or the block withholding attack. This attack is conducted by adversarial or selfish nodes in order to either earn undue rewards or waste the computational power of honest nodes. Contrary to recent solutions, our solution, ZeroBlock, prevents block withholding using a technique free of forgeable timestamps. Moreover, we show that our solution is also compliant with node churn. --- paper_title: Nonoutsourceable Scratch-Off Puzzles to Discourage Bitcoin Mining Coalitions paper_content: An implicit goal of Bitcoin's reward structure is to diffuse network influence over a diverse, decentralized population of individual participants. Indeed, Bitcoin's security claims rely on no single entity wielding a sufficiently large portion of the network's overall computational power. Unfortunately, rather than participating independently, most Bitcoin miners join coalitions called mining pools in which a central pool administrator largely directs the pool's activity, leading to a consolidation of power. Recently, the largest mining pool has accounted for more than half of the network's total mining capacity. Relatedly, "hosted mining" service providers offer their clients the benefit of economies-of-scale, tempting them away from independent participation. We argue that the prevalence of mining coalitions is due to a limitation of the Bitcoin proof-of-work puzzle -- specifically, that it affords an effective mechanism for enforcing cooperation in a coalition. We present several definitions and constructions for "nonoutsourceable" puzzles that thwart such enforcement mechanisms, thereby deterring coalitions.
We also provide an implementation and benchmark results for our schemes to show they are practical. --- paper_title: Security Implications of Blockchain Cloud with Analysis of Block Withholding Attack paper_content: The blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public and distributed peer-to-peer ledger capability benefits cloud computing services which require functions such as, assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism allows to build a tamper-proof environment, where transactions on any digital assets are verified by set of authentic participants or miners. With use of strong cryptographic methods, blocks of transactions are chained together to enable immutability on the records. However, achieving consensus demands computational power from the miners in exchange of handsome reward. Therefore, greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability in providing assured data provenance in cloud and present vulnerabilities in blockchain cloud. We model the block withholding (BWH) attack in a blockchain cloud considering distinct pool reward mechanisms. BWH attack provides rogue miner ample resources in the blockchain cloud for disrupting honest miners' mining efforts, which was verified through simulations. --- paper_title: Bitcoin Block Withholding Attack: Analysis and Mitigation paper_content: We address two problems: first, we study a variant of block withholding (BWH) attack in Bitcoins and second, we propose solutions to prevent all existing types of BWH attacks in Bitcoins. We analyze the strategies of a selfish Bitcoin miner who in connivance with one pool attacks another pool and receives reward from the former mining pool for attacking the latter. We name this attack as “sponsored block withholding attack.” We present detailed quantitative analysis of the monetary incentive that a selfish miner can earn by adopting this strategy under different scenarios. We prove that under certain conditions, the attacker can maximize her revenue by adopting some strategies and by utilizing her computing power wisely. We also show that an attacker may use this strategy for attacking both the pools for earning higher amount of incentives. More importantly, we present a strategy that can effectively counter block withholding attack in any mining pool. First, we propose a generic scheme that uses cryptographic commitment schemes to counter BWH attack. Then, we suggest an alternative implementation of the same scheme using hash function. Our scheme protects a pool from rogue miners as well as rogue pool administrators. The scheme and its variant defend against BWH attack by making it impossible for the miners to distinguish between a partial proof of work and a complete proof of work. The scheme is so designed that the administrator cannot cheat on the entire pool. The scheme can be implemented by making minor changes to existing Bitcoin protocol. We also analyze the security of the scheme. --- paper_title: Hijacking Bitcoin: Routing Attacks on Cryptocurrencies paper_content: As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. 
While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack few (<100) BGP prefixes to isolate ~50% of the mining power---even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately. --- paper_title: Optimal Selfish Mining Strategies in Bitcoin paper_content: The Bitcoin protocol requires nodes to quickly distribute newly created blocks. Strong nodes can, however, gain higher payoffs by withholding blocks they create and selectively postponing their publication. The existence of such selfish mining attacks was first reported by Eyal and Sirer, who have demonstrated a specific deviation from the standard protocol (a strategy that we name SM1). --- paper_title: On Subversive Miner Strategies and Block Withholding Attack in Bitcoin Digital Currency paper_content: Bitcoin is a "crypto currency", a decentralized electronic payment scheme based on cryptography. The Bitcoin economy grows at an incredibly fast rate and is now worth some 10 billion dollars. Bitcoin mining is an activity which consists of creating (minting) the new coins which are later put into circulation. Miners spend electricity on solving cryptographic puzzles and they are also gatekeepers which validate the Bitcoin transactions of other people. Miners are expected to be honest and have some incentives to behave well. However, in this paper we look at miner strategies with particular attention paid to subversive and dishonest strategies, or those which could put Bitcoin and its reputation in danger. We study in detail several recent attacks in which dishonest miners obtain a higher reward than their relative contribution to the network. In particular we revisit the concept of block withholding attacks and propose a new concrete and practical block withholding attack which we show to maximize the advantage gained by rogue miners. RECENT EVENTS: it seems that the attack was recently executed, see Section XI-A.
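Several of the entries above ("On Power Splitting Games", "On Subversive Miner Strategies") and the block-withholding notes that follow analyze how an attacker should split its power between honest mining and infiltrating a victim pool. The sketch below uses a simplified single-attacker revenue model in that spirit; the model structure and the chosen numbers are assumptions of this illustration, not formulas quoted from the papers:

```python
# Simplified block-withholding (BWH) revenue model: the attacker owns power
# `alpha` of a network normalized to 1, mines honestly with alpha - s, and
# submits only partial proofs-of-work with the infiltrating share s inside a
# victim pool of honest power `p`. Withheld power finds no blocks, so the
# effective block-producing power shrinks to 1 - s.
def bwh_revenue(alpha: float, p: float, s: float) -> float:
    assert 0 <= s <= alpha and alpha + p <= 1
    effective = 1 - s                      # total power that still finds blocks
    direct = (alpha - s) / effective       # attacker's own honest mining income
    pool_income = p / effective            # victim pool's block revenue
    pool_share = s / (p + s) if p + s else 0.0  # attacker's pro-rata share of it
    return direct + pool_income * pool_share

alpha, p = 0.2, 0.3  # illustrative attacker and victim-pool power
best_s = max((i / 1000 * alpha for i in range(1001)),
             key=lambda s: bwh_revenue(alpha, p, s))
print(f"honest revenue: {alpha:.3f}")
print(f"best withheld share s*={best_s:.3f}, revenue: {bwh_revenue(alpha, p, best_s):.4f}")
```

For these illustrative parameters the optimal withheld share is strictly positive, matching the qualitative claim in the "On Power Splitting Games" entry that block withholding is well-incentivized in the long run.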
--- paper_title: Preventing the 51%-Attack: a Stochastic Analysis of Two Phase Proof of Work in Bitcoin paper_content: The security of Bitcoin (a relatively new form of a distributed ledger) is threatened by the formation of large public pools, which form naturally in order to reduce reward variance for individual miners. By introducing a second cryptographic challenge (two phase proof-of-work or 2P-PoW for short), pool operators are forced to either give up their private keys or provide a substantial part of their pool’s mining hashrate which potentially forces pools to become smaller. This document provides a stochastic analysis of the Bitcoin mining protocol extended with 2P-PoW, modelled using CTMCs (continuous-time Markov chains). 2P-PoW indeed holds its promises, according to these models. A plot is provided for different "strengths" of the second cryptographic challenge, which can be used to select proper values for future implementers. --- paper_title: Yet Another Note on Block Withholding Attack on Bitcoin Mining Pools paper_content: In this paper we provide a short quantitative analysis of Bitcoin Block Withholding (BWH) Attack. In this study, we investigate the incentive earned by a miner who either independently or at the diktat of a separate mining pool launches Block Withholding attack on a target mining pool. The victim pool shares its earned revenue with the rogue attacker. We investigate the property revenue function of the attacker and find parameters that could maximize the gain of the attacker. We then propose a new concept that we call “special reward”. This special rewarding scheme is aimed at discouraging the attackers by granting additional incentive to a miner who actually finds a block. A BWH attacker who never submits a valid block to the pool will be deprived from this special reward and her gain will be less than her expectation. Depending upon the actual monetary value of the special reward a pool can significantly reduce the revenue of a BWH attacker and thus can even ward off the threat of an attack. --- paper_title: Scalable Byzantine Consensus via Hardware-assisted Secret Sharing paper_content: The surging interest in blockchain technology has revitalized the search for effective Byzantine consensus schemes. In particular, the blockchain community has been looking for ways to effectively integrate traditional Byzantine fault-tolerant (BFT) protocols into a blockchain consensus layer allowing various financial institutions to securely agree on the order of transactions. However, existing BFT protocols can only scale to tens of nodes due to their $O(n^2)$ message complexity. In this paper, we propose FastBFT, a fast and scalable BFT protocol. At the heart of FastBFT is a novel message aggregation technique that combines hardware-based trusted execution environments (TEEs) with lightweight secret sharing. Combining this technique with several other optimizations (i.e., optimistic execution, tree topology and failure detection), FastBFT achieves low latency and high throughput even for large scale networks. Via systematic analysis and experiments, we demonstrate that FastBFT has better scalability and performance than previous BFT protocols. --- paper_title: Game-Theoretic Analysis of DDoS Attacks Against Bitcoin Mining Pools paper_content: One of the unique features of the digital currency Bitcoin is that new cash is introduced by so-called miners carrying out resource-intensive proof-of-work operations. 
To increase their chances of obtaining freshly minted bitcoins, miners typically join pools to collaborate on the computations. However, intense competition among mining pools has recently manifested in two ways. Miners may invest in additional computing resources to increase the likelihood of winning the next mining race. But, at times, a more sinister tactic is also employed: a mining pool may trigger a costly distributed denial-of-service (DDoS) attack to lower the expected success outlook of a competing mining pool. We explore the trade-off between these strategies with a series of game-theoretical models of competition between two pools of varying sizes. We consider differences in costs of investment and attack, as well as uncertainty over whether a DDoS attack will succeed. By characterizing the game’s equilibria, we can draw a number of conclusions. In particular, we find that pools have a greater incentive to attack large pools than small ones. We also observe that larger mining pools have a greater incentive to attack than smaller ones. --- paper_title: Countering Selfish Mining in Blockchains paper_content: Selfish mining is a well known vulnerability in blockchains exploited by miners to steal block rewards. In this paper, we explore a new form of selfish mining attack that guarantees high rewards with low cost. We show the feasibility of this attack facilitated by recent developments in blockchain technology opening new attack avenues. By outlining the limitations of existing countermeasures, we highlight a need for new defense strategies to counter this attack, and leverage key system parameters in blockchain applications to propose an algorithm that enforces fair mining. We use the expected transaction confirmation height and block publishing height to detect selfish mining behavior and develop a network-wide defense mechanism to disincentivize selfish miners. Our design involves a simple modifications to transactions’ data structure in order to obtain a “truth state” used to catch the selfish miners and prevent honest miners from losing block rewards. --- paper_title: Incentive Compatibility of Bitcoin Mining Pool Reward Functions paper_content: In this paper we introduce a game-theoretic model for reward functions in Bitcoin mining pools. Our model consists only of an unordered history of reported shares and gives participating miners the strategy choices of either reporting or delaying when they discover a share or full solution. We defined a precise condition for incentive compatibility to ensure miners strategy choices optimize the welfare of the pool as a whole. With this definition we show that proportional mining rewards are not incentive compatible in this model. We introduce and analyze a novel reward function which is incentive compatible in this model. Finally we show that the popular reward function pay-per-last-N-shares is also incentive compatible in a more general model. --- paper_title: POSTER: Deterring DDoS Attacks on Blockchain-based Cryptocurrencies through Mempool Optimization paper_content: In this paper, we highlight a new form of distributed denial of service (DDoS) attack that impacts the memory pools of cryptocurrency systems causing massive transaction backlog and higher mining fees. Towards that, we study such an attack on Bitcoin mempools and explore its effects on the mempool size and transaction fees paid by the legitimate users. We also propose countermeasures to contain such an attack. 
Our countermeasures include fee-based and age-based designs, which optimize the mempool size and help to counter the effects of DDoS attacks. We evaluate our designs using simulations in diverse attack conditions. --- paper_title: Bitcoin Transaction Graph Analysis paper_content: Bitcoins have recently become an increasingly popular cryptocurrency through which users trade electronically and more anonymously than via traditional electronic transfers. Bitcoin's design keeps all transactions in a public ledger. The sender and receiver for each transaction are identified only by cryptographic public-key ids. This leads to a common misconception that it inherently provides anonymous use. While Bitcoin's presumed anonymity offers new avenues for commerce, several recent studies raise user-privacy concerns. We explore the level of anonymity in the Bitcoin system. Our approach is two-fold: (i) We annotate the public transaction graph by linking bitcoin public keys to "real" people - either definitively or statistically. (ii) We run the annotated graph through our graph-analysis framework to find and summarize activity of both known and unknown users. --- paper_title: Virtual money laundering: the case of Bitcoin and the Linden dollar paper_content: This paper presents an analysis of the money laundering risks of two virtual currencies, the Linden dollar, the in-world currency of the interactive online environment Second Life, and Bitcoin, an experimental virtual currency that allows for the transfer of value through peer-to-peer software. The paper will demonstrate that although these virtual currencies have money laundering utility, they are currently unsuitable for laundering on a large scale. The paper also considers whether either of these virtual currencies fall under the scope of the Money Laundering Regulations 2007 and draws on similarities with online gambling to suggest a method of incorporating the Linden dollar and Bitcoin within the anti-money laundering framework. --- paper_title: Double-spending fast payments in bitcoin paper_content: Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (in the order of few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. 
Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments. --- paper_title: Misbehavior in Bitcoin: A Study of Double-Spending and Accountability paper_content: Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to resist double-spending through a distributed timestamping service. To ensure the operation and security of Bitcoin, it is essential that all transactions and their order of execution are available to all Bitcoin users. Unavoidably, in such a setting, the security of transactions comes at odds with transaction privacy. Motivated by the fact that transaction confirmation in Bitcoin requires tens of minutes, we analyze the conditions for performing successful double-spending attacks against fast payments in Bitcoin, where the time between the exchange of currency and goods is short (in the order of a minute). We show that unless new detection techniques are integrated in the Bitcoin implementation, double-spending attacks on fast payments succeed with considerable probability and can be mounted at low cost. We propose a new and lightweight countermeasure that enables the detection of double-spending attacks in fast transactions. In light of such misbehavior, accountability becomes crucial. We show that in the specific case of Bitcoin, accountability complements privacy. To illustrate this tension, we provide accountability and privacy definition for Bitcoin, and we investigate analytically and empirically the privacy and accountability provisions in Bitcoin. --- paper_title: AsicBoost - A Speedup for Bitcoin Mining paper_content: AsicBoost is a method to speed up Bitcoin mining by a factor of approximately 20%. The performance gain is achieved through a high-level optimization of the Bitcoin mining algorithm which allows for drastic reduction in gate count on the mining chip. AsicBoost is applicable to all types of mining hardware and chip designs. This paper presents the idea behind the method and describes the information flow in implementations of AsicBoost. --- paper_title: End-to-End Analysis of In-Browser Cryptojacking paper_content: In-browser cryptojacking involves hijacking the CPU power of a website's visitor to perform CPU-intensive cryptocurrency mining, and has been on the rise, with 8500% growth during 2017. While some websites advocate cryptojacking as a replacement for online advertisement, web attackers exploit it to generate revenue by embedding malicious cryptojacking code in highly ranked websites. Motivated by the rise of cryptojacking and the lack of any prior systematic work, we set out to analyze malicious cryptojacking statically and dynamically, and examine the economical basis of cryptojacking as an alternative to advertisement. For our static analysis, we perform content-, currency-, and code-based analyses. Through the content-based analysis, we unveil that cryptojacking is a wide-spread threat targeting a variety of website types. Through a currency-based analysis we highlight affinities between mining platforms and currencies: the majority of cryptojacking websites use Coinhive to mine Monero. Through code-based analysis, we highlight unique code complexity features of cryptojacking scripts, and use them to detect cryptojacking code among benign and other malicious JavaScript code, with an accuracy of 96.4%. 
Through dynamic analysis, we highlight the impact of cryptojacking on system resources, such as CPU and battery consumption (in battery-powered devices); we use the latter to build an analytical model that examines the feasibility of cryptojacking as an alternative to online advertisement, and show a huge negative profit/loss gap, suggesting that the model is impractical. By surveying existing countermeasures and their limitations, we conclude with long-term countermeasures using insights from our analysis. --- paper_title: Mining on Someone Else’s Dime: Mitigating Covert Mining Operations in Clouds and Enterprises paper_content: Covert cryptocurrency mining operations are causing notable losses to both cloud providers and enterprises. Increased power consumption resulting from constant CPU and GPU usage from mining, inflated cooling and electricity costs, and wastage of resources that could otherwise benefit legitimate users are some of the factors that contribute to these incurred losses. Affected organizations currently have no way of detecting these covert, and at times illegal miners and often discover the abuse when attackers have already fled and the damage is done. --- paper_title: Smart contracts: security patterns in the ethereum ecosystem and solidity paper_content: Smart contracts that build up on blockchain technologies are receiving great attention in new business applications and the scientific community, because they allow untrusted parties to manifest contract terms in program code and thus eliminate the need for a trusted third party. The creation process of writing well performing and secure contracts in Ethereum, which is today’s most prominent smart contract platform, is a difficult task. Research on this topic has only recently started in industry and science. Based on an analysis of collected data with Grounded Theory techniques, we have elaborated several common security patterns, which we describe in detail on the basis of Solidity, the dominating programming language for Ethereum. The presented patterns describe solutions to typical security issues and can be applied by Solidity developers to mitigate typical attack scenarios. --- paper_title: On blockchain security and relevant attacks paper_content: The blockchain technology witnessed a wide adoption and a swift growth in recent years. This ingenious distributed peer-to-peer design attracted several businesses and solicited several communities beyond the financial market. There are also multiple use cases built around its ecosystem. However, this backbone introduced a lot of speculation and has been criticized by several researchers. Moreover, the lack of legislations perceived a lot of attention. In this paper, we are concerned in analyzing blockchain networks and their development, focusing on their security challenges. We took a holistic approach to cover the involved mechanisms and the limitations of Bitcoin, Ethereum and Hyperledger networks. We expose also numerous possible attacks and assess some countermeasures to dissuade vulnerabilities on the network. For occasion, we simulated the majority and the re-entrancy attacks. The purpose of this paper is to evaluate Blockchain security summarizing its current state. Thoroughly showing threatening flaws, we are not concerned with favoring any particular blockchain network. --- paper_title: On the Instability of Bitcoin Without the Block Reward paper_content: Bitcoin provides two incentives for miners: block rewards and transaction fees. 
The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies. --- paper_title: Security Services Using Blockchains: A State of the Art Survey paper_content: This paper surveys blockchain-based approaches for several security services. These services include authentication, confidentiality, privacy and access control list, data and resource provenance, and integrity assurance. All these services are critical for the current distributed applications, especially due to the large amount of data being processed over the networks and the use of cloud computing. Authentication ensures that the user is who he/she claims to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy provides the users the ability to control who can access their data. Provenance allows an efficient tracking of the data and resources along with their ownership and utilization over the network. Integrity helps in verifying that the data has not been modified or altered. These services are currently managed by centralized controllers, for example, a certificate authority. Therefore, the services are prone to attacks on the centralized controller. On the other hand, blockchain is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights on the use of security services for current applications, to highlight the state of the art techniques that are currently used to provide these services, to describe their challenges, and to discuss how the blockchain technology can resolve these challenges. Further, several blockchain-based approaches providing such security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area. --- paper_title: A Survey of Attacks on Ethereum Smart Contracts SoK paper_content: Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. 
We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage. --- paper_title: New kids on the block: an analysis of modern blockchains paper_content: Half a decade after Bitcoin became the first widely used cryptocurrency, blockchains are receiving considerable interest from industry and the research community. Modern blockchains feature services such as name registration and smart contracts. Some employ new forms of consensus, such as proof-of-stake instead of proof-of-work. However, these blockchains are so far relatively poorly investigated, despite the fact that they move considerable assets. In this paper, we explore three representative, modern blockchains---Ethereum, Namecoin, and Peercoin. Our focus is on the features that set them apart from the pure currency use case of Bitcoin. We investigate the blockchains' activity in terms of transactions and usage patterns, identifying some curiosities in the process. For Ethereum, we are mostly interested in the smart contract functionality it offers. We also carry out a brief analysis of issues that are introduced by negligent design of smart contracts. In the case of Namecoin, our focus is how the name registration is used and has developed over time. For Peercoin, we are interested in the use of proof-of-stake, as this consensus algorithm is poorly understood yet used to move considerable value. Finally, we relate the above to the fundamental characteristics of the underlying peer-to-peer networks. We present a crawler for Ethereum and give statistics on the network size. For Peercoin and Namecoin, we identify the relatively small size of the networks and the weak bootstrapping process. --- paper_title: Bitcoin Risk Analysis paper_content: The surprise advent of the peer-to-peer payment system Bitcoin in 2009 has raised various concerns regarding its relationship to established economic market ideologies. Unlike fiat currencies, Bitcoin is based on open-source software; it is a secure cryptocurrency, traded as an investment between two individuals over the internet, with no bank involvement. Computationally, this is a very innovative solution, but Bitcoin's popularity has raised a number of security and trust concerns among mainstream economists. With cities and countries, including San Francisco and Germany, using Bitcoin as a unit of account in their financial systems, there is still a lack of understanding and a paucity of models for studying its use, and the role Bitcoin might play in real physical economies. This project tackles these issues by analysing the ramifications of Bitcoin within economic models, by building a computational model of the currency to test its performance in financial market models. The project uses established agent-based modelling techniques to build a decentralised Bitcoin model, which can be 'plugged into' existing agent-based models of key economic and financial markets. This allows various metrics to be subjected to critical analysis, gauging the progress of digital economies equipped with Bitcoin usage. This project contributes to the themes of privacy, consent, security and trust in the digital economy and digital technologies, enabling new business models of direct relevance to NEMODE. 
As computer scientists, we consider Bitcoin from a technical perspective; this contrasts with and complements other current Bitcoin research, and helps document the realizable risks Bitcoin and similar currencies bring to our current economic world. This report outlines a comprehensive collection of risks raised by Bitcoin. Risk management is a discipline that can be used to address the possibility of future threats which may cause harm to the existing systems. Although there has been considerable work on analysing Bitcoin in terms of the potential issues it brings to the economic landscape, this report performs a first ever attempt of identifying the threats and risks posed by the use of Bitcoin from the perspective of computational modeling and engineering. In this project we consider risk at all levels of interaction when Bitcoin is introduced and transferred across the systems. We look at the infrastructure and the computational working of the digital currency to identify the potential risks it brings. Additional information can be seen in our forthcoming companion report on the detailed modeling of Bitcoin. --- paper_title: Can We Afford Integrity by Proof-of-Work? Scenarios Inspired by the Bitcoin Currency paper_content: Proof-of-Work (PoW), a well-known principle to ration resource access in client-server relations, is about to experience a renaissance as a mechanism to protect the integrity of a global state in distributed transaction systems under decentralized control. Most prominently, the Bitcoin cryptographic currency protocol leverages PoW to (1) prevent double spending and (2) establish scarcity, two essential properties of any electronic currency. This chapter asks the important question whether this approach is generally viable. Citing actual data, it provides a first cut of an answer by estimating the resource requirements, in terms of operating cost and ecological footprint, of a suitably dimensioned PoW infrastructure and comparing them to three attack scenarios. The analysis is inspired by Bitcoin, but generalizes to potential successors, which fix Bitcoin’s technical and economic teething troubles discussed in the literature. --- paper_title: Mining on Someone Else’s Dime: Mitigating Covert Mining Operations in Clouds and Enterprises paper_content: Covert cryptocurrency mining operations are causing notable losses to both cloud providers and enterprises. Increased power consumption resulting from constant CPU and GPU usage from mining, inflated cooling and electricity costs, and wastage of resources that could otherwise benefit legitimate users are some of the factors that contribute to these incurred losses. Affected organizations currently have no way of detecting these covert, and at times illegal miners and often discover the abuse when attackers have already fled and the damage is done. --- paper_title: A Survey on the Security of Blockchain Systems paper_content: Since its inception, the blockchain technology has shown promising application prospects. From the initial cryptocurrency to the current smart contract, blockchain has been applied to many fields. Although there are some studies on the security and privacy issues of blockchain, there lacks a systematic examination on the security of blockchain systems. In this paper, we conduct a systematic study on the security threats to blockchain and survey the corresponding real attacks by examining popular blockchain systems. 
We also review the security enhancement solutions for blockchain, which could be used in the development of various blockchain systems, and suggest some future directions to stir research efforts into this area. --- paper_title: XMSS: Extended Hash-Based Signatures paper_content: This note describes the eXtended Merkle Signature Scheme (XMSS), a hash-based digital signature system. It follows existing descriptions in scientific literature. The note specifies the WOTS+ one-time signature scheme, a single-tree (XMSS) and a multi-tree variant (XMSS^MT) of XMSS. Both variants use WOTS+ as a main building block. XMSS provides cryptographic digital signatures without relying on the conjectured hardness of mathematical problems. Instead, it is proven that it only relies on the properties of cryptographic hash functions. XMSS provides strong security guarantees and, besides some special instantiations, is even secure when the collision resistance of the underlying hash function is broken. It is suitable for compact implementations, relatively simple to implement, and naturally resists side-channel attacks. Unlike most other signature systems, hash-based signatures withstand attacks using quantum computers. --- paper_title: Stick a fork in it: Analyzing the Ethereum network partition paper_content: As blockchain technologies and cryptocurrencies increase in popularity, their decentralization poses unique challenges in network partitions. In traditional distributed systems, network partitions are generally a result of bugs or connectivity failures; the typical goal of the system designer is to automatically recover from such issues as seamlessly as possible. Blockchain-based systems, however, rely on purposeful "forks" to roll out protocol changes in a decentralized manner. Not all users may agree with proposed changes, and thus forks can persist, leading to permanent network partitions. In this paper, we closely study the large-scale fork that occurred in Ethereum, a new blockchain technology that allows for both currency transactions and smart contracts. Ethereum is currently the second-most-valuable cryptocurrency, with a market capitalization of over $28B. We explore the consequences of this fork, showing the impact on the two networks and their mining pools, and how the fork lead to unintentional incentives and security vulnerabilities. --- paper_title: Bitcoin-NG: A Scalable Blockchain Protocol paper_content: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. 
These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network. --- paper_title: Majority Is Not Enough: Bitcoin Mining Is Vulnerable paper_content: The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. --- paper_title: Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin paper_content: In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack. The reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and it is usable up to four times more often per pool than in BWH attack. When considering multiple pools - the current state of the Bitcoin network - the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from practicality issues, unlike selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools. --- paper_title: ZeroBlock: Preventing Selfish Mining in Bitcoin paper_content: Bitcoin was recently introduced as a peer-to-peer electronic currency in order to facilitate transactions outside the traditional financial system. The core of Bitcoin, the Blockchain, is the history of the transactions in the system maintained by all nodes as a distributed shared register. New blocks in the Blockchain contain the last transactions in the system and are added by nodes (miners) after a block mining process that consists in solving a resource consuming proof-of-work (cryptographic puzzle). The reward is a motivation for mining process but also could be an incentive for attacks such as selfish mining. In this paper we propose a solution for one of the major problems in Bitcoin : selfish mining or block withholding attack. This attack is conducted by adversarial or selfish nodes in order to either earn undue rewards or waste the computational power of honest nodes. 
Contrary to recent solutions, our solution, ZeroBlock, prevents block withholding using a technique free of forgeable timestamps. Moreover, we show that our solution is also compliant with nodes churn. --- paper_title: Hijacking Bitcoin: Routing Attacks on Cryptocurrencies paper_content: As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack few (<100) BGP prefixes to isolate ~50% of the mining power---even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately. --- paper_title: Optimal Selfish Mining Strategies in Bitcoin paper_content: The Bitcoin protocol requires nodes to quickly distribute newly created blocks. Strong nodes can, however, gain higher payoffs by withholding blocks they create and selectively postponing their publication. The existence of such selfish mining attacks was first reported by Eyal and Sirer, who have demonstrated a specific deviation from the standard protocol (a strategy that we name SM1). --- paper_title: Preventing the 51%-Attack: a Stochastic Analysis of Two Phase Proof of Work in Bitcoin paper_content: The security of Bitcoin (a relatively new form of a distributed ledger) is threatened by the formation of large public pools, which form naturally in order to reduce reward variance for individual miners. By introducing a second cryptographic challenge (two phase proof-of-work or 2P-PoW for short), pool operators are forced to either give up their private keys or provide a substantial part of their pool’s mining hashrate which potentially forces pools to become smaller. This document provides a stochastic analysis of the Bitcoin mining protocol extended with 2P-PoW, modelled using CTMCs (continuous-time Markov chains). 2P-PoW indeed holds its promises, according to these models. 
A plot is provided for different "strengths" of the second cryptographic challenge, which can be used to select proper values for future implementers. --- paper_title: POSTER: Deterring DDoS Attacks on Blockchain-based Cryptocurrencies through Mempool Optimization paper_content: In this paper, we highlight a new form of distributed denial of service (DDoS) attack that impacts the memory pools of cryptocurrency systems causing massive transaction backlog and higher mining fees. Towards that, we study such an attack on Bitcoin mempools and explore its effects on the mempool size and transaction fees paid by the legitimate users. We also propose countermeasures to contain such an attack. Our countermeasures include fee-based and age-based designs, which optimize the mempool size and help to counter the effects of DDoS attacks. We evaluate our designs using simulations in diverse attack conditions. --- paper_title: A Survey of Attacks on Ethereum Smart Contracts SoK paper_content: Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage. --- paper_title: Analysis of Hashrate-Based Double Spending paper_content: Bitcoin is the world's first decentralized digital currency. Its main technical innovation is the use of a blockchain and hash-based proof of work to synchronize transactions and prevent double-spending the currency. While the qualitative nature of this system is well understood, there is widespread confusion about its quantitative aspects and how they relate to attack vectors and their countermeasures. In this paper we take a look at the stochastic processes underlying typical attacks and their resulting probabilities of success. --- paper_title: Anonymity Properties of the Bitcoin P2P Network paper_content: Bitcoin is a popular alternative to fiat money, widely used for its perceived anonymity properties. However, recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its gossip-based flooding protocols, which are used to ensure global network consistency, may enable user deanonymization---the linkage of a user's IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network's flooding mechanism to a different protocol, known as diffusion. However, no systematic justification was provided for the change, and it is unclear if diffusion actually improves the system's anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both pre- and post-2015. In doing so, we consider new adversarial models and spreading mechanisms that have not been previously studied in the source-finding literature. We theoretically prove that Bitcoin's networking protocols (both pre- and post-2015) offer poor anonymity properties on networks with a regular-tree topology. 
We validate this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology. --- paper_title: Double-spending prevention for Bitcoin zero-confirmation transactions paper_content: Zero-confirmation transactions, i.e. transactions that have been broadcast but are still pending to be included in the blockchain, have gained attention in order to enable fast payments in Bitcoin, shortening the time for performing payments. Fast payments are desirable in certain scenarios, for instance, when buying in vending machines, fast food restaurants, or withdrawing from an ATM. Despite being quickly propagated through the network, zero-confirmation transactions are not protected against double-spending attacks, since the double-spending protection Bitcoin offers relies on the blockchain and, by definition, such transactions are not yet included in it. In this paper, we propose a double-spending prevention mechanism for Bitcoin zero-confirmation transactions. Our proposal is based on exploiting the flexibility of the Bitcoin scripting language together with a well-known vulnerability of the ECDSA signature scheme to discourage attackers from performing such an attack. --- paper_title: An Adaptive Gas Cost Mechanism for Ethereum to Defend Against Under-Priced DoS Attacks paper_content: The gas mechanism in Ethereum charges the execution of every operation to ensure that smart contracts running in EVM (Ethereum Virtual Machine) will be eventually terminated. Failing to properly set the gas costs of EVM operations allows attackers to launch DoS attacks on Ethereum. Although Ethereum recently adjusted the gas costs of EVM operations to defend against known DoS attacks, it remains unknown whether the new setting is proper and how to configure it to defend against unknown DoS attacks. In this paper, we make the first step to address this challenging issue by first proposing an emulation-based framework to automatically measure the resource consumptions of EVM operations. The results reveal that Ethereum's new setting is still not proper. Moreover, we obtain an insight that there may always exist exploitable under-priced operations if the cost is fixed. Hence, we propose a novel gas cost mechanism, which dynamically adjusts the costs of EVM operations according to the number of executions, to thwart DoS attacks. This method punishes the operations that are executed much more frequently than before and lead to high gas costs. To make our solution flexible and secure and avoid frequent update of Ethereum client, we design a special smart contract that collaborates with the updated EVM for dynamic parameter adjustment. Experimental results demonstrate that our method can effectively thwart both known and unknown DoS attacks with flexible parameter settings. Moreover, our method only introduces negligible additional gas consumption for benign users. --- paper_title: Secure and anonymous decentralized Bitcoin mixing paper_content: The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoins most central element, the blockchain, a public ledger of all transactions. 
Thus, many regard Bitcoin's central promise of financial privacy as broken. In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations. We present ideal properties for mixing of digital currencies. We propose a novel efficient decentralized mixing service for Bitcoin. A novel oblivious shuffle protocol improves resilience against malicious attackers. Use of threshold cryptography increases anonymity and enables deniability. The system is usable, scalable and compatible with Bitcoin/other digital currencies. --- paper_title: Making Smart Contracts Smarter paper_content: Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the ideas of smart contracts, or full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts, holding millions of dollars worth of virtual coins. In this paper, we investigate the security of running smart contracts based on Ethereum in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19,336 existing Ethereum contracts, Oyente flags 8,833 of them as vulnerable, including the TheDAO bug which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available and confirm the attacks (which target only our accounts) in the main Ethereum network. --- paper_title: Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies paper_content: Besides attracting a billion dollar economy, Bitcoin revolutionized the field of digital currencies and influenced many adjacent areas. This also induced significant scientific interest. In this survey, we unroll and structure the manyfold results and research directions. We start by introducing the Bitcoin protocol and its building blocks. From there we continue to explore the design space by discussing existing contributions and results. 
In the process, we deduce the fundamental structures and insights at the core of the Bitcoin protocol and its applications. As we show and discuss, many key ideas are likewise applicable in various other fields, so that their impact reaches far beyond Bitcoin itself. --- paper_title: A Review on the Use of Blockchain for the Internet of Things paper_content: The paradigm of Internet of Things (IoT) is paving the way for a world, where many of our daily objects will be interconnected and will interact with their environment in order to collect information and automate certain tasks. Such a vision requires, among other things, seamless authentication, data privacy, security, robustness against attacks, easy deployment, and self-maintenance. Such features can be brought by blockchain, a technology born with a cryptocurrency called Bitcoin. In this paper, a thorough review on how to adapt blockchain to the specific needs of IoT in order to develop Blockchain-based IoT (BIoT) applications is presented. After describing the basics of blockchain, the most relevant BIoT applications are described with the objective of emphasizing how blockchain can impact traditional cloud-centered IoT applications. Then, the current challenges and possible optimizations are detailed regarding many aspects that affect the design, development, and deployment of a BIoT application. Finally, some recommendations are enumerated with the aim of guiding future BIoT researchers and developers on some of the issues that will have to be tackled before deploying the next generation of BIoT applications. --- paper_title: On the Security and Performance of Proof of Work Blockchains paper_content: Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions. --- paper_title: Avoiding Deadlocks in Payment Channel Networks paper_content: Payment transaction channels are one of the main proposed approaches to scaling cryptocurrency payment systems. Recent work by Malavolta et al. [7] has shown that the privacy of the protocol may conflict with its concurrent nature and may lead to deadlocks. In this paper we ask the natural question: can payments in routing networks be routed so as to avoid deadlocks altogether? 
Our results show that it is in general NP-complete to determine whether a deadlock-free routing exists in a given payment graph. On the other hand, given some fixed routing, we propose another way to resolve the problem of deadlocks. We offer a modification of the protocols in lightning network and in Fulgor [7] that pre-locks edges in an order that guarantees progress, while still maintaining the protocol’s privacy requirements. --- paper_title: CoinExpress: A Fast Payment Routing Mechanism in Blockchain-Based Payment Channel Networks paper_content: Although cryptocurrencies have witnessed explosive growth in the past year, they have also raised many concerns, among which a crucial one is the scalability issue of blockchain-based cryptocurrencies. Suffering from the large overhead of global consensus and security assurance, even leading cryptocurrencies can only handle up to tens of transactions per second, which largely limits their applications in real-world scenarios. Among many proposals to improve cryptocurrency scalability, one of the most promising and mature solutions is the payment channel network (PCN), which offers off-chain settlement of transactions with minimal involvement of expensive blockchain operations. In this paper, we investigate the problem of payment routing in PCN. We suggest crucial design goals in PCN routing, and propose a novel distributed dynamic routing mechanism called CoinExpress. Through extensive simulations, we have shown that our proposed mechanism is able to achieve outstanding payment acceptance ratio with low routing overhead. --- paper_title: POSTER: Deterring DDoS Attacks on Blockchain-based Cryptocurrencies through Mempool Optimization paper_content: In this paper, we highlight a new form of distributed denial of service (DDoS) attack that impacts the memory pools of cryptocurrency systems causing massive transaction backlog and higher mining fees. Towards that, we study such an attack on Bitcoin mempools and explore its effects on the mempool size and transaction fees paid by the legitimate users. We also propose countermeasures to contain such an attack. Our countermeasures include fee-based and age-based designs, which optimize the mempool size and help to counter the effects of DDoS attacks. We evaluate our designs using simulations in diverse attack conditions. ---
Title: Exploring the Attack Surface of Blockchain: A Systematic Overview
Section 1: INTRODUCTION
Description 1: Introduce the importance of Blockchain technology and outline its application scenarios and associated security risks.
Section 2: MOTIVATION AND TARGET AUDIENCE
Description 2: Explain the motivation behind the study and identify the target audience interested in Blockchain security vulnerabilities.
Section 3: OVERVIEW OF BLOCKCHAIN AND ITS OPERATIONS
Description 3: Provide a conceptual overview of Blockchain including its structure, operations, and consensus algorithms.
Section 4: BLOCKCHAIN STRUCTURE ATTACKS
Description 4: Discuss attacks related to the design constructs of Blockchain such as forks, stale blocks, orphaned blocks, and vulnerabilities in consensus mechanisms.
Section 5: BLOCKCHAIN'S PEER-TO-PEER SYSTEM
Description 5: Explore attacks associated with the Blockchain's peer-to-peer architecture including selfish mining, majority attacks, DNS attacks, network attacks, eclipse attacks, DDoS attacks, block withholding attacks, consensus delay, and timejacking attacks.
Section 6: APPLICATION ORIENTED ATTACKS
Description 6: Address application-specific vulnerabilities in Blockchain, with a focus on Blockchain ingestion, double-spending, cryptojacking, wallet theft, and smart contract attacks.
Section 7: RELATED WORK
Description 7: Review prior research efforts towards understanding the attack surface of Blockchain technology and how they complement this study.
Section 8: DISCUSSION AND OPEN DIRECTIONS
Description 8: Summarize key lessons learned from the study and highlight open challenges that require future research directions.
Section 9: CONCLUSION
Description 9: Conclude by summarizing the exploration of Blockchain attack surfaces and emphasize the ongoing need for improved security practices.
How should my chatbot interact? A survey on human-chatbot interaction design
12
--- paper_title: The media equation: how people treat computers, television, and new media like real people and places paper_content: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References. --- paper_title: "Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents paper_content: The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems. --- paper_title: Computers as persuasive social actors paper_content: This chapter describes the role of computing products as persuasive social actors. These products persuade by giving a variety of social cues that elicit social responses from their human users. The chapter proposes five primary types of social cues cause people to make inferences about social presence in a computing product—physical, psychological, language, social dynamics, and social roles. Physical attractiveness has a significant impact on social influence. A computing technology that is visually attractive to target users is likely to be more persuasive as well. People are more readily persuaded by computing technology products that are similar to themselves; they feel the need to reciprocate when computing technology has done a favor for them. The chapter concludes that the social cues should be handled with care. The positive and negative outcome often depends on the user. --- paper_title: Let's Talk About Race: Identity, Chatbots, and AI paper_content: Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. 
Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways? --- paper_title: Computers are social actors paper_content: This paper presents a new experimental paradigm for the study of human-computer interaction. Five experiments provide evidence that individuals' interactions with computers are fundamentally social. The studies show that social responses to computers are not the result of conscious beliefs that computers are human or human-like. Moreover, such behaviors do not result from users' ignorance or from psychological or social dysfunctions, nor from a belief that subjects are interacting with programmers. Rather, social responses to computers are commonplace and easy to generate. The results reported here present numerous and unprecedented hypotheses, unexpected implications for design, new approaches to usability testing, and direct methods for verification. --- paper_title: Qualitative Data: An Introduction to Coding and Analysis paper_content: Preface Acknowledgments Part I: Getting into Qualitative Research 1. Introducing Qualitative Hypothesis-Generating Research: The Yeshiva University Fatherhood Project Part II: Planning Your First Research Study 2. Designing Hypothesis-Generating Research: The Haitian Fathers Study 3. Qualitative and Quantitative Research as Complementary Strategies Part III: Analyzing Your First Research Study 4. Coding 1: The Basic Ideas 5. Coding 2: The Mechanics, Phase 1: Making the Text Manageable 6. Coding 2: The Mechanics, Phase 2: Hearing What Was Said 7. Coding 2: The Mechanics, Phase 3: Developing Theory 8. Convincing Other People: The Issues Formerly Known as Reliability, Validity, and Generalizability Part IV: Designing and Analyzing Your Next Research Study 9. Designing Your Next Study Using Theoretical Sampling: The Promise Keeper Fathers 10. Analyzing Your Next Study Using Elaborative Coding: The Promise Keeper Fathers Part V: Final Thoughts 11. The "Why" of Qualitative Research: A Personal View Appendix A: Simplifying the Bookkeeping with Qualitative Data Analysis Programs Appendix B: The Haitian Fathers Study Appendix C: The Promise Keepers Study References Index About the Authors --- paper_title: How interface agents affect interaction between humans and computers paper_content: For many years, the HCI community has harbored a vision of interacting with intelligent, embodied computer agents. However, the reality of this vision remains elusive. From an interaction design perspective, little is known about how to specifically design an embodied agent to support the task it will perform and the social interactions that will result. This paper presents design research that explores the relationship between the visual features of embodied agents and the tasks they perform, and the social attributions that result. Our results show a clear link between agent task and agent form and reveal that people often prefer agents who conform to gender stereotypes associated with tasks. Based on the results of this work, we provide a set of emerging design considerations to help guide interaction designers in creating the visual form of embodied agents. --- paper_title: Single or Multiple Conversational Agents?: An Interactional Coherence Comparison paper_content: Chatbots focusing on a narrow domain of expertise are in great rise.
As several tasks require multiple expertise, a designer may integrate multiple chatbots in the background or include them as interlocutors in a conversation. We investigated both scenarios by means of a Wizard of Oz experiment, in which participants talked to chatbots about visiting a destination. We analyzed the conversation content, users' speech, and reported impressions. We found no significant difference between single- and multi-chatbots scenarios. However, even with equivalent conversation structures, users reported more confusion in multi-chatbots interactions and adopted strategies to organize turn-taking. Our findings indicate that implementing a meta-chatbot may not be necessary, since similar conversation structures occur when interacting to multiple chatbots, but different interactional aspects must be considered for each scenario. --- paper_title: Six modes of proactive resource management: a user-centric typology for proactive behaviors paper_content: Proactivity has recently arisen as one of the focus areas within HCI. Proactive systems adhere to two premises: 1) working on behalf of, or pro, the user, and 2) acting on their own initiative. To extend researchers' views on how proactive systems can support the user, we clarify the concept of proactivity and suggest a typology that distinguishes between 6 modes of proactive resource management: preparation, optimization, advising, manipulation, inhibition, and finalization of user's resources. A scenario of mobile imaging is presented to illustrate how the typology can support the innovation of new use purposes. We argue that conceptual developments like the one proposed here are crucial for the advancement of the emerging field. --- paper_title: Evaluating the language resources of chatbots for their potential in english as a second language paper_content: This paper investigates the linguistic worth of current ‘chatbot’ programs – software programs which attempt to hold a conversation, or interact, in English – as a precursor to their potential as an ESL (English as a second language) learning resource. After some initial background to the development of chatbots, and a discussion of the Loebner Prize Contest for the most ‘human’ chatbot (the ‘Turing Test’), the paper describes an in-depth study evaluating the linguistic accuracy of a number of chatbots available online. Since the ultimate purpose of the current study concerns chatbots' potential with ESL learners, the analysis of language embraces not only an examination of features of language from a native-speaker's perspective (the focus of the Turing Test), but also aspects of language from a second-language-user's perspective. Analyses indicate that while the winner of the 2005 Loebner Prize is the most able chatbot linguistically, it may not necessarily be the chatbot most suited to ESL learners. The paper concludes that while substantial progress has been made in terms of chatbots' language-handling, a robust ESL ‘conversation practice machine’ (Atwell, 1999) is still some way off being a reality. --- paper_title: Survey on Chatbot Design Techniques in Speech Conversation Systems paper_content: Human-Computer Speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech based search engines and assistants such as Siri, Google Chrome and Cortana. 
Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human-like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey on the techniques used to design Chatbots, and a comparison is made between different design techniques from nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences in the techniques and examines in particular the Loebner prize-winning Chatbots. --- paper_title: Iterative Development and Evaluation of a Social Conversational Agent paper_content: We show that an agent with fairly good social conversational abilities can be built based on a limited number of topics and dialogue strategies if it is tailored to its intended users through a high degree of user involvement during an iterative development process. The technology used is pattern matching of question-answer pairs, coupled with strategies to handle follow-up questions, utterances not understood, abusive utterances, repetitive utterances, and initiation of new topics. --- paper_title: An Investigation of Conversational Agent Interventions Supporting Historical Reasoning in Primary Education paper_content: This work examines the efficiency of an agent intervention mode, aiming to stimulate productive conversational interactions and encourage students to explicate their historical reasoning about important domain concepts. The findings of a pilot study, conducted in the context of a primary school class in Modern History, (a) suggest a favorable student opinion of the conversational agent, (b) indicate that agent interventions can help students to engage in a transactive form of dialogue, where peers build on each other's reasoning, and (c) reveal a series of interaction patterns emerging from the display of the agent interventions. --- paper_title: A new friend in our smartphone?: observing interactions with chatbots in the search of emotional engagement paper_content: We present the findings of a quantitative and qualitative empirical study to understand the possibilities of engagement and affection in the use of conversational agents (chatbots). Based on an experiment with 13 participants, we explored, on one hand, the correlation between user expectation, user experience and intended use and, on the other, whether users feel keen and engaged in having a personal, empathic relation with an intelligent system like chatbots. We used psychological questionnaires and semi-structured interviews to disentangle the meaning of the interaction. In particular, the personal psychological background of participants was found critical, while the experience itself allowed them to imagine new possible relations with chatbots. Our results show some insights on how people understand and empathize with future interactions with conversational agents and other non-visual interfaces. --- paper_title: 'Realness' in chatbots: establishing quantifiable criteria paper_content: The aim of this research is to generate measurable evaluation criteria acceptable to chatbot users. Results of two studies are summarised. In the first, fourteen participants were asked to do a critical incident analysis of their transcriptions with an ELIZA-type chatbot.
Results were content analysed, and yielded seven overall themes. In the second, these themes were made into statements of an attitude-like nature, and 20 participants chatted with five winning entrants in the 2011 Chatterbox Challenge and five which failed to place. Latent variable analysis reduced the themes to four, resulting in four subscales with strong reliability which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots. --- paper_title: Here's What I Can Do: Chatbots' Strategies to Convey Their Features to Users paper_content: Chatbots have been around since the 1960's, but recently they have risen in popularity especially due to new compatibility with social networks and messenger applications. Chatbots are different from traditional user interfaces, for they unveil themselves to the user one sentence at a time. Because of that, users may struggle to interact with them and to understand what they can do. Hence, it is important to support designers in deciding how to convey chatbots' features to users, as this might determine whether the user continues to chat or not. As a first step in this direction, in this paper our goal is to analyze the communicative strategies that have been used by popular chatbots to convey their features to users. To perform this analysis we use the Semiotic Inspection Method (SIM). As a result we identify and discuss the different strategies used by the analyzed chatbots to present their features to users. We also discuss the challenges and limitations of using SIM on such interfaces. --- paper_title: Benjamin Franklin’s Decision Method is Acceptable and Helpful with a Conversational Agent paper_content: In this paper, we show that rational decision-making methods such as Benjamin Franklin’s can be successfully implemented as a text based natural language dialog system. More specifically, we developed a prototype, vpino, and conducted a user study. Vpino acts maieutically: the questions raised by vpino encourage the user to reflect about potential options and arguments and help her structure her thoughts. To maintain a real dialog, vpino unobtrusively attempts to keep control of the conversation at all times. Serious, motivated users evaluated acceptance and usefulness of vpino quite positively. Users that are more open to computer based decision support held better and more fruitful dialogs than those with a sceptical attitude. This quantitative result conforms well to our qualitative observation that vpino shows good human like behaviour whenever the user is serious and motivated. We also found that users with a more hypervigilant approach to decisions particularly benefit from vpino. --- paper_title: Social Facilitation Effects by Pedagogical Conversational Agent: Lexical Network Analysis in an Online Explanation Task. paper_content: The present study investigates web-based learning activities of undergraduate students who generate explanations about a key concept taught in a large-scale classroom. The present study used an online system with Pedagogical Conversational Agent (PCA), asked to explain about the key concept from different points and provided suggestions and requests about how to make explanations, and gave social facilitation prompts such as providing examples by other members in the classroom. 
A total of 314 learner's text based explanation activities were collected from three different classrooms and were analyzed using the social network analysis methods. The main results from the lexical analysis show that those using the PCAs with social feedback worked harder to use more various types of explanations than those without such feedback. Future directions on how to design online tutoring systems are discussed. --- paper_title: The media equation: how people treat computers, television, and new media like real people and places paper_content: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References. --- paper_title: Social intelligence - empathy = aggression? paper_content: Empathy reduces aggressive behavior. While empathy and social intelligence are strongly correlated, it is, for both logical and consequential reasons, important to regard them as different concepts. Social intelligence is required for all types of conflict behavior, prosocial as well as antisocial, but the presence of empathy acts as a mitigator of aggression. When empathy is partialed out, correlations between social intelligence and all types of aggression increase, while correlations between social intelligence and peaceful conflict resolution decrease. Social intelligence is related differently to various forms of aggressive behavior: more strongly to indirect than to verbal aggression, and weakest to physical aggression, which is in accordance with the developmental theory of aggressive style. More sophisticated forms of aggression require more social intelligence. --- paper_title: The media inequality: Comparing the initial human-human and human-AI social interactions paper_content: As human-machine communication has yet to become prevalent, the rules of interactions between human and intelligent machines need to be explored. This study aims to investigate a specific question: During human users' initial interactions with artificial intelligence, would they reveal their personality traits and communicative attributes differently from human-human interactions? A sample of 245 participants was recruited to view six targets' twelve conversation transcripts on a social media platform: Half with a chatbot Microsoft's Little Ice, and half with human friends. The findings suggested that when the targets interacted with Little Ice, they demonstrated different personality traits and communication attributes from interacting with humans. Specifically, users tended to be more open, more agreeable, more extroverted, more conscientious and self-disclosing when interacting with humans than with AI. The findings not only echo Mischel's cognitive-affective processing system model but also complement the Computers Are Social Actors Paradigm. Theoretical implications were discussed. 
During initial interactions with AI, would users reveal their personality differently? An exploratory study was conducted to examine users' performance. Facing AI, users demonstrated different personality from interacting with humans. Users were less open, agreeable, extroverted and conscientious when interacting with AI. The findings echo Mischel's cognitive-affective processing system model. --- paper_title: The media equation: how people treat computers, television, and new media like real people and places paper_content: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References. --- paper_title: "Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents paper_content: The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems. --- paper_title: Iterative Development and Evaluation of a Social Conversational Agent paper_content: We show that an agent with fairly good social conversational abilities can be built based on a limited number of topics and dialogue strategies if it is tailored to its intended users through a high degree of user involvement during an iterative development process. The technology used is pattern matching of question-answer pairs, coupled with strategies to handle follow-up questions, utterances not understood, abusive utterances, repetitive utterances, and initiation of new topics. --- paper_title: Social intelligence - empathy = aggression? paper_content: Empathy reduces aggressive behavior. While empathy and social intelligence are strongly correlated, it is, for both logical and consequential reasons, important to regard them as different concepts. Social intelligence is required for all types of conflict behavior, prosocial as well as antisocial, but the presence of empathy acts as a mitigator of aggression. When empathy is partialed out, correlations between social intelligence and all types of aggression increase, while correlations between social intelligence and peaceful conflict resolution decrease.
Social intelligence is related differently to various forms of aggressive behavior: more strongly to indirect than to verbal aggression, and weakest to physical aggression, which is in accordance with the developmental theory of aggressive style. More sophisticated forms of aggression require more social intelligence. --- paper_title: Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations paper_content: How does communication change when speaking to intelligent agents over humans? We compared 100 IM conversations to 100 exchanges with the chatbot Cleverbot. People used more, but shorter, messages when communicating with chatbots. People also used more restricted vocabulary and greater profanity with chatbots. People can easily adapt their language to communicate with intelligent agents. This study analyzed how communication changes when people communicate with an intelligent agent as opposed to with another human. We compared 100 instant messaging conversations to 100 exchanges with the popular chatbot Cleverbot along seven dimensions: words per message, words per conversation, messages per conversation, word uniqueness, and use of profanity, shorthand, and emoticons. A MANOVA indicated that people communicated with the chatbot for longer durations (but with shorter messages) than they did with another human. Additionally, human-chatbot communication lacked much of the richness of vocabulary found in conversations among people, and exhibited greater profanity. These results suggest that while human language skills transfer easily to human-chatbot communication, there are notable differences in the content and quality of such conversations. --- paper_title: Media inequality in conversation: how people behave differently when interacting with computers and people paper_content: How is interacting with computer programs different from interacting with people? One answer in the literature is that these two types of interactions are similar. The present study challenges this perspective with a laboratory experiment grounded in the principles of Interpersonal Theory, a psychological approach to interpersonal dynamics. Participants had a text-based, structured conversation with a computer that gave scripted conversational responses. The main manipulation was whether participants were told that they were interacting with a computer program or a person in the room next door. Discourse analyses revealed a key difference in participants' behavior -- when participants believed they were talking to a person, they showed many more of the kinds of behaviors associated with establishing the interpersonal nature of a relationship. This finding has important implications for the design of technologies intended to take on social roles or characteristics. --- paper_title: You always hurt the one you love: Strategies and tactics in interpersonal conflict paper_content: The present study delineates the five most frequently utilized strategies of relational conflict resolution. Three propositions were tested. In examining the first proposition, same and opposite sex friends indicated that they utilized significantly different conflict strategies. In testing proposition two, males and females reported that they employ different strategies with their same but not opposite sex friends. In probing the third proposition, frequently used conflict strategies discriminated between levels of relational satisfaction.
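Illustrative sketch (not part of the cited studies): the 'Real conversations with artificial intelligence' entry above compares human-human and human-chatbot logs along seven dimensions (words per message, words per conversation, messages per conversation, word uniqueness, profanity, shorthand, and emoticons). The minimal Python sketch below shows how such per-conversation measures could be computed; the profanity, shorthand, and emoticon lists are invented placeholders rather than the instruments used in the paper.

def conversation_metrics(messages, profanity=None, shorthand=None, emoticons=(":)", ":(", ":D")):
    """Compute simple per-conversation statistics from a list of message strings."""
    profanity = profanity or {"damn", "hell"}       # placeholder lexicon, not the study's
    shorthand = shorthand or {"lol", "brb", "omg"}  # placeholder lexicon, not the study's
    tokens = [w.lower().strip(".,!?") for m in messages for w in m.split()]
    n_words = max(len(tokens), 1)
    return {
        "messages_per_conversation": len(messages),
        "words_per_conversation": len(tokens),
        "words_per_message": len(tokens) / max(len(messages), 1),
        "word_uniqueness": len(set(tokens)) / n_words,  # type-token ratio
        "profanity_rate": sum(t in profanity for t in tokens) / n_words,
        "shorthand_rate": sum(t in shorthand for t in tokens) / n_words,
        "emoticon_count": sum(m.count(e) for m in messages for e in emoticons),
    }

# Example: one short human-chatbot exchange.
print(conversation_metrics(["hi :)", "what can you do", "lol ok brb"]))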
--- paper_title: Benjamin Franklin’s Decision Method is Acceptable and Helpful with a Conversational Agent paper_content: In this paper, we show that rational decision-making methods such as Benjamin Franklin’s can be successfully implemented as a text based natural language dialog system. More specifically, we developed a prototype, vpino, and conducted a user study. Vpino acts maieutically: the questions raised by vpino encourage the user to reflect about potential options and arguments and help her structure her thoughts. To maintain a real dialog, vpino unobtrusively attempts to keep control of the conversation at all times. Serious, motivated users evaluated acceptance and usefulness of vpino quite positively. Users that are more open to computer based decision support held better and more fruitful dialogs than those with a sceptical attitude. This quantitative result conforms well to our qualitative observation that vpino shows good human like behaviour whenever the user is serious and motivated. We also found that users with a more hypervigilant approach to decisions particularly benefit from vpino. --- paper_title: The media equation: how people treat computers, television, and new media like real people and places paper_content: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References. --- paper_title: Single or Multiple Conversational Agents?: An Interactional Coherence Comparison paper_content: Chatbots focusing on a narrow domain of expertise are in great rise. As several tasks require multiple expertise, a designer may integrate multiple chatbots in the background or include them as interlocutors in a conversation. We investigated both scenarios by means of a Wizard of Oz experiment, in which participants talked to chatbots about visiting a destination. We analyzed the conversation content, users' speech, and reported impressions. We found no significant difference between single- and multi-chatbots scenarios. However, even with equivalent conversation structures, users reported more confusion in multi-chatbots interactions and adopted strategies to organize turn-taking. Our findings indicate that implementing a meta-chatbot may not be necessary, since similar conversation structures occur when interacting to multiple chatbots, but different interactional aspects must be considered for each scenario. --- paper_title: Conversational Agents for Game-Like Virtual Environments paper_content: Insp. Gray: I’d like to ask you a few questions about last night. Coln. Mustard:Fire away young man, I’ve got nothing to hide. (Mustard uses phrase ’young man’ he feels a superiority in the social relationship with the inspector) (Mustard is a defensive person protests innocence straight away) Insp. Gray: Were you in the ballroom at all last night? Coln. Mustard:Yes. I was in the ballroom between 1945 hours and 2000 hours. 
(Mustard is an army man dictates choice of ’1945 hours’ over ’seven forty-five’) (Mustard is confident of himself very affirmative tone used) Insp. Gray: Okay. So where had you been before you went to the ballroom? Coln. Mustard:I was in the Kitchen (Abrupt. Dislikes the Inspectors questioning) Insp. Gray: But Mrs. White was in the Kitchen at that time and said that she didn’t see anyone else. Coln. Mustard:Well she must be mistaken. (Very defensive. Authoritative about the fact commanding personality) Insp. Gray: She seemed very confident she was right. Are you sure you’ve not made a mistake? Coln. Mustard:I’m absolutely certain! (Very defensive and angry strengthens his claim with the use of the adverb) Coln. Mustard:You should talk to Mrs. Peacock. I saw her heading towards the library at 1945 hours. She seemed very suspicious. (Changes subject dominant personality disliked previous conversational direction) Insp. Gray: Why do you say that? Coln. Mustard:She hates reading, and never goes to the library. (Authoritative confident of his knowledge) Insp. Gray: Okay. Thanks for your time. Figure 1. Example Conversation from the Cluedo Game --- paper_title: Evaluating the language resources of chatbots for their potential in english as a second language paper_content: This paper investigates the linguistic worth of current ‘chatbot’ programs – software programs which attempt to hold a conversation, or interact, in English – as a precursor to their potential as an ESL (English as a second language) learning resource. After some initial background to the development of chatbots, and a discussion of the Loebner Prize Contest for the most ‘human’ chatbot (the ‘Turing Test’), the paper describes an in-depth study evaluating the linguistic accuracy of a number of chatbots available online. Since the ultimate purpose of the current study concerns chatbots' potential with ESL learners, the analysis of language embraces not only an examination of features of language from a native-speaker's perspective (the focus of the Turing Test), but also aspects of language from a second-language-user's perspective. Analyses indicate that while the winner of the 2005 Loebner Prize is the most able chatbot linguistically, it may not necessarily be the chatbot most suited to ESL learners. The paper concludes that while substantial progress has been made in terms of chatbots' language-handling, a robust ESL ‘conversation practice machine’ (Atwell, 1999) is still some way off being a reality. --- paper_title: Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations paper_content: How does communication change when speaking to intelligent agents over humans?We compared 100 IM conversations to 100 exchanges with the chatbot Cleverbot.People used more, but shorter, messages when communicating with chatbots.People also used more restricted vocabulary and greater profanity with chatbots.People can easily adapt their language to communicate with intelligent agents. This study analyzed how communication changes when people communicate with an intelligent agent as opposed to with another human. We compared 100 instant messaging conversations to 100 exchanges with the popular chatbot Cleverbot along seven dimensions: words per message, words per conversation, messages per conversation, word uniqueness, and use of profanity, shorthand, and emoticons. 
A MANOVA indicated that people communicated with the chatbot for longer durations (but with shorter messages) than they did with another human. Additionally, human-chatbot communication lacked much of the richness of vocabulary found in conversations among people, and exhibited greater profanity. These results suggest that while human language skills transfer easily to human-chatbot communication, there are notable differences in the content and quality of such conversations. --- paper_title: 'Realness' in chatbots: establishing quantifiable criteria paper_content: The aim of this research is to generate measurable evaluation criteria acceptable to chatbot users. Results of two studies are summarised. In the first, fourteen participants were asked to do a critical incident analysis of their transcriptions with an ELIZA-type chatbot. Results were content analysed, and yielded seven overall themes. In the second, these themes were made into statements of an attitude-like nature, and 20 participants chatted with five winning entrants in the 2011 Chatterbox Challenge and five which failed to place. Latent variable analysis reduced the themes to four, resulting in four subscales with strong reliability which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots. --- paper_title: Neural personalized response generation as domain adaptation paper_content: One of the most crucial problem on training personalized response generation models for conversational robots is the lack of large scale personal conversation data. To address the problem, we propose a two-phase approach, namely initialization then adaptation, to first pre-train an optimized RNN encoder-decoder model (LTS model) in a large scale conversational data for general response generation and then fine-tune the model in a small scale personal conversation data to generate personalized responses. For evaluation, we propose a novel human aided method, which can be seen as a quasi-Turing test, to evaluate the performance of the personalized response generation models. Experimental results show that the proposed personalized response generation model outperforms the state-of-the-art approaches to language model personalization and persona-based neural conversation generation on the automatic evaluation, offline human judgment and the quasi-Turing test. --- paper_title: Single or Multiple Conversational Agents?: An Interactional Coherence Comparison paper_content: Chatbots focusing on a narrow domain of expertise are in great rise. As several tasks require multiple expertise, a designer may integrate multiple chatbots in the background or include them as interlocutors in a conversation. We investigated both scenarios by means of a Wizard of Oz experiment, in which participants talked to chatbots about visiting a destination. We analyzed the conversation content, users' speech, and reported impressions. We found no significant difference between single- and multi-chatbots scenarios. However, even with equivalent conversation structures, users reported more confusion in multi-chatbots interactions and adopted strategies to organize turn-taking. Our findings indicate that implementing a meta-chatbot may not be necessary, since similar conversation structures occur when interacting to multiple chatbots, but different interactional aspects must be considered for each scenario. 
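Illustrative sketch (not the LTS model of the 'Neural personalized response generation as domain adaptation' entry above): the two-phase "initialization then adaptation" schedule described there can be pictured as pre-training one encoder-decoder on a large general dialog corpus and then fine-tuning the same weights, typically with a smaller learning rate, on a small personal corpus. The PyTorch toy below assumes random token tensors in place of real corpora and a deliberately tiny GRU model.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Toy GRU encoder-decoder standing in for the paper's RNN response generator."""
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.emb(src))              # encode the user utterance
        dec_out, _ = self.dec(self.emb(tgt_in), h)  # teacher-forced decoding
        return self.out(dec_out)                    # logits over the vocabulary

def train_phase(model, batches, lr, epochs):
    """Run one training phase over (source, target) token batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for src, tgt in batches:
            logits = model(src, tgt[:, :-1])        # shifted target for teacher forcing
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

# Random token tensors stand in for real general and personal dialog corpora.
general_batches = [(torch.randint(0, 1000, (8, 10)), torch.randint(0, 1000, (8, 11)))]
personal_batches = [(torch.randint(0, 1000, (2, 10)), torch.randint(0, 1000, (2, 11)))]

model = TinySeq2Seq()
train_phase(model, general_batches, lr=1e-3, epochs=2)   # phase 1: initialization on general data
train_phase(model, personal_batches, lr=1e-4, epochs=2)  # phase 2: adaptation to the small personal corpus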
--- paper_title: Having an Animated Coffee with a Group of Chatbots from the 19th Century paper_content: Beyond current dyadic chatbots and voice-based personal assistants, this demo showcases a novel experience where users interact with multiple, text-based conversational systems as if seating around a table with them. Two key technologies allow the seamless interaction between users and chatbots: (1) A state-of-art artificial conversational governance system which allows a natural flow of conversation without the use of vocatives to trigger the chatbot responses; (2) a conversation visualization mechanism where the utterances from participants are aesthetically projected on a tabletop. Those technologies were developed originally for "Cafe com os Santiagos" an artwork where visitors conversed with three chatbots portraying characters from a book in a scenographic space recreating a 19th-century coffee table. The demo allows users to seat to actually have coffee and chat with multi-chatbots characters. --- paper_title: Politeness : some universals in language usage paper_content: Symbols and abbreviations Foreword John J. Gumperz Introduction to the reissue Notes 1. Introduction 2. Summarized argument 3. The argument: intuitive bases and derivative definitions 4. On the nature of the model 5. Realizations of politeness strategies in language 6. Derivative hypotheses 7. Sociological implications 8. Implications for language studies 9. Conclusions Notes References Author index Subject index. --- paper_title: Computer-Mediated Communication Impersonal, Interpersonal, and Hyperpersonal Interaction paper_content: While computer-mediated communication use and research are proliferating rapidly, findings offer contrasting images regarding the interpersonal character of this technology. Research trends over the history of these media are reviewed with observations across trends suggested so as to provide integrative principles with which to apply media to different circumstances. First, the notion that the media reduce personal influences—their impersonal effects—is reviewed. Newer theories and research are noted explaining normative “interpersonal” uses of the media. From this vantage point, recognizing that impersonal communication is sometimes advantageous, strategies for the intentional depersonalization of media use are inferred, with implications for Group Decision Support Systems effects. Additionally, recognizing that media sometimes facilitate communication that surpasses normal interpersonal levels, a new perspective on “hyperpersonal” communication is introduced. Subprocesses are discussed pertaining to re... --- paper_title: A Hybrid Architecture for Multi-Party Conversational Systems paper_content: Multi-party Conversational Systems are systems with natural language interaction between one or more people or systems. From the moment that an utterance is sent to a group, to the moment that it is replied in the group by a member, several activities must be done by the system: utterance understanding, information search, reasoning, among others. In this paper we present the challenges of designing and building multi-party conversational systems, the state of the art, our proposed hybrid architecture using both rules and machine learning and some insights after implementing and evaluating one on the finance domain. 
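Illustrative sketch (not the authors' system): the 'A Hybrid Architecture for Multi-Party Conversational Systems' entry above combines rules with machine learning. One common way to realize that split, sketched below in Python with invented intents, rule patterns, and training sentences, is to let deterministic rules answer whatever they can and fall back to a statistical intent classifier (here a scikit-learn pipeline) for everything else.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-written rules are checked first; each pairs a regex with an intent label (all invented).
RULES = [
    (re.compile(r"\bbalance\b", re.IGNORECASE), "account_balance"),
    (re.compile(r"\btransfer\b.*\bmoney\b", re.IGNORECASE), "money_transfer"),
]

# Tiny invented training set for the statistical fallback classifier.
texts = ["what did the market do today", "show my spending this month",
         "is the exchange rate good today", "how much did I spend on food"]
labels = ["market_news", "spending_report", "fx_rate", "spending_report"]
fallback = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
fallback.fit(texts, labels)

def route(utterance):
    """Return (intent, source): deterministic rules first, the classifier only when no rule fires."""
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent, "rule"
    return fallback.predict([utterance])[0], "ml"

print(route("What is my balance?"))              # handled by a rule
print(route("How much did I spend on travel?"))  # falls through to the classifier

Keeping the rule layer first makes the deterministic behaviors easy to audit, which matters when several agents share a single conversation.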
--- paper_title: Selective self-presentation in computer-mediated communication : Hyperpersonal dimensions of technology , language , and cognition paper_content: The hyperpersonal model of computer-mediated communication (CMC) posits that users exploit the technological aspects of CMC in order to enhance the messages they construct to manage impressions and facilitate desired relationships. This research examined how CMC users managed message composing time, editing behaviors, personal language, sentence complexity, and relational tone in their initial messages to different presumed targets, and the cognitive awareness related to these processes. Effects on several of these processes and outcomes were obtained in response to different targets, partially supporting the hyperpersonal perspective of CMC, with unanticipated gender and status interaction effects suggesting behavioral compensation through CMC, or overcompensation when addressing presumably undesirable partners. --- paper_title: 'Realness' in chatbots: establishing quantifiable criteria paper_content: The aim of this research is to generate measurable evaluation criteria acceptable to chatbot users. Results of two studies are summarised. In the first, fourteen participants were asked to do a critical incident analysis of their transcriptions with an ELIZA-type chatbot. Results were content analysed, and yielded seven overall themes. In the second, these themes were made into statements of an attitude-like nature, and 20 participants chatted with five winning entrants in the 2011 Chatterbox Challenge and five which failed to place. Latent variable analysis reduced the themes to four, resulting in four subscales with strong reliability which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots. --- paper_title: Style Transfer Through Back-Translation paper_content: Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency. --- paper_title: Interpersonal Effects in Computer-Mediated Interaction A Relational Perspective paper_content: Several theories and much experimental research on relational tone in computer-mediated communication (CMC) points to the lack of nonverbal cues in this channel as a cause of impersonal and task-oriented messages. Field research in CMC often reports more positive relational behavior. This article examines the assumptions, methods, and findings of such research and suggests that negative relational effects are confined to narrow situational boundary conditions. Alternatively, it is suggested that communicators develop individuating impressions of others through accumulated CMC messages. 
Based upon these impressions, users may develop relationships and express multidimensional relational messages through verbal or textual cues. Predictions regarding these processes are suggested, and future research incorporating these points is urged. --- paper_title: Register, Genre, and Style paper_content: 1. Registers, genres, and styles: fundamental varieties of language Part I. Analytical Framework: 2. Describing the situational characteristics of registers and genres 3. Analysing linguistic features and their functions Part II. Detailed Descriptions of Register, Genre and Style: 4. Interpersonal spoken registers 5. Written registers, genres and styles 6. Historical evolution of registers, genres and styles 7. Registers and genres in electronic communication Part III. Larger Theoretical Issues: 8. Multidimensional patterns of register variation 9. Register studies in context Appendix A. Annotation of major register/genre studies Appendix B. Activity texts References. --- paper_title: Politeness and language paper_content: This article assesses the advantages and limitations of three different approaches to the analysis of politeness in language: politeness as social rules, politeness as adherence to an expanded set of Gricean Maxims, and politeness as strategic attention to ‘face.’ It argues that only the last can account for the observable commonalities in polite expressions across diverse languages and cultures, and positions the analysis of politeness as strategic attention to face in the modern context of attention to the evolutionary origins and nature of human cooperation. What Is Politeness? If, as many have claimed, language is the trait that most radically distinguishes Homo sapiens from other species, politeness is the feature of language use that most clearly reveals the nature of human sociality as expressed in speech. Politeness is essentially a matter of taking into account the feelings of others as to how they should be interactionally treated, including behaving in a manner that demonstrates appropriate concern for interactors’ social status and their social relationship. Politeness – in this broad sense of speech oriented to an interactor’s public persona or ‘face ’– is ubiquitous in language use. Since, on the whole, taking account of people’s feelings means saying and doing things in a less straightforward or more elaborate manner than when one is not taking such feelings into consideration, ways of being polite provide probably the most pervasive source of indirectness, reasons for not saying exactly what one means, in how people frame their communicative intentions in formulating their utterances. There are two quite different kinds of feelings to be attended to, and therefore there are two distinct kinds of politeness. One kind arises whenever what is about to be said may be unwelcome, prompting expressions of respect, restraint, avoidance (‘negative politeness’). Another arises from the fact that longterm relationships with people can be important in taking their feelings into account, prompting expressions of social closeness, caring, and approval (‘positive politeness’). There are many folk notions for these kinds of attention to feelings – including courtesy, tact, deference, demeanor, sensibility, poise, discernment, rapport, mannerliness, urbanity, as well as for the contrasting behavior – rudeness, gaucheness, social gaffes, and for their consequences – embarrassment, humiliation. 
Such terms label culture-specific notions invested with social importance, and attest both to the pervasiveness of notions of politeness and to their cultural framing. --- paper_title: Benjamin Franklin’s Decision Method is Acceptable and Helpful with a Conversational Agent paper_content: In this paper, we show that rational decision-making methods such as Benjamin Franklin’s can be successfully implemented as a text based natural language dialog system. More specifically, we developed a prototype, vpino, and conducted a user study. Vpino acts maieutically: the questions raised by vpino encourage the user to reflect about potential options and arguments and help her structure her thoughts. To maintain a real dialog, vpino unobtrusively attempts to keep control of the conversation at all times. Serious, motivated users evaluated acceptance and usefulness of vpino quite positively. Users that are more open to computer based decision support held better and more fruitful dialogs than those with a sceptical attitude. This quantitative result conforms well to our qualitative observation that vpino shows good human like behaviour whenever the user is serious and motivated. We also found that users with a more hypervigilant approach to decisions particularly benefit from vpino. --- paper_title: The Racial Formation of Chatbots paper_content: In his article "The Racial Formation of Chatbots" Mark C. Marino introduces electronic literature known as chatbot or conversation agent. These programs are all around us from automated help centers to smartphones (e.g., Siri). These conversation agents are often represented as text or disembodied voices. However, when programmers give them a body or the representation of a body (partial or full), other aspects of their identity become more apparent—particularly their racial or ethnic identity. Marino explores the ways racial identity is constructed through the embodied performance of chatbots and what that indicates for human identity construction on the internet. Mark C. Marino, "The Racial Formation of Chatbots" page 2 of 11 CLCWeb: Comparative Literature and Culture 16.5 (2014): http://docs.lib.purdue.edu/clcweb/vol16/iss5/13> Special Issue New Work on Electronic Literature and Cyberculture. Ed. Maya Zalbidea, Mark C. Marino, and Asuncion Lopez-Varela --- paper_title: Conversational Agents for Game-Like Virtual Environments paper_content: Insp. Gray: I’d like to ask you a few questions about last night. Coln. Mustard:Fire away young man, I’ve got nothing to hide. (Mustard uses phrase ’young man’ he feels a superiority in the social relationship with the inspector) (Mustard is a defensive person protests innocence straight away) Insp. Gray: Were you in the ballroom at all last night? Coln. Mustard:Yes. I was in the ballroom between 1945 hours and 2000 hours. (Mustard is an army man dictates choice of ’1945 hours’ over ’seven forty-five’) (Mustard is confident of himself very affirmative tone used) Insp. Gray: Okay. So where had you been before you went to the ballroom? Coln. Mustard:I was in the Kitchen (Abrupt. Dislikes the Inspectors questioning) Insp. Gray: But Mrs. White was in the Kitchen at that time and said that she didn’t see anyone else. Coln. Mustard:Well she must be mistaken. (Very defensive. Authoritative about the fact commanding personality) Insp. Gray: She seemed very confident she was right. Are you sure you’ve not made a mistake? Coln. Mustard:I’m absolutely certain! 
(Very defensive and angry strengthens his claim with the use of the adverb) Coln. Mustard:You should talk to Mrs. Peacock. I saw her heading towards the library at 1945 hours. She seemed very suspicious. (Changes subject dominant personality disliked previous conversational direction) Insp. Gray: Why do you say that? Coln. Mustard:She hates reading, and never goes to the library. (Authoritative confident of his knowledge) Insp. Gray: Okay. Thanks for your time. Figure 1. Example Conversation from the Cluedo Game --- paper_title: DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset paper_content: We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. --- paper_title: Let's Talk About Race: Identity, Chatbots, and AI paper_content: Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways? --- paper_title: WHAT MAKES ANY AGENT A MORAL AGENT? REFLECTIONS ON MACHINE CONSCIOUSNESS AND MORAL AGENCY paper_content: In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of conceptual pre-conditions for being a moral agent and then outline how one should — and should not — go about attributing moral agency. In place of a litmus test for such an agency — such as Allen et al.'s Moral Turing Test — we suggest some tools from the conceptual spaces theory and the unified conceptual space theory for mapping out the nature and extent of that agency. --- paper_title: Conversational Agents and Mental Health: Theory-Informed Assessment of Language and Affect paper_content: A study deployed the mental health Relational Frame Theory as grounding for an analysis of sentiment dynamics in human-language dialogs. The work takes a step towards enabling use of conversational agents in mental health settings. Sentiment tendencies and mirroring behaviors in 11k human-human dialogs were compared with behaviors when humans interacted with conversational agents in a similar-sized collection. 
The study finds that human sentiment-related interaction norms persist in human-agent dialogs, but that humans are twice as likely to respond negatively when faced with a negative utterance by a robot than in a comparable situation with humans. Similarly, inhibition towards use of obscenity is greatly reduced. We introduce a new Affective Neural Net implementation that specializes in analyzing sentiment in real time. --- paper_title: Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? paper_content: In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-of-which-it is-is-to-be-like. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious. --- paper_title: A perceived moral agency scale: Development and validation of a metric for humans and social machines paper_content: Abstract Although current social machine technology cannot fully exhibit the hallmarks of human morality or agency, popular culture representations and emerging technology make it increasingly important to examine human interlocutors’ perception of social machines (e.g., digital assistants, chatbots, robots) as moral agents. To facilitate such scholarship, the notion of perceived moral agency (PMA) is proposed and defined, and a metric developed and validated through two studies: (1) a large-scale online survey featuring potential scale items and concurrent validation metrics for both machine and human targets, and (2) a scale validation study with robots presented as variably agentic and moral. The PMA metric is shown to be reliable, valid, and exhibiting predictive utility. --- paper_title: Automation, Algorithms, and Politics| Talking to Bots: Symbiotic Agency and the Case of Tay paper_content: In 2016, Microsoft launched Tay, an experimental artificial intelligence chat bot. Learning from interactions with Twitter users, Tay was shut down after one day because of its obscene and inflammatory tweets. This article uses the case of Tay to re-examine theories of agency. How did users view the personality and actions of an artificial intelligence chat bot when interacting with Tay on Twitter? Using phenomenological research methods and pragmatic approaches to agency, we look at what people said about Tay to study how they imagine and interact with emerging technologies and to show the limitations of our current theories of agency for describing communication in these settings. 
We show how different qualities of agency, different expectations for technologies, and different capacities for affordance emerge in the interactions between people and artificial intelligence. We argue that a perspective of “symbiotic agency”—informed by the imagined affordances of emerging technology—is required to really understand the collapse of Tay. --- paper_title: A new friend in our smartphone?: observing interactions with chatbots in the search of emotional engagement paper_content: We present the findings of a quantitative and qualitative empirical research to understand the possibilities of engagement and affection in the use of conversational agents (chatbots). Based on an experiment with 13 participants, we explored on one hand the correlation between the user expectation, user experience and intended use and, on the other, whether users feel keen and engaged in having a personal, empathic relation with an intelligent system like chatbots. We used psychological questionnaires to semi-structured interviews for disentangle the meaning of the interaction. In particular, the personal psychological background of participants was found critical while the experience itself allowed them to imagine new possible relations with chatbots. Our results show some insights on how people understand and empathize with future interactions with conversational agents and other non-visual interfaces. --- paper_title: Why Machine Ethics? paper_content: Machine ethics isn't merely science fiction; it's a topic that requires serious consideration, given the rapid emergence of increasingly complex autonomous software agents and robots. Machine ethics is an emerging field that seeks to implement moral decision-making faculties in computers and robots. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. In the case of AI and robotics, fearful scenarios range from the future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots --- paper_title: What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems paper_content: In e-commerce and mobile commerce, personalization has been recognized as an important approach element in customer relationships and Web strategies. However, there are wide differences in how this concept is defined, characterized, and implemented in the literature. In this article we present a high-level framework for classifying approaches to personalization that delineates fundamental assumptions about personalization in the literature and relates them to strategies for developing personalization systems. The framework consists of 2 parts: (a) a set of perspectives on personalization that guide the design of personalization systems at a general level and (b) a scheme for classifying how personalization can be implemented. The personalization perspectives represent 4 distinct schools of thought on the nature of personalization distilled from the literature of several fields. These perspectives are ideal types and we discuss them in terms of the motivation they supply for personalization, the goals and ... --- paper_title: The Racial Formation of Chatbots paper_content: In his article "The Racial Formation of Chatbots" Mark C. Marino introduces electronic literature known as chatbot or conversation agent. These programs are all around us from automated help centers to smartphones (e.g., Siri). 
These conversation agents are often represented as text or disembodied voices. However, when programmers give them a body or the representation of a body (partial or full), other aspects of their identity become more apparent—particularly their racial or ethnic identity. Marino explores the ways racial identity is constructed through the embodied performance of chatbots and what that indicates for human identity construction on the internet. Mark C. Marino, "The Racial Formation of Chatbots" page 2 of 11 CLCWeb: Comparative Literature and Culture 16.5 (2014): http://docs.lib.purdue.edu/clcweb/vol16/iss5/13> Special Issue New Work on Electronic Literature and Cyberculture. Ed. Maya Zalbidea, Mark C. Marino, and Asuncion Lopez-Varela --- paper_title: Conversational Agents for Game-Like Virtual Environments paper_content: Insp. Gray: I’d like to ask you a few questions about last night. Coln. Mustard:Fire away young man, I’ve got nothing to hide. (Mustard uses phrase ’young man’ he feels a superiority in the social relationship with the inspector) (Mustard is a defensive person protests innocence straight away) Insp. Gray: Were you in the ballroom at all last night? Coln. Mustard:Yes. I was in the ballroom between 1945 hours and 2000 hours. (Mustard is an army man dictates choice of ’1945 hours’ over ’seven forty-five’) (Mustard is confident of himself very affirmative tone used) Insp. Gray: Okay. So where had you been before you went to the ballroom? Coln. Mustard:I was in the Kitchen (Abrupt. Dislikes the Inspectors questioning) Insp. Gray: But Mrs. White was in the Kitchen at that time and said that she didn’t see anyone else. Coln. Mustard:Well she must be mistaken. (Very defensive. Authoritative about the fact commanding personality) Insp. Gray: She seemed very confident she was right. Are you sure you’ve not made a mistake? Coln. Mustard:I’m absolutely certain! (Very defensive and angry strengthens his claim with the use of the adverb) Coln. Mustard:You should talk to Mrs. Peacock. I saw her heading towards the library at 1945 hours. She seemed very suspicious. (Changes subject dominant personality disliked previous conversational direction) Insp. Gray: Why do you say that? Coln. Mustard:She hates reading, and never goes to the library. (Authoritative confident of his knowledge) Insp. Gray: Okay. Thanks for your time. Figure 1. Example Conversation from the Cluedo Game --- paper_title: Personifying the e-Market: A Framework for Social Agents. paper_content: This paper discusses our vision of the future with respect to the introduction of virtual assistants in the e-market. It presents the latest evolution of the involvement framework , an on-going model aimed at driving the design and the evaluation of social agents virtual characters designed to set up lasting relationships with the users. The framework is based on a review of the literature and on the systematic analysis of the firstgeneration social agents available on the Web. Moreover, it is enriched by empirical results of two user-based evaluations of Granny, a social agent that tries to inject a specific personality into the interaction between a financial service provider and a customer. The assumption behind our research is that social agents require a reexamination of the traditional HCI approach to system design and evaluation. Both the usability framework and the media equation paradigm need to be updated to account for the peculiarity of the new interaction form. 
--- paper_title: Let's Talk About Race: Identity, Chatbots, and AI paper_content: Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways? --- paper_title: The Self and Others: Positioning Individuals and Groups in Personal, Political, and Cultural Contexts paper_content: Introduction by Rom HarRE and Fathali M. Moghaddam The Self and Other Individuals Motivational Styles and Positioning Theory by Michael J. Apter Positioning and the Emotions by W. Gerrod Parrott "There You Are Man": Men's Use of Emotion Discourses and Their Negotiation of Emotional Subject Positions by Chris Walton, Adrian Coyle, and Evanthia Lyons The Unthinkable, the Unspeakable Self: Reflections on the Importance of Negative Emotional Boundaries for the Formation, Maintenance and Transformation of Identities by Ciaran Benson Malignant Positioning and the Predicament of People with Alzheimer's Disease by Steven R. Sabat Paranoia, Ambivalence, and Discursive Practices: Concepts of Position and Positioning in Psychoanalysis and Discursive Psychology by Margaret Wetherell The Self and Groups Positioning and Neighborhood Groups by Rom HarRE and Nikki Slocum Sustaining Intergroup Harmony: An Analysis of the Kissinger Papers through Positioning Theory by Fathali M. Moghaddam, Elizabeth Hanley, and Rom HarRE Categories as Actions: Positioning, Oppression, and Heteronormativity by Sue Wilkinson and Celia Kitzenger Gender Positioning: A 16th/17th Century Example Jennifer Lynn Adams and Rom HarRE Positioning and Postcolonial Apologizing in Australia by Lucinda Aberdeen Applying Positioning Principles to a Theory of Collective Identity by Donald M. Taylor, Evelyne Bougie, and Julie Caouette The Self and Context Integration Speaking: Introducing Positioning Theory in Regional Integration Studies by Nikki Slocum and Luk Van Langenhove Culture-Clash and Patents: Positioning and Intellectual Property Rights by Fathali M. Moghaddam and Shayna Ginsburg Assessment of Quality Systems with Positioning Theory by Lionel Boxer Positioning the Subject in Body/Landscape Relations by Bronwyn Davies Concluding Chapter by Tim May --- paper_title: Iterative Development and Evaluation of a Social Conversational Agent paper_content: We show that an agent with fairly good social conversational abilities can be built based on a limited number of topics and dialogue strategies if it is tailored to its intended users through a high degree of user involvement during an iterative development process. The technology used is pattern matching of question-answer pairs, coupled with strategies to handle: followup questions, utterances not understood, abusive utterances, repetitive utterances, and initiation of new topics. --- paper_title: Sense of humor and dimensions of personality paper_content: Previous researchers have demonstrated relationships between sense of humor and personality. 
Most have viewed sense of humor from the perspective of humor appreciation. Others have taken the approach that sense of humor has two factors: appreciation and creativity. Our approach has been to look at sense of humor as made up of creativity and several additional elements. The present study reports on the factor analysis of a Multidimensional Sense of Humor Scale, as well as correlates of various elements of sense of humor with personality traits assessed by the Edwards Personal Preference Schedule. Relationships by humor scale factors are reported, as are differences between those high and low in sense of humor within a sample of 426 individuals, 18 through 90 years of age. --- paper_title: How social is social responses to computers? The function of the degree of anthropomorphism in computer representations paper_content: Testing the assumption that more anthropomorphic (human-like) computer representations elicit more social responses from people, a between-participants experiment (N=168) manipulated 12 computer agents to represent four levels of anthropomorphism: low, medium, high, and real human images. Social responses were assessed with users' social judgment and homophily perception of the agents, conformity in a choice dilemma task, and competency and trustworthiness ratings of the agents. Linear polynomial trend analyses revealed significant linear trends for almost all the measures. As the agent became more anthropomorphic to being human, it received more social responses from users. --- paper_title: Extending an Educational Math Game with a Pedagogical Conversational Agent: Facing Design Challenges paper_content: We describe our work-in-progress of developing an educational game in mathematics for 12-14 year olds, by adding social and conversational abilities to an existing “teachable agent” (TA) in the game. The purpose of this extension is to affect cognitive, emotional and social constructs known to promote learning, such as self-efficacy and engagement, as well as enhancing students’ experiences of interacting with the agent over an extended period of time. Drawing from the EnALI framework, which states practical design guidelines, we discuss specific design challenges and exemplify research considerations as to developing the agent’s visual representation and conversational module. We present some initial findings from prototype testing with students from the target group. Promising developments seem to reside in pronouncing the agent’s personality traits and expanding its knowledge database, particularly its range of conversational topics. Finally we propose some future studies and research directions. --- paper_title: Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions paper_content: Disembodied conversational agents in the form of chatbots are increasingly becoming a reality on social media and messaging applications, and are a particularly pressing topic for service encounters with companies. Adopting an experimental design with actual chatbots powered with current technology, this study explores the extent to which human-like cues such as language style and name, and the framing used to introduce the chatbot to the consumer can influence perceptions about social presence as well as mindful and mindless anthropomorphism. 
Moreover, this study investigates the relevance of anthropomorphism and social presence to important company-related outcomes, such as attitudes, satisfaction and the emotional connection that consumers feel with the company after interacting with the chatbot. --- paper_title: Personality trait structure as a human universal. paper_content: Patterns of covariation among personality traits in English-speaking populations can be summarized by the five-factor model (FFM). To assess the cross-cultural generalizability of the FFM, data from studies using 6 translations of the Revised NEO Personality Inventory (P. T. Costa & R. R. McCrae, 1992) were compared with the American factor structure. German, Portuguese, Hebrew, Chinese, Korean, and Japanese samples (N = 7,134) showed similar structures after varimax rotation of 5 factors. When targeted rotations were used, the American factor structure was closely reproduced, even at the level of secondary loadings. Because the samples studied represented highly diverse cultures with languages from 5 distinct language families, these data strongly suggest that personality trait structure is universal. --- paper_title: Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human paper_content: This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most "intersubjective effort" toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human-human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction. This article demonstrates a method for evaluating human-agent interaction against human-human benchmarks.An experiment assesses the effort people exert toward building common ground with a conversational agent.Believing an interlocutor is a person (vs. an agent) augments efforts to establish common ground.Interfacing with a human body (vs. a text screen) augments efforts to establish common ground. --- paper_title: A new friend in our smartphone?: observing interactions with chatbots in the search of emotional engagement paper_content: We present the findings of a quantitative and qualitative empirical research to understand the possibilities of engagement and affection in the use of conversational agents (chatbots). 
Based on an experiment with 13 participants, we explored on one hand the correlation between the user expectation, user experience and intended use and, on the other, whether users feel keen and engaged in having a personal, empathic relation with an intelligent system like chatbots. We used psychological questionnaires to semi-structured interviews for disentangle the meaning of the interaction. In particular, the personal psychological background of participants was found critical while the experience itself allowed them to imagine new possible relations with chatbots. Our results show some insights on how people understand and empathize with future interactions with conversational agents and other non-visual interfaces. --- paper_title: The sense of humor : explorations of a personality characteristic paper_content: This volume brings together the current approaches to the definition and measurement of the sense of humor and its components. It provides both an overview of historic approaches and a compendium of current humor inventories and humor traits that have been studied. Presenting the only available overview and analysis of this significant facet of human behavior, this volume will interest researchers from the fields of humor and personality studies as well as those interested in the clinical or abstract implications of the subject. --- paper_title: Social Practice: Becoming Enculturated in Human-Computer Interaction paper_content: We present a new approach to the design, development and evaluation of embodied conversational agents (ECAs) that allows them to index identity through culturally and socially authentic verbal and non-verbal behaviors. This approach is illustrated with research we are carrying out with children who speak several dialects of American English, and the subsequent implementation and first evaluation of a virtual peer based on that research. Results suggest that issues of identity in ECAs are more complicated than previous approaches might suggest, and that ECAs themselves may play a role in understanding issues of identity and language use in ways that have promise for educational applications. --- paper_title: What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems paper_content: In e-commerce and mobile commerce, personalization has been recognized as an important approach element in customer relationships and Web strategies. However, there are wide differences in how this concept is defined, characterized, and implemented in the literature. In this article we present a high-level framework for classifying approaches to personalization that delineates fundamental assumptions about personalization in the literature and relates them to strategies for developing personalization systems. The framework consists of 2 parts: (a) a set of perspectives on personalization that guide the design of personalization systems at a general level and (b) a scheme for classifying how personalization can be implemented. The personalization perspectives represent 4 distinct schools of thought on the nature of personalization distilled from the literature of several fields. These perspectives are ideal types and we discuss them in terms of the motivation they supply for personalization, the goals and ... --- paper_title: Social identity theory: past achievements, current problems and future challenges paper_content: This article presents a critical review of Social Identity Theory. 
Its major contributions to the study of intergroup relations are discussed, focusing on its powerful explanations of such phenomena as ingroup bias, responses of subordinate groups to their unequal status position, and intragroup homogeneity and stereotyping. In addition, its stimulative role for theoretical elaborations of the Contact Hypothesis as a strategy for improving intergroup attitudes is noted. Then five issues which have proved problematic for Social Identity Theory are identified: the relationship between group identification and ingroup bias; the self-esteem hypothesis; positive – negative asymmetry in intergroup discrimination; the effects of intergroup similarity; and the choice of identity strategies by low-status groups. In a third section a future research agenda for the theory is sketched out, with five lines of enquiry noted as being particularly promising: expanding the concept of social identity; predicting comparison choice in intergroup settings; incorporating affect into the theory; managing social identities in multicultural settings; and integrating implicit and explicit processes. The article concludes with some remarks on the potential applications of social identity principles. Copyright © 2000 John Wiley & Sons, Ltd. --- paper_title: The Racial Formation of Chatbots paper_content: In his article "The Racial Formation of Chatbots" Mark C. Marino introduces electronic literature known as chatbot or conversation agent. These programs are all around us from automated help centers to smartphones (e.g., Siri). These conversation agents are often represented as text or disembodied voices. However, when programmers give them a body or the representation of a body (partial or full), other aspects of their identity become more apparent—particularly their racial or ethnic identity. Marino explores the ways racial identity is constructed through the embodied performance of chatbots and what that indicates for human identity construction on the internet. Mark C. Marino, "The Racial Formation of Chatbots" page 2 of 11 CLCWeb: Comparative Literature and Culture 16.5 (2014): http://docs.lib.purdue.edu/clcweb/vol16/iss5/13> Special Issue New Work on Electronic Literature and Cyberculture. Ed. Maya Zalbidea, Mark C. Marino, and Asuncion Lopez-Varela --- paper_title: Single or Multiple Conversational Agents?: An Interactional Coherence Comparison paper_content: Chatbots focusing on a narrow domain of expertise are in great rise. As several tasks require multiple expertise, a designer may integrate multiple chatbots in the background or include them as interlocutors in a conversation. We investigated both scenarios by means of a Wizard of Oz experiment, in which participants talked to chatbots about visiting a destination. We analyzed the conversation content, users' speech, and reported impressions. We found no significant difference between single- and multi-chatbots scenarios. However, even with equivalent conversation structures, users reported more confusion in multi-chatbots interactions and adopted strategies to organize turn-taking. Our findings indicate that implementing a meta-chatbot may not be necessary, since similar conversation structures occur when interacting to multiple chatbots, but different interactional aspects must be considered for each scenario. --- paper_title: Conversational Agents for Game-Like Virtual Environments paper_content: Insp. Gray: I’d like to ask you a few questions about last night. Coln. 
Mustard:Fire away young man, I’ve got nothing to hide. (Mustard uses phrase ’young man’ he feels a superiority in the social relationship with the inspector) (Mustard is a defensive person protests innocence straight away) Insp. Gray: Were you in the ballroom at all last night? Coln. Mustard:Yes. I was in the ballroom between 1945 hours and 2000 hours. (Mustard is an army man dictates choice of ’1945 hours’ over ’seven forty-five’) (Mustard is confident of himself very affirmative tone used) Insp. Gray: Okay. So where had you been before you went to the ballroom? Coln. Mustard:I was in the Kitchen (Abrupt. Dislikes the Inspectors questioning) Insp. Gray: But Mrs. White was in the Kitchen at that time and said that she didn’t see anyone else. Coln. Mustard:Well she must be mistaken. (Very defensive. Authoritative about the fact commanding personality) Insp. Gray: She seemed very confident she was right. Are you sure you’ve not made a mistake? Coln. Mustard:I’m absolutely certain! (Very defensive and angry strengthens his claim with the use of the adverb) Coln. Mustard:You should talk to Mrs. Peacock. I saw her heading towards the library at 1945 hours. She seemed very suspicious. (Changes subject dominant personality disliked previous conversational direction) Insp. Gray: Why do you say that? Coln. Mustard:She hates reading, and never goes to the library. (Authoritative confident of his knowledge) Insp. Gray: Okay. Thanks for your time. Figure 1. Example Conversation from the Cluedo Game --- paper_title: Let's Talk About Race: Identity, Chatbots, and AI paper_content: Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways? --- paper_title: Iterative Development and Evaluation of a Social Conversational Agent paper_content: We show that an agent with fairly good social conversational abilities can be built based on a limited number of topics and dialogue strategies if it is tailored to its intended users through a high degree of user involvement during an iterative development process. The technology used is pattern matching of question-answer pairs, coupled with strategies to handle: followup questions, utterances not understood, abusive utterances, repetitive utterances, and initiation of new topics. --- paper_title: How social is social responses to computers? The function of the degree of anthropomorphism in computer representations paper_content: Testing the assumption that more anthropomorphic (human-like) computer representations elicit more social responses from people, a between-participants experiment (N=168) manipulated 12 computer agents to represent four levels of anthropomorphism: low, medium, high, and real human images. Social responses were assessed with users' social judgment and homophily perception of the agents, conformity in a choice dilemma task, and competency and trustworthiness ratings of the agents. 
Linear polynomial trend analyses revealed significant linear trends for almost all the measures. As the agent became more anthropomorphic to being human, it received more social responses from users. --- paper_title: Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations paper_content: How does communication change when speaking to intelligent agents over humans?We compared 100 IM conversations to 100 exchanges with the chatbot Cleverbot.People used more, but shorter, messages when communicating with chatbots.People also used more restricted vocabulary and greater profanity with chatbots.People can easily adapt their language to communicate with intelligent agents. This study analyzed how communication changes when people communicate with an intelligent agent as opposed to with another human. We compared 100 instant messaging conversations to 100 exchanges with the popular chatbot Cleverbot along seven dimensions: words per message, words per conversation, messages per conversation, word uniqueness, and use of profanity, shorthand, and emoticons. A MANOVA indicated that people communicated with the chatbot for longer durations (but with shorter messages) than they did with another human. Additionally, human-chatbot communication lacked much of the richness of vocabulary found in conversations among people, and exhibited greater profanity. These results suggest that while human language skills transfer easily to human-chatbot communication, there are notable differences in the content and quality of such conversations. --- paper_title: Extending an Educational Math Game with a Pedagogical Conversational Agent: Facing Design Challenges paper_content: We describe our work-in-progress of developing an educational game in mathematics for 12-14 year olds, by adding social and conversational abilities to an existing “teachable agent” (TA) in the game. The purpose of this extension is to affect cognitive, emotional and social constructs known to promote learning, such as self-efficacy and engagement, as well as enhancing students’ experiences of interacting with the agent over an extended period of time. Drawing from the EnALI framework, which states practical design guidelines, we discuss specific design challenges and exemplify research considerations as to developing the agent’s visual representation and conversational module. We present some initial findings from prototype testing with students from the target group. Promising developments seem to reside in pronouncing the agent’s personality traits and expanding its knowledge database, particularly its range of conversational topics. Finally we propose some future studies and research directions. --- paper_title: Here's What I Can Do: Chatbots' Strategies to Convey Their Features to Users paper_content: Chatbots have been around since the 1960's, but recently they have risen in popularity especially due to new compatibility with social networks and messenger applications. Chatbots are different from traditional user interfaces, for they unveil themselves to the user one sentence at a time. Because of that, users may struggle to interact with them and to understand what they can do. Hence, it is important to support designers in deciding how to convey chatbots' features to users, as this might determine whether the user continues to chat or not. 
As a first step in this direction, in this paper our goal is to analyze the communicative strategies that have been used by popular chatbots to convey their features to users. To perform this analysis we use the Semiotic Inspection Method (SIM). As a result we identify and discuss the different strategies used by the analyzed chatbots to present their features to users. We also discuss the challenges and limitations of using SIM on such interfaces. --- paper_title: Artificially intelligent conversational agents in libraries paper_content: Purpose – Conversational agents are natural language interaction interfaces designed to simulate conversation with a real person. This paper seeks to investigate current development and applications of these systems worldwide, while focusing on their availability in Canadian libraries. It aims to argue that it is both timely and conceivable for Canadian libraries to consider adopting conversational agents to enhance – not replace – face-to-face human interaction. Potential users include library web site tour guides, automated virtual reference and readers' advisory librarians, and virtual story-tellers. To provide background and justification for this argument, the paper seeks to review agents from classic implementations to state-of-the-art prototypes: how they interact with users, produce language, and control conversational behaviors. Design/methodology/approach – The web sites of the 20 largest Canadian libraries were surveyed to assess the extent to which specific language-related technologies are off... --- paper_title: Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis paper_content: Chatbots are becoming a ubiquitous trend in many fields such as medicine, product and service industry, and education. Chatbots are computer programs used to conduct auditory or textual conversations. A growing body of evidence suggests that these programs have the potential to change the way students learn and search for information. Especially in large-scale learning scenarios with more than 100 students per lecturer, chatbots are able to solve the problem of individual student support. However, until now, there has been no systematic, structured overview of their use in education. The aim of this paper is therefore to conduct a systematic literature review based on a multi-perspective framework, from which we have derived initial search questions, synthesized past research, and highlighted future research directions. We reviewed titles and abstracts of 1405 articles drawn from management, education, information systems, and psychology literature before examining and individually coding a relevant subset of 80 articles. The results show that chatbots are in the very beginning of entering education. Few studies suggest the potential of chatbots for improving learning processes and outcomes. Nevertheless, past research has revealed that the effectiveness of chatbots in education is complex and depends on a variety of factors. With our literature review, we make two principal contributions: first, we structure and synthesize past research by using an input-process-output framework, and secondly, we use the framework to highlight research gaps for guiding future research in that area.
--- paper_title: Can a Chatbot Determine My Diet?: Addressing Challenges of Chatbot Application for Meal Recommendation paper_content: Poor nutrition can lead to reduced immunity, increased susceptibility to disease, impaired physical and mental development, and reduced productivity. A conversational agent can support people as a virtual coach, however building such systems still has its associated challenges and limitations. This paper describes the background and motivation for chatbot systems in the context of healthy nutrition recommendation. We discuss current challenges associated with chatbot application, and we tackle technical, theoretical, behavioural, and social aspects of the challenges. We then propose a pipeline to be used as guidelines by developers to implement theoretically and technically robust chatbot systems. --- paper_title: An Overview of Open-Source Chatbots Social Skills paper_content: This paper aims to analyze and compare some of the most known open source chatbot technologies focusing on their potential to model a conversational agent able to show a form of “social intelligence”. The main features and drawbacks of each system will be examined. Then, we will discuss their flexibility to produce more realistic social conversational scenarios adopting as the reference the social practice theory. --- paper_title: Chatbots: Are they Really Useful? paper_content: Chatbots are computer programs that interact with users using natural languages. This technology started in the 1960’s; the aim was to see if chatbot systems could fool users that they were real humans. However, chatbot systems are not only built to mimic human conversation, and entertain users. In this paper, we investigate other applications where chatbots could be useful such as education, information retrieval, business, and e-commerce. A range of chatbots with useful applications, including several based on the ALICE/AIML architecture, are presented in this paper. --- paper_title: Chatbots' Greetings to Human-Computer Communication paper_content: Both dialogue systems and chatbots aim at putting into action communication between humans and computers. However, instead of focusing on sophisticated techniques to perform natural language understanding, as the former usually do, chatbots seek to mimic conversation. Since Eliza, the first chatbot ever, developed in 1966, there were many interesting ideas explored by the chatbots' community. Actually, more than just ideas, some chatbots' developers also provide free resources, including tools and large-scale corpora. It is our opinion that this know-how and materials should not be neglected, as they might be put to use in the human-computer communication field (and some authors already do it). Thus, in this paper we present a historical overview of the chatbots' developments, we review what we consider to be the main contributions of this community, and we point to some possible ways of coupling these with current work in the human-computer communication research line. --- paper_title: A Review of Technologies for Conversational Systems paper_content: During the last 50 years, since the development of ELIZA by Weizenbaum, technologies for developing conversational systems have made a great stride. The number of conversational systems is increasing. Conversational systems emerge almost in every digital device in many application areas.
In this paper, we present a review of the development of conversational systems regarding technologies and their special features, including language tricks. --- paper_title: Review of integrated applications with AIML based chatbot paper_content: Artificial Intelligence Markup Language (AIML) is derived from Extensible Markup Language (XML) and is used to build conversational agents (chatbots). A lot of work has been done on building conversational agents, and their low cost, easy configuration, and availability make it possible to use them in various applications. In this paper, we give a brief review of some applications which use an AIML chatbot for their conversational service. These applications are related to cultural heritage, e-learning, e-government, web-based models, dialog models, semantic analysis frameworks, interaction frameworks, humorist experts, network management, and adaptive modular architectures. In these cases, the applications not only provide useful services but also interact with customers and solve their problems through an AIML chatbot instead of a human being. As a result, this approach is increasingly popular with entrepreneurs and users as a way to provide efficient service. --- paper_title: Conversational agents in healthcare: a systematic review paper_content: Objective: Our objective was to review the characteristics, current applications, and evaluation measures of conversational agents with unconstrained natural language input capabilities used for health-related purposes. Methods: We searched PubMed, Embase, CINAHL, PsycInfo, and ACM Digital using a predefined search strategy. Studies were included if they focused on consumers or healthcare professionals; involved a conversational agent using any unconstrained natural language input; and reported evaluation measures resulting from user interaction with the system. Studies were screened by independent reviewers and Cohen's kappa measured inter-coder agreement. Results: The database search retrieved 1513 citations; 17 articles (14 different conversational agents) met the inclusion criteria. Dialogue management strategies were mostly finite-state and frame-based (6 and 7 conversational agents, respectively); agent-based strategies were present in one type of system. Two studies were randomized controlled trials (RCTs), 1 was cross-sectional, and the remaining were quasi-experimental. Half of the conversational agents supported consumers with health tasks such as self-care. The only RCT evaluating the efficacy of a conversational agent found a significant effect in reducing depression symptoms (effect size d = 0.44, p = .04). Patient safety was rarely evaluated in the included studies. Conclusions: The use of conversational agents with unconstrained natural language input capabilities for health-related purposes is an emerging field of research, where the few published studies were mainly quasi-experimental, and rarely evaluated efficacy or safety. Future studies would benefit from more robust experimental designs and standardized reporting. Protocol Registration: The protocol for this systematic review is registered at PROSPERO with the number CRD42017065917. ---
<format>
Title: How should my chatbot interact? A survey on human-chatbot interaction design
Section 1: Introduction
Description 1: This section introduces the motivation behind the study and outlines the importance of social capabilities in chatbots, summarizing current research challenges and objectives.
Section 2: Related Work
Description 2: This section reviews related surveys and literature on chatbots, focusing on their social interactions and identifying the gaps this paper aims to address.
Section 3: Methodology
Description 3: This section describes the research methodology, including the literature review process, selection criteria, coding process, and data analysis.
Section 4: Chatbots Social Characteristics
Description 4: This section presents the identified social characteristics of chatbots, grouped into categories such as conversational intelligence, social intelligence, and personification.
Section 5: Conversational Intelligence
Description 5: This section discusses social characteristics related to conversational intelligence, including proactivity, conscientiousness, and communicability.
Section 6: Social Intelligence
Description 6: This section explores characteristics related to social intelligence, such as damage control, thoroughness, manners, moral agency, emotional intelligence, and personalization.
Section 7: Personification
Description 7: This section examines the influence of identity projection on human-chatbot interaction, focusing on identity and personality traits.
Section 8: Interrelationships Among the Characteristics
Description 8: This section describes the theoretical framework of how the identified social characteristics interrelate and influence each other.
Section 9: Related Surveys
Description 9: This section compares the current survey with previous related surveys, highlighting the unique contributions of this paper.
Section 10: Limitations
Description 10: This section discusses the limitations of the study, including scope restrictions and methodological constraints.
Section 11: Conclusion
Description 11: This section summarizes the findings, identifies research opportunities, and suggests future directions for advancing human-chatbot interactions.
Section 12: Supplemental Material
Description 12: This section includes additional tables and constructs that can be used to assess the success of social characteristics in chatbots.
</format>
An Attentive Survey of Attention Models
7
--- paper_title: Attention, please! A Critical Review of Neural Attention Models in Natural Language Processing paper_content: Attention is an increasingly popular mechanism used in a wide range of neural architectures. Because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures for natural language processing, with a focus on architectures designed to work with vector representation of the textual data. We discuss the dimensions along which proposals differ, the possible uses of attention, and chart the major research activities and open challenges in the area. --- paper_title: Listen, attend and spell: A neural network for large vocabulary conversational speech recognition paper_content: We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set. --- paper_title: A Neural Attention Model for Abstractive Sentence Summarization paper_content: Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines. --- paper_title: Hierarchical Question-Image Co-Attention for Visual Question Answering paper_content: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. 
By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. --- paper_title: Hierarchical Question-Image Co-Attention for Visual Question Answering paper_content: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. --- paper_title: Effective Approaches to Attention-based Neural Machine Translation paper_content: An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems that already incorporate known techniques such as dropout. Our ensemble model using different attention architectures yields a new state-of-the-art result in the WMT’15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker. 1 --- paper_title: DiSAN: Directional Self-Attention Network for RNN/CNN-free Language Understanding paper_content: Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, "Directional Self-Attention Network (DiSAN)", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. 
It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02% on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets. --- paper_title: Dynamic Meta-Embeddings for Improved Sentence Representations paper_content: While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce dynamic meta-embeddings, a simple yet effective method for the supervised learning of embedding ensembles, which leads to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new light on the usage of word embeddings in NLP systems. --- paper_title: Pointer Networks paper_content: We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems - finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem - using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems. --- paper_title: DiSAN: Directional Self-Attention Network for RNN/CNN-free Language Understanding paper_content: Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, "Directional Self-Attention Network (DiSAN)", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure. 
DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02% on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets. --- paper_title: NAIS: Neural Attentive Item Similarity Model for Recommendation paper_content: Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems. --- paper_title: Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems paper_content: We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic "addition" and "multiplication" long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks. --- paper_title: Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures paper_content: Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. 
However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in-depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attention networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation. --- paper_title: Aspect Level Sentiment Classification with Deep Memory Network paper_content: We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparable to state-of-art feature based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation. --- paper_title: Listen, attend and spell: A neural network for large vocabulary conversational speech recognition paper_content: We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set. --- paper_title: NAIS: Neural Attentive Item Similarity Model for Recommendation paper_content: Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. 
As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems. --- paper_title: Attention-Based Models for Speech Recognition paper_content: Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1,2] and image caption generation [3]. We extend the attention-mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7% phoneme error rate (PER) on the TIMET phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER in single utterances and 20% in 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6% level. --- paper_title: Teaching Machines to Read and Comprehend paper_content: Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure. 
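The NAIS and speech-recognition entries above rely on the same computational core: score each candidate element against a query, normalize the scores with a softmax, and return the weighted sum as the attended representation. The following Python/NumPy sketch illustrates that pattern only; the single-layer scoring network, the element-wise product of item and target embeddings, and all shapes are illustrative assumptions, not the exact formulation of any cited paper.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax.
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def attention_pool(item_vecs, query_vec, W, b, v):
        # item_vecs: (n, d) embeddings of a user's historical items (or encoder states).
        # query_vec: (d,) embedding of the target item (or decoder state).
        # W (d, d), b (d,), v (d,): parameters of a one-layer scoring network (assumed).
        scores = np.array([v @ np.tanh(W @ (h * query_vec) + b) for h in item_vecs])
        weights = softmax(scores)             # importance of each element
        return weights @ item_vecs, weights   # attended summary and the weights themselves

    # Toy usage with random parameters.
    rng = np.random.default_rng(0)
    n, d = 5, 8
    items, target = rng.normal(size=(n, d)), rng.normal(size=d)
    W, b, v = rng.normal(size=(d, d)), np.zeros(d), rng.normal(size=d)
    summary, weights = attention_pool(items, target, W, b, v)
    print(weights.round(3))

The returned weights are what makes such models inspectable: they indicate which historical items (or time steps) contributed most to a given prediction.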
--- paper_title: Hierarchical Question-Image Co-Attention for Visual Question Answering paper_content: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. --- paper_title: A Survey of Methods for Explaining Black Box Models paper_content: In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective. --- paper_title: Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting paper_content: We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on peoples' lives. We analyze the potential allocation harms that can result from semantic representation bias. To do so, we study the impact on occupation classification of including explicit gender indicators---such as first names and pronouns---in different semantic representations of online biographies. Additionally, we quantify the bias that remains when these indicators are "scrubbed," and describe proxy behavior that occurs in the absence of explicit gender indicators. As we demonstrate, differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances. --- paper_title: Dynamic Meta-Embeddings for Improved Sentence Representations paper_content: While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. 
To that end, we introduce dynamic meta-embeddings, a simple yet effective method for the supervised learning of embedding ensembles, which leads to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new light on the usage of word embeddings in NLP systems. --- paper_title: Self-Attentive Sequential Recommendation paper_content: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences. --- paper_title: Attentional Encoder Network for Targeted Sentiment Classification paper_content: Targeted sentiment classification aims at determining the sentimental tendency towards specific targets. Most of the previous approaches model context and target words with RNN and attention. However, RNNs are difficult to parallelize and truncated backpropagation through time brings difficulty in remembering long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) which eschews recurrence and employs attention based encoders for the modeling between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of our model. --- paper_title: Massive Exploration of Neural Machine Translation Architectures paper_content: Neural Machine Translation (NMT) has shown remarkable progress over the past few years with production systems now being deployed to end-users. One major drawback of current architectures is that they are expensive to train, typically requiring days to weeks of GPU time to converge. This makes exhaustive hyperparameter search, as is commonly done with other neural network architectures, prohibitively expensive. In this work, we present the first large-scale analysis of NMT architecture hyperparameters. 
We report empirical results and variance numbers for several hundred experimental runs, corresponding to over 250,000 GPU hours on the standard WMT English to German translation task. Our experiments lead to novel insights and practical advice for building and extending NMT architectures. As part of this contribution, we release an open-source NMT framework that enables researchers to easily experiment with novel techniques and reproduce state of the art results. ---
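Several of the sentence-encoding and recommendation entries above (DiSAN, SASRec, AEN) replace recurrence entirely with self-attention, in which every position of a sequence attends to every other position. The sketch below shows a single-head scaled dot-product self-attention layer in Python/NumPy; the projection matrices, dimensions, and the absence of masking or positional encoding are simplifying assumptions, not the configurations used in those papers.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # X: (n, d) sequence of token (or item) embeddings.
        # Wq, Wk, Wv: (d, d_k) query/key/value projection matrices.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n, n) pairwise affinities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # (n, d_k) contextualized outputs

    rng = np.random.default_rng(1)
    n, d, d_k = 4, 16, 8
    X = rng.normal(size=(n, d))
    out = self_attention(X, *(rng.normal(size=(d, d_k)) for _ in range(3)))
    print(out.shape)  # (4, 8)

Because every pair of positions is connected directly, the maximum path length between two tokens is a single layer, which is the property the Why Self-Attention entry above examines empirically.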
Title: An Attentive Survey of Attention Models
Section 1: Introduction
Description 1: Provide an overview of the importance and evolution of attention models in neural networks, mentioning their applications and benefits.
Section 2: Attention Model
Description 2: Explain the concept of attention models, including the traditional encoder-decoder architecture and improvements brought by attention mechanisms.
Section 3: Taxonomy of Attention
Description 3: Categorize different types of attention and describe the unique characteristics and examples of each category.
Section 4: Network Architectures with Attention
Description 4: Describe various neural network architectures that incorporate attention mechanisms, including encoder-decoder frameworks, memory networks, and non-RNN architectures.
Section 5: Attention for Interpretability
Description 5: Discuss the role of attention models in improving transparency and the interpretability of neural networks.
Section 6: Applications of Attention Models
Description 6: Provide detailed examples of how attention models are applied in various domains such as natural language generation, classification, and recommender systems.
Section 7: Conclusion
Description 7: Summarize the key points discussed in the survey, emphasizing the impact of attention models on neural network performance, interpretability, and computational efficiency.
The Art, Science, and Engineering of Fuzzing: A Survey
11
--- paper_title: Fuzzing for Software Security Testing and Quality Assurance. Artech House paper_content: "A fascinating look at the new direction fuzzing technology is taking -- useful for both QA engineers and bug hunters alike!" --Dave Aitel, CTO, Immunity Inc. Learn the code cracker's malicious mindset, so you can find worn-size holes in the software you are designing, testing, and building. Fuzzing for Software Security Testing and Quality Assurance takes a weapon from the black-hat arsenal to give you a powerful new tool to build secure, high-quality software. This practical resource helps you add extra protection without adding expense or time to already tight schedules and budgets. The book shows you how to make fuzzing a standard practice that integrates seamlessly with all development activities. This comprehensive reference goes through each phase of software development and points out where testing and auditing can tighten security. It surveys all popular commercial fuzzing tools and explains how to select the right one for a software development project. The book also identifies those cases where commercial tools fall short and when there is a need for building your own fuzzing tools. --- paper_title: Scheduling black-box mutational fuzzing paper_content: Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time. --- paper_title: Billions and billions of constraints: Whitebox fuzz testing in production paper_content: We report experiences with constraint-based whitebox fuzz testing in production across hundreds of large Windows applications and over 500 machine years of computation from 2007 to 2013. Whitebox fuzzing leverages symbolic execution on binary traces and constraint solving to construct new inputs to a program. These inputs execute previously uncovered paths or trigger security vulnerabilities. Whitebox fuzzing has found one-third of all file fuzzing bugs during the development of Windows 7, saving millions of dollars in potential security vulnerabilities. The technique is in use today across multiple products at Microsoft. We describe key challenges with running whitebox fuzzing in production. We give principles for addressing these challenges and describe two new systems built from these principles: SAGAN, which collects data from every fuzzing run for further analysis, and JobCenter, which controls deployment of our whitebox fuzzing infrastructure across commodity virtual machines. Since June 2010, SAGAN has logged over 3.4 billion constraints solved, millions of symbolic executions, and tens of millions of test cases generated. Our work represents the largest scale deployment of whitebox fuzzing to date, including the largest usage ever for a Satisfiability Modulo Theories (SMT) solver. We present specific data analyses that improved our production use of whitebox fuzzing. 
Finally we report data on the performance of constraint solving and dynamic test generation that points toward future research problems. --- paper_title: VUzzer: Application-aware Evolutionary Fuzzing paper_content: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs. --- paper_title: Open Source Fuzzing Tools paper_content: This chapter discusses some open source fuzzing tools. Fuzzing tools typically fall into one of three categories: fuzzing frameworks, special purpose tools, and general-purpose fuzzers. Fuzzing frameworks are good if one is looking to write his/her own fuzzer or needs to fuzz a custom or proprietary protocol. The advantage is that the tool set is provided by the framework; the disadvantage is that all open source fuzzing frameworks are far from complete and most are very immature. Special-purpose tools are usually fuzzers that were written for a specific protocol or application. While they can usually be extended, they are fairly limited to fuzzing anything outside the original scope of the project. In many cases, general-purpose fuzzers are very partial, as the writers tend to use them to find a few holes in a protocol/application and then move on to more interesting things, leaving the fuzzer unmaintained. General-purpose tools are neat, if they work. They typically don’t, and those that do are too general and lack optimization to be very useful. --- paper_title: Program-Adaptive Mutational Fuzzing paper_content: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input. The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time.
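The scheduling, VUzzer, and mutation-ratio entries above all build on the same blackbox mutational loop: take a seed, flip a fraction of its bits, run the target, and keep anything that crashes. The Python sketch below illustrates that loop under stated assumptions: ./target is a hypothetical binary that reads the file named by its first argument, and the fixed ratio argument stands in for the program-seed-specific mutation ratio that Program-Adaptive Mutational Fuzzing estimates.

    import os
    import random
    import subprocess
    import tempfile

    def mutate(data: bytes, ratio: float) -> bytes:
        # Flip roughly ratio * (8 * len(data)) randomly chosen bits.
        buf = bytearray(data)
        n_flips = max(1, int(ratio * 8 * len(buf)))
        for _ in range(n_flips):
            pos = random.randrange(len(buf))
            buf[pos] ^= 1 << random.randrange(8)
        return bytes(buf)

    def fuzz(seed_path: str, target: str = "./target", ratio: float = 0.001, trials: int = 1000):
        seed = open(seed_path, "rb").read()
        crashes = []
        for _ in range(trials):
            candidate = mutate(seed, ratio)
            with tempfile.NamedTemporaryFile(delete=False) as f:
                f.write(candidate)
                path = f.name
            proc = subprocess.run([target, path], capture_output=True)
            if proc.returncode < 0:       # terminated by a signal, e.g. SIGSEGV
                crashes.append(path)      # keep the crashing input for triage
            else:
                os.unlink(path)
        return crashes

    # Hypothetical usage: fuzz("seed.png", target="./parser", ratio=0.0005)

A scheduler in the spirit of the entries above would wrap this loop and decide, after each campaign, which program-seed pair (and which ratio) to fuzz next so as to maximize the number of unique bugs found per unit time.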
--- paper_title: An Empirical Study of the Reliability of UNIX Utilities paper_content: The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig. --- paper_title: Coverage-based Greybox Fuzzing as Markov Chain paper_content: Coverage-based Greybox Fuzzing (CGF) is a random testing approach that requires no program analysis. A new test is generated by slightly mutating a seed input. If the test exercises a new and interesting path, it is added to the set of seeds; otherwise, it is discarded. We observe that most tests exercise the same few "high-frequency" paths and develop strategies to explore significantly more paths with the same number of tests by gravitating towards low-frequency paths. We explain the challenges and opportunities of CGF using a Markov chain model which specifies the probability that fuzzing the seed that exercises path i generates an input that exercises path j. Each state (i.e., seed) has an energy that specifies the number of inputs to be generated from that seed. We show that CGF is considerably more efficient if energy is inversely proportional to the density of the stationary distribution and increases monotonically every time that seed is chosen. Energy is controlled with a power schedule. We implemented the exponential schedule by extending AFL. In 24 hours, AFLFAST exposes 3 previously unreported CVEs that are not exposed by AFL and exposes 6 previously unreported CVEs 7x faster than AFL. AFLFAST produces at least an order of magnitude more unique crashes than AFL. --- paper_title: CollAFL: Path Sensitive Fuzzing paper_content: Coverage-guided fuzzing is a widely used and effective solution to find software vulnerabilities. Tracking code coverage and utilizing it to guide fuzzing are crucial to coverage-guided fuzzers. However, tracking full and accurate path coverage is infeasible in practice due to the high instrumentation overhead. Popular fuzzers (e.g., AFL) often use coarse coverage information, e.g., edge hit counts stored in a compact bitmap, to achieve highly efficient greybox testing. Such inaccuracy and incompleteness in coverage introduce serious limitations to fuzzers. First, it causes path collisions, which prevent fuzzers from discovering potential paths that lead to new crashes. More importantly, it prevents fuzzers from making wise decisions on fuzzing strategies. In this paper, we propose a coverage sensitive fuzzing solution CollAFL. It mitigates path collisions by providing more accurate coverage information, while still preserving low instrumentation overhead. It also utilizes the coverage information to apply three new fuzzing strategies, promoting the speed of discovering new paths and vulnerabilities. We implemented a prototype of CollAFL based on the popular fuzzer AFL and evaluated it on 24 popular applications. 
The results showed that path collisions are common, i.e., up to 75% of edges could collide with others in some applications, and CollAFL could reduce the edge collision ratio to nearly zero. Moreover, armed with the three fuzzing strategies, CollAFL outperforms AFL in terms of both code coverage and vulnerability discovery. On average, CollAFL covered 20% more program paths, found 320% more unique crashes and 260% more bugs than AFL in 200 hours. In total, CollAFL found 157 new security bugs with 95 new CVEs assigned. --- paper_title: Fuzzing: Brute Force Vulnerability Discovery paper_content: Piezoelectric crystalline films which consist essentially of a crystalline zinc oxide film with a c-axis perpendicular to a substrate surface, containing 0.01 to 20.0 atomic percent of bismuth. These films are prepared by radio-frequency sputtering. --- paper_title: Android permissions demystified paper_content: Android provides third-party applications with an extensive API that includes access to phone hardware, settings, and user data. Access to privacy- and security-relevant parts of the API is controlled with an install-time application permission system. We study Android applications to determine whether Android developers follow least privilege with their permission requests. We built Stowaway, a tool that detects overprivilege in compiled Android applications. Stowaway determines the set of API calls that an application uses and then maps those API calls to permissions. We used automated testing tools on the Android API in order to build the permission map that is necessary for detecting overprivilege. We apply Stowaway to a set of 940 applications and find that about one-third are overprivileged. We investigate the causes of overprivilege and find evidence that developers are trying to follow least privilege but sometimes fail due to insufficient API documentation. --- paper_title: EHBDroid: Beyond GUI testing for Android applications paper_content: With the prevalence of Android-based mobile devices, automated testing for Android apps has received increasing attention. However, owing to the large variety of events that Android supports, test input generation is a challenging task. In this paper, we present a novel approach and an open source tool called EHBDroid for testing Android apps. In contrast to conventional GUI testing approaches, a key novelty of EHBDroid is that it does not generate events from the GUI, but directly invokes callbacks of event handlers. By doing so, EHBDroid can efficiently simulate a large number of events that are difficult to generate by traditional UI-based approaches. We have evaluated EHBDroid on a collection of 35 real-world large-scale Android apps and compared its performance with two state-of-the-art UI-based approaches, Monkey and Dynodroid. Our experimental results show that EHBDroid is significantly more effective and efficient than Monkey and Dynodroid: in a much shorter time, EHBDroid achieves as much as 22.3% higher statement coverage (11.1% on average) than the other two approaches, and found 12 bugs in these benchmarks, including 5 new bugs that the other two failed to find. --- paper_title: Abusing File Processing in Malware Detectors for Fun and Profit paper_content: We systematically describe two classes of evasion exploits against automated malware detectors. 
Chameleon attacks confuse the detectors' file-type inference heuristics, while werewolf attacks exploit discrepancies in format-specific file parsing between the detectors and actual operating systems and applications. These attacks do not rely on obfuscation, metamorphism, binary packing, or any other changes to malicious code. Because they enable even the simplest, easily detectable viruses to evade detection, we argue that file processing has become the weakest link of malware defense. Using a combination of manual analysis and black-box differential fuzzing, we discovered 45 new evasion exploits and tested them against 36 popular antivirus scanners, all of which proved vulnerable to various chameleon and werewolf attacks. --- paper_title: Fuzzing with code fragments paper_content: Fuzz testing is an automated technique providing random data as input to a software system in the hope to expose a vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks; a JavaScript interpreter, for instance, would only accept a semantically valid program. On the other hand, the fuzzed input must be uncommon enough to trigger exceptional behavior, such as a crash of the interpreter. The LangFuzz approach resolves this conflict by using a grammar to randomly generate valid programs; the code fragments, however, partially stem from programs known to have caused invalid behavior before. LangFuzz is an effective tool for security testing: Applied on the Mozilla JavaScript interpreter, it discovered a total of 105 new severe vulnerabilities within three months of operation (and thus became one of the top security bug bounty collectors within this period); applied on the PHP interpreter, it discovered 18 new defects causing crashes. --- paper_title: The impact of fault models on software robustness evaluations paper_content: Following the design and in-lab testing of software, the evaluation of its resilience to actual operational perturbations in the field is a key validation need. Software-implemented fault injection (SWIFI) is a widely used approach for evaluating the robustness of software components. Recent research [24, 18] indicates that the selection of the applied fault model has considerable influence on the results of SWIFI-based evaluations, thereby raising the question how to select appropriate fault models (i.e. that provide justified robustness evidence). This paper proposes several metrics for comparatively evaluating fault models's abilities to reveal robustness vulnerabilities. It demonstrates their application in the context of OS device drivers by investigating the influence (and relative utility) of four commonly used fault models, i.e. bit flips (in function parameters and in binaries), data type dependent parameter corruptions, and parameter fuzzing. We assess the efficiency of these models at detecting robustness vulnerabilities during the SWIFI evaluation of a real embedded operating system kernel and discuss application guidelines for our metrics alongside. --- paper_title: Cryptographic Function Detection in Obfuscated Binaries via Bit-Precise Symbolic Loop Mapping paper_content: Cryptographic functions have been commonly abused by malware developers to hide malicious behaviors, disguise destructive payloads, and bypass network-based firewalls. Now-infamous crypto-ransomware even encrypts victim's computer documents until a ransom is paid. 
Therefore, detecting cryptographic functions in binary code is an appealing approach to complement existing malware defense and forensics. However, pervasive control and data obfuscation schemes make cryptographic function identification a challenging task. Existing detection methods are either brittle to work on obfuscated binaries or ad hoc in that they can only identify specific cryptographic functions. In this paper, we propose a novel technique called bit-precise symbolic loop mapping to identify cryptographic functions in obfuscated binary code. Our trace-based approach captures the semantics of possible cryptographic algorithms with bit-precise symbolic execution in a loop. Then we perform guided fuzzing to efficiently match boolean formulas with known reference implementations. We have developed a prototype called CryptoHunt and evaluated it with a set of obfuscated synthetic examples, well-known cryptographic libraries, and malware. Compared with the existing tools, CryptoHunt is a general approach to detecting commonly used cryptographic functions such as TEA, AES, RC4, MD5, and RSA under different control and data obfuscation scheme combinations. --- paper_title: Singularity: pattern fuzzing for worst case complexity paper_content: We describe a new blackbox complexity testing technique for determining the worst-case asymptotic complexity of a given application. The key idea is to look for an input pattern, rather than a concrete input, that maximizes the asymptotic resource usage of the target program. Because input patterns can be described concisely as programs in a restricted language, our method transforms the complexity testing problem to optimal program synthesis. In particular, we express these input patterns using a new model of computation called Recurrent Computation Graph (RCG) and solve the optimal synthesis problem by developing a genetic programming algorithm that operates on RCGs. We have implemented the proposed ideas in a tool called Singularity and evaluate it on a diverse set of benchmarks. Our evaluation shows that Singularity can effectively discover the worst-case complexity of various algorithms and that it is more scalable compared to existing state-of-the-art techniques. Furthermore, our experiments also corroborate that Singularity can discover previously unknown performance bugs and availability vulnerabilities in real-world applications such as Google Guava and JGraphT. --- paper_title: Automated Whitebox Fuzz Testing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Traditionally, fuzz testing tools apply random mutations to well-formed inputs of a program and test the resulting values. We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation. Our approach records an actual run of the program under test on a well-formed input, symbolically evaluates the recorded trace, and gathers constraints on inputs capturing how the program uses these. The collected constraints are then negated one by one and solved with a constraint solver, producing new inputs that exercise different control paths in the program. This process is repeated with the help of a code-coverage maximizing heuristic designed to find defects as fast as possible.
We have implemented this algorithm in SAGE (Scalable, Automated, Guided Execution), a new tool employing x86 instruction-level tracing and emulation for whitebox fuzzing of arbitrary file-reading Windows applications. We describe key optimizations needed to make dynamic test generation scale to large input files and long execution traces with hundreds of millions of instructions. We then present detailed experiments with several Windows applications. Notably, without any format-specific knowledge, SAGE detects the MS07-017 ANI vulnerability, which was missed by extensive blackbox fuzzing and static analysis tools. Furthermore, while still in an early stage of development, SAGE has already discovered 30+ new bugs in large shipped Windows applications including image processors, media players, and file decoders. Several of these bugs are potentially exploitable memory access violations. --- paper_title: SMS of Death: From Analyzing to Attacking Mobile Phones on a Large Scale paper_content: Mobile communication is an essential part of our daily lives. Therefore, it needs to be secure and reliable. In this paper, we study the security of feature phones, the most common type of mobile phone in the world. We built a framework to analyze the security of SMS clients of feature phones. The framework is based on a small GSM base station, which is readily available on the market. Through our analysis we discovered vulnerabilities in the feature phone platforms of all major manufacturers. Using these vulnerabilities we designed attacks against end-users as well as mobile operators. The threat is serious since the attacks can be used to prohibit communication on a large scale and can be carried out from anywhere in the world. Through further analysis we determined that such attacks are amplified by certain configurations of the mobile network. We conclude our research by providing a set of countermeasures. --- paper_title: IntelliDroid: A Targeted Input Generator for the Dynamic Analysis of Android Malware paper_content: We would like to thank Zhen Huang, Mariana D’Angelo, ::: Dhaval Miyani, Wei Huang, Beom Heyn Kim, Sukwon Oh, ::: and Afshar Ganjali for their suggestions and feedback. We ::: also thank the anonymous reviewers for their constructive ::: comments. The research in this paper was supported by an ::: NSERC CGS-M scholarship, a Bell Graduate scholarship, an ::: NSERC Discovery grant, an ORF-RE grant, and a Tier 2 ::: Canada Research Chair. --- paper_title: Grammar-based whitebox fuzzing paper_content: Whitebox fuzzing is a form of automatic dynamic test generation, based on symbolic execution and constraint solving, designed for security testing of large applications. Unfortunately, the current effectiveness of whitebox fuzzing is limited when testing applications with highly-structured inputs, such as compilers and interpreters. These applications process their inputs in stages, such as lexing, parsing and evaluation. Due to the enormous number of control paths in early processing stages, whitebox fuzzing rarely reaches parts of the application beyond those first stages. In this paper, we study how to enhance whitebox fuzzing of complex structured-input applications with a grammar-based specification of their valid inputs. We present a novel dynamic test generation algorithm where symbolic execution directly generates grammar-based constraints whose satisfiability is checked using a custom grammar-based constraint solver. 
We have implemented this algorithm and evaluated it on a large security-critical application, the JavaScript interpreter of Internet Explorer 7 (IE7). Results of our experiments show that grammar-based whitebox fuzzing explores deeper program paths and avoids dead-ends due to non-parsable inputs. Compared to regular whitebox fuzzing, grammar-based whitebox fuzzing increased coverage of the code generation module of the IE7 JavaScript interpreter from 53% to 81% while using three times fewer tests. --- paper_title: PScout: analyzing the Android permission specification paper_content: Modern smartphone operating systems (OSs) have been developed with a greater emphasis on security and protecting privacy. One of the mechanisms these systems use to protect users is a permission system, which requires developers to declare what sensitive resources their applications will use, has users agree with this request when they install the application and constrains the application to the requested resources during runtime. As these permission systems become more common, questions have risen about their design and implementation. In this paper, we perform an analysis of the permission system of the Android smartphone OS in an attempt to begin answering some of these questions. Because the documentation of Android's permission system is incomplete and because we wanted to be able to analyze several versions of Android, we developed PScout, a tool that extracts the permission specification from the Android OS source code using static analysis. PScout overcomes several challenges, such as scalability due to Android's 3.4 million line code base, accounting for permission enforcement across processes due to Android's use of IPC, and abstracting Android's diverse permission checking mechanisms into a single primitive for analysis. We use PScout to analyze 4 versions of Android spanning version 2.2 up to the recently released Android 4.0. Our main findings are that while Android has over 75 permissions, there is little redundancy in the permission specification. However, if applications could be constrained to only use documented APIs, then about 22% of the non-system permissions are actually unnecessary. Finally, we find that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining stability of the permission specification as the Android OS evolves. --- paper_title: Fitness-guided path exploration in dynamic symbolic execution paper_content: Dynamic symbolic execution is a structural testing technique that systematically explores feasible paths of the program under test by running the program with different test inputs to improve code coverage. To address the space-explosion issue in path exploration, we propose a novel approach called Fitnex, a search strategy that uses state-dependent fitness values (computed through a fitness function) to guide path exploration. The fitness function measures how close an already discovered feasible path is to a particular test target (e.g., covering a not-yet-covered branch). Our new fitness-guided search strategy is integrated with other strategies that are effective for exploration problems where the fitness heuristic fails. We implemented the new approach in Pex, an automated structural testing tool developed at Microsoft Research. We evaluated our new approach by comparing it with existing search strategies. 
The empirical results show that our approach is effective since it consistently achieves high code coverage faster than existing search strategies. --- paper_title: PerfFuzz: automatically generating pathological inputs paper_content: Performance problems in software can arise unexpectedly when programs are provided with inputs that exhibit worst-case behavior. A large body of work has focused on diagnosing such problems via statistical profiling techniques. But how does one find these inputs in the first place? We present PerfFuzz, a method to automatically generate inputs that exercise pathological behavior across program locations, without any domain knowledge. PerfFuzz generates inputs via feedback-directed mutational fuzzing. Unlike previous approaches that attempt to maximize only a scalar characteristic such as the total execution path length, PerfFuzz uses multi-dimensional feedback and independently maximizes execution counts for all program locations. This enables PerfFuzz to (1) find a variety of inputs that exercise distinct hot spots in a program and (2) generate inputs with higher total execution path length than previous approaches by escaping local maxima. PerfFuzz is also effective at generating inputs that demonstrate algorithmic complexity vulnerabilities. We implement PerfFuzz on top of AFL, a popular coverage-guided fuzzing tool, and evaluate PerfFuzz on four real-world C programs typically used in the fuzzing literature. We find that PerfFuzz outperforms prior work by generating inputs that exercise the most-hit program branch 5x to 69x times more, and result in 1.9x to 24.7x longer total execution paths. --- paper_title: Robust signatures for kernel data structures paper_content: Kernel-mode rootkits hide objects such as processes and threads using a technique known as Direct Kernel Object Manipulation (DKOM). Many forensic analysis tools attempt to detect these hidden objects by scanning kernel memory with handmade signatures; however, such signatures are brittle and rely on non-essential features of these data structures, making them easy to evade. In this paper, we present an automated mechanism for generating signatures for kernel data structures and show that these signatures are robust: attempts to evade the signature by modifying the structure contents will cause the OS to consider the object invalid. Using dynamic analysis, we profile the target data structure to determine commonly used fields, and we then fuzz those fields to determine which are essential to the correct operation of the OS. These fields form the basis of a signature for the data structure. In our experiments, our new signature matched the accuracy of existing scanners for traditional malware and found processes hidden with our prototype rootkit that all current signatures missed. Our techniques significantly increase the difficulty of hiding objects from signature scanning. --- paper_title: Making Malory Behave Maliciously: Targeted Fuzzing of Android Execution Environments paper_content: Android applications, or apps, provide useful features to end-users, but many apps also contain malicious behavior. Modern malware makes understanding such behavior challenging by behaving maliciously only under particular conditions. For example, a malware app may check whether it runs on a real device and not an emulator, in a particular country, and alongside a specific target app, such as a vulnerable banking app. 
To observe the malicious behavior, a security analyst must find out and emulate all these app-specific constraints. This paper presents FuzzDroid, a framework for automatically generating an Android execution environment where an app exposes its malicious behavior. The key idea is to combine an extensible set of static and dynamic analyses through a search-based algorithm that steers the app toward a configurable target location. On recent malware, the approach reaches the target location in 75% of the apps. In total, we reach 240 code locations within an average time of only one minute. To reach these code locations, FuzzDroid generates 106 different environments, too many for a human analyst to create manually. --- paper_title: An Empirical Study of the Reliability of UNIX Utilities paper_content: The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig. --- paper_title: Hulk: eliciting malicious behavior in browser extensions paper_content: We present Hulk, a dynamic analysis system that detects malicious behavior in browser extensions by monitoring their execution and corresponding network activity. Hulk elicits malicious behavior in extensions in two ways. First, Hulk leverages HoneyPages, which are dynamic pages that adapt to an extension's expectations in web page structure and content. Second, Hulk employs a fuzzer to drive the numerous event handlers that modern extensions heavily rely upon. We analyzed 48K extensions from the Chrome Web store, driving each with over 1M URLs. We identify a number of malicious extensions, including one with 5.5 million affected users, stressing the risks that extensions pose for today's web security ecosystem, and the need to further strengthen browser security to protect user data and privacy. --- paper_title: Automated Detection, Exploitation, and Elimination of Double-Fetch Bugs using Modern CPU Features paper_content: Double-fetch bugs are a special type of race condition, where an unprivileged execution thread is able to change a memory location between the time-of-check and time-of-use of a privileged execution thread. If an unprivileged attacker changes the value at the right time, the privileged operation becomes inconsistent, leading to a change in control flow, and thus an escalation of privileges for the attacker. More severely, such double-fetch bugs can be introduced by the compiler, entirely invisible on the source-code level. ::: We propose novel techniques to efficiently detect, exploit, and eliminate double-fetch bugs. We demonstrate the first combination of state-of-the-art cache attacks with kernel-fuzzing techniques to allow fully automated identification of double fetches. We demonstrate the first fully automated reliable detection and exploitation of double-fetch bugs, making manual analysis as in previous work superfluous. 
We show that cache-based triggers outperform state-of-the-art exploitation techniques significantly, leading to an exploitation success rate of up to 97%. Our modified fuzzer automatically detects double fetches and automatically narrows down this candidate set for double-fetch bugs to the exploitable ones. We present the first generic technique based on hardware transactional memory, to eliminate double-fetch bugs in a fully automated and transparent manner. We extend defensive programming techniques by retrofitting arbitrary code with automated double-fetch prevention, both in trusted execution environments as well as in syscalls, with a performance overhead below 1%. --- paper_title: IFuzzer: An Evolutionary Interpreter Fuzzer Using Genetic Programming paper_content: We present an automated evolutionary fuzzing technique to find bugs in JavaScript interpreters. Fuzzing is an automated black box testing technique used for finding security vulnerabilities in the software by providing random data as input. However, in the case of an interpreter, fuzzing is challenging because the inputs are piece of codes that should be syntactically/semantically valid to pass the interpreter’s elementary checks. On the other hand, the fuzzed input should also be uncommon enough to trigger exceptional behavior in the interpreter, such as crashes, memory leaks and failing assertions. In our approach, we use evolutionary computing techniques, specifically genetic programming, to guide the fuzzer in generating uncommon input code fragments that may trigger exceptional behavior in the interpreter. We implement a prototype named IFuzzer to evaluate our technique on real-world examples. IFuzzer uses the language grammar to generate valid inputs. We applied IFuzzer first on an older version of the JavaScript interpreter of Mozilla (to allow for a fair comparison to existing work) and found 40 bugs, of which 12 were exploitable. On subsequently targeting the latest builds of the interpreter, IFuzzer found 17 bugs, of which four were security bugs. --- paper_title: Representation dependence testing using program inversion paper_content: The definition of a data structure may permit many different concrete representations of the same logical content. A (client) program that accepts such a data structure as input is said to have a representation dependence if its behavior differs for logically equivalent input values. In this paper, we present a methodology and tool for automated testing of clients of a data structure for representation dependence. In the proposed methodology, the developer expresses the logical equivalence by writing a normalization program f that maps each concrete representation to a canonical one. Our solution relies on automatically synthesizing the one-to-many inverse function of f: given an input value x, we can generate multiple test inputs logically equivalent to x by executing the inverse with the canonical value f(x) as input repeatedly. We present an inversion algorithm for restricted classes of normalization programs including programs mapping arrays to arrays in a typical iterative manner. We present a prototype implementation of the algorithm, and demonstrate how our methodology reveals bugs due to representation dependence in open source software such as Open Office and Picasa using the widely used image format TIFF. TIFF is a challenging case study for our approach. 
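The PerfFuzz entry above generalizes coverage-guided feedback from "did this input reach a new branch?" to "did this input set a new maximum hit count for some program location?". The Python sketch below shows only that keep-if-new-maximum rule; run_with_counts is a stand-in for real per-location instrumentation (in practice obtained from compiler-inserted counters or an AFL-style shared-memory map), and the toy program inside it exists purely so the example runs on its own.

    import random
    from collections import defaultdict

    def run_with_counts(data: bytes) -> dict:
        # Stand-in for instrumented execution: returns hit counts per location.
        # The fake "program" loops once per byte greater than 128, giving the
        # fuzzer a count it can learn to maximize.
        counts = defaultdict(int)
        counts["entry"] += 1
        for b in data:
            if b > 128:
                counts["hot_loop"] += 1
        return counts

    def mutate(data: bytes) -> bytes:
        buf = bytearray(data)
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
        return bytes(buf)

    def perf_guided_fuzz(seed: bytes, iterations: int = 5000):
        max_counts = defaultdict(int)    # best hit count seen so far, per location
        corpus = [seed]
        for _ in range(iterations):
            child = mutate(random.choice(corpus))
            counts = run_with_counts(child)
            # Keep the input if it beats the recorded maximum anywhere.
            if any(c > max_counts[loc] for loc, c in counts.items()):
                for loc, c in counts.items():
                    max_counts[loc] = max(max_counts[loc], c)
                corpus.append(child)
        return corpus, dict(max_counts)

    corpus, maxima = perf_guided_fuzz(bytes(32))
    print(len(corpus), maxima)

Maximizing per-location counts rather than a single scalar (such as total path length) is what lets this style of fuzzer surface several distinct hot spots, including the algorithmic-complexity cases that the Singularity entry above targets.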
--- paper_title: Enforceable security policies paper_content: A precise characterization is given for the class of security policies that can be enforced using mechanisms that work by monitoring system execution, and a class of automata is introduced for specifying those security policies. Techniques to enforce security policies specified by such automata are also discussed. READERS NOTE: A substantially revised version of this document is available at http://cs-tr.cs.cornell.edu:80/Dienst/UI/1.0/Display/ncstrl.cornell/TR99-1759 --- paper_title: An Empirical Study of the Reliability of UNIX Utilities paper_content: The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig. --- paper_title: Adaptive Random Testing: the ART of Test Case Diversity paper_content: Random testing is not only a useful testing technique in itself, but also plays a core role in many other testing methods. Hence, any significant improvement to random testing has an impact throughout the software testing community. Recently, Adaptive Random Testing (ART) was proposed as an effective alternative to random testing. This paper presents a synthesis of the most important research results related to ART. In the course of our research and through further reflection, we have realised how the techniques and concepts of ART can be applied in a much broader context, which we present here. We believe such ideas can be applied in a variety of areas of software testing, and even beyond software testing. Amongst these ideas, we particularly note the fundamental role of diversity in test case selection strategies. We hope this paper serves to provoke further discussions and investigations of these ideas. --- paper_title: CUTE: a concolic unit testing engine for C paper_content: In unit testing, a program is decomposed into units which are collections of functions. A part of unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method is described together with the results of applying CUTE to real-world examples of C code. --- paper_title: Random testing for security: blackbox vs. 
whitebox fuzzing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Fuzz testing is a form of blackbox random testing which randomly mutates well-formed inputs and tests the program on the resulting data. In some cases, grammars are used to randomly generate the well-formed inputs. This also allows the tester to encode application-specific knowledge (such as corner cases of particular interest) as part of the grammar, and to specify test heuristics by assigning probabilistic weights to production rules. Although fuzz testing can be remarkably effective, the limitations of blackbox random testing are well-known. For instance, the then branch of the conditional statement "if (x==10) then" has only one in 2^32 chances of being exercised if x is a randomly chosen 32-bit input value. This intuitively explains why random testing usually provides low code coverage. Recently, we have proposed an alternative approach of whitebox fuzz testing [4], building upon recent advances in dynamic symbolic execution and test generation [2]. Starting with a well-formed input, our approach symbolically executes the program dynamically and gathers constraints on inputs from conditional statements encountered along the way. The collected constraints are then systematically negated and solved with a constraint solver, yielding new inputs that exercise different execution paths in the program. This process is repeated using a novel search algorithm with a coverage-maximizing heuristic designed to find defects as fast as possible in large search spaces. For example, symbolic execution of the above code fragment on the input x = 0 generates the constraint x ≠ 10. Once this constraint is negated and solved, it yields x = 10, which gives us a new input that causes the program to follow the then branch of the given conditional statement. We have implemented this approach in SAGE (Scalable, Automated, Guided Execution), a tool based on x86 instruction-level tracing and emulation for whitebox fuzzing of file-reading Windows applications. While still in an early stage of development and deployment, SAGE has already discovered more than 30 new bugs in large shipped Windows applications including image processors, media players and file decoders. Several of these bugs are potentially exploitable memory access violations. In this talk, I will briefly review blackbox fuzzing for security testing. Then, I will present an overview of our recent work on whitebox fuzzing [4] (joint work with Michael Y. Levin and David Molnar), with an emphasis on the key algorithms and techniques needed to make this approach effective and scalable (see also [1, 3]). --- paper_title: An orchestrated survey of methodologies for automated software test case generation paper_content: Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing.
Each section is contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a perspective of the future development of the approach. As a whole, the paper aims at giving an introductory, up-to-date and (relatively) short overview of research in automatic test case generation, while ensuring a comprehensive and authoritative treatment. --- paper_title: SELECT—a formal system for testing and debugging programs by symbolic execution paper_content: SELECT is an experimental system for assisting in the formal systematic debugging of programs. It is intended to be a compromise between an automated program proving system and the current ad hoc debugging practice, and is similar to a system being developed by King et al. of IBM. SELECT systematically handles the paths of programs written in a LISP subset that includes arrays. For each execution path SELECT returns simplified conditions on input variables that cause the path to be executed, and simplified symbolic values for program variables at the path output. For conditions which form a system of linear equalities and inequalities SELECT will return input variable values that can serve as sample test data. The user can insert constraint conditions, at any point in the program including the output, in the form of symbolically executable assertions. These conditions can induce the system to select test data in user-specified regions. SELECT can also determine if the path is correct with respect to an output assertion. We present four examples demonstrating the various modes of system operation and their effectiveness in finding bugs. In some examples, SELECT was successful in automatically finding useful test data. In others, user interaction was required in the form of output assertions. SELECT appears to be a useful tool for rapidly revealing program errors, but for the future there is a need to expand its expressive and deductive power. --- paper_title: jFuzz: A Concolic Whitebox Fuzzer for Java paper_content: We present jFuzz, a automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer, built on the NASA Java PathFinder, an explicit-state Java model-checker, and a framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented a novel optimization, name-independent caching, that aggressively normalizes the constraints to so reduced the number of calls to the constraint solver. We present preliminary results due to this optimization, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder. --- paper_title: Automated Whitebox Fuzz Testing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Traditionally, fuzz testing tools apply random mutations to well-formed inputs of a program and test the resulting values. 
We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation. Our approach records an actual run of the program under test on a well-formed input, symbolically evaluates the recorded trace, and gathers constraints on inputs capturing how the program uses these. The collected constraints are then negated one by one and solved with a constraint solver, producing new inputs that exercise different control paths in the program. This process is repeated with the help of a code-coverage maximizing heuristic designed to find defects as fast as possible. We have implemented this algorithm in SAGE (Scalable, Automated, Guided Execution), a new tool employing x86 instruction-level tracing and emulation for whitebox fuzzing of arbitrary file-reading Windows applications. We describe key optimizations needed to make dynamic test generation scale to large input files and long execution traces with hundreds of millions of instructions. We then present detailed experiments with several Windows applications. Notably, without any format-specific knowledge, SAGE detects the MS07-017 ANI vulnerability, which was missed by extensive blackbox fuzzing and static analysis tools. Furthermore, while still in an early stage of development, SAGE has already discovered 30+ new bugs in large shipped Windows applications including image processors, media players, and file decoders. Several of these bugs are potentially exploitable memory access violations. --- paper_title: KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs paper_content: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. ::: ::: We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies. --- paper_title: Methodology for the Generation of Program Test Data paper_content: A methodology for generating program test data is described. The methodology is a model of the test data generation process and can be used to characterize the basic problems of test data generation. It is well defined and can be used to build an automatic test data generation system. --- paper_title: Billions and billions of constraints: Whitebox fuzz testing in production paper_content: We report experiences with constraint-based whitebox fuzz testing in production across hundreds of large Windows applications and over 500 machine years of computation from 2007 to 2013. 
Whitebox fuzzing leverages symbolic execution on binary traces and constraint solving to construct new inputs to a program. These inputs execute previously uncovered paths or trigger security vulnerabilities. Whitebox fuzzing has found one-third of all file fuzzing bugs during the development of Windows 7, saving millions of dollars in potential security vulnerabilities. The technique is in use today across multiple products at Microsoft. We describe key challenges with running whitebox fuzzing in production. We give principles for addressing these challenges and describe two new systems built from these principles: SAGAN, which collects data from every fuzzing run for further analysis, and JobCenter, which controls deployment of our whitebox fuzzing infrastructure across commodity virtual machines. Since June 2010, SAGAN has logged over 3.4 billion constraints solved, millions of symbolic executions, and tens of millions of test cases generated. Our work represents the largest scale deployment of whitebox fuzzing to date, including the largest usage ever for a Satisfiability Modulo Theories (SMT) solver. We present specific data analyses that improved our production use of whitebox fuzzing. Finally we report data on the performance of constraint solving and dynamic test generation that points toward future research problems. --- paper_title: Grammar-based whitebox fuzzing paper_content: Whitebox fuzzing is a form of automatic dynamic test generation, based on symbolic execution and constraint solving, designed for security testing of large applications. Unfortunately, the current effectiveness of whitebox fuzzing is limited when testing applications with highly-structured inputs, such as compilers and interpreters. These applications process their inputs in stages, such as lexing, parsing and evaluation. Due to the enormous number of control paths in early processing stages, whitebox fuzzing rarely reaches parts of the application beyond those first stages. In this paper, we study how to enhance whitebox fuzzing of complex structured-input applications with a grammar-based specification of their valid inputs. We present a novel dynamic test generation algorithm where symbolic execution directly generates grammar-based constraints whose satisfiability is checked using a custom grammar-based constraint solver. We have implemented this algorithm and evaluated it on a large security-critical application, the JavaScript interpreter of Internet Explorer 7 (IE7). Results of our experiments show that grammar-based whitebox fuzzing explores deeper program paths and avoids dead-ends due to non-parsable inputs. Compared to regular whitebox fuzzing, grammar-based whitebox fuzzing increased coverage of the code generation module of the IE7 JavaScript interpreter from 53% to 81% while using three times fewer tests. --- paper_title: All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but Might Have Been Afraid to Ask) paper_content: Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. 
The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context. --- paper_title: Model-based whitebox fuzzing for program binaries paper_content: Many real-world programs take highly structured and very complex inputs. The automated testing of such programs is non-trivial. If the test input does not adhere to a specific file format, the program returns a parser error. For symbolic execution-based whitebox fuzzing the corresponding error handling code becomes a significant time sink. Too much time is spent in the parser exploring too many paths leading to trivial parser errors. Naturally, the time is better spent exploring the functional part of the program where failure with valid input exposes deep and real bugs in the program. In this paper, we suggest to leverage information about the file format and the data chunks of existing, valid files to swiftly carry the exploration beyond the parser code. We call our approach Model-based Whitebox Fuzzing (MoWF) because the file format input model of blackbox fuzzers can be exploited as a constraint on the vast input space to rule out most invalid inputs during path exploration in symbolic execution. We evaluate on 13 vulnerabilities in 8 large program binaries with 6 separate file formats and found that MoWF exposes all vulnerabilities while both, traditional whitebox fuzzing and model-based blackbox fuzzing, expose only less than half, respectively. Our experiments also demonstrate that MoWF exposes 70% vulnerabilities without any seed inputs. --- paper_title: Satisfiability modulo theories: introduction and applications paper_content: Checking the satisfiability of logical formulas, SMT solvers scale orders of magnitude beyond custom ad hoc solvers. --- paper_title: Symbolic execution and program testing paper_content: This paper describes the symbolic execution of programs. Instead of supplying the normal inputs to a program (e.g. numbers) one supplies symbols representing arbitrary values. The execution proceeds as in a normal execution except that values may be symbolic formulas over the input symbols. The difficult, yet interesting issues arise during the symbolic execution of conditional branch type statements. A particular system called EFFIGY which provides symbolic execution for program testing and debugging is also described. It interpretively executes programs written in a simple PL/I style programming language. It includes many standard debugging features, the ability to manage and to prove things about symbolic expressions, a simple program testing manager, and a program verifier. A brief discussion of the relationship between symbolic execution and program proving is also included. --- paper_title: Taint-based directed whitebox fuzzing paper_content: We present a new automated white box fuzzing technique and a tool, BuzzFuzz, that implements this technique. Unlike standard fuzzing techniques, which randomly change parts of the input file with little or no information about the underlying syntactic structure of the file, BuzzFuzz uses dynamic taint tracing to automatically locate regions of original seed input files that influence values used at key program attack points (points where the program may contain an error). 
BuzzFuzz then automatically generates new fuzzed test input files by fuzzing these identified regions of the original seed input files. Because these new test files typically preserve the underlying syntactic structure of the original seed input files, they tend to make it past the initial input parsing components to exercise code deep within the semantic core of the computation. We have used BuzzFuzz to automatically find errors in two open-source applications: Swfdec (an Adobe Flash player) and MuPDF (a PDF viewer). Our results indicate that our new directed fuzzing technique can effectively expose errors located deep within large programs. Because the directed fuzzing technique uses taint to automatically discover and exploit information about the input file format, it is especially appropriate for testing programs that have complex, highly structured input file formats. --- paper_title: DART: directed automated random testing paper_content: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging. --- paper_title: Fuzzing for Software Security Testing and Quality Assurance. Artech House paper_content: "A fascinating look at the new direction fuzzing technology is taking -- useful for both QA engineers and bug hunters alike!" --Dave Aitel, CTO, Immunity Inc. Learn the code cracker's malicious mindset, so you can find worn-size holes in the software you are designing, testing, and building. Fuzzing for Software Security Testing and Quality Assurance takes a weapon from the black-hat arsenal to give you a powerful new tool to build secure, high-quality software. This practical resource helps you add extra protection without adding expense or time to already tight schedules and budgets. The book shows you how to make fuzzing a standard practice that integrates seamlessly with all development activities. This comprehensive reference goes through each phase of software development and points out where testing and auditing can tighten security. It surveys all popular commercial fuzzing tools and explains how to select the right one for a software development project. The book also identifies those cases where commercial tools fall short and when there is a need for building your own fuzzing tools.
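Several of the entries above and below (BuzzFuzz, the black-box tools covered by the Artech House book, and VUzzer next) build on the same core loop: mutate a well-formed seed input, run the target on the result, and watch for crashes or hangs. The sketch below shows that loop in its most minimal black-box form; it is illustrative only, not the implementation of any surveyed tool, and the seed file path, target command, and mutation ratio are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Minimal black-box mutational fuzzing loop (illustrative sketch only)."""
import random
import subprocess
import tempfile

SEED_FILE = "seed.bin"     # placeholder: a well-formed sample input
TARGET_CMD = "./target"    # placeholder: program under test, takes a file argument
MUTATION_RATIO = 0.01      # fraction of bytes to mutate per test case

def mutate(data: bytes, ratio: float) -> bytes:
    """Return a copy of the seed with a random subset of bytes XOR-flipped."""
    buf = bytearray(data)
    for _ in range(max(1, int(len(buf) * ratio))):
        pos = random.randrange(len(buf))
        buf[pos] ^= random.randrange(1, 256)
    return bytes(buf)

def run_target(test_input: bytes) -> int:
    """Write the mutated bytes to a temp file and run the target on it."""
    with tempfile.NamedTemporaryFile(suffix=".fuzz") as tmp:
        tmp.write(test_input)
        tmp.flush()
        proc = subprocess.run([TARGET_CMD, tmp.name],
                              capture_output=True, timeout=10)
    return proc.returncode

def fuzz(iterations: int = 10000) -> None:
    seed = open(SEED_FILE, "rb").read()
    for i in range(iterations):
        candidate = mutate(seed, MUTATION_RATIO)
        try:
            rc = run_target(candidate)
        except subprocess.TimeoutExpired:
            print(f"[{i}] hang")
            continue
        if rc < 0:  # on POSIX, a negative code means the process died on a signal
            print(f"[{i}] crash (signal {-rc})")
            with open(f"crash_{i}.bin", "wb") as out:
                out.write(candidate)

if __name__ == "__main__":
    fuzz()
```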
--- paper_title: VUzzer: Application-aware Evolutionary Fuzzing paper_content: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs. --- paper_title: Revolutionizing the Field of Grey-box Attack Surface Testing with Evolutionary Fuzzing paper_content: Runtime code coverage analysis is feasible and useful when application source code is not available. An evolutionary test tool receiving such statistics can use that information as fitness for pools of sessions to actively learn the interface protocol. We call this activity grey-box fuzzing. We intend to show that, when applicable, grey-box fuzzing is more effective at finding bugs than RFC compliant or capture-replay mutation black-box tools. This research is focused on building a better/new breed of fuzzer. The impact of which is the discovery of difficult to find bugs in real world applications which are accessible (not theoretical). We have successfully combined an evolutionary approach with a debugged target to get real-time grey-box code coverage (CC) fitness data. We build upon existing test tool General Purpose Fuzzer (GPF) [8], and existing reverse engineering and debugging framework PaiMei [10] to accomplish this. We call our new tool the Evolutionary Fuzzing System (EFS), which is the initial realization of my PhD thesis. We have shown that it is possible for our system to learn the targets language (protocol) as target communication sessions become more fit over time. We have also shown that this technique works to find bugs in a real world application. Initial results are promising though further testing is still underway. This paper will explain EFS, describing its unique features, and present preliminary results for one test case. We will also discuss ongoing research efforts. First we begin with some background and related works. Previous Evolutionary Testing Work “Evolutionary Testing uses evolutionary algorithms to search for software test data. For white-box testing criteria, each uncovered structure-for example a program statement or branch-is taken as the individual target of a test data search. With certain types of programs, however, the approach degenerates into a random search, due to a lack of guidance to the required test data.
Often this is because the fitness function does not take into account data dependencies within the program under test, and the fact that certain program statements need to have been executed prior to the target structure in order for it to be feasible. For instance, the outcome of a target branching condition may be dependent on a variable having a special value that is only set in a special circumstancefor example a special flag or enumeration value denoting an unusual condition; a unique return value from a function call indicating that an error has occurred, or a counter variable only incremented under certain conditions. Without specific knowledge of such dependencies, the fitness landscape may contain coarse, flat, or even deceptive areas, causing the evolutionary search to stagnate and fail. The problem of flag variables in particular has received much interest from researchers (Baresel et aL, 2004; Baresel and Sthamer, 2003; Bottaci, 2002; Harman et aL, 2002), but there has been little attention with regards to the broader problem as described. [1]” The above quote is from a McMinn paper that is pushing forward the field of traditional evolutionary testing. However, in this paper we propose a method for performing evolutionary testing (ET) that does not require source code. This is useful for third-party testing, verification, and security audits when the source code of the test target will not be provided. Our approach is to track the portions of code executed (“hits”) during runtime via a debugger. Previous static analysis of the compile code, allows the debugger to set break points on functions (funcs) or basic blocks (BBs). We partially overcome the traditional problems of evolutionary testing by the use of a seed file, which gives the evolutionary algorithm hints about the nature of the protocol to learn. Our approach works differently from traditional ET in two important ways: 1. We use a grey-box style of testing that allows us to proceed without source code 2. We search for sequences of test data, known as sessions, which fully define the documented and undocumented features of the interface under test (protocol discovery). This is very similar to finding test data to cover every source code branch via ET. However, the administration, of discovered test data is happening during the search. Thus, test results, are discovered as our algorithm runs. Robustness issues are recorded in the form of crash files and Mysql data, and can be further explored for exploitable conditions while the algorithm continues to run. --- paper_title: Feedback-Directed Random Test Generation paper_content: We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contract) point to potential errors that should be corrected. 
Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780KLOC), feedback-directed random test generation finds many previously-unknown errors, not found by either model checking or undirected random generation. --- paper_title: Program-Adaptive Mutational Fuzzing paper_content: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input. The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time. --- paper_title: An Empirical Study of the Reliability of UNIX Utilities paper_content: The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig. --- paper_title: Optimizing seed selection for fuzzing paper_content: Randomly mutating well-formed program inputs or simply fuzzing, is a highly effective and widely used strategy to find bugs in software. Other than showing fuzzers find bugs, there has been little systematic effort in understanding the science of how to fuzz properly. In this paper, we focus on how to mathematically formulate and reason about one critical aspect in fuzzing: how best to pick seed files to maximize the total number of bugs found during a fuzz campaign. We design and evaluate six different algorithms using over 650 CPU days on Amazon Elastic Compute Cloud (EC2) to provide ground truth data. Overall, we find 240 bugs in 8 applications and show that the choice of algorithm can greatly increase the number of bugs found. We also show that current seed selection strategies as found in Peach may fare no better than picking seeds at random. We make our data set and code publicly available. --- paper_title: The Past, Present, and Future of Cyberdyne paper_content: Cyberdyne is a distributed system that discovers vulnerabilities in third-party, off-the-shelf binary programs. It competed in all rounds of DARPA’s Cyber Grand Challenge (CGC). In the qualifying event, Cyberdyne was the second most effective bug-finding system. In the final event, it was the bug-finding arm of the fourth-place team. 
Since then, Cyberdyne has been successfully applied during commercial code audits. The first half of this article describes the evolution and implementation of Cyberdyne and its bug-finding tools. The second half of the article looks at what it took to have Cyberdyne audit real applications and how we performed the first paid automated security audit for the Mozilla Secure Open Source Fund. We conclude with a discussion about the future of automated security audits. --- paper_title: Dynamic Test Generation to Find Integer Bugs in x86 Binary Linux Programs paper_content: Recently, integer bugs, including integer overflow, width conversion, and signed/unsigned conversion errors, have risen to become a common root cause for serious security vulnerabilities. We introduce new methods for discovering integer bugs using dynamic test generation on x86 binaries, and we describe key design choices in efficient symbolic execution of such programs. We implemented our methods in a prototype tool SmartFuzz, which we use to analyze Linux x86 binary executables. We also created a reporting service, metafuzz.com, to aid in triaging and reporting bugs found by SmartFuzz and the black-box fuzz testing tool zzuf. We report on experiments applying these tools to a range of software applications, including the mplayer media player, the exiv2 image metadata library, and ImageMagick convert. We also report on our experience using SmartFuzz, zzuf, and metafuzz.com to perform testing at scale with the Amazon Elastic Compute Cloud (EC2). To date, the metafuzz.com site has recorded more than 2; 614 test runs, comprising 2; 361; 595 test cases. Our experiments found approximately 77 total distinct bugs in 864 compute hours, costing us an average of $2:24 per bug at current EC2 rates. We quantify the overlap in bugs found by the two tools, and we show that SmartFuzz finds bugs missed by zzuf, including one program where Smart-Fuzz finds bugs but zzuf does not. --- paper_title: MagicFuzzer: Scalable deadlock detection for large-scale applications paper_content: We present MagicFuzzer, a novel dynamic deadlock detection technique. Unlike existing techniques to locate potential deadlock cycles from an execution, it iteratively prunes lock dependencies that each has no incoming or outgoing edge. Combining with a novel thread-specific strategy, it dramatically shrinks the size of lock dependency set for cycle detection, improving the efficiency and scalability of such a detection significantly. In the real deadlock confirmation phase, it uses a new strategy to actively schedule threads of an execution against the whole set of potential deadlock cycles. We have implemented a prototype and evaluated it on large-scale C/C++ programs. The experimental results confirm that our technique is significantly more effective and efficient than existing techniques. --- paper_title: Turning programs against each other: high coverage fuzz-testing using binary-code mutation and dynamic slicing paper_content: Mutation-based fuzzing is a popular and widely employed black-box testing technique for finding security and robustness bugs in software. It owes much of its success to its simplicity; a well-formed seed input is mutated, e.g. through random bit-flipping, to produce test inputs. While reducing the need for human effort, and enabling security testing even of closed-source programs with undocumented input formats, the simplicity of mutation-based fuzzing comes at the cost of poor code coverage. 
Often millions of iterations are needed, and the results are highly dependent on configuration parameters and the choice of seed inputs. In this paper we propose a novel method for automated generation of high-coverage test cases for robustness testing. Our method is based on the observation that, even for closed-source programs with proprietary input formats, an implementation that can generate well-formed inputs to the program is typically available. By systematically mutating the program code of such generating programs, we leverage information about the input format encoded in the generating program to produce high-coverage test inputs, capable of reaching deep states in the program under test. Our method works entirely at the machine-code level, enabling use-cases similar to traditional black-box fuzzing. We have implemented the method in our tool MutaGen, and evaluated it on 7 popular Linux programs. We found that, for most programs, our method improves code coverage by one order of magnitude or more, compared to two well-known mutation-based fuzzers. We also found a total of 8 unique bugs. --- paper_title: Probability-Based Parameter Selection for Black-Box Fuzz Testing paper_content: Abstract : Dynamic, randomized-input functional testing, or black-box fuzz testing, is an effective technique for finding security vulnerabilities in software applications. Parameters for an invocation of black-box fuzz testing generally include known-good input to use as a basis for randomization (i.e., a seed file) and a specification of how much of the seed file to randomize (i.e., the range).This report describes an algorithm that applies basic statistical theory to the parameter selection problem and automates selection of seed files and ranges. This algorithm was implemented in an open-source, file-interface testing tool and was used to find and mitigate vulnerabilities in several software applications. This report generalizes the parameter selection problem, explains the algorithm, and analyzes empirical data collected from the implementation. Results of using the algorithm show a marked improvement in the efficiency of discovering unique application errors over basic parameter selection techniques. --- paper_title: Detecting atomic-set serializability violations in multithreaded programs through active randomized testing paper_content: Concurrency bugs are notoriously difficult to detect because there can be vast combinations of interleavings among concurrent threads, yet only a small fraction can reveal them. Atomic-set serializability characterizes a wide range of concurrency bugs, including data races and atomicity violations. In this paper, we propose a two-phase testing technique that can effectively detect atomic-set serializability violations. In Phase I, our technique infers potential violations that do not appear in a concrete execution and prunes those interleavings that are violation-free. In Phase II, our technique actively controls a thread scheduler to enumerate these potential scenarios identified in Phase I to look for real violations. We have implemented our technique as a prototype system AssetFuzzer and applied it to a number of subject programs for evaluating concurrency defect analysis techniques. The experimental results show that AssetFuzzer can identify more concurrency bugs than two recent testing tools RaceFuzzer and AtomFuzzer. 
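The probability-based parameter selection work above (and the bandit-style scheduling of fuzzing campaigns discussed further below) treats the choice of seed file and mutation ratio as a statistical decision problem. The following is a minimal sketch of that idea using Thompson sampling over hypothetical (seed, ratio) configurations; the fuzz_one function is a stand-in for a real fuzzing round, and none of this reproduces the cited tools' actual estimators.

```python
import random
from dataclasses import dataclass

@dataclass
class FuzzConfig:
    """One (seed file, mutation ratio) arm with its observed outcome counts."""
    seed: str
    ratio: float
    runs: int = 0
    crashes: int = 0

def thompson_pick(configs):
    """Sample a crash probability for each arm from a Beta posterior and
    pick the arm with the highest sample (explore/exploit trade-off)."""
    best, best_score = None, -1.0
    for cfg in configs:
        score = random.betavariate(cfg.crashes + 1, cfg.runs - cfg.crashes + 1)
        if score > best_score:
            best, best_score = cfg, score
    return best

def fuzz_one(cfg: FuzzConfig) -> bool:
    """Placeholder for a single fuzzing round that mutates cfg.seed with
    cfg.ratio of its bytes changed and reports whether the target crashed."""
    return random.random() < 0.001   # stand-in for a real execution

def campaign(configs, budget=10000):
    for _ in range(budget):
        cfg = thompson_pick(configs)
        crashed = fuzz_one(cfg)
        cfg.runs += 1
        cfg.crashes += int(crashed)
    return sorted(configs, key=lambda c: c.crashes, reverse=True)

if __name__ == "__main__":
    arms = [FuzzConfig(seed=f"seed_{i}.bin", ratio=r)
            for i in range(3) for r in (0.001, 0.01, 0.1)]
    for cfg in campaign(arms):
        print(cfg.seed, cfg.ratio, cfg.crashes, "/", cfg.runs)
```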
--- paper_title: Synthesizing program input grammars paper_content: We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers. --- paper_title: Chopped symbolic execution paper_content: Symbolic execution is a powerful program analysis technique that systematically explores multiple program paths. However, despite important technical advances, symbolic execution often struggles to reach deep parts of the code due to the well-known path explosion problem and constraint solving limitations. In this paper, we propose chopped symbolic execution, a novel form of symbolic execution that allows users to specify uninteresting parts of the code to exclude during the analysis, thus only targeting the exploration to paths of importance. However, the excluded parts are not summarily ignored, as this may lead to both false positives and false negatives. Instead, they are executed lazily, when their effect may be observable by code under analysis. Chopped symbolic execution leverages various on-demand static analyses at runtime to automatically exclude code fragments while resolving their side effects, thus avoiding expensive manual annotations and imprecision. Our preliminary results show that the approach can effectively improve the effectiveness of symbolic execution in several different scenarios, including failure reproduction and test suite augmentation. --- paper_title: Coverage-directed differential testing of JVM implementations paper_content: Java virtual machine (JVM) is a core technology, whose reliability is critical. Testing JVM implementations requires painstaking effort in designing test classfiles (*.class) along with their test oracles. An alternative is to employ binary fuzzing to differentially test JVMs by blindly mutating seeding classfiles and then executing the resulting mutants on different JVM binaries for revealing inconsistent behaviors. However, this blind approach is not cost effective in practice because most of the mutants are invalid and redundant. This paper tackles this challenge by introducing classfuzz, a coverage-directed fuzzing approach that focuses on representative classfiles for differential testing of JVMs’ startup processes. Our core insight is to (1) mutate seeding classfiles using a set of predefined mutation operators (mutators) and employ Markov Chain Monte Carlo (MCMC) sampling to guide mutator selection, and (2) execute the mutants on a reference JVM implementation and use coverage uniqueness as a discipline for accepting representative ones. The accepted classfiles are used as inputs to differentially test different JVM implementations and find defects. We have implemented classfuzz and conducted an extensive evaluation of it against existing fuzz testing algorithms. Our evaluation results show that classfuzz can enhance the ratio of discrepancy-triggering classfiles from 1.7% to 11.9%. We have also reported 62 JVM discrepancies, along with the test classfiles, to JVM developers. 
Many of our reported issues have already been confirmed as JVM defects, and some even match recent clarifications and changes to the Java SE 8 edition of the JVM specification. --- paper_title: Optimizing seed selection for fuzzing paper_content: Randomly mutating well-formed program inputs or simply fuzzing, is a highly effective and widely used strategy to find bugs in software. Other than showing fuzzers find bugs, there has been little systematic effort in understanding the science of how to fuzz properly. In this paper, we focus on how to mathematically formulate and reason about one critical aspect in fuzzing: how best to pick seed files to maximize the total number of bugs found during a fuzz campaign. We design and evaluate six different algorithms using over 650 CPU days on Amazon Elastic Compute Cloud (EC2) to provide ground truth data. Overall, we find 240 bugs in 8 applications and show that the choice of algorithm can greatly increase the number of bugs found. We also show that current seed selection strategies as found in Peach may fare no better than picking seeds at random. We make our data set and code publicly available. --- paper_title: Randomized active atomicity violation detection in concurrent programs paper_content: Atomicity is an important specification that enables programmers to understand atomic blocks of code in a multi-threaded program as if they are sequential. This significantly simplifies the programmer's job to reason about correctness. Several modern multithreaded programming languages provide no built-in support to ensure atomicity; instead they rely on the fact that programmers would use locks properly in order to guarantee that atomic code blocks are indeed atomic. However, improper use of locks can sometimes fail to ensure atomicity. Therefore, we need tools that can check atomicity properties of lock-based code automatically. We propose a randomized dynamic analysis technique to detect a special, but important, class of atomicity violations that are often found in real-world programs. Specifically, our technique modifies the existing Java thread scheduler behavior to create atomicity violations with high probability. Our approach has several advantages over existing dynamic analysis tools. First, we can create a real atomicity violation and see if an exception can be thrown. Second, we can replay an atomicity violating execution by simply using the same seed for random number generation---we do not need to record the execution. Third, we give no false warnings unlike existing dynamic atomicity checking techniques. We have implemented the technique in a prototype tool for Java and have experimented on a number of large multi-threaded Java programs and libraries. We report a number of previously known and unknown bugs and atomicity violations in these Java programs. --- paper_title: Scheduling black-box mutational fuzzing paper_content: Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. 
Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time. --- paper_title: QuickFuzz: an automatic random fuzzer for common file formats paper_content: Fuzzing is a technique that involves testing programs using invalid or erroneous inputs. Most fuzzers require a set of valid inputs as a starting point, in which mutations are then introduced. QuickFuzz is a fuzzer that leverages QuickCheck-style random test-case generationto automatically test programs that manipulate common file formats by fuzzing. We rely on existing Haskell implementations of file-format-handling libraries found on Hackage, the community-driven Haskell code repository. We have tried QuickFuzz in the wild and found that the approach is effective in discovering vulnerabilities in real-world implementations of browsers, image processing utilities and file compressors among others. In addition, we introduce a mechanism to automatically derive random generators for the types representing these formats. QuickFuzz handles most well-known image and media formats, and can be used to test programs and libraries written in any language. --- paper_title: IMF: Inferred Model-based Fuzzer paper_content: Kernel vulnerabilities are critical in security because they naturally allow attackers to gain unprivileged root access. Although there has been much research on finding kernel vulnerabilities from source code, there are relatively few research on kernel fuzzing, which is a practical bug finding technique that does not require any source code. Existing kernel fuzzing techniques involve feeding in random input values to kernel API functions. However, such a simple approach does not reveal latent bugs deep in the kernel code, because many API functions are dependent on each other, and they can quickly reject arbitrary parameter values based on their calling context. In this paper, we propose a novel fuzzing technique for commodity OS kernels that leverages inferred dependence model between API function calls to discover deep kernel bugs. We implement our technique on a fuzzing system, called IMF. IMF has already found 32 previously unknown kernel vulnerabilities on the latest macOS version 10.12.3 (16D32) at the time of this writing. --- paper_title: Skyfire: Data-Driven Seed Generation for Fuzzing paper_content: Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. Differently, generation-based fuzzing generates inputs from a specification (e.g., grammar). They can quickly carry the fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. 
Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs. --- paper_title: Random testing for security: blackbox vs. whitebox fuzzing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Fuzz testing is a form of blackbox random testing which randomly mutates well-formed inputs and tests the program on the resulting data. In some cases, grammars are used to randomly generate the well-formed inputs. This also allows the tester to encode application-specific knowledge (such as corner cases of particular interest) as part of the grammar, and to specify test heuristics by assigning probabilistic weights to production rules. Although fuzz testing can be remarkably effective, the limitations of blackbox random testing are well-known. For instance, the then branch of the conditional statement "if (x==10) then" has only one in 232 chances of being exercised if x is a randomly chosen 32-bit input value. This intuitively explains why random testing usually provides low code coverage. Recently, we have proposed an alternative approach of whitebox fuzz testing [4], building upon recent advances in dynamic symbolic execution and test generation [2]. Starting with a well-formed input, our approach symbolically executes the program dynamically and gathers constraints on inputs from conditional statements encountered along the way. The collected constraints are then systematically negated and solved with a constraint solver, yielding new inputs that exercise different execution paths in the program. This process is repeated using a novel search algorithm with a coverage-maximizing heuristic designed to find defects as fast as possible in large search spaces. For example, symbolic execution of the above code fragment on the input x = 0 generates the constraint x ≠ 10. Once this constraint is negated and solved, it yields x = 10, which gives us a new input that causes the program to follow the then branch of the given conditional statement. We have implemented this approach in SAGE (Scalable, Automated, Guided Execution), a tool based on x86 instruction-level tracing and emulation for whitebox fuzzing of file-reading Windows applications. While still in an early stage of development and deployment, SAGE has already discovered more than 30 new bugs in large shipped Windows applications including image processors, media players and file decoders. Several of these bugs are potentially exploitable memory access violations. In this talk, I will briefly review blackbox fuzzing for security testing. 
Then, I will present an overview of our recent work on whitebox fuzzing [4] (joint work with Michael Y. Levin and David Molnar), with an emphasis on the key algorithms and techniques needed to make this approach effective and scalable (see also [1, 3]). --- paper_title: Language fuzzing using constraint logic programming paper_content: Fuzz testing builds confidence in compilers and interpreters. It is desirable for fuzzers to allow targeted generation of programs that showcase specific language features and behaviors. However, the predominant program generation technique used by most language fuzzers, stochastic context-free grammars, does not have this property. We propose the use of constraint logic programming (CLP) for program generation. Using CLP, testers can write declarative predicates specifying interesting programs, including syntactic features and semantic behaviors. CLP subsumes and generalizes the stochastic grammar approach. --- paper_title: A randomized dynamic program analysis technique for detecting real deadlocks paper_content: We present a novel dynamic analysis technique that finds real deadlocks in multi-threaded programs. Our technique runs in two stages. In the first stage, we use an imprecise dynamic analysis technique to find potential deadlocks in a multi-threaded program by observing an execution of the program. In the second stage, we control a random thread scheduler to create the potential deadlocks with high probability. Unlike other dynamic analysis techniques, our approach has the advantage that it does not give any false warnings. We have implemented the technique in a prototype tool for Java, and have experimented on a number of large multi-threaded Java programs. We report a number of previously known and unknown real deadlocks that were found in these benchmarks. --- paper_title: QSYM: a practical concolic execution engine tailored for hybrid fuzzing paper_content: Recently, hybrid fuzzing has been proposed to address the limitations of fuzzing and concolic execution by combining both approaches. The hybrid approach has shown its effectiveness in various synthetic benchmarks such as DARPA Cyber Grand Challenge (CGC) binaries, but it still suffers from scaling to find bugs in complex, realworld software. We observed that the performance bottleneck of the existing concolic executor is the main limiting factor for its adoption beyond a small-scale study. ::: ::: To overcome this problem, we design a fast concolic execution engine, called QSYM, to support hybrid fuzzing. The key idea is to tightly integrate the symbolic emulation with the native execution using dynamic binary translation, making it possible to implement more fine-grained, so faster, instruction-level symbolic emulation. Additionally, QSYM loosens the strict soundness requirements of conventional concolic executors for better performance, yet takes advantage of a faster fuzzer for validation, providing unprecedented opportunities for performance optimizations, e.g., optimistically solving constraints and pruning uninteresting basic blocks. ::: ::: Our evaluation shows that QSYM does not just outperform state-of-the-art fuzzers (i.e., found 14× more bugs than VUzzer in the LAVA-M dataset, and outperformed Driller in 104 binaries out of 126), but also found 13 previously unknown security bugs in eight real-world programs like Dropbox Lepton, ffmpeg, and OpenJPEG, which have already been intensively tested by the state-of-the-art fuzzers, AFL and OSS-Fuzz. 
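The whitebox and hybrid approaches above (SAGE-style whitebox fuzzing, jFuzz, and QSYM) share one core mechanism: record the branch constraints of a concrete run, negate them one at a time, and ask a solver for inputs that take the other side. The toy sketch below illustrates that generational search on a hand-instrumented three-branch program; it assumes the z3-solver Python package is installed and is in no way a model of QSYM's binary-level engine.

```python
from z3 import Int, Solver, Not, sat

X = Int("x")  # symbolic counterpart of the single integer input

def run_instrumented(x: int):
    """Execute a toy program concretely while mirroring every branch taken
    as a constraint over the symbolic input X. Returns (path, bug_hit)."""
    path, bug = [], False
    if x % 2 == 0:
        path.append(X % 2 == 0)
        if x > 100:
            path.append(X > 100)
            if x == 1234:
                path.append(X == 1234)
                bug = True              # "defect" guarded by three nested branches
            else:
                path.append(X != 1234)
        else:
            path.append(X <= 100)
    else:
        path.append(X % 2 != 0)
    return path, bug

def expand(path):
    """Generational search: negate each branch condition in turn, keep the
    prefix before it, and ask the solver for an input taking the other side."""
    children = []
    for i, cond in enumerate(path):
        s = Solver()
        s.add(*path[:i])
        s.add(Not(cond))
        if s.check() == sat:
            children.append(s.model().eval(X, model_completion=True).as_long())
    return children

def search(seed: int = 0, max_runs: int = 50):
    worklist, seen = [seed], {seed}
    for _ in range(max_runs):
        if not worklist:
            break
        x = worklist.pop(0)
        path, bug = run_instrumented(x)
        print(f"input={x:<6} depth={len(path)} bug={bug}")
        if bug:
            return x
        for child in expand(path):
            if child not in seen:
                seen.add(child)
                worklist.append(child)
    return None

if __name__ == "__main__":
    print("bug-triggering input:", search())
```

Starting from the seed 0, the search typically reaches the input 1234 within a handful of executions, because each negated constraint opens exactly one deeper branch.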
--- paper_title: Fuzzing with code fragments paper_content: Fuzz testing is an automated technique providing random data as input to a software system in the hope to expose a vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks; a JavaScript interpreter, for instance, would only accept a semantically valid program. On the other hand, the fuzzed input must be uncommon enough to trigger exceptional behavior, such as a crash of the interpreter. The LangFuzz approach resolves this conflict by using a grammar to randomly generate valid programs; the code fragments, however, partially stem from programs known to have caused invalid behavior before. LangFuzz is an effective tool for security testing: Applied on the Mozilla JavaScript interpreter, it discovered a total of 105 new severe vulnerabilities within three months of operation (and thus became one of the top security bug bounty collectors within this period); applied on the PHP interpreter, it discovered 18 new defects causing crashes. --- paper_title: jFuzz: A Concolic Whitebox Fuzzer for Java paper_content: We present jFuzz, a automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer, built on the NASA Java PathFinder, an explicit-state Java model-checker, and a framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented a novel optimization, name-independent caching, that aggressively normalizes the constraints to so reduced the number of calls to the constraint solver. We present preliminary results due to this optimization, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder. --- paper_title: Angora: Efficient Fuzzing by Principled Search paper_content: Fuzzing is a popular technique for finding software bugs. However, the performance of the state-of-the-art fuzzers leaves a lot to be desired. Fuzzers based on symbolic execution produce quality inputs but run slow, while fuzzers based on random mutation run fast but have difficulty producing quality inputs. We propose Angora, a new mutation-based fuzzer that outperforms the state-of-the-art fuzzers by a wide margin. The main goal of Angora is to increase branch coverage by solving path constraints without symbolic execution. To solve path constraints efficiently, we introduce several key techniques: scalable byte-level taint tracking, context-sensitive branch count, search based on gradient descent, and input length exploration. On the LAVA-M data set, Angora found almost all the injected bugs, found more bugs than any other fuzzer that we compared with, and found eight times as many bugs as the second-best fuzzer in the program who. Angora also found 103 bugs that the LAVA authors injected but could not trigger. We also tested Angora on eight popular, mature open source programs. Angora found 6, 52, 29, 40 and 48 new bugs in file, jhead, nm, objdump and size, respectively. We measured the coverage of Angora and evaluated how its key techniques contribute to its impressive performance. 
--- paper_title: Hawkeye: Towards a Desired Directed Grey-box Fuzzer paper_content: Grey-box fuzzing is a practically effective approach to test real-world programs. However, most existing grey-box fuzzers lack directedness, i.e. the capability of executing towards user-specified target sites in the program. To emphasize existing challenges in directed fuzzing, we propose Hawkeye to feature four desired properties of directed grey-box fuzzers. Owing to a novel static analysis on the program under test and the target sites, Hawkeye precisely collects the information such as the call graph, function and basic block level distances to the targets. During fuzzing, Hawkeye evaluates exercised seeds based on both static information and the execution traces to generate the dynamic metrics, which are then used for seed prioritization, power scheduling and adaptive mutating. These strategies help Hawkeye to achieve better directedness and gravitate towards the target sites. We implemented Hawkeye as a fuzzing framework and evaluated it on various real-world programs under different scenarios. The experimental results showed that Hawkeye can reach the target sites and reproduce the crashes much faster than state-of-the-art grey-box fuzzers such as AFL and AFLGo. Specially, Hawkeye can reduce the time to exposure for certain vulnerabilities from about 3.5 hours to 0.5 hour. By now, Hawkeye has detected more than 41 previously unknown crashes in projects such as Oniguruma, MJS with the target sites provided by vulnerability prediction tools; all these crashes are confirmed and 15 of them have been assigned CVE IDs. --- paper_title: Automated Whitebox Fuzz Testing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Traditionally, fuzz testing tools apply random mutations to well-formed inputs of a program and test the resulting values. We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation. Our approach records an actual run of the program under test on a well-formed input, symbolically evaluates the recorded trace, and gathers constraints on inputs capturing how the program uses these. The collected constraints are then negated one by one and solved with a constraint solver, producing new inputs that exercise different control paths in the program. This process is repeated with the help of a code-coverage maximizing heuristic designed to find defects as fast as possible. We have implemented this algorithm in SAGE (Scalable, Automated, Guided Execution), a new tool employing x86 instruction-level tracing and emulation for whitebox fuzzing of arbitrary file-reading Windows applications. We describe key optimizations needed to make dynamic test generation scale to large input files and long execution traces with hundreds of millions of instructions. We then present detailed experiments with several Windows applications. Notably, without any format-specific knowledge, SAGE detects the MS07-017 ANI vulnerability, which was missed by extensive blackbox fuzzing and static analysis tools. Furthermore, while still in an early stage of development, SAGE has already discovered 30+ new bugs in large shipped Windows applications including image processors, media players, and file decoders. Several of these bugs are potentially exploitable memory access violations. 
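Directed grey-box fuzzers such as Hawkeye above (and AFLGo, which it compares against) steer the campaign by measuring how close each seed's execution gets to user-specified target sites and by giving closer seeds more mutation energy. The sketch below computes such a distance over a made-up call graph and derives a toy power schedule from it; the graph, traces, and energy formula are illustrative assumptions, not Hawkeye's actual metrics.

```python
from collections import deque

# Hypothetical call graph of the program under test: caller -> callees.
CALL_GRAPH = {
    "main":           ["parse_header", "parse_body"],
    "parse_header":   ["read_u32"],
    "parse_body":     ["decode_chunk", "read_u32"],
    "decode_chunk":   ["vulnerable_sink"],
    "read_u32":       [],
    "vulnerable_sink": [],
}
TARGETS = {"vulnerable_sink"}   # user-specified target site(s)

def distances_to_targets(graph, targets):
    """BFS backwards from the targets: distance of each function to the
    nearest target along call edges."""
    reverse = {f: [] for f in graph}
    for caller, callees in graph.items():
        for callee in callees:
            reverse[callee].append(caller)
    dist = {t: 0 for t in targets}
    queue = deque(targets)
    while queue:
        node = queue.popleft()
        for pred in reverse[node]:
            if pred not in dist:
                dist[pred] = dist[node] + 1
                queue.append(pred)
    return dist

def seed_distance(trace, dist):
    """Average distance of the functions exercised by a seed (lower = closer)."""
    vals = [dist[f] for f in trace if f in dist]
    return sum(vals) / len(vals) if vals else float("inf")

def assign_energy(seeds, dist, base=16):
    """Toy power schedule: seeds whose traces run closer to the target
    receive more mutations per fuzzing round."""
    scored = {name: seed_distance(trace, dist) for name, trace in seeds.items()}
    finite = [d for d in scored.values() if d != float("inf")]
    worst = max(finite) if finite else 1.0
    return {name: (1 if d == float("inf") else int(base * (1 + worst - d)))
            for name, d in scored.items()}

if __name__ == "__main__":
    dist = distances_to_targets(CALL_GRAPH, TARGETS)
    seeds = {   # hypothetical execution traces recorded for each seed
        "seed_a": ["main", "parse_header", "read_u32"],
        "seed_b": ["main", "parse_body", "decode_chunk"],
    }
    print("distances:", dist)
    print("energy:", assign_energy(seeds, dist))
```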
--- paper_title: GRT: Program-Analysis-Guided Random Testing (T) paper_content: We propose Guided Random Testing (GRT), which uses static and dynamic analysis to include information on program types, data, and dependencies in various stages of automated test generation. Static analysis extracts knowledge from the system under test. Test coverage is further improved through state fuzzing and continuous coverage analysis. We evaluated GRT on 32 real-world projects and found that GRT outperforms major peer techniques in terms of code coverage (by 13%) and mutation score (by 9%). On the four studied benchmarks of Defects4J, which contain 224 real faults, GRT also shows better fault detection capability than peer techniques, finding 147 faults (66%). Furthermore, in an in-depth evaluation on the latest versions of ten popular real-world projects, GRT successfully detects over 20 unknown defects that were confirmed by developers. --- paper_title: KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs paper_content: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies. --- paper_title: FairFuzz: A Targeted Mutation Strategy for Increasing Greybox Fuzz Testing Coverage paper_content: In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease-of-use and bug-finding power. However, AFL remains limited in the bugs it can find since it simply does not cover large regions of code. If it does not cover parts of the code, it will not find bugs there. We propose a two-pronged approach to increase the coverage achieved by AFL. First, the approach automatically identifies branches exercised by few AFL-produced inputs (rare branches), which often guard code that is empirically hard to cover by naively mutating inputs. The second part of the approach is a novel mutation mask creation algorithm, which allows mutations to be biased towards producing inputs hitting a given rare branch. This mask is dynamically computed during fuzz testing and can be adapted to other testing targets. We implement this approach on top of AFL in a tool named FairFuzz. We conduct evaluation on real-world programs against state-of-the-art versions of AFL.
We find that on these programs FairFuzz achieves high branch coverage at a faster rate than state-of-the-art versions of AFL. In addition, on programs with nested conditional structure, it achieves sustained increases in branch coverage after 24 hours (average 10.6% increase). In qualitative analysis, we find that FairFuzz has an increased capacity to automatically discover keywords. --- paper_title: Snooze: Toward a stateful network protocol fuzzer paper_content: Fuzzing is a well-known black-box approach to the security testing of applications. Fuzzing has many advantages in terms of simplicity and effectiveness over more complex, expensive testing approaches. Unfortunately, current fuzzing tools suffer from a number of limitations, and, in particular, they provide little support for the fuzzing of stateful protocols. In this paper, we present SNOOZE, a tool for building flexible, security-oriented, network protocol fuzzers. SNOOZE implements a stateful fuzzing approach that can be used to effectively identify security flaws in network protocol implementations. SNOOZE allows a tester to describe the stateful operation of a protocol and the messages that need to be generated in each state. In addition, SNOOZE provides attack-specific fuzzing primitives that allow a tester to focus on specific vulnerability classes. We used an initial prototype of the SNOOZE tool to test programs that implement the SIP protocol, with promising results. SNOOZE supported the creation of sophisticated fuzzing scenarios that were able to expose real-world bugs in the programs analyzed. --- paper_title: TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection paper_content: Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated malformed inputs are rejected in the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. In this paper, we present TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel contributions: 1) TaintScope is the first checksum-aware fuzzing tool to the best of our knowledge. It can identify checksum fields in input instances, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. 2) TaintScope is a directed fuzzing tool working at X86 binary level (on both Linux and Windows). Based on fine-grained dynamic taint tracing, TaintScope identifies which bytes in a well-formed input are used in security-sensitive operations (e.g., invoking system/library calls) and then focuses on modifying such bytes. Thus, generated inputs are more likely to trigger potential vulnerabilities. 3) TaintScope is fully automatic, from detecting checksums and directed fuzzing to repairing crashed samples. It can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing.
TaintScope has already found 27 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Google Picasa, Microsoft Paint, and ImageMagick. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Corresponding patches from vendors are released or in progress based on our reports. --- paper_title: Statically-directed dynamic automated test generation paper_content: We present a new technique for exploiting static analysis to guide dynamic automated test generation for binary programs, prioritizing the paths to be explored. Our technique is a three-stage process, which alternates dynamic and static analysis. In the first stage, we run dynamic analysis with a small number of seed tests to resolve indirect jumps in the binary code and build a visibly pushdown automaton (VPA) reflecting the global control-flow of the program. Further, we augment the computed VPA with statically computable jumps not executed by the seed tests. In the second stage, we apply static analysis to the inferred automaton to find potential vulnerabilities, i.e., targets for the dynamic analysis. In the third stage, we use the results of the prior phases to assign weights to VPA edges. Our symbolic-execution based automated test generation tool then uses the weighted shortest-path lengths in the VPA to direct its exploration to the target potential vulnerabilities. Preliminary experiments on a suite of benchmarks extracted from real applications show that static analysis allows exploration to reach vulnerabilities it otherwise would not, and the generated test inputs prove that the static warnings indicate true positives. --- paper_title: Path-exploration lifting: hi-fi tests for lo-fi emulators paper_content: Processor emulators are widely used to provide isolation and instrumentation of binary software. However they have proved difficult to implement correctly: processor specifications have many corner cases that are not exercised by common workloads. It is untenable to base other system security properties on the correctness of emulators that have received only ad-hoc testing. To obtain emulators that are worthy of the required trust, we propose a technique to explore a high-fidelity emulator with symbolic execution, and then lift those test cases to test a lower-fidelity emulator. The high-fidelity emulator serves as a proxy for the hardware specification, but we can also further validate by running the tests on real hardware. We implement our approach and apply it to generate about 610,000 test cases; for about 95% of the instructions we achieve complete path coverage. The tests reveal thousands of individual differences; we analyze those differences to shed light on a number of root causes, such as atomicity violations and missing security features. --- paper_title: Directed Greybox Fuzzing paper_content: Existing Greybox Fuzzers (GF) cannot be effectively directed, for instance, towards problematic changes or patches, towards critical system calls or dangerous locations, or towards functions in the stack-trace of a reported vulnerability that we wish to reproduce. In this paper, we introduce Directed Greybox Fuzzing (DGF) which generates inputs with the objective of reaching a given set of target program locations efficiently. 
We develop and evaluate a simulated annealing-based power schedule that gradually assigns more energy to seeds that are closer to the target locations while reducing energy for seeds that are further away. Experiments with our implementation AFLGo demonstrate that DGF outperforms both directed symbolic-execution-based whitebox fuzzing and undirected greybox fuzzing. We show applications of DGF to patch testing and crash reproduction, and discuss the integration of AFLGo into Google's continuous fuzzing platform OSS-Fuzz. Due to its directedness, AFLGo could find 39 bugs in several well-fuzzed, security-critical projects like LibXML2. 17 CVEs were assigned. --- paper_title: VUzzer: Application-aware Evolutionary Fuzzing paper_content: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs. --- paper_title: PULSAR: Stateful Black-Box Fuzzing of Proprietary Network Protocols paper_content: The security of network services and their protocols critically depends on minimizing their attack surface. A single flaw in an implementation can suffice to compromise a service and expose sensitive data to an attacker. The discovery of vulnerabilities in protocol implementations, however, is a challenging task: While for standard protocols this process can be conducted with regular techniques for auditing, the situation becomes difficult for proprietary protocols if neither the program code nor the specification of the protocol are easily accessible. As a result, vulnerabilities in closed-source implementations can often remain undiscovered for a longer period of time. In this paper, we present Pulsar, a method for stateful black-box fuzzing of proprietary network protocols. Our method combines concepts from fuzz testing with techniques for automatic protocol reverse engineering and simulation. It proceeds by observing the traffic of a proprietary protocol and inferring a generative model for message formats and protocol states that can not only analyze but also simulate communication.
During fuzzing this simulation can effectively explore the protocol state space and thereby enables uncovering vulnerabilities deep inside the protocol implementation. We demonstrate the efficacy of Pulsar in two case studies, where it identifies known as well as unknown vulnerabilities. --- paper_title: Synthesizing racy tests paper_content: Subtle concurrency errors in multithreaded libraries that arise because of incorrect or inadequate synchronization are often difficult to pinpoint precisely using only static techniques. On the other hand, the effectiveness of dynamic race detectors is critically dependent on multithreaded test suites whose execution can be used to identify and trigger races. Usually, such multithreaded tests need to invoke a specific combination of methods with objects involved in the invocations being shared appropriately to expose a race. Without a priori knowledge of the race, construction of such tests can be challenging. In this paper, we present a lightweight and scalable technique for synthesizing precisely these kinds of tests. Given a multithreaded library and a sequential test suite, we describe a fully automated analysis that examines sequential execution traces, and produces as its output a concurrent client program that drives shared objects via library method calls to states conducive for triggering a race. Experimental results on a variety of well-tested Java libraries yield 101 synthesized multithreaded tests in less than four minutes. Analyzing the execution of these tests using an off-the-shelf race detector reveals 187 harmful races, including several previously unreported ones. Our implementation, named NARADA, and the results of our experiments are available at http://www.csa.iisc.ernet.in/~sss/tools/narada. --- paper_title: Grammar-based whitebox fuzzing paper_content: Whitebox fuzzing is a form of automatic dynamic test generation, based on symbolic execution and constraint solving, designed for security testing of large applications. Unfortunately, the current effectiveness of whitebox fuzzing is limited when testing applications with highly-structured inputs, such as compilers and interpreters. These applications process their inputs in stages, such as lexing, parsing and evaluation. Due to the enormous number of control paths in early processing stages, whitebox fuzzing rarely reaches parts of the application beyond those first stages. In this paper, we study how to enhance whitebox fuzzing of complex structured-input applications with a grammar-based specification of their valid inputs. We present a novel dynamic test generation algorithm where symbolic execution directly generates grammar-based constraints whose satisfiability is checked using a custom grammar-based constraint solver. We have implemented this algorithm and evaluated it on a large security-critical application, the JavaScript interpreter of Internet Explorer 7 (IE7). Results of our experiments show that grammar-based whitebox fuzzing explores deeper program paths and avoids dead-ends due to non-parsable inputs. Compared to regular whitebox fuzzing, grammar-based whitebox fuzzing increased coverage of the code generation module of the IE7 JavaScript interpreter from 53% to 81% while using three times fewer tests. 
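The grammar-directed generation half of the grammar-based whitebox idea above can be sketched in a few lines; the toy expression grammar below is an assumption for illustration, and real tools operate on full language grammars and combine this generation with symbolic constraints.

```python
# Generic sketch of grammar-based input generation: expand a context-free
# grammar at random so the generated inputs already pass the parser and the
# fuzzing effort reaches later processing stages.
import random

GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["<num>", "*", "<term>"], ["<num>"]],
    "<num>":  [["0"], ["1"], ["42"]],
}

def generate(symbol: str = "<expr>", depth: int = 0) -> str:
    if symbol not in GRAMMAR:
        return symbol                                   # terminal symbol
    # Past a depth budget, always take the last (shortest) production so the
    # expansion terminates.
    options = GRAMMAR[symbol] if depth < 8 else [GRAMMAR[symbol][-1]]
    return "".join(generate(s, depth + 1) for s in random.choice(options))

if __name__ == "__main__":
    for _ in range(5):
        print(generate())
```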
--- paper_title: A whitebox approach for automated security testing of Android applications on the cloud paper_content: By changing the way software is delivered to end-users, markets for mobile apps create a false sense of security: apps are downloaded from a market that can potentially be regulated. In practice, this is far from the truth; instead, there has been evidence that security is not one of the primary design tenets for the mobile app stores. Recent studies have indicated mobile markets are harboring apps that are either malicious or vulnerable, leading to compromises of millions of devices. The key technical obstacle for the organizations overseeing these markets is the lack of practical and automated mechanisms to assess the security of mobile apps, given that thousands of apps are added and updated on a daily basis. In this paper, we provide an overview of a multi-faceted project targeted at automatically testing the security and robustness of Android apps in a scalable manner. We describe an Android-specific program analysis technique capable of generating a large number of test cases for fuzzing an app, as well as a test bed that, given the generated test cases, executes them in parallel on numerous emulated Androids running on the cloud. --- paper_title: BlendFuzz: A Model-Based Framework for Fuzz Testing Programs with Grammatical Inputs paper_content: Fuzz testing has been widely used in practice to detect software vulnerabilities. Traditional fuzzing tools typically use blocks to model program input. Despite the demonstrated success of this approach, its effectiveness is inherently limited when applied to test programs that process grammatical inputs, where the input data are mainly human-readable text with complex structures that are specified by a formal grammar. In this paper we present BlendFuzz, a fuzz testing framework that is grammar-aware. It works by breaking a set of existing test cases into units of grammar components, then using these units as variants to restructure existent test data, resulting in a wider range of test cases that have the potential to explore previously uncovered corner cases when used in testing. We've implemented this framework along with two language fuzzers on top of it. Experiments with these fuzzers have shown improved code coverage, and field testing has revealed over two dozen previously unreported bugs in real-world applications, with seven of them being medium or high risk zero-day vulnerabilities. --- paper_title: Well There’s Your Problem: Isolating the Crash-Inducing Bits in a Fuzzed File paper_content: Mutational input testing (fuzzing, and in particular dumb fuzzing) is an effective technique for discovering vulnerabilities in software. However, many of the bitwise changes in fuzzed input files are not relevant to the actual software crashes found. In this report, we describe an algorithm that efficiently reverts bits from the fuzzed file to those found in the original seed file, keeping only the minimal bits required to recreate the crash under investigation. This technique reduces the complexity of analyzing a crashing test case by eliminating the changes to the seed file that are not essential to the crash being evaluated. --- paper_title: Program-Adaptive Mutational Fuzzing paper_content: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input.
The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time. --- paper_title: FLAX: Systematic Discovery of Client-side Validation Vulnerabilities in Rich Web Applications paper_content: The complexity of the client-side components of web applications has exploded with the increase in popularity of web 2.0 applications. Today, traditional desktop applications, such as document viewers, presentation tools and chat applications are commonly available as online JavaScript applications. Previous research on web vulnerabilities has primarily concentrated on flaws in the server-side components of web applications. This paper highlights a new class of vulnerabilities, which we term client-side validation (or CSV) vulnerabilities. CSV vulnerabilities arise from unsafe usage of untrusted data in the client-side code of the web application that is typically written in JavaScript. In this paper, we demonstrate that they can result in a broad spectrum of attacks. Our work provides empirical evidence that CSV vulnerabilities are not merely conceptual but are prevalent in today’s web applications. We propose dynamic analysis techniques to systematically discover vulnerabilities of this class. The techniques are light-weight, efficient, and have no false positives. We implement our techniques in a prototype tool called FLAX, which scales to real-world applications and has discovered 11 vulnerabilities in the wild so far. --- paper_title: Effective random testing of concurrent programs paper_content: Multithreaded concurrent programs often exhibit wrong behaviors due to unintended interferences among the concurrent threads. Such errors are often hard to find because they typically manifest under very specific thread schedules. Traditional testing, which pays no attention to thread schedules and non-deterministically exercises a few arbitrary schedules, often misses such bugs. Traditional model checking techniques, which try to systematically explore all thread schedules, give very high confidence in the correctness of the system, but, unfortunately, they suffer from the state explosion problem. Recently, dynamic partial order techniques have been proposed to alleviate the problem. However, such techniques fail for large programs because the state space remains large in spite of reduction. An inexpensive and a simple alternative approach is to perform random testing by choosing thread schedules at random. We show that such a naive approach often explores some states with very high probability compared to the others. We propose a random partial order sampling algorithm (or RAPOS) that partly removes this non-uniformity in sampling the state space. We empirically compare the proposed algorithm with the simple random testing algorithm and show that the former outperforms the latter --- paper_title: Systematic Fuzzing and Testing of TLS Libraries paper_content: We present TLS-Attacker, an open source framework for evaluating the security of TLS libraries. 
TLS-Attacker allows security engineers to create custom TLS message flows and arbitrarily modify message contents using a simple interface in order to test the behavior of their libraries. Based on TLS-Attacker, we present a two-stage fuzzing approach to evaluate TLS server behavior. Our approach automatically searches for cryptographic failures and boundary violation vulnerabilities. It allowed us to find unusual padding oracle vulnerabilities and overflows/overreads in widely used TLS libraries, including OpenSSL, Botan, and MatrixSSL. Our findings motivate developers to create comprehensive test suites, including positive as well as negative tests, for the evaluation of TLS libraries. We use TLS-Attacker to create such a test suite framework which finds further problems in Botan. --- paper_title: KameleonFuzz: evolutionary fuzzing for black-box XSS detection paper_content: Fuzz testing consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? Where to observe its effects? In this paper, we specifically address the questions: How to fuzz a parameter? How to observe its effects? To address these questions, we propose KameleonFuzz, a black-box Cross Site Scripting (XSS) fuzzer for web applications. KameleonFuzz can not only generate malicious inputs to exploit XSS, but also detect how close it is revealing a vulnerability. The malicious inputs generation and evolution is achieved with a genetic algorithm, guided by an attack grammar. A double taint inference, up to the browser parse tree, permits to detect precisely whether an exploitation attempt succeeded. Our evaluation demonstrates no false positives and high XSS revealing capabilities: KameleonFuzz detects several vulnerabilities missed by other black-box scanners. --- paper_title: Model-based whitebox fuzzing for program binaries paper_content: Many real-world programs take highly structured and very complex inputs. The automated testing of such programs is non-trivial. If the test input does not adhere to a specific file format, the program returns a parser error. For symbolic execution-based whitebox fuzzing the corresponding error handling code becomes a significant time sink. Too much time is spent in the parser exploring too many paths leading to trivial parser errors. Naturally, the time is better spent exploring the functional part of the program where failure with valid input exposes deep and real bugs in the program. In this paper, we suggest to leverage information about the file format and the data chunks of existing, valid files to swiftly carry the exploration beyond the parser code. We call our approach Model-based Whitebox Fuzzing (MoWF) because the file format input model of blackbox fuzzers can be exploited as a constraint on the vast input space to rule out most invalid inputs during path exploration in symbolic execution. We evaluate on 13 vulnerabilities in 8 large program binaries with 6 separate file formats and found that MoWF exposes all vulnerabilities while both, traditional whitebox fuzzing and model-based blackbox fuzzing, expose only less than half, respectively. Our experiments also demonstrate that MoWF exposes 70% vulnerabilities without any seed inputs. 
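A minimal sketch of the model-based idea discussed in the MoWF entry above: encode a (here hypothetical) chunked file format once, keep the structural fields valid, and fuzz only the payload bytes so that generated files survive the parser and reach deeper code. The magic value, length field, and mutation budget are illustrative assumptions, not MoWF's actual input model.

```python
# Format-aware mutation sketch: rebuild the structural fields around a
# bit-flipped payload so every generated test case still parses.
import os
import random
import struct

def build_file(payload: bytes) -> bytes:
    # Hypothetical format: 4-byte magic, 4-byte little-endian length, payload.
    return b"FMT0" + struct.pack("<I", len(payload)) + payload

def mutate_payload(data: bytes, n_flips: int = 8) -> bytes:
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)   # flip one bit inside the payload only
    return bytes(buf)

if __name__ == "__main__":
    seed_payload = os.urandom(64)
    for i in range(3):
        with open(f"testcase_{i}.bin", "wb") as f:
            f.write(build_file(mutate_payload(seed_payload)))
```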
--- paper_title: kb-anonymity: a model for anonymized behaviour-preserving test and debugging data paper_content: It is often very expensive and practically infeasible to generate test cases that can exercise all possible program states in a program. This is especially true for a medium or large industrial system. In practice, industrial clients of the system often have a set of input data collected either before the system is built or after the deployment of a previous version of the system. Such data are highly valuable as they represent the operations that matter in a client's daily business and may be used to extensively test the system. However, such data often carries sensitive information and cannot be released to third-party development houses. For example, a healthcare provider may have a set of patient records that are strictly confidential and cannot be used by any third party. Simply masking sensitive values alone may not be sufficient, as the correlation among fields in the data can reveal the masked information. Also, masked data may exhibit different behavior in the system and become less useful than the original data for testing and debugging. For the purpose of releasing private data for testing and debugging, this paper proposes the kb-anonymity model, which combines the k-anonymity model commonly used in the data mining and database areas with the concept of program behavior preservation. Like k-anonymity, kb-anonymity replaces some information in the original data to ensure privacy preservation so that the replaced data can be released to third-party developers. Unlike k-anonymity, kb-anonymity ensures that the replaced data exhibits the same kind of program behavior exhibited by the original data so that the replaced data may still be useful for the purposes of testing and debugging. We also provide a concrete version of the model under three particular configurations and have successfully applied our prototype implementation to three open source programs, demonstrating the utility and scalability of our prototype. --- paper_title: Race directed random testing of concurrent programs paper_content: Bugs in multi-threaded programs often arise due to data races. Numerous static and dynamic program analysis techniques have been proposed to detect data races. We propose a novel randomized dynamic analysis technique that utilizes potential data race information obtained from an existing analysis tool to separate real races from false races without any need for manual inspection. Specifically, we use potential data race information obtained from an existing dynamic analysis technique to control a random scheduler of threads so that real race conditions get created with very high probability and those races get resolved randomly at runtime. Our approach has several advantages over existing dynamic analysis tools. First, we can create a real race condition and resolve the race randomly to see if an error can occur due to the race. Second, we can replay a race revealing execution efficiently by simply using the same seed for random number generation--we do not need to record the execution. Third, our approach has very low overhead compared to other precise dynamic race detection techniques because we only track all synchronization operations and a single pair of memory access statements that are reported to be in a potential race by an existing analysis. 
We have implemented the technique in a prototype tool for Java and have experimented on a number of large multi-threaded Java programs. We report a number of previously known and unknown bugs and real races in these Java programs. --- paper_title: Autodafé: an act of software torture paper_content: Automated vulnerability searching tools have led to a dramatic increase of the rate at which such flaws are discovered. One particular searching technique is fault injection i.e. insertion of random data into input files, buffers or protocol packets, combined with a systematic monitoring of memory violations. Even if these tools allow to uncover a lot of vulnerabilities, they are still very primitive; despite their poor efficiency, they are useful because of the very high density of such vulnerabilities in modern software. This paper presents an innovative buffer overflow uncovering technique, which uses a more thorough and reliable approach. This technique, called: Fuzzing by Weighting Attacks with Markers, is a specialized kind of fault injection, which does not need source code or special compilation for the monitored program. As a proof of concept of the efficiency of this technique, a tool called Autodafe has been developed. It allows to detect automatically an impressive number of buffer overflow vulnerabilities. --- paper_title: Revolutionizing the Field of Grey-box Attack Surface Testing with Evolutionary Fuzzing paper_content: Runtime code coverage analysis is feasible and useful when application source code is not available. An evolutionary test tool receiving such statistics can use that information as fitness for pools of sessions to actively learn the interface protocol. We call this activity grey-box fuzzing. We intend to show that, when applicable, grey-box fuzzing is more effective at finding bugs than RFC compliant or capture-replay mutation black-box tools. This research is focused on building a better/new breed of fuzzer. The impact of which is the discovery of difficult to find bugs in real world applications which are accessible (not theoretical). We have successfully combined an evolutionary approach with a debugged target to get real-time grey-box code coverage (CC) fitness data. We build upon existing test tool General Purpose Fuzzer (GPF) [8], and existing reverse engineering and debugging framework PaiMei [10] to accomplish this. We call our new tool the Evolutionary Fuzzing System (EFS), which is the initial realization of my PhD thesis. We have shown that it is possible for our system to learn the targets language (protocol) as target communication sessions become more fit over time. We have also shown that this technique works to find bugs in a real world application. Initial results are promising though further testing is still underway. This paper will explain EFS, describing its unique features, and present preliminary results for one test case. We will also discuss ongoing research efforts. First we begin with some background and related works. Previous Evolutionary Testing Work “Evolutionary Testing uses evolutionary algorithms to search for software test data. For white-box testing criteria, each uncovered structure-for example a program statement or branch-is taken as the individual target of a test data search. With certain types of programs, however, the approach degenerates into a random search, due to a lack of guidance to the required test data. 
Often this is because the fitness function does not take into account data dependencies within the program under test, and the fact that certain program statements need to have been executed prior to the target structure in order for it to be feasible. For instance, the outcome of a target branching condition may be dependent on a variable having a special value that is only set in a special circumstance, for example, a special flag or enumeration value denoting an unusual condition; a unique return value from a function call indicating that an error has occurred, or a counter variable only incremented under certain conditions. Without specific knowledge of such dependencies, the fitness landscape may contain coarse, flat, or even deceptive areas, causing the evolutionary search to stagnate and fail. The problem of flag variables in particular has received much interest from researchers (Baresel et al., 2004; Baresel and Sthamer, 2003; Bottaci, 2002; Harman et al., 2002), but there has been little attention with regards to the broader problem as described. [1]” The above quote is from a McMinn paper that is pushing forward the field of traditional evolutionary testing. However, in this paper we propose a method for performing evolutionary testing (ET) that does not require source code. This is useful for third-party testing, verification, and security audits when the source code of the test target will not be provided. Our approach is to track the portions of code executed (“hits”) during runtime via a debugger. Previous static analysis of the compiled code allows the debugger to set breakpoints on functions (funcs) or basic blocks (BBs). We partially overcome the traditional problems of evolutionary testing by the use of a seed file, which gives the evolutionary algorithm hints about the nature of the protocol to learn. Our approach works differently from traditional ET in two important ways: 1. We use a grey-box style of testing that allows us to proceed without source code. 2. We search for sequences of test data, known as sessions, which fully define the documented and undocumented features of the interface under test (protocol discovery). This is very similar to finding test data to cover every source code branch via ET. However, the administration of discovered test data happens during the search. Thus, test results are discovered as our algorithm runs. Robustness issues are recorded in the form of crash files and MySQL data, and can be further explored for exploitable conditions while the algorithm continues to run. --- paper_title: DIFUZE: Interface Aware Fuzzing for Kernel Drivers paper_content: Device drivers are an essential part of modern Unix-like systems to handle operations on physical devices, from hard disks and printers to digital cameras and Bluetooth speakers. The surge of new hardware, particularly on mobile devices, introduces an explosive growth of device drivers in system kernels. Many such drivers are provided by third-party developers, which are susceptible to security vulnerabilities and lack proper vetting. Unfortunately, the complex input data structures for device drivers render traditional analysis tools, such as fuzz testing, less effective, and so far, research on kernel driver security is comparatively sparse. In this paper, we present DIFUZE, an interface-aware fuzzing tool to automatically generate valid inputs and trigger the execution of the kernel drivers.
We leverage static analysis to compose correctly-structured input in the userspace to explore kernel drivers. DIFUZE is fully automatic, ranging from identifying driver handlers, to mapping to device file names, to constructing complex argument instances. We evaluate our approach on seven modern Android smartphones. The results show that DIFUZE can effectively identify kernel driver bugs, and reports 32 previously unknown vulnerabilities, including flaws that lead to arbitrary code execution. --- paper_title: An Empirical Study of the Reliability of UNIX Utilities paper_content: The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig. --- paper_title: Enemy of the State: a State-aware Black-box Web Vulnerability Scanner paper_content: Black-box web vulnerability scanners are a popular choice for finding security vulnerabilities in web applications in an automated fashion. These tools operate in a point-and-shootmanner, testing any web application-- regardless of the server-side language--for common security vulnerabilities. Unfortunately, black-box tools suffer from a number of limitations, particularly when interacting with complex applications that have multiple actions that can change the application's state. If a vulnerability analysis tool does not take into account changes in the web application's state, it might overlook vulnerabilities or completely miss entire portions of the web application. ::: ::: We propose a novel way of inferring the web application's internal state machine from the outside--that is, by navigating through the web application, observing differences in output, and incrementally producing a model representing the web application's state. ::: ::: We utilize the inferred state machine to drive a black-box web application vulnerability scanner. Our scanner traverses a web application's state machine to find and fuzz user-input vectors and discover security flaws. We implemented our technique in a prototype crawler and linked it to the fuzzing component from an open-source web vulnerability scanner. ::: ::: We show that our state-aware black-box web vulnerability scanner is able to not only exercise more code of the web application, but also discover vulnerabilities that other vulnerability scanners miss. --- paper_title: T-Fuzz: Model-Based Fuzzing for Robustness Testing of Telecommunication Protocols paper_content: Telecommunication networks are crucial in today's society since critical socio-economical and governmental functions depend upon them. High availability requirements, such as the "five nines" uptime availability, permeate the development of telecommunication applications from their design to their deployment. In this context, robustness testing plays a fundamental role in software quality assurance. 
We present T-Fuzz - a novel fuzzing framework that integrates with existing conformance testing environment. Automated model extraction of telecommunication protocols is provided to enable better code testing coverage. The T-Fuzz prototype has been fully implemented and tested on the implementation of a common LTE protocol within existing testing facilities. We provide an evaluation of our framework from both a technical and a qualitative point of view based on feedback from key testers. T-Fuzz has shown to enhance the existing development already in place by finding previously unseen unexpected behaviour in the system. Furthermore, according to the testers, T-Fuzz is easy to use and would likely result in time savings as well as more robust code. --- paper_title: KiF: a stateful SIP fuzzer paper_content: With the recent evolution in the VoIP market, where more and more devices and services are being pushed on a very promising market, assuring their security becomes crucial. Among the most dangerous threats to VoIP, failures and bugs in the software implementation will still rank high on the list of vulnerabilities. In this paper we address the issue of detecting such vulnerabilities using a stateful fuzzer. We describe an automated attack approach capable to self-improve and to track the state context of a target device. We implemented our approach and were able to discover vulnerabilities in market leading and well known equipments and software. --- paper_title: Taint-based directed whitebox fuzzing paper_content: We present a new automated white box fuzzing technique and a tool, BuzzFuzz, that implements this technique. Unlike standard fuzzing techniques, which randomly change parts of the input file with little or no information about the underlying syntactic structure of the file, BuzzFuzz uses dynamic taint tracing to automatically locate regions of original seed input files that influence values used at key program attack points (points where the program may contain an error). BuzzFuzz then automatically generates new fuzzed test input files by fuzzing these identified regions of the original seed input files. Because these new test files typically preserve the underlying syntactic structure of the original seed input files, they tend to make it past the initial input parsing components to exercise code deep within the semantic core of the computation. We have used BuzzFuzz to automatically find errors in two open-source applications: Swfdec (an Adobe Flash player) and MuPDF (a PDF viewer). Our results indicate that our new directed fuzzing technique can effectively expose errors located deep within large programs. Because the directed fuzzing technique uses taint to automatically discover and exploit information about the input file format, it is especially appropriate for testing programs that have complex, highly structured input file formats. --- paper_title: Coverage-based Greybox Fuzzing as Markov Chain paper_content: Coverage-based Greybox Fuzzing (CGF) is a random testing approach that requires no program analysis. A new test is generated by slightly mutating a seed input. If the test exercises a new and interesting path, it is added to the set of seeds; otherwise, it is discarded. We observe that most tests exercise the same few "high-frequency" paths and develop strategies to explore significantly more paths with the same number of tests by gravitating towards low-frequency paths. 
We explain the challenges and opportunities of CGF using a Markov chain model which specifies the probability that fuzzing the seed that exercises path i generates an input that exercises path j. Each state (i.e., seed) has an energy that specifies the number of inputs to be generated from that seed. We show that CGF is considerably more efficient if energy is inversely proportional to the density of the stationary distribution and increases monotonically every time that seed is chosen. Energy is controlled with a power schedule. We implemented the exponential schedule by extending AFL. In 24 hours, AFLFAST exposes 3 previously unreported CVEs that are not exposed by AFL and exposes 6 previously unreported CVEs 7x faster than AFL. AFLFAST produces at least an order of magnitude more unique crashes than AFL. --- paper_title: Many-core compiler fuzzing paper_content: We address the compiler correctness problem for many-core systems through novel applications of fuzz testing to OpenCL compilers. Focusing on two methods from prior work, random differential testing and testing via equivalence modulo inputs (EMI), we present several strategies for random generation of deterministic, communicating OpenCL kernels, and an injection mechanism that allows EMI testing to be applied to kernels that otherwise exhibit little or no dynamically-dead code. We use these methods to conduct a large, controlled testing campaign with respect to 21 OpenCL (device, compiler) configurations, covering a range of CPU, GPU, accelerator, FPGA and emulator implementations. Our study provides independent validation of claims in prior work related to the effectiveness of random differential testing and EMI testing, proposes novel methods for lifting these techniques to the many-core setting and reveals a significant number of OpenCL compiler bugs in commercial implementations. --- paper_title: CollAFL: Path Sensitive Fuzzing paper_content: Coverage-guided fuzzing is a widely used and effective solution to find software vulnerabilities. Tracking code coverage and utilizing it to guide fuzzing are crucial to coverage-guided fuzzers. However, tracking full and accurate path coverage is infeasible in practice due to the high instrumentation overhead. Popular fuzzers (e.g., AFL) often use coarse coverage information, e.g., edge hit counts stored in a compact bitmap, to achieve highly efficient greybox testing. Such inaccuracy and incompleteness in coverage introduce serious limitations to fuzzers. First, it causes path collisions, which prevent fuzzers from discovering potential paths that lead to new crashes. More importantly, it prevents fuzzers from making wise decisions on fuzzing strategies. In this paper, we propose a coverage sensitive fuzzing solution CollAFL. It mitigates path collisions by providing more accurate coverage information, while still preserving low instrumentation overhead. It also utilizes the coverage information to apply three new fuzzing strategies, promoting the speed of discovering new paths and vulnerabilities. We implemented a prototype of CollAFL based on the popular fuzzer AFL and evaluated it on 24 popular applications. The results showed that path collisions are common, i.e., up to 75% of edges could collide with others in some applications, and CollAFL could reduce the edge collision ratio to nearly zero. Moreover, armed with the three fuzzing strategies, CollAFL outperforms AFL in terms of both code coverage and vulnerability discovery. 
On average, CollAFL covered 20% more program paths, found 320% more unique crashes and 260% more bugs than AFL in 200 hours. In total, CollAFL found 157 new security bugs with 95 new CVEs assigned. --- paper_title: IFuzzer: An Evolutionary Interpreter Fuzzer Using Genetic Programming paper_content: We present an automated evolutionary fuzzing technique to find bugs in JavaScript interpreters. Fuzzing is an automated black box testing technique used for finding security vulnerabilities in the software by providing random data as input. However, in the case of an interpreter, fuzzing is challenging because the inputs are pieces of code that should be syntactically/semantically valid to pass the interpreter’s elementary checks. On the other hand, the fuzzed input should also be uncommon enough to trigger exceptional behavior in the interpreter, such as crashes, memory leaks and failing assertions. In our approach, we use evolutionary computing techniques, specifically genetic programming, to guide the fuzzer in generating uncommon input code fragments that may trigger exceptional behavior in the interpreter. We implement a prototype named IFuzzer to evaluate our technique on real-world examples. IFuzzer uses the language grammar to generate valid inputs. We applied IFuzzer first on an older version of the JavaScript interpreter of Mozilla (to allow for a fair comparison to existing work) and found 40 bugs, of which 12 were exploitable. On subsequently targeting the latest builds of the interpreter, IFuzzer found 17 bugs, of which four were security bugs. --- paper_title: perf_fuzzer: Targeted Fuzzing of the perf_event_open() System Call paper_content: Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of user-exposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf_event_open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf_event usage patterns we develop a custom tool, perf_fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf_fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls. --- paper_title: Input generation via decomposition and re-stitching: finding bugs in Malware paper_content: Attackers often take advantage of vulnerabilities in benign software, and the authors of benign software must search their code for bugs in hopes of finding vulnerabilities before they are exploited. But there has been little research on the converse question of whether defenders can turn the tables by finding vulnerabilities in malware. We provide a first affirmative answer to that question.
We introduce a new technique, stitched dynamic symbolic execution, that makes it possible to use exploration techniques based on symbolic execution in the presence of functionalities that are common in malware and otherwise hard to analyze, such as decryption and checksums. The technique is based on decomposing the constraints induced by a program, solving only a subset, and then re-stitching the constraint solution into a complete input. We implement the approach in a system for x86 binaries, and apply it to 4 prevalent families of bots and other malware. We find 6 bugs that could be exploited by a network attacker to terminate or subvert the malware. These bugs have persisted across malware revisions for months, and even years. We discuss the possible applications and ethical considerations of this new capability. --- paper_title: Protocol state fuzzing of TLS implementations paper_content: We describe a largely automated and systematic analysis of TLS implementations by what we call 'protocol state fuzzing': we use state machine learning to infer state machines from protocol implementations, using only blackbox testing, and then inspect the inferred state machines to look for spurious behaviour which might be an indication of flaws in the program logic. For detecting the presence of spurious behaviour the approach is almost fully automatic: we automatically obtain state machines and any spurious behaviour is then trivial to see. Detecting whether the spurious behaviour introduces exploitable security weaknesses does require manual investigation. Still, we take the point of view that any spurious functionality in a security protocol implementation is dangerous and should be removed. We analysed both server- and client-side implementations with a test harness that supports several key exchange algorithms and the option of client certificate authentication. We show that this approach can catch an interesting class of implementation flaws that is apparently common in security protocol implementations: in three of the TLS implementations analysed new security flaws were found (in GnuTLS, the Java Secure Socket Extension, and OpenSSL). This shows that protocol state fuzzing is a useful technique to systematically analyse security protocol implementations. As our analysis of different TLS implementations resulted in different and unique state machines for each one, the technique can also be used for fingerprinting TLS implementations. --- paper_title: T-Fuzz: Fuzzing by Program Transformation paper_content: Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program.
Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries. --- paper_title: Transformation-aware Exploit Generation using a HI-CFG paper_content: A common task for security analysts is to determine whether potentially unsafe code constructs (as found by static analysis or code review) can be triggered by an attacker-controlled input to the program under analysis. We refer to this problem as proof-of-concept (POC) exploit generation. Exploit generation is challenging to automate because it requires precise reasoning across a large code base; in practice it is usually a manual task. An intuitive approach to exploit generation is to break down a program's relevant computation into a sequence of transformations that map an input value into the value that can trigger an exploit. We automate this intuition by describing an approach to discover the buffer structure (the chain of buffers used between transformations) of a program, and use this structure to construct an exploit input by inverting one transformation at a time. We propose a new program representation, a hybrid information- and control-flow graph (HI-CFG), and give algorithms to build a HI-CFG from instruction traces. We then describe how to guide program exploration using symbolic execution to efficiently search for transformation pre-images. We implement our techniques in a tool that operates on applications in x86 binary form. In two case studies we discuss how our tool creates POC exploits for (1) a vulnerability in a PDF rendering library that is reachable through multiple different transformation stages and (2) a vulnerability in the processing stage of a specific document format in AbiWord. --- paper_title: Pin: building customized program analysis tools with dynamic instrumentation paper_content: Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C/C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set. The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures.
However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website. --- paper_title: QEMU, a Fast and Portable Dynamic Translator paper_content: We present the internals of QEMU, a fast machine emulator using an original portable dynamic translator. It emulates several CPUs (x86, PowerPC, ARM and Sparc) on several hosts (x86, PowerPC, ARM, Sparc, Alpha and MIPS). QEMU supports full system emulation in which a complete and unmodified operating system is run in a virtual machine and Linux user mode emulation where a Linux process compiled for one target CPU can be run on another CPU. --- paper_title: PEBIL: Efficient static binary instrumentation for Linux paper_content: Binary instrumentation facilitates the insertion of additional code into an executable in order to observe or modify the executable's behavior. There are two main approaches to binary instrumentation: static and dynamic binary instrumentation. In this paper we present a static binary instrumentation toolkit for Linux on the x86/x86_64 platforms, PEBIL (PMaC's Efficient Binary Instrumentation Toolkit for Linux). PEBIL is similar to other toolkits in terms of how additional code is inserted into the executable. However, it is designed with the primary goal of producing efficient-running instrumented code. To this end, PEBIL uses function level code relocation in order to insert large but fast control structures. Furthermore, the PEBIL API provides tool developers with the means to insert lightweight hand-coded assembly rather than relying solely on the insertion of instrumentation functions. These features enable the implementation of efficient instrumentation tools with PEBIL. The overhead introduced for basic block counting by PEBIL is an average of 65% of the overhead of Dyninst, 41% of the overhead of Pin, 15% of the overhead of DynamoRIO, and 8% of the overhead of Valgrind. --- paper_title: Efficient, Transparent and Comprehensive Runtime Code Manipulation paper_content: This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamically-generated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction—which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools—it can now only be done at runtime. 
Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. --- paper_title: A platform for secure static binary instrumentation paper_content: Program instrumentation techniques form the basis of many recent software security defenses, including defenses against common exploits and security policy enforcement. As compared to source-code instrumentation, binary instrumentation is easier to use and more broadly applicable due to the ready availability of binary code. Two key features needed for security instrumentations are (a) it should be applied to all application code, including code contained in various system and application libraries, and (b) it should be non-bypassable. So far, dynamic binary instrumentation (DBI) techniques have provided these features, whereas static binary instrumentation (SBI) techniques have lacked them. These features, combined with ease of use, have made DBI the de facto choice for security instrumentations. However, DBI techniques can incur high overheads in several common usage scenarios, such as application startups, system-calls, and many real-world applications. We therefore develop a new platform for secure static binary instrumentation (PSI) that overcomes these drawbacks of DBI techniques, while retaining the security, robustness and ease-of-use features. We illustrate the versatility of PSI by developing several instrumentation applications: basic block counting, shadow stack defense against control-flow hijack and return-oriented programming attacks, and system call and library policy enforcement. While being competitive with the best DBI tools on CPU-intensive SPEC 2006 benchmark, PSI provides an order of magnitude reduction in overheads on a collection of real-world applications. --- paper_title: Valgrind: a framework for heavyweight dynamic binary instrumentation paper_content: Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. 
In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values-a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO. --- paper_title: CollAFL: Path Sensitive Fuzzing paper_content: Coverage-guided fuzzing is a widely used and effective solution to find software vulnerabilities. Tracking code coverage and utilizing it to guide fuzzing are crucial to coverage-guided fuzzers. However, tracking full and accurate path coverage is infeasible in practice due to the high instrumentation overhead. Popular fuzzers (e.g., AFL) often use coarse coverage information, e.g., edge hit counts stored in a compact bitmap, to achieve highly efficient greybox testing. Such inaccuracy and incompleteness in coverage introduce serious limitations to fuzzers. First, it causes path collisions, which prevent fuzzers from discovering potential paths that lead to new crashes. More importantly, it prevents fuzzers from making wise decisions on fuzzing strategies. In this paper, we propose a coverage sensitive fuzzing solution CollAFL. It mitigates path collisions by providing more accurate coverage information, while still preserving low instrumentation overhead. It also utilizes the coverage information to apply three new fuzzing strategies, promoting the speed of discovering new paths and vulnerabilities. We implemented a prototype of CollAFL based on the popular fuzzer AFL and evaluated it on 24 popular applications. The results showed that path collisions are common, i.e., up to 75% of edges could collide with others in some applications, and CollAFL could reduce the edge collision ratio to nearly zero. Moreover, armed with the three fuzzing strategies, CollAFL outperforms AFL in terms of both code coverage and vulnerability discovery. On average, CollAFL covered 20% more program paths, found 320% more unique crashes and 260% more bugs than AFL in 200 hours. In total, CollAFL found 157 new security bugs with 95 new CVEs assigned. --- paper_title: The Past, Present, and Future of Cyberdyne paper_content: Cyberdyne is a distributed system that discovers vulnerabilities in third-party, off-the-shelf binary programs. It competed in all rounds of DARPA’s Cyber Grand Challenge (CGC). In the qualifying event, Cyberdyne was the second most effective bug-finding system. In the final event, it was the bug-finding arm of the fourth-place team. Since then, Cyberdyne has been successfully applied during commercial code audits. The first half of this article describes the evolution and implementation of Cyberdyne and its bug-finding tools. The second half of the article looks at what it took to have Cyberdyne audit real applications and how we performed the first paid automated security audit for the Mozilla Secure Open Source Fund. We conclude with a discussion about the future of automated security audits. 
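To make the coverage bookkeeping discussed in the CollAFL entry above concrete, the following Python sketch contrasts AFL-style edge hashing, where two distinct edges can land in the same slot of a fixed-size bitmap (a path collision), with a CollAFL-flavoured scheme that gives every edge its own identifier. The map size, block identifiers, and hashing formula are simplified illustrative assumptions, not code taken from either tool.

```python
# Illustrative sketch only: simplified AFL-style edge hashing versus a
# collision-free per-edge table in the spirit of CollAFL. Constants and
# block identifiers are invented for the example.
MAP_SIZE = 1 << 16  # size of the shared coverage bitmap (assumption)

def afl_style_slot(prev_block, cur_block):
    """Hash the (prev, cur) edge into one bitmap slot, as coarse CGF tools do."""
    return (cur_block ^ (prev_block >> 1)) % MAP_SIZE

def unique_edge_slot(prev_block, cur_block, edge_table):
    """Assign each distinct edge its own stable slot, so edges never collide."""
    return edge_table.setdefault((prev_block, cur_block), len(edge_table))

if __name__ == "__main__":
    # Two different control-flow edges that collide under the hashed scheme:
    print(afl_style_slot(2, 5), afl_style_slot(8, 0))  # both map to slot 4
    table = {}
    print(unique_edge_slot(2, 5, table), unique_edge_slot(8, 0, table))  # 0 1
```

Collision-free slots matter because a coverage-guided fuzzer discards inputs whose apparently already-seen coverage is in fact a new path hidden by a collision.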
--- paper_title: MagicFuzzer: Scalable deadlock detection for large-scale applications paper_content: We present MagicFuzzer, a novel dynamic deadlock detection technique. Unlike existing techniques to locate potential deadlock cycles from an execution, it iteratively prunes lock dependencies that each has no incoming or outgoing edge. Combining with a novel thread-specific strategy, it dramatically shrinks the size of lock dependency set for cycle detection, improving the efficiency and scalability of such a detection significantly. In the real deadlock confirmation phase, it uses a new strategy to actively schedule threads of an execution against the whole set of potential deadlock cycles. We have implemented a prototype and evaluated it on large-scale C/C++ programs. The experimental results confirm that our technique is significantly more effective and efficient than existing techniques. --- paper_title: Detecting atomic-set serializability violations in multithreaded programs through active randomized testing paper_content: Concurrency bugs are notoriously difficult to detect because there can be vast combinations of interleavings among concurrent threads, yet only a small fraction can reveal them. Atomic-set serializability characterizes a wide range of concurrency bugs, including data races and atomicity violations. In this paper, we propose a two-phase testing technique that can effectively detect atomic-set serializability violations. In Phase I, our technique infers potential violations that do not appear in a concrete execution and prunes those interleavings that are violation-free. In Phase II, our technique actively controls a thread scheduler to enumerate these potential scenarios identified in Phase I to look for real violations. We have implemented our technique as a prototype system AssetFuzzer and applied it to a number of subject programs for evaluating concurrency defect analysis techniques. The experimental results show that AssetFuzzer can identify more concurrency bugs than two recent testing tools RaceFuzzer and AtomFuzzer. --- paper_title: Randomized active atomicity violation detection in concurrent programs paper_content: Atomicity is an important specification that enables programmers to understand atomic blocks of code in a multi-threaded program as if they are sequential. This significantly simplifies the programmer's job to reason about correctness. Several modern multithreaded programming languages provide no built-in support to ensure atomicity; instead they rely on the fact that programmers would use locks properly in order to guarantee that atomic code blocks are indeed atomic. However, improper use of locks can sometimes fail to ensure atomicity. Therefore, we need tools that can check atomicity properties of lock-based code automatically. We propose a randomized dynamic analysis technique to detect a special, but important, class of atomicity violations that are often found in real-world programs. Specifically, our technique modifies the existing Java thread scheduler behavior to create atomicity violations with high probability. Our approach has several advantages over existing dynamic analysis tools. First, we can create a real atomicity violation and see if an exception can be thrown. Second, we can replay an atomicity violating execution by simply using the same seed for random number generation---we do not need to record the execution. Third, we give no false warnings unlike existing dynamic atomicity checking techniques. 
We have implemented the technique in a prototype tool for Java and have experimented on a number of large multi-threaded Java programs and libraries. We report a number of previously known and unknown bugs and atomicity violations in these Java programs. --- paper_title: A randomized dynamic program analysis technique for detecting real deadlocks paper_content: We present a novel dynamic analysis technique that finds real deadlocks in multi-threaded programs. Our technique runs in two stages. In the first stage, we use an imprecise dynamic analysis technique to find potential deadlocks in a multi-threaded program by observing an execution of the program. In the second stage, we control a random thread scheduler to create the potential deadlocks with high probability. Unlike other dynamic analysis techniques, our approach has the advantage that it does not give any false warnings. We have implemented the technique in a prototype tool for Java, and have experimented on a number of large multi-threaded Java programs. We report a number of previously known and unknown real deadlocks that were found in these benchmarks. --- paper_title: Synthesizing racy tests paper_content: Subtle concurrency errors in multithreaded libraries that arise because of incorrect or inadequate synchronization are often difficult to pinpoint precisely using only static techniques. On the other hand, the effectiveness of dynamic race detectors is critically dependent on multithreaded test suites whose execution can be used to identify and trigger races. Usually, such multithreaded tests need to invoke a specific combination of methods with objects involved in the invocations being shared appropriately to expose a race. Without a priori knowledge of the race, construction of such tests can be challenging. In this paper, we present a lightweight and scalable technique for synthesizing precisely these kinds of tests. Given a multithreaded library and a sequential test suite, we describe a fully automated analysis that examines sequential execution traces, and produces as its output a concurrent client program that drives shared objects via library method calls to states conducive for triggering a race. Experimental results on a variety of well-tested Java libraries yield 101 synthesized multithreaded tests in less than four minutes. Analyzing the execution of these tests using an off-the-shelf race detector reveals 187 harmful races, including several previously unreported ones. Our implementation, named NARADA, and the results of our experiments are available at http://www.csa.iisc.ernet.in/~sss/tools/narada. --- paper_title: Effective random testing of concurrent programs paper_content: Multithreaded concurrent programs often exhibit wrong behaviors due to unintended interferences among the concurrent threads. Such errors are often hard to find because they typically manifest under very specific thread schedules. Traditional testing, which pays no attention to thread schedules and non-deterministically exercises a few arbitrary schedules, often misses such bugs. Traditional model checking techniques, which try to systematically explore all thread schedules, give very high confidence in the correctness of the system, but, unfortunately, they suffer from the state explosion problem. Recently, dynamic partial order techniques have been proposed to alleviate the problem. However, such techniques fail for large programs because the state space remains large in spite of reduction. 
An inexpensive and a simple alternative approach is to perform random testing by choosing thread schedules at random. We show that such a naive approach often explores some states with very high probability compared to the others. We propose a random partial order sampling algorithm (or RAPOS) that partly removes this non-uniformity in sampling the state space. We empirically compare the proposed algorithm with the simple random testing algorithm and show that the former outperforms the latter --- paper_title: Race directed random testing of concurrent programs paper_content: Bugs in multi-threaded programs often arise due to data races. Numerous static and dynamic program analysis techniques have been proposed to detect data races. We propose a novel randomized dynamic analysis technique that utilizes potential data race information obtained from an existing analysis tool to separate real races from false races without any need for manual inspection. Specifically, we use potential data race information obtained from an existing dynamic analysis technique to control a random scheduler of threads so that real race conditions get created with very high probability and those races get resolved randomly at runtime. Our approach has several advantages over existing dynamic analysis tools. First, we can create a real race condition and resolve the race randomly to see if an error can occur due to the race. Second, we can replay a race revealing execution efficiently by simply using the same seed for random number generation--we do not need to record the execution. Third, our approach has very low overhead compared to other precise dynamic race detection techniques because we only track all synchronization operations and a single pair of memory access statements that are reported to be in a potential race by an existing analysis. We have implemented the technique in a prototype tool for Java and have experimented on a number of large multi-threaded Java programs. We report a number of previously known and unknown bugs and real races in these Java programs. --- paper_title: Optimizing seed selection for fuzzing paper_content: Randomly mutating well-formed program inputs or simply fuzzing, is a highly effective and widely used strategy to find bugs in software. Other than showing fuzzers find bugs, there has been little systematic effort in understanding the science of how to fuzz properly. In this paper, we focus on how to mathematically formulate and reason about one critical aspect in fuzzing: how best to pick seed files to maximize the total number of bugs found during a fuzz campaign. We design and evaluate six different algorithms using over 650 CPU days on Amazon Elastic Compute Cloud (EC2) to provide ground truth data. Overall, we find 240 bugs in 8 applications and show that the choice of algorithm can greatly increase the number of bugs found. We also show that current seed selection strategies as found in Peach may fare no better than picking seeds at random. We make our data set and code publicly available. --- paper_title: Billions and billions of constraints: Whitebox fuzz testing in production paper_content: We report experiences with constraint-based whitebox fuzz testing in production across hundreds of large Windows applications and over 500 machine years of computation from 2007 to 2013. Whitebox fuzzing leverages symbolic execution on binary traces and constraint solving to construct new inputs to a program. These inputs execute previously uncovered paths or trigger security vulnerabilities. Whitebox fuzzing has found one-third of all file fuzzing bugs during the development of Windows 7, saving millions of dollars in potential security vulnerabilities. The technique is in use today across multiple products at Microsoft. We describe key challenges with running whitebox fuzzing in production. We give principles for addressing these challenges and describe two new systems built from these principles: SAGAN, which collects data from every fuzzing run for further analysis, and JobCenter, which controls deployment of our whitebox fuzzing infrastructure across commodity virtual machines. Since June 2010, SAGAN has logged over 3.4 billion constraints solved, millions of symbolic executions, and tens of millions of test cases generated. Our work represents the largest scale deployment of whitebox fuzzing to date, including the largest usage ever for a Satisfiability Modulo Theories (SMT) solver. We present specific data analyses that improved our production use of whitebox fuzzing. Finally we report data on the performance of constraint solving and dynamic test generation that points toward future research problems. --- paper_title: Coverage-based Greybox Fuzzing as Markov Chain paper_content: Coverage-based Greybox Fuzzing (CGF) is a random testing approach that requires no program analysis. A new test is generated by slightly mutating a seed input. If the test exercises a new and interesting path, it is added to the set of seeds; otherwise, it is discarded. We observe that most tests exercise the same few "high-frequency" paths and develop strategies to explore significantly more paths with the same number of tests by gravitating towards low-frequency paths.
We explain the challenges and opportunities of CGF using a Markov chain model which specifies the probability that fuzzing the seed that exercises path i generates an input that exercises path j. Each state (i.e., seed) has an energy that specifies the number of inputs to be generated from that seed. We show that CGF is considerably more efficient if energy is inversely proportional to the density of the stationary distribution and increases monotonically every time that seed is chosen. Energy is controlled with a power schedule. We implemented the exponential schedule by extending AFL. In 24 hours, AFLFAST exposes 3 previously unreported CVEs that are not exposed by AFL and exposes 6 previously unreported CVEs 7x faster than AFL. AFLFAST produces at least an order of magnitude more unique crashes than AFL. --- paper_title: Scheduling black-box mutational fuzzing paper_content: Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time. --- paper_title: Probability-Based Parameter Selection for Black-Box Fuzz Testing paper_content: Dynamic, randomized-input functional testing, or black-box fuzz testing, is an effective technique for finding security vulnerabilities in software applications. Parameters for an invocation of black-box fuzz testing generally include known-good input to use as a basis for randomization (i.e., a seed file) and a specification of how much of the seed file to randomize (i.e., the range). This report describes an algorithm that applies basic statistical theory to the parameter selection problem and automates selection of seed files and ranges. This algorithm was implemented in an open-source, file-interface testing tool and was used to find and mitigate vulnerabilities in several software applications. This report generalizes the parameter selection problem, explains the algorithm, and analyzes empirical data collected from the implementation. Results of using the algorithm show a marked improvement in the efficiency of discovering unique application errors over basic parameter selection techniques.
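As a concrete reading of the AFLFast-style power schedules and seed scheduling work cited above, the short Python sketch below assigns fuzzing energy to a seed so that it grows with how often the seed has been picked and shrinks with how often its path has already been exercised. The base energy, the cap, and the exact formula are simplifying assumptions made for illustration and do not reproduce any of the cited implementations.

```python
# Minimal sketch of an AFLFast-flavoured "exponential" power schedule.
# The constants below are assumptions chosen for readability.
def assign_energy(times_chosen, path_frequency, base=16, cap=1024):
    """Return the number of mutated inputs to generate from this seed."""
    # More energy each time the seed is re-selected, less energy the more
    # often inputs have already exercised the seed's path.
    energy = base * (2 ** times_chosen) / max(path_frequency, 1)
    return int(min(max(energy, 1), cap))

# A seed on a rarely exercised path versus one on a "high-frequency" path:
print(assign_energy(times_chosen=3, path_frequency=2))       # 64: large budget
print(assign_energy(times_chosen=3, path_frequency=50_000))  # 1: minimal budget
```

The intent is simply to redirect the overall testing budget toward low-frequency paths, which is the property the Markov chain analysis above argues makes greybox fuzzing more efficient.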
--- paper_title: Hawkeye: Towards a Desired Directed Grey-box Fuzzer paper_content: Grey-box fuzzing is a practically effective approach to test real-world programs. However, most existing grey-box fuzzers lack directedness, i.e., the capability of executing towards user-specified target sites in the program. To emphasize existing challenges in directed fuzzing, we propose Hawkeye to feature four desired properties of directed grey-box fuzzers. Owing to a novel static analysis on the program under test and the target sites, Hawkeye precisely collects the information such as the call graph, function and basic block level distances to the targets. During fuzzing, Hawkeye evaluates exercised seeds based on both static information and the execution traces to generate the dynamic metrics, which are then used for seed prioritization, power scheduling and adaptive mutation. These strategies help Hawkeye to achieve better directedness and gravitate towards the target sites. We implemented Hawkeye as a fuzzing framework and evaluated it on various real-world programs under different scenarios. The experimental results showed that Hawkeye can reach the target sites and reproduce the crashes much faster than state-of-the-art grey-box fuzzers such as AFL and AFLGo. Specifically, Hawkeye can reduce the time to exposure for certain vulnerabilities from about 3.5 hours to 0.5 hour. By now, Hawkeye has detected more than 41 previously unknown crashes in projects such as Oniguruma and MJS with the target sites provided by vulnerability prediction tools; all these crashes are confirmed and 15 of them have been assigned CVE IDs. --- paper_title: FairFuzz: A Targeted Mutation Strategy for Increasing Greybox Fuzz Testing Coverage paper_content: In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease-of-use and bug-finding power. However, AFL remains limited in the bugs it can find since it simply does not cover large regions of code. If it does not cover parts of the code, it will not find bugs there. We propose a two-pronged approach to increase the coverage achieved by AFL. First, the approach automatically identifies branches exercised by few AFL-produced inputs (rare branches), which often guard code that is empirically hard to cover by naively mutating inputs. The second part of the approach is a novel mutation mask creation algorithm, which allows mutations to be biased towards producing inputs hitting a given rare branch. This mask is dynamically computed during fuzz testing and can be adapted to other testing targets. We implement this approach on top of AFL in a tool named FairFuzz. We conduct an evaluation on real-world programs against state-of-the-art versions of AFL. We find that on these programs FairFuzz achieves high branch coverage at a faster rate than state-of-the-art versions of AFL. In addition, on programs with nested conditional structure, it achieves sustained increases in branch coverage after 24 hours (average 10.6% increase). In qualitative analysis, we find that FairFuzz has an increased capacity to automatically discover keywords. 
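The rare-branch idea behind FairFuzz (previous entry) can be illustrated with a few lines of Python: branches reached by only a handful of queue entries are flagged as rare, and seeds that reach them are preferred for further mutation. The rarity cutoff, branch identifiers, and queue representation here are invented for the example and are not FairFuzz's actual data structures or its mutation mask computation.

```python
# Illustrative sketch of rare-branch targeting (assumed data layout).
from collections import Counter

def rare_branches(queue_coverage, rarity_cutoff=2):
    """queue_coverage: one set of covered branch IDs per seed in the queue."""
    hits = Counter(b for cov in queue_coverage for b in cov)
    return {b for b, n in hits.items() if n <= rarity_cutoff}

def pick_seed(queue_coverage):
    """Prefer the first seed that reaches at least one rare branch."""
    rare = rare_branches(queue_coverage)
    for index, cov in enumerate(queue_coverage):
        if cov & rare:
            return index
    return 0  # fall back to the head of the queue

queue = [{1, 2}, {1, 2, 7}, {1, 2}, {1, 2, 9}]
print(rare_branches(queue))  # branches 7 and 9 are each reached by one seed
print(pick_seed(queue))      # 1: the first seed covering a rare branch
```

Concentrating mutations on such seeds is what lets the fuzzer push past guard conditions that random mutation of arbitrary queue entries rarely satisfies.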
--- paper_title: Directed Greybox Fuzzing paper_content: Existing Greybox Fuzzers (GF) cannot be effectively directed, for instance, towards problematic changes or patches, towards critical system calls or dangerous locations, or towards functions in the stack-trace of a reported vulnerability that we wish to reproduce. In this paper, we introduce Directed Greybox Fuzzing (DGF) which generates inputs with the objective of reaching a given set of target program locations efficiently. We develop and evaluate a simulated annealing-based power schedule that gradually assigns more energy to seeds that are closer to the target locations while reducing energy for seeds that are further away. Experiments with our implementation AFLGo demonstrate that DGF outperforms both directed symbolic-execution-based whitebox fuzzing and undirected greybox fuzzing. We show applications of DGF to patch testing and crash reproduction, and discuss the integration of AFLGo into Google's continuous fuzzing platform OSS-Fuzz. Due to its directedness, AFLGo could find 39 bugs in several well-fuzzed, security-critical projects like LibXML2. 17 CVEs were assigned. --- paper_title: QTEP: quality-aware test case prioritization paper_content: Test case prioritization (TCP) is a practical activity in software testing for exposing faults earlier. Researchers have proposed many TCP techniques to reorder test cases. Among them, coverage-based TCPs have been widely investigated. Specifically, coverage-based TCP approaches leverage coverage information between source code and test cases, i.e., static code coverage and dynamic code coverage, to schedule test cases. Existing coverage-based TCP techniques mainly focus on maximizing coverage while often do not consider the likely distribution of faults in source code. However, software faults are not often equally distributed in source code, e.g., around 80% faults are located in about 20% source code. Intuitively, test cases that cover the faulty source code should have higher priorities, since they are more likely to find faults. In this paper, we present a quality-aware test case prioritization technique, QTEP, to address the limitation of existing coverage-based TCP algorithms. In QTEP, we leverage code inspection techniques, i.e., a typical statistic defect prediction model and a typical static bug finder, to detect fault-prone source code and then adapt existing coverage-based TCP algorithms by considering the weighted source code in terms of fault-proneness. Our evaluation with 16 variant QTEP techniques on 33 different versions of 7 open source Java projects shows that QTEP could improve existing coverage-based TCP techniques for both regression and new test cases. Specifically, the improvement of the best variant of QTEP for regression test cases could be up to 15.0% and on average 7.6%, and for all test cases (both regression and new test cases), the improvement could be up to 10.0% and on average 5.0%. --- paper_title: Fuzzing: Brute Force Vulnerability Discovery paper_content: Piezoelectric crystalline films which consist essentially of a crystalline zinc oxide film with a c-axis perpendicular to a substrate surface, containing 0.01 to 20.0 atomic percent of bismuth. These films are prepared by radio-frequency sputtering. --- paper_title: Comparing operating systems using robustness benchmarks paper_content: When creating mission-critical distributed systems using off-the-shelf components, it is important to assess the dependability of not only the hardware, but the software as well. This paper proposes a way to test operating system dependability. The concept of response regions is presented as a way to visualize erroneous system behavior and gain insight into failure mechanisms. A 5-point "CRASH" (catastrophic, restart, abort, silent, hindering) scale is defined for grading the severity of robustness vulnerabilities encountered. Test results from five operating systems are analyzed for robustness vulnerabilities, and exhibit a range of dependability. Robustness benchmarking comparisons of this type may provide important information to both users and designers of off-the-shelf software for dependable systems. --- paper_title: Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations paper_content: Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. 
For example, any server with a valid X.509 version 1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. --- paper_title: QuickFuzz: an automatic random fuzzer for common file formats paper_content: Fuzzing is a technique that involves testing programs using invalid or erroneous inputs. Most fuzzers require a set of valid inputs as a starting point, in which mutations are then introduced. QuickFuzz is a fuzzer that leverages QuickCheck-style random test-case generation to automatically test programs that manipulate common file formats by fuzzing. We rely on existing Haskell implementations of file-format-handling libraries found on Hackage, the community-driven Haskell code repository. We have tried QuickFuzz in the wild and found that the approach is effective in discovering vulnerabilities in real-world implementations of browsers, image processing utilities and file compressors among others. In addition, we introduce a mechanism to automatically derive random generators for the types representing these formats. QuickFuzz handles most well-known image and media formats, and can be used to test programs and libraries written in any language. --- paper_title: Language fuzzing using constraint logic programming paper_content: Fuzz testing builds confidence in compilers and interpreters. It is desirable for fuzzers to allow targeted generation of programs that showcase specific language features and behaviors. However, the predominant program generation technique used by most language fuzzers, stochastic context-free grammars, does not have this property. We propose the use of constraint logic programming (CLP) for program generation. Using CLP, testers can write declarative predicates specifying interesting programs, including syntactic features and semantic behaviors. CLP subsumes and generalizes the stochastic grammar approach. --- paper_title: Fuzzing with code fragments paper_content: Fuzz testing is an automated technique providing random data as input to a software system in the hope to expose a vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks; a JavaScript interpreter, for instance, would only accept a semantically valid program. On the other hand, the fuzzed input must be uncommon enough to trigger exceptional behavior, such as a crash of the interpreter. The LangFuzz approach resolves this conflict by using a grammar to randomly generate valid programs; the code fragments, however, partially stem from programs known to have caused invalid behavior before. 
LangFuzz is an effective tool for security testing: Applied on the Mozilla JavaScript interpreter, it discovered a total of 105 new severe vulnerabilities within three months of operation (and thus became one of the top security bug bounty collectors within this period); applied on the PHP interpreter, it discovered 18 new defects causing crashes. --- paper_title: Snooze: Toward a stateful network protocol fuzzer paper_content: Fuzzing is a well-known black-box approach to the security testing of applications. Fuzzing has many advantages in terms of simplicity and effectiveness over more complex, expensive testing approaches. Unfortunately, current fuzzing tools suffer from a number of limitations, and, in particular, they provide little support for the fuzzing of stateful protocols. In this paper, we present SNOOZE, a tool for building flexible, security-oriented, network protocol fuzzers. SNOOZE implements a stateful fuzzing approach that can be used to effectively identify security flaws in network protocol implementations. SNOOZE allows a tester to describe the stateful operation of a protocol and the messages that need to be generated in each state. In addition, SNOOZE provides attack-specific fuzzing primitives that allow a tester to focus on specific vulnerability classes. We used an initial prototype of the SNOOZE tool to test programs that implement the SIP protocol, with promising results. SNOOZE supported the creation of sophisticated fuzzing scenarios that were able to expose real-world bugs in the programs analyzed. --- paper_title: BlendFuzz: A Model-Based Framework for Fuzz Testing Programs with Grammatical Inputs paper_content: Fuzz testing has been widely used in practice to detect software vulnerabilities. Traditional fuzzing tools typically use blocks to model program input. Despite the demonstrated success of this approach, its effectiveness is inherently limited when applied to test programs that process grammatical inputs, where the input data are mainly human-readable text with complex structures that are specified by a formal grammar. In this paper we present BlendFuzz, a fuzz testing framework that is grammar-aware. It works by breaking a set of existing test cases into units of grammar components, then using these units as variants to restructure existing test data, resulting in a wider range of test cases that have the potential to explore previously uncovered corner cases when used in testing. We've implemented this framework along with two language fuzzers on top of it. Experiments with these fuzzers have shown improved code coverage, and field testing has revealed over two dozen previously unreported bugs in real-world applications, with seven of them being medium or high risk zero-day vulnerabilities. --- paper_title: Systematic Fuzzing and Testing of TLS Libraries paper_content: We present TLS-Attacker, an open source framework for evaluating the security of TLS libraries. TLS-Attacker allows security engineers to create custom TLS message flows and arbitrarily modify message contents using a simple interface in order to test the behavior of their libraries. Based on TLS-Attacker, we present a two-stage fuzzing approach to evaluate TLS server behavior. Our approach automatically searches for cryptographic failures and boundary violation vulnerabilities. It allowed us to find unusual padding oracle vulnerabilities and overflows/overreads in widely used TLS libraries, including OpenSSL, Botan, and MatrixSSL. 
Our findings motivate developers to create comprehensive test suites, including positive as well as negative tests, for the evaluation of TLS libraries. We use TLS-Attacker to create such a test suite framework which finds further problems in Botan. --- paper_title: Autodafé: an act of software torture paper_content: Automated vulnerability searching tools have led to a dramatic increase of the rate at which such flaws are discovered. One particular searching technique is fault injection, i.e., insertion of random data into input files, buffers or protocol packets, combined with a systematic monitoring of memory violations. Even though these tools can uncover a lot of vulnerabilities, they are still very primitive; despite their poor efficiency, they are useful because of the very high density of such vulnerabilities in modern software. This paper presents an innovative buffer overflow uncovering technique, which uses a more thorough and reliable approach. This technique, called Fuzzing by Weighting Attacks with Markers, is a specialized kind of fault injection, which does not need source code or special compilation for the monitored program. As a proof of concept of the efficiency of this technique, a tool called Autodafe has been developed. It can automatically detect an impressive number of buffer overflow vulnerabilities. --- paper_title: T-Fuzz: Model-Based Fuzzing for Robustness Testing of Telecommunication Protocols paper_content: Telecommunication networks are crucial in today's society since critical socio-economical and governmental functions depend upon them. High availability requirements, such as the "five nines" uptime availability, permeate the development of telecommunication applications from their design to their deployment. In this context, robustness testing plays a fundamental role in software quality assurance. We present T-Fuzz - a novel fuzzing framework that integrates with existing conformance testing environments. Automated model extraction of telecommunication protocols is provided to enable better code testing coverage. The T-Fuzz prototype has been fully implemented and tested on the implementation of a common LTE protocol within existing testing facilities. We provide an evaluation of our framework from both a technical and a qualitative point of view based on feedback from key testers. T-Fuzz has been shown to enhance the existing development already in place by finding previously unseen unexpected behaviour in the system. Furthermore, according to the testers, T-Fuzz is easy to use and would likely result in time savings as well as more robust code. --- paper_title: KiF: a stateful SIP fuzzer paper_content: With the recent evolution in the VoIP market, where more and more devices and services are being pushed on a very promising market, assuring their security becomes crucial. Among the most dangerous threats to VoIP, failures and bugs in the software implementation will still rank high on the list of vulnerabilities. In this paper we address the issue of detecting such vulnerabilities using a stateful fuzzer. We describe an automated attack approach capable of self-improving and tracking the state context of a target device. We implemented our approach and were able to discover vulnerabilities in market-leading and well-known equipment and software. 
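The stateful fuzzing strategy shared by SNOOZE, KiF, and the model-based tools above can be summarised by the generic Python sketch below: valid messages drive the session into a chosen protocol state, and only then is a message that is legal in that state mutated. The toy state machine, message templates, and send callback are assumptions made up for illustration; they do not correspond to SIP, LTE, or any of the cited tools.

```python
# Generic sketch of state-machine-driven ("stateful") protocol fuzzing.
# The protocol below is fictional and exists only for illustration.
import random

STATE_MACHINE = {  # (current state, message type) -> next state
    ("INIT", "HELLO"): "GREETED",
    ("GREETED", "LOGIN"): "AUTHED",
    ("AUTHED", "DATA"): "AUTHED",
}
VALID = {  # well-formed message templates per message type
    "HELLO": b"HELLO client/1.0\r\n",
    "LOGIN": b"LOGIN user pass\r\n",
    "DATA": b"DATA 4 abcd\r\n",
}

def mutate(message, flips=4):
    data = bytearray(message)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def drive_to(send, target_state):
    """Replay valid messages until the session reaches target_state."""
    state = "INIT"
    while state != target_state:
        src, msg = next((s, m) for (s, m) in STATE_MACHINE if s == state)
        send(VALID[msg])
        state = STATE_MACHINE[(src, msg)]
    return state

def fuzz_in_state(send, target_state="AUTHED"):
    """Reach the target state legitimately, then fuzz an in-state message."""
    state = drive_to(send, target_state)
    legal_here = [m for (s, m) in STATE_MACHINE if s == state]
    send(mutate(VALID[random.choice(legal_here)]))

fuzz_in_state(lambda payload: print(payload))
```

Driving the target into deeper states before mutating is what lets such fuzzers exercise message handlers that a stateless fuzzer never reaches.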
--- paper_title: perf_fuzzer: Targeted Fuzzing of the perf_event_open() System Call paper_content: Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of user-exposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system's system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf_event_open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf_event usage patterns we develop a custom tool, perf_fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf_fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls. --- paper_title: Deriving input syntactic structure from execution paper_content: Program input syntactic structure is essential for a wide range of applications such as test case generation, software debugging and network security. However, such important information is often not available (e.g., most malware programs make use of secret protocols to communicate) or not directly usable by machines (e.g., many programs specify their inputs in plain text or other random formats). Furthermore, many programs claim they accept inputs with a published format, but their implementations actually support a subset or a variant. Based on the observations that input structure is manifested by the way input symbols are used during execution and most programs take input with top-down or bottom-up grammars, we devise two dynamic analyses, one for each grammar category. Our evaluation on a set of real-world programs shows that our technique is able to precisely reverse engineer input syntactic structure from execution. --- paper_title: Polyglot: automatic extraction of protocol message format using dynamic binary analysis paper_content: Protocol reverse engineering, the process of extracting the application-level protocol used by an implementation, without access to the protocol specification, is important for many network security applications. Recent work [17] has proposed protocol reverse engineering by using clustering on network traces. That kind of approach is limited by the lack of semantic information on network traces. In this paper we propose a new approach using program binaries. Our approach, shadowing, uses dynamic analysis and is based on a unique intuition - the way that an implementation of the protocol processes the received application data reveals a wealth of information about the protocol message format. We have implemented our approach in a system called Polyglot and evaluated it extensively using real-world implementations of five different protocols: DNS, HTTP, IRC, Samba and ICQ. We compare our results with the manually crafted message format, included in Wireshark, one of the state-of-the-art protocol analyzers. 
The differences we find are small and usually due to different implementations handling fields in different ways. Finding such differences between implementations is an added benefit, as they are important for problems such as fingerprint generation, fuzzing, and error detection. --- paper_title: AUTHSCAN: Automatic Extraction of Web Authentication Protocols from Implementations paper_content: Ideally, security protocol implementations should be formally verified before they are deployed. However, this is not true in practice. Numerous high-profile vulnerabilities have been found in web authentication protocol implementations, especially in single sign-on (SSO) protocol implementations recently. Much of the prior work on authentication protocol verification has focused on theoretical foundations and building scalable verification tools for checking manually-crafted specifications [17, 18, 44]. In this paper, we address a complementary problem of automatically extracting specifications from implementations. We propose AUTHSCAN, an end-to-end platform to automatically recover authentication protocol specifications from their implementations. AUTHSCAN finds a total of 7 security vulnerabilities using off-the-shelf verification tools in specifications it recovers, which include SSO protocol implementations and custom web authentication logic of web sites with millions of users. --- paper_title: Tupni: automatic reverse engineering of input formats paper_content: Recent work has established the importance of automatic reverse engineering of protocol or file format specifications. However, the formats reverse engineered by previous tools have missed important information that is critical for security applications. In this paper, we present Tupni, a tool that can reverse engineer an input format with a rich set of information, including record sequences, record types, and input constraints. Tupni can generalize the format specification over multiple inputs. We have implemented a prototype of Tupni and evaluated it on ten different formats: five file formats (WMF, BMP, JPG, PNG and TIF) and five network protocols (DNS, RPC, TFTP, HTTP and FTP). Tupni identified all record sequences in the test inputs. We also show that, by aggregating over multiple WMF files, Tupni can derive a more complete format specification for WMF. Furthermore, we demonstrate the utility of Tupni by using the rich information it provides for zero-day vulnerability signature generation, which was not possible with previous reverse engineering tools. --- paper_title: Prospex: Protocol Specification Extraction paper_content: Protocol reverse engineering is the process of extracting application-level specifications for network protocols. Such specifications are very useful in a number of security-related contexts, for example, to perform deep packet inspection and black-box fuzzing, or to quickly understand custom botnet command and control (C&C) channels. Since manual reverse engineering is a time-consuming and tedious process, a number of systems have been proposed that aim to automate this task. These systems either analyze network traffic directly or monitor the execution of the application that receives the protocol messages. 
While previous systems show that precise message formats can be extracted automatically, they do not provide a protocol specification. The reason is that they do not reverse engineer the protocol state machine. In this paper, we focus on closing this gap by presenting a system that is capable of automatically inferring state machines. This greatly enhances the results of automatic protocol reverse engineering, while further reducing the need for human interaction. We extend previous work that focuses on behavior-based message format extraction, and introduce techniques for identifying and clustering different types of messages not only based on their structure, but also according to the impact of each message on server behavior. Moreover, we present an algorithm for extracting the state machine. We have applied our techniques to a number of real-world protocols, including the command and control protocol used by a malicious bot. Our results demonstrate that we are able to extract format specifications for different types of messages and meaningful protocol state machines. We use these protocol specifications to automatically generate input for a stateful fuzzer, allowing us to discover security vulnerabilities in real-world applications. --- paper_title: IMF: Inferred Model-based Fuzzer paper_content: Kernel vulnerabilities are critical in security because they naturally allow attackers to gain unprivileged root access. Although there has been much research on finding kernel vulnerabilities from source code, there is relatively little research on kernel fuzzing, which is a practical bug finding technique that does not require any source code. Existing kernel fuzzing techniques involve feeding in random input values to kernel API functions. However, such a simple approach does not reveal latent bugs deep in the kernel code, because many API functions are dependent on each other, and they can quickly reject arbitrary parameter values based on their calling context. In this paper, we propose a novel fuzzing technique for commodity OS kernels that leverages an inferred dependence model between API function calls to discover deep kernel bugs. We implement our technique on a fuzzing system, called IMF. IMF has already found 32 previously unknown kernel vulnerabilities on the latest macOS version 10.12.3 (16D32) at the time of this writing. --- paper_title: Skyfire: Data-Driven Seed Generation for Fuzzing paper_content: Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. Differently, generation-based fuzzing generates inputs from a specification (e.g., grammar). They can quickly carry the fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. 
The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs. --- paper_title: Automatic Text Input Generation for Mobile Testing paper_content: Many designs have been proposed to improve the automated mobile testing. Despite these improvements, providing appropriate text inputs remains a prominent obstacle, which hinders the large-scale adoption of automated testing approaches. The key challenge is how to automatically produce the most relevant text in a use case context. For example, a valid website address should be entered in the address bar of a mobile browser app to continue the testing of the app, a singer's name should be entered in the search bar of a music recommendation app. Without the proper text inputs, the testing would get stuck. We propose a novel deep learning based approach to address the challenge, which reduces the problem to a minimization problem. Another challenge is how to make the approach generally applicable to both the trained apps and the untrained apps. We leverage the Word2Vec model to address the challenge. We have built our approaches as a tool and evaluated it with 50 iOS mobile apps including Firefox and Wikipedia. The results show that our approach significantly outperforms existing automatic text input generation methods. --- paper_title: Learn&Fuzz: Machine learning for input fuzzing paper_content: Fuzzing consists of repeatedly testing an application with modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code. In this paper, we show how to automate the generation of an input grammar suitable for input fuzzing using sample inputs and neural-network-based statistical machine-learning techniques. We present a detailed case study with a complex input format, namely PDF, and a large complex security-critical parser for this format, namely, the PDF parser embedded in Microsoft's new Edge browser. We discuss and measure the tension between conflicting learning and fuzzing goals: learning wants to capture the structure of well-formed inputs, while fuzzing wants to break that structure in order to cover unexpected code paths and find bugs. We also present a new algorithm for this learn&fuzz challenge which uses a learnt input probability distribution to intelligently guide where to fuzz inputs. --- paper_title: Saying ‘Hi!’ is not enough: Mining inputs for effective test generation paper_content: Automatically generating unit tests is a powerful approach to exercise complex software. Unfortunately, current techniques often fail to provide relevant input values, such as strings that bypass domain-specific sanity checks. 
As a result, state-of-the-art techniques are effective for generic classes, such as collections, but less successful for domain-specific software. This paper presents TestMiner, the first technique for mining a corpus of existing tests for input values to be used by test generators for effectively testing software not in the corpus. The main idea is to extract literals from thousands of tests and to adapt information retrieval techniques to find values suitable for a particular domain. Evaluating the approach with 40 Java classes from 18 different projects shows that TestMiner improves test coverage by 21% over an existing test generator. The approach can be integrated into various test generators in a straightforward way, increasing their effectiveness on previously difficult-to-test classes. --- paper_title: Synthesizing program input grammars paper_content: We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers. --- paper_title: PULSAR: Stateful Black-Box Fuzzing of Proprietary Network Protocols paper_content: The security of network services and their protocols critically depends on minimizing their attack surface. A single flaw in an implementation can suffice to compromise a service and expose sensitive data to an attacker. The discovery of vulnerabilities in protocol implementations, however, is a challenging task: While for standard protocols this process can be conducted with regular techniques for auditing, the situation becomes difficult for proprietary protocols if neither the program code nor the specification of the protocol are easily accessible. As a result, vulnerabilities in closed-source implementations can often remain undiscovered for a longer period of time. In this paper, we present Pulsar, a method for stateful black-box fuzzing of proprietary network protocols. Our method combines concepts from fuzz testing with techniques for automatic protocol reverse engineering and simulation. It proceeds by observing the traffic of a proprietary protocol and inferring a generative model for message formats and protocol states that can not only analyze but also simulate communication. During fuzzing this simulation can effectively explore the protocol state space and thereby enables uncovering vulnerabilities deep inside the protocol implementation. We demonstrate the efficacy of Pulsar in two case studies, where it identifies known as well as unknown vulnerabilities. --- paper_title: LearnLib: a library for automata learning and experimentation paper_content: In this tool demonstration we present the LearnLib, a library for automata learning and experimentation. Its modular structure allows users to configure their tailored learning scenarios, which exploit specific properties of the envisioned applications. As has been shown earlier, exploiting application-specific structural features enables optimizations that may lead to performance gains of several orders of magnitude, a necessary precondition to make automata learning applicable to realistic scenarios. 
The demonstration of the LearnLib will include the extrapolation of a behavioral model for a realistic (legacy) system, and the statistical analysis of different variants of automata learning algorithms on the basis of randomly generated models. --- paper_title: Enemy of the State: a State-aware Black-box Web Vulnerability Scanner paper_content: Black-box web vulnerability scanners are a popular choice for finding security vulnerabilities in web applications in an automated fashion. These tools operate in a point-and-shoot manner, testing any web application, regardless of the server-side language, for common security vulnerabilities. Unfortunately, black-box tools suffer from a number of limitations, particularly when interacting with complex applications that have multiple actions that can change the application's state. If a vulnerability analysis tool does not take into account changes in the web application's state, it might overlook vulnerabilities or completely miss entire portions of the web application. We propose a novel way of inferring the web application's internal state machine from the outside, that is, by navigating through the web application, observing differences in output, and incrementally producing a model representing the web application's state. We utilize the inferred state machine to drive a black-box web application vulnerability scanner. Our scanner traverses a web application's state machine to find and fuzz user-input vectors and discover security flaws. We implemented our technique in a prototype crawler and linked it to the fuzzing component from an open-source web vulnerability scanner. We show that our state-aware black-box web vulnerability scanner is able to not only exercise more code of the web application, but also discover vulnerabilities that other vulnerability scanners miss. --- paper_title: Protocol state fuzzing of TLS implementations paper_content: We describe a largely automated and systematic analysis of TLS implementations by what we call 'protocol state fuzzing': we use state machine learning to infer state machines from protocol implementations, using only blackbox testing, and then inspect the inferred state machines to look for spurious behaviour which might be an indication of flaws in the program logic. For detecting the presence of spurious behaviour the approach is almost fully automatic: we automatically obtain state machines and any spurious behaviour is then trivial to see. Detecting whether the spurious behaviour introduces exploitable security weaknesses does require manual investigation. Still, we take the point of view that any spurious functionality in a security protocol implementation is dangerous and should be removed. We analysed both server- and client-side implementations with a test harness that supports several key exchange algorithms and the option of client certificate authentication. We show that this approach can catch an interesting class of implementation flaws that is apparently common in security protocol implementations: in three of the TLS implementations analysed new security flaws were found (in GnuTLS, the Java Secure Socket Extension, and OpenSSL). This shows that protocol state fuzzing is a useful technique to systematically analyse security protocol implementations. As our analysis of different TLS implementations resulted in different and unique state machines for each one, the technique can also be used for fingerprinting TLS implementations.
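The three entries above all hinge on the same mechanism: repeatedly reset a black-box implementation, replay a sequence of abstract inputs, and record the abstracted outputs, so that a learning algorithm (such as L* in LearnLib) can infer a state machine and an analyst can look for spurious transitions. The sketch below illustrates only that observation-gathering step against a toy target with a deliberately planted flaw; the message names and the TargetSession class are placeholders invented for this illustration, not the harness of any cited tool.

```python
# Minimal sketch (not the tooling from the cited papers): collect the
# input/output observations that a state-machine learner such as L* needs,
# by resetting a black-box target and replaying sequences of abstract
# messages. TargetSession is a stand-in for a real protocol connection.
from itertools import product

ALPHABET = ["Hello", "KeyExchange", "Finished", "AppData"]

class TargetSession:
    """Toy system under test with a hidden handshake state machine."""
    def __init__(self):
        self.state = "INIT"

    def step(self, msg):
        table = {
            ("INIT", "Hello"): ("HELLO_DONE", "ServerHello"),
            ("HELLO_DONE", "KeyExchange"): ("KEYED", "Ack"),
            ("KEYED", "Finished"): ("OPEN", "Finished"),
            ("OPEN", "AppData"): ("OPEN", "AppData"),
            # Deliberate flaw for the demo: data accepted before Finished.
            ("KEYED", "AppData"): ("KEYED", "AppData"),
        }
        self.state, reply = table.get((self.state, msg), (self.state, "Alert"))
        return reply

def observe(sequence):
    """One membership query: fresh session, replay the sequence, record outputs."""
    session = TargetSession()
    return tuple(session.step(msg) for msg in sequence)

def collect_observations(max_depth=3):
    """Observations for all input sequences up to the given length."""
    return {seq: observe(seq)
            for depth in range(1, max_depth + 1)
            for seq in product(ALPHABET, repeat=depth)}

if __name__ == "__main__":
    observations = collect_observations()
    # Flag 'spurious' behaviour: application data answered although no
    # Finished message was sent earlier in the sequence.
    for seq, outs in observations.items():
        for i, reply in enumerate(outs):
            if reply == "AppData" and "Finished" not in seq[:i]:
                print("suspicious sequence:", seq)
                break
```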
--- paper_title: Turning programs against each other: high coverage fuzz-testing using binary-code mutation and dynamic slicing paper_content: Mutation-based fuzzing is a popular and widely employed black-box testing technique for finding security and robustness bugs in software. It owes much of its success to its simplicity; a well-formed seed input is mutated, e.g. through random bit-flipping, to produce test inputs. While reducing the need for human effort, and enabling security testing even of closed-source programs with undocumented input formats, the simplicity of mutation-based fuzzing comes at the cost of poor code coverage. Often millions of iterations are needed, and the results are highly dependent on configuration parameters and the choice of seed inputs. In this paper we propose a novel method for automated generation of high-coverage test cases for robustness testing. Our method is based on the observation that, even for closed-source programs with proprietary input formats, an implementation that can generate well-formed inputs to the program is typically available. By systematically mutating the program code of such generating programs, we leverage information about the input format encoded in the generating program to produce high-coverage test inputs, capable of reaching deep states in the program under test. Our method works entirely at the machine-code level, enabling use-cases similar to traditional black-box fuzzing. We have implemented the method in our tool MutaGen, and evaluated it on 7 popular Linux programs. We found that, for most programs, our method improves code coverage by one order of magnitude or more, compared to two well-known mutation-based fuzzers. We also found a total of 8 unique bugs. --- paper_title: Random Testing: Theoretical Results and Practical Implications paper_content: A substantial amount of work has shed light on whether random testing is actually a useful testing technique. Despite its simplicity, several successful real-world applications have been reported in the literature. Although it is not going to solve all possible testing problems, random testing appears to be an essential tool in the hands of software testers. In this paper, we review and analyze the debate about random testing. Its benefits and drawbacks are discussed. Novel results addressing general questions about random testing are also presented, such as how long does random testing need, on average, to achieve testing targets (e.g., coverage), how does it scale, and how likely is it to yield similar results if we rerun it on the same testing problem (predictability). Due to its simplicity that makes the mathematical analysis of random testing tractable, we provide precise and rigorous answers to these questions. Results show that there are practical situations in which random testing is a viable option. Our theorems are backed up by simulations and we show how they can be applied to most types of software and testing criteria. In light of these results, we then assess the validity of empirical analyzes reported in the literature and derive guidelines for both practitioners and scientists. --- paper_title: When only random testing will do paper_content: In some circumstances, random testing methods are more practical than any alternative, because information is lacking to make reasonable systematic test-point choices. This paper examines some situations in which random testing is indicated and discusses issues and difficulties with conducting the random tests. 
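The random-testing analyses summarised above revolve around simple questions such as how many random tests are needed, on average, to hit a target. Assuming each test independently hits the target with probability p, the familiar answers are a mean of 1/p tests and a hit probability of 1 - (1 - p)^N after N tests; the snippet below merely evaluates these textbook formulas for a few illustrative values of p (the numbers are not taken from the cited papers).

```python
# Back-of-the-envelope effort estimates for pure random testing, assuming
# each test hits the target independently with probability p.
import math

def expected_tests(p: float) -> float:
    """Mean number of random tests until the first hit (geometric distribution, 1/p)."""
    return 1.0 / p

def hit_probability(p: float, n: int) -> float:
    """Probability that at least one of n random tests hits the target."""
    return 1.0 - (1.0 - p) ** n

def tests_for_confidence(p: float, confidence: float = 0.99) -> int:
    """Smallest n whose hit probability reaches the requested confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

if __name__ == "__main__":
    for p in (1e-2, 1e-4, 1e-6):
        print(f"p={p:g}: mean tests={expected_tests(p):.0f}, "
              f"P(hit in 10^5 tests)={hit_probability(p, 10**5):.3f}, "
              f"tests for 99% confidence={tests_for_confidence(p):,}")
```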
--- paper_title: Probability-Based Parameter Selection for Black-Box Fuzz Testing paper_content: Abstract : Dynamic, randomized-input functional testing, or black-box fuzz testing, is an effective technique for finding security vulnerabilities in software applications. Parameters for an invocation of black-box fuzz testing generally include known-good input to use as a basis for randomization (i.e., a seed file) and a specification of how much of the seed file to randomize (i.e., the range).This report describes an algorithm that applies basic statistical theory to the parameter selection problem and automates selection of seed files and ranges. This algorithm was implemented in an open-source, file-interface testing tool and was used to find and mitigate vulnerabilities in several software applications. This report generalizes the parameter selection problem, explains the algorithm, and analyzes empirical data collected from the implementation. Results of using the algorithm show a marked improvement in the efficiency of discovering unique application errors over basic parameter selection techniques. --- paper_title: Program-Adaptive Mutational Fuzzing paper_content: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input. The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time. --- paper_title: Large-scale analysis of format string vulnerabilities in Debian Linux paper_content: Format-string bugs are a relatively common security vulnerability, and can lead to arbitrary code execution. In collaboration with others, we designed and implemented a system to eliminate format string vulnerabilities from an entire Linux distribution, using type-qualifier inference, a static analysis technique that can find taint violations. We successfully analyze 66% of C/C++ source packages in the Debian 3.1 Linux distribution. Our system finds 1,533 format string taint warnings. We estimate that 85% of these are true positives, i.e., real bugs; ignoring duplicates from libraries, about 75% are real bugs. We suggest that the technology exists to render format string vulnerabilities extinct in the near future. --- paper_title: S2E: a platform for in-vivo multi-path analysis of software systems paper_content: This paper presents S2E, a platform for analyzing the properties and behavior of software systems. We demonstrate S2E's use in developing practical tools for comprehensive performance profiling, reverse engineering of proprietary software, and bug finding for both kernel-mode and user-mode binaries. Building these tools on top of S2E took less than 770 LOC and 40 person-hours each. S2E's novelty consists of its ability to scale to large real systems, such as a full Windows stack. S2E is based on two new ideas: selective symbolic execution, a way to automatically minimize the amount of code that has to be executed symbolically given a target analysis, and relaxed execution consistency models, a way to make principled performance/accuracy trade-offs in complex analyses. 
These techniques give S2E three key abilities: to simultaneously analyze entire families of execution paths, instead of just one execution at a time; to perform the analyses in-vivo within a real software stack--user programs, libraries, kernel, drivers, etc.--instead of using abstract models of these layers; and to operate directly on binaries, thus being able to analyze even proprietary software. Conceptually, S2E is an automated path explorer with modular path analyzers: the explorer drives the target system down all execution paths of interest, while analyzers check properties of each such path (e.g., to look for bugs) or simply collect information (e.g., count page faults). Desired paths can be specified in multiple ways, and S2E users can either combine existing analyzers to build a custom analysis tool, or write new analyzers using the S2E API. --- paper_title: CUTE: a concolic unit testing engine for C paper_content: In unit testing, a program is decomposed into units which are collections of functions. A part of unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method is described together with the results of applying CUTE to real-world examples of C code. --- paper_title: jFuzz: A Concolic Whitebox Fuzzer for Java paper_content: We present jFuzz, a automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer, built on the NASA Java PathFinder, an explicit-state Java model-checker, and a framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented a novel optimization, name-independent caching, that aggressively normalizes the constraints to so reduced the number of calls to the constraint solver. We present preliminary results due to this optimization, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder. --- paper_title: Automated Whitebox Fuzz Testing paper_content: Fuzz testing is an effective technique for finding security vulnerabilities in software. Traditionally, fuzz testing tools apply random mutations to well-formed inputs of a program and test the resulting values. We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation. 
Our approach records an actual run of the program under test on a well-formed input, symbolically evaluates the recorded trace, and gathers constraints on inputs capturing how the program uses these. The collected constraints are then negated one by one and solved with a constraint solver, producing new inputs that exercise different control paths in the program. This process is repeated with the help of a code-coverage maximizing heuristic designed to find defects as fast as possible. We have implemented this algorithm in SAGE (Scalable, Automated, Guided Execution), a new tool employing x86 instruction-level tracing and emulation for whitebox fuzzing of arbitrary file-reading Windows applications. We describe key optimizations needed to make dynamic test generation scale to large input files and long execution traces with hundreds of millions of instructions. We then present detailed experiments with several Windows applications. Notably, without any format-specific knowledge, SAGE detects the MS07-017 ANI vulnerability, which was missed by extensive blackbox fuzzing and static analysis tools. Furthermore, while still in an early stage of development, SAGE has already discovered 30+ new bugs in large shipped Windows applications including image processors, media players, and file decoders. Several of these bugs are potentially exploitable memory access violations. --- paper_title: GRT: Program-Analysis-Guided Random Testing (T) paper_content: We propose Guided Random Testing (GRT), which uses static and dynamic analysis to include information on program types, data, and dependencies in various stages of automated test generation. Static analysis extracts knowledge from the system under test. Test coverage is further improved through state fuzzing and continuous coverage analysis. We evaluated GRT on 32 real-world projects and found that GRT outperforms major peer techniques in terms of code coverage (by 13 %) and mutation score (by 9 %). On the four studied benchmarks of Defects4J, which contain 224 real faults, GRT also shows better fault detection capability than peer techniques, finding 147 faults (66 %). Furthermore, in an in-depth evaluation on the latest versions of ten popular real-world projects, GRT successfully detects over 20 unknown defects that were confirmed by developers. --- paper_title: KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs paper_content: We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. ::: ::: We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. 
Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies. --- paper_title: TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection paper_content: Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated malformed inputs are rejected in the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. In this paper, we present TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel contributions: 1) TaintScope is the first checksum-aware fuzzing tool to the best of our knowledge. It can identify checksum fields in input instances, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. 2) TaintScope is a directed fuzzing tool working at X86 binary level (on both Linux and Window). Based on fine-grained dynamic taint tracing, TaintScope identifies which bytes in a well-formed input are used in security-sensitive operations (e.g., invoking system/library calls) and then focuses on modifying such bytes. Thus, generated inputs are more likely to trigger potential vulnerabilities. 3) TaintScope is fully automatic, from detecting checksum, directed fuzzing, to repairing crashed samples. It can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 27 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Google Picasa, Microsoft Paint, and ImageMagick. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Corresponding patches from vendors are released or in progress based on our reports. --- paper_title: Statically-directed dynamic automated test generation paper_content: We present a new technique for exploiting static analysis to guide dynamic automated test generation for binary programs, prioritizing the paths to be explored. Our technique is a three-stage process, which alternates dynamic and static analysis. In the first stage, we run dynamic analysis with a small number of seed tests to resolve indirect jumps in the binary code and build a visibly pushdown automaton (VPA) reflecting the global control-flow of the program. Further, we augment the computed VPA with statically computable jumps not executed by the seed tests. In the second stage, we apply static analysis to the inferred automaton to find potential vulnerabilities, i.e., targets for the dynamic analysis. In the third stage, we use the results of the prior phases to assign weights to VPA edges. Our symbolic-execution based automated test generation tool then uses the weighted shortest-path lengths in the VPA to direct its exploration to the target potential vulnerabilities. 
Preliminary experiments on a suite of benchmarks extracted from real applications show that static analysis allows exploration to reach vulnerabilities it otherwise would not, and the generated test inputs prove that the static warnings indicate true positives. --- paper_title: Path-exploration lifting: hi-fi tests for lo-fi emulators paper_content: Processor emulators are widely used to provide isolation and instrumentation of binary software. However they have proved difficult to implement correctly: processor specifications have many corner cases that are not exercised by common workloads. It is untenable to base other system security properties on the correctness of emulators that have received only ad-hoc testing. To obtain emulators that are worthy of the required trust, we propose a technique to explore a high-fidelity emulator with symbolic execution, and then lift those test cases to test a lower-fidelity emulator. The high-fidelity emulator serves as a proxy for the hardware specification, but we can also further validate by running the tests on real hardware. We implement our approach and apply it to generate about 610,000 test cases; for about 95% of the instructions we achieve complete path coverage. The tests reveal thousands of individual differences; we analyze those differences to shed light on a number of root causes, such as atomicity violations and missing security features. --- paper_title: Grammar-based whitebox fuzzing paper_content: Whitebox fuzzing is a form of automatic dynamic test generation, based on symbolic execution and constraint solving, designed for security testing of large applications. Unfortunately, the current effectiveness of whitebox fuzzing is limited when testing applications with highly-structured inputs, such as compilers and interpreters. These applications process their inputs in stages, such as lexing, parsing and evaluation. Due to the enormous number of control paths in early processing stages, whitebox fuzzing rarely reaches parts of the application beyond those first stages. In this paper, we study how to enhance whitebox fuzzing of complex structured-input applications with a grammar-based specification of their valid inputs. We present a novel dynamic test generation algorithm where symbolic execution directly generates grammar-based constraints whose satisfiability is checked using a custom grammar-based constraint solver. We have implemented this algorithm and evaluated it on a large security-critical application, the JavaScript interpreter of Internet Explorer 7 (IE7). Results of our experiments show that grammar-based whitebox fuzzing explores deeper program paths and avoids dead-ends due to non-parsable inputs. Compared to regular whitebox fuzzing, grammar-based whitebox fuzzing increased coverage of the code generation module of the IE7 JavaScript interpreter from 53% to 81% while using three times fewer tests. --- paper_title: Program-Adaptive Mutational Fuzzing paper_content: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input. The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. 
Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time. --- paper_title: FLAX: Systematic Discovery of Client-side Validation Vulnerabilities in Rich Web Applications paper_content: The complexity of the client-side components of web applications has exploded with the increase in popularity of web 2.0 applications. Today, traditional desktop applications, such as document viewers, presentation tools and chat applications are commonly available as online JavaScript applications. Previous research on web vulnerabilities has primarily concentrated on flaws in the server-side components of web applications. This paper highlights a new class of vulnerabilities, which we term client-side validation (or CSV) vulnerabilities. CSV vulnerabilities arise from unsafe usage of untrusted data in the client-side code of the web application that is typically written in JavaScript. In this paper, we demonstrate that they can result in a broad spectrum of attacks. Our work provides empirical evidence that CSV vulnerabilities are not merely conceptual but are prevalent in today’s web applications. We propose dynamic analysis techniques to systematically discover vulnerabilities of this class. The techniques are light-weight, efficient, and have no false positives. We implement our techniques in a prototype tool called FLAX, which scales to real-world applications and has discovered 11 vulnerabilities in the wild so far. --- paper_title: Model-based whitebox fuzzing for program binaries paper_content: Many real-world programs take highly structured and very complex inputs. The automated testing of such programs is non-trivial. If the test input does not adhere to a specific file format, the program returns a parser error. For symbolic execution-based whitebox fuzzing the corresponding error handling code becomes a significant time sink. Too much time is spent in the parser exploring too many paths leading to trivial parser errors. Naturally, the time is better spent exploring the functional part of the program where failure with valid input exposes deep and real bugs in the program. In this paper, we suggest to leverage information about the file format and the data chunks of existing, valid files to swiftly carry the exploration beyond the parser code. We call our approach Model-based Whitebox Fuzzing (MoWF) because the file format input model of blackbox fuzzers can be exploited as a constraint on the vast input space to rule out most invalid inputs during path exploration in symbolic execution. We evaluate on 13 vulnerabilities in 8 large program binaries with 6 separate file formats and found that MoWF exposes all vulnerabilities while both, traditional whitebox fuzzing and model-based blackbox fuzzing, expose only less than half, respectively. Our experiments also demonstrate that MoWF exposes 70% vulnerabilities without any seed inputs. 
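Most of the mutation-based approaches above share one basic loop: flip a configurable fraction of a well-formed seed's bits, run the target, and keep the inputs that misbehave. The sketch below is a generic version of that loop with the mutation ratio exposed as an explicit parameter (the quantity the program-adaptive work above tunes per program-seed pair); the run_target function is a toy parser standing in for a real binary, and nothing here reproduces the specific algorithms of the cited tools.

```python
# Generic mutation-based fuzzing loop with an explicit mutation ratio.
# run_target is a toy parser standing in for the program under test; in
# practice one would execute the real binary and watch for crashes.
import random

def mutate(seed: bytes, ratio: float, rng: random.Random) -> bytes:
    """Flip roughly `ratio` of the seed's bits (at least one)."""
    data = bytearray(seed)
    flips = max(1, int(len(data) * 8 * ratio))
    for _ in range(flips):
        pos = rng.randrange(len(data) * 8)
        data[pos // 8] ^= 1 << (pos % 8)
    return bytes(data)

def run_target(data: bytes) -> None:
    """Toy target: 'crashes' (raises) when a length field overruns the payload."""
    if len(data) < 4 or data[:2] != b"HD":
        return                                   # rejected by the header check
    length = data[2]
    payload = data[4:]
    if length > len(payload):
        raise RuntimeError("out-of-bounds read")  # the injected bug

def fuzz(seed: bytes, ratio: float = 0.01, iterations: int = 10_000, rng_seed: int = 0):
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, ratio, rng)
        try:
            run_target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

if __name__ == "__main__":
    seed_input = b"HD" + bytes([8, 0]) + b"payload!"
    print(len(fuzz(seed_input, ratio=0.01)), "crashing inputs found")
```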
--- paper_title: DART: directed automated random testing paper_content: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging. --- paper_title: Unleashing Mayhem on Binary Code paper_content: In this paper we present Mayhem, a new system for automatically finding exploitable bugs in binary (i.e., executable) programs. Every bug reported by Mayhem is accompanied by a working shell-spawning exploit. The working exploits ensure soundness and that each bug report is security-critical and actionable. Mayhem works on raw binary code without debugging information. To make exploit generation possible at the binary-level, Mayhem addresses two major technical challenges: actively managing execution paths without exhausting memory, and reasoning about symbolic memory indices, where a load or a store address depends on user input. To this end, we propose two novel techniques: 1) hybrid symbolic execution for combining online and offline (concolic) execution to maximize the benefits of both techniques, and 2) index-based memory modeling, a technique that allows Mayhem to efficiently reason about symbolic memory at the binary level. We used Mayhem to find and demonstrate 29 exploitable vulnerabilities in both Linux and Windows programs, 2 of which were previously undocumented. --- paper_title: Chopped symbolic execution paper_content: Symbolic execution is a powerful program analysis technique that systematically explores multiple program paths. However, despite important technical advances, symbolic execution often struggles to reach deep parts of the code due to the well-known path explosion problem and constraint solving limitations. In this paper, we propose chopped symbolic execution, a novel form of symbolic execution that allows users to specify uninteresting parts of the code to exclude during the analysis, thus only targeting the exploration to paths of importance. However, the excluded parts are not summarily ignored, as this may lead to both false positives and false negatives. Instead, they are executed lazily, when their effect may be observable by code under analysis. Chopped symbolic execution leverages various on-demand static analyses at runtime to automatically exclude code fragments while resolving their side effects, thus avoiding expensive manual annotations and imprecision. Our preliminary results show that the approach can effectively improve the effectiveness of symbolic execution in several different scenarios, including failure reproduction and test suite augmentation. 
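The directed and whitebox techniques above build on one core step: record the branch conditions of a concrete run, negate one of them while keeping the prefix, and ask a constraint solver for an input that takes the other side. The fragment below shows that single step on a contrived path condition using the z3 Python bindings (it assumes the z3-solver package is installed); it is a didactic sketch, not the path-exploration machinery of DART, Mayhem, or any other system cited here.

```python
# One path-flipping step: negate the last branch of a recorded path
# condition and solve for an input that drives execution down the other side.
# Requires the z3-solver package (pip install z3-solver).
from z3 import BitVec, Solver, Not, sat

x = BitVec("x", 32)   # symbolic stand-in for a 4-byte input field

# Path condition gathered from a hypothetical concrete run that took the
# branch (x > 10) and then the false side of ((x & 0xff) == 0x7f).
path_condition = [x > 10, Not((x & 0xFF) == 0x7F)]

def flip_last(constraints):
    """Keep the prefix, negate the final branch, and solve for a new input."""
    solver = Solver()
    solver.add(*constraints[:-1])
    solver.add(Not(constraints[-1]))
    if solver.check() == sat:
        return solver.model()[x].as_long()
    return None   # the flipped path is infeasible

if __name__ == "__main__":
    new_input = flip_last(path_condition)
    print("input steering execution down the unexplored branch:", new_input)
```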
--- paper_title: An orchestrated survey of methodologies for automated software test case generation paper_content: Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing. Each section is contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a perspective of the future development of the approach. As a whole, the paper aims at giving an introductory, up-to-date and (relatively) short overview of research in automatic test case generation, while ensuring a comprehensive and authoritative treatment. --- paper_title: SELECT—a formal system for testing and debugging programs by symbolic execution paper_content: SELECT is an experimental system for assisting in the formal systematic debugging of programs. It is intended to be a compromise between an automated program proving system and the current ad hoc debugging practice, and is similar to a system being developed by King et al. of IBM. SELECT systematically handles the paths of programs written in a LISP subset that includes arrays. For each execution path SELECT returns simplified conditions on input variables that cause the path to be executed, and simplified symbolic values for program variables at the path output. For conditions which form a system of linear equalities and inequalities SELECT will return input variable values that can serve as sample test data. The user can insert constraint conditions, at any point in the program including the output, in the form of symbolically executable assertions. These conditions can induce the system to select test data in user-specified regions. SELECT can also determine if the path is correct with respect to an output assertion. We present four examples demonstrating the various modes of system operation and their effectiveness in finding bugs. In some examples, SELECT was successful in automatically finding useful test data. In others, user interaction was required in the form of output assertions. SELECT appears to be a useful tool for rapidly revealing program errors, but for the future there is a need to expand its expressive and deductive power. --- paper_title: QSYM: a practical concolic execution engine tailored for hybrid fuzzing paper_content: Recently, hybrid fuzzing has been proposed to address the limitations of fuzzing and concolic execution by combining both approaches. The hybrid approach has shown its effectiveness in various synthetic benchmarks such as DARPA Cyber Grand Challenge (CGC) binaries, but it still suffers from scaling to find bugs in complex, realworld software. We observed that the performance bottleneck of the existing concolic executor is the main limiting factor for its adoption beyond a small-scale study. 
::: ::: To overcome this problem, we design a fast concolic execution engine, called QSYM, to support hybrid fuzzing. The key idea is to tightly integrate the symbolic emulation with the native execution using dynamic binary translation, making it possible to implement more fine-grained, so faster, instruction-level symbolic emulation. Additionally, QSYM loosens the strict soundness requirements of conventional concolic executors for better performance, yet takes advantage of a faster fuzzer for validation, providing unprecedented opportunities for performance optimizations, e.g., optimistically solving constraints and pruning uninteresting basic blocks. ::: ::: Our evaluation shows that QSYM does not just outperform state-of-the-art fuzzers (i.e., found 14× more bugs than VUzzer in the LAVA-M dataset, and outperformed Driller in 104 binaries out of 126), but also found 13 previously unknown security bugs in eight real-world programs like Dropbox Lepton, ffmpeg, and OpenJPEG, which have already been intensively tested by the state-of-the-art fuzzers, AFL and OSS-Fuzz. --- paper_title: The Past, Present, and Future of Cyberdyne paper_content: Cyberdyne is a distributed system that discovers vulnerabilities in third-party, off-the-shelf binary programs. It competed in all rounds of DARPA’s Cyber Grand Challenge (CGC). In the qualifying event, Cyberdyne was the second most effective bug-finding system. In the final event, it was the bug-finding arm of the fourth-place team. Since then, Cyberdyne has been successfully applied during commercial code audits. The first half of this article describes the evolution and implementation of Cyberdyne and its bug-finding tools. The second half of the article looks at what it took to have Cyberdyne audit real applications and how we performed the first paid automated security audit for the Mozilla Secure Open Source Fund. We conclude with a discussion about the future of automated security audits. --- paper_title: Methodology for the Generation of Program Test Data paper_content: A methodology for generating program test data is described. The methodology is a model of the test data generation process and can be used to characterize the basic problems of test data generation. It is well defined and can be used to build an automatic test data generation system. --- paper_title: All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but Might Have Been Afraid to Ask) paper_content: Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context. 
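To make the taint-tracking half of the preceding survey entry concrete, the toy sketch below threads a taint flag through a tiny computation: values derived from an untrusted source remain tainted across operations, and a security-sensitive sink rejects them. Production engines do this at byte or instruction granularity with shadow memory rather than wrapper objects; the source and sink here are hypothetical stand-ins.

```python
# Toy dynamic taint tracking: every value carries a taint flag, arithmetic
# propagates the flag, and a security-sensitive sink checks it. Real systems
# track taint per byte/register in shadow state instead of wrapper objects.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    value: int
    tainted: bool = False

    def __add__(self, other):
        return Tainted(self.value + other.value, self.tainted or other.tainted)

    def __mul__(self, other):
        return Tainted(self.value * other.value, self.tainted or other.tainted)

def source(raw: int) -> Tainted:
    """Input source: everything read from the attacker is tainted."""
    return Tainted(raw, tainted=True)

def sink(length: Tainted) -> None:
    """Security-sensitive sink (e.g., an allocation size or copy length)."""
    if length.tainted:
        raise RuntimeError("tainted value reached a sensitive sink")
    print("allocating", length.value, "bytes")

if __name__ == "__main__":
    untrusted = source(16)            # e.g., a length field parsed from a file
    constant = Tainted(4)             # program-internal constant, untainted
    sink(constant * Tainted(8))       # fine: no taint involved
    try:
        sink(untrusted + constant)    # flagged: taint propagated through '+'
    except RuntimeError as err:
        print("policy violation detected:", err)
```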
--- paper_title: Satisfiability modulo theories: introduction and applications paper_content: Checking the satisfiability of logical formulas, SMT solvers scale orders of magnitude beyond custom ad hoc solvers. --- paper_title: Symbolic execution and program testing paper_content: This paper describes the symbolic execution of programs. Instead of supplying the normal inputs to a program (e.g. numbers) one supplies symbols representing arbitrary values. The execution proceeds as in a normal execution except that values may be symbolic formulas over the input symbols. The difficult, yet interesting issues arise during the symbolic execution of conditional branch type statements. A particular system called EFFIGY which provides symbolic execution for program testing and debugging is also described. It interpretively executes programs written in a simple PL/I style programming language. It includes many standard debugging features, the ability to manage and to prove things about symbolic expressions, a simple program testing manager, and a program verifier. A brief discussion of the relationship between symbolic execution and program proving is also included. --- paper_title: Angora: Efficient Fuzzing by Principled Search paper_content: Fuzzing is a popular technique for finding software bugs. However, the performance of the state-of-the-art fuzzers leaves a lot to be desired. Fuzzers based on symbolic execution produce quality inputs but run slow, while fuzzers based on random mutation run fast but have difficulty producing quality inputs. We propose Angora, a new mutation-based fuzzer that outperforms the state-of-the-art fuzzers by a wide margin. The main goal of Angora is to increase branch coverage by solving path constraints without symbolic execution. To solve path constraints efficiently, we introduce several key techniques: scalable byte-level taint tracking, context-sensitive branch count, search based on gradient descent, and input length exploration. On the LAVA-M data set, Angora found almost all the injected bugs, found more bugs than any other fuzzer that we compared with, and found eight times as many bugs as the second-best fuzzer in the program who. Angora also found 103 bugs that the LAVA authors injected but could not trigger. We also tested Angora on eight popular, mature open source programs. Angora found 6, 52, 29, 40 and 48 new bugs in file, jhead, nm, objdump and size, respectively. We measured the coverage of Angora and evaluated how its key techniques contribute to its impressive performance. --- paper_title: GRT: Program-Analysis-Guided Random Testing (T) paper_content: We propose Guided Random Testing (GRT), which uses static and dynamic analysis to include information on program types, data, and dependencies in various stages of automated test generation. Static analysis extracts knowledge from the system under test. Test coverage is further improved through state fuzzing and continuous coverage analysis. We evaluated GRT on 32 real-world projects and found that GRT outperforms major peer techniques in terms of code coverage (by 13 %) and mutation score (by 9 %). On the four studied benchmarks of Defects4J, which contain 224 real faults, GRT also shows better fault detection capability than peer techniques, finding 147 faults (66 %). Furthermore, in an in-depth evaluation on the latest versions of ten popular real-world projects, GRT successfully detects over 20 unknown defects that were confirmed by developers. 
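A recurring ingredient in the entries above, and in many of the fuzzers discussed in this section, is coverage feedback: a mutated input is kept only if it exercises behaviour that no earlier input has exercised. The sketch below shows that feedback loop against a toy, self-instrumented target; the branch identifiers, mutation scheme, and planted bug are arbitrary choices made for the illustration and do not correspond to Angora, GRT, or any particular tool.

```python
# Minimal coverage-guided fuzzing loop: mutate queued inputs and keep a
# mutant only if it covers a branch no earlier input has covered.
import random

def target(data: bytes, trace: set) -> None:
    """Toy program that records the branches it takes into `trace`."""
    trace.add("entry")
    if len(data) >= 4 and data[:4] == b"FUZZ":
        trace.add("magic-header")
        if len(data) > 4 and data[4] == 0x42:
            trace.add("deep-branch")
            raise RuntimeError("bug reached")

def run(data: bytes):
    trace: set = set()
    crashed = False
    try:
        target(data, trace)
    except RuntimeError:
        crashed = True
    return trace, crashed

def coverage_guided_fuzz(seed: bytes, iterations: int = 50_000, rng_seed: int = 1):
    rng = random.Random(rng_seed)
    queue = [seed]
    global_coverage, _ = run(seed)
    crashes = []
    for _ in range(iterations):
        parent = rng.choice(queue)
        child = bytearray(parent)
        child[rng.randrange(len(child))] = rng.randrange(256)  # one-byte mutation
        child = bytes(child)
        trace, crashed = run(child)
        if crashed:
            crashes.append(child)
        if trace - global_coverage:      # new branch discovered: keep this input
            global_coverage |= trace
            queue.append(child)
    return queue, crashes

if __name__ == "__main__":
    queue, crashes = coverage_guided_fuzz(b"FUZZ....")
    print("queue size:", len(queue), "| crashing inputs:", len(crashes))
```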
--- paper_title: TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection paper_content: Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated malformed inputs are rejected in the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. In this paper, we present TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel contributions: 1) TaintScope is the first checksum-aware fuzzing tool to the best of our knowledge. It can identify checksum fields in input instances, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. 2) TaintScope is a directed fuzzing tool working at X86 binary level (on both Linux and Windows). Based on fine-grained dynamic taint tracing, TaintScope identifies which bytes in a well-formed input are used in security-sensitive operations (e.g., invoking system/library calls) and then focuses on modifying such bytes. Thus, generated inputs are more likely to trigger potential vulnerabilities. 3) TaintScope is fully automatic, from detecting checksum, directed fuzzing, to repairing crashed samples. It can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 27 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Google Picasa, Microsoft Paint, and ImageMagick. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Corresponding patches from vendors are released or in progress based on our reports. --- paper_title: 0-knowledge fuzzing paper_content: Nowadays fuzzing is a pretty common technique used both by attackers and software developers. Currently known techniques usually involve knowing the protocol/format that needs to be fuzzed and having a basic understanding of how the user input is processed inside the binary. In the past, since fuzzing was little used, obtaining good results with a small amount of effort was possible. --- paper_title: VUzzer: Application-aware Evolutionary Fuzzing paper_content: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable.
In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs. --- paper_title: TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection paper_content: Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated malformed inputs are rejected in the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. In this paper, we present TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel contributions: 1) TaintScope is the first checksum-aware fuzzing tool to the best of our knowledge. It can identify checksum fields in input instances, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. 2) TaintScope is a directed fuzzing tool working at X86 binary level (on both Linux and Windows). Based on fine-grained dynamic taint tracing, TaintScope identifies which bytes in a well-formed input are used in security-sensitive operations (e.g., invoking system/library calls) and then focuses on modifying such bytes. Thus, generated inputs are more likely to trigger potential vulnerabilities. 3) TaintScope is fully automatic, from detecting checksum, directed fuzzing, to repairing crashed samples. It can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 27 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Google Picasa, Microsoft Paint, and ImageMagick. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Corresponding patches from vendors are released or in progress based on our reports. --- paper_title: Input generation via decomposition and re-stitching: finding bugs in Malware paper_content: Attackers often take advantage of vulnerabilities in benign software, and the authors of benign software must search their code for bugs in hopes of finding vulnerabilities before they are exploited. But there has been little research on the converse question of whether defenders can turn the tables by finding vulnerabilities in malware.
We provide a first affirmative answer to that question. We introduce a new technique, stitched dynamic symbolic execution, that makes it possible to use exploration techniques based on symbolic execution in the presence of functionalities that are common in malware and otherwise hard to analyze, such as decryption and checksums. The technique is based on decomposing the constraints induced by a program, solving only a subset, and then re-stitching the constraint solution into a complete input. We implement the approach in a system for x86 binaries, and apply it to 4 prevalent families of bots and other malware. We find 6 bugs that could be exploited by a network attacker to terminate or subvert the malware. These bugs have persisted across malware revisions for months, and even years. We discuss the possible applications and ethical considerations of this new capability --- paper_title: T-Fuzz: Fuzzing by Program Transformation paper_content: Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, found 3 new bugs in previously-fuzzed programs and libraries. --- paper_title: CETS: compiler enforced temporal safety for C paper_content: Temporal memory safety errors, such as dangling pointer dereferences and double frees, are a prevalent source of software bugs in unmanaged languages such as C. Existing schemes that attempt to retrofit temporal safety for such languages have high runtime overheads and/or are incomplete, thereby limiting their effectiveness as debugging aids. This paper presents CETS, a compile-time transformation for detecting all violations of temporal safety in C programs. 
Inspired by existing approaches, CETS maintains a unique identifier with each object, associates this metadata with the pointers in a disjoint metadata space to retain memory layout compatibility, and checks that the object is still allocated on pointer dereferences. A formal proof shows that this is sufficient to provide temporal safety even in the presence of arbitrary casts if the program contains no spatial safety violations. Our CETS prototype employs both temporal check removal optimizations and traditional compiler optimizations to achieve a runtime overhead of just 48% on average. When combined with a spatial-checking system, the average overall overhead is 116% for complete memory safety. --- paper_title: TypeSan: Practical Type Confusion Detection paper_content: The low-level C++ programming language is ubiquitously used for its modularity and performance. Typecasting is a fundamental concept in C++ (and object-oriented programming in general) to convert a pointer from one object type into another. However, downcasting (converting a base class pointer to a derived class pointer) has critical security implications due to potentially different object memory layouts. Due to missing type safety in C++, a downcasted pointer can violate a programmer's intended pointer semantics, allowing an attacker to corrupt the underlying memory in a type-unsafe fashion. This vulnerability class is receiving increasing attention and is known as type confusion (or bad-casting). Several existing approaches detect different forms of type confusion, but these solutions are severely limited due to both high run-time performance overhead and low detection coverage. This paper presents TypeSan, a practical type-confusion detector which provides both low run-time overhead and high detection coverage. Despite improving the coverage of state-of-the-art techniques, TypeSan significantly reduces the type-confusion detection overhead compared to other solutions. TypeSan relies on an efficient per-object metadata storage service based on a compact memory shadowing scheme. Our scheme treats all the memory objects (i.e., globals, stack, heap) uniformly to eliminate extra checks on the fast path and relies on a variable compression ratio to minimize run-time performance and memory overhead. Our experimental results confirm that TypeSan is practical, even when explicitly checking almost all the relevant typecasts in a given C++ program. Compared to the state of the art, TypeSan yields orders of magnitude higher coverage at 4--10 times lower performance overhead on SPEC and 2 times on Firefox. As a result, our solution offers superior protection and is suitable for deployment in production software. Moreover, our highly efficient metadata storage back-end is potentially useful for other defenses that require memory object tracking. --- paper_title: AddressSanitizer: A Fast Address Sanity Checker paper_content: Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware.
AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software. --- paper_title: SoftBound: highly compatible and complete spatial memory safety for c paper_content: The serious bugs and security vulnerabilities facilitated by C/C++'s lack of bounds checking are well known, yet C and C++ remain in widespread use. Unfortunately, C's arbitrary pointer arithmetic, conflation of pointers and arrays, and programmer-visible memory layout make retrofitting C/C++ with spatial safety guarantees extremely challenging. Existing approaches suffer from incompleteness, have high runtime overhead, or require non-trivial changes to the C source code. Thus far, these deficiencies have prevented widespread adoption of such techniques. This paper proposes SoftBound, a compile-time transformation for enforcing spatial safety of C. Inspired by HardBound, a previously proposed hardware-assisted approach, SoftBound similarly records base and bound information for every pointer as disjoint metadata. This decoupling enables SoftBound to provide spatial safety without requiring changes to C source code. Unlike HardBound, SoftBound is a software-only approach and performs metadata manipulation only when loading or storing pointer values. A formal proof shows that this is sufficient to provide spatial safety even in the presence of arbitrary casts. SoftBound's full checking mode provides complete spatial violation detection with 67% runtime overhead on average. To further reduce overheads, SoftBound has a store-only checking mode that successfully detects all the security vulnerabilities in a test suite at the cost of only 22% runtime overhead on average. --- paper_title: Type casting verification: stopping an emerging attack vector paper_content: Many applications such as the Chrome and Firefox browsers are largely implemented in C++ for its performance and modularity. Type casting, which converts one type of an object to another, plays an essential role in enabling polymorphism in C++ because it allows a program to utilize certain general or specific implementations in the class hierarchies. However, if not correctly used, it may return unsafe and incorrectly casted values, leading to so-called bad-casting or type-confusion vulnerabilities. Since a bad-casted pointer violates a programmer's intended pointer semantics and enables an attacker to corrupt memory, bad-casting has critical security implications similar to those of other memory corruption vulnerabilities. Despite the increasing number of bad-casting vulnerabilities, the bad-casting detection problem has not been addressed by the security community. In this paper, we present CAVER, a runtime bad-casting detection tool. It performs program instrumentation at compile time and uses a new runtime type tracing mechanism--the type hierarchy table--to overcome the limitation of existing approaches and efficiently verify type casting dynamically. In particular, CAVER can be easily and automatically adopted to target applications, achieves broader detection coverage, and incurs reasonable runtime overhead.
We have applied CAVER to large-scale software including Chrome and Firefox browsers, and discovered 11 previously unknown security vulnerabilities: nine in GNU libstdc++ and two in Firefox, all of which have been confirmed and subsequently fixed by vendors. Our evaluation showed that CAVER imposes up to 7.6% and 64.6% overhead for performance-intensive benchmarks on the Chromium and Firefox browsers, respectively. --- paper_title: Control-flow integrity principles, implementations, and applications paper_content: Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions. --- paper_title: HexType: Efficient Detection of Type Confusion Errors for C++ paper_content: Type confusion, often combined with use-after-free, is the main attack vector to compromise modern C++ software like browsers or virtual machines. Typecasting is a core principle that enables modularity in C++. For performance, most typecasts are only checked statically, i.e., the check only tests if a cast is allowed for the given type hierarchy, ignoring the actual runtime type of the object. Using an object of an incompatible base type instead of a derived type results in type confusion. Attackers abuse such type confusion issues to attack popular software products including Adobe Flash, PHP, Google Chrome, or Firefox. We propose to make all type checks explicit, replacing static checks with full runtime type checks. To minimize the performance impact of our mechanism HexType, we develop both low-overhead data structures and compiler optimizations. To maximize detection coverage, we handle specific object allocation patterns, e.g., placement new or reinterpret_cast which are not handled by other mechanisms. Our prototype results show that, compared to prior work, HexType has at least 1.1 -- 6.1 times higher coverage on Firefox benchmarks. For SPEC CPU2006 benchmarks with overhead, we show a 2 -- 33.4 times reduction in overhead. In addition, HexType discovered 4 new type confusion bugs in Qt and Apache Xerces-C++. --- paper_title: Enforcing forward-edge control-flow integrity in GCC & LLVM paper_content: Constraining dynamic control transfers is a common technique for mitigating software vulnerabilities. This defense has been widely and successfully used to protect return addresses and stack data; hence, current attacks instead typically corrupt vtable and function pointers to subvert a forward edge (an indirect jump or call) in the control-flow graph. Forward edges can be protected using Control-Flow Integrity (CFI) but, to date, CFI implementations have been research prototypes, based on impractical assumptions or ad hoc, heuristic techniques. To be widely adoptable, CFI mechanisms must be integrated into production compilers and be compatible with software-engineering aspects such as incremental compilation and dynamic libraries. 
This paper presents implementations of fine-grained, forward-edge CFI enforcement and analysis for GCC and LLVM that meet the above requirements. An analysis and evaluation of the security, performance, and resource consumption of these mechanisms applied to the SPEC CPU2006 benchmarks and common benchmarks for the Chromium web browser show the practicality of our approach: these fine-grained CFI mechanisms have significantly lower overhead than recent academic CFI prototypes. Implementing CFI in industrial compiler frameworks has also led to insights into design tradeoffs and practical challenges, such as dynamic loading. --- paper_title: Towards optimization-safe systems: analyzing the impact of undefined behavior paper_content: This paper studies an emerging class of software bugs called optimization-unstable code: code that is unexpectedly discarded by compiler optimizations due to undefined behavior in the program. Unstable code is present in many systems, including the Linux kernel and the Postgres database. The consequences of unstable code range from incorrect functionality to missing security checks. To reason about unstable code, this paper proposes a novel model, which views unstable code in terms of optimizations that leverage undefined behavior. Using this model, we introduce a new static checker called Stack that precisely identifies unstable code. Applying Stack to widely used systems has uncovered 160 new bugs that have been confirmed and fixed by developers. --- paper_title: MemorySanitizer: fast detector of uninitialized memory use in C++ paper_content: This paper presents MemorySanitizer, a dynamic tool that detects uses of uninitialized memory in C and C++. The tool is based on compile time instrumentation and relies on bit-precise shadow memory at run-time. Shadow propagation technique is used to avoid false positive reports on copying of uninitialized memory. MemorySanitizer finds bugs at a modest cost of 2.5x in execution time and 2x in memory usage; the tool has an optional origin tracking mode that provides better reports with moderate extra overhead. The reports with origins are more detailed compared to reports from other similar tools; such reports contain names of local variables and the entire history of the uninitialized memory including intermediate stores. In this paper we share our experience in deploying the tool at a large scale and demonstrate the benefits of compile-time instrumentation over dynamic binary instrumentation. --- paper_title: ThreadSanitizer: data race detection in practice paper_content: Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed. --- paper_title: Understanding Integer Overflow in C/C++ paper_content: Integer overflow bugs in C and C++ programs are difficult to track down and may lead to fatal errors or exploitable vulnerabilities.
Although a number of tools for finding these bugs exist, the situation is complicated because not all overflows are bugs. Better tools need to be constructed, but a thorough understanding of the issues behind these errors does not yet exist. We developed IOC, a dynamic checking tool for integer overflows, and used it to conduct the first detailed empirical study of the prevalence and patterns of occurrence of integer overflows in C and C++ code. Our results show that intentional uses of wraparound behaviors are more common than is widely believed; for example, there are over 200 distinct locations in the SPEC CINT2000 benchmarks where overflow occurs. Although many overflows are intentional, a large number of accidental overflows also occur. Orthogonal to programmers' intent, overflows are found in both well-defined and undefined flavors. Applications executing undefined operations can be, and have been, broken by improvements in compiler optimizations. Looking beyond SPEC, we found and reported undefined integer overflows in SQLite, PostgreSQL, SafeInt, GNU MPC and GMP, Firefox, LLVM, Python, BIND, and OpenSSL; many of these have since been fixed. --- paper_title: Automated testing for SQL injection vulnerabilities: an input mutation approach paper_content: Web services are increasingly adopted in various domains, from finance and e-government to social media. As they are built on top of the web technologies, they suffer also an unprecedented amount of attacks and exploitations like the Web. Among the attacks, those that target SQL injection vulnerabilities have consistently been top-ranked for the last years. Testing to detect such vulnerabilities before making web services public is crucial. We present in this paper an automated testing approach, namely μ4SQLi, and its underpinning set of mutation operators. μ4SQLi can produce effective inputs that lead to executable and harmful SQL statements. Executability is key as otherwise no injection vulnerability can be exploited. Our evaluation demonstrated that the approach is effective to detect SQL injection vulnerabilities and to produce inputs that bypass application firewalls, which is a common configuration in real world. --- paper_title: KameleonFuzz: evolutionary fuzzing for black-box XSS detection paper_content: Fuzz testing consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? Where to observe its effects? In this paper, we specifically address the questions: How to fuzz a parameter? How to observe its effects? To address these questions, we propose KameleonFuzz, a black-box Cross Site Scripting (XSS) fuzzer for web applications. KameleonFuzz can not only generate malicious inputs to exploit XSS, but also detect how close it is revealing a vulnerability. The malicious inputs generation and evolution is achieved with a genetic algorithm, guided by an attack grammar. A double taint inference, up to the browser parse tree, permits to detect precisely whether an exploitation attempt succeeded. Our evaluation demonstrates no false positives and high XSS revealing capabilities: KameleonFuzz detects several vulnerabilities missed by other black-box scanners. 
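The mutation-driven generators summarized above (μ4SQLi's input mutation operators, KameleonFuzz's genetic search guided by an attack grammar, and VUzzer's application-aware evolution) share the same core loop: repeatedly mutate candidate inputs and keep the ones that a fitness signal scores highest. The sketch below is only a minimal illustration of that loop, not any of these tools' actual implementations; the byte-flip mutator and the toy prefix-matching fitness function are placeholder assumptions.

import random

def mutate(data: bytes) -> bytes:
    # Single random byte substitution; real fuzzers combine many richer operators.
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def evolve(seeds, fitness, generations=100, population_size=32):
    # Generic evolutionary loop: mutate, score, keep the fittest candidates.
    population = list(seeds)
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(population_size)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:population_size]
    return population

if __name__ == "__main__":
    target = b"<script>"  # hypothetical "interesting" byte pattern, a stand-in for coverage or taint feedback
    fitness = lambda x: sum(a == b for a, b in zip(x, target))
    print(evolve([b"AAAAAAAA"], fitness)[0])

Swapping the placeholder fitness for branch coverage or taint feedback is what turns this skeleton into the coverage- or taint-guided search the papers above describe.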
--- paper_title: Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations paper_content: Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. --- paper_title: Coverage-directed differential testing of JVM implementations paper_content: Java virtual machine (JVM) is a core technology, whose reliability is critical. Testing JVM implementations requires painstaking effort in designing test classfiles (*.class) along with their test oracles. An alternative is to employ binary fuzzing to differentially test JVMs by blindly mutating seeding classfiles and then executing the resulting mutants on different JVM binaries for revealing inconsistent behaviors. However, this blind approach is not cost effective in practice because most of the mutants are invalid and redundant. This paper tackles this challenge by introducing classfuzz, a coverage-directed fuzzing approach that focuses on representative classfiles for differential testing of JVMs’ startup processes. Our core insight is to (1) mutate seeding classfiles using a set of predefined mutation operators (mutators) and employ Markov Chain Monte Carlo (MCMC) sampling to guide mutator selection, and (2) execute the mutants on a reference JVM implementation and use coverage uniqueness as a discipline for accepting representative ones. The accepted classfiles are used as inputs to differentially test different JVM implementations and find defects. 
We have implemented classfuzz and conducted an extensive evaluation of it against existing fuzz testing algorithms. Our evaluation results show that classfuzz can enhance the ratio of discrepancy-triggering classfiles from 1.7% to 11.9%. We have also reported 62 JVM discrepancies, along with the test classfiles, to JVM developers. Many of our reported issues have already been confirmed as JVM defects, and some even match recent clarifications and changes to the Java SE 8 edition of the JVM specification. --- paper_title: Privacy oracle: a system for finding application leaks with black box differential testing paper_content: We describe the design and implementation of Privacy Oracle, a system that reports on application leaks of user information via the network traffic that they send. Privacy Oracle treats each application as a black box, without access to either its internal structure or communication protocols. This means that it can be used over a broad range of applications and information leaks (i.e., not only Web traffic or credit card numbers). To accomplish this, we develop a differential testing technique in which perturbations in the application inputs are mapped to perturbations in the application outputs to discover likely leaks; we leverage alignment algorithms from computational biology to find high quality mappings between different byte-sequences efficiently. Privacy Oracle includes this technique and a virtual machine-based testing system. To evaluate it, we tested 26 popular applications, including system and file utilities, media players, and IM clients. We found that Privacy Oracle discovered many small and previously undisclosed information leaks. In several cases, these are leaks of directly identifying information that are regularly sent in the clear (without end-to-end encryption) and which could make users vulnerable to tracking by third parties or providers. --- paper_title: NEZHA: Efficient Domain-Independent Differential Testing paper_content: Differential testing uses similar programs as cross-referencing oracles to find semantic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Unfortunately, existing differential testing tools are domain-specific and inefficient, requiring large numbers of test inputs to find a single bug. In this paper, we address these issues by designing and implementing NEZHA, an efficient input-format-agnostic differential testing framework. The key insight behind NEZHA's design is that current tools generate inputs by simply borrowing techniques designed for finding crash or memory corruption bugs in individual programs (e.g., maximizing code coverage). By contrast, NEZHA exploits the behavioral asymmetries between multiple test programs to focus on inputs that are more likely to trigger semantic bugs. We introduce the notion of δ-diversity, which summarizes the observed asymmetries between the behaviors of multiple test applications. Based on δ-diversity, we design two efficient domain-independent input generation mechanisms for differential testing, one gray-box and one black-box. We demonstrate that both of these input generation schemes are significantly more efficient than existing tools at finding semantic bugs in real-world, complex software. NEZHA's average rate of finding differences is 52 times and 27 times higher than that of Frankencerts and Mucerts, two popular domain-specific differential testing tools that check SSL/TLS certificate validation implementations, respectively. 
Moreover, performing differential testing with NEZHA results in 6 times more semantic bugs per tested input, compared to adapting state-of-the-art general-purpose fuzzers like American Fuzzy Lop (AFL) to differential testing by running them on individual test programs for input generation. NEZHA discovered 778 unique, previously unknown discrepancies across a wide variety of applications (ELF and XZ parsers, PDF viewers and SSL/TLS libraries), many of which constitute previously unknown critical security vulnerabilities. In particular, we found two critical evasion attacks against ClamAV, allowing arbitrary malicious ELF/XZ files to evade detection. The discrepancies NEZHA found in the X.509 certificate validation implementations of the tested SSL/TLS libraries range from mishandling certain types of KeyUsage extensions, to incorrect acceptance of specially crafted expired certificates, enabling man-in-the-middle attacks. All of our reported vulnerabilities have been confirmed and fixed within a week from the date of reporting. --- paper_title: Designing New Operating Primitives to Improve Fuzzing Performance paper_content: Fuzzing is a software testing technique that finds bugs by repeatedly injecting mutated inputs to a target program. Known to be a highly practical approach, fuzzing is gaining more popularity than ever before. Current research on fuzzing has focused on producing an input that is more likely to trigger a vulnerability. In this paper, we tackle another way to improve the performance of fuzzing, which is to shorten the execution time of each iteration. We observe that AFL, a state-of-the-art fuzzer, slows down by 24x because of file system contention and the scalability of fork() system call when it runs on 120 cores in parallel. Other fuzzers are expected to suffer from the same scalability bottlenecks in that they follow a similar design pattern. To improve the fuzzing performance, we design and implement three new operating primitives specialized for fuzzing that solve these performance bottlenecks and achieve scalable performance on multi-core machines. Our experiment shows that the proposed primitives speed up AFL and LibFuzzer by 6.1 to 28.9x and 1.1 to 735.7x, respectively, on the overall number of executions per second when targeting Google's fuzzer test suite with 120 cores. In addition, the primitives improve AFL's throughput up to 7.7x with 30 cores, which is a more common setting in data centers. Our fuzzer-agnostic primitives can be easily applied to any fuzzer with fundamental performance improvement and directly benefit large-scale fuzzing and cloud-based fuzzing services. --- paper_title: Dynamic Test Generation to Find Integer Bugs in x86 Binary Linux Programs paper_content: Recently, integer bugs, including integer overflow, width conversion, and signed/unsigned conversion errors, have risen to become a common root cause for serious security vulnerabilities. We introduce new methods for discovering integer bugs using dynamic test generation on x86 binaries, and we describe key design choices in efficient symbolic execution of such programs. We implemented our methods in a prototype tool SmartFuzz, which we use to analyze Linux x86 binary executables. We also created a reporting service, metafuzz.com, to aid in triaging and reporting bugs found by SmartFuzz and the black-box fuzz testing tool zzuf. 
We report on experiments applying these tools to a range of software applications, including the mplayer media player, the exiv2 image metadata library, and ImageMagick convert. We also report on our experience using SmartFuzz, zzuf, and metafuzz.com to perform testing at scale with the Amazon Elastic Compute Cloud (EC2). To date, the metafuzz.com site has recorded more than 2,614 test runs, comprising 2,361,595 test cases. Our experiments found approximately 77 total distinct bugs in 864 compute hours, costing us an average of $2.24 per bug at current EC2 rates. We quantify the overlap in bugs found by the two tools, and we show that SmartFuzz finds bugs missed by zzuf, including one program where SmartFuzz finds bugs but zzuf does not. --- paper_title: Turning programs against each other: high coverage fuzz-testing using binary-code mutation and dynamic slicing paper_content: Mutation-based fuzzing is a popular and widely employed black-box testing technique for finding security and robustness bugs in software. It owes much of its success to its simplicity; a well-formed seed input is mutated, e.g. through random bit-flipping, to produce test inputs. While reducing the need for human effort, and enabling security testing even of closed-source programs with undocumented input formats, the simplicity of mutation-based fuzzing comes at the cost of poor code coverage. Often millions of iterations are needed, and the results are highly dependent on configuration parameters and the choice of seed inputs. In this paper we propose a novel method for automated generation of high-coverage test cases for robustness testing. Our method is based on the observation that, even for closed-source programs with proprietary input formats, an implementation that can generate well-formed inputs to the program is typically available. By systematically mutating the program code of such generating programs, we leverage information about the input format encoded in the generating program to produce high-coverage test inputs, capable of reaching deep states in the program under test. Our method works entirely at the machine-code level, enabling use-cases similar to traditional black-box fuzzing. We have implemented the method in our tool MutaGen, and evaluated it on 7 popular Linux programs. We found that, for most programs, our method improves code coverage by one order of magnitude or more, compared to two well-known mutation-based fuzzers. We also found a total of 8 unique bugs. --- paper_title: Scheduling black-box mutational fuzzing paper_content: Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time. --- paper_title: RETracer: triaging crashes by reverse execution from partial memory dumps paper_content: Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports.
Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics. In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information destroying. We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service. --- paper_title: Dynamic Taint Analysis for Automatic Detection, Analysis, and SignatureGeneration of Exploits on Commodity Software paper_content: Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [26, 43]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we describe how TaintCheck could improve automatic signature generation in --- paper_title: Taming compiler fuzzers paper_content: Aggressive random testing tools ("fuzzers") are impressively effective at finding compiler bugs. For example, a single test-case generator has resulted in more than 1,700 bugs reported for a single JavaScript engine. However, fuzzers can be frustrating to use: they indiscriminately and repeatedly find bugs that may not be severe enough to fix right away. Currently, users filter out undesirable test cases using ad hoc methods such as disallowing problematic features in tests and grepping test results. This paper formulates and addresses the fuzzer taming problem: given a potentially large number of random test cases that trigger failures, order them such that diverse, interesting test cases are highly ranked. Our evaluation shows our ability to solve the fuzzer taming problem for 3,799 test cases triggering 46 bugs in a C compiler and 2,603 test cases triggering 28 bugs in a JavaScript engine. 
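TaintCheck-style dynamic taint analysis, as summarized above, marks attacker-controlled input as tainted, propagates taint through data movement and arithmetic, and raises an alert when tainted data reaches a sensitive sink such as an indirect jump target. The toy tracker below only illustrates that propagation rule on Python values; the real tools work on x86 instructions via binary rewriting, and the class and sink names here are invented for the example.

class Tainted:
    # A value paired with a taint flag; any operation on a tainted operand
    # yields a tainted result (the basic propagation policy).
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted
    def __add__(self, other):
        return Tainted(self.value + other.value, self.tainted or other.tainted)
    def __mul__(self, other):
        return Tainted(self.value * other.value, self.tainted or other.tainted)

def sensitive_sink(v):
    # Stand-in for an indirect jump target, format string, or system call argument.
    if v.tainted:
        print("ALERT: attacker-derived data reaches a sensitive sink")

user_input = Tainted(0x41414141, tainted=True)  # taint introduced at the input source
offset = Tainted(8)                             # untainted program constant
sensitive_sink(user_input + offset)             # taint propagates through the addition -> alert
sensitive_sink(offset * Tainted(2))             # no tainted operand -> silent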
--- paper_title: All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but Might Have Been Afraid to Ask) paper_content: Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context. --- paper_title: Test-case reduction for C compiler bugs paper_content: To report a compiler bug, one must often find a small test case that triggers the bug. The existing approach to automated test-case reduction, delta debugging, works by removing substrings of the original input; the result is a concatenation of substrings that delta cannot remove. We have found this approach less than ideal for reducing C programs because it typically yields test cases that are too large or even invalid (relying on undefined behavior). To obtain small and valid test cases consistently, we designed and implemented three new, domain-specific test-case reducers. The best of these is based on a novel framework in which a generic fixpoint computation invokes modular transformations that perform reduction operations. This reducer produces outputs that are, on average, more than 25 times smaller than those produced by our other reducers or by the existing reducer that is most commonly used by compiler developers. We conclude that effective program reduction requires more than straightforward delta debugging. --- paper_title: Well There’s Your Problem: Isolating the Crash-Inducing Bits in a Fuzzed File paper_content: Abstract : Mutational input testing (fuzzing, and in particular dumb fuzzing) is an effective technique for discovering vulnerabilities in software. However, many of the bitwise changes in fuzzed input files are not relevant to the actual software crashes found. In this report, we describe an algorithm that efficiently reverts bits from the fuzzed file to those found in the original seed file, keeping only the minimal bits required to recreate the crash under investigation. This technique reduces the complexity of analyzing a crashing test case by eliminating the changes to the seed file that are not essential to the crash being evaluated. --- paper_title: Simplifying and isolating failure-inducing input paper_content: Given some test case, a program fails. Which circumstances of the test case are responsible for the particular failure? The delta debugging algorithm generalizes and simplifies the failing test case to a minimal test case that still produces the failure. It also isolates the difference between a passing and a failing test case. In a case study, the Mozilla Web browser crashed after 95 user actions. Our prototype implementation automatically simplified the input to three relevant user actions. 
Likewise, it simplified 896 lines of HTML to the single line that caused the failure. The case study required 139 automated test runs or 35 minutes on a 500 MHz PC. --- paper_title: STADS: Software Testing as Species Discovery paper_content: A fundamental challenge of software testing is the statistically well-grounded extrapolation from program behaviors observed during testing. For instance, a security researcher who has run the fuzzer for a week has currently no means (i) to estimate the total number of feasible program branches, given that only a fraction has been covered so far, (ii) to estimate the additional time required to cover 10% more branches, or (iii) to assess the residual risk that a vulnerability exists when no vulnerability has been discovered. Failing to discover a vulnerability, does not mean that none exists---even if the fuzzer was run for a week (or a year). Hence, testing provides no formal correctness guarantees. In this article, I establish an unexpected connection with the otherwise unrelated scientific field of ecology, and introduce a statistical framework that models Software Testing and Analysis as Discovery of Species (STADS). For instance, in order to study the species diversity of arthropods in a tropical rain forest, ecologists would first sample a large number of individuals from that forest, determine their species, and extrapolate from the properties observed in the sample to properties of the whole forest. The estimation (i) of the total number of species, (ii) of the additional sampling effort required to discover 10% more species, or (iii) of the probability to discover a new species are classical problems in ecology. The STADS framework draws from over three decades of research in ecological biostatistics to address the fundamental extrapolation challenge for automated test generation. Our preliminary empirical study demonstrates a good estimator performance even for a fuzzer with adaptive sampling bias---AFL, a state-of-the-art vulnerability detection tool. The STADS framework provides statistical correctness guarantees with quantifiable accuracy. --- paper_title: VUzzer: Application-aware Evolutionary Fuzzing paper_content: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach.
We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs. --- paper_title: Steelix: program-state based binary fuzzing paper_content: Coverage-based fuzzing is one of the most effective techniques to find vulnerabilities, bugs or crashes. However, existing techniques suffer from the difficulty in exercising the paths that are protected by magic bytes comparisons (e.g., string equality comparisons). Several approaches have been proposed to use heavy-weight program analysis to break through magic bytes comparisons, and hence are less scalable. In this paper, we propose a program-state based binary fuzzing approach, named Steelix, which improves the penetration power of a fuzzer at the cost of an acceptable slow down of the execution speed. In particular, we use light-weight static analysis and binary instrumentation to provide not only coverage information but also comparison progress information to a fuzzer. Such program state information informs a fuzzer about where the magic bytes are located in the test input and how to perform mutations to match the magic bytes efficiently. We have implemented Steelix and evaluated it on three datasets: LAVA-M dataset, DARPA CGC sample binaries and five real-life programs. The results show that Steelix has better code coverage and bug detection capability than the state-of-the-art fuzzers. Moreover, we found one CVE and nine new bugs. --- paper_title: T-Fuzz: Fuzzing by Program Transformation paper_content: Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique.
We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, found 3 new bugs in previously-fuzzed programs and libraries. --- paper_title: The Past, Present, and Future of Cyberdyne paper_content: Cyberdyne is a distributed system that discovers vulnerabilities in third-party, off-the-shelf binary programs. It competed in all rounds of DARPA’s Cyber Grand Challenge (CGC). In the qualifying event, Cyberdyne was the second most effective bug-finding system. In the final event, it was the bug-finding arm of the fourth-place team. Since then, Cyberdyne has been successfully applied during commercial code audits. The first half of this article describes the evolution and implementation of Cyberdyne and its bug-finding tools. The second half of the article looks at what it took to have Cyberdyne audit real applications and how we performed the first paid automated security audit for the Mozilla Secure Open Source Fund. We conclude with a discussion about the future of automated security audits. --- paper_title: Fuzzing for Software Security Testing and Quality Assurance. Artech House paper_content: "A fascinating look at the new direction fuzzing technology is taking -- useful for both QA engineers and bug hunters alike!" --Dave Aitel, CTO, Immunity Inc. Learn the code cracker's malicious mindset, so you can find worn-size holes in the software you are designing, testing, and building. Fuzzing for Software Security Testing and Quality Assurance takes a weapon from the black-hat arsenal to give you a powerful new tool to build secure, high-quality software. This practical resource helps you add extra protection without adding expense or time to already tight schedules and budgets. The book shows you how to make fuzzing a standard practice that integrates seamlessly with all development activities. This comprehensive reference goes through each phase of software development and points out where testing and auditing can tighten security. It surveys all popular commercial fuzzing tools and explains how to select the right one for a software development project. The book also identifies those cases where commercial tools fall short and when there is a need for building your own fuzzing tools. --- paper_title: Fuzzing: State of the Art paper_content: As one of the most popular software testing techniques, fuzzing can find a variety of weaknesses in a program, such as software bugs and vulnerabilities, by generating numerous test inputs. Due to its effectiveness, fuzzing is regarded as a valuable bug hunting method. In this paper, we present an overview of fuzzing that concentrates on its general process, as well as classifications, followed by detailed discussion of the key obstacles and some state-of-the-art technologies which aim to overcome or mitigate these obstacles. We further investigate and classify several widely used fuzzing tools. Our primary goal is to equip the stakeholder with a better understanding of fuzzing and the potential solutions for improving fuzzing methods in the spectrum of software testing and security. To inspire future research, we also predict some future directions with regard to fuzzing. --- paper_title: Open Source Fuzzing Tools paper_content: This chapter discusses some open source fuzzing tools. 
Fuzzing tools typically fall into one of three categories: fuzzing frameworks, special purpose tools, and general-purpose fuzzers. Fuzzing frameworks are good if one is looking to write his/her own fuzzer or needs to fuzz a customer or proprietary protocol. The advantage is that the tool set is provided by the framework; the disadvantage is that all open source fuzzing frameworks are far from complete and most are very immature. Special-purpose tools are usually fuzzers that were written for a specific protocol or application. While they can usually be extended, they are fairly limited to fuzzing anything outside the original scope of the project. In many cases, general-purpose fuzzers are very partial, as the writers tend to use them to find a few holes in a protocol/application and then move on to more interesting things, leaving the fuzzer unmaintained. General-purpose tools are neat, if they work. They typically don’t, and those that do are too general and lack optimization to be very useful. --- paper_title: Fuzzing: a survey paper_content: Security vulnerability is one of the root causes of cyber-security threats. To discover vulnerabilities and fix them in advance, researchers have proposed several techniques, among which fuzzing is the most widely used one. In recent years, fuzzing solutions, like AFL, have made great improvements in vulnerability discovery. This paper presents a summary of the recent advances, analyzes how they improve the fuzzing process, and sheds light on future work in fuzzing. Firstly, we discuss the reason why fuzzing is popular, by comparing different commonly used vulnerability discovery techniques. Then we present an overview of fuzzing solutions, and discuss in detail one of the most popular type of fuzzing, i.e., coverage-based fuzzing. Then we present other techniques that could make fuzzing process smarter and more efficient. Finally, we show some applications of fuzzing, and discuss new trends of fuzzing and potential future directions. --- paper_title: Fuzzing: Brute Force Vulnerability Discovery paper_content: Piezoelectric crystalline films which consist essentially of a crystalline zinc oxide film with a c-axis perpendicular to a substrate surface, containing 0.01 to 20.0 atomic percent of bismuth. These films are prepared by radio-frequency sputtering. --- paper_title: Evaluating Fuzz Testing paper_content: Fuzz testing has enjoyed great success at discovering security critical bugs in real software. Recently, researchers have devoted significant effort to devising new fuzzing techniques, strategies, and algorithms. Such new ideas are primarily evaluated experimentally so an important question is: What experimental setup is needed to produce trustworthy results? We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We conclude with some guidelines that we hope will help improve experimental evaluations of fuzz testing algorithms, making reported results more robust. ---
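The "Evaluating Fuzz Testing" guidelines cited above call for repeated trials and statistical tests rather than a single fuzzing run per tool. The snippet below is one minimal way to follow that advice; the per-trial bug counts are made-up illustrative numbers, and the pure-Python permutation test is just one acceptable choice (the paper itself discusses alternatives such as the Mann-Whitney U test).

import random
from statistics import mean

fuzzer_a = [12, 15, 11, 14, 13, 16, 12, 15]  # hypothetical unique bugs per 24-hour trial
fuzzer_b = [10, 11, 13, 9, 12, 11, 10, 12]

def permutation_test(a, b, iterations=10000, rng=random.Random(0)):
    # Two-sided permutation test on the difference of mean bug counts.
    observed = mean(a) - mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / iterations

diff, p = permutation_test(fuzzer_a, fuzzer_b)
print(f"mean difference = {diff:.2f}, permutation p-value = {p:.4f}")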
Title: The Art, Science, and Engineering of Fuzzing: A Survey
Section 1: INTRODUCTION
Description 1: Provide an introduction to fuzzing, its history, significance, applications, and the motivation for the survey.
Section 2: SYSTEMIZATION, TAXONOMY, AND TEST PROGRAMS
Description 2: Explain the terminology and definitions used in fuzzing, including fuzzing and fuzz testing, fuzzer, fuzz campaign, bug oracle, and fuzz configuration.
Section 3: Paper Selection Criteria
Description 3: Describe the criteria used for selecting papers that were included in the survey.
Section 4: Fuzz Testing Algorithm
Description 4: Present a generic algorithm for fuzz testing and discuss its components.
Section 5: Taxonomy of Fuzzers
Description 5: Categorize different types of fuzzers (black-box, white-box, grey-box) and explain their characteristics.
Section 6: PREPROCESS
Description 6: Discuss the preprocessing stage of fuzzing, including instrumentation, seed selection, seed trimming, and preparing a driver application.
Section 7: SCHEDULING
Description 7: Explain the scheduling algorithms used in fuzzing, including the Fuzz Configuration Scheduling problem and specific algorithms for black-box and grey-box fuzzers.
Section 8: INPUT GENERATION
Description 8: Explore different input generation techniques used by fuzzers, including model-based and mutation-based methods.
Section 9: INPUT EVALUATION
Description 9: Describe the process of evaluating the generated inputs, including bug oracles and execution optimizations.
Section 10: CONFIGURATION UPDATING
Description 10: Explain how fuzzing configurations are updated during the fuzzing process, including evolutionary seed pool updates and maintaining a minset.
Section 11: RELATED WORK
Description 11: Review other works related to fuzzing, including previous surveys and books on the subject.
Section 12: CONCLUDING REMARKS
Description 12: Summarize the findings of the survey and provide concluding remarks.
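The outline above (Sections 4 and 6-10) follows the stages of a generic fuzz testing algorithm: schedule a fuzz configuration, generate an input, evaluate it against a bug oracle, and update the configuration pool. The skeleton below merely mirrors those stages to make the data flow concrete; every function body is a deliberately simplistic placeholder (random scheduling, byte-flip mutation, a crash-only oracle), not the algorithm as any particular fuzzer implements it, and the example target path is hypothetical.

import random
import subprocess

def schedule(configs):
    # SCHEDULING: pick the next fuzz configuration (real fuzzers use FCS/bandit policies).
    return random.choice(configs)

def input_gen(conf):
    # INPUT GENERATION: mutation-based here; model-based generation is the other option.
    seed = random.choice(conf["seeds"])
    i = random.randrange(len(seed))
    return seed[:i] + bytes([random.randrange(256)]) + seed[i + 1:]

def input_eval(conf, test_input):
    # INPUT EVALUATION: run the target and treat "killed by a signal" as a crude bug oracle.
    proc = subprocess.run(conf["target"], input=test_input,
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return proc.returncode < 0

def conf_update(conf, test_input, is_bug):
    # CONFIGURATION UPDATING: e.g., keep crashing (or coverage-increasing) inputs for later rounds.
    if is_bug:
        conf.setdefault("crashes", []).append(test_input)

def fuzz(configs, iterations=1000):
    bugs = []
    for _ in range(iterations):
        conf = schedule(configs)
        test_input = input_gen(conf)
        is_bug = input_eval(conf, test_input)
        conf_update(conf, test_input, is_bug)
        if is_bug:
            bugs.append((conf["target"], test_input))
    return bugs

# Example invocation with a hypothetical target binary:
# fuzz([{"target": ["./parser"], "seeds": [b"SEED INPUT"]}])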
A survey on Deep Learning Advances on Different 3D Data Representations
6
--- paper_title: FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis paper_content: Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors. --- paper_title: A large-scale hierarchical multi-view RGB-D object dataset paper_content: Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results. --- paper_title: SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels paper_content: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. 
For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or pars state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. --- paper_title: Multi-view Convolutional Neural Networks for 3D Shape Recognition paper_content: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives. --- paper_title: Generative and Discriminative Voxel Modeling with Convolutional Neural Networks paper_content: When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification. --- paper_title: Deep Learning Advances in Computer Vision with 3D Data: A Survey paper_content: Deep learning has recently gained popularity achieving state-of-the-art performance in tasks involving text, sound, or image processing. Due to its outstanding performance, there have been efforts to apply it in more challenging scenarios, for example, 3D data processing. This article surveys methods applying deep learning on 3D data and provides a classification based on how they exploit them. From the results of the examined works, we conclude that systems employing 2D views of 3D data typically surpass voxel-based (3D) deep models, which however, can perform better with more layers and severe data augmentation. Therefore, larger-scale datasets and increased resolutions are required. --- paper_title: Deep Learning 3D Shape Surfaces Using Geometry Images paper_content: Surfaces serve as a natural parametrization to 3D shapes. Learning surfaces using convolutional neural networks (CNNs) is a challenging task. 
Current paradigms to tackle this challenge are to either adapt the convolutional filters to operate on surfaces, learn spectral descriptors defined by the Laplace-Beltrami operator, or to drop surfaces altogether in lieu of voxelized inputs. Here we adopt an approach of converting the 3D shape into a ‘geometry image’ so that standard CNNs can directly be used to learn 3D shapes. We qualitatively and quantitatively validate that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces. This spherically parameterized shape is then projected and cut to convert the original 3D shape into a flat and regular geometry image. We propose a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features. We show the efficacy of our approach to learn 3D shape surfaces for classification and retrieval tasks on non-rigid and rigid shape datasets. --- paper_title: VoxNet: A 3D Convolutional Neural Network for real-time object recognition paper_content: Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks. --- paper_title: Geometric deep learning: going beyond Euclidean data paper_content: Many scientific fields study data with an underlying structure that is non-Euclidean. 
Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them. --- paper_title: Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs paper_content: Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches. --- paper_title: UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor paper_content: Human action recognition has a wide range of applications including biometrics, surveillance, and human computer interaction. The use of multimodal sensors for human action recognition is steadily increasing. However, there are limited publicly available datasets where depth camera and inertial sensor data are captured at the same time. This paper describes a freely available dataset, named UTD-MHAD, which consists of four temporally synchronized data modalities. These modalities include RGB videos, depth videos, skeleton positions, and inertial signals from a Kinect camera and a wearable inertial sensor for a comprehensive set of 27 human actions. Experimental results are provided to show how this database can be used to study fusion approaches that involve using both depth camera data and inertial sensor data. This public domain dataset is of benefit to multimodality research activities being conducted for human action recognition by various research groups. --- paper_title: RGBD Datasets: Past, Present and Future paper_content: Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. 
These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. --- paper_title: Benchmark datasets for 3D computer vision paper_content: With the rapid development of range image acquisition techniques, 3D computer vision has become a popular research area. It has numerous applications in various domains including robotics, biometrics, remote sensing, entertainment, civil construction, and medical treatment. Recently, a large number of algorithms have been proposed to address specific problems in the area of 3D computer vision. Meanwhile, several benchmark datasets have also been released to stimulate the research in this area. The availability of benchmark datasets plays a significant role in the process of technological progress. In this paper, we first introduce several major 3D acquisition techniques. We also present an overview of various popular topics in 3D computer vision including 3D object modeling, 3D model retrieval, 3D object recognition, 3D face recognition, RGB-D vision, and 3D remote sensing. Moreover, we present a contemporary summary of the existing benchmark datasets in 3D computer vision. This paper can therefore serve as a handbook for those who are working in the related areas. --- paper_title: On Visual Similarity Based 3D Model Retrieval paper_content: A large number of 3D models are created and available on the Web, since more and more 3D modelling and digitizing tools are developed for ever increasing applications. The techniques for content-based 3D model retrieval then become necessary. In this paper, a visual similarity-based 3D model retrieval system is proposed. This approach measures the similarity among 3D models by visual similarity, and the main idea is that if two 3D models are similar, they also look similar from all viewing angles. Therefore, one hundred orthogonal projections of an object, excluding symmetry, are encoded both by Zernike moments and Fourier descriptors as features for later retrieval. The visual similarity-based approach is robust against similarity transformation, noise, model degeneracy etc., and provides 42%, 94% and 25% better performance (precision-recall evaluation diagram) than three other competing approaches: (1) the spherical harmonics approach developed by Funkhouser et al., (2) the MPEG-7 Shape 3D descriptors, and (3) the MPEG-7 Multiple View Descriptor. The proposed system is on the Web for practical trial use (http://3d.csie.ntu.edu.tw), and the database contains more than 10,000 publicly available 3D models collected from WWW pages. Furthermore, a user-friendly interface is provided to retrieve 3D models by drawing 2D shapes. The retrieval is fast enough on a server with Pentium IV 2.4GHz CPU, and it takes about 2 seconds and 0.1 seconds for querying directly by a 3D model and by hand-drawn 2D shapes, respectively.
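To make the view-based retrieval idea in the entry above concrete, the following is a minimal sketch of comparing two 3D models through descriptors of their 2D projections. The real method uses Zernike moments and Fourier descriptors of one hundred silhouettes rendered from a dodecahedron of cameras; here a toy per-view descriptor (normalized central moments of a binary silhouette) and random stand-in silhouettes are used purely for illustration, and the view-matching rule is a simplification of the paper's rotation search.

```python
# A minimal sketch of view-based shape comparison, assuming toy descriptors
# (moments of binary silhouettes) in place of Zernike/Fourier features.
import numpy as np

def silhouette_moments(mask, order=3):
    """Toy per-view descriptor: low-order, scale-normalized central moments of a binary mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros((order + 1) * (order + 1))
    cx, cy = xs.mean(), ys.mean()
    m00 = float(len(xs))
    feats = []
    for p in range(order + 1):
        for q in range(order + 1):
            mu = np.sum((xs - cx) ** p * (ys - cy) ** q)
            feats.append(mu / m00 ** (1 + (p + q) / 2.0))  # normalize away silhouette area
    return np.asarray(feats)

def shape_distance(views_a, views_b):
    """Symmetric distance between two models given per-view descriptor matrices (V x D)."""
    d = np.linalg.norm(views_a[:, None, :] - views_b[None, :, :], axis=-1)  # all view pairs
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two models, each "rendered" to 10 binary 64x64 silhouettes (random stand-ins here).
    model_a = np.stack([silhouette_moments(rng.random((64, 64)) > 0.5) for _ in range(10)])
    model_b = np.stack([silhouette_moments(rng.random((64, 64)) > 0.5) for _ in range(10)])
    print("view-based distance:", shape_distance(model_a, model_b))
```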
--- paper_title: Ensemble of shape functions for 3D object classification paper_content: This work addresses the problem of real-time 3D shape based object class recognition, its scaling to many categories and the reliable perception of categories. A novel shape descriptor for partial point clouds based on shape functions is presented, capable of training on synthetic data and classifying objects from a depth sensor in a single partial view in a fast and robust manner. The classification task is stated as a 3D retrieval task finding the nearest neighbors from synthetically generated views of CAD-models to the sensed point cloud with a Kinect-style depth sensor. The presented shape descriptor shows that the combination of angle, point-distance and area shape functions gives a significant boost in recognition rate against the baseline descriptor and outperforms the state-of-the-art descriptors in our experimental evaluation on a publicly available dataset of real-world objects in table scene contexts with up to 200 categories. --- paper_title: Shape Classification Using the Inner-Distance paper_content: Part structure and articulation are of fundamental importance in computer and human vision. We propose using the inner-distance to build shape descriptors that are robust to articulation and capture part structure. The inner-distance is defined as the length of the shortest path between landmark points within the shape silhouette. We show that it is articulation insensitive and more effective at capturing part structures than the Euclidean distance. This suggests that the inner-distance can be used as a replacement for the Euclidean distance to build more accurate descriptors for complex shapes, especially for those with articulated parts. In addition, texture information along the shortest path can be used to further improve shape classification. With this idea, we propose three approaches to using the inner-distance. The first method combines the inner-distance and multidimensional scaling (MDS) to build articulation invariant signatures for articulated shapes. The second method uses the inner-distance to build a new shape descriptor based on shape contexts. The third one extends the second one by considering the texture information along shortest paths. The proposed approaches have been tested on a variety of shape databases, including an articulated shape data set, MPEG7 CE-Shape-1, Kimia silhouettes, the ETH-80 data set, two leaf data sets, and a human motion silhouette data set. In all the experiments, our methods demonstrate effective performance compared with other algorithms --- paper_title: Scale-invariant heat kernel signatures for non-rigid shape recognition paper_content: One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. 
The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks. --- paper_title: A scalable active framework for region annotation in 3D shape collections paper_content: Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts, across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones. --- paper_title: Fast Point Feature Histograms (FPFH) for 3D registration paper_content: In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. 
To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment). --- paper_title: Combined 2D–3D categorization and classification for multimodal perception systems paper_content: In this article we describe an object perception system for autonomous robots performing everyday manipulation tasks in kitchen environments. The perception system gains its strengths by exploiting that the robots are to perform the same kinds of tasks with the same objects over and over again. It does so by learning the object representations necessary for the recognition and reconstruction in the context of pick-and-place tasks. The system employs a library of specialized perception routines that solve different, well-defined perceptual sub-tasks and can be combined into composite perceptual activities including the construction of an object model database, multimodal object classification, and object model reconstruction for grasping. We evaluate the effectiveness of our methods, and give examples of application scenarios using our personal robotic assistants acting in a human living environment. --- paper_title: Using spin images for efficient object recognition in cluttered 3D scenes paper_content: We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes. --- paper_title: OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation paper_content: We propose a novel method to estimate a unique and repeatable reference frame in the context of 3D object recognition from a single viewpoint based on global descriptors. We show that the ability of defining a robust reference frame on both model and scene views allows creating descriptive global representations of the object view, with the beneficial effect of enhancing the spatial descriptiveness of the feature and its ability to recognize objects by means of a simple nearest neighbor classifier computed on the descriptor space. Moreover, the definition of repeatable directions can be deployed to efficiently retrieve the 6DOF pose of the objects in a scene. We experimentally demonstrate the effectiveness of the proposed method on a dataset including 23 scenes acquired with the Microsoft Kinect sensor and 25 full-3D models by comparing the proposed approach with state-of-the-art global descriptors. A substantial improvement is presented regarding accuracy in recognition and 6DOF pose estimation, as well as in terms of computational performance. 
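The spin-image entry above maps the neighbours of an oriented point into a 2D histogram of cylindrical coordinates. The sketch below shows only that core accumulation step, assuming a point cloud with a known normal at the basis point; the support radius, bin count and the random test cloud are illustrative choices, not values from the paper.

```python
# A compact sketch of spin-image construction around one oriented point (p, n).
import numpy as np

def spin_image(points, p, n, support=0.1, bins=16):
    """Build a (bins x bins) spin image for oriented point (p, n) from an (N, 3) cloud."""
    n = n / np.linalg.norm(n)
    d = points - p                      # vectors from the basis point to all points
    beta = d @ n                        # signed elevation along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))  # radial distance
    keep = (alpha <= support) & (np.abs(beta) <= support)                # keep local support only
    img, _, _ = np.histogram2d(alpha[keep], beta[keep],
                               bins=bins,
                               range=[[0.0, support], [-support, support]])
    return img / max(img.sum(), 1.0)    # normalize so descriptors are comparable

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal(scale=0.05, size=(2000, 3))          # stand-in point cloud
    si = spin_image(cloud, p=cloud[0], n=np.array([0.0, 0.0, 1.0]))
    print(si.shape, si.sum())
```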
--- paper_title: Rotational Projection Statistics for 3D Local Surface Description and Object Recognition paper_content: Recognizing 3D objects in the presence of noise, varying mesh resolution, occlusion and clutter is a very challenging task. This paper presents a novel method named Rotational Projection Statistics (RoPS). It has three major modules: local reference frame (LRF) definition, RoPS feature description and 3D object recognition. We propose a novel technique to define the LRF by calculating the scatter matrix of all points lying on the local surface. RoPS feature descriptors are obtained by rotationally projecting the neighboring points of a feature point onto 2D planes and calculating a set of statistics (including low-order central moments and entropy) of the distribution of these projected points. Using the proposed LRF and RoPS descriptor, we present a hierarchical 3D object recognition algorithm. The performance of the proposed LRF, RoPS descriptor and object recognition algorithm was rigorously tested on a number of popular and publicly available datasets. Our proposed techniques exhibited superior performance compared to existing techniques. We also showed that our method is robust with respect to noise and varying mesh resolution. Our RoPS based algorithm achieved recognition rates of 100, 98.9, 95.4 and 96.0 % respectively when tested on the Bologna, UWA, Queen’s and Ca’ Foscari Venezia Datasets. --- paper_title: A 3D facial expression database for facial behavior research paper_content: Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handing large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor for preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation. --- paper_title: Spatial Data Modelling for 3D GIS paper_content: This book covers fundamental aspects of spatial data modelling specifically on the aspect of three-dimensional (3D) modelling and structuring. Realisation of true 3D GIS spatial system needs a lot of efforts, and the process is taking place in various research centres and universities in some countries. The development of spatial data modelling for 3D objects is the focus of this book. The book begins with some problems and motivations, the fundamental theories, the implementation, and some applications developed based on the concepts. The book is intended for various geoinformation related professionals like GIS engineers, GIS software developers, photogrammetrists, land surveyors, mapping specialists, researchers, postgraduate students, and lecturers. 
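The RoPS entry above builds a local reference frame (LRF) from the scatter matrix of points on a local surface patch. The following is a simplified sketch of that general recipe (weighted covariance, eigendecomposition, sign disambiguation); the specific weighting scheme and the rotational projection statistics of RoPS itself are omitted, and the inverse-distance weights are an illustrative assumption.

```python
# A minimal local-reference-frame sketch from a weighted scatter matrix.
import numpy as np

def local_reference_frame(neighbors, center):
    """Return a 3x3 matrix whose rows are orthonormal LRF axes for a patch around `center`."""
    d = neighbors - center
    w = 1.0 / (np.linalg.norm(d, axis=1) + 1e-9)        # closer points weigh more (illustrative)
    cov = (d * w[:, None]).T @ d / w.sum()               # weighted scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    x_axis, z_axis = eigvecs[:, 2], eigvecs[:, 0]        # largest / smallest variance directions
    # Disambiguate signs so the axes point toward the majority of the local data.
    if np.sum(d @ x_axis) < 0:
        x_axis = -x_axis
    if np.sum(d @ z_axis) < 0:
        z_axis = -z_axis
    y_axis = np.cross(z_axis, x_axis)
    return np.stack([x_axis, y_axis, z_axis])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    patch = rng.normal(size=(200, 3)) * np.array([1.0, 0.5, 0.1])   # flat-ish synthetic patch
    frame = local_reference_frame(patch, patch.mean(axis=0))
    print(np.round(frame @ frame.T, 3))   # ~identity: the recovered axes are orthonormal
```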
--- paper_title: Multi-view Convolutional Neural Networks for 3D Shape Recognition paper_content: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives. --- paper_title: RGB-D Multi-view System Calibration for Full 3D Scene Reconstruction paper_content: One of the most crucial requirements for building a multi-view system is the estimation of relative poses of all cameras. An approach tailored for a RGB-D cameras based multi-view system is missing. We propose BAICP+ which combines Bundle Adjustment (BA) and Iterative Closest Point (ICP) algorithms to take into account both 2D visual and 3D shape information in one minimization formulation to estimate relative pose parameters of each camera. BAICP+ is generic enough to take different types of visual features into account and can be easily adapted to varying quality of 2D and 3D data. We perform experiments on real and simulated data. Results show that with the right weighting factor BAICP+ has an optimal performance when compared to BA and ICP used independently or sequentially. --- paper_title: DeepPano: Deep Panoramic Representation for 3-D Shape Recognition paper_content: This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). Firstly, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principle axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principle axis. Our approach achieves state-of-the-art retrieval/classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin. --- paper_title: Deep Learning 3D Shape Surfaces Using Geometry Images paper_content: Surfaces serve as a natural parametrization to 3D shapes. Learning surfaces using convolutional neural networks (CNNs) is a challenging task. Current paradigms to tackle this challenge are to either adapt the convolutional filters to operate on surfaces, learn spectral descriptors defined by the Laplace-Beltrami operator, or to drop surfaces altogether in lieu of voxelized inputs. 
Here we adopt an approach of converting the 3D shape into a ‘geometry image’ so that standard CNNs can directly be used to learn 3D shapes. We qualitatively and quantitatively validate that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces. This spherically parameterized shape is then projected and cut to convert the original 3D shape into a flat and regular geometry image. We propose a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features. We show the efficacy of our approach to learn 3D shape surfaces for classification and retrieval tasks on non-rigid and rigid shape datasets. --- paper_title: Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs paper_content: We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks. --- paper_title: Data-driven 3D Voxel Patterns for object category recognition paper_content: Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. 
We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D. --- paper_title: Real-time non-rigid reconstruction using an RGB-D camera paper_content: We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting. --- paper_title: RGBD Datasets: Past, Present and Future paper_content: Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. --- paper_title: Spoofing in 2D face recognition with 3D masks and anti-spoofing with Kinect paper_content: The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity.
Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based countermeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it. --- paper_title: Geometric deep learning: going beyond Euclidean data paper_content: Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them. --- paper_title: SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels paper_content: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or is on par with state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. --- paper_title: Object recognition in noisy RGB-D data using GNG paper_content: Object recognition in 3D scenes is a research field in which there is intense activity guided by the problems related to the use of 3D point clouds. Some of these problems are influenced by the presence of noise in the cloud that reduces the effectiveness of a recognition process.
This work proposes a method for dealing with the noise present in point clouds by applying the growing neural gas (GNG) network filtering algorithm. This method is able to represent the input data with the desired number of neurons while preserving the topology of the input space. The results obtained with the GNG were compared against a voxel grid filter to determine the efficacy of our approach. Moreover, since a stage of the recognition process includes the detection of keypoints in a cloud, we evaluated different keypoint detectors to determine which one produces the best results in the selected pipeline. Experiments show how the GNG method yields better recognition results than other filtering algorithms when noise is present. --- paper_title: High quality depth map upsampling for 3D-TOF cameras paper_content: This paper describes an application framework to perform high quality upsampling on depth maps captured from a low-resolution and noisy 3D time-of-flight (3D-ToF) camera that has been coupled with a high-resolution RGB camera. Our framework is inspired by recent work that uses nonlocal means filtering to regularize depth maps in order to maintain fine detail and structure. Our framework extends this regularization with an additional edge weighting scheme based on several image features based on the additional high-resolution RGB input. Quantitative and qualitative results show that our method outperforms existing approaches for 3D-ToF upsampling. We describe the complete process for this system, including device calibration, scene warping for input alignment, and even how the results can be further processed using simple user markup. --- paper_title: Supervised Hash Coding With Deep Neural Network for Environment Perception of Intelligent Vehicles paper_content: Image content analysis is an important surround perception modality of intelligent vehicles. In order to efficiently recognize the on-road environment based on image content analysis from the large-scale scene database, relevant image retrieval becomes one of the fundamental problems. To improve the efficiency of calculating similarities between images, hashing techniques have received increasing attention. For most existing hash methods, the suboptimal binary codes are generated, as the hand-crafted feature representation is not optimally compatible with the binary codes. In this paper, a one-stage supervised deep hashing framework (SDHP) is proposed to learn high-quality binary codes. A deep convolutional neural network is implemented, and we enforce the learned codes to meet the following criteria: 1) similar images should be encoded into similar binary codes, and vice versa; 2) the quantization loss from Euclidean space to Hamming space should be minimized; and 3) the learned codes should be evenly distributed. The method is further extended into SDHP+ to improve the discriminative power of binary codes. Extensive experimental comparisons with state-of-the-art hashing algorithms are conducted on CIFAR-10 and NUS-WIDE; the MAP of SDHP reaches 87.67% and 77.48% with 48 b, respectively, and the MAP of SDHP+ reaches 91.16% and 81.08% with 12 b and 48 b on CIFAR-10 and NUS-WIDE, respectively. These results illustrate that the proposed method clearly improves search accuracy. --- paper_title: Geometric deep learning: going beyond Euclidean data paper_content: Many scientific fields study data with an underlying structure that is non-Euclidean.
Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them. --- paper_title: Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks paper_content: Recent advances in convolutional neural networks have shown promising results in 3D shape completion. But due to GPU memory limitations, these methods can only produce low-resolution outputs. To inpaint 3D models with semantic plausibility and contextual details, we introduce a hybrid framework that combines a 3D Encoder-Decoder Generative Adversarial Network (3D-ED-GAN) and a Long-term Recurrent Convolutional Network (LRCN). The 3D-ED-GAN is a 3D convolutional neural network trained with a generative adversarial paradigm to fill missing 3D data in low-resolution. LRCN adopts a recurrent neural network architecture to minimize GPU memory usage and incorporates an Encoder-Decoder pair into a Long Short-term Memory Network. By handling the 3D model as a sequence of 2D slices, LRCN transforms a coarse 3D shape into a more complete and higher resolution volume. While 3D-ED-GAN captures global contextual structure of the 3D shape, LRCN localizes the fine-grained details. Experimental results on both real-world and synthetic data show reconstructions from corrupted models result in complete and high-resolution 3D objects. --- paper_title: High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference paper_content: We propose a data-driven method for recovering missing parts of 3D shapes. Our method is based on a new deep learning architecture consisting of two sub-networks: a global structure inference network and a local geometry refinement network. The global structure inference network incorporates a long short-term memorized context fusion module (LSTM-CF) that infers the global structure of the shape based on multi-view depth information provided as part of the input. It also includes a 3D fully convolutional (3DFCN) module that further enriches the global structure representation according to volumetric information in the input. Under the guidance of the global structure network, the local geometry refinement network takes as input local 3D patches around missing regions, and progressively produces a high-resolution, complete surface through a volumetric encoder-decoder architecture. Our method jointly trains the global structure inference and local geometry refinement networks in an end-to-end manner. We perform qualitative and quantitative evaluations on six object categories, demonstrating that our method outperforms existing state-of-the-art work on shape completion.
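The shape-completion entries above all rest on a voxel-in / voxel-out 3D encoder-decoder backbone. The sketch below is a minimal PyTorch version of such a backbone only; the GAN training, LSTM context fusion and patch refinement of those papers are not reproduced, and the grid size, channel widths and random stand-in data are illustrative assumptions.

```python
# A minimal 3D encoder-decoder sketch for voxel occupancy completion (illustrative only).
import torch
import torch.nn as nn

class VoxelCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # 1 x 32^3 -> 64 x 8^3
            nn.Conv3d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                       # 64 x 8^3 -> 1 x 32^3
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))                # logits for per-voxel occupancy

if __name__ == "__main__":
    net = VoxelCompletionNet()
    partial = (torch.rand(2, 1, 32, 32, 32) > 0.8).float()  # batch of corrupted occupancy grids
    complete = (torch.rand(2, 1, 32, 32, 32) > 0.5).float() # stand-in ground-truth grids
    loss = nn.BCEWithLogitsLoss()(net(partial), complete)   # per-voxel reconstruction loss
    loss.backward()
    print("output:", net(partial).shape, "loss:", float(loss))
```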
--- paper_title: Fully convolutional networks for semantic segmentation paper_content: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: Learning Hierarchical Features for Scene Labeling paper_content: Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations.
The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks paper_content: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat. --- paper_title: Real-Time Facial Segmentation and Performance Capture from RGB Input paper_content: We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. 
To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real-time by repurposing convolutional neural networks designed originally for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training data, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models on the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement. --- paper_title: Learning Deconvolution Network for Semantic Segmentation paper_content: We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating a deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance on the PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using the Microsoft COCO dataset through an ensemble with the fully convolutional network. --- paper_title: Projective Feature Learning for 3D Shapes with Multi-View Depth Images paper_content: Feature learning for 3D shapes is challenging due to the lack of natural parameterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. This leads to more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.
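Multi-view methods such as MVD-ELM above (and MVCNN earlier) take rendered depth maps of a shape as input. The sketch below shows a bare-bones way to obtain such depth maps from a point cloud by orthographic projection along several virtual view directions; real pipelines render meshes with proper hidden-surface removal, and the resolution and number of views used here are illustrative assumptions.

```python
# A minimal sketch of multi-view orthographic depth-map generation from a point cloud.
import numpy as np

def orthographic_depth(points, rotation, res=64):
    """Project an (N, 3) cloud along +z of the rotated frame into a res x res depth map."""
    p = points @ rotation.T
    p -= p.min(axis=0)
    p /= max(p.max(), 1e-9)                                   # normalize into the unit cube
    u = np.clip((p[:, 0] * (res - 1)).astype(int), 0, res - 1)
    v = np.clip((p[:, 1] * (res - 1)).astype(int), 0, res - 1)
    depth = np.full((res, res), 1.0)                          # background = far plane
    np.minimum.at(depth, (v, u), p[:, 2])                     # keep the closest point per pixel
    return depth

def rotation_about_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    cloud = rng.normal(size=(5000, 3))                        # stand-in shape
    views = [orthographic_depth(cloud, rotation_about_y(a))
             for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
    print(np.stack(views).shape)   # (12, 64, 64): one depth image per virtual camera
```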
--- paper_title: Multi-view Convolutional Neural Networks for 3D Shape Recognition paper_content: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives. --- paper_title: Scale-invariant heat kernel signatures for non-rigid shape recognition paper_content: One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks. --- paper_title: Learning High-Level Feature by Deep Belief Networks for 3-D Model Retrieval and Recognition paper_content: 3-D shape analysis has attracted extensive research efforts in recent years, where the major challenge lies in designing an effective high-level 3-D shape feature. In this paper, we propose a multi-level 3-D shape feature extraction framework by using deep learning. The low-level 3-D shape descriptors are first encoded into geometric bag-of-words, from which middle-level patterns are discovered to explore geometric relationships among words. After that, high-level shape features are learned via deep belief networks, which are more discriminative for the tasks of shape classification and retrieval. Experiments on 3-D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods. 
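The scale-invariant heat kernel signature entry above is built on the spectral formula HKS(x, t) = sum_i exp(-lambda_i t) phi_i(x)^2, where (lambda_i, phi_i) are Laplace-Beltrami eigenpairs, followed by logarithmic time sampling, a derivative in log-time, and the Fourier magnitude to cancel global scale. The sketch below assumes the eigenpairs are already available (e.g., from a mesh Laplacian) and uses random stand-ins plus illustrative sampling parameters.

```python
# A minimal sketch of the (scale-invariant) heat kernel signature from given eigenpairs.
import numpy as np

def heat_kernel_signature(eigvals, eigvecs, times):
    """eigvals: (K,), eigvecs: (V, K) -> HKS matrix of shape (V, len(times))."""
    decay = np.exp(-np.outer(eigvals, times))        # (K, T)
    return (eigvecs ** 2) @ decay                    # (V, T)

def scale_invariant_hks(eigvals, eigvecs, n_times=64, n_freq=8):
    tau = np.linspace(1.0, 8.0, n_times)             # log-sampled times: scaling becomes a shift
    hks = heat_kernel_signature(eigvals, eigvecs, 2.0 ** tau)
    log_hks = np.log(hks + 1e-12)
    d = np.diff(log_hks, axis=1)                     # discrete derivative removes a constant offset
    spectrum = np.abs(np.fft.fft(d, axis=1))         # Fourier magnitude removes the shift
    return spectrum[:, :n_freq]                      # keep low frequencies as the descriptor

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    V, K = 100, 20
    eigvals = np.sort(rng.random(K)) * 10.0          # stand-in Laplacian eigenvalues
    eigvecs = rng.normal(size=(V, K))                # stand-in eigenfunctions sampled at vertices
    print(scale_invariant_hks(eigvals, eigvecs).shape)   # (100, 8): per-vertex descriptors
```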
--- paper_title: Ensemble of PANORAMA-based convolutional neural networks for 3D model classification and retrieval paper_content: A novel method for the classification and retrieval of 3D models is proposed; it exploits the 2D panoramic view representation of 3D models as input to an ensemble of convolutional neural networks which automatically compute the features. The first step of the proposed pipeline, pose normalization, is performed using the SYMPAN method, which is also computed on the panoramic view representation. In the training phase, three panoramic views corresponding to the major axes are used for the training of an ensemble of convolutional neural networks. The panoramic views consist of 3-channel images, containing the Spatial Distribution Map, the Normals’ Deviation Map and the magnitude of the Normals’ Deviation Map Gradient Image. The proposed method aims at capturing feature continuity of 3D models, while simultaneously minimizing data preprocessing via the construction of an augmented image representation. It is extensively tested in terms of classification and retrieval accuracy on two standard large-scale datasets: ModelNet and ShapeNet. --- paper_title: DeepPano: Deep Panoramic Representation for 3-D Shape Recognition paper_content: This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). First, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principal axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from a typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully connected layers, making the learned representations invariant to rotation around the principal axis. Our approach achieves state-of-the-art retrieval/classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin. --- paper_title: Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3-D Meshes paper_content: Discriminative features of 3-D meshes are significant to many 3-D shape analysis tasks. However, handcrafted descriptors and traditional unsupervised 3-D feature learning methods suffer from several significant weaknesses: 1) extensive human intervention is involved; 2) the local and global structure information of 3-D meshes cannot be preserved, which is in fact an important source of discriminability; 3) the irregular vertex topology and arbitrary resolution of 3-D meshes do not allow the direct application of the popular deep learning models; 4) the orientation is ambiguous on the mesh surface; and 5) the effect of rigid and nonrigid transformations on 3-D meshes cannot be eliminated. As a remedy, we propose a deep learning model with a novel irregular model structure, called mesh convolutional restricted Boltzmann machines (MCRBMs). MCRBM aims to simultaneously learn structure-preserving local and global features from a novel raw representation, the local function energy distribution. In addition, multiple MCRBMs can be stacked into a deeper model, called mesh convolutional deep belief networks (MCDBNs). MCDBN employs a novel local structure preserving convolution (LSPC) strategy to convolve the geometry and the local structure learned by the lower MCRBM to the upper MCRBM.
LSPC facilitates resolving the challenging issue of the orientation ambiguity on the mesh surface in MCDBN. Experiments using the proposed MCRBM and MCDBN were conducted on three common aspects: global shape retrieval, partial shape retrieval, and shape correspondence. Results show that the features learned by the proposed methods outperform the other state-of-the-art 3-D shape features. --- paper_title: Deep Learning 3D Shape Surfaces Using Geometry Images paper_content: Surfaces serve as a natural parametrization to 3D shapes. Learning surfaces using convolutional neural networks (CNNs) is a challenging task. Current paradigms to tackle this challenge are to either adapt the convolutional filters to operate on surfaces, learn spectral descriptors defined by the Laplace-Beltrami operator, or to drop surfaces altogether in lieu of voxelized inputs. Here we adopt an approach of converting the 3D shape into a ‘geometry image’ so that standard CNNs can directly be used to learn 3D shapes. We qualitatively and quantitatively validate that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces. This spherically parameterized shape is then projected and cut to convert the original 3D shape into a flat and regular geometry image. We propose a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features. We show the efficacy of our approach to learn 3D shape surfaces for classification and retrieval tasks on non-rigid and rigid shape datasets. --- paper_title: A Survey on Partial Retrieval of 3D Shapes paper_content: Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D modeling, object recognition, and 3D content classification. Recently, more and more researchers have attempted to solve the problems of partial retrieval in the domains of computer graphics, vision, CAD, and multimedia. Unfortunately, in the literature, there is little comprehensive discussion on the state-of-the-art methods of partial shape retrieval. In this article we focus on reviewing the partial shape retrieval methods of the last decade, and help novices grasp the latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Secondly, we classify the existing methods for partial shape retrieval into three classes by several criteria, describe the main ideas and techniques for each class, and compare their advantages and limitations in detail. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions to address partial shape retrieval. --- paper_title: High-level semantic feature for 3D shape based on deep belief networks paper_content: Deep learning has emerged as a powerful technique to extract high-level features from low-level information, which shows that hierarchical representation can be easily achieved. However, applying deep learning to 3D shapes is still a challenge. In this paper, we propose a novel high-level feature learning method for 3D shape retrieval based on deep learning. In this framework, the low-level 3D shape descriptors are first encoded into visual bag-of-words, and then high-level shape features are generated via a deep belief network, which provides a good semantic-preserving ability for the tasks of shape classification and retrieval.
Experiments on 3D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods. --- paper_title: Local deep feature learning framework for 3D shape paper_content: For 3D shape analysis, an effective and efficient feature is the key to popularizing its applications in the 3D domain. In this paper, we present a novel framework to learn and extract a local deep feature (LDF), which encodes multiple low-level descriptors and provides a highly discriminative representation of a local region on a 3D shape. The framework consists of four main steps. First, several basic descriptors are calculated and encapsulated to generate geometric bag-of-words in order to make full use of the various basic descriptors' properties. Then the 3D mesh is down-sampled to hundreds of feature points to accelerate model learning. Next, in order to preserve the local geometric information and establish the relationships among points in a local area, the geometric bag-of-words are encoded into local geodesic-aware bag-of-features (LGA-BoF). However, the resulting feature is redundant, which leads to low discriminability and efficiency. Therefore, in the final step, we use deep belief networks (DBNs) to learn a model, and use it to generate the LDF, which is highly discriminative and effective for 3D shape applications. 3D shape correspondence and symmetry detection experiments compared with related feature descriptors are carried out on several datasets, and shape recognition is also conducted, validating the proposed local deep feature learning framework. Highlights: We propose a novel framework to learn and extract local deep features from several 3D shape descriptors. The framework is not limited to SI-HKS or AGD; other local descriptors are also supported. The learning procedure is fully unsupervised. There are no parameters to be tuned in the learning procedure. Some other parameters have little influence on the performance, and it is easy to select proper parameters. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
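The volumetric representation behind 3D ShapeNets and the other voxel-based models in this section reduces to a binary occupancy grid. A minimal NumPy sketch of such a voxelization (the resolution and helper name are chosen here purely for illustration) is:

```python
# Minimal sketch of a binary voxel-occupancy representation: points sampled from a
# shape are normalised into the unit cube and marked in a fixed-resolution 3D grid.
import numpy as np

def voxelize(points, resolution=30):
    """points: (N, 3) array; returns a (resolution, resolution, resolution) binary grid."""
    p = points - points.min(axis=0)                 # shift into the positive octant
    p = p / (p.max() + 1e-9)                        # isotropic scale into [0, 1]
    idx = np.minimum((p * resolution).astype(int), resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1       # mark occupied voxels
    return grid

occupancy = voxelize(np.random.rand(2048, 3))       # toy point set -> 30x30x30 grid
```

A grid like this, rather than an image, is what the 3D convolutional models discussed below consume as input.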
--- paper_title: RGBD Datasets: Past, Present and Future paper_content: Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. --- paper_title: Deep learning representation using autoencoder for 3D shape retrieval paper_content: We study the problem of how to build a deep learning representation for 3D shape. Deep learning has been shown to be very effective in a variety of visual applications, such as image classification and object detection. However, it has not been successfully applied to 3D shape recognition. This is because 3D shapes have complex structure in 3D space and there are only a limited number of 3D shapes available for feature learning. To address these problems, we project 3D shapes into 2D space and use an autoencoder for feature learning on the 2D images. High-accuracy 3D shape retrieval performance is obtained by aggregating the features learned on 2D images. In addition, we show the proposed deep learning feature is complementary to conventional local image descriptors. By combining the global deep learning representation and the local descriptor representation, our method can obtain state-of-the-art performance on 3D shape retrieval benchmarks. --- paper_title: Indoor Semantic Segmentation using depth information paper_content: This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art results on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA. --- paper_title: Convolutional-Recursive Deep Learning for 3D Object Classification paper_content: Recent advances in 3D sensing technologies make it possible to easily record color and depth images which together can improve object recognition. Most current methods rely on very well-designed features for this new 3D modality. We introduce a model based on a combination of convolutional and recursive neural networks (CNN and RNN) for learning features and classifying RGB-D images. The CNN layer learns low-level translationally invariant features which are then given as inputs to multiple, fixed-tree RNNs in order to compose higher order features. RNNs can be seen as combining convolution and pooling into one efficient, hierarchical operation. Our main result is that even RNNs with random weights compose powerful features.
Our model obtains state of the art performance on a standard RGB-D object dataset while being more accurate and faster during training and testing than comparable architectures such as two-layer CNNs. --- paper_title: RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features paper_content: Object recognition and pose estimation from RGB-D images are important tasks for manipulation robots which can be learned from examples. Creating and annotating datasets for learning is expensive, however. We address this problem with transfer learning from deep convolutional neural networks (CNN) that are pre-trained for image categorization and provide a rich, semantically meaningful feature set. We incorporate depth information, which the CNN was not trained with, by rendering objects from a canonical perspective and colorizing the depth channel according to distance from the object center. We evaluate our approach on the Washington RGB-D Objects dataset, where we find that the generated feature set naturally separates classes and instances well and retains pose manifolds. We outperform state-of-the-art on a number of subtasks and show that our approach can yield superior results when only little training data is available. --- paper_title: Multimodal deep learning for robust RGB-D object recognition paper_content: Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset and show recognition in challenging RGB-D real-world noisy settings. --- paper_title: 3D shape retrieval using a single depth image from low-cost sensors paper_content: Content-based 3D shape retrieval is an important problem in computer vision. Traditional retrieval interfaces require a 2D sketch or a manually designed 3D model as the query, which is difficult to specify and thus not practical in real applications. With the recent advance in low-cost 3D sensors such as Microsoft Kinect and Intel Realsense, capturing depth images that carry 3D information is fairly simple, making shape retrieval more practical and user-friendly. In this paper, we study the problem of cross-domain 3D shape retrieval using a single depth image from low-cost sensors as the query to search for similar human designed CAD models. We propose a novel method using an ensemble of autoencoders in which each autoencoder is trained to learn a compressed representation of depth views synthesize d from each database object. By viewing each autoencoder as a probabilistic model, a likelihood score can be derived as a similarity measure. 
A domain adaptation layer is built on top of the autoencoder outputs to explicitly address the cross-domain issue (between noisy sensory data and clean 3D models) by incorporating training data of sensor depth images and their category labels in a weakly supervised learning formulation. Experiments using real-world depth images and a large-scale CAD dataset demonstrate the effectiveness of our approach, which offers significant improvements over state-of-the-art 3D shape retrieval methods. --- paper_title: 3D Object Recognition using Convolutional Neural Networks with Transfer Learning between Input Channels paper_content: RGB-D data is attracting ever more interest from the research community as cheap cameras appear on the market and applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs), which have consistently beaten the competition on most benchmark datasets. In this paper, we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. We present experiments that show that our proposed approach can achieve both these goals. --- paper_title: Orientation-boosted Voxel Nets for 3D Object Recognition paper_content: Recent work has shown good results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection. --- paper_title: Beam search for learning a deep Convolutional Neural Network of 3D shapes paper_content: This paper addresses 3D shape recognition. Recent work typically represents a 3D shape as a set of binary variables corresponding to 3D voxels of a uniform 3D grid centered on the shape, and resorts to deep convolutional neural networks (CNNs) for modeling these binary variables. Robust learning of such CNNs is currently limited by the small datasets of 3D shapes available - an order of magnitude smaller than other common datasets in computer vision. Related work typically deals with the small training datasets using a number of ad hoc, hand-tuning strategies. To address this issue, we formulate CNN learning as a beam search aimed at identifying an optimal CNN architecture - namely, the number of layers, nodes, and their connectivity in the network - as well as estimating parameters of such an optimal CNN. Each state of the beam search corresponds to a candidate CNN. Two types of actions are defined to add new convolutional filters or new convolutional layers to a parent CNN, and thus transition to child states. The utility function of each action is efficiently computed by transferring parameter values of the parent CNN to its children, thereby enabling an efficient beam search.
Our experimental evaluation on the 3D ModelNet dataset demonstrates that our model pursuit using the beam search yields a CNN with superior performance on 3D shape classification than the state of the art. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Generative and Discriminative Voxel Modeling with Convolutional Neural Networks paper_content: When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification. --- paper_title: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift paper_content: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. 
Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters. --- paper_title: Deep Networks with Stochastic Depth paper_content: Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 % on CIFAR-10). --- paper_title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning paper_content: Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge --- paper_title: Toward real-time 3D object recognition: A lightweight volumetric CNN framework using multitask learning paper_content: Abstract 3D data are becoming increasingly popular and easier to access, making 3D information increasingly important for object recognition. Although volumetric convolutional neural networks (CNNs) have been exploited to recognize 3D objects and have achieved notable progress, their computational cost is too high for real-time applications. 
In this paper, we propose a lightweight volumetric CNN architecture (namely, LightNet) to address the real-time 3D object recognition problem by leveraging multitask learning. We use LightNet to simultaneously predict class and orientation labels from complete and partial shapes. In contrast to the earlier version of this method presented at 3DOR 2017, this extended version introduces batch normalization and better training strategies to improve the recognition accuracy, and also includes more experiments on the newly released large-scale ShapeNet Core55 dataset. Our model has been evaluated on three publicly available benchmarks of complete 3D CAD shapes and incomplete point clouds. Experimental results show that our model achieves state-of-the-art 3D object recognition performance among shallow volumetric CNNs with the smallest number of training parameters. It is also demonstrated that our method can perform accurate object recognition in real time (less than 6 ms). --- paper_title: VoxNet: A 3D Convolutional Neural Network for real-time object recognition paper_content: Robust object recognition is a crucial skill for robots operating autonomously in real-world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. --- paper_title: Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images paper_content: We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent.
We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from a RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200× faster than the original Sliding Shapes. --- paper_title: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations paper_content: There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images. --- paper_title: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling paper_content: We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods. --- paper_title: VConv-DAE: Deep Volumetric Shape Learning Without Object Labels paper_content: With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes. 
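To illustrate the kind of volumetric generator sketched in the 3D-GAN entry above, here is a hedged PyTorch example (layer widths, latent size, and output resolution are illustrative rather than the paper's exact configuration): transposed 3D convolutions upsample a latent code into a grid of voxel occupancy probabilities.

```python
# Sketch of a 3D-GAN-style voxel generator: a latent vector is progressively
# upsampled by transposed 3D convolutions into a 32^3 occupancy grid.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose3d(64, 128, kernel_size=4),                      # 1^3  -> 4^3
    nn.BatchNorm3d(128), nn.ReLU(),
    nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1), # 4^3  -> 8^3
    nn.BatchNorm3d(64), nn.ReLU(),
    nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 8^3  -> 16^3
    nn.BatchNorm3d(32), nn.ReLU(),
    nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),   # 16^3 -> 32^3
    nn.Sigmoid(),                                                    # occupancy in [0, 1]
)

z = torch.randn(8, 64, 1, 1, 1)          # batch of latent codes
voxels = generator(z)                    # -> (8, 1, 32, 32, 32)
```

A mirror-image stack of strided 3D convolutions would play the role of the discriminator, and its intermediate activations can serve as an unsupervised shape descriptor, as the entry describes.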
--- paper_title: GIFT: A Real-Time and Scalable 3D Shape Search Engine paper_content: Projective analysis is an important solution for 3D shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different view points. Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, thus cannot satisfy the basic requirement of scalability for search engines. In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: (1) efficient projection and view feature extraction using GPU acceleration, (2) the first inverted file, referred as F-IF, is utilized to speed up the procedure of multi-view matching, (3) the second inverted file (S-IF), which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and Inverted File (Twice), as GIFT. Besides its high efficiency, GIFT also outperforms the state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks and competitions. --- paper_title: Volumetric and Multi-view CNNs for Object Classification on 3D Data paper_content: 3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multi-resolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data. --- paper_title: RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints paper_content: We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. 
Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation. The effectiveness of RotationNet is demonstrated by its superior performance over state-of-the-art methods of 3D object classification on the 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves state-of-the-art performance on an object pose estimation dataset. The code is available on this https URL --- paper_title: Projective Feature Learning for 3D Shapes with Multi-View Depth Images paper_content: Feature learning for 3D shapes is challenging due to the lack of natural parameterization for 3D surface models. We adopt the multi-view depth image representation and propose the Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast, high-quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights, and in each layer their unprojections together form a valid 3D reconstruction of the input 3D shape by using normalized convolution kernels. This leads to more accurate 3D feature learning, as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning. --- paper_title: 3D object retrieval with stacked local convolutional autoencoder paper_content: The success of object recognition and retrieval is largely determined by data representation. A good feature descriptor can detect the high-level abstraction of objects, which contains much discriminative information. In this paper, a novel 3D object retrieval method is proposed based on a stacked local convolutional autoencoder (SLCAE). In this approach, a greedy layerwise strategy is applied to train the SLCAE, and a gradient descent method is used for training each layer. After training, the representations of the input data can be obtained and regarded as the features of the 3D objects. The experiments are conducted on three publicly available 3D object datasets, and the results demonstrate that the proposed method can greatly improve 3D object retrieval performance, compared with several state-of-the-art methods. Highlights: It proposes a novel 3D model retrieval method with a stacked local convolutional autoencoder (SLCAE). The greedy layerwise strategy is applied to train the SLCAE. A gradient descent method is used for training each layer. The experiments are conducted on three publicly available 3D object datasets, and the results demonstrate that the proposed method can greatly improve 3D object retrieval performance, compared with several state-of-the-art methods. --- paper_title: Multi-view Convolutional Neural Networks for 3D Shape Recognition paper_content: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images.
We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives. --- paper_title: 3D object understanding with 3D Convolutional Neural Networks paper_content: Feature engineering plays an important role in object understanding. Expressive discriminative features can guarantee the success of object understanding tasks. With remarkable ability of data abstraction, deep hierarchy architecture has the potential to represent objects. For 3D objects with multiple views, the existing deep learning methods can not handle all the views with high quality. In this paper, we propose a 3D convolutional neural network, a deep hierarchy model which has a similar structure with convolutional neural network. We employ stochastic gradient descent (SGD) method to pretrain the convolutional layer, and then a back-propagation method is proposed to fine-tune the whole network. Finally, we use the result of the two phases for 3D object retrieval. The proposed method is shown to out-perform the state-of-the-art approaches by experiments conducted on publicly available 3D object datasets. --- paper_title: FusionNet: 3D Object Classification Using Multiple Data Representations paper_content: High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, to achieve leading results on the Princeton ModelNet challenge. The two representations: 1. Volumetric representation: the 3D object is discretized spatially as binary voxels - $1$ if the voxel is occupied and $0$ otherwise. 2. Pixel representation: the 3D object is represented as a set of projected 2D pixel images. Current leading submissions to the ModelNet Challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally, use Volumetric CNNs to bridge the gap between the efficiency of the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures. --- paper_title: A 3D Shape Retrieval Framework Supporting Multimodal Queries paper_content: This paper presents a unified framework for 3D shape retrieval. The method supports multimodal queries (2D images, sketches, 3D objects) by introducing a novel view-based approach able to handle the different types of multimedia data. More specifically, a set of 2D images (multi-views) are automatically generated from a 3D object, by taking views from uniformly distributed viewpoints. For each image, a set of 2D rotation-invariant shape descriptors is produced. 
The global shape similarity between two 3D models is achieved by applying a novel matching scheme, which effectively combines the information extracted from the multi-view representation. The experimental results prove that the proposed method demonstrates superior performance over other well-known state-of-the-art approaches. --- paper_title: Sketch-based 3D shape retrieval using Convolutional Neural Networks paper_content: Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of “best views” are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection - matching) is pragmatic but also problematic because the “best views” are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of “best views” and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics. --- paper_title: Generative and Discriminative Voxel Modeling with Convolutional Neural Networks paper_content: When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification. --- paper_title: Dominant Set Clustering and Pooling for Multi-View 3D Object Recognition paper_content: View based strategies for 3D object recognition have proven to be very successful. The state-of-the-art methods now achieve over 90% correct category level recognition performance on appearance images. We improve upon these methods by introducing a view clustering and pooling layer based on dominant sets. The key idea is to pool information from views which are similar and thus belong to the same cluster. The pooled feature vectors are then fed as inputs to the same layer, in a recurrent fashion. This recurrent clustering and pooling module, when inserted in an off-the-shelf pretrained CNN, boosts performance for multi-view 3D object recognition, achieving a new state of the art test set recognition accuracy of 93.8% on the ModelNet 40 database. 
We also explore a fast approximate learning strategy for our cluster-pooling CNN, which, while sacrificing end-to-end learning, greatly improves its training efficiency with only a slight reduction of recognition accuracy to 93.3%. Our implementation is available at this https URL. --- paper_title: An efficient and effective convolutional auto-encoder extreme learning machine network for 3d feature learning paper_content: 3D shape features play a crucial role in graphics applications, such as 3D shape matching, recognition, and retrieval. Various 3D shape descriptors have been developed over the last two decades; however, existing descriptors are handcrafted features that are labor-intensively designed and cannot extract discriminative information for a large set of data. In this paper, we propose a rapid 3D feature learning method, namely, a convolutional auto-encoder extreme learning machine (CAE-ELM) that combines the advantages of the convolutional neuron network, auto-encoder, and extreme learning machine (ELM). This method performs better and faster than other methods. In addition, we define a novel architecture based on CAE-ELM. The architecture accepts two types of 3D shape representation, namely, voxel data and signed distance field data (SDF), as inputs to extract the global and local features of 3D shapes. Voxel data describe structural information, whereas SDF data contain details on 3D shapes. Moreover, the proposed CAE-ELM can be used in practical graphics applications, such as 3D shape completion. Experiments show that the features extracted by CAE-ELM are superior to existing hand-crafted features and other deep learning methods or ELM models. Moreover, the classification accuracy of the proposed architecture is superior to that of other methods on ModelNet10 (91.4%) and ModelNet40 (84.35%). The training process also runs faster than existing deep learning methods by approximately two orders of magnitude. --- paper_title: Extreme learning machine: Theory and applications paper_content: Abstract It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: (1) the slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called e xtreme l earning m achine (ELM) for s ingle-hidden l ayer f eedforward neural n etworks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results based on a few artificial and real benchmark function approximation and classification problems including very large complex applications show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks. 1 --- paper_title: 3D Object Classification Using Deep Belief Networks paper_content: Extracting features with strong expressive and discriminative ability is one of key factors for the effectiveness of 3D model classifier. 
A large body of research has shown that deep belief networks (DBNs) have enough power to represent the distributions of input data. In this paper, we apply a DBN to extract the features of 3D models. After applying a contrastive divergence method, we obtain a well-trained DBN, which can powerfully represent the input data. The feature is then taken from the output of the last layer. This procedure is unsupervised. Due to the limited amount of labeled data, a semi-supervised method is utilized to recognize 3D objects using the feature obtained from the trained DBN. The experiments are conducted on the publicly available Princeton Shape Benchmark (PSB), and the experimental results demonstrate the effectiveness of our method. --- paper_title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning paper_content: Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge. --- paper_title: Deep learning for 3D shape classification from multiple depth maps paper_content: This paper proposes a novel approach for the classification of 3D shapes exploiting deep learning techniques. The proposed algorithm starts by constructing a set of depth maps by rendering the input 3D shape from different viewpoints. Then the depth maps are fed to a multi-branch Convolutional Neural Network. Each branch of the network takes as input one of the depth maps and produces a classification vector by using 5 convolutional layers of progressively reduced resolution. The various classification vectors are finally fed to a linear classifier that combines the outputs of the various branches and produces the final classification. Experimental results on the Princeton ModelNet database show that the proposed approach achieves high classification accuracy and outperforms several state-of-the-art approaches. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g.
Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks. --- paper_title: 3D Point Cloud Classification and Segmentation using 3D Modified Fisher Vector Representation for Convolutional Neural Networks paper_content: The point cloud is gaining prominence as a method for representing 3D shapes, but its irregular format poses a challenge for deep learning methods. The common solution of transforming the data into a 3D voxel grid introduces its own challenges, mainly large memory size. In this paper we propose a novel 3D point cloud representation called 3D Modified Fisher Vectors (3DmFV). Our representation is hybrid, as it combines the discrete structure of a grid with a continuous generalization of Fisher vectors, in a compact and computationally efficient way. Using the grid enables us to design a new CNN architecture for point cloud classification and part segmentation. In a series of experiments we demonstrate performance that is competitive with or even better than the state of the art on challenging benchmark datasets. --- paper_title: Pairwise Decomposition of Image Sequences for Active Multi-view Recognition paper_content: A multi-view image sequence provides a much richer capacity for object recognition than a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to the next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
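As a simplified illustration of the grid-of-Gaussians idea behind the 3DmFV entry above (a sketch under assumptions: uniform Gaussian weights, a fixed isotropic sigma, and only sum aggregation, whereas the paper also uses other aggregations), each point contributes soft-assignment statistics to every cell of a uniform grid, yielding a fixed-size tensor that a 3D CNN can consume:

```python
# Simplified Fisher-vector-style grid features for a point cloud: a Gaussian sits
# at every cell of an m x m x m grid over the unit cube; each point contributes
# soft-assignment (0th), mean-offset (1st) and variance (2nd) statistics.
import numpy as np

def grid_fisher_features(points, m=8, sigma=None):
    """points: (N, 3) in [0, 1]^3 -> feature tensor of shape (m, m, m, 7)."""
    sigma = sigma or 1.0 / m
    axis = (np.arange(m) + 0.5) / m                             # grid-cell centres per axis
    mu = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
    diff = (points[:, None, :] - mu[None, :, :]) / sigma        # (N, K, 3)
    logp = -0.5 * np.sum(diff ** 2, axis=2)                     # unnormalised log-likelihoods
    gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)                   # soft assignments (N, K)
    n = points.shape[0]
    f0 = gamma.sum(0) / n                                       # 0th-order statistic (K,)
    f1 = (gamma[:, :, None] * diff).sum(0) / n                  # 1st-order statistic (K, 3)
    f2 = (gamma[:, :, None] * (diff ** 2 - 1.0)).sum(0) / n     # 2nd-order statistic (K, 3)
    feats = np.concatenate([f0[:, None], f1, f2], axis=1)       # (K, 7)
    return feats.reshape(m, m, m, 7)

features = grid_fisher_features(np.random.rand(1024, 3))        # -> (8, 8, 8, 7)
```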
--- paper_title: SO-Net: Self-Organizing Network for Point Cloud Analysis paper_content: This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website. https://github.com/lijx10/SO-Net --- paper_title: PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation paper_content: Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. 
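The PointNet entry above hinges on a symmetric aggregation function that makes the network invariant to the ordering of the input points. The sketch below is a deliberately simplified PyTorch illustration of that idea (a shared per-point MLP followed by channel-wise max pooling); the layer widths and the omission of PointNet's input and feature transform networks are simplifying assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Simplified PointNet-style classifier: a shared per-point MLP
    (implemented as 1x1 convolutions) followed by channel-wise max
    pooling, which makes the output invariant to point ordering."""

    def __init__(self, num_classes=40):
        super().__init__()
        self.point_mlp = nn.Sequential(      # applied to every point independently
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(           # classifier on the global feature
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, xyz):                  # xyz: (batch, 3, num_points)
        per_point = self.point_mlp(xyz)      # (batch, 1024, num_points)
        global_feat = per_point.max(dim=2).values   # symmetric aggregation
        return self.head(global_feat)

# Usage: logits for a batch of 8 clouds with 1024 points each.
logits = TinyPointNet()(torch.randn(8, 3, 1024))
print(logits.shape)   # torch.Size([8, 40])
```

The max over the point dimension is what provides permutation invariance; any other symmetric function (for example, sum or mean) would preserve the same property.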
--- paper_title: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper_content: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. --- paper_title: Order Matters: Sequence to sequence for sets paper_content: Sequences have become first class citizens in supervised learning thanks to the resurgence of recurrent neural networks. Many complex tasks that require mapping from or to a sequence of observations can now be formulated with the sequence-to-sequence (seq2seq) framework which employs the chain rule to efficiently represent the joint probability of sequences. In many cases, however, variable sized inputs and/or outputs might not be naturally expressed as sequences. For instance, it is not clear how to input a set of numbers into a model where the task is to sort them; similarly, we do not know how to organize outputs when they correspond to random variables and the task is to model their unknown joint probability. In this paper, we first show using various examples that the order in which we organize input and/or output data matters significantly when learning an underlying model. We then discuss an extension of the seq2seq framework that goes beyond sequences and handles input sets in a principled way. In addition, we propose a loss which, by searching over possible orders during training, deals with the lack of structure of output sets. We show empirical evidence of our claims regarding ordering, and on the modifications to the seq2seq framework on benchmark language modeling and parsing tasks, as well as two artificial tasks -- sorting numbers and estimating the joint probability of unknown graphical models. --- paper_title: Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models paper_content: We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and share parameters of these transformations according to the subdivisions of the point clouds imposed onto them by Kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behaviour. 
In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation. --- paper_title: Deep Sets paper_content: We study the problem of designing models for machine learning tasks defined on sets. In contrast to the traditional approach of operating on fixed dimensional vectors, we consider objective functions that are defined on sets and invariant to permutations. Such problems are widespread, ranging from the estimation of population statistics, to anomaly detection in piezometer data of embankment dams, to cosmology. Our main theorem characterizes the permutation invariant objective functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection. --- paper_title: Deep Learning with Sets and Point Clouds paper_content: We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is to make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information. --- paper_title: A Network Architecture for Point Cloud Classification via Automatic Depth Images Generation paper_content: We propose a novel neural network architecture for point cloud classification. Our key idea is to automatically transform the 3D unordered input data into a set of useful 2D depth images, and classify them by exploiting well performing image classification CNNs. We present new differentiable module designs to generate depth images from a point cloud. These modules can be combined with any network architecture for processing point clouds. We utilize them in combination with state-of-the-art classification networks, and get results competitive with the state of the art in point cloud classification. Furthermore, our architecture automatically produces informative images representing the input point cloud, which could be used for further applications such as point cloud visualization. --- paper_title: Learning shape correspondence with anisotropic convolutional neural networks paper_content: Establishing correspondence between shapes is a fundamental problem in geometry processing, arising in a wide variety of applications. The problem is especially difficult in the setting of non-isometric deformations, as well as in the presence of topological noise and missing parts, mainly due to the limited capability to model such deformations axiomatically. Several recent works showed that invariance to complex shape transformations can be learned from examples.
In this paper, we introduce an intrinsic convolutional neural network architecture based on anisotropic diffusion kernels, which we term Anisotropic Convolutional Neural Network (ACNN). In our construction, we generalize convolutions to non-Euclidean domains by constructing a set of oriented anisotropic diffusion kernels, creating in this way a local intrinsic polar representation of the data (`patch'), which is then correlated with a filter. Several cascades of such filters, linear, and non-linear operators are stacked to form a deep neural network whose parameters are learned by minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic dense correspondences between deformable shapes in very challenging settings, achieving state-of-the-art results on some of the most difficult recent correspondence benchmarks. --- paper_title: Geodesic Convolutional Neural Networks on Riemannian Manifolds paper_content: Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we introduce Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract "patches", which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use GCNN to learn invariant shape features, allowing to achieve state-of-the-art performance in problems such as shape description, retrieval, and correspondence. --- paper_title: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering paper_content: In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs. --- paper_title: Learning Multiagent Communication with Backpropagation paper_content: Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. 
In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand. --- paper_title: SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels paper_content: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines that makes the computation time independent of the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows full end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or is on par with state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. --- paper_title: Coupled quasi-harmonic bases paper_content: The use of Laplacian eigenbases has been shown to be fruitful in many computer graphics applications. Today, state-of-the-art approaches to shape analysis, synthesis, and correspondence rely on these natural harmonic bases that allow using classical tools from harmonic analysis on manifolds. However, many applications involving multiple shapes are hindered by the fact that Laplacian eigenbases computed independently on different shapes are often incompatible with each other. In this paper, we propose the construction of common approximate eigenbases for multiple shapes using approximate joint diagonalization algorithms, taking as input a set of corresponding functions (e.g. indicator functions of stable regions) on the two shapes. We illustrate the benefits of the proposed approach on tasks from shape editing, pose transfer, correspondence, and similarity. --- paper_title: A Compositional Object-Based Approach to Learning Physical Dynamics paper_content: We present the Neural Physics Engine (NPE), a framework for learning simulators of intuitive physics that naturally generalize across variable object count and different scene configurations. We propose a factorization of a physical scene into composable object-based representations and a neural network architecture whose compositional structure factorizes object dynamics into pairwise interactions. Like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions; realized as a neural network, it can be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.
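The NPE entry above (and the related Interaction Networks entry later in this list) factorizes scene dynamics into pairwise object interactions. The toy sketch below illustrates that factorization under purely illustrative assumptions: a fully connected pairing of objects, a relation MLP that produces per-pair effects, and an object MLP that consumes the summed effects. Module names and sizes are hypothetical and not taken from either paper.

```python
import torch
import torch.nn as nn

class PairwiseDynamics(nn.Module):
    """Toy object-based dynamics model: a relation MLP scores every ordered
    pair of objects, the per-object effects are summed, and an object MLP
    predicts each object's next state from its current state plus the
    aggregated effect."""

    def __init__(self, state_dim=4, effect_dim=32):
        super().__init__()
        self.relation_mlp = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, effect_dim),
        )
        self.object_mlp = nn.Sequential(
            nn.Linear(state_dim + effect_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, states):                            # states: (num_objects, state_dim)
        n = states.size(0)
        senders = states.unsqueeze(0).expand(n, n, -1)    # entry [i, j] = state of sender j
        receivers = states.unsqueeze(1).expand(n, n, -1)  # entry [i, j] = state of receiver i
        effects = self.relation_mlp(torch.cat([receivers, senders], dim=-1))
        mask = (1.0 - torch.eye(n)).unsqueeze(-1)         # drop self-interactions
        aggregated = (effects * mask).sum(dim=1)          # (num_objects, effect_dim)
        return self.object_mlp(torch.cat([states, aggregated], dim=-1))

# Usage: predict next states for 5 objects with 4-dimensional states.
next_states = PairwiseDynamics()(torch.randn(5, 4))
print(next_states.shape)   # torch.Size([5, 4])
```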
--- paper_title: The Graph Neural Network Model paper_content: Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities. --- paper_title: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper_content: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. --- paper_title: Gated Graph Sequence Neural Networks paper_content: Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures. --- paper_title: Interaction Networks for Learning about Objects, Relations and Physics paper_content: Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence.
Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains. --- paper_title: Learning Combinatorial Optimization Algorithms over Graphs paper_content: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems. --- paper_title: Spectral Networks and Locally Connected Networks on Graphs paper_content: Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures. --- paper_title: Learning Convolutional Neural Networks for Graphs paper_content: Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. 
Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient. --- paper_title: Semi-Supervised Classification with Graph Convolutional Networks paper_content: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. --- paper_title: Emergence of Complex-Like Cells in a Temporal Product Network with Local Receptive Fields paper_content: We introduce a new neural architecture and an unsupervised algorithm for learning invariant representations from temporal sequences of images. The system uses two groups of complex cells whose outputs are combined multiplicatively: one that represents the content of the image, constrained to be constant over several consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encoder to extract features, and a decoder to reconstruct the input from the features. The method was applied to patches extracted from consecutive movie frames and produces orientation and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive field spread over a large image of arbitrary size. A layer of complex cells, subject to sparsity constraints, pools feature units over overlapping local neighborhoods, which causes the feature units to organize themselves into pinwheel patterns of orientation-selective receptive fields, similar to those observed in the mammalian visual cortex. A feed-forward encoder efficiently computes the feature representation of full images. --- paper_title: Community Detection with Graph Neural Networks paper_content: We study data-driven methods for community detection in graphs. This estimation problem is typically formulated in terms of the spectrum of certain operators, as well as via posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the Stochastic Block Model, recent research has unified these two approaches, and identified both statistical and computational signal-to-noise detection thresholds. We embed the resulting class of algorithms within a generic family of graph neural networks and show that they can reach those detection thresholds in a purely data-driven manner, without access to the underlying generative models and with no parameter assumptions. The resulting model is also tested on real datasets, requiring fewer computational steps and performing significantly better than rigid parametric models.
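The graph convolutional network (GCN) entry above describes a localized first-order approximation of spectral graph convolutions. As a rough sketch of how such a layer can be computed, the snippet below implements the commonly cited renormalized propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) in NumPy; the two-layer depth, feature sizes, and random weights are illustrative assumptions, not a faithful reproduction of the cited implementation.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetric renormalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(a_norm, features, weights):
    """One graph-convolution layer: aggregate neighbor features, then apply
    a shared linear map followed by a ReLU non-linearity."""
    return np.maximum(a_norm @ features @ weights, 0.0)

# Usage on a toy 4-node graph with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
x = np.random.randn(4, 3)
w1 = np.random.randn(3, 8) * 0.1     # illustrative random weights
w2 = np.random.randn(8, 2) * 0.1
a_norm = normalize_adjacency(adj)
hidden = gcn_layer(a_norm, x, w1)
logits = a_norm @ hidden @ w2        # final layer without the non-linearity
print(logits.shape)                  # (4, 2)
```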
--- paper_title: A new model for learning in graph domains paper_content: In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model. --- paper_title: Convolutional Networks on Graphs for Learning Molecular Fingerprints paper_content: We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks. --- paper_title: Geometric deep learning: going beyond Euclidean data paper_content: Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them. --- paper_title: Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs paper_content: Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. 
We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches. --- paper_title: Selecting Receptive Fields in Deep Networks paper_content: Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer. Unfortunately, for such large architectures the number of parameters can grow quadratically in the width of the network, thus necessitating hand-coded "local receptive fields" that limit the number of connections from lower level features to higher ones (e.g., based on spatial locality). In this paper we propose a fast method to choose these connections that may be incorporated into a wide variety of unsupervised training methods. Specifically, we choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive fields (such as improved scalability, and reduced data requirements) when we do not know how to specify such receptive fields by hand or where our unsupervised training algorithm has no obvious generalization to a topographic setting. We produce results showing how this method allows us to use even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on CIFAR and STL datasets: 82.0% and 60.1% accuracy, respectively. --- paper_title: Graph2Seq: Scalable Learning Dynamics for Graphs paper_content: Neural networks are increasingly used as a general purpose approach to learning algorithms over graph structured data. However, techniques for representing graphs as real-valued vectors are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but as we show in this paper, these methods have difficulty generalizing to large graphs. In this paper we propose Graph2Seq, an embedding framework that represents graphs as an infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq naturally scales to graphs of arbitrary size. Moreover, through analysis of a formal computational model we show that an unbounded sequence is necessary for scalability. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequence. Experimental evaluations of Graph2Seq on a variety of combinatorial optimization problems show strong generalization and strict improvement over state of the art. --- paper_title: Local Spectral Graph Convolution for Point Set Feature Learning paper_content: Feature learning on point clouds has shown great promise, with the introduction of effective and generalizable deep learning frameworks such as pointnet++. Thus far, however, point features have been abstracted in an independent and isolated manner, ignoring the relative layout of neighboring points as well as their features. In the present article, we propose to overcome this limitation by using spectral graph convolution on a local graph, combined with a novel graph pooling strategy. 
In our approach, graph convolution is carried out on a nearest neighbor graph constructed from a point's neighborhood, such that features are jointly learned. We replace the standard max pooling step with a recursive clustering and pooling strategy, devised to aggregate information from within clusters of nodes that are close to one another in their spectral coordinates, leading to richer overall feature descriptors. Through extensive experiments on diverse datasets, we show a consistent demonstrable advantage for the tasks of both point set classification and segmentation. --- paper_title: 3DBodyTex: Textured 3D Body Dataset paper_content: In this paper, a dataset, named 3DBodyTex, of static 3D body scans with high-quality texture information is presented along with a fully automatic method for body model fitting to a 3D scan. 3D shape modelling is a fundamental area of computer vision that has a wide range of applications in the industry. It is becoming even more important as 3D sensing technologies are entering consumer devices such as smartphones. As the main output of these sensors is the 3D shape, many methods rely on this information alone. The 3D shape information is, however, very high dimensional and leads to models that must handle many degrees of freedom from limited information. Coupling texture and 3D shape alleviates this burden, as the texture of 3D objects is complementary to their shape. Unfortunately, high-quality texture content is lacking from commonly available datasets, and in particular in datasets of 3D body scans. The proposed 3DBodyTex dataset aims to fill this gap with hundreds of high-quality 3D body scans with high-resolution texture. Moreover, a novel fully automatic pipeline to fit a body model to a 3D scan is proposed. It includes a robust 3D landmark estimator that takes advantage of the high-resolution texture of 3DBodyTex. The pipeline is applied to the scans, and the results are reported and discussed, showcasing the diversity of the features in the dataset. --- paper_title: A 3D facial expression database for facial behavior research paper_content: Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handling large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation. --- paper_title: ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes paper_content: A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets.
Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval. --- paper_title: ImageNet: A large-scale hierarchical image database paper_content: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. --- paper_title: Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis paper_content: Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community. 
--- paper_title: FAUST: Dataset and Evaluation for 3D Mesh Registration paper_content: New scanning technologies are increasing the importance of 3D mesh data and the need for algorithms that can reliably align it. Surface registration is important for building full 3D models from partial scans, creating statistical shape models, shape retrieval, and tracking. The problem is particularly challenging for non-rigid and articulated objects like human bodies. While the challenges of real-world data registration are not present in existing synthetic datasets, establishing ground-truth correspondences for real 3D scans is difficult. We address this with a novel mesh registration technique that combines 3D shape and appearance information to produce high-quality alignments. We define a new dataset called FAUST that contains 300 scans of 10 people in a wide range of poses together with an evaluation methodology. To achieve accurate registration, we paint the subjects with high-frequency textures and use an extensive validation process to ensure accurate ground truth. We find that current shape registration methods have trouble with this real-world data. The dataset and evaluation website are available for research purposes at http://faust.is.tue.mpg.de. --- paper_title: Numerical Geometry Of Non-Rigid Shapes paper_content: Deformable objects are ubiquitous in the world surrounding us, on all levels from micro to macro. The need to study such shapes and model their behavior arises in a wide spectrum of applications, ranging from medicine to security. In recent years, non-rigid shapes have attracted growing interest, which has led to rapid development of the field, where state-of-the-art results from very different sciences - theoretical and numerical geometry, optimization, linear algebra, graph theory, machine learning and computer graphics, to mention several - are applied to find solutions. This book gives an overview of the current state of science in analysis and synthesis of non-rigid shapes. Everyday examples are used to explain concepts and to illustrate different techniques. The presentation unfolds systematically and numerous figures enrich the engaging exposition. Practice problems follow at the end of each chapter, with detailed solutions to selected problems in the appendix. A gallery of colored images enhances the text. This book will be of interest to graduate students, researchers and professionals in different fields of mathematics, computer science and engineering. It may be used for courses in computer vision, numerical geometry and geometric modeling and computer graphics or for self-study. --- paper_title: SceneNet: An annotated model generator for indoor scene understanding paper_content: We introduce SceneNet, a framework for generating high-quality annotated 3D scenes to aid indoor scene understanding. SceneNet leverages manually-annotated datasets of real world scenes such as NYUv2 to learn statistics about object co-occurrences and their spatial relationships. Using a hierarchical simulated annealing optimisation, these statistics are exploited to generate a potentially unlimited number of new annotated scenes, by sampling objects from various existing databases of 3D objects such as ModelNet, and textures such as OpenSurfaces and ArchiveTextures. Depending on the task, SceneNet can be used directly in the form of annotated 3D models for supervised training and 3D reconstruction benchmarking, or in the form of rendered annotated sequences of RGB-D frames or videos. 
--- paper_title: A high-resolution 3D dynamic facial expression database paper_content: Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks. --- paper_title: Semantic Scene Completion from a Single Depth Image paper_content: This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. 
Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created large-scale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code are available at http://sscnet.cs.princeton.edu. --- paper_title: BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database paper_content: Facial expression is central to human experience. Its efficiency and valid measurement are challenges that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka "spontaneous") facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. --- paper_title: GIFT: A Real-Time and Scalable 3D Shape Search Engine paper_content: Projective analysis is an important solution for 3D shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different viewpoints. Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, thus cannot satisfy the basic requirement of scalability for search engines. In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: (1) efficient projection and view feature extraction using GPU acceleration, (2) the first inverted file, referred to as F-IF, is utilized to speed up the procedure of multi-view matching, (3) the second inverted file (S-IF), which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and Inverted File (Twice), as GIFT. Besides its high efficiency, GIFT also outperforms the state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks and competitions. --- paper_title: FaceNet: A Unified Embedding for Face Recognition and Clustering paper_content: Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. --- paper_title: Volumetric and Multi-view CNNs for Object Classification on 3D Data paper_content: 3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem.
Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multi-resolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data. --- paper_title: OctNet: Learning Deep 3D Representations at High Resolutions paper_content: We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling. --- paper_title: Multi-view Convolutional Neural Networks for 3D Shape Recognition paper_content: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives. --- paper_title: Multi-view Harmonized Bilinear Network for 3D Object Recognition paper_content: View-based methods have achieved considerable success in 3D object recognition tasks. Different from existing view-based methods pooling the view-wise features, we tackle this problem from the perspective of patches-to-patches similarity measurement. 
By exploiting the relationship between polynomial kernel and bilinear pooling, we obtain an effective 3D object representation by aggregating local convolutional features through bilinear pooling. Meanwhile, we harmonize different components inherited in the bilinear feature to obtain a more discriminative representation. To achieve an end-to-end trainable framework, we incorporate the harmonized bilinear pooling as a layer of a network, constituting the proposed Multi-view Harmonized Bilinear Network (MHBN). Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed methods in 3D object recognition. --- paper_title: PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation paper_content: Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption. --- paper_title: Generative and Discriminative Voxel Modeling with Convolutional Neural Networks paper_content: When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification. --- paper_title: Dominant Set Clustering and Pooling for Multi-View 3D Object Recognition paper_content: View based strategies for 3D object recognition have proven to be very successful. The state-of-the-art methods now achieve over 90% correct category level recognition performance on appearance images. We improve upon these methods by introducing a view clustering and pooling layer based on dominant sets. The key idea is to pool information from views which are similar and thus belong to the same cluster. The pooled feature vectors are then fed as inputs to the same layer, in a recurrent fashion. This recurrent clustering and pooling module, when inserted in an off-the-shelf pretrained CNN, boosts performance for multi-view 3D object recognition, achieving a new state of the art test set recognition accuracy of 93.8% on the ModelNet 40 database. 
We also explore a fast approximate learning strategy for our cluster-pooling CNN, which, while sacrificing end-to-end learning, greatly improves its training efficiency with only a slight reduction of recognition accuracy to 93.3%. Our implementation is available at this https URL. --- paper_title: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper_content: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. --- paper_title: A Discriminative Feature Learning Approach for Deep Face Recognition paper_content: Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks. --- paper_title: Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models paper_content: We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. 
The new architecture performs multiplicative transformations and share parameters of these transformations according to the subdivisions of the point clouds imposed onto them by Kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behaviour. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation. --- paper_title: 3D ShapeNets: A deep representation for volumetric shapes paper_content: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks. --- paper_title: GVCNN: Group-View Convolutional Neural Networks for 3D Shape Recognition paper_content: 3D shape recognition has attracted much attention recently. Its recent advances advocate the usage of deep features and achieve the state-of-the-art performance. However, existing deep features for 3D shape recognition are restricted to a view-to-shape setting, which learns the shape descriptor from the view-level feature directly. Despite the exciting progress on view-based 3D shape description, the intrinsic hierarchical correlation and discriminability among views have not been well exploited, which is important for 3D shape representation. To tackle this issue, in this paper, we propose a group-view convolutional neural network (GVCNN) framework for hierarchical correlation modeling towards discriminative 3D shape description. The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level and the shape level, which are organized using a grouping strategy. Concretely, we first use an expanded CNN to extract a view level descriptor. Then, a grouping module is introduced to estimate the content discrimination of each view, based on which all views can be splitted into different groups according to their discriminative level. A group level description can be further generated by pooling from view descriptors. Finally, all group level descriptors are combined into the shape level descriptor according to their discriminative weights. 
Experimental results and comparison with state-of-the-art methods show that our proposed GVCNN method can achieve a significant performance gain on both the 3D shape classification and retrieval tasks. --- paper_title: 3D shape recognition and retrieval based on multi-modality deep learning paper_content: Abstract For 3D shape analysis, an effective and efficient feature is the key to popularize its applications in 3D domain where the major challenge lies in designing an effective high-level feature. The three-dimensional shape contains various useful information including visual information, geometric relationships, and other type properties. Thus the strategy of exploring these characteristics is the core of extracting effective 3D shape features. In this paper, we propose a novel 3D feature learning framework which combines different modality data effectively to promote the discriminability of uni-modal feature by using deep learning. The geometric information and visual information are extracted by Convolutional Neural Networks (CNNs) and Convolutional Deep Belief Networks (CDBNs), respectively, and then two independent Deep Belief Networks (DBNs) are employed to learn high-level features from geometric and visual features. Finally, a Restricted Boltzmann Machine (RBM) is trained for mining the deep correlations between different modalities. Extensive experiments demonstrate that the proposed framework achieves better performance. --- paper_title: Triplet-Center Loss for Multi-view 3D Object Retrieval paper_content: Most existing 3D object recognition algorithms focus on leveraging the strong discriminative power of deep learning models with softmax loss for the classification of 3D data, while learning discriminative features with deep metric learning for 3D object retrieval is more or less neglected. In the paper, we study variants of deep metric learning losses for 3D object retrieval, which did not receive enough attention from this area. First , two kinds of representative losses, triplet loss and center loss, are introduced which could learn more discriminative features than traditional classification loss. Then, we propose a novel loss named triplet-center loss, which can further enhance the discriminative power of the features. The proposed triplet-center loss learns a center for each class and requires that the distances between samples and centers from the same class are closer than those from different classes. Extensive experimental results on two popular 3D object retrieval benchmarks and two widely-adopted sketch-based 3D shape retrieval benchmarks consistently demonstrate the effectiveness of our proposed loss, and significant improvements have been achieved compared with the state-of-the-arts. --- paper_title: FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis paper_content: Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. 
The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors. --- paper_title: SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels paper_content: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or pars state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. --- paper_title: 3DBodyTex: Textured 3D Body Dataset paper_content: In this paper, a dataset, named 3DBodyTex, of static 3D body scans with high-quality texture information is presented along with a fully automatic method for body model fitting to a 3D scan. 3D shape modelling is a fundamental area of computer vision that has a wide range of applications in the industry. It is becoming even more important as 3D sensing technologies are entering consumer devices such as smartphones. As the main output of these sensors is the 3D shape, many methods rely on this information alone. The 3D shape information is, however, very high dimensional and leads to models that must handle many degrees of freedom from limited information. Coupling texture and 3D shape alleviates this burden, as the texture of 3D objects is complementary to their shape. Unfortunately, high-quality texture content is lacking from commonly available datasets, and in particular in datasets of 3D body scans. The proposed 3DBodyTex dataset aims to fill this gap with hundreds of high-quality 3D body scans with high-resolution texture. Moreover, a novel fully automatic pipeline to fit a body model to a 3D scan is proposed. It includes a robust 3D landmark estimator that takes advantage of the high-resolution texture of 3DBodyTex. The pipeline is applied to the scans, and the results are reported and discussed, showcasing the diversity of the features in the dataset. --- paper_title: Monte Carlo Convolution for Learning on Non-Uniformly Sampled Point Clouds paper_content: Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. 
Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem, third, using this notion to combine information from multiple samplings at different levels; and fourth using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN. --- paper_title: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper_content: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. --- paper_title: FAUST: Dataset and Evaluation for 3D Mesh Registration paper_content: New scanning technologies are increasing the importance of 3D mesh data and the need for algorithms that can reliably align it. Surface registration is important for building full 3D models from partial scans, creating statistical shape models, shape retrieval, and tracking. The problem is particularly challenging for non-rigid and articulated objects like human bodies. While the challenges of real-world data registration are not present in existing synthetic datasets, establishing ground-truth correspondences for real 3D scans is difficult. 
We address this with a novel mesh registration technique that combines 3D shape and appearance information to produce high-quality alignments. We define a new dataset called FAUST that contains 300 scans of 10 people in a wide range of poses together with an evaluation methodology. To achieve accurate registration, we paint the subjects with high-frequency textures and use an extensive validation process to ensure accurate ground truth. We find that current shape registration methods have trouble with this real-world data. The dataset and evaluation website are available for research purposes at http://faust.is.tue.mpg.de. --- paper_title: SMPL: a skinned multi-person linear model paper_content: We present a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. We quantitatively evaluate variants of SMPL using linear or dual-quaternion blend skinning and show that both are more accurate than a Blend-SCAPE model trained on the same data. We also extend SMPL to realistically model dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes. --- paper_title: FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis paper_content: Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors. --- paper_title: SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels paper_content: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. 
As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or pars state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. ---
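The point-based references above (PointNet and PointNet++) hinge on one core mechanism: a shared per-point function whose outputs are aggregated by a symmetric operator such as max pooling, so that the learned descriptor is invariant to the ordering of the input points. The NumPy sketch below illustrates only this permutation-invariance idea; it is not the authors' implementation, and the layer sizes, random weights and function names are illustrative assumptions.

```python
import numpy as np

def shared_mlp(points, weights, biases):
    """Apply the same small MLP to every point independently.

    points:  (N, 3) array of xyz coordinates.
    weights: list of weight matrices, e.g. shapes [(3, 64), (64, 128)].
    biases:  list of bias vectors matching the weights.
    Returns an (N, F) array of per-point features.
    """
    h = points
    for W, b in zip(weights, biases):
        h = np.maximum(h @ W + b, 0.0)  # ReLU
    return h

def global_feature(points, weights, biases):
    """Permutation-invariant global descriptor: max-pool over the points."""
    per_point = shared_mlp(points, weights, biases)
    return per_point.max(axis=0)  # unchanged if the points are shuffled

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sizes = [(3, 64), (64, 128)]
    weights = [rng.normal(scale=0.1, size=s) for s in sizes]
    biases = [np.zeros(s[1]) for s in sizes]
    cloud = rng.normal(size=(1024, 3))            # toy point cloud
    g1 = global_feature(cloud, weights, biases)
    g2 = global_feature(cloud[rng.permutation(1024)], weights, biases)
    print(np.allclose(g1, g2))                    # True: point order does not matter
```

In a full model, a classifier head would be trained on top of the pooled descriptor, and a hierarchical variant in the spirit of PointNet++ would apply the same building block repeatedly over nested neighbourhoods.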
Title: A Survey on Deep Learning Advances on Different 3D Data Representations
Section 1: Overview of 3D Data Representations
Description 1: This section discusses various 3D data representations, categorizing them into Euclidean-structured data and non-Euclidean data, and highlights their structural differences.
Section 2: Deep Learning Architectures on Different 3D Data Representations
Description 2: This section provides an overview of different DL paradigms applied to 3D data, classifying them into Euclidean and non-Euclidean representations and discussing the main differences, strengths, and weaknesses of each model.
Section 3: Deep Learning Architectures on 3D Euclidean-structured Data
Description 3: This section delves into DL models applied to 3D Euclidean-structured data, including 3D data descriptors, projections, RGB-D data, volumetric data, and multi-view data, highlighting the features learned by each paradigm.
Section 4: Deep Learning Architectures on 3D Non-Euclidean Structured Data
Description 4: This section discusses DL models applied to non-Euclidean structured data, covering point clouds and graphs/meshes, and the challenges in applying DL techniques to these representations.
Section 5: Analysis and Discussions
Description 5: This section discusses the main 3D datasets, their exploitation in various 3D computer vision tasks, and presents DL advances in 3D recognition/classification, retrieval, and correspondence tasks.
Section 6: Conclusion
Description 6: This section concludes the paper by summarizing the ongoing challenges and the need for further investigation to improve the robustness and generalization of DL models on 3D data.
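The center-loss and triplet-center-loss references included in the list above describe their training objectives only in words. For readers who prefer the formulas, the commonly used formulations that those descriptions correspond to can be written as below; the notation (x_i for the deep feature of sample i, y_i for its label, c_j for the learned center of class j, m for the margin, D for a distance such as the squared Euclidean distance) is ours and should be checked against the original papers.

```latex
% Joint supervision with the center loss: softmax term plus a center term.
L \;=\; L_{\mathrm{softmax}} \;+\; \frac{\lambda}{2}\sum_{i=1}^{N}\left\lVert x_i - c_{y_i}\right\rVert_2^{2}

% Triplet-center loss: each feature must be closer to its own class center
% than to the nearest other center by at least a margin m.
L_{\mathrm{tc}} \;=\; \sum_{i=1}^{N}\max\!\Big( D(x_i, c_{y_i}) + m - \min_{j \neq y_i} D(x_i, c_j),\; 0 \Big)
```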
Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results
8
--- paper_title: Use Cases for IPv6 Source Packet Routing in Networking (SPRING) paper_content: The Source Packet Routing in Networking (SPRING) architecture describes how Segment Routing can be used to steer packets through an IPv6 or MPLS network using the source routing paradigm. This document illustrates some use cases for Segment Routing in an IPv6-only environment. --- paper_title: Source Packet Routing in Networking (SPRING) Problem Statement and Requirements paper_content: The ability for a node to specify a forwarding path, other than the normal shortest path, that a particular packet will traverse, benefits a number of network functions. Source-based routing mechanisms have previously been specified for network protocols but have not seen widespread adoption. In this context, the term "source" means "the point at which the explicit route is imposed"; therefore, it is not limited to the originator of the packet (i.e., the node imposing the explicit route may be the ingress node of an operator's network). This document outlines various use cases, with their requirements, that need to be taken into account by the Source Packet Routing in Networking (SPRING) architecture for unicast traffic. Multicast use cases and requirements are out of scope for this document. --- paper_title: IPv6 Segment Routing Header (SRH) paper_content: Segment Routing can be applied to the IPv6 data plane using a new type of Routing Extension Header called the Segment Routing Header. This document describes the Segment Routing Header and how it is used by Segment Routing capable nodes. --- paper_title: Segment Routing in Software Defined Networks: A Survey paper_content: Segment routing (SR) has emerged as a promising source-routing methodology to overcome the challenges in the current routing schemes. It has received noticeable attention both in industry and academia, due to its flexibility, scalability, and applicability, especially in software defined networks. The emerging cloud services require strict service level agreements such as packet loss, delay, and jitter. Studies have shown that traditional network architectures lack the essential flexibility and scalability to offer these services. To combat this, a more flexible and agile routing paradigm of SR enables a source node to steer an incoming packet along a performance engineered path represented as an ordered list of instructions called segment list. This is encoded as a multiprotocol label switching label stack or an IPv6 address list in the packet header. This paper provides a comprehensive review of the novel SR technology by describing its architecture, operations, and key applications to date. SR paradigm can be effectively applied to a wide range of network applications, such as traffic engineering, network resiliency, network monitoring, and service function chaining, to achieve efficient network solutions. Furthermore, this paper identifies an interesting set of future research directions and open issues that can help realize the full potential of the emergent SR paradigm. --- paper_title: Resiliency Use Cases in Source Packet Routing in Networking (SPRING) Networks paper_content: This document identifies and describes the requirements for a set of use cases related to Segment Routing network resiliency on Source Packet Routing in Networking (SPRING) networks.
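The IPv6 Segment Routing Header (SRH) entry above describes the new routing extension header only at a high level. As a concrete illustration, the Python sketch below packs the fixed part of an SRH followed by a segment list, using the field layout commonly given for the SRH (next header, header extension length, routing type 4, segments left, last entry, flags, tag, then 128-bit segments); the helper name and the example addresses are ours, and the snippet is not tied to any particular implementation.

```python
import socket
import struct

def build_srh(segments, tag=0, flags=0, next_header=41):
    """Pack an IPv6 Segment Routing Header (routing type 4).

    segments: IPv6 addresses in the order in which they should be visited;
    the SRH stores them in reverse order (Segment List[0] is the final one).
    The outer IPv6 header, whose destination address carries the active
    segment, is not built here.
    """
    seg_bytes = [socket.inet_pton(socket.AF_INET6, s) for s in reversed(segments)]
    hdr_ext_len = 2 * len(seg_bytes)       # 8-octet units after the first 8 bytes
    segments_left = len(seg_bytes) - 1     # index of the next segment to visit
    last_entry = len(seg_bytes) - 1
    fixed = struct.pack(
        "!BBBBBBH",
        next_header,   # protocol after the SRH (41 = IPv6 payload, illustrative)
        hdr_ext_len,
        4,             # Routing Type 4 = Segment Routing Header
        segments_left,
        last_entry,
        flags,
        tag,
    )
    return fixed + b"".join(seg_bytes)

# Example: steer a packet through two intermediate segment endpoints.
srh = build_srh(["2001:db8::1", "2001:db8::2", "2001:db8::100"])
print(len(srh))  # 8 + 3 * 16 = 56 bytes
```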
--- paper_title: Segment Routing in Software Defined Networks: A Survey paper_content: Segment routing (SR) has emerged as a promising source-routing methodology to overcome the challenges in the current routing schemes. It has received noticeable attention both in industry and academia, due to its flexibility, scalability, and applicability, especially in software defined networks. The emerging cloud services require strict service level agreements such as packet loss, delay, and jitter. Studies have shown that traditional network architectures lack the essential flexibility and scalability to offer these services. To combat this, a more flexible and agile routing paradigm of SR enables a source node to steer an incoming packet along a performance engineered path represented as an ordered list of instructions called segment list. This is encoded as a multiprotocol label switching label stack or an IPv6 address list in the packet header. This paper provides a comprehensive review of the novel SR technology by describing its architecture, operations, and key applications to date. SR paradigm can be effectively applied to a wide range of network applications, such as traffic engineering, network resiliency, network monitoring, and service function chaining, to achieve efficient network solutions. Furthermore, this paper identifies an interesting set of future research directions and open issues that can help realize the full potential of the emergent SR paradigm. --- paper_title: The Segment Routing Architecture paper_content: Network operators anticipate the offering of an increasing variety of cloud-based services with stringent Service Level Agreements. Technologies currently supporting IP networks however lack the flexibility and scalability properties to realize such evolution. In this article, we present Segment Routing (SR), a new network architecture aimed at filling this gap, driven by use-cases defined by network operators. SR implements the source routing and tunneling paradigms, letting nodes steer packets over paths using a sequence of instructions (segments) placed in the packet header. As such, SR allows the implementation of routing policies without per-flow entries at intermediate routers. This paper introduces the SR architecture, describes its related ongoing standardization efforts, and reviews the main use-cases envisioned by network operators. --- paper_title: Source Packet Routing in Networking (SPRING) Problem Statement and Requirements paper_content: The ability for a node to specify a forwarding path, other than the normal shortest path, that a particular packet will traverse, benefits a number of network functions. Source-based routing mechanisms have previously been specified for network protocols but have not seen widespread adoption. In this context, the term "source" means "the point at which the explicit route is imposed"; therefore, it is not limited to the originator of the packet (i.e., the node imposing the explicit route may be the ingress node of an operator's network). This document outlines various use cases, with their requirements, that need to be taken into account by the Source Packet Routing in Networking (SPRING) architecture for unicast traffic. Multicast use cases and requirements are out of scope for this document.
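Both the survey entry and the architecture entry above stress that, on an MPLS data plane, the segment list becomes a label stack pushed by the ingress node, so that transit routers keep no per-flow state. The toy Python model below illustrates just that push-and-pop behaviour; the node names, label values (taken from the commonly used 16000+ SRGB range) and helper functions are invented for the example, and the IGP shortest-path forwarding between segment endpoints is abstracted away.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: str
    label_stack: list = field(default_factory=list)   # element 0 = top of stack

def ingress_push(packet, segment_labels):
    """Ingress node: encode the whole path as a label stack (segment list)."""
    packet.label_stack = list(segment_labels)
    return packet

def forward(packet, label_to_node):
    """Walk the segment list: each segment endpoint pops its own label.

    label_to_node maps a node-SID label to the router that owns it; between
    two segment endpoints the packet would follow the IGP shortest path,
    which is not modelled here.
    """
    hops = []
    while packet.label_stack:
        top = packet.label_stack.pop(0)    # active segment
        hops.append(label_to_node[top])    # reach the segment endpoint
    return hops

# Invented mapping: node-SID labels 16001..16003 identify routers R1..R3.
label_to_node = {16001: "R1", 16002: "R2", 16003: "R3"}
pkt = ingress_push(Packet("data"), [16002, 16003])   # steer via R2, then R3
print(forward(pkt, label_to_node))                   # ['R2', 'R3']
```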
--- paper_title: Resiliency Use Cases in Source Packet Routing in Networking (SPRING) Networks paper_content: This document identifies and describes the requirements for a set of use cases related to Segment Routing network resiliency on Source Packet Routing in Networking (SPRING) networks. --- paper_title: IPv6 Segment Routing Header (SRH) paper_content: Segment Routing can be applied to the IPv6 data plane using a new type of Routing Extension Header called the Segment Routing Header. This document describes the Segment Routing Header and how it is used by Segment Routing capable nodes. --- paper_title: Use Cases for IPv6 Source Packet Routing in Networking (SPRING) paper_content: The Source Packet Routing in Networking (SPRING) architecture describes how Segment Routing can be used to steer packets through an IPv6 or MPLS network using the source routing paradigm. This document illustrates some use cases for Segment Routing in an IPv6-only environment. --- paper_title: Source Packet Routing in Networking (SPRING) Problem Statement and Requirements paper_content: The ability for a node to specify a forwarding path, other than the normal shortest path, that a particular packet will traverse, benefits a number of network functions. Source-based routing mechanisms have previously been specified for network protocols but have not seen widespread adoption. In this context, the term "source" means "the point at which the explicit route is imposed"; therefore, it is not limited to the originator of the packet (i.e., the node imposing the explicit route may be the ingress node of an operator's network). This document outlines various use cases, with their requirements, that need to be taken into account by the Source Packet Routing in Networking (SPRING) architecture for unicast traffic. Multicast use cases and requirements are out of scope for this document. --- paper_title: Resiliency Use Cases in Source Packet Routing in Networking (SPRING) Networks paper_content: This document identifies and describes the requirements for a set of use cases related to Segment Routing network resiliency on Source Packet Routing in Networking (SPRING) networks. --- paper_title: Routing Perturbation for Traffic Matrix Evaluation in a Segment Routing Network paper_content: Traffic matrix (TM) assessment is a key issue for optimizing network management costs and quality of service. This paper presents a method to measure the intensity of ingress–egress traffic flows on an Internet service provider's network that overcomes the limits of the classical measurement-based approaches. The proposed algorithm, called segment routing perturbation traffic (SERPENT), uses a routing perturbation approach enabled by the segment routing paradigm: the paths of a subset of flows are changed so that their intensities can be determined by measuring the variation of the load of the network links. The TM is measured in successive steps, called snapshots, in which sets of flows are progressively re-routed and measured, under a maximum link utilization constraint. We state an integer linear programming (ILP) optimization problem to determine the flows to be rerouted in one snapshot. SERPENT is a heuristic offering an efficient solution to the stated ILP. Results show that SERPENT assesses the intensity of more than 80% of flows even when the network is highly stressed, while reducing the configuration cost with respect to classical approaches.
Moreover, when used in conjunction with an estimation algorithm, SERPENT allows a reduction of the estimation error by more than 50% with fewer than 5 snapshots. --- paper_title: A Heuristic Approach to Assess the Traffic Matrix of an ISP Exploiting Segment Routing Flexibility paper_content: The Ingress Egress Traffic Matrix (IE TM) assessment is a fundamental step of network management for an ISP network, since it represents the key input parameter used by any Traffic Engineering solution to optimize the resource utilization and to improve the Quality of Service. The actual TM assessment procedures are based on estimation algorithms or measurement based approaches. This paper presents a method to measure the intensity of traffic flows that overcomes the limits of the classical measurement/estimation based approaches. The idea is to exploit the flexibility of the Segment Routing paradigm to implement controlled routing changes so as to measure the intensity of a subset of network flows. The main contribution of the work is to show the feasibility of the proposed approach by means of a low complexity heuristic, referred to as Path Cost Bases (PaCoB), able to identify the list of routing changes that allow improving the TM assessment procedure. The heuristic is composed of successive steps, referred to as snapshots: in each snapshot the routing of a set of flows is changed so as to assess their intensities. The performance evaluation shows that PaCoB assesses the intensity of more than 90% of flows. Moreover, when used in conjunction with an estimation algorithm, PaCoB allows the estimation error to be reduced by more than 50% while performing only 10 snapshots. --- paper_title: Bandwidth-efficient network monitoring algorithms based on segment routing paper_content: We study bandwidth-efficient network monitoring algorithms using segment routing on topologies that can be modelled as a symmetric directed graph. Aiming at directly minimizing the cycle cover length, four new two-phase algorithms, A, B, C and D, are proposed. Algorithm A is a simple extension of an existing algorithm. Algorithm B utilizes the notion of symmetric cycles. Algorithm C enhances B by adopting a new path metric. Algorithm D further improves C with a parallel and sequential cycle construction process. Since two-phase algorithms require a dedicated monitoring topology, the four two-phase algorithms are then enhanced to function without a monitoring topology. Although the cycle cover length found is slightly increased, we argue that the cost saving in removing the monitoring topology is far more significant. --- paper_title: Interface Counters in Segment Routing v6: a powerful instrument for Traffic Matrix Assessment paper_content: In this work we investigate the use of Segment Routing version 6 (SRv6) interface counters to improve the Traffic Matrix (TM) assessment of an ISP network. SRv6 is a source routing solution for IPv6 networks: it allows defining a network path as a sequence of routers to be crossed, represented by Segment Identifiers (SIDs), i.e. a Segment List (SL). SRv6 provides a new set of interface counters able to measure the packets/bytes of Ingress/Egress flows exploiting the forwarding operations performed on the Segment Identifiers (SIDs) reported in the SLs. In this work, we formally define the contribution of SR counters in the TM computation procedure, by integrating them in the classical TM Assessment problem.
The main outcome of the performance analysis are: i) SR counters can greatly reduce the estimation error and, ii) there is a correlation between the TM assessment improvement and the SLs structure. --- paper_title: SCMon: Leveraging segment routing to improve network monitoring paper_content: To guarantee correct operation of their networks, operators have to promptly detect and diagnose data-plane issues, like broken interface cards or link failures. Networks are becoming more complex, with a growing number of Equal Cost MultiPath (ECMP) and link bundles. Hence, some data-plane problems (e.g. silent packet dropping at one router) can hardly be detected with control-plane protocols or simple monitoring tools like ping or traceroute. In this paper, we propose a new technique, called SCMon, that enables continuous monitoring of the data-plane, in order to track the health of all routers and links. SCMon leverages the recently proposed Segment Routing (SR) architecture to monitor the entire network with a single box (and no additional monitoring protocol). In particular, SCMon uses SR to (i) force monitoring probes to travel over cycles; and (ii) test parallel links and bundles at a per-link granularity. We present original algorithms to compute cycles that cover all network links with a limited number of SR segments. Further, we prototype and evaluate SCMon both with simulations and Linux-based emulations. Our experiments show that SCMon quickly detects and precisely pinpoints data-plane problems, with a limited overhead. --- paper_title: Translating Traffic Engineering outcome into Segment Routing paths: The Encoding problem paper_content: Traffic Engineering (TE) algorithms aims at determining the packet routing paths in order to satisfy specific QoS requirements. These paths are normally established through control procedures e.g., exchange of RSVP messages in MPLS networks or links weights modification in pure IP networks. An increase of control traffic or long convergence time intervals, respectively, are the drawbacks of these solutions. Segment Routing (SR) is a new network paradigm able to implement TE routing strategies over legacy IP/MPLS networks with no need of dedicated signaling procedures. This result is obtained by inserting in each packet header an ordered list of instructions, called Segments List, that indicates the path to be crossed. This paper provides the formulation of the Segment List Encoding problem i.e., the detection of the proper Segment Lists to obtain TE network paths minimizing the Segment Lists sizes. The SL encoding procedure is composed of two steps: i) the creation of an auxiliary graph representing the forwarding paths between the couple of source and destination nodes; ii) the solution of a Multi-commodity Flow (MCF) problem over the auxiliary graph. The performance evaluation shows that properly performing SL encoding allows to implement TE outcome with a reduced reconfiguration cost with respect to E2E tunneling and Hop-by-Hop solutions; moreover a significant advantage in terms of packets overhead is obtained. --- paper_title: On Traffic Engineering with Segment Routing in SDN based WANs paper_content: Segment routing is an emerging technology to simplify traffic engineering implementation in WANs. It expresses an end-to-end logical path as a sequence of segments, each of which is represented by a middlepoint. In this paper, we arguably conduct the first systematic study of traffic engineering with segment routing in SDN based WANs. 
We first provide a theoretical characterization of the problem. We show that for general segment routing, where flows can take any path that goes through a middlepoint, the resulting traffic engineering is NP-hard. We then consider segment routing with shortest paths only, and prove that the traffic engineering problem can now be solved in (weakly) polynomial time when the number of middlepoints per path is fixed and not part of the input. Our results thus explain, for the first time, the underlying reason why existing work only focuses on segment routing with shortest paths. In the second part of the paper, we study practical traffic engineering using shortest path based segment routing. We note that existing methods work by taking each node as a candidate middlepoint. This requires solving a large-scale linear program which is prohibitively slow. We thus propose to select just a few important nodes as middlepoints for all traffic. We use node centrality concepts from graph theory, notably group shortest path centrality, for middlepoint selection. Our performance evaluation using realistic topologies and traffic traces shows that a small percentage of the most central nodes can achieve good results with orders of magnitude lower runtime. --- paper_title: Novel SDN architecture for smart MPLS Traffic Engineering-DiffServ Aware management paper_content: Abstract Large scale networks are still a major challenge for management, guarantee a good level of Quality of Service (QoS) and especially optimization (rational use of network resources). Multi-Protocol Label Switching (MPLS), mainly used in the backbone of Internet service providers, must meet these three major challenges. Software-Defined Network (SDN), is a paradigm that allows, through the principle of orchestration and layers abstraction, to manage large scale networks through specific protocols. This paper presents a new SDN-based architecture for managing an MPLS Traffic Engineering Diffserv Aware (DS-TE) network. The architecture manages the QoS and routing with QoS constraints, following a new smart and dynamic model of allocation of the bandwidth (Smart Alloc). The proposed architecture is suitable for SDN equipment and especially the legacy equipment. We tested our architecture by simulation on a hybrid network made up of SDN equipment and another legacy. The results of the simulation showed that thanks to our architecture we can not only efficiently manage hybrid architectures but also achieve good QoS levels for convergent traffic. The performance evaluation was performed on VoIP, video, HTTP, and ICMP traffic increasing packet load. --- paper_title: Semi-Oblivious Segment Routing with Bounded Traffic Fluctuations paper_content: Segment Routing (SR) is a recently proposed protocol by IETF to realize scalability and flexibility properties for internet traffic engineering. This protocol is an integration of source routing and tunneling paradigms. The paths are defined in terms of segments and each segment is specified by a label. In this paper we consider the problem of offline segment routing when traffic matrix is not fully known. Because of wide variation in traffic matrix over time, it is difficult to estimate traffic matrix accurately. So, we have developed a semi-oblivious segment routing algorithm that takes bounded traffic fluctuations based on an initial estimated traffic matrix. This makes it robust to traffic fluctuations and yet substantially improves the routing performance compared to oblivious routing techniques. 
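Several of the traffic-engineering entries above, in particular the SDN-WAN study and the semi-oblivious formulation, route each demand over a two-segment path: the shortest path from the source to a chosen middlepoint, then the shortest path to the destination, with a typical objective such as keeping the maximum link utilization low. The sketch below is a deliberately naive greedy illustration of that idea built on networkx shortest paths; it is not the algorithm of any cited paper, and the topology, capacities, demands and middlepoint set are invented.

```python
import networkx as nx

def two_segment_route(G, demands, capacities, middlepoints):
    """Greedy 2-segment routing: for each demand, pick the middlepoint whose
    shortest-path detour yields the smallest maximum link utilization."""
    load = {e: 0.0 for e in G.edges()}

    def utilization(extra_edges, volume):
        trial = dict(load)
        for e in extra_edges:
            trial[e] += volume
        return max(trial[e] / capacities[e] for e in trial)

    assignment = {}
    for (src, dst), volume in demands.items():
        best = None
        for m in middlepoints:
            # shortest path src -> m, then m -> dst (the two segments)
            path = nx.shortest_path(G, src, m) + nx.shortest_path(G, m, dst)[1:]
            edges = list(zip(path, path[1:]))
            u = utilization(edges, volume)
            if best is None or u < best[0]:
                best = (u, m, edges)
        _, midpoint, edges = best
        assignment[(src, dst)] = midpoint
        for e in edges:
            load[e] += volume
    return assignment, max(load[e] / capacities[e] for e in load)

# Invented 4-node example with two demands and two candidate middlepoints.
G = nx.DiGraph()
links = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"),
         ("a", "d"), ("d", "a"), ("d", "c"), ("c", "d")]
G.add_edges_from(links)
capacities = {e: 10.0 for e in links}
demands = {("a", "c"): 6.0, ("b", "d"): 4.0}
assignment, max_util = two_segment_route(G, demands, capacities, middlepoints=["b", "d"])
print(assignment, round(max_util, 2))
```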
--- paper_title: SDN-Based Data Center Networking With Collaboration of Multipath TCP and Segment Routing paper_content: Large-scale data centers are major infrastructures in the big data era. Therefore, a stable and optimized architecture is required for data center networks (DCNs) to provide services to the applications. Many studies use a software-defined network (SDN)-based multipath TCP (MPTCP) implementation to utilize the entire DCN's performance and achieve good results. However, the deployment cost is high. In SDN-based MPTCP solutions, the flow allocation mechanism leads to a large number of forwarding rules, which may lead to storage consumption. Considering the advantages and limitations of the SDN-based MPTCP solution, we aim to reduce the deployment cost due to the use of an extremely expensive storage resource—ternary content addressable memory (TCAM). We combine MPTCP and segment routing (SR) for traffic management to limit the storage requirements. To the best of our knowledge, we are among the first to use the collaboration of MPTCP and SR in multi-rooted DCN topologies. To explain how MPTCP and SR work together, we use a four-layer DCN architecture for better description, which contains physical topology, SR over the topology, multiple path selection supplied by MPTCP, and traffic scheduling on the selected paths. Finally, we implement the proposed design in a simulated SDN-based DCN environment. The simulation results reveal the great benefits of such a collaborative approach. --- paper_title: Demonstration of Segment Routing with SDN based label stack optimization paper_content: Segment Routing (SR) architecture is a promising technology. It is being standardized in collaboration between vendors and service providers. Through its simplistic control plane and the reuse of existing data planes, namely MPLS and IPv6, SR helps operators to reduce the Operation Expense (OpEx) and the Capital Expense (CapEx). In the instantiation of SR over the MPLS data plane (SR-MPLS), an SR path gets encoded as a label stack that the ingress nodes push onto the client packet. However, the longer the path in terms of traversed nodes, the bigger the stack gets. In this demonstration, we couple the capabilities of an SDN controller and a path encoding engine to reduce the size of the label stack used to express segment routing paths. --- paper_title: A Novel QoE-Centric SDN-Based Multipath Routing Approach for Multimedia Services over 5G Networks paper_content: The explosion of enhanced applications such as live video streaming, video gaming and Virtual Reality calls for efforts to optimize transport protocols to manage the increasing amount of data traffic on future 5G networks. Through bandwidth aggregation over multiple paths, the Multi-Path Transmission Control Protocol (MPTCP) can enhance the performance of network applications. MPTCP can split a large multimedia flow into subflows and apply a congestion control mechanism on each subflow. Segment Routing (SR), a promising source routing approach, has emerged to provide advanced packet forwarding over 5G networks. In this paper, we explore the utilization of MPTCP and SR in SDN-based networks to improve network resources utilization and end-user's QoE for delivering multimedia services over 5G networks. We propose a novel QoE-aware, SDN-based MPTCP/SR approach for service delivery.
In order to demonstrate the feasibility of our approach, we implemented an intelligent QoE-centric Multipath Routing Algorithm (QoMRA) on an SDN source routing platform using Mininet and the POX controller. We carried out experiments on Dynamic Adaptive video Streaming over HTTP (DASH) applications over various network conditions. The preliminary results show that our QoE-aware SDN-based MPTCP/SR scheme performs better than the conventional TCP approach in terms of throughput, link utilization and the end-user's QoE. --- paper_title: Expect the unexpected: Sub-second optimization for segment routing paper_content: In this paper, we study how to perform traffic engineering at an extremely small time scale with segment routing, addressing a critical need for modern wide area networks. Prior work has shown that segment routing enables better traffic engineering, thanks to its ability to program detours in forwarding paths at scale. Two main approaches have been explored for traffic engineering with segment routing, respectively based on integer linear programming and constraint programming. However, no previous work has deeply investigated how quickly those approaches can react to unexpected traffic changes and failures. We highlight limitations of existing algorithms, both in terms of required execution time and the number of path changes to be applied. Thus, we propose a new approach, based on local search and focused on the quick re-arrangement of (few) forwarding paths. We describe heuristics for sub-second recomputation of segment-routing paths that comply with requirements on the maximum link load (e.g., for congestion avoidance). Our heuristics enable a prompt answer to sudden criticalities affecting network services and business agreements. Through extensive simulations, we indeed experimentally show that our proposal significantly outperforms previous algorithms in the context of time-constrained optimization, supporting radical traffic changes in a few tens of milliseconds for realistic networks. --- paper_title: An efficient routing algorithm based on segment routing in software-defined networking paper_content: Software-defined networking (SDN) is an emerging architecture that offers advantages over traditional network architecture. Segment routing (SR) defines the path of information through the network via an ordered list of multi-protocol label switching (MPLS) mechanisms on the packet headers at the ingress device, and this system makes SDN routing management simpler and more efficient. SR can also solve some scalability issues in SDN. In this paper, we propose a routing algorithm for SDN with SR that can meet the bandwidth requirements of routing requests. Our algorithm considers the balance of traffic load and reduces the extra cost of packet header size in a network. Simulation results show that the performance of our algorithm is better than that of previously developed algorithms in terms of the average network throughput and the average rejection rate of routing requests. --- paper_title: A scalable and bandwidth-efficient multicast algorithm based on segment routing in software-defined networking paper_content: Software-Defined Networking (SDN) is an emerging architecture and offers advantages over traditional network architecture, while there are some scalability challenges. In this paper, we propose a multicast routing algorithm for SDN with segment routing to serve the bandwidth requirement of a multicast routing request.
Our algorithm considers the balance of traffic load for network resource of link bandwidth and node flow entries both. Simulation results show that the performance of our algorithm is better than previous works in terms of average network throughput and average rejection rate of routing requests. Besides, the results also show that our multicast architecture improve scalability problem of original SDN model in terms of number of flow entries used. --- paper_title: Traffic Engineering with Segment Routing: SDN-Based Architectural Design and Open Source Implementation paper_content: Traffic Engineering (TE) in IP carrier networks is one of the functions that can benefit from the Software Defined Networking paradigm. However traditional per-flow routing requires a direct interaction between the SDN controller and each node that is involved in the traffic paths. Segment Routing (SR) may simplify the route enforcement delegating all the configuration and per-flow state at the border of the network. In this work we propose an architecture that integrates the SDN paradigm with SR based TE, for which we have provided an open source reference implementation. We have designed and implemented a simple TE/SR heuristic for flow allocation and we show and discuss experimental results. --- paper_title: Incremental Deployment of Segment Routing Into an ISP Network: a Traffic Engineering Perspective paper_content: Segment routing (SR) is a new routing paradigm to provide traffic engineering (TE) capabilities in an IP network. The main feature of SR is that no signaling protocols are needed, since extensions of the interior gateway protocol routing protocols are used. Despite the benefit that SR brings, introducing a new technology into an operational network presents many difficulties. In particular, the network operators consider both capital expenditure and performance degradation as drawbacks for the deployment of the new technology; for this reason, an incremental approach is preferred. In this paper, we face the challenge of managing the transition between a pure IP network to a full SR one while optimizing the network performances. We focus our attention on a network scenario where: 1) only a subset of nodes are SR-capable and 2) the TE objective is the minimization of the maximum link utilization. For such a scenario, we propose an architectural solution, named SR domain (SRD), to guarantee the proper interworking between the IP routers and the SR nodes. We propose a mixed integer linear programming formulation to solve the SRD design problem, consisting in identifying the subset of SR nodes; moreover, a strategy to manage the routing inside the SRD is defined. The performance evaluation shows that the hybrid IP/SR network based on SRD offers TE opportunities comparable to the one of a full SR network. Finally, a heuristic method to identify nodes to be inserted in the set of nodes composing the SRD is discussed. --- paper_title: Optimizing segment routing using evolutionary computation paper_content: Abstract Segment Routing (SR) combines the simplicity of Link-State routing protocols with the flexibility of Multiprotocol Label Switching (MPLS). By decomposing forwarding paths into segments, identified by labels, SR improves Traffic Engineering (TE) and enables new solutions for the optimization of network resources utilization. 
This work proposes an Evolutionary Computation approach that enables Path Computation Element (PCE) or Software-defined Network (SDN) controllers to optimize label switching paths for congestion avoidance while using at the most three labels to configure each label switching path. --- paper_title: An Optimization Routing Algorithm Based on Segment Routing in Software-Defined Networks paper_content: Software-defined networks (SDNs) are improving the controllability and flexibility of networks as an innovative network architecture paradigm. Segment routing (SR) exploits an end-to-end logical path and is composed of a sequence of segments as an effective routing strategy. Each segment is represented by a middle point. The combination of SR and SDN can meet the differentiated business needs of users and can quickly deploy applications. In this paper, we propose two routing algorithms based on SR in SDN. The algorithms aim to save the cost of the path, alleviate the congestion of networks, and formulate the selection strategy by comprehensively evaluating the value of paths. The simulation results show that compared with existing algorithms, the two proposed algorithms can effectively reduce the consumption of paths and better balance the load of the network. Furthermore, the proposed algorithms take into account the preferences of users, actualize differentiated business networks, and achieve a larger comprehensive evaluation value of the path compared with other algorithms. --- paper_title: On Traffic Engineering with Segment Routing in SDN based WANs paper_content: Segment routing is an emerging technology to simplify traffic engineering implementation in WANs. It expresses an end-to-end logical path as a sequence of segments, each of which is represented by a middlepoint. In this paper, we arguably conduct the first systematic study of traffic engineering with segment routing in SDN based WANs. We first provide a theoretical characterization of the problem. We show that for general segment routing, where flows can take any path that goes through a middlepoint, the resulting traffic engineering is NP-hard. We then consider segment routing with shortest paths only, and prove that the traffic engineering problem can now be solved in (weakly) polynomial time when the number of middlepoints per path is fixed and not part of the input. Our results thus explain, for the first time, the underlying reason why existing work only focuses on segment routing with shortest paths. In the second part of the paper, we study practical traffic engineering using shortest path based segment routing. We note that existing methods work by taking each node as a candidate middlepoint. This requires solving a large-scale linear program which is prohibitively slow. We thus propose to select just a few important nodes as middlepoints for all traffic. We use node centrality concepts from graph theory, notably group shortest path centrality, for middlepoint selection. Our performance evaluation using realistic topologies and traffic traces shows that a small percentage of the most central nodes can achieve good results with orders of magnitude lower runtime. --- paper_title: Novel SDN architecture for smart MPLS Traffic Engineering-DiffServ Aware management paper_content: Abstract Large scale networks are still a major challenge for management, guarantee a good level of Quality of Service (QoS) and especially optimization (rational use of network resources). 
Multi-Protocol Label Switching (MPLS), mainly used in the backbone of Internet service providers, must meet these three major challenges. Software-Defined Network (SDN), is a paradigm that allows, through the principle of orchestration and layers abstraction, to manage large scale networks through specific protocols. This paper presents a new SDN-based architecture for managing an MPLS Traffic Engineering Diffserv Aware (DS-TE) network. The architecture manages the QoS and routing with QoS constraints, following a new smart and dynamic model of allocation of the bandwidth (Smart Alloc). The proposed architecture is suitable for SDN equipment and especially the legacy equipment. We tested our architecture by simulation on a hybrid network made up of SDN equipment and another legacy. The results of the simulation showed that thanks to our architecture we can not only efficiently manage hybrid architectures but also achieve good QoS levels for convergent traffic. The performance evaluation was performed on VoIP, video, HTTP, and ICMP traffic increasing packet load. --- paper_title: Semi-Oblivious Segment Routing with Bounded Traffic Fluctuations paper_content: Segment Routing (SR) is a recently proposed protocol by IETF to realize scalability and flexibility properties for internet traffic engineering. This protocol is an integration of source routing and tunneling paradigms. The paths are defined in terms of segments and each segment is specified by a label. In this paper we consider the problem of offline segment routing when traffic matrix is not fully known. Because of wide variation in traffic matrix over time, it is difficult to estimate traffic matrix accurately. So, we have developed a semi-oblivious segment routing algorithm that takes bounded traffic fluctuations based on an initial estimated traffic matrix. This makes it robust to traffic fluctuations and yet substantially improves the routing performance compared to oblivious routing techniques. --- paper_title: SDN-Based Data Center Networking With Collaboration of Multipath TCP and Segment Routing paper_content: Large-scale data centers are major infrastructures in the big data era. Therefore, a stable and optimized architecture is required for data center networks (DCNs) to provide services to the applications. Many studies use software-defined network (SDN)-based multipath TCP (MPTCP) implementation to utilize the entire DCN’s performance and achieve good results. However, the deployment cost is high. In SDN-based MPTCP solutions, the flow allocation mechanism leads to a large number of forwarding rules, which may lead to storage consumption. Considering the advantages and limitations of the SDN-based MPTCP solution, we aim to reduce the deployment cost due to the use of an extremely expensive storage resource—ternary content addressable memory (TCAM). We combine MPTCP and segment routing (SR) for traffic management to limit the storage requirements. And to the best of our knowledge, we are among the first to use the collaboration of MPTCP and SR in multi-rooted DCN topologies. To explain how MPTCP and SR work together, we use four-layer DCN architecture for better description, which contains physical topology, SR over the topology, multiple path selection supplied by MPTCP, and traffic scheduling on the selected paths. Finally, we implement the proposed design in a simulated SDN-based DCN environment. The simulation results reveal the great benefits of such a collaborative approach. 
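Several of the traffic engineering works listed above (notably the SDN-based WAN study) restrict segment routing to IGP shortest paths through a single middlepoint, so that a flow is fully described by a two-entry segment list. The following sketch illustrates that idea in Python: it scans candidate middlepoints and keeps the two-segment path whose most loaded link remains least utilised once the new demand is added. The topology, capacities and data structures are illustrative assumptions and are not taken from any of the cited papers.

```python
import heapq

def dijkstra_paths(adj, src):
    """Shortest paths from src over adj[u] = {v: igp_cost}; returns {dst: path}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in adj.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    paths = {}
    for dst in dist:
        node, path = dst, [dst]
        while node != src:
            node = prev[node]
            path.append(node)
        paths[dst] = path[::-1]
    return paths

def best_two_segment_path(adj, capacity, load, src, dst, demand):
    """Try every candidate middlepoint m and keep the 2-segment path
    (IGP shortest path src->m followed by m->dst) whose most loaded link
    stays least utilised once the new demand is added.
    capacity and load are dicts keyed by directed links (u, v)."""
    from_src = dijkstra_paths(adj, src)
    best = None
    for m in adj:
        if m in (src, dst) or m not in from_src:
            continue
        from_m = dijkstra_paths(adj, m)
        if dst not in from_m:
            continue
        path = from_src[m] + from_m[dst][1:]
        links = list(zip(path, path[1:]))
        util = max((load.get(l, 0.0) + demand) / capacity[l] for l in links)
        if best is None or util < best[2]:
            best = (m, path, util)   # the segment list would simply be [m, dst]
    return best

# Example (hypothetical 4-node topology; costs, capacities and demand are made up):
adj = {"A": {"B": 1, "C": 1}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
cap = {(u, v): 10.0 for u in adj for v in adj[u]}
print(best_two_segment_path(adj, cap, load={("A", "B"): 8.0}, src="A", dst="D", demand=3.0))
# -> ('C', ['A', 'C', 'D'], 0.3): steering via middlepoint C avoids the loaded A->B link
```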
--- paper_title: Reliable segment routing paper_content: Segment Routing (SR) is a novel traffic engineering technique compatible with traditional MPLS data plane. SR relies on label stacking to steer traffic flows throughout the network. Signaling protocol is not required, thus control plane operation is greatly simplified. SR can also be exploited upon network failures to promptly perform traffic recovery and, subsequently, to optimize the recovered traffic for avoiding network congestion. This study proposes a procedure to dynamically recover the traffic flows disrupted by a single failure in segment routing networks. The proposed procedure is evaluated in several network topologies to estimate the complexity of the required operations. --- paper_title: Demonstration of Segment Routing with SDN based label stack optimization paper_content: Segment Routing (SR) architecture is a promising technology. It is being standardized in collaboration between vendors and service providers. Through its simplistic control plane and the reuse of existing data planes namely MPLS and IPv6, SR helps operators to reduce the Operation Expense (OpEx) and the Capital Expense (CapEx). In the instantiation of SR over the MPLS data plane (SR-MPLS), a SR path gets encoded as a label stack that the ingress nodes push onto the client packet. However, the longer the path in term of traversed nodes the bigger the stack gets. In this demonstration, we couple the capabilities of an SDN controller and a path encoding engine to reduce that size of the label stack to express segment routing paths. --- paper_title: Optimizing restoration with segment routing paper_content: Segment routing is a new proposed routing mechanism for simplified and flexible path control in IP/MPLS networks. It builds on existing network routing and connection management protocols and one of its important features is the automatic rerouting of connections upon failure. Re-routing can be done with available restoration mechanisms including IGP-based rerouting and fast reroute with loop-free alternates. This is particularly attractive for use in Software Defined Networks (SDN) because the central controller need only be involved at connection set-up time and failures are handled automatically in a distributed manner. A significant challenge in restoration optimization in segment routed networks is the centralized determination of connections primary paths so as to enable the best sharing of restoration bandwidth over non-simultaneous network failures. We formulate this problem as a linear programming problem and develop an efficient primal-dual algorithm for the solution. We also develop a simple randomized rounding scheme for cases when there are additional constraints on segment routing. We demonstrate the significant capacity benefits achievable from this optimized restoration with segment routing. --- paper_title: A Novel QoE-Centric SDN-Based Multipath Routing Approach for Multimedia Services over 5G Networks paper_content: The explosion of enhanced applications such as live video streaming, video gaming and Virtual Reality calls for efforts to optimize transport protocols to manage the increasing amount of data traffic on future 5G networks. Through bandwidth aggregation over multiple paths, the Multi-Path Transmission Control Protocol (MPTCP) can enhance the performance of network applications. MPTCP can split a large multimedia flow into subflows and apply a congestion control mechanism on each subflow. 
Segment Routing (SR), a promising source routing approach, has emerged to provide advanced packet forwarding over 5G networks. In this paper, we explore the utilization of MPTCP and SR in SDN-based networks to improve network resource utilization and the end-user's QoE for delivering multimedia services over 5G networks. We propose a novel QoE-aware, SDN-based MPTCP/SR approach for service delivery. In order to demonstrate the feasibility of our approach, we implemented an intelligent QoE-centric Multipath Routing Algorithm (QoMRA) on an SDN source routing platform using Mininet and the POX controller. We carried out experiments on Dynamic Adaptive Streaming over HTTP (DASH) applications over various network conditions. The preliminary results show that our QoE-aware SDN-based MPTCP/SR scheme performs better compared to the conventional TCP approach in terms of throughput, link utilization and the end-user's QoE. --- paper_title: TI-MFA: Keep calm and reroute segments fast paper_content: Segment Routing (SR) promises to provide scalable and fine-grained traffic engineering. However, little is known today about how to implement resilient routing in SR, i.e., routes which tolerate one or even multiple failures. This paper initiates the theoretical study of static fast failover mechanisms which do not depend on reconvergence and hence support a very fast reaction to failures. We introduce formal models and identify fundamental tradeoffs on what can and cannot be achieved in terms of static resilient routing. In particular, we identify an inherent price in terms of performance if routing paths need to be resilient, even in the absence of failures. Our main contribution is a first algorithm which is resilient even to multiple failures and which comes with provable resiliency and performance guarantees. We complement our formal analysis with simulations on real topologies, which show the benefits of our approach over existing algorithms. --- paper_title: Expect the unexpected: Sub-second optimization for segment routing paper_content: In this paper, we study how to perform traffic engineering at an extremely small time scale with segment routing, addressing a critical need for modern wide area networks. Prior work has shown that segment routing enables better traffic engineering, thanks to its ability to program detours in forwarding paths, at scale. Two main approaches have been explored for traffic engineering with segment routing, respectively based on integer linear programming and constraint programming. However, no previous work deeply investigated how quickly those approaches can react to unexpected traffic changes and failures. We highlight limitations of existing algorithms, both in terms of required execution time and the amount of path changes to be applied. Thus, we propose a new approach, based on local search and focused on the quick re-arrangement of (few) forwarding paths. We describe heuristics for sub-second recomputation of segment-routing paths that comply with requirements on the maximum link load (e.g., for congestion avoidance). Our heuristics enable a prompt response to sudden criticalities affecting network services and business agreements. Through extensive simulations, we experimentally show that our proposal significantly outperforms previous algorithms in the context of time-constrained optimization, supporting radical traffic changes in a few tens of milliseconds for realistic networks.
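The sub-second optimisation work above replaces a full re-optimisation with local-search moves that rearrange only a few segment paths within a strict time budget. The toy loop below is written in that spirit but is not the authors' heuristic: candidate alternative segment paths are assumed to be precomputed offline (for example a handful of middlepoint detours per flow), and all function names and data structures are illustrative.

```python
import time

def link_loads(flows):
    """flows: {flow_id: (path_as_node_list, volume)} -> {directed_link: load}."""
    loads = {}
    for path, vol in flows.values():
        for link in zip(path, path[1:]):
            loads[link] = loads.get(link, 0.0) + vol
    return loads

def max_utilisation(flows, capacity):
    # Assumes flows is non-empty and capacity covers every link in use.
    loads = link_loads(flows)
    return max(loads[l] / capacity[l] for l in loads), loads

def local_search(flows, candidates, capacity, budget_s=0.05):
    """Greedy, time-bounded re-optimisation: repeatedly move one flow that
    crosses the most utilised link onto one of its precomputed alternative
    segment paths, as long as that lowers the maximum link utilisation."""
    deadline = time.monotonic() + budget_s
    moves = []
    while time.monotonic() < deadline:
        current, loads = max_utilisation(flows, capacity)
        hottest = max(loads, key=lambda l: loads[l] / capacity[l])
        improved = False
        for fid, (path, vol) in flows.items():
            if hottest not in zip(path, path[1:]):
                continue
            for alt in candidates.get(fid, []):
                trial = dict(flows)
                trial[fid] = (alt, vol)
                new_util, _ = max_utilisation(trial, capacity)
                if new_util < current:
                    flows[fid] = (alt, vol)   # apply the improving move
                    moves.append((fid, alt))
                    improved = True
                    break
            if improved:
                break
        if not improved:
            break  # no single move improves the objective: local optimum reached
    return flows, moves
```

The time budget and the one-move-at-a-time policy are the only points this sketch tries to convey; how candidate paths are generated and how moves are pushed to the ingress nodes is left out on purpose.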
--- paper_title: An efficient routing algorithm based on segment routing in software-defined networking paper_content: Software-defined networking (SDN) is an emerging architecture that offers advantages over traditional network architecture. Segment routing (SR) defines the path of information through the network via an ordered list of multi-protocol label switching (MPLS) mechanisms on the packet headers at the ingress device, and this system makes SDN routing management simpler and more efficient. SR can also solve some scalability issues in SDN. In this paper, we propose a routing algorithm for SDN with SR that can meet the bandwidth requirements of routing requests. Our algorithm considers the balance of traffic load and reduces the extra cost of packet header size in a network. Simulation results show that the performance of our algorithm is better than that of previously developed algorithms in terms of the average network throughput and the average rejection rate of routing requests. --- paper_title: A scalable and bandwidth-efficient multicast algorithm based on segment routing in software-defined networking paper_content: Software-Defined Networking (SDN) is an emerging architecture and offers advantages over traditional network architecture, although some scalability challenges remain. In this paper, we propose a multicast routing algorithm for SDN with segment routing to serve the bandwidth requirement of a multicast routing request. Our algorithm balances the traffic load across both network resources: link bandwidth and node flow entries. Simulation results show that the performance of our algorithm is better than that of previous works in terms of average network throughput and average rejection rate of routing requests. In addition, the results show that our multicast architecture mitigates the scalability problem of the original SDN model in terms of the number of flow entries used. --- paper_title: Segment routing based traffic engineering for energy efficient backbone networks paper_content: Energy consumption has become a limiting factor for deploying large-scale distributed infrastructures. This work seeks to improve the energy efficiency of backbone networks by providing an intra-domain Software Defined Network (SDN) approach to selectively turn off a subset of links. We propose the STREETE framework (SegmenT Routing based Energy Efficient Traffic Engineering) that dynamically adapts the number of powered-on links to the traffic load. The core of the solution relies on SPRING, a novel protocol being standardized by the IETF and also known as Segment Routing. The algorithms have been implemented and evaluated using the OMNET++ simulator. Experimental results show that the consumption of 44% of the links can be reduced while preserving good quality of service. --- paper_title: Demonstration of dynamic restoration in segment routing multi-layer SDN networks paper_content: Dynamic traffic recovery is designed and validated in a multi-layer network exploiting an SDN-based implementation of Segment Routing. Traffic recovery is locally performed from the node detecting the failure up to the destination node without involving the SDN controller. Experimental results demonstrate recovery time within 50 ms. --- paper_title: Traffic duplication through segmentable disjoint paths paper_content: Ultra-low latency is a key component of safety-critical operations such as robot-assisted remote surgery or financial applications where every single millisecond counts.
In this paper, we show how network operators can build upon the recently proposed Segment Routing architecture to provide a traffic duplication service to better serve the users of such demanding applications. We propose the first implementation of Segment Routing in the Linux kernel and leverage it to provide a traffic duplication service that sends packets over disjoint paths. Our experiments show that with such a service existing TCP stacks can preserve latency in the presence of packet losses. We also propose and evaluate an efficient algorithm that computes disjoint paths that can be realised by using segments. Our evaluation with real and synthetic network topologies shows that our proposed algorithms perform well in large networks. --- paper_title: Traffic engineering in segment routing networks paper_content: Abstract Segment routing (SR) has been recently proposed as an alternative traffic engineering (TE) technology enabling relevant simplifications in control plane operations. In the literature, preliminary investigations on SR have focused on label encoding algorithms and experimental assessments, without carefully addressing some key aspects of SR in terms of the overall network TE performance. In this study, ILP models and heuristics are proposed and successfully utilized to assess the TE performance of SR-based packet networks. Results show that the default SR behavior of exploiting equal cost multiple paths (ECMP) may lead to several drawbacks, including higher network resource utilization with respect to cases where ECMP is avoided. Moreover, results show that, by properly performing segment list computations, it is possible to achieve very effective TE solutions by just using a very limited number of stacked labels, thus successfully exploiting the benefits of the SR technology. --- paper_title: Robustly disjoint paths with segment routing paper_content: Motivated by conversations with operators and by possibilities to unlock future Internet-based applications, we study how to enable Internet Service Providers (ISPs) to reliably offer connectivity through disjoint paths as an advanced, value-added service. As ISPs are increasingly deploying Segment Routing (SR), we focus on implementing such service with SR. We introduce the concept of robustly disjoint paths, pairs of paths that are constructed to remain disjoint even after an input set of failures, with no external intervention (e.g., configuration change). We extend the routing theory, study the problem complexity, and design efficient algorithms to automatically compute SR-based robustly disjoint paths. Our algorithms enable a fully automated approach to offer the disjoint-path connectivity, based on configuration synthesis. Our evaluation on real topologies shows that such an approach is practical, and scales to large ISP networks. --- paper_title: Local Fast Segment Rerouting on Hypercubes paper_content: Fast rerouting is an essential mechanism in any dependable communication network, allowing to quickly, i.e., locally, recover from network failures, without invoking the control plane. However, while locality ensures a fast reaction, the absence of global information also renders the design of highly resilient fast rerouting algorithms more challenging. In this paper, we study algorithms for fast rerouting in emerging Segment Routing (SR) networks, where intermediate destinations can be added to packets by nodes along the path. 
Our main contribution is a maximally resilient polynomial-time fast rerouting algorithm for SR networks based on a hypercube topology. Our algorithm is attractive as it preserves the original paths (and hence waypoints traversed along the way), and does not require packets to carry failure information. We complement our results with an integer linear program formulation for general graphs and exploratory simulation results. --- paper_title: Solving Segment Routing Problems with Hybrid Constraint Programming Techniques paper_content: Segment routing is an emerging network technology that exploits the existence of several paths between a source and a destination to spread the traffic in a simple and elegant way. The major commercial network vendors already support segment routing, and several Internet actors are ready to use segment routing in their network. Unfortunately, by changing the way paths are computed, segment routing poses new optimization problems which cannot be addressed with previous research contributions. In this paper, we propose a new hybrid constraint programming framework to solve traffic engineering problems in segment routing. We introduce a new representation of path variables which can be seen as a lightweight relaxation of usual representations. We show how to define and implement fast propagators on these new variables while reducing the memory impact of classical traffic engineering models. The efficiency of our approach is confirmed by experiments on real and artificial networks of big Internet actors. --- paper_title: PMSR - Poor Man's Segment Routing, a minimalistic approach to Segment Routing and a Traffic Engineering use case paper_content: The current specification of the Segment Routing (SR) architecture requires enhancements to the intradomain routing protocols (e.g. OSPF and IS-IS) so that the nodes can advertise the Segment Identifiers (SIDs). We propose a simpler solution called PMSR (Poor Man's Segment Routing), that does not require any enhancement to routing protocol. We compare the procedures of PMSR with traditional SR, showing that PMSR can reduce the operation and management complexity. We analyze the set of use cases in the current SR drafts and we claim that PMSR can support the large majority of them. Thanks to the drastic simplification of the control plane, we have been able to develop an open source prototype of PMSR. In the second part of the paper, we consider a Traffic Engineering use case, starting from a traditional flow assignment optimization problem, which allocates hop-by-hop paths to flows. We propose a SR path assignment algorithm and prove that it is optimal with respect to the number of segments allocated to a flow. --- paper_title: Traffic Engineering with Segment Routing: SDN-Based Architectural Design and Open Source Implementation paper_content: Traffic Engineering (TE) in IP carrier networks is one of the functions that can benefit from the Software Defined Networking paradigm. However traditional per-flow routing requires a direct interaction between the SDN controller and each node that is involved in the traffic paths. Segment Routing (SR) may simplify the route enforcement delegating all the configuration and per-flow state at the border of the network. In this work we propose an architecture that integrates the SDN paradigm with SR based TE, for which we have provided an open source reference implementation. 
We have designed and implemented a simple TE/SR heuristic for flow allocation and we show and discuss experimental results. --- paper_title: Optimized network traffic engineering using segment routing paper_content: Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middle boxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice. --- paper_title: Incremental Deployment of Segment Routing Into an ISP Network: a Traffic Engineering Perspective paper_content: Segment routing (SR) is a new routing paradigm to provide traffic engineering (TE) capabilities in an IP network. The main feature of SR is that no signaling protocols are needed, since extensions of the interior gateway protocol routing protocols are used. Despite the benefit that SR brings, introducing a new technology into an operational network presents many difficulties. In particular, the network operators consider both capital expenditure and performance degradation as drawbacks for the deployment of the new technology; for this reason, an incremental approach is preferred. In this paper, we face the challenge of managing the transition between a pure IP network to a full SR one while optimizing the network performances. We focus our attention on a network scenario where: 1) only a subset of nodes are SR-capable and 2) the TE objective is the minimization of the maximum link utilization. For such a scenario, we propose an architectural solution, named SR domain (SRD), to guarantee the proper interworking between the IP routers and the SR nodes. We propose a mixed integer linear programming formulation to solve the SRD design problem, consisting in identifying the subset of SR nodes; moreover, a strategy to manage the routing inside the SRD is defined. The performance evaluation shows that the hybrid IP/SR network based on SRD offers TE opportunities comparable to the one of a full SR network. Finally, a heuristic method to identify nodes to be inserted in the set of nodes composing the SRD is discussed. --- paper_title: Per-packet based energy aware segment routing approach for Data Center Networks with SDN paper_content: In today's scenario, energy efficiency has become one of the most crucial issues for Data Center Networks. This paper analyzes the energy saving capability of a Data Center Network (DCN) using Segment Routing (SR) based model within the SDN architecture. Apart from saving energy by turning off links, our work efficiently manages the traffic within the available links by using per-packet based load balancing approach to avoid congestion within DCNs and increases the sleeping time of inactive links. An algorithm for deciding the particular set of links to be turned off within a network is presented. Energy efficiency is measured in terms of number of links turned off and for how long the links remain in sleep mode. 
Results show that the proposed per-packet SR approach saves more energy and provides better performance as compared to per-flow based SR approach. --- paper_title: Optimizing segment routing using evolutionary computation paper_content: Abstract Segment Routing (SR) combines the simplicity of Link-State routing protocols with the flexibility of Multiprotocol Label Switching (MPLS). By decomposing forwarding paths into segments, identified by labels, SR improves Traffic Engineering (TE) and enables new solutions for the optimization of network resources utilization. This work proposes an Evolutionary Computation approach that enables Path Computation Element (PCE) or Software-defined Network (SDN) controllers to optimize label switching paths for congestion avoidance while using at the most three labels to configure each label switching path. --- paper_title: Segment routing for effective recovery and multi-domain traffic engineering paper_content: Segment routing is an emerging traffic engineering technique relying on Multi-protocol Label-Switched (MPLS) label stacking to steer traffic using the source-routing paradigm. Traffic flows are enforced through a given path by applying a specifically designed stack of labels (i.e., the segment list). Each packet is then forwarded along the shortest path toward the network element represented by the top label. Unlike traditional MPLS networks, segment routing maintains a per-flow state only at the ingress node; no signaling protocol is required to establish new flows or change the routing of active flows. Thus, control plane scalability is greatly improved. Several segment routing use cases have recently been proposed. As an example, it can be effectively used to dynamically steer traffic flows on paths characterized by low latency values. However, this may suffer from some potential issues. Indeed, deployed MPLS equipment typically supports a limited number of stacked labels. Therefore, it is important to define the proper procedures to minimize the required segment list depth. This work is focused on two relevant segment routing use cases: dynamic traffic recovery and traffic engineering in multi-domain networks. Indeed, in both use cases, the utilization of segment routing can significantly simplify the network operation with respect to traditional Internet Protocol (IP)/MPLS procedures. Thus, two original procedures based on segment routing are proposed for the aforementioned use cases. Both procedures are evaluated including a simulative analysis of the segment list depth. Moreover, an experimental demonstration is performed in a multi-layer test bed exploiting a software-defined-networking-based implementation of segment routing. --- paper_title: An Optimization Routing Algorithm Based on Segment Routing in Software-Defined Networks paper_content: Software-defined networks (SDNs) are improving the controllability and flexibility of networks as an innovative network architecture paradigm. Segment routing (SR) exploits an end-to-end logical path and is composed of a sequence of segments as an effective routing strategy. Each segment is represented by a middle point. The combination of SR and SDN can meet the differentiated business needs of users and can quickly deploy applications. In this paper, we propose two routing algorithms based on SR in SDN. The algorithms aim to save the cost of the path, alleviate the congestion of networks, and formulate the selection strategy by comprehensively evaluating the value of paths. 
The simulation results show that compared with existing algorithms, the two proposed algorithms can effectively reduce the consumption of paths and better balance the load of the network. Furthermore, the proposed algorithms take into account the preferences of users, actualize differentiated business networks, and achieve a larger comprehensive evaluation value of the path compared with other algorithms. --- paper_title: A Linux kernel implementation of Segment Routing with IPv6 paper_content: Network operators seek more flexibility to provide added-value services to their customers. We propose to leverage the IPv6 Segment Routing architecture with a example usecase. We also provide an implementation of SR-IPv6 and we evaluate its performance on broadband routers. --- paper_title: Reliable segment routing paper_content: Segment Routing (SR) is a novel traffic engineering technique compatible with traditional MPLS data plane. SR relies on label stacking to steer traffic flows throughout the network. Signaling protocol is not required, thus control plane operation is greatly simplified. SR can also be exploited upon network failures to promptly perform traffic recovery and, subsequently, to optimize the recovered traffic for avoiding network congestion. This study proposes a procedure to dynamically recover the traffic flows disrupted by a single failure in segment routing networks. The proposed procedure is evaluated in several network topologies to estimate the complexity of the required operations. --- paper_title: Flexible failure detection and fast reroute using eBPF and SRv6 paper_content: Segment Routing is a modern variant of source routing that is being gradually deployed by network operators. Large ISPs use it for traffic engineering and fast reroute purposes. Its IPv6 dataplane, named SRv6, goes beyond the initial MPLS dataplane, notably by enabling network programmability. With SRv6, it becomes possible to define transparent network functions on routers and endhosts. These functions are mapped to IPv6 addresses and their execution is scheduled by segments placed in the forwarded packets. We have recently extended the Linux SRv6 implementation to enable the execution of specific eBPF code upon reception of an SRv6 packet containing local segments. eBPF is a virtual machine that is included in the Linux kernel. We leverage this new feature of Linux 4.18 to propose and implement flexible eBPF-based fast-reroute and failure detection schemes. Our lab measurements confirm that they provide good performance and enable faster failure detections than existing BFD implementations on Linux routers and servers. --- paper_title: Optimizing restoration with segment routing paper_content: Segment routing is a new proposed routing mechanism for simplified and flexible path control in IP/MPLS networks. It builds on existing network routing and connection management protocols and one of its important features is the automatic rerouting of connections upon failure. Re-routing can be done with available restoration mechanisms including IGP-based rerouting and fast reroute with loop-free alternates. This is particularly attractive for use in Software Defined Networks (SDN) because the central controller need only be involved at connection set-up time and failures are handled automatically in a distributed manner. 
A significant challenge in restoration optimization in segment routed networks is the centralized determination of connections primary paths so as to enable the best sharing of restoration bandwidth over non-simultaneous network failures. We formulate this problem as a linear programming problem and develop an efficient primal-dual algorithm for the solution. We also develop a simple randomized rounding scheme for cases when there are additional constraints on segment routing. We demonstrate the significant capacity benefits achievable from this optimized restoration with segment routing. --- paper_title: Demonstration of dynamic restoration in segment routing multi-layer SDN networks paper_content: Dynamic traffic recovery is designed and validated in a multi-layer network exploiting an SDN-based implementation of Segment Routing. Traffic recovery is locally performed from the node detecting the failure up to the destination node without involving the SDN controller. Experimental results demonstrate recovery time within 50 ms. --- paper_title: Traffic duplication through segmentable disjoint paths paper_content: Ultra-low latency is a key component of safety-critical operations such as robot-assisted remote surgery or financial applications where every single millisecond counts. In this paper, we show how network operators can build upon the recently proposed Segment Routing architecture to provide a traffic duplication service to better serve the users of such demanding applications. We propose the first implementation of Segment Routing in the Linux kernel and leverage it to provide a traffic duplication service that sends packets over disjoint paths. Our experiments show that with such a service existing TCP stacks can preserve latency in the presence of packet losses. We also propose and evaluate an efficient algorithm that computes disjoint paths that can be realised by using segments. Our evaluation with real and synthetic network topologies shows that our proposed algorithms perform well in large networks. --- paper_title: Leveraging eBPF for programmable network functions with IPv6 Segment Routing paper_content: With the advent of Software Defined Networks (SDN), Network Function Virtualisation (NFV) or Service Function Chaining (SFC), operators expect networks to support flexible services beyond the mere forwarding of packets. The network programmability framework which is being developed within the IETF by leveraging IPv6 Segment Routing enables the realisation of in-network functions. In this paper, we demonstrate that this vision of in-network programmability can be realised. By leveraging the eBPF support in the Linux kernel, we implement a flexible framework that allows network operators to encode their own network functions as eBPF code that is automatically executed while processing specific packets. Our lab measurements indicate that the overhead of calling such eBPF functions remains acceptable. Thanks to eBPF, operators can implement a variety of network functions. We describe the architecture of our implementation in the Linux kernel. This extension has been released with Linux 4.18. We illustrate the flexibility of our approach with three different use cases: delay measurements, hybrid networks and network discovery. Our lab measurements also indicate that the performance penalty of running eBPF network functions on Linux routers does not incur a significant overhead. 
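The restoration study above optimises primary segment paths so that restoration bandwidth can be shared across non-simultaneous failures. The sketch below illustrates only the accounting behind that sharing, not the primal-dual optimisation itself: given already-chosen backup segment paths for each single-failure scenario, the spare capacity a link needs is the maximum over scenarios rather than the sum. Names and values are illustrative assumptions.

```python
from collections import defaultdict

def shared_spare_capacity(reroutes):
    """reroutes maps each single-failure scenario to the traffic it displaces:
        {failure_id: [(backup_segment_path_as_node_list, volume), ...]}
    Under the single-failure assumption, a link only has to absorb the worst
    individual scenario, so the required spare capacity is the per-link
    maximum over scenarios rather than the sum over all of them."""
    spare = defaultdict(float)
    for rerouted in reroutes.values():
        per_link = defaultdict(float)
        for path, volume in rerouted:
            for link in zip(path, path[1:]):
                per_link[link] += volume
        for link, volume in per_link.items():
            spare[link] = max(spare[link], volume)
    return dict(spare)

# Hypothetical example: two failure scenarios both push traffic onto link (B, C),
# but since they cannot happen together only max(4, 5) = 5 units of spare
# capacity are needed there (instead of 9 without sharing).
print(shared_spare_capacity({
    "fail_A-C": [(["A", "B", "C"], 4.0)],
    "fail_D-C": [(["D", "B", "C"], 5.0)],
}))
# -> {('A', 'B'): 4.0, ('B', 'C'): 5.0, ('D', 'B'): 5.0}
```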
--- paper_title: Segment routing for effective recovery and multi-domain traffic engineering paper_content: Segment routing is an emerging traffic engineering technique relying on Multi-protocol Label-Switched (MPLS) label stacking to steer traffic using the source-routing paradigm. Traffic flows are enforced through a given path by applying a specifically designed stack of labels (i.e., the segment list). Each packet is then forwarded along the shortest path toward the network element represented by the top label. Unlike traditional MPLS networks, segment routing maintains a per-flow state only at the ingress node; no signaling protocol is required to establish new flows or change the routing of active flows. Thus, control plane scalability is greatly improved. Several segment routing use cases have recently been proposed. As an example, it can be effectively used to dynamically steer traffic flows on paths characterized by low latency values. However, this may suffer from some potential issues. Indeed, deployed MPLS equipment typically supports a limited number of stacked labels. Therefore, it is important to define the proper procedures to minimize the required segment list depth. This work is focused on two relevant segment routing use cases: dynamic traffic recovery and traffic engineering in multi-domain networks. Indeed, in both use cases, the utilization of segment routing can significantly simplify the network operation with respect to traditional Internet Protocol (IP)/MPLS procedures. Thus, two original procedures based on segment routing are proposed for the aforementioned use cases. Both procedures are evaluated, including a simulative analysis of the segment list depth. Moreover, an experimental demonstration is performed in a multi-layer test bed exploiting a software-defined-networking-based implementation of segment routing. --- paper_title: Evolve carrier ethernet architecture with SDN and segment routing paper_content: Ethernet technology has been evolving to become the main Wide Area Network transport technology. To address the rising CAPEX and OPEX issues, service providers are deploying Unified MPLS technology to consolidate various networks into one integrated carrier Ethernet network. Although Unified MPLS has simplified MPLS deployment in many aspects, it is still deemed complex and it is not as agile as cloud computing demands. This paper proposes a further evolution of the carrier Ethernet architecture by coupling the emerging segment routing and Software-Defined Network (SDN) technologies. This new architecture will significantly simplify the network infrastructure while providing rich converged services with embedded high availability and agility. --- paper_title: SDN Architecture and Southbound APIs for IPv6 Segment Routing Enabled Wide Area Networks paper_content: The SRv6 architecture (segment routing based on the IPv6 data plane) is a promising solution to support services like Traffic Engineering, service function chaining and virtual private networks in IPv6 backbones and datacenters. The SRv6 architecture has interesting scalability properties as it reduces the amount of state information that needs to be configured in the nodes to support the network services. In this paper, we describe the advantages of complementing the SRv6 technology with a software-defined networking (SDN) based approach in backbone networks. We discuss the architecture of an SRv6-enabled network based on Linux nodes.
In addition, we present the design and implementation of the Southbound API between the SDN controller and the SRv6 device. We have defined a data-model and four different implementations of the API, respectively based on gRPC, REST, NETCONF, and remote command line interface. Since it is important to support both the development and testing aspects we have realized an Intent-based emulation system to build realistic and reproducible experiments. This collection of tools automate most of the configuration aspects relieving the experimenter from a significant effort. Finally, we have realized an evaluation of some performance aspects of our architecture and of the different variants of the Southbound APIs and we have analyzed the effects of the configuration updates in the SRv6 enabled nodes. --- paper_title: Demonstration of SDN-based orchestration for multi-domain Segment Routing networks paper_content: This work demonstrates a hierarchical control plane architecture for Software Defined Networking (SDN)-based Segment Routing (SR) in multi-domain networks. An orchestrator application, on top of multiple open source SDN controllers, creates a hierarchical control plane architecture using northbound RESTFul APIs of controllers. The orchestrator has control, visibility and traffic engineering capabilities to manage multi-domain SR service creation. Standard southbound interfaces with proper SR extensions are exploited to manage SR tunnels in the MPLS data plane. --- paper_title: Software Resolved Networks: Rethinking Enterprise Networks with IPv6 Segment Routing paper_content: Enterprise networks often need to implement complex policies that match business objectives. They will embrace IPv6 like ISP networks in the coming years. Among the benefits of IPv6, the recently proposed IPv6 Segment Routing (SRv6) architecture supports richer policies in a clean manner. This matches very well the requirements of enterprise networks. In this paper, we propose Software Resolved Networks (SRNs), a new architecture for IPv6 enterprise networks. We apply the fundamental principles of Software Defined Networks, i.e., the ability to control the operation of the network through software, but in a different manner that also involves the endhosts. We leverage SRv6 to enforce and control network paths according to the network policies. Those paths are computed by a centralized controller that interacts with the endhosts through the DNS protocol. We implement a Software Resolved Network on Linux endhosts, routers and controllers. Through benchmarks and simulations, we analyze the performance of those SRNs, and demonstrate that they meet the expectations of enterprise networks. --- paper_title: Path Encoding in Segment Routing paper_content: Segment Routing (SR) is emerging as an innovative traffic engineering technique compatible with traditional MPLS data plane. SR relies on label stacking, without requiring a signaling protocol. This greatly simplifies network operations in transit nodes. However, it may introduce scalability issues at the ingress node and packet overhead. Therefore, specific algorithms are required to efficiently compute the label stack for a given path. This study proposes two algorithms for SR label stack computation of strict routes that guarantee minimum label stack depth. Then, SR scalability performance is investigated. Results show that, in most of the cases, SR uses label stacks composed of few labels and introduces a limited packet overhead. 
However, relevant scalability issues may arise in specific cases, e.g., large planar topologies. --- paper_title: Efficient label encoding in segment-routing enabled optical networks paper_content: The currently standardized GMPLS protocol suite for packet over optical networks relies on hierarchical instances of signaling sessions. Such sessions have to be established and maintained also in transit nodes, leading to complex and weighty control plane implementations. A novel technology called Segment Routing (SR) has been recently proposed to address these issues. SR relies on the source routing paradigm to provide traffic engineering solutions. In particular, the computed route for a given request is expressed as a segment list applied as an header to data packets at the ingress node. Specific algorithms are then required to perform the path computation and express the computed path through an effective segment list encoding (i.e., label stack), minimizing the segment list depth (SLD) (i.e., the number of labels included in the segment list). So far, no algorithms have been proposed to jointly provide path and segment list computation in SR-based networks. In this study, an efficient segment list encoding algorithm is proposed, guaranteeing optimal path computation and limited SLD in SR-based networks. The algorithm also accounts for equal-cost multiple paths and multiple constraints. The proposed algorithm is successfully applied to different network scenarios, demonstrating its flexibility in several use cases and showing effective performance in terms of segment list depth and introduced packet overhead. --- paper_title: First demonstration of SDN-based segment routing in multi-layer networks paper_content: Segment Routing enabling dynamic optical bypass in a multi-layer network is experimentally demonstrated. Edge nodes are efficiently configured by an enhanced SDN controller, showing effective scalability performance under different label stacking conditions. --- paper_title: Service chaining in multi-layer networks using segment routing and extended BGP FlowSpec paper_content: Effective service chaining enforcement along TE paths is proposed using Segment Routing and extended BGP Flowspec for micro-flows mapping. The proposed solution is experimentally evaluated with a deep packet inspection service supporting dynamic flow enforcement. --- paper_title: PMSR - Poor Man's Segment Routing, a minimalistic approach to Segment Routing and a Traffic Engineering use case paper_content: The current specification of the Segment Routing (SR) architecture requires enhancements to the intradomain routing protocols (e.g. OSPF and IS-IS) so that the nodes can advertise the Segment Identifiers (SIDs). We propose a simpler solution called PMSR (Poor Man's Segment Routing), that does not require any enhancement to routing protocol. We compare the procedures of PMSR with traditional SR, showing that PMSR can reduce the operation and management complexity. We analyze the set of use cases in the current SR drafts and we claim that PMSR can support the large majority of them. Thanks to the drastic simplification of the control plane, we have been able to develop an open source prototype of PMSR. In the second part of the paper, we consider a Traffic Engineering use case, starting from a traditional flow assignment optimization problem, which allocates hop-by-hop paths to flows. 
We propose a SR path assignment algorithm and prove that it is optimal with respect to the number of segments allocated to a flow. --- paper_title: SCMon: Leveraging segment routing to improve network monitoring paper_content: To guarantee correct operation of their networks, operators have to promptly detect and diagnose data-plane issues, like broken interface cards or link failures. Networks are becoming more complex, with a growing number of Equal Cost MultiPath (ECMP) and link bundles. Hence, some data-plane problems (e.g. silent packet dropping at one router) can hardly be detected with control-plane protocols or simple monitoring tools like ping or traceroute. In this paper, we propose a new technique, called SCMon, that enables continuous monitoring of the data-plane, in order to track the health of all routers and links. SCMon leverages the recently proposed Segment Routing (SR) architecture to monitor the entire network with a single box (and no additional monitoring protocol). In particular, SCMon uses SR to (i) force monitoring probes to travel over cycles; and (ii) test parallel links and bundles at a per-link granularity. We present original algorithms to compute cycles that cover all network links with a limited number of SR segments. Further, we prototype and evaluate SCMon both with simulations and Linux-based emulations. Our experiments show that SCMon quickly detects and precisely pinpoints data-plane problems, with a limited overhead. --- paper_title: Network service chaining using segment routing in multi-layer networks paper_content: Network service chaining, originally conceived in the network function virtualization (NFV) framework for software defined networks (SDN), is becoming an attractive solution for enabling service differentiation enforcement to microflows generated by data centers, 5G fronthaul and Internet of Things (IoT) cloud/fog nodes, and traversing a metro-core network. However, the current IP/MPLS-over optical multi-layer network is practically unable to provide such service chain enforcement. First, MPLS granularity prevents microflows from being conveyed in dedicated paths. Second, service configuration for a huge number of selected flows with different requirements is prone to scalability concerns, even considering the deployment of a SDN network. In this paper, effective service chaining enforcement along traffic engineered (TE) paths is proposed using segment routing and extended traffic steering mechanisms for mapping micro-flows. The proposed control architecture is based on an extended SDN controller encompassing a stateful path computation element (PCE) handling microflow computation and placement supporting service chains, whereas segment routing allows automatic service enforcement without the need for continuous configuration of the service node. The proposed solution is experimentally evaluated in segment routing over an elastic optical network (EON) network testbed with a deep packet inspection service supporting dynamic and automatic flow enforcement using Border Gateway Protocol with Flow Specification (BGP Flowspec) and OpenFlow protocols as alternative traffic steering enablers. Scalability of flow computation, placement, and steering are also evaluated showing the effectiveness of the proposed solution. 
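A building block shared by the encoding-oriented works above and below (PMSR's segment-optimal path assignment, the path encoding and label encoding studies) is compressing an explicitly computed path into as few segments as possible so that the label stack stays within hardware limits. The greedy sketch below illustrates the idea under the simplifying assumption of unique, ECMP-free IGP shortest paths; it is not the algorithm of any specific paper, and the function and variable names are assumptions.

```python
def encode_segment_list(path, spf):
    """Greedily compress an explicit hop-by-hop path into a short segment list.

    path: explicit route as a node list, e.g. ['A', 'C', 'D', 'F'].
    spf:  {(src, dst): unique IGP shortest path as a node list}; in practice
          this would be derived from the link-state database.  ECMP is ignored
          (unique shortest paths are assumed), a simplification with respect
          to the encoding papers cited here."""
    segments = []
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Stretch the current segment as far as the explicit sub-path keeps
        # coinciding with the IGP shortest path from path[i].
        while j > i + 1 and spf.get((path[i], path[j])) != path[i:j + 1]:
            j -= 1
        if spf.get((path[i], path[j])) == path[i:j + 1]:
            segments.append(("node", path[j]))            # node/prefix segment
        else:
            j = i + 1
            segments.append(("adj", path[i], path[j]))    # adjacency segment
        i = j
    return segments

# Hypothetical topology where A->F via C and D is *not* the IGP shortest path:
spf = {("A", "C"): ["A", "C"], ("A", "D"): ["A", "B", "D"],
       ("A", "F"): ["A", "B", "D", "F"], ("C", "F"): ["C", "D", "F"]}
print(encode_segment_list(["A", "C", "D", "F"], spf))
# -> [('node', 'C'), ('node', 'F')]  (two labels instead of one per hop)
```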
--- paper_title: Translating Traffic Engineering outcome into Segment Routing paths: The Encoding problem paper_content: Traffic Engineering (TE) algorithms aims at determining the packet routing paths in order to satisfy specific QoS requirements. These paths are normally established through control procedures e.g., exchange of RSVP messages in MPLS networks or links weights modification in pure IP networks. An increase of control traffic or long convergence time intervals, respectively, are the drawbacks of these solutions. Segment Routing (SR) is a new network paradigm able to implement TE routing strategies over legacy IP/MPLS networks with no need of dedicated signaling procedures. This result is obtained by inserting in each packet header an ordered list of instructions, called Segments List, that indicates the path to be crossed. This paper provides the formulation of the Segment List Encoding problem i.e., the detection of the proper Segment Lists to obtain TE network paths minimizing the Segment Lists sizes. The SL encoding procedure is composed of two steps: i) the creation of an auxiliary graph representing the forwarding paths between the couple of source and destination nodes; ii) the solution of a Multi-commodity Flow (MCF) problem over the auxiliary graph. The performance evaluation shows that properly performing SL encoding allows to implement TE outcome with a reduced reconfiguration cost with respect to E2E tunneling and Hop-by-Hop solutions; moreover a significant advantage in terms of packets overhead is obtained. --- paper_title: SDN and PCE implementations for segment routing paper_content: Segment Routing (SR) technology has been proposed to enforce effective routing strategies without relying on signaling protocols. In this paper, two SR implementations are presented and successfully demonstrated in two different network testbeds. The first implementation focuses on a software defined networking (SDN) scenario where nodes consist of OpenFlow switches and the SR Controller is a specifically designed enhanced version of an OpenFlow Controller. The second implementation includes a novel Path Computation Element (PCE) scenario where nodes consist of commercially available IP/MPLS routers and the SR Controller is a new extended version of a PCE solution. Both implementations have been successfully applied to demonstrate dynamic traffic rerouting. In particular, by enforcing different segment list configurations at the ingress node, rerouting is effectively achieved with no packet loss and without requiring the use of signaling protocols. --- paper_title: A SDN-based network architecture for cloud resiliency paper_content: In spite of their commercial success, Cloud services are still subject to two major weak points: data security and infrastructure resiliency. In this paper, we propose an original Cloud network architecture aiming at improving the resiliency of Cloud network infrastructures interconnecting remote datacenters. The main originality of this architecture consists in exploiting the principles of Software Defined Networking (SDN) in order to adapt the rerouting strategies in case of network failure according to a set of requirements. In existing Cloud networks configurations, network recovery after a fiber cut is achieved by means of the usage of redundant bandwidth capacity preplanned through backup links. Such an approach has two drawbacks. First, it induces at a large scale a non-negligible additional cost for the Cloud Service Providers (CSP). 
Second, the pre-computation of the rerouting strategy may not be suited to the specific quality of service requirements of the various data flows that were transiting on the failing link. To prevent these two drawbacks, we propose that CSPs deploy their services in several redundant datacenters and make sure that those datacenters are properly interconnected via the Internet. For that purpose, we propose that a CSP may use the services of multiple (typically two) Internet Service Providers to interconnect its datacenters via the Internet. In practice, we propose that a set of “routing inflection points” may form an overlay network exploiting a specific routing strategy. We propose that this overlay is coordinated by a Software Defined Networking-based centralized controller. Thus, such a CSP may choose the network path between two datacenters the most suited to the underlying traffic QoS requirement. The proposed approach enables this CSP a certain independency from its network providers. In this paper, we present this new Cloud architecture. We outline how our approach mixes concepts taken from both SDN and Segment Routing. Unlike the protection techniques used by existing CSPs, we explain how this approach can be used to implement fast rerouting strategy for inter-datacenter data exchanges. --- paper_title: Label encoding algorithm for MPLS Segment Routing paper_content: Segment Routing is a new architecture that leverages the source routing mechanism to enhance packet forwarding in networks. It is designed to operate over either an MPLS (SR-MPLS) or an IPv6 control plane. SR-MPLS encodes a path as a stack of labels inserted in the packet header by the ingress node. This overhead may violate the Maximum SID Depth (MSD), the equipment hardware limitation which indicates the maximum number of labels an ingress node can push onto the packet header. Currently, the MSD varies from 3 to 5 depending on the equipment manufacturer. Therefore, the MSD value considerably limits the number of paths that can be implemented with SR-MPLS, leading to an inefficient network resource utilization and possibly to congestion. We propose and analyze SR-LEA, an algorithm for an efficient path label encoding that takes advantage of the existing IGP shortest paths in the network. The output of SR-LEA is the minimum label stack to express SR-MPLS paths according to the MSD constraint. Therefore, SR-LEA substantially slackens the impact of MSD and restores the path diversity that MSD forbids in the network. --- paper_title: Optimizing Segment Routing With the Maximum SLD Constraint Using Openflow paper_content: Segment routing is an emerging routing technology that was initially driven by commercial vendors to achieve scalable, flexible, and controllable routing. In segment routing, multiple multi-protocol label switch labels are stacked in the packet header to complete end-to-end transmission, which may lead to a large label stack and a long packet header. Thus, scalability issues may occur when segment routing is applied to large-scale networks. To address this issue, multiple mechanisms and algorithms have been proposed for minimizing the label stack size. However, we argue that these methods ignore the constraint on the maximum segment list depth (SLD), since the typical network equipment can currently only support three to five layers of labels. 
In this paper, we study segment routing with the maximum SLD constraint and demonstrate that issues such as explosive increases in the size of the label space and in the management overhead arise when the maximum SLD constraint is imposed. To address these issues, we make contributions from two main aspects. First, based on the network programmability that is provided by OpenFlow, a novel segment routing architecture with an improved data plane is proposed that reduces the overhead of additional flow entries and label space. Second, a new path encoding scheme is designed to minimize the SLD under the given maximum constraint, while taking multiple types of overhead into consideration. Moreover, we also perform simulations under different scenarios to evaluate the performance of the proposed algorithms. The simulation results demonstrate that the proposed mechanisms and algorithms can address the issues of segment routing when there is a constraint on the maximum SLD. --- paper_title: Segment routing in hybrid software-defined networking paper_content: Software-defined networking (SDN) decouples the network into different layers, bringing many new capabilities compared with traditional networks. However, SDN also comes with its own set of challenges and limitations. Combining the mature traditional network with the new benefits of SDN, hybrid SDN has attracted significant attention. On the other hand, segment routing keeps partial routing information within the packet header, so that a switch can forward packets directly without calculating a routing path or looking up a flow entry table. In this paper, we first propose a mechanism which integrates segment routing into hybrid SDN and reduces the number of flow entries needed on SDN switches. Then we formulate a routing algorithm for this mechanism. Our algorithm considers the balance of traffic load and reduces the number of flow entries needed in each switch. Simulation results show that our mechanism significantly reduces the number of flow entries compared with the baseline solution, and achieves better load balance than previous routing protocols. --- paper_title: Exploring various use cases for IPv6 Segment Routing paper_content: IPv6 Segment Routing (SRv6) is a modern version of source routing that is being standardised within the IETF to address a variety of use cases in ISP, datacenter and enterprise networks. Its inclusion in recent versions of the Linux kernel enables researchers to explore and extend this new protocol. We leverage and extend the SRv6 implementation in the Linux kernel to demonstrate two very different usages of this new protocol. We first show how enterprise networks can leverage SRv6 to better control the utilisation of their infrastructure and demonstrate how DNS resolvers can act as SDN controllers. We then demonstrate how SRv6Pipes can be used to efficiently implement network functions that need to process bytestreams on top of a packet-based SRv6 network. --- paper_title: Implementation of virtual network function chaining through segment routing in a linux-based NFV infrastructure paper_content: This paper presents an architecture to support Virtual Network Function (VNF) chaining using the IPv6 Segment Routing (SR) network programming model. Two classes of VNFs are considered: SR-aware and SR-unaware. The operations to support both SR-aware and SR-unaware VNFs are described at an architectural level and we propose a solution for SR-unaware VNFs hosted in an NFV node.
An Open Source implementation of the proposed solution for a Linux based NFV host is available and a set of performance measurements have been carried out in a testbed. --- paper_title: SERA: SEgment Routing Aware Firewall for Service Function Chaining scenarios paper_content: In this paper we consider the use of IPv6 Segment Routing (SRv6) for Service Function Chaining (SFC) in an NFV infrastructure. We first analyze the issues of deploying Virtual Network Functions (VNFs) based on SR-unaware applications, which require the introduction of SR proxies in the NFV infrastructure, leading to high complexity in the configuration and in the packet processing. Then we consider the advantages of SR-aware applications, focusing on a firewall application. We present the design and implementation of the SERA (SEgment Routing Aware) firewall, which extends the Linux iptables firewall. In its basic mode the SERA firewall works like the legacy iptables firewall (it can reuse an identical set of rules), but with the great advantage that it can operate on the SR encapsulated packets with no need of an SR proxy. Moreover we define an advanced mode, in which the SERA firewall can inspect all the fields of an SR encapsulated packet and can perform SR-specific actions. In the advanced mode the SERA firewall can fully exploit the features of the IPv6 Segment Routing network programming model. A performance evaluation of the SERA firewall is discussed, based on its result a further optimized prototype has been implemented and evaluated. --- paper_title: Efficient label encoding in segment-routing enabled optical networks paper_content: The currently standardized GMPLS protocol suite for packet over optical networks relies on hierarchical instances of signaling sessions. Such sessions have to be established and maintained also in transit nodes, leading to complex and weighty control plane implementations. A novel technology called Segment Routing (SR) has been recently proposed to address these issues. SR relies on the source routing paradigm to provide traffic engineering solutions. In particular, the computed route for a given request is expressed as a segment list applied as an header to data packets at the ingress node. Specific algorithms are then required to perform the path computation and express the computed path through an effective segment list encoding (i.e., label stack), minimizing the segment list depth (SLD) (i.e., the number of labels included in the segment list). So far, no algorithms have been proposed to jointly provide path and segment list computation in SR-based networks. In this study, an efficient segment list encoding algorithm is proposed, guaranteeing optimal path computation and limited SLD in SR-based networks. The algorithm also accounts for equal-cost multiple paths and multiple constraints. The proposed algorithm is successfully applied to different network scenarios, demonstrating its flexibility in several use cases and showing effective performance in terms of segment list depth and introduced packet overhead. --- paper_title: SRLB: The Power of Choices in Load Balancing with Segment Routing paper_content: Network load-balancers generally either do not take application state into account, or do so at the cost of a centralized monitoring system. This paper introduces a load-balancer running exclusively within the IP forwarding plane, i.e. in an application protocol agnostic fashion - yet which still provides application-awareness and makes real-time, decentralized decisions. 
To that end, IPv6 Segment Routing is used to direct data packets from a new flow through a chain of candidate servers, until one decides to accept the connection, based on its local state. This way, applications themselves naturally decide on how to share incoming connections, while incurring minimal network overhead, and no out-of-band signaling. Tests on different workloads - including realistic workloads such as replaying actual Wikipedia access traffic towards a set of replica Wikipedia instances - show significant performance benefits, in terms of shorter response times, when compared to a traditional random load-balancer. --- paper_title: A Content-aware Data-plane for Efficient And Scalable Video Delivery paper_content: Internet users consume increasing quantities of video content with higher Quality of Experience (QoE) expectations. Network scalability thus becomes a critical problem for video delivery as traditional Content Delivery Networks (CDN) struggle to cope with the demand. In particular, content-awareness has been touted as a tool for scaling CDNs through clever request and content placement. Building on that insight, we propose a network paradigm that provides application-awareness in the network layer, enabling the offload of CDN decisions to the data-plane. Namely, it uses chunk-level identifiers encoded into IPv6 addresses. These identifiers are used to perform network-layer cache admission by estimating the popularity of requests with a Least-Recently-Used (LRU) filter. Popular requests are then served from the edge cache, while unpopular requests are directly redirected to the origin server, circumventing the HTTP proxy. The parameters of the filter are optimized through analytical modeling and validated via both simulation and experimentation with a testbed featuring real cache servers. It yields improvements in QoE while decreasing the hardware requirements on the edge cache. Specifically, for a typical content distribution, our evaluation shows a 22% increase of the hit rate, a 36% decrease of the chunk download-time, and a 37% decrease of the cache server CPU load. --- paper_title: SRv6Pipes: enabling in-network bytestream functions paper_content: IPv6 Segment Routing is a recent IPv6 extension that is generating a lot of interest among researchers and in industry. Thanks to IPv6 SR, network operators can better control the paths followed by packets inside their networks. This provides enhanced traffic engineering capabilities and is key to support Service Function Chaining (SFC). With SFC, an end-to-end service is the composition of a series of in-network services. Simple services such as NAT, accounting or stateless firewalls can be implemented on a per-packet basis. However, more interesting services like transparent proxies, transparent compression or encryption, transcoding, etc. require functions that operate on the bytestream.In this paper, we extend the IPv6 implementation of Segment Routing in the Linux kernel to enable network functions that operate on the bytestream and not on a per-packet basis. Our SRv6Pipes enable network architects to design end-to-end services as a series of in-network functions. We evaluate the performance of our implementation with different microbenchmarks. 
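The content-aware data-plane entry above admits requests to the edge cache with a Least-Recently-Used filter: a chunk is served from the edge only if its identifier was requested recently enough to still be in the filter, otherwise the request is redirected to the origin server. Below is a minimal Python sketch of such a filter; the class name, capacity value and chunk-identifier format are illustrative assumptions and are not taken from the cited paper.

from collections import OrderedDict

class LruFilter:
    """Minimal LRU filter for popularity-based cache admission (illustrative)."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.entries = OrderedDict()

    def admit(self, chunk_id):
        if chunk_id in self.entries:
            # Seen recently: refresh recency and serve from the edge cache.
            self.entries.move_to_end(chunk_id)
            return True
        # First recent occurrence: remember it, but redirect to the origin.
        self.entries[chunk_id] = True
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        return False

# Example: only the repeated request for the same chunk is served at the edge.
f = LruFilter(capacity=3)
print(f.admit("video42/chunk7"))  # False -> redirect to origin
print(f.admit("video42/chunk7"))  # True  -> serve from edge cache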
--- paper_title: PMSR - Poor Man's Segment Routing, a minimalistic approach to Segment Routing and a Traffic Engineering use case paper_content: The current specification of the Segment Routing (SR) architecture requires enhancements to the intradomain routing protocols (e.g. OSPF and IS-IS) so that the nodes can advertise the Segment Identifiers (SIDs). We propose a simpler solution called PMSR (Poor Man's Segment Routing), that does not require any enhancement to routing protocol. We compare the procedures of PMSR with traditional SR, showing that PMSR can reduce the operation and management complexity. We analyze the set of use cases in the current SR drafts and we claim that PMSR can support the large majority of them. Thanks to the drastic simplification of the control plane, we have been able to develop an open source prototype of PMSR. In the second part of the paper, we consider a Traffic Engineering use case, starting from a traditional flow assignment optimization problem, which allocates hop-by-hop paths to flows. We propose a SR path assignment algorithm and prove that it is optimal with respect to the number of segments allocated to a flow. --- paper_title: Translating Traffic Engineering outcome into Segment Routing paths: The Encoding problem paper_content: Traffic Engineering (TE) algorithms aims at determining the packet routing paths in order to satisfy specific QoS requirements. These paths are normally established through control procedures e.g., exchange of RSVP messages in MPLS networks or links weights modification in pure IP networks. An increase of control traffic or long convergence time intervals, respectively, are the drawbacks of these solutions. Segment Routing (SR) is a new network paradigm able to implement TE routing strategies over legacy IP/MPLS networks with no need of dedicated signaling procedures. This result is obtained by inserting in each packet header an ordered list of instructions, called Segments List, that indicates the path to be crossed. This paper provides the formulation of the Segment List Encoding problem i.e., the detection of the proper Segment Lists to obtain TE network paths minimizing the Segment Lists sizes. The SL encoding procedure is composed of two steps: i) the creation of an auxiliary graph representing the forwarding paths between the couple of source and destination nodes; ii) the solution of a Multi-commodity Flow (MCF) problem over the auxiliary graph. The performance evaluation shows that properly performing SL encoding allows to implement TE outcome with a reduced reconfiguration cost with respect to E2E tunneling and Hop-by-Hop solutions; moreover a significant advantage in terms of packets overhead is obtained. --- paper_title: An Efficient Linux Kernel Implementation of Service Function Chaining for Legacy VNFs Based on IPv6 Segment Routing paper_content: We consider the IPv6 Segment Routing (SRv6) technology for Service Function Chaining of Virtual Network Functions (VNFs). Most of the VNFs are legacy VNFs (not aware of the SRv6 technology) and expect to process traditional IP packets. An SR proxy is needed to support them. We have extended the implementation of SRv6 in the Linux kernel, realizing an open source SR-proxy, referred to as SRNK (SR-Proxy Native Kernel). The performance of the proposed solution (SRNKvl) has been evaluated, identifying a poor scalability with respect to the number of VNFs to be supported in a node. 
Therefore we provided a second design (SRNKv2), enhancing the Linux Policy Routing framework. The performance of SRNKv2 is independent from the number of supported VNFs in a node. We compared the performance of SRNKv2 with a reference scenario not performing the encapsulation and decapsulation operation and demonstrated that the overhead of SRNKv2 is very small, on the order of 3.5%. --- paper_title: Label encoding algorithm for MPLS Segment Routing paper_content: Segment Routing is a new architecture that leverages the source routing mechanism to enhance packet forwarding in networks. It is designed to operate over either an MPLS (SR-MPLS) or an IPv6 control plane. SR-MPLS encodes a path as a stack of labels inserted in the packet header by the ingress node. This overhead may violate the Maximum SID Depth (MSD), the equipment hardware limitation which indicates the maximum number of labels an ingress node can push onto the packet header. Currently, the MSD varies from 3 to 5 depending on the equipment manufacturer. Therefore, the MSD value considerably limits the number of paths that can be implemented with SR-MPLS, leading to an inefficient network resource utilization and possibly to congestion. We propose and analyze SR-LEA, an algorithm for an efficient path label encoding that takes advantage of the existing IGP shortest paths in the network. The output of SR-LEA is the minimum label stack to express SR-MPLS paths according to the MSD constraint. Therefore, SR-LEA substantially slackens the impact of MSD and restores the path diversity that MSD forbids in the network. --- paper_title: Optimizing Segment Routing With the Maximum SLD Constraint Using Openflow paper_content: Segment routing is an emerging routing technology that was initially driven by commercial vendors to achieve scalable, flexible, and controllable routing. In segment routing, multiple multi-protocol label switch labels are stacked in the packet header to complete end-to-end transmission, which may lead to a large label stack and a long packet header. Thus, scalability issues may occur when segment routing is applied to large-scale networks. To address this issue, multiple mechanisms and algorithms have been proposed for minimizing the label stack size. However, we argue that these methods ignore the constraint on the maximum segment list depth (SLD), since the typical network equipment can currently only support three to five layers of labels. In this paper, we study segment routing with the maximum SLD constraint and demonstrate that issues, such as explosive increases in the size of the label space and the management overheads will arise when the maximum SLD constraint is imposed. To address these issues, we make contributions from two main aspects. First, based on the network programmability that is provided by openflow, a novel segment routing architecture with improved data plane is proposed that reduces the overhead of additional flow entries and label space. Second, a new path encoding scheme is designed to minimize the SLD under the given maximum constraint, while taking multiple types of overhead into consideration. Moreover, we also perform simulations under different scenarios to evaluate the performances of the proposed algorithms. The simulation results demonstrate that the proposed mechanisms and algorithms can address the issues of segment routing when there is a constraint on the maximum SLD. 
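The two label-encoding entries immediately above (SR-LEA and the maximum-SLD path encoding scheme) share the same underlying idea: express an explicit path with as few segments as possible by reusing the IGP shortest paths already present in the network, while respecting the MSD/SLD limit of three to five labels. The Python sketch below illustrates that greedy idea under simplifying assumptions (unique shortest paths, no ECMP handling, node SIDs only, with a stand-in for adjacency SIDs); it is not the exact algorithm of any cited paper, and the function and parameter names are invented for illustration.

import networkx as nx

def encode_segments(graph, explicit_path, msd=5, weight="weight"):
    """Greedy segment-list encoding sketch for an explicit path."""
    segments = []
    anchor_idx = 0          # index where the current segment starts
    i = 1
    while i < len(explicit_path):
        sp = nx.shortest_path(graph, explicit_path[anchor_idx],
                              explicit_path[i], weight=weight)
        if sp == explicit_path[anchor_idx:i + 1]:
            i += 1                                  # still on an IGP shortest path
        elif anchor_idx == i - 1:
            # A single hop that is not a shortest path: a real encoder would
            # emit an adjacency SID here; the sketch simply pins the next node.
            segments.append(explicit_path[i])
            anchor_idx = i
            i += 1
        else:
            # Shortest path diverged: close the segment at the previous node.
            segments.append(explicit_path[i - 1])
            anchor_idx = i - 1
    if not segments or segments[-1] != explicit_path[-1]:
        segments.append(explicit_path[-1])          # destination node SID
    if len(segments) > msd:
        raise ValueError(f"label stack of {len(segments)} exceeds MSD={msd}")
    return segments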
--- paper_title: Zero-Loss Virtual Machine Migration with IPv6 Segment Routing paper_content: With the development of large-scale data centers, Virtual Machine (VM) migration is a key component for resource optimization, cost reduction, and maintenance. From a network perspective, traditional VM migration mechanisms rely on the hypervisor running at the destination host advertising the new location of the VM once migration is complete. However, this creates a period of time during which the VM is not reachable, yielding packet loss.This paper introduces a method to perform zero-loss VM migration by using IPv6 Segment Routing (SR). Rather than letting the hypervisor update a locator mapping after VM migration is complete, a logical path consisting of the source and destination hosts is pre-provisioned. Packets destined to the migrating VM are sent through this path using SR, shortly before, during, and shortly after migration - the virtual router on the source host being in charge of forwarding packets locally if the VM migration has not completed yet, or to the destination host otherwise. The proposed mechanism is implemented as a VPP plugin, and feasibility of zero-loss VM migration is demonstrated with various workloads. Evaluation shows that this yields benefits in terms of session opening latency and TCP throughput. --- paper_title: A Linux kernel implementation of Segment Routing with IPv6 paper_content: Network operators seek more flexibility to provide added-value services to their customers. We propose to leverage the IPv6 Segment Routing architecture with a example usecase. We also provide an implementation of SR-IPv6 and we evaluate its performance on broadband routers. --- paper_title: Implementation of virtual network function chaining through segment routing in a linux-based NFV infrastructure paper_content: This paper presents an architecture to support Vir- tual Network Functions (VNFs) chaining using the IPv6 Segment Routing (SR) network programming model. Two classes of VNFs are considered: SR-aware and SR-unaware. The operations to support both SR-aware and SR-unaware VNFs are described at an architectural level and we propose a solution for SR-unaware VNFs hosted in a NFV node. An Open Source implementation of the proposed solution for a Linux based NFV host is available and a set of performance measurements have been carried out in a testbed. --- paper_title: SERA: SEgment Routing Aware Firewall for Service Function Chaining scenarios paper_content: In this paper we consider the use of IPv6 Segment Routing (SRv6) for Service Function Chaining (SFC) in an NFV infrastructure. We first analyze the issues of deploying Virtual Network Functions (VNFs) based on SR-unaware applications, which require the introduction of SR proxies in the NFV infrastructure, leading to high complexity in the configuration and in the packet processing. Then we consider the advantages of SR-aware applications, focusing on a firewall application. We present the design and implementation of the SERA (SEgment Routing Aware) firewall, which extends the Linux iptables firewall. In its basic mode the SERA firewall works like the legacy iptables firewall (it can reuse an identical set of rules), but with the great advantage that it can operate on the SR encapsulated packets with no need of an SR proxy. Moreover we define an advanced mode, in which the SERA firewall can inspect all the fields of an SR encapsulated packet and can perform SR-specific actions. 
In the advanced mode the SERA firewall can fully exploit the features of the IPv6 Segment Routing network programming model. A performance evaluation of the SERA firewall is discussed, based on its result a further optimized prototype has been implemented and evaluated. --- paper_title: SRLB: The Power of Choices in Load Balancing with Segment Routing paper_content: Network load-balancers generally either do not take application state into account, or do so at the cost of a centralized monitoring system. This paper introduces a load-balancer running exclusively within the IP forwarding plane, i.e. in an application protocol agnostic fashion - yet which still provides application-awareness and makes real-time, decentralized decisions. To that end, IPv6 Segment Routing is used to direct data packets from a new flow through a chain of candidate servers, until one decides to accept the connection, based on its local state. This way, applications themselves naturally decide on how to share incoming connections, while incurring minimal network overhead, and no out-of-band signaling. Tests on different workloads - including realistic workloads such as replaying actual Wikipedia access traffic towards a set of replica Wikipedia instances - show significant performance benefits, in terms of shorter response times, when compared to a traditional random load-balancer. --- paper_title: SRv6Pipes: enabling in-network bytestream functions paper_content: IPv6 Segment Routing is a recent IPv6 extension that is generating a lot of interest among researchers and in industry. Thanks to IPv6 SR, network operators can better control the paths followed by packets inside their networks. This provides enhanced traffic engineering capabilities and is key to support Service Function Chaining (SFC). With SFC, an end-to-end service is the composition of a series of in-network services. Simple services such as NAT, accounting or stateless firewalls can be implemented on a per-packet basis. However, more interesting services like transparent proxies, transparent compression or encryption, transcoding, etc. require functions that operate on the bytestream.In this paper, we extend the IPv6 implementation of Segment Routing in the Linux kernel to enable network functions that operate on the bytestream and not on a per-packet basis. Our SRv6Pipes enable network architects to design end-to-end services as a series of in-network functions. We evaluate the performance of our implementation with different microbenchmarks. --- paper_title: Leveraging IPv6 Segment Routing for Service Function Chaining paper_content: Network operators seek more flexibility with solutions like Network Function Virtualization and Service Function Chain- ing (SFC). Thanks to these techniques operators can define on-the-fly which services have to be applied to which pack- ets. We propose to use IPv6 Segment Routing to support SFC and describe and evaluate the performance of our im- plementation in the Linux kernel. --- paper_title: An Efficient Linux Kernel Implementation of Service Function Chaining for Legacy VNFs Based on IPv6 Segment Routing paper_content: We consider the IPv6 Segment Routing (SRv6) technology for Service Function Chaining of Virtual Network Functions (VNFs). Most of the VNFs are legacy VNFs (not aware of the SRv6 technology) and expect to process traditional IP packets. An SR proxy is needed to support them. 
We have extended the implementation of SRv6 in the Linux kernel, realizing an open source SR-proxy, referred to as SRNK (SR-Proxy Native Kernel). The performance of the proposed solution (SRNKvl) has been evaluated, identifying a poor scalability with respect to the number of VNFs to be supported in a node. Therefore we provided a second design (SRNKv2), enhancing the Linux Policy Routing framework. The performance of SRNKv2 is independent from the number of supported VNFs in a node. We compared the performance of SRNKv2 with a reference scenario not performing the encapsulation and decapsulation operation and demonstrated that the overhead of SRNKv2 is very small, on the order of 3.5%. --- paper_title: Zero-Loss Virtual Machine Migration with IPv6 Segment Routing paper_content: With the development of large-scale data centers, Virtual Machine (VM) migration is a key component for resource optimization, cost reduction, and maintenance. From a network perspective, traditional VM migration mechanisms rely on the hypervisor running at the destination host advertising the new location of the VM once migration is complete. However, this creates a period of time during which the VM is not reachable, yielding packet loss.This paper introduces a method to perform zero-loss VM migration by using IPv6 Segment Routing (SR). Rather than letting the hypervisor update a locator mapping after VM migration is complete, a logical path consisting of the source and destination hosts is pre-provisioned. Packets destined to the migrating VM are sent through this path using SR, shortly before, during, and shortly after migration - the virtual router on the source host being in charge of forwarding packets locally if the VM migration has not completed yet, or to the destination host otherwise. The proposed mechanism is implemented as a VPP plugin, and feasibility of zero-loss VM migration is demonstrated with various workloads. Evaluation shows that this yields benefits in terms of session opening latency and TCP throughput. --- paper_title: Implementing IPv6 Segment Routing in the Linux Kernel paper_content: IPv6 Segment Routing is a major IPv6 extension that provides a modern version of source routing that is currently being developed within the Internet Engineering Task Force (IETF). We propose the first open-source implementation of IPv6 Segment Routing in the Linux kernel. We first describe it in details and explain how it can be used on both endhosts and routers. We then evaluate and compare its performance with plain IPv6 packet forwarding in a lab environment. Our measurements indicate that the performance penalty of inserting IPv6 Segment Routing Headers or encapsulating packets is limited to less than 15%. On the other hand, the optional HMAC security feature of IPv6 Segment Routing is costly in a pure software implementation. Since our implementation has been included in the official Linux 4.10 kernel, we expect that it will be extended by other researchers for new use cases. --- paper_title: Field trial of a software defined network (SDN) using carrier Ethernet and segment routing in a tier-1 provider paper_content: Software Defined Networking (SDN) has brought a paradigmatic shift in the networking industry and has led to significant benefits in the data-center and enterprise network domains. The service provider networks that form the largest segment of networking industry, are now evaluating SDN technologies for adoption. 
In this paper, we present a SDN framework for service provider networks and report the first field trial of SDN in a tier-1 service provider domain. The proposed SDN framework is built using Carrier Ethernet and augmented with recently proposed Segment Routing paradigm manifested through Software Defined-Carrier Ethernet Switch Routers (SD-CESRs). Carrier Ethernet on account of its distinct, programmable control plane and Segment Routing through its source routing capabilities facilitates SDN implementation. The SD-CESRs are deployed in a tier-1 service provider network in the metropolis of Mumbai. The SDN framework is extended through specific APIs to enhance revenue bearing services portfolio of the service provider and performance results from the field are shown to validate the benefits of SDN adoption. --- paper_title: Control Exchange Points: Providing QoS-enabled End-to-End Services via SDN-based Inter-domain Routing Orchestration paper_content: Introduction. This paper presents the vision of the Control Exchange Point (CXP) architectural model. The model is motivated by the inflexibility and ossification of today’s inter-domain routing system, which renders critical QoS-constrained end-toend (e2e) network services difficult or simply impossible to provide. CXPs operate on slices of ISP networks and are built on basic Software Defined Networking (SDN) principles, such as the clean decoupling of the routing control plane from the data plane and the consequent logical centralization of control. The main goal of the architectural model is to provide e2e services with QoS constraints across domains. This is achieved through defining a new type of business relationship between ISPs, which advertise partial paths (so-called pathlets [7]) with specific properties, and the orchestrating role of the CXPs, which dynamically stitch them together and provision e2e QoS. Revenue from value-added services flows from the clients of the CXP to the ISPs participating in the service. The novelty of the approach is the combination of SDN programmability and dynamic path stitching techniques for inter-domain routing, which extends the value proposition of SDN over multiple domains. We first describe the challenges related to e2e service provision with the current inter-domain routing and peering model, and then continue with the benefits of our approach. Subsequently, we describe the CXP model in detail and report on an initial feasibility analysis. Motivation and Challenges. Complexity and ossification: The notorious complexity of the inter-domain routing system renders its management difficult and error-prone, leading to various inefficiencies such as suboptimal inter-domain paths. Indicatively, 60% of all Internet paths today are suffering from triangle inequality violations [9]. The current ossification of the system, hindering the introduction of new solutions, aggravates the problem further. Highly popular inter-domain services, such as highdefinition e2e real-time video streaming, already test the limits of the status quo, or are simply impossible. This is because such services require tight coordination along entire chains of ISPs demanding QoS provisioning. More advanced and mission-critical services, such as telemedical applications, are usually out of the question. --- paper_title: IPv6 Segment Routing Header (SRH) paper_content: Segment Routing can be applied to the IPv6 data plane using a new type ::: of Routing Extension Header called the Segment Routing Header. 
This document describes the Segment Routing Header and how it is used by Segment Routing capable nodes. --- paper_title: Performance of IPv6 Segment Routing in Linux Kernel paper_content: IPv6 Segment Routing (SRv6) is a promising solution to support advanced services such as Traffic Engineering, Service Function Chaining, Virtual Private Networks, and Load Balancing. The SRv6 data-plane is supported in many different forwarding engines including the Linux kernel. It has been introduced into the 4.10 release of the Linux kernel to support both endhost and router functionalities. The implementation has been updated several times, with every new kernel release, to include new features and also to improve the performance of existing ones. In this paper, we present SRPerf, a performance evaluation framework for software and hardware implementations of SRv6. SRPerf is able to perform different benchmarking tests such as throughput and latency. The architecture of SRPerf can be easily extended to support new benchmarking methodologies as well as different SRv6 implementations. We have used SRPerf to evaluate the performance of the SRv6 implementation in the Linux kernel. --- paper_title: Segment Routing IPv6 for Mobile User Plane paper_content: This document shows the applicability of SRv6 (Segment Routing IPv6) to the user plane of mobile networks. The network programming nature of SRv6 accomplishes mobile user-plane functions in a simple manner. The statelessness of SRv6 and its ability to control both the service layer path and the underlying transport can be beneficial to the mobile user plane, providing flexibility, end-to-end network slicing and SLA control for various applications. This document describes the SRv6 mobile user-plane behavior and defines the SID functions for it. --- paper_title: SERA: SEgment Routing Aware Firewall for Service Function Chaining scenarios paper_content: In this paper we consider the use of IPv6 Segment Routing (SRv6) for Service Function Chaining (SFC) in an NFV infrastructure. We first analyze the issues of deploying Virtual Network Functions (VNFs) based on SR-unaware applications, which require the introduction of SR proxies in the NFV infrastructure, leading to high complexity in the configuration and in the packet processing. Then we consider the advantages of SR-aware applications, focusing on a firewall application. We present the design and implementation of the SERA (SEgment Routing Aware) firewall, which extends the Linux iptables firewall. In its basic mode the SERA firewall works like the legacy iptables firewall (it can reuse an identical set of rules), but with the great advantage that it can operate on SR-encapsulated packets with no need for an SR proxy. Moreover, we define an advanced mode, in which the SERA firewall can inspect all the fields of an SR-encapsulated packet and can perform SR-specific actions. In the advanced mode the SERA firewall can fully exploit the features of the IPv6 Segment Routing network programming model. A performance evaluation of the SERA firewall is discussed; based on its results, a further optimized prototype has been implemented and evaluated. --- paper_title: Flexible failure detection and fast reroute using eBPF and SRv6 paper_content: Segment Routing is a modern variant of source routing that is being gradually deployed by network operators. Large ISPs use it for traffic engineering and fast reroute purposes.
Its IPv6 dataplane, named SRv6, goes beyond the initial MPLS dataplane, notably by enabling network programmability. With SRv6, it becomes possible to define transparent network functions on routers and endhosts. These functions are mapped to IPv6 addresses and their execution is scheduled by segments placed in the forwarded packets. We have recently extended the Linux SRv6 implementation to enable the execution of specific eBPF code upon reception of an SRv6 packet containing local segments. eBPF is a virtual machine that is included in the Linux kernel. We leverage this new feature of Linux 4.18 to propose and implement flexible eBPF-based fast-reroute and failure detection schemes. Our lab measurements confirm that they provide good performance and enable faster failure detections than existing BFD implementations on Linux routers and servers. --- paper_title: Leveraging eBPF for programmable network functions with IPv6 Segment Routing paper_content: With the advent of Software Defined Networks (SDN), Network Function Virtualisation (NFV) or Service Function Chaining (SFC), operators expect networks to support flexible services beyond the mere forwarding of packets. The network programmability framework which is being developed within the IETF by leveraging IPv6 Segment Routing enables the realisation of in-network functions. In this paper, we demonstrate that this vision of in-network programmability can be realised. By leveraging the eBPF support in the Linux kernel, we implement a flexible framework that allows network operators to encode their own network functions as eBPF code that is automatically executed while processing specific packets. Our lab measurements indicate that the overhead of calling such eBPF functions remains acceptable. Thanks to eBPF, operators can implement a variety of network functions. We describe the architecture of our implementation in the Linux kernel. This extension has been released with Linux 4.18. We illustrate the flexibility of our approach with three different use cases: delay measurements, hybrid networks and network discovery. Our lab measurements also indicate that the performance penalty of running eBPF network functions on Linux routers does not incur a significant overhead. --- paper_title: Implementing IPv6 Segment Routing in the Linux Kernel paper_content: IPv6 Segment Routing is a major IPv6 extension that provides a modern version of source routing that is currently being developed within the Internet Engineering Task Force (IETF). We propose the first open-source implementation of IPv6 Segment Routing in the Linux kernel. We first describe it in details and explain how it can be used on both endhosts and routers. We then evaluate and compare its performance with plain IPv6 packet forwarding in a lab environment. Our measurements indicate that the performance penalty of inserting IPv6 Segment Routing Headers or encapsulating packets is limited to less than 15%. On the other hand, the optional HMAC security feature of IPv6 Segment Routing is costly in a pure software implementation. Since our implementation has been included in the official Linux 4.10 kernel, we expect that it will be extended by other researchers for new use cases. --- paper_title: Implementation of virtual network function chaining through segment routing in a linux-based NFV infrastructure paper_content: This paper presents an architecture to support Vir- tual Network Functions (VNFs) chaining using the IPv6 Segment Routing (SR) network programming model. 
Two classes of VNFs are considered: SR-aware and SR-unaware. The operations to support both SR-aware and SR-unaware VNFs are described at an architectural level and we propose a solution for SR-unaware VNFs hosted in a NFV node. An Open Source implementation of the proposed solution for a Linux based NFV host is available and a set of performance measurements have been carried out in a testbed. --- paper_title: Software Resolved Networks: Rethinking Enterprise Networks with IPv6 Segment Routing paper_content: Enterprise networks often need to implement complex policies that match business objectives. They will embrace IPv6 like ISP networks in the coming years. Among the benefits of IPv6, the recently proposed IPv6 Segment Routing (SRv6) architecture supports richer policies in a clean manner. This matches very well the requirements of enterprise networks. In this paper, we propose Software Resolved Networks (SRNs), a new architecture for IPv6 enterprise networks. We apply the fundamental principles of Software Defined Networks, i.e., the ability to control the operation of the network through software, but in a different manner that also involves the endhosts. We leverage SRv6 to enforce and control network paths according to the network policies. Those paths are computed by a centralized controller that interacts with the endhosts through the DNS protocol. We implement a Software Resolved Network on Linux endhosts, routers and controllers. Through benchmarks and simulations, we analyze the performance of those SRNs, and demonstrate that they meet the expectations of enterprise networks. --- paper_title: Hybrid IP/SDN Networking: Open Implementation and Experiment Management Tools paper_content: The introduction of SDN in large-scale IP provider networks is still an open issue and different solutions have been suggested so far. In this paper, we propose a hybrid approach that allows the coexistence of traditional IP routing with SDN based forwarding within the same provider domain. The solution is called OSHI—Open Source Hybrid IP/SDN networking, as we have fully implemented it combining and extending open source software. We discuss the OSHI system architecture and the design and implementation of advanced services like pseudo wires and virtual switches. In addition, we describe a set of open source management tools for the emulation of the proposed solution using either the Mininet emulator or distributed physical testbeds. We refer to this suite of tools as Mantoo (management tools). Mantoo includes an extensible Web-based graphical topology designer, which provides different layered network “views” (e.g., from physical links to service relationships among nodes). The suite can validate an input topology, automatically deploy it over a Mininet emulator or a distributed SDN testbed and allows access to emulated nodes by opening consoles in the web GUI. Mantoo provides also tools to evaluate the performance of the deployed nodes. --- paper_title: SRv6Pipes: enabling in-network bytestream functions paper_content: IPv6 Segment Routing is a recent IPv6 extension that is generating a lot of interest among researchers and in industry. Thanks to IPv6 SR, network operators can better control the paths followed by packets inside their networks. This provides enhanced traffic engineering capabilities and is key to support Service Function Chaining (SFC). With SFC, an end-to-end service is the composition of a series of in-network services. 
Simple services such as NAT, accounting or stateless firewalls can be implemented on a per-packet basis. However, more interesting services like transparent proxies, transparent compression or encryption, transcoding, etc. require functions that operate on the bytestream.In this paper, we extend the IPv6 implementation of Segment Routing in the Linux kernel to enable network functions that operate on the bytestream and not on a per-packet basis. Our SRv6Pipes enable network architects to design end-to-end services as a series of in-network functions. We evaluate the performance of our implementation with different microbenchmarks. --- paper_title: Mantoo - A Set of Management Tools for Controlling SDN Experiments paper_content: OSHI -- Open Source Hybrid IP/SDN networking is a hybrid approach allowing the coexistence of traditional IP routing with SDN based forwarding within the same provider domain. In this demo, we will show a set of Open Source management tools for the emulation of the proposed solution over the Mininet emulator and over distributed test beds. We refer to this suite of tools as Mantoo (Management tools). Mantoo includes an extensible web-based graphical topology designer providing different layered network "views" (e.g. From physical links to service relationships among nodes). The framework is able to validate a topology, to automatically deploy it over a Mininet emulator or a distributed SDN test bed, to access nodes by opening consoles directly via the web GUI. --- paper_title: PMSR - Poor Man's Segment Routing, a minimalistic approach to Segment Routing and a Traffic Engineering use case paper_content: The current specification of the Segment Routing (SR) architecture requires enhancements to the intradomain routing protocols (e.g. OSPF and IS-IS) so that the nodes can advertise the Segment Identifiers (SIDs). We propose a simpler solution called PMSR (Poor Man's Segment Routing), that does not require any enhancement to routing protocol. We compare the procedures of PMSR with traditional SR, showing that PMSR can reduce the operation and management complexity. We analyze the set of use cases in the current SR drafts and we claim that PMSR can support the large majority of them. Thanks to the drastic simplification of the control plane, we have been able to develop an open source prototype of PMSR. In the second part of the paper, we consider a Traffic Engineering use case, starting from a traditional flow assignment optimization problem, which allocates hop-by-hop paths to flows. We propose a SR path assignment algorithm and prove that it is optimal with respect to the number of segments allocated to a flow. --- paper_title: Traffic Engineering with Segment Routing: SDN-Based Architectural Design and Open Source Implementation paper_content: Traffic Engineering (TE) in IP carrier networks is one of the functions that can benefit from the Software Defined Networking paradigm. However traditional per-flow routing requires a direct interaction between the SDN controller and each node that is involved in the traffic paths. Segment Routing (SR) may simplify the route enforcement delegating all the configuration and per-flow state at the border of the network. In this work we propose an architecture that integrates the SDN paradigm with SR based TE, for which we have provided an open source reference implementation. We have designed and implemented a simple TE/SR heuristic for flow allocation and we show and discuss experimental results. 
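The TE-oriented entries above (PMSR and the SDN-based TE/SR architecture) first allocate flows to paths and only then express those paths as segment lists. As a rough illustration of the kind of greedy flow-assignment heuristic such work experiments with, and not the specific heuristic of any cited paper, one may place each flow on the candidate path that minimizes the resulting maximum link utilization; all names and data structures below are illustrative assumptions.

def allocate_flows(flows, candidate_paths, link_capacity):
    """Greedy TE heuristic sketch.

    flows: dict flow_id -> demand (e.g., Mbps)
    candidate_paths: dict flow_id -> list of paths, each path a list of links
    link_capacity: dict link -> capacity
    """
    load = {link: 0.0 for link in link_capacity}
    allocation = {}
    # Place the largest demands first, a common greedy ordering.
    for flow_id, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        best_path, best_util = None, float("inf")
        for path in candidate_paths[flow_id]:
            util = max((load[l] + demand) / link_capacity[l] for l in path)
            if util < best_util:
                best_path, best_util = path, util
        allocation[flow_id] = best_path
        for l in best_path:
            load[l] += demand
    return allocation

The selected path for each flow would then be handed to a segment-list encoder (such as the sketch given earlier) before being installed at the ingress node.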
--- paper_title: An Efficient Linux Kernel Implementation of Service Function Chaining for Legacy VNFs Based on IPv6 Segment Routing paper_content: We consider the IPv6 Segment Routing (SRv6) technology for Service Function Chaining of Virtual Network Functions (VNFs). Most of the VNFs are legacy VNFs (not aware of the SRv6 technology) and expect to process traditional IP packets. An SR proxy is needed to support them. We have extended the implementation of SRv6 in the Linux kernel, realizing an open source SR-proxy, referred to as SRNK (SR-Proxy Native Kernel). The performance of the proposed solution (SRNKvl) has been evaluated, identifying a poor scalability with respect to the number of VNFs to be supported in a node. Therefore we provided a second design (SRNKv2), enhancing the Linux Policy Routing framework. The performance of SRNKv2 is independent from the number of supported VNFs in a node. We compared the performance of SRNKv2 with a reference scenario not performing the encapsulation and decapsulation operation and demonstrated that the overhead of SRNKv2 is very small, on the order of 3.5%. --- paper_title: Implementing IPv6 Segment Routing in the Linux Kernel paper_content: IPv6 Segment Routing is a major IPv6 extension that provides a modern version of source routing that is currently being developed within the Internet Engineering Task Force (IETF). We propose the first open-source implementation of IPv6 Segment Routing in the Linux kernel. We first describe it in details and explain how it can be used on both endhosts and routers. We then evaluate and compare its performance with plain IPv6 packet forwarding in a lab environment. Our measurements indicate that the performance penalty of inserting IPv6 Segment Routing Headers or encapsulating packets is limited to less than 15%. On the other hand, the optional HMAC security feature of IPv6 Segment Routing is costly in a pure software implementation. Since our implementation has been included in the official Linux 4.10 kernel, we expect that it will be extended by other researchers for new use cases. --- paper_title: Exploring various use cases for IPv6 Segment Routing paper_content: IPv6 Segment Routing (SRv6) is a modern version of source routing that is being standardised within the IETF to address a variety of use cases in ISP, datacenter and entreprise networks. Its inclusion in recent versions of the Linux kernel enables researchers to explore and extend this new protocol. We leverage and extend the SRv6 implementation in the Linux kernel to demonstrate two very different usages of this new protocol. We first show how entreprise networks can leverage SRv6 to better control the utilisation of their infrastructure and demonstrate how DNS resolvers can act as SDN controllers. We then demonstrate how SRv6Pipes can be used to efficiently implement network functions that need to process bytestreams on top of a packet-based SRv6 network. --- paper_title: IPv6 Segment Routing Header (SRH) paper_content: Segment Routing can be applied to the IPv6 data plane using a new type ::: of Routing Extension Header called the Segment Routing Header. This ::: document describes the Segment Routing Header and how it is used by ::: Segment Routing capable nodes. ---
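For readers unfamiliar with the Segment Routing Header mentioned in the last entry above, the sketch below packs an SRH byte by byte following the layout that the cited draft eventually standardized in RFC 8754: Routing Type 4, segments stored in reverse order (Segment List[0] is the last segment), and Hdr Ext Len counted in 8-octet units excluding the first 8 octets. It is an illustrative helper, not production code; real deployments rely on kernel or router implementations such as the Linux SRv6 support discussed in these references.

import struct
import ipaddress

def build_srh(segments, segments_left=None, tag=0, next_header=41):
    """Pack an IPv6 Segment Routing Header (Routing Type 4), per RFC 8754.

    segments: SIDs in the order they will be visited; next_header defaults
    to 41 (IPv6-in-IPv6), as used when encapsulating with an outer header.
    """
    sids = [ipaddress.IPv6Address(s).packed for s in reversed(segments)]
    n = len(sids)
    if segments_left is None:
        # The first segment is typically placed in the IPv6 destination
        # address of the outer header, so n - 1 segments remain to visit.
        segments_left = n - 1
    hdr_ext_len = 2 * n            # 8-octet units, not counting the first 8 octets
    last_entry = n - 1
    flags = 0
    header = struct.pack("!BBBBBBH", next_header, hdr_ext_len, 4,
                         segments_left, last_entry, flags, tag)
    return header + b"".join(sids)

srh = build_srh(["fc00::2", "fc00::3", "fc00::4"])
print(len(srh))   # 8 + 3 * 16 = 56 bytes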
Title: Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results
Section 1: THE SEGMENT ROUTING (SR) ARCHITECTURE
Description 1: This section includes a short introduction to the main SR architectural aspects.
Section 2: MPLS dataplane (SR-MPLS)
Description 2: This section discusses the specifics of the MPLS dataplane and how Segment Routing is implemented over MPLS.
Section 3: IPv6 dataplane (SRv6)
Description 3: This section explores the implementation of Segment Routing over IPv6, including the Segment Routing Header (SRH) and SRv6 functions.
Section 4: Control plane for SR and relation with SDN
Description 4: This section describes the control plane operations for Segment Routing, considering distributed, centralized, and hybrid approaches.
Section 5: Segment Routing motivations and use cases
Description 5: This section identifies the motivations behind Segment Routing and various use cases it addresses.
Section 6: RESEARCH ACTIVITIES
Description 6: This section provides a comprehensive review of the research activities on Segment Routing, classified into different categories.
Section 7: STANDARDIZATION ACTIVITIES
Description 7: This section describes the standardization efforts related to Segment Routing, classified into various categories.
Section 8: SR IMPLEMENTATIONS AND TOOLS
Description 8: This section outlines the implementation results related to Segment Routing, including software and hardware implementations and interoperability efforts.
A Survey of Distributed Consensus Protocols for Blockchain Networks
13
--- paper_title: On Scaling Decentralized Blockchains paper_content: The increasing popularity of blockchain-based cryptocurrencies has made scalability a primary and urgent concern. We analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies. Our results suggest that reparameterization of block size and intervals should be viewed only as a first increment toward achieving next-generation, high-load blockchain protocols, and major advances will additionally require a basic rethinking of technical approaches. We offer a structured perspective on the design space for such approaches. Within this perspective, we enumerate and briefly discuss a number of recently proposed protocol ideas and offer several new ideas and open challenges. --- paper_title: PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake paper_content: A peer-to-peer crypto-currency design derived from Satoshi Nakamoto’s Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design proof-of-work mainly provides initial minting and is largely non-essential in the long run. Security level of the network is not dependent on energy consumption in the long term thus providing an energyefficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and generated by each node via a hashing scheme bearing similarity to Bitcoin’s but over limited search space. Block chain history and transaction settlement are further protected by a centrally broadcasted checkpoint mechanism. --- paper_title: Snow White: Robustly Reconfigurable Consensus and Applications to Provably Secure Proof of Stake. paper_content: We present the a provably secure proof-of-stake protocol called Snow White. The primary application of Snow White is to be used as a “green” consensus alternative for a decentralized cryptocurrency system with open enrollement. We break down the task of designing Snow White into the following core challenges: ::: ::: 1. ::: ::: identify a core “permissioned” consensus protocol suitable for proof-of-stake; specifically the core consensus protocol should offer robustness in an Internet-scale, heterogeneous deployment; ::: ::: ::: ::: ::: 2. ::: ::: propose a robust committee re-election mechanism such that as stake switches hands in the cryptocurrency system, the consensus committee can evolve in a timely manner and always reflect the most recent stake distribution; and ::: ::: ::: ::: ::: 3. ::: ::: relying on the formal security of the underlying consensus protocol, prove the full end-to-end protocol to be secure—more specifically, we show that any consensus protocol satisfying the desired robustness properties can be used to construct proofs-of-stake consensus, as long as money does not switch hands too quickly. --- paper_title: On Scaling Decentralized Blockchains paper_content: The increasing popularity of blockchain-based cryptocurrencies has made scalability a primary and urgent concern. We analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies. 
Our results suggest that reparameterization of block size and intervals should be viewed only as a first increment toward achieving next-generation, high-load blockchain protocols, and major advances will additionally require a basic rethinking of technical approaches. We offer a structured perspective on the design space for such approaches. Within this perspective, we enumerate and briefly discuss a number of recently proposed protocol ideas and offer several new ideas and open challenges. --- paper_title: BEAT: Asynchronous BFT Made Practical paper_content: We present BEAT, a set of practical Byzantine fault-tolerant (BFT) protocols for completely asynchronous environments. BEAT is flexible, versatile, and extensible, consisting of five asynchronous BFT protocols that are designed to meet different goals (e.g., different performance metrics, different application scenarios). Due to modularity in its design, features of these protocols can be mixed to achieve even more meaningful trade-offs between functionality and performance for various applications. Through a 92-instance, five-continent deployment of BEAT on Amazon EC2, we show that BEAT is efficient: roughly, all our BEAT instances significantly outperform, in terms of both latency and throughput, HoneyBadgerBFT, the most efficient asynchronous BFT known. --- paper_title: Consensus in the Age of Blockchains paper_content: The blockchain initially gained traction in 2008 as the technology underlying bitcoin, but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. ::: The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: first protocols based on proof-of-work (PoW), second proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and third hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours. --- paper_title: Casper the Friendly Finality Gadget paper_content: We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions. 
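The proof-of-stake entries in this reference list (PPCoin, Snow White, Casper) all need a way to pick block proposers with probability proportional to stake, using a seed that every node can derive. The Python sketch below shows one common way to do this; the hashing scheme, seed format and validator names are illustrative assumptions and do not reproduce the mechanism of any specific protocol.

import hashlib

def select_proposer(stakes, seed, round_number):
    """Stake-weighted proposer selection sketch.

    stakes: dict validator_id -> stake amount
    seed, round_number: public values mixed into the hash so every node
    derives the same pseudorandom draw for the round.
    """
    total = sum(stakes.values())
    digest = hashlib.sha256(f"{seed}:{round_number}".encode()).digest()
    draw = int.from_bytes(digest, "big") % total
    # Walk the sorted stake distribution until the draw falls inside a
    # validator's interval; selection probability is proportional to stake.
    cumulative = 0
    for validator, stake in sorted(stakes.items()):
        cumulative += stake
        if draw < cumulative:
            return validator

stakes = {"alice": 50, "bob": 30, "carol": 20}
print(select_proposer(stakes, seed="block-hash-123", round_number=7))

A finality rule in the spirit of Casper FFG would, in addition, only finalize a checkpoint once validators holding at least two thirds of the total stake have voted for it.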
--- paper_title: Bitcoin-NG: A Scalable Blockchain Protocol paper_content: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. ::: This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. ::: In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network. --- paper_title: Hyperledger fabric: a distributed operating system for permissioned blockchains paper_content: Hyperledger Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains. Fabric is currently used in more than 400 prototypes and proofs-of-concept of distributed ledger technology, as well as several production systems, across different industries and use cases. Starting from the premise that there are no"one-size-fits-all"solutions, Fabric is the first truly extensible blockchain system for running distributed applications. It supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing blockchain platforms for running smart contracts that require code to be written in domain-specific languages or rely on a cryptocurrency. Furthermore, it uses a portable notion of membership for realizing the permissioned model, which may be integrated with industry-standard identity management. To support such flexibility, Fabric takes a novel approach to the design of a permissioned blockchain and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks. This paper describes Fabric, its architecture, the rationale behind various design decisions, its security model and guarantees, its most prominent implementation aspects, as well as its distributed application programming model. We further evaluate Fabric by implementing and benchmarking a Bitcoin-inspired digital currency. We show that Fabric achieves end-to-end throughput of more than 3500 transactions per second in certain popular deployment configurations, with sub-second latency. --- paper_title: A Survey on Consensus Mechanisms and Mining Management in Blockchain Networks. paper_content: The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and the industry. 
The blockchain network originated in the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and data-driven self-organization in flat, open-access networks. In particular, the appealing characteristics of decentralization, immutability and self-organization are primarily due to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-of-the-art consensus protocols focuses on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review of the strategies adopted for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey of the emerging applications of blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions. --- paper_title: Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol paper_content: We present “Ouroboros”, the first blockchain protocol based on proof of stake with rigorous security guarantees. We establish security properties for the protocol comparable to those achieved by the bitcoin blockchain protocol. As the protocol provides a “proof of stake” blockchain discipline, it offers qualitative efficiency advantages over blockchains based on proof of physical resources (e.g., proof of work). We also present a novel reward mechanism for incentivizing Proof of Stake protocols and we prove that, given this mechanism, honest behavior is an approximate Nash equilibrium, thus neutralizing attacks such as selfish mining. --- paper_title: The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication paper_content: The Bitcoin cryptocurrency demonstrated the utility of global consensus across thousands of nodes, changing the world of digital transactions forever. In the early days of Bitcoin, the performance of its probabilistic proof-of-work (PoW) based consensus fabric, also known as blockchain, was not a major issue. Bitcoin became a success story, despite its consensus latencies on the order of an hour and the theoretical peak throughput of only up to 7 transactions per second. --- paper_title: The Honey Badger of BFT Protocols paper_content: The surprising success of cryptocurrencies has led to a surge of interest in deploying large-scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario.
We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network. --- paper_title: Algorand: Scaling Byzantine Agreements for Cryptocurrencies paper_content: Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence. Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed. We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users. --- paper_title: Distributed Computing: Fundamentals, Simulations and Advanced Topics paper_content: 1. Introduction.PART I: FUNDAMENTALS.2. Basic Algorithms in Message-Passing Systems.3. Leader Election in Rings.4. Mutual Exclusion in Shared Memory.5. Fault-Tolerant Consensus.6. Causality and Time.PART II: SIMULATIONS.7. A Formal Model for Simulations.8. Broadcast and Multicast.9. Distributed Shared Memory.10. Fault-Tolerant Simulations of Read/Write Objects.11. Simulating Synchrony.12. Improving the Fault Tolerance of Algorithms.13. Fault-Tolerant Clock Synchronization.PART III: ADVANCED TOPICS.14. Randomization.15. Wait-Free Simulations of Arbitrary Objects.16. Problems Solvable in Asynchronous Systems.17. Solving Consensus in Eventually Stable Systems.References.Index. --- paper_title: Practical byzantine fault tolerance paper_content: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior.
Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS. --- paper_title: The Byzantine Generals Problem paper_content: Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed. --- paper_title: Impossibility of distributed consensus with one faulty process paper_content: The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. We show that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the "Byzantine Generals" problem. --- paper_title: Consensus in the presence of partial synchrony paper_content: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds Δ and P exist. In one version of partial synchrony, fixed bounds Δ and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T , and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time. --- paper_title: Distributed Computing: Fundamentals, Simulations and Advanced Topics paper_content: 1. 
Introduction.PART I: FUNDAMENTALS.2. Basic Algorithms in Message-Passing Systems.3. Leader Election in Rings.4. Mutual Exclusion in Shared Memory.5. Fault-Tolerant Consensus.6. Causality and Time.PART II: SIMULATIONS.7. A Formal Model for Simulations.8. Broadcast and Multicast.9. Distributed Shared Memory.10. Fault-Tolerant Simulations of Read/Write Objects.11. Simulating Synchrony.12. Improving the Fault Tolerance of Algorithms.13. Fault-Tolerant Clock Synchronization.PART III: ADVANCED TOPICS.14. Randomization.15. Wait-Free Simulations of Arbitrary Objects.16. Problems Solvable in Asynchronous Systems.17. Solving Consensus in Eventually Stable Systems.References.Index. --- paper_title: Reaching Agreement in the Presence of Faults paper_content: The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods. --- paper_title: Consensus in the presence of partial synchrony paper_content: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds Δ and P exist. In one version of partial synchrony, fixed bounds Δ and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T , and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time. 
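The resilience bounds quoted in the entries above, namely Byzantine agreement only when n ≥ 3m + 1 (equivalently, more than two-thirds of the processors correct), together with the quorum and majority thresholds that recur throughout this list, can be made concrete with a short illustrative sketch. The helper name and the printed examples below are assumptions added here for illustration only; they are not drawn from any of the cited papers.

```python
import math


def resilience_bounds(n: int) -> dict:
    """Illustrative thresholds for a group of n replicas.

    - byzantine_f: largest f satisfying n >= 3f + 1 (the classical bound)
    - byzantine_quorum: 2f + 1 replicas; when n = 3f + 1, any two such
      quorums intersect in at least f + 1 replicas, hence in at least
      one correct replica
    - crash_majority: ceil((n + 1) / 2) correct processes, the fail-stop
      threshold quoted in this list
    """
    f = (n - 1) // 3
    return {
        "byzantine_f": f,
        "byzantine_quorum": 2 * f + 1,
        "crash_majority": math.ceil((n + 1) / 2),
    }


# Example: 4 replicas tolerate 1 Byzantine fault with quorums of 3;
# 10 replicas tolerate 3 faults with quorums of 7.
for n in (4, 7, 10):
    print(n, resilience_bounds(n))
```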
--- paper_title: Unreliable failure detectors for reliable distributed systems paper_content: We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992]. --- paper_title: Atomic Broadcast: From Simple Message Diffusion to Byzantine Agreement paper_content: In distributed systems subject to random communication delays and component failures, atomic broadcast can be used to implement the abstraction of synchronous replicated storage, a distributed storage that displays the same contents at every correct processor as of any clock time. This paper presents a systematic derivation of a family of atomic broadcast protocols that are tolerant of increasingly general failure classes: omission failures, timing failures, and authentication-detectable Byzantine failures. The protocols work for arbitrary point-to-point network topologies, and can tolerate any number of link and process failures up to network partitioning. After proving their correctness, we also prove two lower bounds that show that the protocols provide in many cases the best possible termination times. --- paper_title: Total order broadcast and multicast algorithms: Taxonomy and survey paper_content: Total order broadcast and multicast (also called atomic broadcast/multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order. --- paper_title: Implementing fault-tolerant services using the state machine approach: a tutorial paper_content: The state machine approach is a general method for implementing fault-tolerant services in distributed systems. This paper reviews the approach and describes protocols for two different failure models—Byzantine and fail stop. Systems reconfiguration techniques for removing faulty components and integrating repaired components are also discussed. --- paper_title: Asynchronous consensus and broadcast protocols paper_content: A consensus protocol enables a system of n asynchronous processes, some of which are faulty, to reach agreement. There are two kinds of faulty processes: fail-stop processes that can only die and malicious processes that can also send false messages. The class of asynchronous systems with fair schedulers is defined, and consensus protocols that terminate with probability 1 for these systems are investigated. With fail-stop processes, it is shown that ⌈( n + 1)/2⌉ correct processes are necessary and sufficient to reach agreement. In the malicious case, it is shown that ⌈(2 n + 1)/3⌉ correct processes are necessary and sufficient to reach agreement. 
This is contrasted with an earlier result, stating that there is no consensus protocol for the fail-stop case that always terminates within a bounded number of steps, even if only one process can fail. The possibility of reliable broadcast (Byzantine Agreement) in asynchronous systems is also investigated. Asynchronous Byzantine Agreement is defined, and it is shown that ⌈(2 n + 1)/3⌉ correct processes are necessary and sufficient to achieve it. --- paper_title: Using Time Instead of Timeout for Fault-Tolerant Distributed Systems. paper_content: Describes a general method for implementing a distributed system with any desired degree of fault tolerance. Reliable clock synchronization and a solution to the "Byzantine Generals" problem are assumed. --- paper_title: Time, clocks, and the ordering of events in a distributed system paper_content: The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become. --- paper_title: Practical byzantine fault tolerance paper_content: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS. --- paper_title: Fault-scalable Byzantine fault-tolerant services paper_content: A fault-scalable service can be configured to tolerate increasing numbers of faults without significant decreases in performance. The Query/Update (Q/U) protocol is a new tool that enables construction of fault-scalable Byzantine fault-tolerant services. The optimistic quorum-based nature of the Q/U protocol allows it to provide better throughput and fault-scalability than replicated state machines using agreement-based protocols. A prototype service built using the Q/U protocol outperforms the same service built using a popular replicated state machine implementation at all system sizes in experiments that permit an optimistic execution. Moreover, the performance of the Q/U protocol decreases by only 36% as the number of Byzantine faults tolerated increases from one to five, whereas the performance of the replicated state machine decreases by 83%. --- paper_title: Spin One's Wheels? Byzantine Fault Tolerance with a Spinning Primary paper_content: Most Byzantine fault-tolerant state machine replication (BFT) algorithms have a primary replica that is in charge of ordering the clients' requests.
Recently it was shown that this dependence allows a faulty primary to degrade the performance of the system to a small fraction of what the environment allows. In this paper we present Spinning, a novel BFT algorithm that mitigates such performance attacks by changing the primary after every batch of pending requests is accepted for execution. This novel mode of operation deals with those attacks at a much lower cost than previous solutions, maintaining a throughput equal to or better than that of the algorithm that is usually considered to be the baseline in the area, Castro and Liskov’s PBFT. --- paper_title: In search of an understandable consensus algorithm paper_content: Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems. In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos. Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety. --- paper_title: Fast Byzantine Consensus paper_content: We present the first consensus protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both the number of communication steps and the number of processes for 2-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. --- paper_title: Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults paper_content: This paper argues for a new approach to building Byzantine fault tolerant replication systems. We observe that although recently developed BFT state machine replication protocols are quite fast, they don't tolerate Byzantine faults very well: a single faulty client or server is capable of rendering PBFT, Q/U, HQ, and Zyzzyva virtually unusable. In this paper, we (1) demonstrate that existing protocols are dangerously fragile, (2) define a set of principles for constructing BFT services that remain useful even when Byzantine faults occur, and (3) apply these principles to construct a new protocol, Aardvark. Aardvark can achieve peak performance within 40% of that of the best existing protocol in our tests and provide a significant fraction of that performance when up to f servers and any number of clients are faulty. We observe useful throughputs between 11706 and 38667 requests per second for a broad range of injected faults. --- paper_title: The next 700 BFT protocols paper_content: Modern Byzantine fault-tolerant state machine replication (BFT) protocols involve about 20,000 lines of challenging C++ code encompassing synchronization, networking and cryptography. They are notoriously difficult to develop, test and prove. We present a new abstraction to simplify these tasks. We treat a BFT protocol as a composition of instances of our abstraction. Each instance is developed and analyzed independently.
To illustrate our approach, we first show how our abstraction can be used to obtain the benefits of a state-of-the-art BFT protocol with much less pain. Namely, we develop AZyzzyva, a new protocol that mimics the behavior of Zyzzyva in best-case situations (for which Zyzzyva was optimized) using less than 24% of the actual code of Zyzzyva. To cover worst-case situations, our abstraction enables to use in AZyzzyva any existing BFT protocol, typically, a classical one like PBFT which has been tested and proved correct. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols both in terms of latency (by up to 30%) and throughput (by up to 360%). The development of Aliph required two new instances of our abstraction. Each instance contains less than 25% of the code needed to develop state-of-the-art BFT protocols. --- paper_title: Zyzzyva: Speculative Byzantine fault tolerance paper_content: A longstanding vision in distributed systems is to build reliable systems from unreliable components. An enticing formulation of this vision is Byzantine Fault-Tolerant (BFT) state machine replication, in which a group of servers collectively act as a correct server even if some of the servers misbehave or malfunction in arbitrary (“Byzantine”) ways. Despite this promise, practitioners hesitate to deploy BFT systems, at least partly because of the perception that BFT must impose high overheads. In this article, we present Zyzzyva, a protocol that uses speculation to reduce the cost of BFT replication. In Zyzzyva, replicas reply to a client's request without first running an expensive three-phase commit protocol to agree on the order to process requests. Instead, they optimistically adopt the order proposed by a primary server, process the request, and reply immediately to the client. If the primary is faulty, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima and to achieve throughputs of tens of thousands of requests per second, making BFT replication practical for a broad range of demanding services. --- paper_title: HQ replication: a hybrid quorum protocol for byzantine fault tolerance paper_content: There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q/U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is un-necessary when there is no contention, and Q/U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q/U analytically and experimentally. 
Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well. --- paper_title: Viewstamped Replication Revisited paper_content: This paper presents an updated version of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group. The paper also describes a number of important optimizations and presents a protocol for handling reconfigurations that can change both the group membership and the number of failures the group is able to handle. --- paper_title: Consensus in the presence of partial synchrony paper_content: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds Δ and P exist. In one version of partial synchrony, fixed bounds Δ and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T , and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time. --- paper_title: Asynchronous verifiable information dispersal paper_content: Information dispersal addresses the question of storing a file by distributing it among a set of servers in a storage-efficient way. We introduce the problem of verifiable information dispersal in an asynchronous network, where up to one third of the servers as well as an arbitrary number of clients might exhibit Byzantine faults. Verifiability ensures that the stored information is consistent despite such faults. We present a storage and communication-efficient scheme for asynchronous verifiable information dispersal that achieves an asymptotically optimal storage blow-up. Additionally, we show how to guarantee the secrecy of the stored data with respect to an adversary that may mount adaptive attacks. Our technique also yields a new protocol for asynchronous reliable broadcast that improves the communication complexity by an order of magnitude on large inputs. --- paper_title: The Byzantine Generals Problem paper_content: Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. 
This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed. --- paper_title: Simple and efficient threshold cryptosystem from the Gap Diffie-Hellman group paper_content: In this paper, we construct a new threshold cryptosystem from the Gap Diffie-Hellman (GDH) group. The proposed scheme enjoys all the most important properties that a robust and practical threshold cryptosystem should possess, that is, it is noninteractive, computationally efficient and provably secure against adaptive chosen ciphertext attacks. In addition, thanks to the elegant structure of the GDH group, the proposed threshold cryptosystem has shorter decryption shares as well as ciphertexts when compared with other schemes proposed in the literature. --- paper_title: BEAT: Asynchronous BFT Made Practical paper_content: We present BEAT, a set of practical Byzantine fault-tolerant (BFT) protocols for completely asynchronous environments. BEAT is flexible, versatile, and extensible, consisting of five asynchronous BFT protocols that are designed to meet different goals (e.g., different performance metrics, different application scenarios). Due to modularity in its design, features of these protocols can be mixed to achieve even more meaningful trade-offs between functionality and performance for various applications. Through a 92-instance, five-continent deployment of BEAT on Amazon EC2, we show that BEAT is efficient: roughly, all our BEAT instances significantly outperform, in terms of both latency and throughput, HoneyBadgerBFT, the most efficient asynchronous BFT known. --- paper_title: Another advantage of free choice (Extended Abstract): Completely asynchronous agreement protocols paper_content: Recently, Fischer, Lynch and Paterson [3] proved that no completely asynchronous consensus protocol can tolerate even a single unannounced process death. We exhibit here a probabilistic solution for this problem, which guarantees that as long as a majority of the processes continues to operate, a decision will be made (Theorem 1). Our solution is completely asynchronous and is rather strong: As in [4], it is guaranteed to work with probability 1 even against an adversary scheduler who knows all about the system. --- paper_title: An asynchronous [(n - 1)/3]-resilient consensus protocol paper_content: A consensus protocol enables a system of n asynchronous processes, some of them malicious, to reach agreement. No assumptions are made on the behaviour of the processes and the message system; both are capable of colluding to prevent the correct processes from reaching decision. A protocol is t-resilient if in the presence of up to t malicious processes it reaches agreement with probability 1. In a recent paper, t-resilient consensus protocols were presented for t < n/5.
We improve this to t < n/3, thus matching the lower bound on the number of correct processes necessary for consensus. The protocol restricts the behaviour of the malicious processes to that of merely fail-stop processes, which makes it interesting in other contexts. --- paper_title: Impossibility of distributed consensus with one faulty process paper_content: The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. We show that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the "Byzantine Generals" problem. --- paper_title: Signature-free asynchronous byzantine consensus with t < n/3 and O(n^2) messages paper_content: This paper presents a new round-based asynchronous consensus algorithm that copes with up to t < n/3 Byzantine processes. --- paper_title: Asynchronous byzantine agreement protocols paper_content: A consensus protocol enables a system of n asynchronous processes, some of them faulty, to reach agreement. Both the processes and the message system are capable of cooperating to prevent the correct processes from reaching decision. A protocol is t-resilient if in the presence of up to t faulty processes it reaches agreement with probability 1. Byzantine processes are faulty processes that can deviate arbitrarily from the protocol; Fail-Stop processes can just stop participating in it. In a recent paper, t-resilient randomized consensus protocols were presented for t < n/5. We improve this to t < n/3, thus matching the known lower bound on the number of correct processes necessary for consensus. The protocol uses a general technique in which the behavior of the Byzantine processes is restricted by the use of a broadcast protocol that filters some of the messages. The apparent behavior of the Byzantine processes, filtered by the broadcast protocol, is similar to that of Fail-Stop processes. Plugging the broadcast protocol as a communicating primitive into an agreement protocol for Fail-Stop processes gives the result. This technique, of using broadcast protocols to reduce the power of the faulty processes and then using them as communication primitives in algorithms designed for weaker failure models, was used successfully in other contexts. --- paper_title: The Honey Badger of BFT Protocols paper_content: The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network.
We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network. --- paper_title: Bitcoin meets strong consistency paper_content: The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state with high probability. We also show how Discoin, a Bitcoin variant that decouples block creation and transaction confirmation, can be built on top of PeerCensus, enabling real-time payments. Unlike Bitcoin, once transactions in Discoin are committed, they stay committed. --- paper_title: The Honey Badger of BFT Protocols paper_content: The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network. --- paper_title: A Survey on Consensus Mechanisms and Mining Management in Blockchain Networks. paper_content: The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and the industry. The blockchain network originated in the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and data-driven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks.
By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-of-the-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions. --- paper_title: Algorand: Scaling Byzantine Agreements for Cryptocurrencies paper_content: Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence. Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed. We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users. --- paper_title: Information propagation in the Bitcoin network paper_content: Bitcoin is a digital currency that unlike traditional currencies does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic for inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior. --- paper_title: Optimal Selfish Mining Strategies in Bitcoin paper_content: The Bitcoin protocol requires nodes to quickly distribute newly created blocks. Strong nodes can, however, gain higher payoffs by withholding blocks they create and selectively postponing their publication. 
The existence of such selfish mining attacks was first reported by Eyal and Sirer, who have demonstrated a specific deviation from the standard protocol (a strategy that we name SM1). --- paper_title: On the Security and Performance of Proof of Work Blockchains paper_content: Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions. --- paper_title: On Scaling Decentralized Blockchains paper_content: The increasing popularity of blockchain-based cryptocurrencies has made scalability a primary and urgent concern. We analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies. Our results suggest that reparameterization of block size and intervals should be viewed only as a first increment toward achieving next-generation, high-load blockchain protocols, and major advances will additionally require a basic rethinking of technical approaches. We offer a structured perspective on the design space for such approaches. Within this perspective, we enumerate and briefly discuss a number of recently proposed protocol ideas and offer several new ideas and open challenges. --- paper_title: Majority Is Not Enough: Bitcoin Mining Is Vulnerable paper_content: The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. --- paper_title: Information propagation in the Bitcoin network paper_content: Bitcoin is a digital currency that unlike traditional currencies does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. 
We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic for inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior. --- paper_title: The Bitcoin Backbone Protocol: Analysis and Applications paper_content: Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove two of its fundamental properties which we call common prefix and chain quality in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the “hashing power” of the adversary relative to network synchronicity; we show our results to be tight under high synchronization. --- paper_title: Eclipse attacks on Bitcoin's peer-to-peer network paper_content: We present eclipse attacks on bitcoin's peer-to-peer network. Our attack allows an adversary controlling a sufficient number of IP addresses to monopolize all connections to and from a victim bitcoin node. The attacker can then exploit the victim for attacks on bitcoin's mining and consensus system, including N-confirmation double spending, selfish mining, and adversarial forks in the blockchain. We take a detailed look at bitcoin's peer-to-peer network, and quantify the resources involved in our attack via probabilistic analysis, Monte Carlo simulations, measurements and experiments with live bitcoin nodes. Finally, we present countermeasures, inspired by botnet architectures, that are designed to raise the bar for eclipse attacks while preserving the openness and decentralization of bitcoin's current network architecture. --- paper_title: Bitcoin-NG: A Scalable Blockchain Protocol paper_content: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network. --- paper_title: Secure High-Rate Transaction Processing in Bitcoin paper_content: Bitcoin is a disruptive new crypto-currency based on a decentralized open-source protocol which has been gradually gaining momentum.
Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the implications of having a higher transaction throughput on Bitcoin’s security against double-spend attacks. We show that at high throughput, substantially weaker attackers are able to reverse payments they have made, even well after they were considered accepted by recipients. We address this security concern through the GHOST rule, a modification to the way Bitcoin nodes construct and re-organize the block chain, Bitcoin’s core distributed data-structure. GHOST has been adopted and a variant of it has been implemented as part of the Ethereum project, a second generation distributed applications platform. --- paper_title: Decentralization in Bitcoin and Ethereum Networks paper_content: Blockchain-based cryptocurrencies have demonstrated how to securely implement traditionally centralized systems, such as currencies, in a decentralized fashion. However, there have been few measurement studies on the level of decentralization they achieve in practice. We present a measurement study on various decentralization metrics of two of the leading cryptocurrencies with the largest market capitalization and user base, Bitcoin and Ethereum. We investigate the extent of decentralization by measuring the network resources of nodes and the interconnection among them, the protocol requirements affecting the operation of nodes, and the robustness of the two systems against attacks. In particular, we adapted existing internet measurement techniques and used the Falcon Relay Network as a novel measurement tool to obtain our data. We discovered that neither Bitcoin nor Ethereum has strictly better properties than the other. We also provide concrete suggestions for improving both systems. --- paper_title: Do the rich get richer? An empirical analysis of the BitCoin transaction network paper_content: The possibility to analyze everyday monetary transactions is limited by the scarcity of available data, as this kind of information is usually considered highly sensitive. Present econophysics models are usually employed on presumed random networks of interacting agents, and only some macroscopic properties (e.g. the resulting wealth distribution) are compared to real-world data. In this paper, we analyze Bitcoin, which is a novel digital currency system, where the complete list of transactions is publicly available. Using this dataset, we reconstruct the network of transactions and extract the time and amount of each payment. We analyze the structure of the transaction network by measuring network characteristics over time, such as the degree distribution, degree correlations and clustering. We find that linear preferential attachment drives the growth of the network. We also study the dynamics taking place on the transaction network, i.e. the flow of money. We measure temporal patterns and the wealth accumulation. Investigating the microscopic statistics of money movement, we find that sublinear preferential attachment governs the evolution of the wealth distribution. We report a scaling law between the degree and wealth associated to individual nodes. --- paper_title: Hybrid Consensus: Efficient Consensus in the Permissionless Model paper_content: Consensus, or state machine replication is a foundational building block of distributed systems and modern cryptography. 
Consensus in the classical, "permissioned" setting has been extensively studied in the 30 years of distributed systems literature. Recent developments in Bitcoin and other decentralized cryptocurrencies popularized a new form of consensus in a "permissionless" setting, where anyone can join and leave dynamically, and there is no a-priori knowledge of the number of consensus nodes. So far, however, all known permissionless consensus protocols assume network synchrony, i.e., the protocol must know an upper bound of the network's delay, and transactions confirm slower than this a-priori upper bound. We initiate the study of the feasibilities and infeasibilities of achieving responsiveness in permissionless consensus. In a responsive protocol, the transaction confirmation time depends only on the actual network delay, but not on any a-priori known upper bound such as a synchronous round. Classical protocols in the partial synchronous and asynchronous models naturally achieve responsiveness, since the protocol does not even know any delay upper bound. Unfortunately, we show that in the permissionless setting, consensus is impossible in the asynchronous or partially synchronous models. On the positive side, we construct a protocol called Hybrid Consensus by combining classical-style and blockchain-style consensus. Hybrid Consensus shows that responsiveness is nonetheless possible to achieve in permissionless consensus (assuming proof-of-work) when 1) the protocol knows an upper bound on the network delay; 2) we allow a non-responsive warmup period after which transaction confirmation can become responsive; 3) honesty has some stickiness, i.e., it takes a short while for an adversary to corrupt a node or put it to sleep; and 4) less than 1/3 of the nodes are corrupt. We show that all these conditions are in fact necessary - if only one of them is violated, responsiveness would have been impossible. Our work makes a step forward in our understanding of the permissionless model and its differences and relations to classical consensus. --- paper_title: Bitcoin meets strong consistency paper_content: The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state with high probability. We also show how Discoin, a Bitcoin variant that decouples block creation and transaction confirmation, can be built on top of PeerCensus, enabling real-time payments. Unlike Bitcoin, once transactions in Discoin are committed, they stay committed. --- paper_title: Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning paper_content: The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it.
A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds. --- paper_title: Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing paper_content: While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin's open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than PayPal currently handles, with a confirmation latency of 15-20 seconds. --- paper_title: PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake paper_content: A peer-to-peer crypto-currency design derived from Satoshi Nakamoto’s Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design proof-of-work mainly provides initial minting and is largely non-essential in the long run. Security level of the network is not dependent on energy consumption in the long term thus providing an energy-efficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and generated by each node via a hashing scheme bearing similarity to Bitcoin’s but over limited search space. Block chain history and transaction settlement are further protected by a centrally broadcasted checkpoint mechanism.
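The proof-of-stake entries around this point (PPCoin above, and the protocols cited next) replace hash-rate competition with stake-based selection of the block proposer. The sketch below shows only the stake-proportional lottery idea under simplifying assumptions: the function name is hypothetical, and a plain hash of a shared seed stands in for PPCoin's coin-age kernel or the verifiable random functions used by later designs; it is not any cited protocol's actual rule.

```python
import hashlib


def pick_proposer(stakes: dict, seed: bytes) -> str:
    """Toy stake-weighted lottery: a node is chosen with probability
    proportional to its stake. Real protocols derive the randomness
    differently (coin-age kernels, VRFs, follow-the-satoshi), so this
    illustrates the selection principle only."""
    total = sum(stakes.values())
    # Deterministic pseudo-random draw in [0, total) from the shared seed.
    draw = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
    cumulative = 0
    for node, stake in sorted(stakes.items()):
        cumulative += stake
        if draw < cumulative:
            return node
    raise AssertionError("unreachable when all stakes are positive")


# Example: with half of the total stake, node "a" is expected to win
# roughly half of the slots over many different seeds.
print(pick_proposer({"a": 50, "b": 30, "c": 20}, b"epoch-1|slot-7"))
```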
--- paper_title: Verifiable secret sharing and multiparty protocols with honest majority paper_content: Under the assumption that each participant can broadcast a message to all other participants and that each pair of participants can communicate secretly, we present a verifiable secret sharing protocol, and show that any multiparty protocol, or game with incomplete information, can be achieved if a majority of the players are honest. The secrecy achieved is unconditional and does not rely on any assumption about computational intractability. Applications of these results to Byzantine Agreement are also presented. Underlying our results is a new tool of Information Checking which provides authentication without cryptographic assumptions and may have wide applications elsewhere. --- paper_title: Snow White: Robustly Reconfigurable Consensus and Applications to Provably Secure Proof of Stake. paper_content: We present a provably secure proof-of-stake protocol called Snow White. The primary application of Snow White is to be used as a “green” consensus alternative for a decentralized cryptocurrency system with open enrollment. We break down the task of designing Snow White into the following core challenges: (1) identify a core “permissioned” consensus protocol suitable for proof-of-stake; specifically the core consensus protocol should offer robustness in an Internet-scale, heterogeneous deployment; (2) propose a robust committee re-election mechanism such that as stake switches hands in the cryptocurrency system, the consensus committee can evolve in a timely manner and always reflect the most recent stake distribution; and (3) relying on the formal security of the underlying consensus protocol, prove the full end-to-end protocol to be secure—more specifically, we show that any consensus protocol satisfying the desired robustness properties can be used to construct proofs-of-stake consensus, as long as money does not switch hands too quickly. --- paper_title: Cryptocurrencies without Proof of Work paper_content: We study decentralized cryptocurrency protocols in which the participants do not deplete physical scarce resources. Such protocols commonly rely on Proof of Stake, i.e., on mechanisms that extend voting power to the stakeholders of the system. We offer analysis of existing protocols that have a substantial amount of popularity. We then present our novel pure Proof of Stake protocols, and argue that they help in mitigating problems that the existing protocols exhibit. --- paper_title: Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol paper_content: We present “Ouroboros”, the first blockchain protocol based on proof of stake with rigorous security guarantees. We establish security properties for the protocol comparable to those achieved by the bitcoin blockchain protocol. As the protocol provides a “proof of stake” blockchain discipline, it offers qualitative efficiency advantages over blockchains based on proof of physical resources (e.g., proof of work). We also present a novel reward mechanism for incentivizing Proof of Stake protocols and we prove that, given this mechanism, honest behavior is an approximate Nash equilibrium, thus neutralizing attacks such as selfish mining. --- paper_title: The Sleepy Model of Consensus paper_content: The literature on distributed computing (as well as the cryptography literature) typically considers two types of players—honest players and corrupted players.
Resilience properties are then analyzed assuming a lower bound on the fraction of honest players. Honest players, however, are not only assumed to follow the prescribed protocol, but also assumed to be online throughout the whole execution of the protocol. The advent of “large-scale” consensus protocols (e.g., the blockchain protocol) where we may have millions of players, makes this assumption unrealistic. In this work, we initiate a study of distributed protocols in a “sleepy” model of computation where players can be either online (awake) or offline (asleep), and their online status may change at any point during the protocol. The main question we address is: Can we design consensus protocols that remain resilient under “sporadic participation”, where at any given point, only a subset of the players are actually online? As far as we know, all standard consensus protocols break down under such sporadic participation, even if we assume that 99% of the online players are honest. --- paper_title: Ouroboros Praos: An Adaptively-Secure, Semi-synchronous Proof-of-Stake Blockchain paper_content: We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting: Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long as the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. --- paper_title: Practical byzantine fault tolerance paper_content: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS. --- paper_title: Casper the Friendly Finality Gadget paper_content: We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions. --- paper_title: Algorand: Scaling Byzantine Agreements for Cryptocurrencies paper_content: Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned.
In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence. Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed. We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users. --- paper_title: Consensus in the presence of partial synchrony paper_content: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds Δ and P exist. In one version of partial synchrony, fixed bounds Δ and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T, and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time. --- paper_title: Snow White: Robustly Reconfigurable Consensus and Applications to Provably Secure Proof of Stake. paper_content: We present a provably secure proof-of-stake protocol called Snow White. The primary application of Snow White is to be used as a “green” consensus alternative for a decentralized cryptocurrency system with open enrollment. We break down the task of designing Snow White into the following core challenges: (1) identify a core “permissioned” consensus protocol suitable for proof-of-stake; specifically the core consensus protocol should offer robustness in an Internet-scale, heterogeneous deployment; (2) propose a robust committee re-election mechanism such that as stake switches hands in the cryptocurrency system, the consensus committee can evolve in a timely manner and always reflect the most recent stake distribution; and (3) relying on the formal security of the underlying consensus protocol, prove the full end-to-end protocol to be secure—more specifically, we show that any consensus protocol satisfying the desired robustness properties can be used to construct proofs-of-stake consensus, as long as money does not switch hands too quickly. --- paper_title: Casper the Friendly Finality Gadget paper_content: We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions. --- paper_title: Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol paper_content: We present “Ouroboros”, the first blockchain protocol based on proof of stake with rigorous security guarantees. We establish security properties for the protocol comparable to those achieved by the bitcoin blockchain protocol. As the protocol provides a “proof of stake” blockchain discipline, it offers qualitative efficiency advantages over blockchains based on proof of physical resources (e.g., proof of work). We also present a novel reward mechanism for incentivizing Proof of Stake protocols and we prove that, given this mechanism, honest behavior is an approximate Nash equilibrium, thus neutralizing attacks such as selfish mining. --- paper_title: Ouroboros Praos: An Adaptively-Secure, Semi-synchronous Proof-of-Stake Blockchain paper_content: We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting: Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long as the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. --- paper_title: Intel SGX Explained. paper_content: Intel’s Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and confidentiality guarantees to security-sensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc.) is potentially malicious. This paper analyzes Intel SGX, based on the 3 papers [14, 78, 137] that introduced it, on the Intel Software Developer’s Manual [100] (which supersedes the SGX manuals [94, 98]), on an ISCA 2015 tutorial [102], and on two patents [108, 136]. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper’s contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX’s security properties.
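Several of the entries above (Algorand in particular, and the proof-of-stake protocols more generally) rely on nodes privately checking whether they have been selected for a committee in proportion to their stake. The snippet below is a hedged sketch of that idea only: it replaces Algorand's verifiable random function with a keyed hash, so the outcome is not publicly verifiable, and the function name and parameters are illustrative assumptions rather than any protocol's specification.

```python
import hashlib, hmac

def is_selected(secret_key: bytes, seed: bytes, stake: int, total_stake: int,
                expected_committee_size: int) -> bool:
    """Illustrative committee self-selection: each node privately evaluates a
    keyed hash of the round seed and wins a seat with probability roughly
    proportional to its stake. Real protocols such as Algorand use VRFs so
    that the draw comes with a publicly checkable proof; HMAC-SHA256 here is
    only a stand-in for the sketch."""
    digest = hmac.new(secret_key, seed, hashlib.sha256).digest()
    draw = int.from_bytes(digest, "big") / 2**256           # uniform in [0, 1)
    p = expected_committee_size * stake / total_stake       # per-node target probability
    return draw < min(p, 1.0)

# Example round: a node holding 2% of total stake, committee of ~20 expected members.
print(is_selected(b"node-secret", b"round-42-seed", stake=2, total_stake=100,
                  expected_committee_size=20))
```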
--- paper_title: Foreshadow: extracting the keys to the intel SGX kingdom with transient out-of-order execution paper_content: Trusted execution environments, and particularly the Software Guard eXtensions (SGX) included in recent Intel x86 processors, gained significant traction in recent years. A long track of research papers, and increasingly also real-world industry applications, take advantage of the strong hardware-enforced confidentiality and integrity guarantees provided by Intel SGX. Ultimately, enclaved execution holds the compelling potential of securely offloading sensitive computations to untrusted remote platforms. ::: ::: We present Foreshadow, a practical software-only microarchitectural attack that decisively dismantles the security objectives of current SGX implementations. Crucially, unlike previous SGX attacks, we do not make any assumptions on the victim enclave's code and do not necessarily require kernel-level access. At its core, Foreshadow abuses a speculative execution bug in modern Intel processors, on top of which we develop a novel exploitation methodology to reliably leak plaintext enclave secrets from the CPU cache. We demonstrate our attacks by extracting full cryptographic keys from Intel's vetted architectural enclaves, and validate their correctness by launching rogue production enclaves and forging arbitrary local and remote attestation responses. The extracted remote attestation keys affect millions of devices. --- paper_title: Software Grand Exposure: SGX Cache Attacks Are Practical paper_content: Intel SGX isolates the memory of security-critical applications from the untrusted OS. However, it has been speculated that SGX may be vulnerable to side-channel attacks through shared caches. We developed new cache attack techniques customized for SGX. Our attack differs from other SGX cache attacks in that it is easy to deploy and avoids known detection approaches. We demonstrate the effectiveness of our attack on two case studies: RSA decryption and genomic processing. While cache timing attacks against RSA and other cryptographic operations can be prevented by using appropriately hardened crypto libraries, the same cannot be easily done for other computations, such as genomic processing. Our second case study therefore shows that attacks on noncryptographic but privacy sensitive operations are a serious threat. We analyze countermeasures and show that none of the known defenses eliminates the attack. --- paper_title: Permacoin: Repurposing Bitcoin Work for Data Preservation paper_content: Bit coin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bit coins wastes computational resources. Indeed, Bit coin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bit coin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Perm coin. Unlike Bit coin and its proposed alternatives, Perm coin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bit coin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. 
Given the competition among mining clients in Bit coin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bit coin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bit coin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Perm coin as a whole. --- paper_title: Pors: proofs of retrievability for large files paper_content: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound. --- paper_title: The Ripple Protocol Consensus Algorithm paper_content: While several consensus algorithms exist for the Byzantine Generals Problem, specifically as it pertains to distributed payment systems, many suffer from high latency induced by the requirement that all nodes within the network communicate synchronously. In this work, we present a novel consensus algorithm that circumvents this requirement by utilizing collectively-trusted subnetworks within the larger network. We show that the “trust” required of these subnetworks is in fact minimal and can be further reduced with principled choice of the member nodes. In addition, we show that minimal connectivity is required to maintain agreement throughout the whole network. The result is a low-latency consensus algorithm which still maintains robustness in the face of Byzantine failures. We present this algorithm in its embodiment in the Ripple Protocol. --- paper_title: Scaling Nakamoto Consensus to Thousands of Transactions per Second paper_content: This paper presents Conflux, a fast, scalable and decentralized blockchain system that optimistically process concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a direct acyclic graph and achieves consensus on a total order of the blocks. 
Conflux then, from the block order, deterministically derives a transaction total order as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76GB/h while confirming transactions in 4.5-7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck. The bottleneck is instead at the processing capability of individual nodes. --- paper_title: Graphchain: a Blockchain-Free Scalable Decentralised Ledger paper_content: Blockchain-based replicated ledgers, pioneered in Bitcoin, are effective against double spending, but inherently attract centralised mining pools and incompressible transaction delays. We propose a framework that forgoes blockchains, building a decentralised ledger as a self-scaling graph of cross-verifying transactions. New transactions validate prior ones, forming a thin graph secured by a cumulative proof-of-work mechanism giving fair and predictable rewards for each participant. We exhibit rapid confirmation of new transactions, even across a large network affected by latency. We also show, both theoretically and experimentally, a strong convergence property: that any valid transaction entering the system quickly becomes enshrined in the ancestry of all future transactions. --- paper_title: Hyperledger fabric: a distributed operating system for permissioned blockchains paper_content: Hyperledger Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains. Fabric is currently used in more than 400 prototypes and proofs-of-concept of distributed ledger technology, as well as several production systems, across different industries and use cases. Starting from the premise that there are no "one-size-fits-all" solutions, Fabric is the first truly extensible blockchain system for running distributed applications. It supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing blockchain platforms for running smart contracts that require code to be written in domain-specific languages or rely on a cryptocurrency. Furthermore, it uses a portable notion of membership for realizing the permissioned model, which may be integrated with industry-standard identity management. To support such flexibility, Fabric takes a novel approach to the design of a permissioned blockchain and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks. This paper describes Fabric, its architecture, the rationale behind various design decisions, its security model and guarantees, its most prominent implementation aspects, as well as its distributed application programming model. We further evaluate Fabric by implementing and benchmarking a Bitcoin-inspired digital currency. We show that Fabric achieves end-to-end throughput of more than 3500 transactions per second in certain popular deployment configurations, with sub-second latency.
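Conflux and Graphchain both replace the single chain with a DAG of blocks or transactions and then derive an agreed total order from it. The following sketch shows a generic deterministic topological ordering of a block DAG with identifier-based tie-breaking; it is meant only to illustrate the idea of extracting one total order from a DAG and is not Conflux's actual pivot-chain rule. All names and the toy DAG are assumptions for the example.

```python
from collections import defaultdict

def total_order(blocks):
    """Derive one deterministic total order over a block DAG.
    `blocks` maps block_id -> list of parent ids. Among blocks whose parents
    are all already ordered, ties are broken by block id (a stand-in for a
    hash-based rule), so every honest node computes the same sequence."""
    children = defaultdict(list)
    pending_parents = {blk: len(parents) for blk, parents in blocks.items()}
    for blk, parents in blocks.items():
        for parent in parents:
            children[parent].append(blk)
    ready = sorted(blk for blk, n in pending_parents.items() if n == 0)
    order = []
    while ready:
        blk = ready.pop(0)              # smallest id first -> deterministic
        order.append(blk)
        for child in children[blk]:
            pending_parents[child] -= 1
            if pending_parents[child] == 0:
                ready.append(child)
        ready.sort()
    return order

# Genesis "g" with two concurrent children that are later referenced by "d".
print(total_order({"g": [], "b1": ["g"], "b2": ["g"], "d": ["b1", "b2"]}))
# -> ['g', 'b1', 'b2', 'd']
```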
--- paper_title: The Honey Badger of BFT Protocols paper_content: The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network. ---
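The BFT protocols cited above (PBFT, ByzCoin, HoneyBadgerBFT) all reason in terms of the number of replicas n and the number of tolerated Byzantine faults f. The helper below spells out the standard n >= 3f + 1 arithmetic and the 2f + 1 quorum size as a worked example; individual protocols tune the exact constants (the ByzCoin entry, for instance, quotes f faults among 3f + 2 members), so treat this as the textbook baseline rather than any one protocol's rule.

```python
def bft_parameters(n: int) -> dict:
    """Textbook BFT sizing: with n replicas, up to f = (n - 1) // 3 Byzantine
    faults can be tolerated, and quorums of 2f + 1 replicas are large enough
    that any two quorums intersect in at least one honest replica."""
    f = (n - 1) // 3
    return {"replicas": n, "max_faulty": f, "quorum": 2 * f + 1}

for n in (4, 7, 10, 100):
    print(bft_parameters(n))
# e.g. n=4 tolerates f=1 with quorum 3; n=100 tolerates f=33 with quorum 67.
```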
Title: A Survey of Distributed Consensus Protocols for Blockchain Networks
Section 1: INTRODUCTION
Description 1: Introduce the inception and significance of blockchain technology, its underlying mechanisms, and highlight the importance of distributed consensus protocols.
Section 2: FAULT-TOLERANT DISTRIBUTED CONSENSUS
Description 2: Discuss classical fault-tolerant consensus in distributed systems, including system models, process failures, and network synchrony.
Section 3: Byzantine Fault Tolerant Consensus
Description 3: Explain Byzantine Fault Tolerant (BFT) consensus, its requirements, and notable BFT protocols.
Section 4: Consensus in Distributed Computing
Description 4: Discuss consensus in distributed computing, highlighting state machine replication (SMR) and related protocols like Paxos and Viewstamped Replication (VR).
Section 5: Consensus Protocols for Partially Synchronous Network
Description 5: Describe consensus protocols designed for partially synchronous networks, such as DLS protocol, PBFT, and Paxos.
Section 6: Consensus Protocols for Asynchronous Network
Description 6: Discuss consensus protocols tailored for asynchronous networks, including Bracha's RBC, Ben-Or's ACS, HoneyBadgerBFT, and BEAT.
Section 7: Blockchain Compatibility of Classical BFT Protocols
Description 7: Address the compatibility of classical Byzantine Fault Tolerant protocols with blockchain networks and discuss pertinent challenges and adaptations.
Section 8: AN OVERVIEW OF BLOCKCHAIN CONSENSUS
Description 8: Provide a summary of blockchain infrastructure, introduce the consensus goal, and define the five essential components of a blockchain consensus protocol.
Section 9: THE NAKAMOTO CONSENSUS PROTOCOL AND VARIATIONS
Description 9: Detail the Nakamoto consensus protocol, its strengths, and vulnerabilities, along with improvement proposals and hybrid PoW-BFT protocols.
Section 10: PROOF-OF-STAKE BASED CONSENSUS PROTOCOLS
Description 10: Describe various proof-of-stake (PoS) based consensus protocols, classify them into several types, and discuss their respective characteristics and security implications.
Section 11: OTHER EMERGING BLOCKCHAIN CONSENSUS PROTOCOLS
Description 11: Explore additional consensus protocols like proof of authority (PoA), proof of elapsed time (PoET), proof of retrievability (PoR), and others, highlighting their unique features and use cases.
Section 12: COMPARISON AND DISCUSSION
Description 12: Compare different consensus protocols using a five-component framework, examine fault tolerance capabilities, and analyze transaction capacity and design philosophy.
Section 13: CONCLUSION
Description 13: Summarize the findings of the survey, providing insights into the future of blockchain consensus protocol development and potential research directions.
A Systematic Literature Review about the impact of Artificial Intelligence on Autonomous Vehicle Safety
5
--- paper_title: Concerns on the Differences Between AI and System Safety Mindsets Impacting Autonomous Vehicles Safety paper_content: The inflection point in the development of some core technologies enabled the Autonomous Vehicles (AV). The unprecedented growth rate in Artificial Intelligence (AI) and Machine Learning (ML) capabilities, focusing only on AVs, is expected to shift the transportation paradigm and bring relevant benefits to the society, such as accidents reduction. However, recent AVs accidents resulted in life losses. This paper presents a viewpoint discussion based on findings from a preliminary exploratory literature review. It was identified an important misalignment between AI and Safety research communities regarding the impact of AI on the safety risks in AV. This paper promotes this discussion, raises concerns on the potential consequences and suggests research topics to reduce the differences between AI and system safety mindsets. --- paper_title: Mobile Business Research Published in 2000-2004: Emergence, Current Status, and Future Opportunities paper_content: The convergence of mobile communications and distributed networked computing has provided the foundation for the development of a new channel of electronic business: mobile business. Research into mobile business has begun to grow significantly over the last five years to the point where we now see dedicated journals and conferences focusing on this topic. This paper provides an assessment of the state of mobile business research. Drawing on over 230 research papers from key research outlets, we describe the emergence of this research area and characterize its current status. We also provide a critique of current research and some recommendations for future research into mobile business. --- paper_title: Analyzing the past to prepare for the future: Writing a literature review paper_content: A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed. --- paper_title: Style composition in action research publication paper_content: Examining action research publications in leading Information Systems journals as a particular genre of research communication, we develop the notion of style composition to understand how authors structure their arguments for a research contribution. We define style composition as the activity through which authors select, emphasize, and present elements of their research to establish premises, develop inferences, and present contributions in publications. Drawing on this general notion, we identify a set of styles that is characteristic of how IS action researchers compose their argument. Premise styles relate to the dual goals of action research through practical or theoretical positioning of the argument; inference styles combine insights from the problem-solving and the research cycles through inductive or deductive reasoning; and contribution styles focus on different types of contributions--experience report, field study, theoretical development, problemsolving method, and research method. 
Based on the considered sample, we analyze the styles adopted in selected publications and show that authors have favored certain styles while leaving others underexplored; further, we reveal important strengths and weaknesses in the composition of styles within the IS discipline. Based on these insights, we discuss how action research practices and writing can be improved, as well as how to further develop style compositions to support the publication of engaged scholarship research. --- paper_title: Systematic mapping studies in software engineering paper_content: BACKGROUND: A software engineering systematic map is a defined method to build a classification scheme and structure a software engineering field of interest. The analysis of results focuses on frequencies of publications for categories within the scheme. Thereby, the coverage of the research field can be determined. Different facets of the scheme can also be combined to answer more specific research questions. ::: ::: OBJECTIVE: We describe how to conduct a systematic mapping study in software engineering and provide guidelines. We also compare systematic maps and systematic reviews to clarify how to chose between them. This comparison leads to a set of guidelines for systematic maps. ::: ::: METHOD: We have defined a systematic mapping process and applied it to complete a systematic mapping study. Furthermore, we compare systematic maps with systematic reviews by systematically analyzing existing systematic reviews. ::: ::: RESULTS: We describe a process for software engineering systematic mapping studies and compare it to systematic reviews. Based on this, guidelines for conducting systematic maps are defined. ::: ::: CONCLUSIONS: Systematic maps and reviews are different in terms of goals, breadth, validity issues and implications. Thus, they should be used complementarily and require different methods (e.g., for analysis). --- paper_title: Editorial: The Future of Writing and Reviewing forIJMR paper_content: In this editorial, the co-editors-in-chief undertake a number of tasks related to International Journal of Management Reviews (IJMR). They begin by reviewing the objectives set out by Macpherson and Jones in their 2010 editorial (IJMR, 12, pp. 107–113). The benefits of publishing in IJMR for scholars at various stages of their careers are then discussed. The section outlining the progress of IJMR over the last four years sets out the main reasons why so many papers are desk rejected by the co-editors. The main criteria for writing an analytical literature review of the type that the editors aspire to publish in the Journal are then discussed. The objectives are not simply to reduce the number of desk rejects, but also to encourage authors to be more ambitious and innovative in their approaches to reviews of the literature. --- paper_title: Data Mining: Practical Machine Learning Tools and Techniques paper_content: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. 
Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization --- paper_title: The WEKA data mining software: an update paper_content: More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003. --- paper_title: Mobile Business Research Published in 2000-2004: Emergence, Current Status, and Future Opportunities paper_content: The convergence of mobile communications and distributed networked computing has provided the foundation for the development of a new channel of electronic business: mobile business. Research into mobile business has begun to grow significantly over the last five years to the point where we now see dedicated journals and conferences focusing on this topic. This paper provides an assessment of the state of mobile business research. Drawing on over 230 research papers from key research outlets, we describe the emergence of this research area and characterize its current status. We also provide a critique of current research and some recommendations for future research into mobile business. --- paper_title: Analyzing the past to prepare for the future: Writing a literature review paper_content: A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed. --- paper_title: Systematic mapping studies in software engineering paper_content: BACKGROUND: A software engineering systematic map is a defined method to build a classification scheme and structure a software engineering field of interest. The analysis of results focuses on frequencies of publications for categories within the scheme. Thereby, the coverage of the research field can be determined. Different facets of the scheme can also be combined to answer more specific research questions. 
OBJECTIVE: We describe how to conduct a systematic mapping study in software engineering and provide guidelines. We also compare systematic maps and systematic reviews to clarify how to choose between them. This comparison leads to a set of guidelines for systematic maps. METHOD: We have defined a systematic mapping process and applied it to complete a systematic mapping study. Furthermore, we compare systematic maps with systematic reviews by systematically analyzing existing systematic reviews. RESULTS: We describe a process for software engineering systematic mapping studies and compare it to systematic reviews. Based on this, guidelines for conducting systematic maps are defined. CONCLUSIONS: Systematic maps and reviews are different in terms of goals, breadth, validity issues and implications. Thus, they should be used complementarily and require different methods (e.g., for analysis). --- paper_title: Concerns on the Differences Between AI and System Safety Mindsets Impacting Autonomous Vehicles Safety paper_content: The inflection point in the development of some core technologies enabled the Autonomous Vehicles (AV). The unprecedented growth rate in Artificial Intelligence (AI) and Machine Learning (ML) capabilities, focusing only on AVs, is expected to shift the transportation paradigm and bring relevant benefits to the society, such as accidents reduction. However, recent AVs accidents resulted in life losses. This paper presents a viewpoint discussion based on findings from a preliminary exploratory literature review. It was identified an important misalignment between AI and Safety research communities regarding the impact of AI on the safety risks in AV. This paper promotes this discussion, raises concerns on the potential consequences and suggests research topics to reduce the differences between AI and system safety mindsets. --- paper_title: M-banking services in Japan: a strategic perspective paper_content: The proliferation of mobile internet enabled devices is creating an extraordinary opportunity for a new mode of e-commerce. In Japan, in April 2003, there were more than 62 million users of mobile internet services. Mobile devices, typically used by a single person, provide an unprecedented platform for individualised services. Such services may build on the value propositions of time, place and individual context. One emerging area of mobile business is banking. This paper explores the state of the art of mobile (m-) banking in Japan. A brief discussion about the main characteristics of Japanese banking practices is accompanied by an overview of this country's mobile market. This is followed by a detailed analysis of the mobile internet services of three major Japanese banks Mizuho, Sumitomo Mitsui and UFJ and the development of a strategic framework for m-banking. The paper concludes with a discussion about the future of m-banking. --- paper_title: Editorial: The Future of Writing and Reviewing for IJMR paper_content: In this editorial, the co-editors-in-chief undertake a number of tasks related to International Journal of Management Reviews (IJMR). They begin by reviewing the objectives set out by Macpherson and Jones in their 2010 editorial (IJMR, 12, pp. 107–113). The benefits of publishing in IJMR for scholars at various stages of their careers are then discussed. The section outlining the progress of IJMR over the last four years sets out the main reasons why so many papers are desk rejected by the co-editors.
The main criteria for writing an analytical literature review of the type that the editors aspire to publish in the Journal are then discussed. The objectives are not simply to reduce the number of desk rejects, but also to encourage authors to be more ambitious and innovative in their approaches to reviews of the literature. --- paper_title: Concerns on the Differences Between AI and System Safety Mindsets Impacting Autonomous Vehicles Safety paper_content: The inflection point in the development of some core technologies enabled the Autonomous Vehicles (AV). The unprecedented growth rate in Artificial Intelligence (AI) and Machine Learning (ML) capabilities, focusing only on AVs, is expected to shift the transportation paradigm and bring relevant benefits to the society, such as accidents reduction. However, recent AVs accidents resulted in life losses. This paper presents a viewpoint discussion based on findings from a preliminary exploratory literature review. It was identified an important misalignment between AI and Safety research communities regarding the impact of AI on the safety risks in AV. This paper promotes this discussion, raises concerns on the potential consequences and suggests research topics to reduce the differences between AI and system safety mindsets. --- paper_title: Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking paper_content: Automatic detection of traffic lights has great importance to road safety. This paper presents a novel approach that combines computer vision and machine learning techniques for accurate detection and classification of different types of traffic lights, including green and red lights both in circular and arrow forms. Initially, color extraction and blob detection are employed to locate the candidates. Subsequently, a pretrained PCA network is used as a multiclass classifier to obtain frame-by-frame results. Furthermore, an online multiobject tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. Several additional optimization techniques are employed to improve the detector performance and handle the traffic light transitions. When evaluated using the test video sequences, the proposed system can successfully detect the traffic lights on the scene with high accuracy and stable results. Considering hardware acceleration, the proposed technique is ready to be integrated into advanced driver assistance systems or self-driving vehicles. We build our own data set of traffic lights from recorded driving videos, including circular lights and arrow lights in different directions. Our experimental data set is available at http://computing.wpi.edu/Dataset.html. --- paper_title: Road Terrain detection and Classification algorithm based on the Color Feature extraction paper_content: Today, In the content of road vehicles, intelligent systems and autonomous vehicles, one of the important problem that should be solved is Road Terrain Classification that improves driving safety and comfort passengers. There are many studies in this area that improved the accuracy of classification. An improved classification method using color feature extraction is proposed in this paper. Color Feature of images is used to classify The Road Terrain Type and then a Neural Network (NN) is used to classify the Color Features extracted from images. 
The proposed idea is to identify the road by processing digital images taken from the roads with a camera installed on a car. Asphalt, Grass, Dirt and Rocky are the four types of terrain that are identified in this study. --- paper_title: Autonomous vehicles: from paradigms to technology paper_content: Mobility is a basic necessity of contemporary society and it is a key factor in global economic development. The basic requirements for the transport of people and goods are: safety and duration of travel, but also a number of additional criteria are very important: energy saving, pollution, passenger comfort. Due to advances in hardware and software, automation has penetrated massively in transport systems both on infrastructure and on vehicles, but man is still the key element in vehicle driving. However, the classic concept of 'human-in-the-loop' in terms of 'hands on' in driving the cars is competing aside from the self-driving startups working towards so-called 'Level 4 autonomy', which is defined as "a self-driving system that does not require human intervention in most scenarios". In this paper, a conceptual synthesis of the autonomous vehicle issue is made in connection with the artificial intelligence paradigm. It presents a classification of the tasks that take place during the driving of the vehicle and its modeling from the perspective of traditional control engineering and artificial intelligence. The issue of autonomous vehicle management is addressed on three levels: navigation, movement in traffic, respectively effective maneuver and vehicle dynamics control. Each level is then described in terms of specific tasks, such as: route selection, planning and reconfiguration, recognition of traffic signs and reaction to signaling and traffic events, as well as control of effective speed, distance and direction. The approach will lead to a better understanding of the way technology is moving when talking about autonomous cars, smart/intelligent cars or intelligent transport systems. Keywords: self-driving vehicle, artificial intelligence, deep learning, intelligent transport systems. --- paper_title: Intelligent adaptive precrash control for autonomous vehicle agents (CBR Engine & hybrid A∗ path planner) paper_content: The PreCrash problem of Intelligent Control of autonomous vehicle robots is a very complex problem, especially vehicle pre-crash scenarios and at points of intersections in real-time environments. This paper presents a novel architecture of Intelligent adaptive control for autonomous vehicle agent that depends on Artificial Intelligence Techniques that applies case-based reasoning techniques, where Parallel CBR Engines are implemented for different scenarios of the PreCrash problem and sub-problems of intersection safety and collision avoidance, in the higher level of the controller and A∗ path planner for path planning and at lower-levels it also uses some features of autonomous vehicle dynamics. Moreover, the planner is enhanced by combination of Case-Based Planner. All modules are presented and discussed. Experimental results are conducted in the framework of Webots autonomous vehicle tool and overall results are good for the CBR Engine for Adaptive control and also for the hybrid Case-Based Planner, A∗ and D∗ motion planner along with conclusion and future work.
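The terrain-classification entry above describes extracting color features from road images and feeding them to a neural network. The sketch below shows one minimal version of that pipeline, using per-channel color histograms and a small scikit-learn MLP trained on randomly generated placeholder patches; the feature choice, network size, class labels, and data are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def color_histogram(image_rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histograms as a simple color feature vector."""
    feats = [np.histogram(image_rgb[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    hist = np.concatenate(feats).astype(float)
    return hist / hist.sum()

# Toy stand-in data: random 64x64 RGB patches labelled with terrain classes.
rng = np.random.default_rng(0)
labels = ["asphalt", "grass", "dirt", "rocky"]
X = np.array([color_histogram(rng.integers(0, 256, (64, 64, 3))) for _ in range(200)])
y = rng.choice(labels, size=200)

# A small multilayer perceptron in place of the paper's unspecified NN.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```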
--- paper_title: RobIL — Israeli program for research and development of autonomous UGV: Performance evaluation methodology paper_content: RobIL program is a collaborative effort of several leading Israeli academic institutions and industries in the field of robotics. The current objective of the program is to develop an Autonomous Off-Road Unmanned Ground Vehicle (UGV). The intention is to deal through this project with some of the most urgent challenges in the field of autonomous vehicles. One of these challenges, is the lack of efficient Safety Performance Verification technique, as the existing tools for hardware and software reliability and safety engineering do not provide a comprehensive solution regarding algorithms that are based on Artificial Intelligent (AI) and Machine Learning. In order to deal with this gap a novel methodology that is based on statistical testing in simulated environment, is presented. In this work the RobIL UGV project with special emphasis on the performance and safety verification methodology is presented. --- paper_title: Are all objects equal? Deep spatio-temporal importance prediction in driving videos paper_content: Understanding intent and relevance of surrounding agents from video is an essential task for many applications in robotics and computer vision. The modeling and evaluation of contextual, spatio-temporal situation awareness is particularly important in the domain of intelligent vehicles, where a robot is required to smoothly navigate in a complex environment while also interacting with humans. In this paper, we address these issues by studying the task of on-road object importance ranking from video. First, human-centric object importance annotations are employed in order to analyze the relevance of a variety of multi-modal cues for the importance prediction task. A deep convolutional neural network model is used for capturing video-based contextual spatial and temporal cues of scene type, driving task, and object properties related to intent. Second, the proposed importance annotations are used for producing novel analysis of error types in image-based object detectors. Specifically, we demonstrate how cost-sensitive training, informed by the object importance annotations, results in improved detection performance on objects of higher importance. This insight is essential for an application where navigation mistakes are safety-critical, and the quality of automation and human-robot interaction is key. HighlightsWe study a notion of object relevance, as measured in a spatio-temporal context of driving a vehicle.Various spatio-temporal object and scene cues are analyzed for the task of object importance classification.Human-centric metrics are employed for evaluating object detection and studying data bias.Importance-guided training of object detectors is proposed, showing significant improvement over an importance-agnostic baseline. --- paper_title: Design of Real-Time Transition from Driving Assistance to Automation Function: Bayesian Artificial Intelligence Approach paper_content: Forecasts of automation in driving suggest that wide spread market penetration of fully autonomous vehicles will be decades away and that before such vehicles will gain acceptance by all stake holders, there will be a need for driving assistance in key driving tasks, supplemented by automated active safety capability. 
This paper advances a Bayesian Artificial Intelligence model for the design of real time transition from assisted driving to automated driving under conditions of high probability of a collision if no action is taken to avoid the collision. Systems can be designed to feature collision warnings as well as automated active safety capabilities. In addition to the high level architecture of the Bayesian transition model, example scenarios illustrate the function of the real-time transition model. --- paper_title: Autonomous vehicles safe-optimal trajectory selection based on big data analysis and predefined user preferences paper_content: Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work mainly based on using Big Data mining and analysis of real-life accidents data and real-time connected vehicles' data. The decision of selecting this trajectory is done automatically without any human intervention. The human touches in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip. --- paper_title: A driverless vehicle demonstration on motorways and in urban environments paper_content: The constant growth of the number of vehicles in today’s world demands improvements in the safety and efficiency of roads and road use. This can be in part satisfied by the implementation of autonomous driving systems because of their greater precision than human drivers in controlling a vehicle. As result, the capacity of the roads would be increased by reducing the spacing between vehicles. Moreover, greener driving modes could be applied so that the fuel consumption, and therefore carbon emissions, would be reduced. This paper presents the results obtained by the AUTOPIA program during a public demonstration performed in June 2012. This driverless experiment consisted of a 100-kilometre route around Madrid (Spain), including both urban and motorway environments. A first vehicle – acting as leader and manually driven – transmitted its relevant information – i.e., position and speed – through an 802.11p communication link to a second vehicle, which tracked the leader’s trajectory and speed while maintaining a safe distance. The results were encouraging, and showed the viability of the AUTOPIA approach. ::: First published online: 28 Jan 2015 --- paper_title: Drivers’ Manoeuvre Classification for Safe HRI paper_content: Ever increasing autonomy of machines and the need to interact with them creates challenges to ensure safe operation. Recent technical and commercial interest in increasing autonomy of vehicles has led to the integration of more sensors and actuators inside the vehicle, making them more like robots. 
For interaction with semi-autonomous cars, the use of these sensors could help to create new safety mechanisms. This work explores the concept of using motion tracking (i.e. skeletal tracking) data gathered from the driver whilst driving to learn to classify the manoeuvre being performed. A kernel-based classifier is trained with empirically selected features based on data gathered from a Kinect V2 sensor in a controlled environment. This method shows that skeletal tracking data can be used in a driving scenario to classify manoeuvres and sets a background for further work. --- paper_title: Cognitive Vehicle Design Guided by Human Factors and Supported by Bayesian Artificial Intelligence paper_content: Researchers and automotive industry experts are in favour of accelerating the development of “cognitive vehicle” features, which will integrate intelligent technology and human factors for providing non-distracting interface for safety, efficiency and environmental sustainability in driving. In addition, infotainment capability is a desirable feature of the vehicle of the future, provided that it does not add to driver distraction. Further, these features are expected to be a stepping-stone to fully autonomous driving. This paper describes advances in driver-vehicle interface and presents highlights of research in design. Specifically, the design features of the cognitive vehicle are presented and measures needed to address major issues are noted. The application of Bayesian Artificial Intelligence is described in the design of driving assistance system, and design considerations are advanced in order to overcome issues in in-vehicle telematics systems. Finally, conclusions are advanced based on coverage of research material in the paper. --- paper_title: The research of prediction model on intelligent vehicle based on driver’s perception paper_content: In the field of self-driving technology, the stability and comfort of the intelligent vehicle are the focus of attention. The paper applies cognitive psychology theory to the research of driving behavior and analyzes the behavior mechanism about the driver’s operation. Through applying the theory of hierarchical analysis, we take the safety and comfort of intelligent vehicle as the breakthrough point. And then we took the data of human drivers’ perception behavior as the training set and did regression analysis using the method of regression analysis of machine learning according to the charts of the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision. At last we established linear and nonlinear regression models (including the logarithmic model) for the training set. The change in thinking is the first novelty of this paper. Last but not least important, we verified the accuracy of the model through the comparison of different regression analysis. Eventually, it turned out that using logarithmic relationship to express the relationship between the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision is better than other models. In the aspect of application, we adopted the technology of multi-sensor fusion and transformed the acquired data from radar, navigation and image to log-polar coordinates, which makes us greatly simplify information when dealing with massive data problems from different sensors. 
This approach can not only reduce the complexity of the server's processing but also drive the development of intelligent vehicles in information computing. We also apply this model in the intelligent driver's cognitive interactive debugging program, which helps to better explain and understand intelligent driving behavior and improves the safety of the intelligent vehicle to some extent. --- paper_title: Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau paper_content: The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects, one for the Army, which is just getting started, called the AirLand Battle Management program (ALBM) which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established program for the Navy is the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision-aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV) which integrates in a major robotic testbed the technologies for dynamic image understanding, knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high-performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology. --- paper_title: Road signs classification by ANN for real-time implementation paper_content: Traffic safety is an important problem for autonomous vehicles. The development of Traffic Sign Recognition (TSR) dedicated to reducing the number of fatalities and the severity of road accidents is an important and active research area. Recently, most TSR approaches based on machine learning and image processing have achieved advanced performance in traditional natural scenes. However, limitations remain in road sign recognition accuracy and time consumption. This paper proposes a real-time algorithm for shape classification of traffic signs and their recognition to provide a driver alert system.
The proposed algorithm is mainly composed of two phases: shape classification and content classification. This algorithm takes as input a list of Bounding Boxes generated in a previous work, and classifies them. The traffic sign's shape is classified by an artificial neural network (ANN). Traffic signs are classified according to their shape characteristics, as triangular, square and circular shapes. Combining color and shape information, traffic signs are classified into one of the following classes: danger, information, obligation or prohibition. The classified circular and triangular shapes are passed on to the second ANN in the third phase. This second ANN identifies the pictogram of the road sign. The output of the second artificial neural network allows the full classification of the road sign. The proposed algorithm is evaluated on a dataset of road signs from a Tunisian traffic sign database. --- paper_title: Observation Based Creation of Minimal Test Suites for Autonomous Vehicles paper_content: Autonomous vehicles pose new challenges to their testing, which is required for safety certification. While autonomous vehicles will use training sets as specifications for machine learning algorithms, traditional validation depends on the system's requirements and design. The presented approach uses training sets which are observations of traffic situations as system specification. It aims at deriving test-cases which incorporate the continuous behavior of other traffic participants. Hence, relevant scenarios are mined by analyzing and categorizing behaviors. By using abstract descriptions of the behaviors we discuss how test-cases can be compared to each other, so that similar test-cases are avoided in the test-suite. We demonstrate our approach using a combination of an overtake assistant and an adaptive cruise control. --- paper_title: Learning from Demonstration Using GMM, CHMM and DHMM: A Comparison paper_content: Greater production and improved safety in the mining industry can be enhanced by the use of automated vehicles. This paper presents results in applying Learning from Demonstration (LfD) to a laboratory semi-automated mine inspection robot following a path through a simulated mine. Three methods, Gaussian Mixture Model (GMM), Continuous Hidden Markov Model (CHMM), and Discrete Hidden Markov Model (DHMM), were used to implement the LfD and a comparison of the implementation results is presented. The results from the different models were then used to implement a novel, optimised path decomposition technique that may be suitable for possible robot use within an underground mine. --- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the "machine" is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust?
Is it appropriate to apply the techniques used to establish trust in today's transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Towards Autonomous Cruising on Highways paper_content: The objectives of this paper are to discuss how vehicle automation technology can be used to benefit car drivers and also to propose a concept of an autonomous highway vehicle which improves highway driving safety. --- paper_title: Driver monitoring in the context of autonomous vehicle paper_content: Today, research is ongoing on the different essential functions needed to bring automatic vehicles to the roads. However, there will be manually driven vehicles for many years before only fully automated vehicles are on the roads. In complex situations, automated vehicles will need human assistance for a long time. So, for road safety, driver monitoring is even more important in the context of the autonomous vehicle to keep the driver alert and awake. But limited effort has been devoted to the full integration of the automatic vehicle and the human driver. Therefore, human drivers need to be monitored and able to take over control within short notice. This paper provides an overview of autonomous vehicles and unobtrusive driver monitoring approaches that can be implemented in future autonomous vehicles to monitor the driver, e.g., to diagnose and predict stress, fatigue, etc. in semi-automated vehicles. --- paper_title: Advance Driver Assistance System (ADAS) - Speed bump detection paper_content: In Intelligent Transportation Systems, Advanced Driver Assistance Systems (ADAS) play a vital role. In ADAS, many research works have been done in the areas of traffic sign recognition, forward collision warning, automotive navigation systems and lane departure warning systems, but another important area to consider is speed bump detection. The recognition of speed bumps contributes to the safety of both the human and the vehicle. Early research in speed bump detection was done with the help of sensors, accelerometers and GPS. In this paper, a novel method is presented to achieve speed bump detection and recognition, either to alert the driver or to interact directly with the vehicle. Speed bumps are detected with the help of image processing concepts. This methodology is effortless and simple to implement without investment in special sensors, hardware, smartphones or GPS. This procedure is well suited to roads constructed with proper markings, and can be used in self-driving cars. --- paper_title: Road junction detection from 3D point clouds paper_content: Detecting changing traffic conditions is of primary importance for the safety of autonomous cars navigating in urban environments. Among the traffic situations that require more attention and careful planning, road junctions are the most significant. This work presents an empirical study of the application of well-known machine learning techniques to create a robust method for road junction detection. Features are extracted from 3D point clouds corresponding to single frames of data collected by a laser rangefinder. Three well-known classifiers (support vector machines, adaptive boosting and artificial neural networks) are used to classify them into "junctions" or "roads".
The best performing classifier is used in the next stage, where structured classifiers-hidden Markov models and conditional random fields-are used to incorporate contextual information, in an attempt to improve the performance of the method. We tested and compared these approaches on datasets from two different 3D laser scanners, and in two different countries, Germany and Brazil. --- paper_title: Reducing driver's behavioural uncertainties using an interdisciplinary approach: Convergence of Quantified Self, Automated Vehicles, Internet Of Things and Artificial Intelligence. paper_content: Abstract: Growing research progress in Internet of Things (IoT), automated/connected cars, Artificial Intelligence and person's data acquisition (Quantified Self) will help to reduce behavioral uncertainties in transport and unequivocally influence future transport landscapes. This vision paper argues that by capitalizing advances in data collection and methodologies from emerging research disciplines, we could make the driver amenable to a knowable and monitorable entity, which will improve road safety. We present an interdisciplinary framework, inspired by the Safe system, to extract knowledge from the large amount of available data during driving. The limitation of our approach is discussed. --- paper_title: Low Level Control Layer Definition for Autonomous Vehicles Based on Fuzzy Logic paper_content: Abstract The intelligent control of autonomous vehicles is one of the most important challenges that intelligent transport systems face today. The application of artificial intelligence techniques to the automatic management of vehicle actuators enables the different Advanced Driver Assistance Systems (ADAS) or even autonomous driving systems, to perform a low level management in a very similar way to that of human drivers by improving safety and comfort. In this paper, we present a control schema to manage these low level vehicle actuators (steering throttle and brake) based on fuzzy logic, an artificial intelligence technique that is able to mimic human procedural behavior, in this case, when performing the driving task. This automatic low level control system has been defined, implemented and tested in a Citroen C3 testbed vehicle, whose actuators have been automated and can receive control signals from an onboard computer where the soft computing-based control system is running. --- paper_title: A Decision Support System for Improving Resiliency of Cooperative Adaptive Cruise Control Systems paper_content: Abstract Advanced driver assistance systems (ADASs) enhance transportation safety and mobility, and reduce impacts on the environment and economical costs, through decreasing driver errors. One of the main features of ADASs is cruise control system that maintains the driver's desired speed without intervention from the driver. Adaptive cruise control (ACC) systems adjust the vehicle's speed to maintain a safe following distance to the vehicle in front. Adding vehicle-to-vehicle and vehicle-to-infrastructure communications (V2X) to ACC systems, result in cooperative adaptive cruise control (CACC) systems, where each vehicle has trajectory data of all other vehicles in the same lane. Although CACC systems offer advantages over ACC systems in increasing throughput and average speed, they are more vulnerable to cyber-security attacks. This is due to V2X communications that increase the attack surface from one vehicle to multiple vehicles. 
In this paper, we inject common types of attack on the application layer of connected vehicles to show their vulnerability in comparison to autonomous vehicles. We also proposed a decision support system that eliminates risk of inaccurate information. The microscopic work simulates a CACC system with a bi-objective PID controller and a fuzzy detector. A case study is illustrated in detail to verify the system functionality. --- paper_title: Microscopic traffic simulation based evaluation of highly automated driving on highways paper_content: Highly automated driving on highways requires a complex artificial intelligence that makes optimal decisions based on ongoing measurements. Notably no attempt has been performed to evaluate the impacts of such a sophisticated system on traffic. Another important point is the impact of continuous increase in the number of highly automated vehicles on future traffic safety and traffic flow. This work introduces a novel framework to evaluate these impacts in a developed microscopic traffic simulation environment. This framework is used on the one hand to ensure the functionality of the driving strategy in the early development stage. On the other hand, the impacts of increasing automation rates, up to hundred percent, on traffic safety and traffic flow is evaluated. --- paper_title: Securing wireless communications of connected vehicles with artificial intelligence paper_content: This work applies artificial intelligence (AI) to secure wireless communications of Connected Vehicles. Vehicular Ad-hoc Network (VANET) facilitates exchange of safety messages for collision avoidance, leading to self-driving cars. An AI system continuously learns to augment its ability in discerning and recognizing its surroundings. Such ability plays a vital role in evaluating the authenticity and integrity of safety messages for cars driven by computers. Falsification of meter readings, disablement of brake function, and other unauthorized controls by spoofed messages injected into VANET emerge as security threats. Countermeasures must be considered at design stage, as opposed to afterthought patches, effectively against cyber-attacks. However, current standards oversubscribe security measures by validating every message circulating among Connected Vehicles, making VANET subject to denial-of-service (DoS) Attacks. This interdisciplinary research shows promising results by searching the pivot point to balance between message authentication and DoS prevention, making security measures practical for the real-world deployment of Connected Vehicles. Message authentication adopts Context-Adaptive Signature Verification strategy, applying AI filters to reduce both communication and computation overhead. Combining OMNET++, a data network simulator, and SUMO, a road traffic simulator, with Veins, an open source framework for VANET simulation, the study evaluates AI filters comparatively under various attacking scenarios. The results lead to an effective design choice of securing wireless communications for Connected Vehicles. --- paper_title: Novel architecture for cloud based next gen vehicle platform — Transition from today to 2025 paper_content: The sole idea of the autonomous driving is to reduce the number of accidents which will be caused by human errors. There will be a phase where autonomous vehicles will have to take part with the human driven vehicles. 
In this phase, autonomous vehicles and human drivers must be careful in their driving maneuvers, since it is impossible to have only autonomous vehicles on the road while the transition to fully autonomous driving takes place. To make the system more reliable and robust, the authors feel that it must be substantially improved, and to achieve this, three methodologies are proposed. The use cases are derived from cameras placed at suitable positions on every vehicle. This creates several use cases with different driving behaviors, which truly reflect the conditions that the autonomous car needs to undergo in series. The authors propose several architectures to address the safety-related issues. Since the use cases are so numerous and random that it becomes practically impossible to manage the data, we propose the use of secure cloud storage. The cloud provides a secure and reliable data management system for the machine learning algorithm computation. Data management so far is effectively done via trained unique use cases. This creates a big data set which is an encapsulation of different unique use cases under different driving conditions. So we conclude that the usage of ML and cloud-based data management is the way forward for handling autonomous vehicles reliably. This may not be the complete solution, but nevertheless we are one step closer towards 2025. This concept can be implemented only in cooperation with the OEMs and public authorities. --- paper_title: Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract paper_content: Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. That is why software safety standards emphasize high-quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that cannot be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a "safe" envelope knows that it is safe and can operate with full autonomy.
A system operating within an "unsafe" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between "safe" and "unsafe" requires human supervision, because the autonomy system can not be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing "safe" envelopes, shrinking "unsafe" envelopes, and eliminating any gap areas. --- paper_title: A Hidden Markov Model for Vehicle Detection and Counting paper_content: To reduce roadway congestion and improve traffic safety, accurate traffic metrics, such as number of vehicles travelling through lane-ways, are required. Unfortunately most existing infrastructure, such as loop-detectors and many video detectors, do not feasibly provide accurate vehicle counts. Consequently, a novel method is proposed which models vehicle motion using hidden Markov models (HMM). The proposed method represents a specified small region of the roadway as 'empty', 'vehicle entering', 'vehicle inside', and 'vehicle exiting', and then applies a modified Viterbi algorithm to the HMM sequential estimation framework to initialize and track vehicles. Vehicle observations are obtained using an Adaboost trained Haar-like feature detector. When tested on 88 hours of video, from three distinct locations, the proposed method proved to be robust to changes in lighting conditions, moving shadows, and camera motion, and consistently out-performed Multiple Target Tracking (MTT) and Virtual Detection Line(VDL) implementations. The median vehicle count error of the proposed method is lower than MTT and VDL by 28%, and 70% respectively. As future work, this algorithm will be implemented to provide the traffic industry with improved automated vehicle counting, with the intent to eventually provide real-time counts. --- paper_title: Investigation into the Role of Rational Ethics in Crashes of Automated Vehicles paper_content: Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. This new technology has the potential to revolutionize many aspects of transportation, particularly safety. 
However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple progr... --- paper_title: A Formal Approach to Autonomous Vehicle Coordination paper_content: Increasing demands on safety and energy efficiency will require higher levels of automation in transportation systems. This involves dealing with safety-critical distributed coordination. In this paper we demonstrate how a Satisfiability Modulo Theories (SMT) solver can be used to prove correctness of a vehicular coordination problem. We formalise a recent distributed coordination protocol and validate our approach using an intersection collision avoidance (ICA) case study. The system model captures continuous time and space, and an unbounded number of vehicles and messages. The safety of the case study is automatically verified using the Z3 theorem prover. --- paper_title: Learning Human-Level AI abilities to drive racing cars paper_content: The final purpose of Automated Vehicle Guidance Systems (AVGSs) is to obtain fully automated vehicles to optimize transport systems, minimizing delays and increasing safety and comfort. In order to achieve these goals, many Artificial Intelligence techniques must be improved and merged. In this article we focus on learning and simulating the Human-Level decisions involved in driving a racing car. To achieve this, we have studied the suitability of using Neuroevolution of Augmenting Topologies (NEAT). To experiment and obtain comparative results we have also developed an online videogame prototype called Screaming Racers, which is used as a test-bed environment. --- paper_title: Embedding ethical principles in collective decision support systems paper_content: The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need. In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would accept and trust more machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, often machines and humans will need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.
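To make the HMM-based vehicle counting idea in the entry above concrete, the following minimal Python sketch decodes a toy observation stream from a per-frame blob detector with a four-state chain ('empty', 'entering', 'inside', 'exiting') and counts a vehicle whenever the decoded path falls back from 'exiting' to 'empty'. It is an illustrative sketch only: the transition and emission probabilities, the observation stream, and the counting rule are assumed placeholders, not the cited paper's Haar/Adaboost detector or its modified Viterbi algorithm.

import numpy as np

states = ["empty", "entering", "inside", "exiting"]

# Hypothetical model parameters (placeholders, not estimated from real data).
pi = np.array([0.97, 0.03, 0.00, 0.00])   # start mostly in 'empty'
A = np.array([                            # state transition probabilities
    [0.90, 0.10, 0.00, 0.00],             # empty    -> empty / entering
    [0.00, 0.50, 0.50, 0.00],             # entering -> entering / inside
    [0.00, 0.00, 0.70, 0.30],             # inside   -> inside / exiting
    [0.25, 0.00, 0.00, 0.75],             # exiting  -> empty / exiting
])
B = np.array([                            # P(observation | state); obs 0 = no blob, 1 = blob
    [0.90, 0.10],
    [0.30, 0.70],
    [0.05, 0.95],
    [0.30, 0.70],
])

def viterbi(obs, pi, A, B):
    """Standard log-space Viterbi decoding; returns the most likely state path."""
    n, T = A.shape[0], len(obs)
    logd = np.full((T, n), -np.inf)       # best log-probability per (time, state)
    back = np.zeros((T, n), dtype=int)    # back-pointers for path recovery
    logd[0] = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    for t in range(1, T):
        for j in range(n):
            scores = logd[t - 1] + np.log(A[:, j] + 1e-12)
            back[t, j] = int(np.argmax(scores))
            logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]] + 1e-12)
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Toy per-frame detections for the monitored road region (1 = blob present).
obs = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
path = viterbi(obs, pi, A, B)
# Count a vehicle each time the decoded state drops from 'exiting' back to 'empty'.
count = sum(1 for a, b in zip(path, path[1:])
            if states[a] == "exiting" and states[b] == "empty")
print([states[s] for s in path], "vehicles counted:", count)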
--- paper_title: Predicting dynamic computational workload of a self-driving car paper_content: This study aims at developing a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure safety of such dynamic systems, the worst-case-based CPU utilization analysis has been used; however, the nature of dynamically changing driving contexts requires more flexible approach for an efficient computing resource management. To better understand the dynamic CPU usage patterns, this paper presents an effort of designing a feature vector to represent the information of driving environments and of predicting, using regression methods, the selected tasks' CPU usage patterns given specific driving contexts. Experiments with real-world vehicle data show a promising result and validate the usefulness of the proposed method. --- paper_title: Intersection management for autonomous vehicles using iCACC paper_content: Recently several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated in the future that many vehicles will be autonomous and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively. --- paper_title: Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning paper_content: Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on its safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application but it is of a vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by the Support Vector Machines (SVM). It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. 
The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space resulting in 1) significant reduction of the problem complexity by eliminating irrelevant data; 2) computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real time dynamic applications. --- paper_title: Analyzing the past to prepare for the future: Writing a literature review paper_content: A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed. --- paper_title: Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking paper_content: Automatic detection of traffic lights has great importance to road safety.
This paper presents a novel approach that combines computer vision and machine learning techniques for accurate detection and classification of different types of traffic lights, including green and red lights both in circular and arrow forms. Initially, color extraction and blob detection are employed to locate the candidates. Subsequently, a pretrained PCA network is used as a multiclass classifier to obtain frame-by-frame results. Furthermore, an online multiobject tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. Several additional optimization techniques are employed to improve the detector performance and handle the traffic light transitions. When evaluated using the test video sequences, the proposed system can successfully detect the traffic lights on the scene with high accuracy and stable results. Considering hardware acceleration, the proposed technique is ready to be integrated into advanced driver assistance systems or self-driving vehicles. We build our own data set of traffic lights from recorded driving videos, including circular lights and arrow lights in different directions. Our experimental data set is available at http://computing.wpi.edu/Dataset.html. --- paper_title: Autonomous vehicles: from paradigms to technology paper_content: Mobility is a basic necessity of contemporary society and it is a key factor in global economic development. The basic requirements for the transport of people and goods are safety and duration of travel, but a number of additional criteria are also very important: energy saving, pollution, and passenger comfort. Due to advances in hardware and software, automation has penetrated transport systems massively, both in infrastructure and in vehicles, but the human is still the key element in vehicle driving. However, the classic 'human-in-the-loop', 'hands-on' concept of driving cars is being challenged by the self-driving startups working towards so-called 'Level 4 autonomy', which is defined as "a self-driving system that does not require human intervention in most scenarios". In this paper, a conceptual synthesis of the autonomous vehicle issue is made in connection with the artificial intelligence paradigm. It presents a classification of the tasks that take place during the driving of the vehicle and its modeling from the perspective of traditional control engineering and artificial intelligence. The issue of autonomous vehicle management is addressed on three levels: navigation, movement in traffic, and effective maneuvering and vehicle dynamics control. Each level is then described in terms of specific tasks, such as: route selection, planning and reconfiguration, recognition of traffic signs and reaction to signaling and traffic events, as well as control of effective speed, distance and direction. The approach will lead to a better understanding of the way technology is moving when talking about autonomous cars, smart/intelligent cars or intelligent transport systems. Keywords: self-driving vehicle, artificial intelligence, deep learning, intelligent transport systems. --- paper_title: Intelligent adaptive precrash control for autonomous vehicle agents (CBR Engine & hybrid A∗ path planner) paper_content: The PreCrash problem of intelligent control of autonomous vehicle robots is a very complex problem, especially in vehicle pre-crash scenarios and at points of intersection in real-time environments.
This paper presents a novel architecture of intelligent adaptive control for an autonomous vehicle agent that depends on Artificial Intelligence techniques, applying case-based reasoning (CBR): parallel CBR engines are implemented for the different scenarios of the pre-crash problem and the sub-problems of intersection safety and collision avoidance at the higher level of the controller, with an A∗ path planner for path planning, while at the lower levels it also uses some features of autonomous vehicle dynamics. Moreover, the planner is enhanced by combination with a case-based planner. All modules are presented and discussed. Experimental results are conducted in the framework of the Webots autonomous vehicle tool, and overall results are good for the CBR engine for adaptive control and also for the hybrid case-based planner and the A∗ and D∗ motion planners, along with conclusions and future work.
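The case-based reasoning entry above can be illustrated with a minimal retrieve-and-reuse step in Python. The sketch below is not the cited parallel CBR engine or its Webots integration; the scenario features, weights, case base and stored maneuvers are all assumed for illustration.

from dataclasses import dataclass
from math import sqrt

@dataclass
class Case:
    speed_mps: float        # ego speed
    gap_m: float            # distance to the obstacle / conflict point
    closing_mps: float      # closing speed toward the obstacle
    at_intersection: float  # 1.0 if the scenario occurs at an intersection
    action: str             # maneuver stored for this scenario

# Hypothetical case base of previously solved pre-crash scenarios.
CASE_BASE = [
    Case(14.0,  8.0, 12.0, 1.0, "emergency_brake"),
    Case(10.0, 25.0,  4.0, 1.0, "yield_and_creep"),
    Case(22.0, 40.0,  6.0, 0.0, "brake_and_steer_right"),
    Case(18.0, 60.0,  1.0, 0.0, "maintain_and_monitor"),
]

WEIGHTS = (0.3, 0.4, 0.25, 0.05)  # assumed relative importance of each feature

def distance(query, case):
    """Weighted Euclidean distance between a query scenario and a stored case."""
    q = (query.speed_mps, query.gap_m, query.closing_mps, query.at_intersection)
    c = (case.speed_mps, case.gap_m, case.closing_mps, case.at_intersection)
    return sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(WEIGHTS, q, c)))

def retrieve_action(query):
    """Retrieve the most similar stored case and reuse its maneuver."""
    best = min(CASE_BASE, key=lambda case: distance(query, case))
    return best.action

# Example query: fast approach to a close obstacle at an intersection.
query = Case(15.0, 10.0, 11.0, 1.0, action="?")
print(retrieve_action(query))  # -> "emergency_brake" for this toy case base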
Through applying the theory of hierarchical analysis, we take the safety and comfort of intelligent vehicle as the breakthrough point. And then we took the data of human drivers’ perception behavior as the training set and did regression analysis using the method of regression analysis of machine learning according to the charts of the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision. At last we established linear and nonlinear regression models (including the logarithmic model) for the training set. The change in thinking is the first novelty of this paper. Last but not least important, we verified the accuracy of the model through the comparison of different regression analysis. Eventually, it turned out that using logarithmic relationship to express the relationship between the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision is better than other models. In the aspect of application, we adopted the technology of multi-sensor fusion and transformed the acquired data from radar, navigation and image to log-polar coordinates, which makes us greatly simplify information when dealing with massive data problems from different sensors. This approach can not only reduce the complexity of the server’s processing but also drives the development of intelligent vehicle in information computing. We also make this model applied in the intelligent driver’s cognitive interactive debugging program, which can better explain and understand the intelligent driving behavior and improved the safety of intelligent vehicle to some extent. --- paper_title: Observation Based Creation of Minimal Test Suites for Autonomous Vehicles paper_content: Autonomous vehicles pose new challenges to their testing, which is required for safety certification. While Autonomous vehicles will use training sets as specification for machine learning algorithms, traditional validation depends on the system’s requirements and design.The presented approach uses training sets which are observations of traffic situations as system specification. It aims at deriving test-cases which incorporate the continuous behavior of other traffic participants. Hence, relevant scenarios are mined by analyzing and categorizing behaviors. By using abstract descriptions of the behaviors we discuss how test-cases can be compared to each other, so that similar test-cases are avoided in the test-suite. We demonstrate our approach using a combination of an overtake assistant and an adaptive cruise control. --- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the “machine” is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? 
Is it appropriate to apply the techniques used to establish trust in a today’s transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Towards Autonomous Cruising on Highways paper_content: The objectives of this paper are to discuss how vehicle automation technology can be used to benefit car drivers and also to propose a concept of an autonomous highway vehicle which improves highway driving safety. --- paper_title: Driver monitoring in the context of autonomous vehicle paper_content: Today research is going on within different essential functions need to bring automatic vehicles to the roads. However, there will be manual driven vehicles for many years before it is fully automated vehicles on roads. In complex situations, automated vehicles will need human assistance for long time. So, for road safety driver monitoring is even more important in the context of autonomous vehicle to keep the driver alert and awake. But, limited effort has been done in total integration between automatic vehicle and human driver. Therefore, human drivers need to be monitored and able to take over control within short notice. This papers provides an overview on autonomous vehicles and un-obstructive driver monitoring approaches that can be implemented in future autonomous vehicles to monitor driver e.g., to diagnose and predict stress, fatigue etc. in semi-automated vehicles. --- paper_title: A Decision Support System for Improving Resiliency of Cooperative Adaptive Cruise Control Systems paper_content: Abstract Advanced driver assistance systems (ADASs) enhance transportation safety and mobility, and reduce impacts on the environment and economical costs, through decreasing driver errors. One of the main features of ADASs is cruise control system that maintains the driver's desired speed without intervention from the driver. Adaptive cruise control (ACC) systems adjust the vehicle's speed to maintain a safe following distance to the vehicle in front. Adding vehicle-to-vehicle and vehicle-to-infrastructure communications (V2X) to ACC systems, result in cooperative adaptive cruise control (CACC) systems, where each vehicle has trajectory data of all other vehicles in the same lane. Although CACC systems offer advantages over ACC systems in increasing throughput and average speed, they are more vulnerable to cyber-security attacks. This is due to V2X communications that increase the attack surface from one vehicle to multiple vehicles. In this paper, we inject common types of attack on the application layer of connected vehicles to show their vulnerability in comparison to autonomous vehicles. We also proposed a decision support system that eliminates risk of inaccurate information. The microscopic work simulates a CACC system with a bi-objective PID controller and a fuzzy detector. A case study is illustrated in detail to verify the system functionality. --- paper_title: Securing wireless communications of connected vehicles with artificial intelligence paper_content: This work applies artificial intelligence (AI) to secure wireless communications of Connected Vehicles. 
Vehicular Ad-hoc Network (VANET) facilitates exchange of safety messages for collision avoidance, leading to self-driving cars. An AI system continuously learns to augment its ability in discerning and recognizing its surroundings. Such ability plays a vital role in evaluating the authenticity and integrity of safety messages for cars driven by computers. Falsification of meter readings, disablement of brake function, and other unauthorized controls by spoofed messages injected into VANET emerge as security threats. Countermeasures must be considered at design stage, as opposed to afterthought patches, effectively against cyber-attacks. However, current standards oversubscribe security measures by validating every message circulating among Connected Vehicles, making VANET subject to denial-of-service (DoS) Attacks. This interdisciplinary research shows promising results by searching the pivot point to balance between message authentication and DoS prevention, making security measures practical for the real-world deployment of Connected Vehicles. Message authentication adopts Context-Adaptive Signature Verification strategy, applying AI filters to reduce both communication and computation overhead. Combining OMNET++, a data network simulator, and SUMO, a road traffic simulator, with Veins, an open source framework for VANET simulation, the study evaluates AI filters comparatively under various attacking scenarios. The results lead to an effective design choice of securing wireless communications for Connected Vehicles. --- paper_title: Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract paper_content: Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. Thatfis why software safety standards emphasize high quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that can not be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. 
A system operating within a "safe" envelope knows that it is safe and can operate with full autonomy. A system operating within an "unsafe" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between "safe" and "unsafe" requires human supervision, because the autonomy system can not be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing "safe" envelopes, shrinking "unsafe" envelopes, and eliminating any gap areas. --- paper_title: Investigation into the Role of Rational Ethics in Crashes of Automated Vehicles paper_content: Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. This new technology has the potential to revolutionize many aspects of transportation, particularly safety. However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple progr... --- paper_title: Predicting dynamic computational workload of a self-driving car paper_content: This study aims at developing a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure safety of such dynamic systems, the worst-case-based CPU utilization analysis has been used; however, the nature of dynamically changing driving contexts requires more flexible approach for an efficient computing resource management. 
To better understand the dynamic CPU usage patterns, this paper presents an effort of designing a feature vector to represent the information of driving environments and of predicting, using regression methods, the selected tasks' CPU usage patterns given specific driving contexts. Experiments with real-world vehicle data show a promising result and validate the usefulness of the proposed method. --- paper_title: Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning paper_content: Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on its safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application but it is of a vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by the Support Vector Machines (SVM). It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space resulting in 1) significant reduction of the problem complexity by eliminating irrelevant data; 2) computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real time dynamic applications. --- paper_title: Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking paper_content: Automatic detection of traffic lights has great importance to road safety. This paper presents a novel approach that combines computer vision and machine learning techniques for accurate detection and classification of different types of traffic lights, including green and red lights both in circular and arrow forms. Initially, color extraction and blob detection are employed to locate the candidates. Subsequently, a pretrained PCA network is used as a multiclass classifier to obtain frame-by-frame results. Furthermore, an online multiobject tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. Several additional optimization techniques are employed to improve the detector performance and handle the traffic light transitions. When evaluated using the test video sequences, the proposed system can successfully detect the traffic lights on the scene with high accuracy and stable results. Considering hardware acceleration, the proposed technique is ready to be integrated into advanced driver assistance systems or self-driving vehicles. 
We build our own data set of traffic lights from recorded driving videos, including circular lights and arrow lights in different directions. Our experimental data set is available at http://computing.wpi.edu/Dataset.html. --- paper_title: Intelligent adaptive precrash control for autonomous vehicle agents (CBR Engine & hybrid A∗ path planner) paper_content: The pre-crash problem in intelligent control of autonomous vehicle robots is very complex, especially in vehicle pre-crash scenarios and at points of intersection in real-time environments. This paper presents a novel architecture of intelligent adaptive control for an autonomous vehicle agent that depends on Artificial Intelligence techniques, applying case-based reasoning: Parallel CBR Engines are implemented for different scenarios of the pre-crash problem and for the sub-problems of intersection safety and collision avoidance in the higher level of the controller, with an A∗ path planner for path planning, while at lower levels it also uses some features of the autonomous vehicle dynamics. Moreover, the planner is enhanced by combining it with a Case-Based Planner. All modules are presented and discussed. Experiments are conducted in the Webots autonomous vehicle framework, and the overall results are good for the CBR engine for adaptive control and for the hybrid Case-Based Planner with A∗ and D∗ motion planning; conclusions and future work are also given. --- paper_title: Autonomous vehicles safe-optimal trajectory selection based on big data analysis and predefined user preferences paper_content: Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting a safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work is mainly based on Big Data mining and analysis of real-life accident data and real-time connected vehicles' data. The decision of selecting this trajectory is made automatically without any human intervention. The human touch in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that is represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip. --- paper_title: A driverless vehicle demonstration on motorways and in urban environments paper_content: The constant growth of the number of vehicles in today's world demands improvements in the safety and efficiency of roads and road use. This can be in part satisfied by the implementation of autonomous driving systems because of their greater precision than human drivers in controlling a vehicle. As a result, the capacity of the roads would be increased by reducing the spacing between vehicles. Moreover, greener driving modes could be applied so that the fuel consumption, and therefore carbon emissions, would be reduced. This paper presents the results obtained by the AUTOPIA program during a public demonstration performed in June 2012.
This driverless experiment consisted of a 100-kilometre route around Madrid (Spain), including both urban and motorway environments. A first vehicle – acting as leader and manually driven – transmitted its relevant information – i.e., position and speed – through an 802.11p communication link to a second vehicle, which tracked the leader's trajectory and speed while maintaining a safe distance. The results were encouraging, and showed the viability of the AUTOPIA approach. --- paper_title: The research of prediction model on intelligent vehicle based on driver's perception paper_content: In the field of self-driving technology, the stability and comfort of the intelligent vehicle are the focus of attention. The paper applies cognitive psychology theory to the research of driving behavior and analyzes the behavior mechanism of the driver's operation. Applying the theory of hierarchical analysis, we take the safety and comfort of the intelligent vehicle as the breakthrough point. We then took the data of human drivers' perception behavior as the training set and performed regression analysis, using machine learning regression methods, on the charts of the vehicle speed and the visual field, the vehicle speed and the gaze point, as well as the vehicle speed and the dynamic vision. We then established linear and nonlinear regression models (including the logarithmic model) for the training set. The change in thinking is the first novelty of this paper. Last but not least, we verified the accuracy of the model through the comparison of different regression analyses. Eventually, it turned out that using a logarithmic relationship to express the relationship between the vehicle speed and the visual field, the vehicle speed and the gaze point, as well as the vehicle speed and the dynamic vision is better than other models. In the aspect of application, we adopted multi-sensor fusion and transformed the acquired data from radar, navigation and image to log-polar coordinates, which greatly simplifies the information when dealing with massive data from different sensors. This approach can not only reduce the complexity of the server's processing but also drive the development of intelligent vehicles in information computing. We also apply this model in the intelligent driver's cognitive interactive debugging program, which can better explain and understand the intelligent driving behavior and improves the safety of the intelligent vehicle to some extent. --- paper_title: Observation Based Creation of Minimal Test Suites for Autonomous Vehicles paper_content: Autonomous vehicles pose new challenges to their testing, which is required for safety certification. While autonomous vehicles will use training sets as the specification for machine learning algorithms, traditional validation depends on the system's requirements and design. The presented approach uses training sets which are observations of traffic situations as the system specification. It aims at deriving test-cases which incorporate the continuous behavior of other traffic participants. Hence, relevant scenarios are mined by analyzing and categorizing behaviors. By using abstract descriptions of the behaviors we discuss how test-cases can be compared to each other, so that similar test-cases are avoided in the test-suite. We demonstrate our approach using a combination of an overtake assistant and an adaptive cruise control.
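The test-suite minimisation idea in the entry above (compare recorded traffic scenarios through abstract behaviour descriptions and keep only one representative of each) can be sketched in a few lines. The sketch below is not the authors' tooling: the trace format, the discretisation thresholds and the signature scheme are illustrative assumptions.

```python
from collections import OrderedDict

# A recorded trace is a list of (ego_speed, lead_gap, lane) samples.
# Hypothetical discretisation thresholds; real tooling would derive them
# from the behaviours of interest (overtaking, ACC following, etc.).
def abstract_state(speed, gap, lane):
    speed_bin = "slow" if speed < 15 else "medium" if speed < 30 else "fast"
    gap_bin = "close" if gap < 15 else "nominal" if gap < 50 else "free"
    return (speed_bin, gap_bin, lane)

def signature(trace):
    """Abstract behaviour description: the sequence of discretised states
    with consecutive repetitions collapsed."""
    sig = []
    for speed, gap, lane in trace:
        state = abstract_state(speed, gap, lane)
        if not sig or sig[-1] != state:
            sig.append(state)
    return tuple(sig)

def minimal_test_suite(traces):
    """Keep one representative trace per abstract signature."""
    suite = OrderedDict()
    for trace in traces:
        suite.setdefault(signature(trace), trace)
    return list(suite.values())

if __name__ == "__main__":
    t1 = [(20, 60, 1), (22, 40, 1), (25, 12, 2)]   # approach and overtake
    t2 = [(21, 58, 1), (23, 38, 1), (26, 14, 2)]   # same behaviour, different numbers
    t3 = [(10, 10, 1), (9, 9, 1)]                  # close following
    print(len(minimal_test_suite([t1, t2, t3])))   # -> 2
```

The two overtaking traces collapse onto the same signature, so only one of them survives into the reduced suite.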
--- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the “machine” is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? Is it appropriate to apply the techniques used to establish trust in a today’s transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Advance Driver Assistance System (ADAS) - Speed bump detection paper_content: In Intelligent Transportation System, Advance Driver Assistance Systems (ADAS) plays a vital role. In ADAS, many research works are done in the area of traffic sign recognition, Forward Collision Warning, Automotive navigation system, Lane departure warning system but an another important area to look through is speed bumps detection. The recognition of speed bump is a safety to a human and a vehicle. Early research in speed bump detection is done with the help of sensors, accelerometer and GPS. In this paper, a novel method is presented to achieve speed bump detection and recognition either to alert or to interact directly with the vehicle. Detection of speed bump is recognized with a help of image processing concepts. This methodology is effortless and simple to implement without the investment of special sensors, hardware, Smartphone and GPS. This procedure suits very well for the roads constructed with proper marking, and can be used in self-driving car. --- paper_title: Reducing driver's behavioural uncertainties using an interdisciplinary approach: Convergence of Quantified Self, Automated Vehicles, Internet Of Things and Artificial Intelligence. paper_content: Abstract: Growing research progress in Internet of Things (IoT), automated/connected cars, Artificial Intelligence and person's data acquisition (Quantified Self) will help to reduce behavioral uncertainties in transport and unequivocally influence future transport landscapes. This vision paper argues that by capitalizing advances in data collection and methodologies from emerging research disciplines, we could make the driver amenable to a knowable and monitorable entity, which will improve road safety. We present an interdisciplinary framework, inspired by the Safe system, to extract knowledge from the large amount of available data during driving. The limitation of our approach is discussed. --- paper_title: Microscopic traffic simulation based evaluation of highly automated driving on highways paper_content: Highly automated driving on highways requires a complex artificial intelligence that makes optimal decisions based on ongoing measurements. 
Notably, no attempt has been made to evaluate the impacts of such a sophisticated system on traffic. Another important point is the impact of a continuous increase in the number of highly automated vehicles on future traffic safety and traffic flow. This work introduces a novel framework to evaluate these impacts in a developed microscopic traffic simulation environment. This framework is used on the one hand to ensure the functionality of the driving strategy in the early development stage. On the other hand, the impacts of increasing automation rates, up to one hundred percent, on traffic safety and traffic flow are evaluated. --- paper_title: Securing wireless communications of connected vehicles with artificial intelligence paper_content: This work applies artificial intelligence (AI) to secure wireless communications of Connected Vehicles. Vehicular Ad-hoc Network (VANET) facilitates exchange of safety messages for collision avoidance, leading to self-driving cars. An AI system continuously learns to augment its ability in discerning and recognizing its surroundings. Such ability plays a vital role in evaluating the authenticity and integrity of safety messages for cars driven by computers. Falsification of meter readings, disablement of brake function, and other unauthorized controls by spoofed messages injected into VANET emerge as security threats. Countermeasures must be considered at design stage, as opposed to afterthought patches, effectively against cyber-attacks. However, current standards oversubscribe security measures by validating every message circulating among Connected Vehicles, making VANET subject to denial-of-service (DoS) attacks. This interdisciplinary research shows promising results by searching for the pivot point to balance between message authentication and DoS prevention, making security measures practical for the real-world deployment of Connected Vehicles. Message authentication adopts a Context-Adaptive Signature Verification strategy, applying AI filters to reduce both communication and computation overhead. Combining OMNET++, a data network simulator, and SUMO, a road traffic simulator, with Veins, an open source framework for VANET simulation, the study evaluates AI filters comparatively under various attacking scenarios. The results lead to an effective design choice for securing wireless communications for Connected Vehicles. --- paper_title: Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract paper_content: Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. That is why software safety standards emphasize high quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them.
Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that can not be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a "safe" envelope knows that it is safe and can operate with full autonomy. A system operating within an "unsafe" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between "safe" and "unsafe" requires human supervision, because the autonomy system can not be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing "safe" envelopes, shrinking "unsafe" envelopes, and eliminating any gap areas. --- paper_title: Convolutional neural network based vehicle turn signal recognition paper_content: This Automated driving is an emerging technology in which a car performs recognition, decision making, and control. Recognizing surrounding vehicles is a key technology in order to generate a trajectory of ego vehicle. This paper is focused on detecting a turn signal information as one of the driver's intention for surrounding vehicles. Such information helps to predict their behavior in advance especially about lane change and turn left-or-right on intersection. Using their intension, the automated vehicle is able to generate the safety trajectory before they begin to change their behavior. The proposed method recognizes the turn signal for target vehicle based on mono-camera. It detects lighting state using Convolutional Neural Network, and then calculates a flashing frequency using Fast Fourier Transform. 
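The turn-signal entry above couples a per-frame CNN lamp-state classifier with an FFT over time. The FFT step can be sketched independently of any particular network: given a sequence of per-frame activation scores, estimate the dominant flashing frequency and accept it as an indicator only if it falls in a plausible blink band. The 1 to 2.5 Hz band and the simulated scores below are assumptions for the example, not values from the paper.

```python
import numpy as np

def blink_frequency(lamp_scores, fps):
    """Estimate the dominant flashing frequency (Hz) of a per-frame
    lamp-activation sequence via the FFT."""
    x = np.asarray(lamp_scores, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum[0] = 0.0                         # ignore any residual DC energy
    return float(freqs[int(np.argmax(spectrum))])

def is_turn_signal(lamp_scores, fps, f_min=1.0, f_max=2.5):
    """Regulated turn indicators typically flash at roughly 1-2 Hz."""
    f = blink_frequency(lamp_scores, fps)
    return f_min <= f <= f_max, f

if __name__ == "__main__":
    fps = 30.0
    t = np.arange(0.0, 4.0, 1.0 / fps)                         # 4 s of frames
    scores = (np.sin(2 * np.pi * 1.5 * t) > 0).astype(float)   # simulated 1.5 Hz blinking
    print(is_turn_signal(scores, fps))                         # -> (True, 1.5)
```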
--- paper_title: Predicting dynamic computational workload of a self-driving car paper_content: This study aims at developing a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure safety of such dynamic systems, the worst-case-based CPU utilization analysis has been used; however, the nature of dynamically changing driving contexts requires more flexible approach for an efficient computing resource management. To better understand the dynamic CPU usage patterns, this paper presents an effort of designing a feature vector to represent the information of driving environments and of predicting, using regression methods, the selected tasks' CPU usage patterns given specific driving contexts. Experiments with real-world vehicle data show a promising result and validate the usefulness of the proposed method. --- paper_title: Intersection management for autonomous vehicles using iCACC paper_content: Recently several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated in the future that many vehicles will be autonomous and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively. --- paper_title: Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning paper_content: Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on its safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application but it is of a vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by the Support Vector Machines (SVM). It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. 
The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space, resulting in 1) a significant reduction of the problem complexity by eliminating irrelevant data; 2) a computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real-time dynamic applications. --- paper_title: Road Terrain detection and Classification algorithm based on the Color Feature extraction paper_content: Today, in the context of road vehicles, intelligent systems and autonomous vehicles, one of the important problems that should be solved is road terrain classification, which improves driving safety and passenger comfort. There are many studies in this area that have improved the accuracy of classification. An improved classification method using color feature extraction is proposed in this paper. Color features of the images are used to describe the road terrain type, and a Neural Network (NN) is then used to classify the color features extracted from the images. The proposed idea is to identify the road type by processing digital images taken from the roads with a camera installed on a car. Asphalt, Grass, Dirt and Rocky are the four types of terrain that are identified in this study. --- paper_title: Autonomous vehicles: from paradigms to technology paper_content: Mobility is a basic necessity of contemporary society and it is a key factor in global economic development. The basic requirements for the transport of people and goods are: safety and duration of travel, but also a number of additional criteria are very important: energy saving, pollution, passenger comfort. Due to advances in hardware and software, automation has penetrated massively in transport systems both on infrastructure and on vehicles, but man is still the key element in vehicle driving. However, the classic concept of 'human-in-the-loop' in terms of 'hands on' in driving the cars is being challenged by the self-driving startups working towards so-called 'Level 4 autonomy', which is defined as "a self-driving system that does not require human intervention in most scenarios". In this paper, a conceptual synthesis of the autonomous vehicle issue is made in connection with the artificial intelligence paradigm. It presents a classification of the tasks that take place during the driving of the vehicle and its modeling from the perspective of traditional control engineering and artificial intelligence. The issue of autonomous vehicle management is addressed on three levels: navigation, movement in traffic, and effective maneuver and vehicle dynamics control. Each level is then described in terms of specific tasks, such as: route selection, planning and reconfiguration, recognition of traffic signs and reaction to signaling and traffic events, as well as control of effective speed, distance and direction. The approach will lead to a better understanding of the way technology is moving when talking about autonomous cars, smart/intelligent cars or intelligent transport systems. Keywords: self-driving vehicle, artificial intelligence, deep learning, intelligent transport systems. --- paper_title: Intelligent adaptive precrash control for autonomous vehicle agents (CBR Engine & hybrid A∗ path planner) paper_content: The pre-crash problem in intelligent control of autonomous vehicle robots is very complex, especially in vehicle pre-crash scenarios and at points of intersection in real-time environments.
This Paper presents a novel architecture of Intelligent adaptive control for autonomous vehicle agent that depends on Artificial Intelligence Techniques that applies case-based reasoning techniques, where Parallel CBR Engines are implemented for different scenarios' of PreCrash problem and sub-problems of intersection safety and collision avoidance, in the higher level of the controller and A∗ path planner for path planning and at lower-levels it also uses some features of autonomous vehicle dynamics. Moreover, the planner is enhanced by combination of Case-Based Planner. All modules are presented and discussed. Experimental results are conducted in the framework of Webots autonomous vehicle tool and overall results are good for the CBR Engine for Adaptive control and also for the hybrid Case-Based Planner, A∗ and D∗ motion planner along with conclusion and future work. --- paper_title: RobIL — Israeli program for research and development of autonomous UGV: Performance evaluation methodology paper_content: RobIL program is a collaborative effort of several leading Israeli academic institutions and industries in the field of robotics. The current objective of the program is to develop an Autonomous Off-Road Unmanned Ground Vehicle (UGV). The intention is to deal through this project with some of the most urgent challenges in the field of autonomous vehicles. One of these challenges, is the lack of efficient Safety Performance Verification technique, as the existing tools for hardware and software reliability and safety engineering do not provide a comprehensive solution regarding algorithms that are based on Artificial Intelligent (AI) and Machine Learning. In order to deal with this gap a novel methodology that is based on statistical testing in simulated environment, is presented. In this work the RobIL UGV project with special emphasis on the performance and safety verification methodology is presented. --- paper_title: Are all objects equal? Deep spatio-temporal importance prediction in driving videos paper_content: Understanding intent and relevance of surrounding agents from video is an essential task for many applications in robotics and computer vision. The modeling and evaluation of contextual, spatio-temporal situation awareness is particularly important in the domain of intelligent vehicles, where a robot is required to smoothly navigate in a complex environment while also interacting with humans. In this paper, we address these issues by studying the task of on-road object importance ranking from video. First, human-centric object importance annotations are employed in order to analyze the relevance of a variety of multi-modal cues for the importance prediction task. A deep convolutional neural network model is used for capturing video-based contextual spatial and temporal cues of scene type, driving task, and object properties related to intent. Second, the proposed importance annotations are used for producing novel analysis of error types in image-based object detectors. Specifically, we demonstrate how cost-sensitive training, informed by the object importance annotations, results in improved detection performance on objects of higher importance. This insight is essential for an application where navigation mistakes are safety-critical, and the quality of automation and human-robot interaction is key. 
Highlights: We study a notion of object relevance, as measured in a spatio-temporal context of driving a vehicle. Various spatio-temporal object and scene cues are analyzed for the task of object importance classification. Human-centric metrics are employed for evaluating object detection and studying data bias. Importance-guided training of object detectors is proposed, showing significant improvement over an importance-agnostic baseline. --- paper_title: Design of Real-Time Transition from Driving Assistance to Automation Function: Bayesian Artificial Intelligence Approach paper_content: Forecasts of automation in driving suggest that widespread market penetration of fully autonomous vehicles will be decades away and that, before such vehicles gain acceptance by all stakeholders, there will be a need for driving assistance in key driving tasks, supplemented by automated active safety capability. This paper advances a Bayesian Artificial Intelligence model for the design of real-time transition from assisted driving to automated driving under conditions of high probability of a collision if no action is taken to avoid the collision. Systems can be designed to feature collision warnings as well as automated active safety capabilities. In addition to the high-level architecture of the Bayesian transition model, example scenarios illustrate the function of the real-time transition model. --- paper_title: Autonomous vehicles safe-optimal trajectory selection based on big data analysis and predefined user preferences paper_content: Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting a safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work is mainly based on Big Data mining and analysis of real-life accident data and real-time connected vehicles' data. The decision of selecting this trajectory is made automatically without any human intervention. The human touch in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that is represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip. --- paper_title: A driverless vehicle demonstration on motorways and in urban environments paper_content: The constant growth of the number of vehicles in today's world demands improvements in the safety and efficiency of roads and road use. This can be in part satisfied by the implementation of autonomous driving systems because of their greater precision than human drivers in controlling a vehicle. As a result, the capacity of the roads would be increased by reducing the spacing between vehicles. Moreover, greener driving modes could be applied so that the fuel consumption, and therefore carbon emissions, would be reduced. This paper presents the results obtained by the AUTOPIA program during a public demonstration performed in June 2012.
This driverless experiment consisted of a 100-kilometre route around Madrid (Spain), including both urban and motorway environments. A first vehicle – acting as leader and manually driven – transmitted its relevant information – i.e., position and speed – through an 802.11p communication link to a second vehicle, which tracked the leader's trajectory and speed while maintaining a safe distance. The results were encouraging, and showed the viability of the AUTOPIA approach. --- paper_title: Drivers' Manoeuvre Classification for Safe HRI paper_content: Ever increasing autonomy of machines and the need to interact with them creates challenges to ensure safe operation. Recent technical and commercial interest in increasing autonomy of vehicles has led to the integration of more sensors and actuators inside the vehicle, making them more like robots. For interaction with semi-autonomous cars, the use of these sensors could help to create new safety mechanisms. This work explores the concept of using motion tracking (i.e. skeletal tracking) data gathered from the driver whilst driving to learn to classify the manoeuvre being performed. A kernel-based classifier is trained with empirically selected features based on data gathered from a Kinect V2 sensor in a controlled environment. This method shows that skeletal tracking data can be used in a driving scenario to classify manoeuvres and sets a background for further work. --- paper_title: Cognitive Vehicle Design Guided by Human Factors and Supported by Bayesian Artificial Intelligence paper_content: Researchers and automotive industry experts are in favour of accelerating the development of "cognitive vehicle" features, which will integrate intelligent technology and human factors to provide a non-distracting interface for safety, efficiency and environmental sustainability in driving. In addition, infotainment capability is a desirable feature of the vehicle of the future, provided that it does not add to driver distraction. Further, these features are expected to be a stepping-stone to fully autonomous driving. This paper describes advances in the driver-vehicle interface and presents highlights of research in design. Specifically, the design features of the cognitive vehicle are presented and measures needed to address major issues are noted. The application of Bayesian Artificial Intelligence is described in the design of a driving assistance system, and design considerations are advanced in order to overcome issues in in-vehicle telematics systems. Finally, conclusions are advanced based on coverage of the research material in the paper. --- paper_title: The research of prediction model on intelligent vehicle based on driver's perception paper_content: In the field of self-driving technology, the stability and comfort of the intelligent vehicle are the focus of attention. The paper applies cognitive psychology theory to the research of driving behavior and analyzes the behavior mechanism of the driver's operation. Applying the theory of hierarchical analysis, we take the safety and comfort of the intelligent vehicle as the breakthrough point. We then took the data of human drivers' perception behavior as the training set and performed regression analysis, using machine learning regression methods, on the charts of the vehicle speed and the visual field, the vehicle speed and the gaze point, as well as the vehicle speed and the dynamic vision.
At last we established linear and nonlinear regression models (including the logarithmic model) for the training set. The change in thinking is the first novelty of this paper. Last but not least important, we verified the accuracy of the model through the comparison of different regression analysis. Eventually, it turned out that using logarithmic relationship to express the relationship between the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision is better than other models. In the aspect of application, we adopted the technology of multi-sensor fusion and transformed the acquired data from radar, navigation and image to log-polar coordinates, which makes us greatly simplify information when dealing with massive data problems from different sensors. This approach can not only reduce the complexity of the server’s processing but also drives the development of intelligent vehicle in information computing. We also make this model applied in the intelligent driver’s cognitive interactive debugging program, which can better explain and understand the intelligent driving behavior and improved the safety of intelligent vehicle to some extent. --- paper_title: Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau paper_content: The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects one for the for the Army, which is just getting started, called the AirLand Battle Management program (ALBM) which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other more established program for the Navy is the Fleet Command Center Battle Management Program (FCCBIVIP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in a evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision-aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV) which integrates in a major robotic testbed the technologies for dynamic image understanding, knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. 
This paper provides an overview of the technical and program management plans being used in evolving this critical national technology. --- paper_title: Road signs classification by ANN for real-time implementation paper_content: Traffic safety is an important problem for autonomous vehicles. The development of Traffic Sign Recognition (TSR) dedicated to reducing the number of fatalities and the severity of road accidents is an important and an active research area. Recently, most TSR approaches of machine learning and image processing have achieved advanced performance in traditional natural scenes. However, there exists a limitation on the accuracy in road sign recognition and on time consuming. This paper proposes a real-time algorithm for shape classification of traffic signs and their recognition to provide a driver alert system. The proposed algorithm is mainly composed of two phases: shape classification and content classification. This algorithm takes as input a list of Bounding Boxes generated in a previous work, and will classify them. The traffic sign's shape is classified by an artificial neural network (ANN). Traffic signs are classified according to their shape characteristics, as triangular, squared and circular shapes. Combining color and shape information, traffic signs are classified into one of the following classes: danger, information, obligation or prohibition. The classified circular and triangular shapes are passed on to the second ANN in the third phase. These identify the pictogram of the road sign. The output of the second artificial neural network allows the full classification of the road sign. The algorithm proposed is evaluated on a dataset of road signs of a Tunisian database sign. --- paper_title: Observation Based Creation of Minimal Test Suites for Autonomous Vehicles paper_content: Autonomous vehicles pose new challenges to their testing, which is required for safety certification. While Autonomous vehicles will use training sets as specification for machine learning algorithms, traditional validation depends on the system’s requirements and design.The presented approach uses training sets which are observations of traffic situations as system specification. It aims at deriving test-cases which incorporate the continuous behavior of other traffic participants. Hence, relevant scenarios are mined by analyzing and categorizing behaviors. By using abstract descriptions of the behaviors we discuss how test-cases can be compared to each other, so that similar test-cases are avoided in the test-suite. We demonstrate our approach using a combination of an overtake assistant and an adaptive cruise control. --- paper_title: Learning from Demonstration Using GMM, CHMM and DHMM: A Comparison paper_content: Greater production and improved safety in the mining industry can be enhanced by the use of automated vehicles. This paper presents results in applying Learning from Demonstration (LfD) to a laboratory semi-automated mine inspection robot following a path through a simulated mine. Three methods, Gaussian Mixture Model (GMM), Continuous Hidden Markov Model (CHMM), and Discrete Hidden Markov Model (DHMM) were used to implement the LfD and a comparison of the implementation results is presented. The results from the different models were then used to implement a novel, optimised path decomposition technique that may be suitable for possible robot use within an underground mine. 
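To make the GMM flavour of Learning from Demonstration above concrete, the sketch below fits a Gaussian mixture to time-stamped (x, y) samples from several demonstrated runs and reproduces a path with a very reduced form of Gaussian mixture regression. It is an illustrative sketch under assumed data, not the paper's implementation; the HMM variants and the cross-covariance terms of full GMR are deliberately omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_demonstrations(demos, n_components=6, seed=0):
    """Fit a GMM to stacked (time, x, y) samples from several demonstrations."""
    data = np.vstack(demos)
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(data)

def reproduce_path(gmm, times):
    """Reduced Gaussian-mixture-regression step: weight each component's (x, y)
    mean by its responsibility for the query time. Full GMR would also use the
    time/position cross-covariance blocks; they are omitted here for brevity."""
    path = []
    for t in times:
        w = np.array([
            gmm.weights_[k]
            * np.exp(-0.5 * (t - gmm.means_[k, 0]) ** 2 / gmm.covariances_[k, 0, 0])
            / np.sqrt(gmm.covariances_[k, 0, 0])
            for k in range(gmm.n_components)
        ])
        w /= w.sum()
        path.append((w[:, None] * gmm.means_[:, 1:]).sum(axis=0))
    return np.asarray(path)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    nominal = np.column_stack([t, np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    demos = [nominal + 0.02 * rng.standard_normal(nominal.shape) for _ in range(5)]
    gmm = fit_demonstrations(demos)
    print(reproduce_path(gmm, [0.0, 0.25, 0.5]).round(2))
```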
--- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the “machine” is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? Is it appropriate to apply the techniques used to establish trust in a today’s transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Towards Autonomous Cruising on Highways paper_content: The objectives of this paper are to discuss how vehicle automation technology can be used to benefit car drivers and also to propose a concept of an autonomous highway vehicle which improves highway driving safety. --- paper_title: Driver monitoring in the context of autonomous vehicle paper_content: Today research is going on within different essential functions need to bring automatic vehicles to the roads. However, there will be manual driven vehicles for many years before it is fully automated vehicles on roads. In complex situations, automated vehicles will need human assistance for long time. So, for road safety driver monitoring is even more important in the context of autonomous vehicle to keep the driver alert and awake. But, limited effort has been done in total integration between automatic vehicle and human driver. Therefore, human drivers need to be monitored and able to take over control within short notice. This papers provides an overview on autonomous vehicles and un-obstructive driver monitoring approaches that can be implemented in future autonomous vehicles to monitor driver e.g., to diagnose and predict stress, fatigue etc. in semi-automated vehicles. --- paper_title: Advance Driver Assistance System (ADAS) - Speed bump detection paper_content: In Intelligent Transportation System, Advance Driver Assistance Systems (ADAS) plays a vital role. In ADAS, many research works are done in the area of traffic sign recognition, Forward Collision Warning, Automotive navigation system, Lane departure warning system but an another important area to look through is speed bumps detection. The recognition of speed bump is a safety to a human and a vehicle. Early research in speed bump detection is done with the help of sensors, accelerometer and GPS. In this paper, a novel method is presented to achieve speed bump detection and recognition either to alert or to interact directly with the vehicle. Detection of speed bump is recognized with a help of image processing concepts. This methodology is effortless and simple to implement without the investment of special sensors, hardware, Smartphone and GPS. 
This procedure is well suited to roads constructed with proper markings, and can be used in self-driving cars. --- paper_title: Road junction detection from 3D point clouds paper_content: Detecting changing traffic conditions is of primal importance for the safety of autonomous cars navigating in urban environments. Among the traffic situations that require more attention and careful planning, road junctions are the most significant. This work presents an empirical study of the application of well known machine learning techniques to create a robust method for road junction detection. Features are extracted from 3D pointclouds corresponding to single frames of data collected by a laser rangefinder. Three well known classifiers (support vector machines, adaptive boosting and artificial neural networks) are used to classify them into "junctions" or "roads". The best performing classifier is used in the next stage, where structured classifiers (hidden Markov models and conditional random fields) are used to incorporate contextual information, in an attempt to improve the performance of the method. We tested and compared these approaches on datasets from two different 3D laser scanners, and in two different countries, Germany and Brazil. --- paper_title: Reducing driver's behavioural uncertainties using an interdisciplinary approach: Convergence of Quantified Self, Automated Vehicles, Internet Of Things and Artificial Intelligence. paper_content: Growing research progress in Internet of Things (IoT), automated/connected cars, Artificial Intelligence and person's data acquisition (Quantified Self) will help to reduce behavioral uncertainties in transport and unequivocally influence future transport landscapes. This vision paper argues that by capitalizing on advances in data collection and methodologies from emerging research disciplines, we could make the driver amenable to a knowable and monitorable entity, which will improve road safety. We present an interdisciplinary framework, inspired by the Safe System, to extract knowledge from the large amount of data available during driving. The limitation of our approach is discussed. --- paper_title: Safety-critical computer systems paper_content: Increasingly, microcomputers are being used in applications where their correct operation is vital to ensure the safety of the public and the environment: from anti-lock braking systems in automobiles, to fly-by-wire aircraft, to shut-down systems at nuclear power plants. It is, therefore, vital that engineers are aware of the safety implications of the systems they develop. This book is an introduction to the field of safety-critical computer systems, and is written for any engineer who uses microcomputers within real-time embedded systems. It assumes no prior knowledge of safety, or of any specific computer hardware or programming language.
This book covers all phases of the life of a safety-critical system from its conception and specification, through to its certification, installation, service and decommissioning; provides information on how to assess the safety implications of projects, and determine the measures necessary to develop systems to meet safety needs; gives a thorough grounding in the techniques available to investigate the safety aspects of computer-based systems and the methods that may be used to enhance their dependability; and uses case studies and worked examples from a wide range of industrial sectors including the nuclear, aircraft, automotive and consumer products industries. This text is intended for both engineering and computer science students, and for practising engineers within computer-related industries. The approach taken is equally suited to engineers who consider computers from a hardware, software or systems viewpoint. --- paper_title: Low Level Control Layer Definition for Autonomous Vehicles Based on Fuzzy Logic paper_content: Abstract The intelligent control of autonomous vehicles is one of the most important challenges that intelligent transport systems face today. The application of artificial intelligence techniques to the automatic management of vehicle actuators enables the different Advanced Driver Assistance Systems (ADAS) or even autonomous driving systems, to perform a low level management in a very similar way to that of human drivers by improving safety and comfort. In this paper, we present a control schema to manage these low level vehicle actuators (steering throttle and brake) based on fuzzy logic, an artificial intelligence technique that is able to mimic human procedural behavior, in this case, when performing the driving task. This automatic low level control system has been defined, implemented and tested in a Citroen C3 testbed vehicle, whose actuators have been automated and can receive control signals from an onboard computer where the soft computing-based control system is running. --- paper_title: Monocular vision-based object recognition for autonomous vehicle driving in a real driving environment paper_content: Nowadays, many attentions have been devoted to autonomous vehicles because the automation of driving technology has a large number of benefits, such as the minimization of risks, the improvement of mobility and ease of drivers. Among many technologies for autonomous driving, road environmental recognition is one of the key issues. In this paper, we present the test results of various object detection algorithms using single monocular camera for autonomous vehicle in real driving conditions. The vision recognition system tested in this paper has three main recognition parts: pedestrian detection, traffic sign and traffic light recognition. We use Histogram of Gradients (HOG) features and detect the pedestrians by Support Vector Machine (SVM). Also features of traffic signs are extracted by Principal Components Analysis (PCA) and canny edge detection is used for traffic lights. These two signals are classified by Neural Network (NN). Algorithms that we tested are implemented in General-Purpose computing on Graphics Processing Units (GPGPU). We show the effectiveness of these methods in real-time applications for autonomous driving. 
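The pedestrian branch of the vision pipeline above (HOG features classified by an SVM) is available off the shelf in OpenCV, which makes it easy to sketch. The snippet uses OpenCV's stock people detector as a stand-in for the detector described in the abstract; the input file name and the score threshold are placeholders, and the GPU acceleration discussed by the authors is not reproduced.

```python
import cv2

# OpenCV's stock HOG descriptor with its bundled linear-SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame, min_score=0.5):
    """Return (x, y, w, h) boxes for pedestrian candidates in a BGR frame."""
    boxes, scores = hog.detectMultiScale(frame, winStride=(8, 8),
                                         padding=(8, 8), scale=1.05)
    return [tuple(int(v) for v in box)
            for box, score in zip(boxes, scores) if float(score) >= min_score]

if __name__ == "__main__":
    # "driving_clip.mp4" is a placeholder file name for a recorded drive.
    cap = cv2.VideoCapture("driving_clip.mp4")
    ok, frame = cap.read()
    if ok:
        for (x, y, w, h) in detect_pedestrians(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("pedestrians.png", frame)
    cap.release()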
--- paper_title: A Decision Support System for Improving Resiliency of Cooperative Adaptive Cruise Control Systems paper_content: Abstract Advanced driver assistance systems (ADASs) enhance transportation safety and mobility, and reduce impacts on the environment and economical costs, through decreasing driver errors. One of the main features of ADASs is cruise control system that maintains the driver's desired speed without intervention from the driver. Adaptive cruise control (ACC) systems adjust the vehicle's speed to maintain a safe following distance to the vehicle in front. Adding vehicle-to-vehicle and vehicle-to-infrastructure communications (V2X) to ACC systems, result in cooperative adaptive cruise control (CACC) systems, where each vehicle has trajectory data of all other vehicles in the same lane. Although CACC systems offer advantages over ACC systems in increasing throughput and average speed, they are more vulnerable to cyber-security attacks. This is due to V2X communications that increase the attack surface from one vehicle to multiple vehicles. In this paper, we inject common types of attack on the application layer of connected vehicles to show their vulnerability in comparison to autonomous vehicles. We also proposed a decision support system that eliminates risk of inaccurate information. The microscopic work simulates a CACC system with a bi-objective PID controller and a fuzzy detector. A case study is illustrated in detail to verify the system functionality. --- paper_title: Microscopic traffic simulation based evaluation of highly automated driving on highways paper_content: Highly automated driving on highways requires a complex artificial intelligence that makes optimal decisions based on ongoing measurements. Notably no attempt has been performed to evaluate the impacts of such a sophisticated system on traffic. Another important point is the impact of continuous increase in the number of highly automated vehicles on future traffic safety and traffic flow. This work introduces a novel framework to evaluate these impacts in a developed microscopic traffic simulation environment. This framework is used on the one hand to ensure the functionality of the driving strategy in the early development stage. On the other hand, the impacts of increasing automation rates, up to hundred percent, on traffic safety and traffic flow is evaluated. --- paper_title: Concerns on the Differences Between AI and System Safety Mindsets Impacting Autonomous Vehicles Safety paper_content: The inflection point in the development of some core technologies enabled the Autonomous Vehicles (AV). The unprecedented growth rate in Artificial Intelligence (AI) and Machine Learning (ML) capabilities, focusing only on AVs, is expected to shift the transportation paradigm and bring relevant benefits to the society, such as accidents reduction. However, recent AVs accidents resulted in life losses. This paper presents a viewpoint discussion based on findings from a preliminary exploratory literature review. It was identified an important misalignment between AI and Safety research communities regarding the impact of AI on the safety risks in AV. This paper promotes this discussion, raises concerns on the potential consequences and suggests research topics to reduce the differences between AI and system safety mindsets. 
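A minimal sketch in the spirit of the CACC entry above: a PID loop regulates a constant time-gap spacing policy, and a crude consistency check stands in for the paper's fuzzy detector by falling back to radar-only (ACC-like) operation when the V2V report disagrees with the on-board measurement. The gains, the time gap and the tolerance below are illustrative assumptions, not the paper's tuned values.

```python
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def plausible(v2v_gap, radar_gap, tolerance=5.0):
    """Crude stand-in for the paper's fuzzy detector: distrust a V2V-reported
    gap that disagrees with the on-board radar by more than `tolerance` metres."""
    return abs(v2v_gap - radar_gap) <= tolerance

def cacc_accel(pid, v2v_gap, radar_gap, ego_speed, time_gap=1.2, dt=0.1):
    """Constant time-gap spacing policy; falls back to the radar measurement
    when the V2V report looks implausible."""
    gap = v2v_gap if plausible(v2v_gap, radar_gap) else radar_gap
    desired_gap = 2.0 + time_gap * ego_speed      # standstill margin + time gap
    return pid.step(gap - desired_gap, dt)        # commanded acceleration (m/s^2)

if __name__ == "__main__":
    pid = PID(kp=0.4, ki=0.02, kd=0.1)
    # A spoofed 55 m V2V gap is rejected because the radar only sees 30 m.
    print(cacc_accel(pid, v2v_gap=55.0, radar_gap=30.0, ego_speed=25.0))
```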
--- paper_title: Novel architecture for cloud based next gen vehicle platform — Transition from today to 2025 paper_content: The sole idea of the autonomous driving is to reduce the number of accidents which will be caused by human errors. There will be a phase where autonomous vehicles will have to take part with the human driven vehicles. In this phase autonomous vehicle and vehicle drivers must be careful in driving maneuvers, since it is impossible to just have autonomous vehicles on the road while the transition to fully autonomous driving takes place. To make the system more reliable and robust, authors feel that it must be substantially improved and to achieve this 3 methodologies have been proposed. The use cases are derived from the camera placed in the suitable positions of every vehicle. This creates several use cases with different driving behaviors, which truly reflects the conditions which the autonomous car needs to undergo in series. Authors propose several architectures to address the safety related issues. Since, the number of use cases are so random that it becomes practically impossible to manage data, so we propose to have a secured cloud storage. The cloud provides a secure and reliable data management system to the machine learning algorithm computation. The data management till now is effectively done via trained unique use cases. This make a big data which is encapsulation of different unique use cases under different driving conditions. So we conclude that the usage of ML and cloud based data management is the way forward for handling autonomous vehicle reliably. This may not be the complete solution, but nevertheless we are one step closer towards 2025. This concept can be implemented only in co-operation of the OEMs and Public authorities. --- paper_title: Key Considerations in the Development of Driving Automation Systems paper_content: The historical roles of drivers, vehicle manufacturers, federal and state regulators, and law enforcement agencies in automotive safety is well understood. However, the increasing deployment of driving automation technologies to support various comfort, convenience, efficiency, productivity, mobility, and possibly safety features has the potential to alter this understanding. In order to facilitate clarity in discussing the topic of driving automation with other stakeholders and to clarify the level(s) of automation on which the agency is currently focusing its efforts, the National Highway Traffic Safety Administration (NHTSA) released a Preliminary Statement of Policy (SOP) concerning Automated Vehicles that included its automation levels. In this paper, the authors present key factors for consideration in each automation level which are based upon Society of Automotive Engineers (SAE) J3016. These factors focus on adding more specificity with regard to the distribution of the driving tasks between the driver and the automation system. The result of this effort has led to a refinement of an understanding of the automation levels based on the nature of the vehicle control aspect provided by the feature, the nature of the environmental sensing and response, the fallback strategy employed, and the feature’s scope of operation. --- paper_title: Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract paper_content: Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. 
It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. That is why software safety standards emphasize high quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that can not be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a "safe" envelope knows that it is safe and can operate with full autonomy. A system operating within an "unsafe" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between "safe" and "unsafe" requires human supervision, because the autonomy system can not be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing "safe" envelopes, shrinking "unsafe" envelopes, and eliminating any gap areas.
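The keynote abstract above argues for safety cases built on conformance to sets of safe and unsafe envelopes rather than on proving the autonomy stack correct. The following is a minimal sketch of such an envelope monitor; the state variables, predicates and thresholds (speed, time gap, perception confidence) are assumptions invented for illustration, not envelopes proposed in the keynote.

```python
# Minimal sketch of the envelope-monitor idea from the keynote above: a set of
# "safe" envelopes and a set of "unsafe" envelopes defined over the vehicle's
# operational state. The concrete predicates and thresholds are invented for
# illustration only.

from dataclasses import dataclass
from enum import Enum


@dataclass
class VehicleState:
    speed_mps: float
    gap_to_lead_m: float
    perception_confidence: float  # 0..1, reported by the autonomy stack


class Mode(Enum):
    FULL_AUTONOMY = "full_autonomy"     # inside every safe envelope
    FAILSAFE = "failsafe"               # inside at least one unsafe envelope
    HUMAN_SUPERVISION = "supervision"   # the gap between safe and unsafe


# Each envelope is a boundary in the operational state space.
SAFE_ENVELOPES = [
    lambda s: s.speed_mps <= 25.0,
    lambda s: s.gap_to_lead_m >= 2.0 * s.speed_mps,   # roughly a 2 s time gap
    lambda s: s.perception_confidence >= 0.9,
]

UNSAFE_ENVELOPES = [
    lambda s: s.gap_to_lead_m < 0.5 * s.speed_mps,    # critically short gap
    lambda s: s.perception_confidence < 0.5,
]


def decide_mode(state: VehicleState) -> Mode:
    """Intersection of safe envelopes permits full autonomy; union of unsafe
    envelopes triggers a validated failsafe; anything in between needs a human."""
    if any(env(state) for env in UNSAFE_ENVELOPES):
        return Mode.FAILSAFE
    if all(env(state) for env in SAFE_ENVELOPES):
        return Mode.FULL_AUTONOMY
    return Mode.HUMAN_SUPERVISION


if __name__ == "__main__":
    print(decide_mode(VehicleState(20.0, 50.0, 0.95)))  # full autonomy
    print(decide_mode(VehicleState(20.0, 35.0, 0.80)))  # gap region -> supervision
    print(decide_mode(VehicleState(20.0, 8.0, 0.95)))   # unsafe gap -> failsafe
```

The third mode corresponds to the "gap areas" discussed in the keynote, where the system cannot establish that it is safe and human supervision is required.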
--- paper_title: Investigation into the Role of Rational Ethics in Crashes of Automated Vehicles paper_content: Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. This new technology has the potential to revolutionize many aspects of transportation, particularly safety. However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple progr... --- paper_title: A Formal Approach to Autonomous Vehicle Coordination paper_content: Increasing demands on safety and energy efficiency will require higher levels of automation in transportation systems. This involves dealing with safety-critical distributed coordination. In this paper we demonstrate how a Satisfiability Modulo Theories (SMT) solver can be used to prove correctness of a vehicular coordination problem. We formalise a recent distributed coordination protocol and validate our approach using an intersection collision avoidance (ICA) case study. The system model captures continuous time and space, and an unbounded number of vehicles and messages. The safety of the case study is automatically verified using the Z3 theorem prover. --- paper_title: Learning Human-Level AI abilities to drive racing cars paper_content: The final purpose of Automated Vehicle Guidance Systems (AVGSs) is to obtain fully automatic driven vehicles to optimize transport systems, minimizing delays, increasing safety and comfort. In order to achieve these goals, lots of Artificial Intelligence techniques must be improved and merged. In this article we focus on learning and simulating the Human-Level decisions involved in driving a racing car. To achieve this, we have studied the convenience of using Neuroevolution of Augmenting Topologies (NEAT). To experiment and obtain comparative results we have also developed an online videogame prototype called Screaming Racers, which is used as test-bed environment. --- paper_title: Embedding ethical principles in collective decision support systems paper_content: The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need. In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would accept and trust more machines that behave as ethically as other humans in the same environment.
Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, often machines and humans will need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles. --- paper_title: Intersection management for autonomous vehicles using iCACC paper_content: Recently several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated in the future that many vehicles will be autonomous and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively. --- paper_title: Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning paper_content: Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on its safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application but it is of a vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by the Support Vector Machines (SVM). It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space resulting in 1) significant reduction of the problem complexity by eliminating irrelevant data; 2) computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real time dynamic applications. --- paper_title: Handbook of Human Factors and Ergonomics paper_content: Partial table of contents: THE HUMAN FACTORS FUNCTION. System Design and Evaluation (S. Czaja). THE HUMAN FACTORS FUNDAMENTALS. Perceptual Motor Skills and Human Motion Analysis (D. Regan). Biomechanics of the Human Body (W. Marras). JOB DESIGN. Allocation of Functions (J. Sharit). Task Analysis (H. Luczak). 
EQUIPMENT, WORKPLACE, AND ENVIRONMENTAL DESIGN. Biomechanical Aspects of Workplace Design (D. Chaffin). Illumination (P. Boyce). DESIGN FOR HEALTH AND SAFETY. Occupational Risk Management (B. Zimolong). PERFORMANCE MODELING. Cognitive Modeling (R. Eberts). EVALUATION. Human Factors Audits (C. Drury). HUMAN--COMPUTER INTERACTION. Software--User Interface Design (Y. Liu). SELECTED APPLICATIONS OF HUMAN FACTORS. Human Factors in Process Control (N. Moray). Indexes. --- paper_title: Autonomous vehicles: from paradigms to technology paper_content: Mobility is a basic necessity of contemporary society and it is a key factor in global economic development. The basic requirements for the transport of people and goods are: safety and duration of travel, but also a number of additional criteria are very important: energy saving, pollution, passenger comfort. Due to advances in hardware and software, automation has penetrated massively in transport systems both on infrastructure and on vehicles, but man is still the key element in vehicle driving. However, the classic concept of 'human-in-the-loop' in terms of 'hands on' in driving the cars is competing aside from the self-driving startups working towards so-called 'Level 4 autonomy', which is defined as "a self-driving system that does not requires human intervention in most scenarios". In this paper, a conceptual synthesis of the autonomous vehicle issue is made in connection with the artificial intelligence paradigm. It presents a classification of the tasks that take place during the driving of the vehicle and its modeling from the perspective of traditional control engineering and artificial intelligence. The issue of autonomous vehicle management is addressed on three levels: navigation, movement in traffic, respectively effective maneuver and vehicle dynamics control. Each level is then described in terms of specific tasks, such as: route selection, planning and reconfiguration, recognition of traffic signs and reaction to signaling and traffic events, as well as control of effective speed, distance and direction. The approach will lead to a better understanding of the way technology is moving when talking about autonomous cars, smart/intelligent cars or intelligent transport systems. Keywords: self-driving vehicle, artificial intelligence, deep learning, intelligent transport systems. --- paper_title: RobIL — Israeli program for research and development of autonomous UGV: Performance evaluation methodology paper_content: RobIL program is a collaborative effort of several leading Israeli academic institutions and industries in the field of robotics. The current objective of the program is to develop an Autonomous Off-Road Unmanned Ground Vehicle (UGV). The intention is to deal through this project with some of the most urgent challenges in the field of autonomous vehicles. One of these challenges, is the lack of efficient Safety Performance Verification technique, as the existing tools for hardware and software reliability and safety engineering do not provide a comprehensive solution regarding algorithms that are based on Artificial Intelligent (AI) and Machine Learning. In order to deal with this gap a novel methodology that is based on statistical testing in simulated environment, is presented. In this work the RobIL UGV project with special emphasis on the performance and safety verification methodology is presented. 
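The RobIL entry above proposes safety performance verification of an ML-based autonomy stack through statistical testing in a simulated environment. The sketch below illustrates one way such a campaign could be scored: run many randomized scenarios and report an upper confidence bound on the failure probability. The stubbed scenario runner and the Wilson-score bound are illustrative assumptions, not the RobIL methodology itself.

```python
# Minimal sketch of statistical safety testing in simulation, in the spirit of
# the RobIL entry above: run many randomized scenarios against the autonomy
# stack and report an upper confidence bound on the failure probability.
# The scenario model and the Wilson-score bound are illustrative choices.

import math
import random


def run_scenario(seed: int) -> bool:
    """Placeholder for one simulated scenario; returns True if a safety
    violation (e.g., a collision) occurred. Here it is just a stub."""
    rng = random.Random(seed)
    return rng.random() < 0.002  # pretend the stack fails 0.2% of the time


def wilson_upper_bound(failures: int, trials: int, z: float = 1.645) -> float:
    """One-sided ~95% upper confidence bound on the failure probability."""
    p_hat = failures / trials
    denom = 1.0 + z * z / trials
    centre = p_hat + z * z / (2 * trials)
    spread = z * math.sqrt(p_hat * (1 - p_hat) / trials + z * z / (4 * trials * trials))
    return (centre + spread) / denom


if __name__ == "__main__":
    trials = 10_000
    failures = sum(run_scenario(seed) for seed in range(trials))
    print(f"failures: {failures}/{trials}")
    print(f"estimated failure rate : {failures / trials:.4%}")
    print(f"95% upper bound (Wilson): {wilson_upper_bound(failures, trials):.4%}")
```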
--- paper_title: Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau paper_content: The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects one for the for the Army, which is just getting started, called the AirLand Battle Management program (ALBM) which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other more established program for the Navy is the Fleet Command Center Battle Management Program (FCCBIVIP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in a evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision-aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV) which integrates in a major robotic testbed the technologies for dynamic image understanding, knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology. --- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the “machine” is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? Is it appropriate to apply the techniques used to establish trust in a today’s transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. 
We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Towards Autonomous Cruising on Highways paper_content: The objectives of this paper are to discuss how vehicle automation technology can be used to benefit car drivers and also to propose a concept of an autonomous highway vehicle which improves highway driving safety. --- paper_title: Driver monitoring in the context of autonomous vehicle paper_content: Today research is going on within different essential functions need to bring automatic vehicles to the roads. However, there will be manual driven vehicles for many years before it is fully automated vehicles on roads. In complex situations, automated vehicles will need human assistance for long time. So, for road safety driver monitoring is even more important in the context of autonomous vehicle to keep the driver alert and awake. But, limited effort has been done in total integration between automatic vehicle and human driver. Therefore, human drivers need to be monitored and able to take over control within short notice. This papers provides an overview on autonomous vehicles and un-obstructive driver monitoring approaches that can be implemented in future autonomous vehicles to monitor driver e.g., to diagnose and predict stress, fatigue etc. in semi-automated vehicles. --- paper_title: Reducing driver's behavioural uncertainties using an interdisciplinary approach: Convergence of Quantified Self, Automated Vehicles, Internet Of Things and Artificial Intelligence. paper_content: Abstract: Growing research progress in Internet of Things (IoT), automated/connected cars, Artificial Intelligence and person's data acquisition (Quantified Self) will help to reduce behavioral uncertainties in transport and unequivocally influence future transport landscapes. This vision paper argues that by capitalizing advances in data collection and methodologies from emerging research disciplines, we could make the driver amenable to a knowable and monitorable entity, which will improve road safety. We present an interdisciplinary framework, inspired by the Safe system, to extract knowledge from the large amount of available data during driving. The limitation of our approach is discussed. --- paper_title: Microscopic traffic simulation based evaluation of highly automated driving on highways paper_content: Highly automated driving on highways requires a complex artificial intelligence that makes optimal decisions based on ongoing measurements. Notably no attempt has been performed to evaluate the impacts of such a sophisticated system on traffic. Another important point is the impact of continuous increase in the number of highly automated vehicles on future traffic safety and traffic flow. This work introduces a novel framework to evaluate these impacts in a developed microscopic traffic simulation environment. This framework is used on the one hand to ensure the functionality of the driving strategy in the early development stage. On the other hand, the impacts of increasing automation rates, up to hundred percent, on traffic safety and traffic flow is evaluated. --- paper_title: Investigation into the Role of Rational Ethics in Crashes of Automated Vehicles paper_content: Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. 
This new technology has the potential to revolutionize many aspects of transportation, particularly safety. However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple progr... --- paper_title: Embedding ethical principles in collective decision support systems paper_content: The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need. In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would accept and trust more machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, often machines and humans will need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles. --- paper_title: Road Terrain detection and Classification algorithm based on the Color Feature extraction paper_content: Today, in the context of road vehicles, intelligent systems and autonomous vehicles, one of the important problems that should be solved is Road Terrain Classification, which improves driving safety and passenger comfort. There are many studies in this area that improved the accuracy of classification. An improved classification method using color feature extraction is proposed in this paper. Color features of the images are used to classify the road terrain type, and a Neural Network (NN) is then used to classify the color features extracted from the images. The proposed idea is to identify the road by processing digital images taken from the roads with a camera installed on a car. Asphalt, Grass, Dirt and Rocky are the four types of terrain identified in this study. --- paper_title: Intelligent adaptive precrash control for autonmous vehicle agents (CBR Engine & hybrid A∗ path planner) paper_content: The PreCrash problem in the intelligent control of autonomous vehicle robots is very complex, especially for vehicle pre-crash scenarios and at intersection points in real-time environments.
This Paper presents a novel architecture of Intelligent adaptive control for autonomous vehicle agent that depends on Artificial Intelligence Techniques that applies case-based reasoning techniques, where Parallel CBR Engines are implemented for different scenarios' of PreCrash problem and sub-problems of intersection safety and collision avoidance, in the higher level of the controller and A∗ path planner for path planning and at lower-levels it also uses some features of autonomous vehicle dynamics. Moreover, the planner is enhanced by combination of Case-Based Planner. All modules are presented and discussed. Experimental results are conducted in the framework of Webots autonomous vehicle tool and overall results are good for the CBR Engine for Adaptive control and also for the hybrid Case-Based Planner, A∗ and D∗ motion planner along with conclusion and future work. --- paper_title: Are all objects equal? Deep spatio-temporal importance prediction in driving videos paper_content: Understanding intent and relevance of surrounding agents from video is an essential task for many applications in robotics and computer vision. The modeling and evaluation of contextual, spatio-temporal situation awareness is particularly important in the domain of intelligent vehicles, where a robot is required to smoothly navigate in a complex environment while also interacting with humans. In this paper, we address these issues by studying the task of on-road object importance ranking from video. First, human-centric object importance annotations are employed in order to analyze the relevance of a variety of multi-modal cues for the importance prediction task. A deep convolutional neural network model is used for capturing video-based contextual spatial and temporal cues of scene type, driving task, and object properties related to intent. Second, the proposed importance annotations are used for producing novel analysis of error types in image-based object detectors. Specifically, we demonstrate how cost-sensitive training, informed by the object importance annotations, results in improved detection performance on objects of higher importance. This insight is essential for an application where navigation mistakes are safety-critical, and the quality of automation and human-robot interaction is key. HighlightsWe study a notion of object relevance, as measured in a spatio-temporal context of driving a vehicle.Various spatio-temporal object and scene cues are analyzed for the task of object importance classification.Human-centric metrics are employed for evaluating object detection and studying data bias.Importance-guided training of object detectors is proposed, showing significant improvement over an importance-agnostic baseline. --- paper_title: Road signs classification by ANN for real-time implementation paper_content: Traffic safety is an important problem for autonomous vehicles. The development of Traffic Sign Recognition (TSR) dedicated to reducing the number of fatalities and the severity of road accidents is an important and an active research area. Recently, most TSR approaches of machine learning and image processing have achieved advanced performance in traditional natural scenes. However, there exists a limitation on the accuracy in road sign recognition and on time consuming. This paper proposes a real-time algorithm for shape classification of traffic signs and their recognition to provide a driver alert system. 
The proposed algorithm is mainly composed of two phases: shape classification and content classification. This algorithm takes as input a list of Bounding Boxes generated in a previous work, and will classify them. The traffic sign's shape is classified by an artificial neural network (ANN). Traffic signs are classified according to their shape characteristics, as triangular, squared and circular shapes. Combining color and shape information, traffic signs are classified into one of the following classes: danger, information, obligation or prohibition. The classified circular and triangular shapes are passed on to the second ANN in the third phase. These identify the pictogram of the road sign. The output of the second artificial neural network allows the full classification of the road sign. The algorithm proposed is evaluated on a dataset of road signs of a Tunisian database sign. --- paper_title: Road junction detection from 3D point clouds paper_content: Detecting changing traffic conditions is of primal importance for the safety of autonomous cars navigating in urban environments. Among the traffic situations that require more attention and careful planning, road junctions are the most significant. This work presents an empirical study of the application of well known machine learning techniques to create a robust method for road junction detection. Features are extracted from 3D pointclouds corresponding to single frames of data collected by a laser rangefinder. Three well known classifiers-support vector machines, adaptive boosting and artificial neural networks-are used to classify them into “junctions” or “roads”. The best performing classifier is used in the next stage, where structured classifiers-hidden Markov models and conditional random fields-are used to incorporate contextual information, in an attempt to improve the performance of the method. We tested and compared these approaches on datasets from two different 3D laser scanners, and in two different countries, Germany and Brazil. --- paper_title: Monocular vision-based object recognition for autonomous vehicle driving in a real driving environment paper_content: Nowadays, many attentions have been devoted to autonomous vehicles because the automation of driving technology has a large number of benefits, such as the minimization of risks, the improvement of mobility and ease of drivers. Among many technologies for autonomous driving, road environmental recognition is one of the key issues. In this paper, we present the test results of various object detection algorithms using single monocular camera for autonomous vehicle in real driving conditions. The vision recognition system tested in this paper has three main recognition parts: pedestrian detection, traffic sign and traffic light recognition. We use Histogram of Gradients (HOG) features and detect the pedestrians by Support Vector Machine (SVM). Also features of traffic signs are extracted by Principal Components Analysis (PCA) and canny edge detection is used for traffic lights. These two signals are classified by Neural Network (NN). Algorithms that we tested are implemented in General-Purpose computing on Graphics Processing Units (GPGPU). We show the effectiveness of these methods in real-time applications for autonomous driving. 
--- paper_title: Novel architecture for cloud based next gen vehicle platform — Transition from today to 2025 paper_content: The sole idea of the autonomous driving is to reduce the number of accidents which will be caused by human errors. There will be a phase where autonomous vehicles will have to take part with the human driven vehicles. In this phase autonomous vehicle and vehicle drivers must be careful in driving maneuvers, since it is impossible to just have autonomous vehicles on the road while the transition to fully autonomous driving takes place. To make the system more reliable and robust, authors feel that it must be substantially improved and to achieve this 3 methodologies have been proposed. The use cases are derived from the camera placed in the suitable positions of every vehicle. This creates several use cases with different driving behaviors, which truly reflects the conditions which the autonomous car needs to undergo in series. Authors propose several architectures to address the safety related issues. Since, the number of use cases are so random that it becomes practically impossible to manage data, so we propose to have a secured cloud storage. The cloud provides a secure and reliable data management system to the machine learning algorithm computation. The data management till now is effectively done via trained unique use cases. This make a big data which is encapsulation of different unique use cases under different driving conditions. So we conclude that the usage of ML and cloud based data management is the way forward for handling autonomous vehicle reliably. This may not be the complete solution, but nevertheless we are one step closer towards 2025. This concept can be implemented only in co-operation of the OEMs and Public authorities. --- paper_title: Convolutional neural network based vehicle turn signal recognition paper_content: This Automated driving is an emerging technology in which a car performs recognition, decision making, and control. Recognizing surrounding vehicles is a key technology in order to generate a trajectory of ego vehicle. This paper is focused on detecting a turn signal information as one of the driver's intention for surrounding vehicles. Such information helps to predict their behavior in advance especially about lane change and turn left-or-right on intersection. Using their intension, the automated vehicle is able to generate the safety trajectory before they begin to change their behavior. The proposed method recognizes the turn signal for target vehicle based on mono-camera. It detects lighting state using Convolutional Neural Network, and then calculates a flashing frequency using Fast Fourier Transform. --- paper_title: Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking paper_content: Automatic detection of traffic lights has great importance to road safety. This paper presents a novel approach that combines computer vision and machine learning techniques for accurate detection and classification of different types of traffic lights, including green and red lights both in circular and arrow forms. Initially, color extraction and blob detection are employed to locate the candidates. Subsequently, a pretrained PCA network is used as a multiclass classifier to obtain frame-by-frame results. Furthermore, an online multiobject tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. 
Several additional optimization techniques are employed to improve the detector performance and handle the traffic light transitions. When evaluated using the test video sequences, the proposed system can successfully detect the traffic lights on the scene with high accuracy and stable results. Considering hardware acceleration, the proposed technique is ready to be integrated into advanced driver assistance systems or self-driving vehicles. We build our own data set of traffic lights from recorded driving videos, including circular lights and arrow lights in different directions. Our experimental data set is available at http://computing.wpi.edu/Dataset.html. --- paper_title: Drivers’ Manoeuvre Classification for Safe HRI paper_content: Ever increasing autonomy of machines and the need to interact with them creates challenges to ensure safe operation. Recent technical and commercial interest in increasing autonomy of vehicles has led to the integration of more sensors and actuators inside the vehicle, making them more like robots. For interaction with semi-autonomous cars, the use of these sensors could help to create new safety mechanisms. This work explores the concept of using motion tracking (i.e. skeletal tracking) data gathered from the driver whilst driving to learn to classify the manoeuvre being performed. A kernel-based classifier is trained with empirically selected features based on data gathered from a Kinect V2 sensor in a controlled environment. This method shows that skeletal tracking data can be used in a driving scenario to classify manoeuvres and sets a background for further work. --- paper_title: Road junction detection from 3D point clouds paper_content: Detecting changing traffic conditions is of primal importance for the safety of autonomous cars navigating in urban environments. Among the traffic situations that require more attention and careful planning, road junctions are the most significant. This work presents an empirical study of the application of well known machine learning techniques to create a robust method for road junction detection. Features are extracted from 3D pointclouds corresponding to single frames of data collected by a laser rangefinder. Three well known classifiers-support vector machines, adaptive boosting and artificial neural networks-are used to classify them into “junctions” or “roads”. The best performing classifier is used in the next stage, where structured classifiers-hidden Markov models and conditional random fields-are used to incorporate contextual information, in an attempt to improve the performance of the method. We tested and compared these approaches on datasets from two different 3D laser scanners, and in two different countries, Germany and Brazil. --- paper_title: Monocular vision-based object recognition for autonomous vehicle driving in a real driving environment paper_content: Nowadays, many attentions have been devoted to autonomous vehicles because the automation of driving technology has a large number of benefits, such as the minimization of risks, the improvement of mobility and ease of drivers. Among many technologies for autonomous driving, road environmental recognition is one of the key issues. In this paper, we present the test results of various object detection algorithms using single monocular camera for autonomous vehicle in real driving conditions. The vision recognition system tested in this paper has three main recognition parts: pedestrian detection, traffic sign and traffic light recognition. 
We use Histogram of Gradients (HOG) features and detect the pedestrians by Support Vector Machine (SVM). Also features of traffic signs are extracted by Principal Components Analysis (PCA) and canny edge detection is used for traffic lights. These two signals are classified by Neural Network (NN). Algorithms that we tested are implemented in General-Purpose computing on Graphics Processing Units (GPGPU). We show the effectiveness of these methods in real-time applications for autonomous driving. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on its safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application but it is of a vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by the Support Vector Machines (SVM). It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space resulting in 1) significant reduction of the problem complexity by eliminating irrelevant data; 2) computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real time dynamic applications. --- paper_title: Design of Real-Time Transition from Driving Assistance to Automation Function: Bayesian Artificial Intelligence Approach paper_content: Forecasts of automation in driving suggest that wide spread market penetration of fully autonomous vehicles will be decades away and that before such vehicles will gain acceptance by all stake holders, there will be a need for driving assistance in key driving tasks, supplemented by automated active safety capability. This paper advances a Bayesian Artificial Intelligence model for the design of real time transition from assisted driving to automated driving under conditions of high probability of a collision if no action is taken to avoid the collision. Systems can be designed to feature collision warnings as well as automated active safety capabilities. In addition to the high level architecture of the Bayesian transition model, example scenarios illustrate the function of the real-time transition model. --- paper_title: Cognitive Vehicle Design Guided by Human Factors and Supported by Bayesian Artificial Intelligence paper_content: Researchers and automotive industry experts are in favour of accelerating the development of “cognitive vehicle” features, which will integrate intelligent technology and human factors for providing non-distracting interface for safety, efficiency and environmental sustainability in driving. In addition, infotainment capability is a desirable feature of the vehicle of the future, provided that it does not add to driver distraction. 
Further, these features are expected to be a stepping-stone to fully autonomous driving. This paper describes advances in driver-vehicle interface and presents highlights of research in design. Specifically, the design features of the cognitive vehicle are presented and measures needed to address major issues are noted. The application of Bayesian Artificial Intelligence is described in the design of driving assistance system, and design considerations are advanced in order to overcome issues in in-vehicle telematics systems. Finally, conclusions are advanced based on coverage of research material in the paper. --- paper_title: A Hidden Markov Model for Vehicle Detection and Counting paper_content: To reduce roadway congestion and improve traffic safety, accurate traffic metrics, such as number of vehicles travelling through lane-ways, are required. Unfortunately most existing infrastructure, such as loop-detectors and many video detectors, do not feasibly provide accurate vehicle counts. Consequently, a novel method is proposed which models vehicle motion using hidden Markov models (HMM). The proposed method represents a specified small region of the roadway as 'empty', 'vehicle entering', 'vehicle inside', and 'vehicle exiting', and then applies a modified Viterbi algorithm to the HMM sequential estimation framework to initialize and track vehicles. Vehicle observations are obtained using an Adaboost trained Haar-like feature detector. When tested on 88 hours of video, from three distinct locations, the proposed method proved to be robust to changes in lighting conditions, moving shadows, and camera motion, and consistently out-performed Multiple Target Tracking (MTT) and Virtual Detection Line(VDL) implementations. The median vehicle count error of the proposed method is lower than MTT and VDL by 28%, and 70% respectively. As future work, this algorithm will be implemented to provide the traffic industry with improved automated vehicle counting, with the intent to eventually provide real-time counts. --- paper_title: Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning paper_content: Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. --- paper_title: Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking paper_content: Automatic detection of traffic lights has great importance to road safety. This paper presents a novel approach that combines computer vision and machine learning techniques for accurate detection and classification of different types of traffic lights, including green and red lights both in circular and arrow forms. Initially, color extraction and blob detection are employed to locate the candidates. Subsequently, a pretrained PCA network is used as a multiclass classifier to obtain frame-by-frame results. Furthermore, an online multiobject tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. Several additional optimization techniques are employed to improve the detector performance and handle the traffic light transitions. When evaluated using the test video sequences, the proposed system can successfully detect the traffic lights on the scene with high accuracy and stable results. Considering hardware acceleration, the proposed technique is ready to be integrated into advanced driver assistance systems or self-driving vehicles. 
We build our own data set of traffic lights from recorded driving videos, including circular lights and arrow lights in different directions. Our experimental data set is available at http://computing.wpi.edu/Dataset.html. --- paper_title: The research of prediction model on intelligent vehicle based on driver’s perception paper_content: In the field of self-driving technology, the stability and comfort of the intelligent vehicle are the focus of attention. The paper applies cognitive psychology theory to the research of driving behavior and analyzes the behavior mechanism about the driver’s operation. Through applying the theory of hierarchical analysis, we take the safety and comfort of intelligent vehicle as the breakthrough point. And then we took the data of human drivers’ perception behavior as the training set and did regression analysis using the method of regression analysis of machine learning according to the charts of the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision. At last we established linear and nonlinear regression models (including the logarithmic model) for the training set. The change in thinking is the first novelty of this paper. Last but not least important, we verified the accuracy of the model through the comparison of different regression analysis. Eventually, it turned out that using logarithmic relationship to express the relationship between the vehicle speed and the visual field, the vehicle speed and the gaze point as well as the vehicle speed and the dynamic vision is better than other models. In the aspect of application, we adopted the technology of multi-sensor fusion and transformed the acquired data from radar, navigation and image to log-polar coordinates, which makes us greatly simplify information when dealing with massive data problems from different sensors. This approach can not only reduce the complexity of the server’s processing but also drives the development of intelligent vehicle in information computing. We also make this model applied in the intelligent driver’s cognitive interactive debugging program, which can better explain and understand the intelligent driving behavior and improved the safety of intelligent vehicle to some extent. --- paper_title: Safety-critical computer systems paper_content: From the Publisher: ::: Increasingly, microcomputers are being used in applications where their correct operation is vital to ensure the safety of the public and the environment: from anti-lock braking systems in automobiles, to fly-by-wire aircraft, to shut-down systems at nuclear power plants. It is, therefore, vital that engineers are aware of the safety implications of the systems they develop. This book is an introduction to the field of safety-critical computer systems, and is written for any engineer who uses microcomputers within real-time embedded systems. It assumes no prior knowledge of safety, or of any specific computer hardware or programming language. 
This book covers all phases of the life of a safety-critical system from its conception and specification, through to its certification, installation, service and decommissioning; provides information on how to assess the safety implications of projects, and determine the measures necessary to develop systems to meet safety needs; gives a thorough grounding in the techniques available to investigate the safety aspects of computer-based systems and the methods that may be used to enhance their dependability; and uses case studies and worked examples from a wide range of industrial sectors including the nuclear, aircraft, automotive and consumer products industries. This text is intended for both engineering and computer science students, and for practising engineers within computer-related industries. The approach taken is equally suited to engineers who consider computers from a hardware, software or systems viewpoint. --- paper_title: Novel architecture for cloud based next gen vehicle platform — Transition from today to 2025 paper_content: The sole idea of the autonomous driving is to reduce the number of accidents which will be caused by human errors. There will be a phase where autonomous vehicles will have to take part with the human driven vehicles. In this phase autonomous vehicle and vehicle drivers must be careful in driving maneuvers, since it is impossible to just have autonomous vehicles on the road while the transition to fully autonomous driving takes place. To make the system more reliable and robust, authors feel that it must be substantially improved and to achieve this 3 methodologies have been proposed. The use cases are derived from the camera placed in the suitable positions of every vehicle. This creates several use cases with different driving behaviors, which truly reflects the conditions which the autonomous car needs to undergo in series. Authors propose several architectures to address the safety related issues. Since, the number of use cases are so random that it becomes practically impossible to manage data, so we propose to have a secured cloud storage. The cloud provides a secure and reliable data management system to the machine learning algorithm computation. The data management till now is effectively done via trained unique use cases. This make a big data which is encapsulation of different unique use cases under different driving conditions. So we conclude that the usage of ML and cloud based data management is the way forward for handling autonomous vehicle reliably. This may not be the complete solution, but nevertheless we are one step closer towards 2025. This concept can be implemented only in co-operation of the OEMs and Public authorities. --- paper_title: Predicting dynamic computational workload of a self-driving car paper_content: This study aims at developing a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure safety of such dynamic systems, the worst-case-based CPU utilization analysis has been used; however, the nature of dynamically changing driving contexts requires more flexible approach for an efficient computing resource management. To better understand the dynamic CPU usage patterns, this paper presents an effort of designing a feature vector to represent the information of driving environments and of predicting, using regression methods, the selected tasks' CPU usage patterns given specific driving contexts. 
Experiments with real-world vehicle data show a promising result and validate the usefulness of the proposed method. --- paper_title: Intersection management for autonomous vehicles using iCACC paper_content: Recently several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated in the future that many vehicles will be autonomous and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively. --- paper_title: Novel architecture for cloud based next gen vehicle platform — Transition from today to 2025 paper_content: The sole idea of the autonomous driving is to reduce the number of accidents which will be caused by human errors. There will be a phase where autonomous vehicles will have to take part with the human driven vehicles. In this phase autonomous vehicle and vehicle drivers must be careful in driving maneuvers, since it is impossible to just have autonomous vehicles on the road while the transition to fully autonomous driving takes place. To make the system more reliable and robust, authors feel that it must be substantially improved and to achieve this 3 methodologies have been proposed. The use cases are derived from the camera placed in the suitable positions of every vehicle. This creates several use cases with different driving behaviors, which truly reflects the conditions which the autonomous car needs to undergo in series. Authors propose several architectures to address the safety related issues. Since, the number of use cases are so random that it becomes practically impossible to manage data, so we propose to have a secured cloud storage. The cloud provides a secure and reliable data management system to the machine learning algorithm computation. The data management till now is effectively done via trained unique use cases. This make a big data which is encapsulation of different unique use cases under different driving conditions. So we conclude that the usage of ML and cloud based data management is the way forward for handling autonomous vehicle reliably. This may not be the complete solution, but nevertheless we are one step closer towards 2025. This concept can be implemented only in co-operation of the OEMs and Public authorities. --- paper_title: RobIL — Israeli program for research and development of autonomous UGV: Performance evaluation methodology paper_content: RobIL program is a collaborative effort of several leading Israeli academic institutions and industries in the field of robotics. The current objective of the program is to develop an Autonomous Off-Road Unmanned Ground Vehicle (UGV). The intention is to deal through this project with some of the most urgent challenges in the field of autonomous vehicles. 
One of these challenges, is the lack of efficient Safety Performance Verification technique, as the existing tools for hardware and software reliability and safety engineering do not provide a comprehensive solution regarding algorithms that are based on Artificial Intelligent (AI) and Machine Learning. In order to deal with this gap a novel methodology that is based on statistical testing in simulated environment, is presented. In this work the RobIL UGV project with special emphasis on the performance and safety verification methodology is presented. --- paper_title: Investigation into the Role of Rational Ethics in Crashes of Automated Vehicles paper_content: Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. This new technology has the potential to revolutionize many aspects of transportation, particularly safety. However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple progr... --- paper_title: Towards Autonomous Cruising on Highways paper_content: The objectives of this paper are to discuss how vehicle automation technology can be used to benefit car drivers and also to propose a concept of an autonomous highway vehicle which improves highway driving safety. --- paper_title: Autonomy, Trust, and Transportation paper_content: Automation in transportation (rail, air, road, etc.) is becoming increasingly complex and interconnected. Ensuring that these sophisticated non-deterministic software systems can be trusted and remain resilient is a community concern. As technology evolves, systems are moving increasingly towards autonomy where the “machine” is intelligent: perceiving, deciding, learning, etc. often without human engagement. Our current mechanisms and policies for oversight and certification of these systems to ensure they operate robustly in safety-critical situations are not keeping pace with technology advancements. How is an autonomous system different than an advanced automatic system? How is trust different for an autonomous system? What are different perspectives on trust? Is it appropriate to apply the techniques used to establish trust in a today’s transportation systems to establishing trust in an autonomous system? This paper will examine these questions and propose a framework for discussing autonomy assurance and trust in transportation applications. We will explore further with two examples: 1) the notion of a self-driving taxi-cab; and 2) the evolution of a two-pilot flight deck, to single-pilot operations. --- paper_title: Embedding ethical principles in collective decision support systems paper_content: The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. 
Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision-making systems will be in great need. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles. --- paper_title: Cognitive Vehicle Design Guided by Human Factors and Supported by Bayesian Artificial Intelligence paper_content: Researchers and automotive industry experts are in favour of accelerating the development of “cognitive vehicle” features, which will integrate intelligent technology and human factors to provide a non-distracting interface for safety, efficiency and environmental sustainability in driving. In addition, infotainment capability is a desirable feature of the vehicle of the future, provided that it does not add to driver distraction. Further, these features are expected to be a stepping-stone to fully autonomous driving. This paper describes advances in the driver-vehicle interface and presents highlights of research in design. Specifically, the design features of the cognitive vehicle are presented and measures needed to address major issues are noted. The application of Bayesian Artificial Intelligence is described in the design of the driving assistance system, and design considerations are advanced in order to overcome issues in in-vehicle telematics systems. Finally, conclusions are advanced based on coverage of research material in the paper. --- paper_title: Learning Human-Level AI abilities to drive racing cars paper_content: The final purpose of Automated Vehicle Guidance Systems (AVGSs) is to obtain fully automatically driven vehicles in order to optimize transport systems, minimizing delays and increasing safety and comfort. To achieve these goals, many Artificial Intelligence techniques must be improved and merged. In this article we focus on learning and simulating the Human-Level decisions involved in driving a racing car. To achieve this, we have studied the convenience of using Neuroevolution of Augmenting Topologies (NEAT). To experiment and obtain comparative results we have also developed an online videogame prototype called Screaming Racers, which is used as a test-bed environment. --- paper_title: A driverless vehicle demonstration on motorways and in urban environments paper_content: The constant growth of the number of vehicles in today’s world demands improvements in the safety and efficiency of roads and road use. This can be in part satisfied by the implementation of autonomous driving systems because of their greater precision than human drivers in controlling a vehicle. As a result, the capacity of the roads would be increased by reducing the spacing between vehicles. Moreover, greener driving modes could be applied so that the fuel consumption, and therefore carbon emissions, would be reduced. 
This paper presents the results obtained by the AUTOPIA program during a public demonstration performed in June 2012. This driverless experiment consisted of a 100-kilometre route around Madrid (Spain), including both urban and motorway environments. A first vehicle – acting as leader and manually driven – transmitted its relevant information – i.e., position and speed – through an 802.11p communication link to a second vehicle, which tracked the leader’s trajectory and speed while maintaining a safe distance. The results were encouraging, and showed the viability of the AUTOPIA approach. --- paper_title: Low Level Control Layer Definition for Autonomous Vehicles Based on Fuzzy Logic paper_content: Abstract The intelligent control of autonomous vehicles is one of the most important challenges that intelligent transport systems face today. The application of artificial intelligence techniques to the automatic management of vehicle actuators enables the different Advanced Driver Assistance Systems (ADAS), or even autonomous driving systems, to perform low-level management in a way very similar to that of human drivers, thereby improving safety and comfort. In this paper, we present a control schema to manage these low-level vehicle actuators (steering, throttle and brake) based on fuzzy logic, an artificial intelligence technique that is able to mimic human procedural behavior, in this case, when performing the driving task. This automatic low-level control system has been defined, implemented and tested in a Citroen C3 testbed vehicle, whose actuators have been automated and can receive control signals from an onboard computer where the soft computing-based control system is running. --- paper_title: Intersection management for autonomous vehicles using iCACC paper_content: Recently, several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated that in the future many vehicles will be autonomous, and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively. --- paper_title: Autonomous vehicles safe-optimal trajectory selection based on big data analysis and predefined user preferences paper_content: Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economic and societal benefits and impacts. Safety is at the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting a safe-optimal trajectory in autonomous vehicles. 
Selecting the safe trajectory in this work is based mainly on Big Data mining and analysis of real-life accident data and real-time connected-vehicle data. The decision of selecting this trajectory is made automatically without any human intervention. The only human involvement in this scenario is defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that is represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip. --- paper_title: Road junction detection from 3D point clouds paper_content: Detecting changing traffic conditions is of primary importance for the safety of autonomous cars navigating in urban environments. Among the traffic situations that require more attention and careful planning, road junctions are the most significant. This work presents an empirical study of the application of well-known machine learning techniques to create a robust method for road junction detection. Features are extracted from 3D point clouds corresponding to single frames of data collected by a laser rangefinder. Three well-known classifiers (support vector machines, adaptive boosting and artificial neural networks) are used to classify them into “junctions” or “roads”. The best performing classifier is used in the next stage, where structured classifiers (hidden Markov models and conditional random fields) are used to incorporate contextual information, in an attempt to improve the performance of the method. We tested and compared these approaches on datasets from two different 3D laser scanners, and in two different countries, Germany and Brazil. --- paper_title: A Formal Approach to Autonomous Vehicle Coordination paper_content: Increasing demands on safety and energy efficiency will require higher levels of automation in transportation systems. This involves dealing with safety-critical distributed coordination. In this paper we demonstrate how a Satisfiability Modulo Theories (SMT) solver can be used to prove correctness of a vehicular coordination problem. We formalise a recent distributed coordination protocol and validate our approach using an intersection collision avoidance (ICA) case study. The system model captures continuous time and space, and an unbounded number of vehicles and messages. The safety of the case study is automatically verified using the Z3 theorem prover. --- paper_title: SVM-inspired dynamic safe navigation using convex hull construction paper_content: The navigation of mobile robots or unmanned autonomous vehicles (UAVs) in an environment full of obstacles has a significant impact on their safety. If the robot maneuvers too close to an obstacle, it increases the probability of an accident. Preventing this is crucial in dynamic environments, where the obstacles, such as other UAVs, are moving. This kind of safe navigation is needed in any autonomous movement application, but it is of vital importance in applications such as automated transportation of nuclear or chemical waste. This paper presents the Maximum Margin Search using a Convex Hull construction (MMS-CH), an algorithm for a fast construction of a maximum margin between sets of obstacles and its maintenance as the input data are dynamically altered. This calculation of the safest path is inspired by Support Vector Machines (SVM). 
It utilizes the convex hull construction to preprocess the input data and uses the boundaries of the hulls to search for the optimal margin. The MMS-CH algorithm takes advantage of the elementary geometrical properties of the 2-dimensional Euclidean space, resulting in 1) a significant reduction of the problem complexity by eliminating irrelevant data; 2) a computationally less expensive approach to maximum margin calculation than standard SVM-based techniques; and 3) inexpensive recomputation of the solution suitable for real-time dynamic applications. --- paper_title: Are all objects equal? Deep spatio-temporal importance prediction in driving videos paper_content: Understanding intent and relevance of surrounding agents from video is an essential task for many applications in robotics and computer vision. The modeling and evaluation of contextual, spatio-temporal situation awareness is particularly important in the domain of intelligent vehicles, where a robot is required to smoothly navigate in a complex environment while also interacting with humans. In this paper, we address these issues by studying the task of on-road object importance ranking from video. First, human-centric object importance annotations are employed in order to analyze the relevance of a variety of multi-modal cues for the importance prediction task. A deep convolutional neural network model is used for capturing video-based contextual spatial and temporal cues of scene type, driving task, and object properties related to intent. Second, the proposed importance annotations are used for producing novel analysis of error types in image-based object detectors. Specifically, we demonstrate how cost-sensitive training, informed by the object importance annotations, results in improved detection performance on objects of higher importance. This insight is essential for an application where navigation mistakes are safety-critical, and the quality of automation and human-robot interaction is key. Highlights: We study a notion of object relevance, as measured in a spatio-temporal context of driving a vehicle; various spatio-temporal object and scene cues are analyzed for the task of object importance classification; human-centric metrics are employed for evaluating object detection and studying data bias; importance-guided training of object detectors is proposed, showing significant improvement over an importance-agnostic baseline. --- paper_title: Advance Driver Assistance System (ADAS) - Speed bump detection paper_content: In Intelligent Transportation Systems, Advanced Driver Assistance Systems (ADAS) play a vital role. In ADAS, much research has been done in the areas of traffic sign recognition, forward collision warning, automotive navigation systems, and lane departure warning systems, but another important area to consider is speed bump detection. The recognition of speed bumps is important for the safety of both humans and vehicles. Early research in speed bump detection was done with the help of sensors, accelerometers and GPS. In this paper, a novel method is presented to achieve speed bump detection and recognition, either to alert the driver or to interact directly with the vehicle. Speed bumps are detected with the help of image processing concepts. This methodology is effortless and simple to implement without investment in special sensors, hardware, smartphones or GPS. This procedure is well suited to roads constructed with proper markings and can be used in self-driving cars. 
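The MMS-CH abstract above describes building a maximum margin between obstacle sets from their convex hulls. The Python sketch below is not the authors' implementation; it is a minimal illustration of the underlying geometry (all function names and the toy obstacle coordinates are invented for the example, and it assumes the two hulls do not overlap): compute each hull, then take the minimum distance between the hull boundaries, which gives the width of the widest safe corridor.

# Illustrative sketch of a convex-hull-based maximum-margin search,
# loosely inspired by the MMS-CH idea described above. Not the authors' code.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def max_margin(obstacles_a, obstacles_b):
    """Width of the widest corridor between two disjoint obstacle sets."""
    ha, hb = convex_hull(obstacles_a), convex_hull(obstacles_b)
    def edges(h):
        # Degenerate single-point hulls are treated as zero-length segments.
        return [(h[i], h[(i + 1) % len(h)]) for i in range(len(h))] if len(h) > 1 else [(h[0], h[0])]
    best = float("inf")
    for p in ha:
        for a, b in edges(hb):
            best = min(best, point_segment_distance(p, a, b))
    for p in hb:
        for a, b in edges(ha):
            best = min(best, point_segment_distance(p, a, b))
    return best

# Example: two clusters of obstacle points on either side of a corridor.
left = [(0, 0), (1, 2), (0, 3), (1, 1), (0.5, 1.5)]
right = [(4, 0), (5, 2), (4, 3), (4.5, 1.0)]
print(max_margin(left, right))  # corridor width; the safest path runs along its midline

Restricting the search to hull vertices and edges is what allows interior obstacle points to be discarded, which is the kind of complexity reduction the abstract refers to.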
--- paper_title: A Decision Support System for Improving Resiliency of Cooperative Adaptive Cruise Control Systems paper_content: Abstract Advanced driver assistance systems (ADASs) enhance transportation safety and mobility, and reduce environmental impacts and economic costs, by decreasing driver errors. One of the main features of ADASs is the cruise control system, which maintains the driver's desired speed without intervention from the driver. Adaptive cruise control (ACC) systems adjust the vehicle's speed to maintain a safe following distance to the vehicle in front. Adding vehicle-to-vehicle and vehicle-to-infrastructure communications (V2X) to ACC systems results in cooperative adaptive cruise control (CACC) systems, where each vehicle has trajectory data of all other vehicles in the same lane. Although CACC systems offer advantages over ACC systems in increasing throughput and average speed, they are more vulnerable to cyber-security attacks. This is due to V2X communications that increase the attack surface from one vehicle to multiple vehicles. In this paper, we inject common types of attacks on the application layer of connected vehicles to show their vulnerability in comparison to autonomous vehicles. We also propose a decision support system that eliminates the risk of inaccurate information. The microscopic work simulates a CACC system with a bi-objective PID controller and a fuzzy detector. A case study is illustrated in detail to verify the system functionality. --- paper_title: Predicting dynamic computational workload of a self-driving car paper_content: This study aims at developing a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure safety of such dynamic systems, worst-case-based CPU utilization analysis has been used; however, the nature of dynamically changing driving contexts requires a more flexible approach for efficient computing resource management. To better understand the dynamic CPU usage patterns, this paper presents an effort to design a feature vector that represents the information of driving environments and to predict, using regression methods, the selected tasks' CPU usage patterns given specific driving contexts. Experiments with real-world vehicle data show promising results and validate the usefulness of the proposed method. ---
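The workload-prediction abstract above maps a driving-context feature vector to per-task CPU usage with regression. The sketch below is only a generic illustration of that setup, not the paper's model; the feature names, the toy data, and the use of closed-form ridge regression are all assumptions made for the example.

# Minimal regression sketch for context-dependent CPU usage prediction.
# Feature names and data are invented for illustration; not the paper's model.
import numpy as np

# Hypothetical driving-context features: [num_detected_objects, ego_speed_mps,
# num_lanes, is_intersection]; target: CPU utilization (%) of one software task.
X = np.array([
    [ 5, 10.0, 2, 0],
    [12,  8.0, 3, 1],
    [ 3, 25.0, 2, 0],
    [20,  5.0, 4, 1],
    [ 8, 15.0, 3, 0],
], dtype=float)
y = np.array([35.0, 62.0, 30.0, 81.0, 44.0])

# Ridge regression in closed form: w = (X^T X + lambda*I)^-1 X^T y
Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
lam = 1.0
A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
w = np.linalg.solve(A, Xb.T @ y)

new_context = np.array([[10, 12.0, 3, 1, 1.0]])  # last entry is the bias term
predicted_cpu = (new_context @ w).item()
print(f"predicted CPU utilization: {predicted_cpu:.1f}%")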
Title: A Systematic Literature Review about the impact of Artificial Intelligence on Autonomous Vehicle Safety Section 1: INTRODUCTION Description 1: Write an introduction to the topic, emphasizing the importance of AI in autonomous vehicle development and its potential impact on safety. Section 2: RESEARCH METHODOLOGY Description 2: Describe the systematic literature review method used in the study, detailing the protocol, research questions, search strategy, study selection, and data mapping. Section 3: DATA ANALYSIS Description 3: Present the analysis of the data collected from the selected studies, outlining the distribution over the years, keyword consistency, and the categorization of results based on the research questions. Section 4: SLR FINDINGS ORIENTED BY AN AV SYSTEM MODEL Description 4: Discuss the findings from the literature review oriented by an autonomous vehicle system model, highlighting how AI techniques are applied to different components and functions of the AV architecture. Section 5: FINAL REMARKS Description 5: Provide concluding remarks on the state of the art in AI impact on AV safety, emphasizing the gaps in the research, the maturity of the field, and suggest areas for future research.
High-Pressure Induced Phase Transitions in High-Entropy Alloys: A Review
7
--- paper_title: Nanostructured High‐Entropy Alloys with Multiple Principal Elements: Novel Alloy Design Concepts and Outcomes paper_content: A new approach for the design of alloys is presented in this study. These high-entropy alloys with multi-principal elements were synthesized using well-developed processing technologies. Preliminary results demonstrate examples of the alloys with simple crystal structures, nanostructures, and promising mechanical properties. This approach may be opening a new era in materials science and engineering. --- paper_title: Microstructural development in equiatomic multicomponent alloys paper_content: Abstract Multicomponent alloys containing several components in equal atomic proportions have been manufactured by casting and melt spinning, and their microstructures and properties have been investigated by a combination of optical microscopy, scanning electron microscopy, electron probe microanalysis, X-ray diffractrometry and microhardness measurements. Alloys containing 16 and 20 components in equal proportions are multiphase, crystalline and brittle both as-cast and after melt spinning. A five component Fe 20 Cr 20 Mn 20 Ni 20 Co 20 alloy forms a single fcc solid solution which solidifies dendritically. A wide range of other six to nine component late transition metal rich multicomponent alloys exhibit the same majority fcc primary dendritic phase, which can dissolve substantial amounts of other transition metals such as Nb, Ti and V. More electronegative elements such as Cu and Ge are less stable in the fcc dendrites and are rejected into the interdendritic regions. The total number of phases is always well below the maximum equilibrium number allowed by the Gibbs phase rule, and even further below the maximum number allowed under non-equilibrium solidification conditions. Glassy structures are not formed by casting or melt spinning of late transition metal rich multicomponent alloys, indicating that the confusion principle does not apply, and other factors are more important in promoting glass formation. --- paper_title: High-pressure studies with x-rays using diamond anvil cells. paper_content: Pressure profoundly alters all states of matter. The symbiotic development of ultrahigh-pressure diamond anvil cells, to compress samples to sustainable multi-megabar pressures; and synchrotron x-ray techniques, to probe materials' properties in situ, has enabled the exploration of rich high-pressure (HP) science. In this article, we first introduce the essential concept of diamond anvil cell technology, together with recent developments and its integration with other extreme environments. We then provide an overview of the latest developments in HP synchrotron techniques, their applications, and current problems, followed by a discussion of HP scientific studies using x-rays in the key multidisciplinary fields. 
These HP studies include: HP x-ray emission spectroscopy, which provides information on the filled electronic states of HP samples; HP x-ray Raman spectroscopy, which probes the HP chemical bonding changes of light elements; HP electronic inelastic x-ray scattering spectroscopy, which accesses high energy electronic phenomena, including electronic band structure, Fermi surface, excitons, plasmons, and their dispersions; HP resonant inelastic x-ray scattering spectroscopy, which probes shallow core excitations, multiplet structures, and spin-resolved electronic structure; HP nuclear resonant x-ray spectroscopy, which provides phonon densities of state and time-resolved Mössbauer information; HP x-ray imaging, which provides information on hierarchical structures, dynamic processes, and internal strains; HP x-ray diffraction, which determines the fundamental structures and densities of single-crystal, polycrystalline, nanocrystalline, and non-crystalline materials; and HP radial x-ray diffraction, which yields deviatoric, elastic and rheological information. Integrating these tools with hydrostatic or uniaxial pressure media, laser and resistive heating, and cryogenic cooling, has enabled investigations of the structural, vibrational, electronic, and magnetic properties of materials over a wide range of pressure-temperature conditions. --- paper_title: Recent advances in high-pressure science and technology paper_content: Abstract Recently we are witnessing the boom of high-pressure science and technology from a small niche field to becoming a major dimension in physical sciences. One of the most important technological advances is the integration of synchrotron nanotechnology with the minute samples at ultrahigh pressures. Applications of high pressure have greatly enhanced our understanding of the electronic, phonon, and doping effects on the newly emerged graphene and related 2D layered materials. High pressure has created exotic stoichiometry even in common Group 17, 15, and 14 compounds and drastically altered the basic σ and π bonding of organic compounds. Differential pressure measurements enable us to study the rheology and flow of mantle minerals in solid state, thus quantitatively constraining the geodynamics. They also introduce a new approach to understand defect and plastic deformations of nano particles. These examples open new frontiers of high-pressure research. --- paper_title: Calibration of the ruby pressure gauge to 800 kbar under quasi‐hydrostatic conditions paper_content: An improved calibration curve of the pressure shift of the ruby R1 emission line was obtained under quasi-hydrostatic conditions in the diamond-window, high-pressure cell to 800 kbar. Argon was the pressure-transmitting medium. Metallic copper, as a standard, was studied in situ by X ray diffraction. The reference pressure was determined by calibration against known equations of state of the copper sample and by previously obtained data on silver. --- paper_title: The high-entropy alloys with high hardness and soft magnetic property prepared by mechanical alloying and high-pressure sintering paper_content: Abstract The equiatomic multiprincipal CoCrFeCuNi and CoCrFeMnNi high-entropy alloys (HEAs) were consolidated via high pressure sintering (HPS) from the powders prepared by the mechanical alloying method (MA). The structures of the MA'ed CoCrFeCuNi and CoCrFeMnNi powders consisted of a face-centered-cubic (FCC) phase and a minority body-centered cubic (BCC) phase. 
After being consolidated by HPS at 5 GPa, the structure of both HEAs transformed to a single FCC phase. The grain sizes of the HPS'ed CoCrFeCuNi and CoCrFeMnNi HEAs were about 100 nm. The alloys keep the FCC structure until the pressure reaches 31 GPa. The hardness of the HPS'ed CoCrFeCuNi and CoCrFeMnNi HEAs were 494 Hv and 587 Hv, respectively, much higher than their counterparts prepared by casting. Both alloys show typical paramagnetism, however, possessing different saturated magnetization. The mechanisms responsible for the observed influence of Cu and Mn on mechanical behavior and magnetic property of the HEAs are discussed in detail. --- paper_title: Calculating elastic constants in high-entropy alloys using the coherent potential approximation: Current issues and errors paper_content: The new class of high-entropy alloys (HEAs) materials have shown interesting properties, such as high strength and good ductility. However, HEAs present a great challenge for conventional ab initio calculations and the few available theoretical predictions involve a large degree of uncertainty. An often adopted theoretical tool to study HEAs from first-principles is based on the exact muffin-tin orbitals (EMTO) method in combination with the coherent potentials approximation (CPA), which can handle both chemical and magnetic disorders. Here we assess the performance of EMTO-CPA method in describing the elastic properties of HEAs based on Co, Cr, Fe, Mn, and Ni. We carefully scrutinize the effect of numerical parameters and the impact of various magnetic states on the calculated properties. The theoretical results for the elastic moduli are compared to the available experimental values. --- paper_title: Pressure-induced ordering phase transition in high-entropy alloy paper_content: Abstract The order-disorder transition has long been viewed as an important key to study materials structures and properties. Especially, pressure-induced ordering attracted the intense research interest in the recent years. In the present work, the pressure-induced ordering phase transition (disordered-fcc to ordered-fcc structures) in a CoCrCuFeNiPr high-entropy alloy (HEA) was found by employing the in situ high-pressure energy-dispersive X-ray diffraction (EDXRD) technique. It is interesting to note that there exists a pressure-induced fast ordering transition at the pressure ranging from 7.8 GPa to 16.0 GPa, followed by a slow transition with the pressure increase up to 106.4 GPa. We suggest that this phenomenon is caused by the presence of some short-range-order (SRO) local structures in the CoCrCuFeNiPr HEA. These SRO structures can be regarded as embryos, which will develop into the ordered phase with increasing the pressure in the prototype materials. This pressure-induced ordering may provides a new technique for tuning structures and properties of HEAs. --- paper_title: Structural stability of high entropy alloys under pressure and temperature paper_content: The stability of high-entropy alloys (HEAs) is a key issue before their selection for industrial applications. In this study, in-situ high-pressure and high-temperature synchrotron radiation X-ray diffraction experiments have been performed on three typical HEAs Ni20Co20Fe20Mn20Cr20, Hf25Nb25Zr25Ti25, and Re25Ru25Co25Fe25 (at. %), having face-centered cubic (fcc), body-centered cubic (bcc), and hexagonal close-packed (hcp) crystal structures, respectively, up to the pressure of ∼80 GPa and temperature of ∼1262 K. 
Under the extreme conditions of the pressure and temperature, all three studied HEAs remain stable up to the maximum pressure and temperatures achieved. For these three types of studied HEAs, the pressure-dependence of the volume can be well described with the third order Birch-Murnaghan equation of state. The bulk modulus and its pressure derivative are found to be 88.3 GPa and 4 for bcc-Hf25Nb25Zr25Ti25, 193.9 GPa and 5.9 for fcc-Ni20Co20Fe20Mn20Cr20, and 304.6 GPa and 3.8 for hcp-Re25Ru25Co25F... --- paper_title: Pressure-induced fcc to hcp phase transition in Ni-based high entropy solid solution alloys paper_content: A pressure-induced phase transition from the fcc to a hexagonal close-packed (hcp) structure was found in the NiCoCrFe solid solution alloy starting at 13.5 GPa. The phase transition is very sluggish and the transition did not complete at ∼40 GPa. The hcp structure is quenchable to ambient pressure. Only a very small amount (<5%) of the hcp phase was found in the isostructural NiCoCr ternary alloy up to the pressure of 45 GPa, and no obvious hcp phase was found in the NiCoCrFePd system up to 74 GPa. Ab initio Gibbs free energy calculations indicated that the energy differences between the fcc and the hcp phases for the three alloys are very small, but they are sensitive to temperature. The critical transition pressure in NiCoCrFe varies from ∼1 GPa at room temperature to ∼6 GPa at 500 K. --- paper_title: Temperature dependence of the mechanical properties of equiatomic solid solution alloys with face-centered cubic crystal structures paper_content: We found that compared to decades-old theories of strengthening in dilute solid solutions, the mechanical behavior of concentrated solid solutions is relatively poorly understood. A special subset of these materials includes alloys in which the constituent elements are present in equal atomic proportions, including the high-entropy alloys of recent interest. A unique characteristic of equiatomic alloys is the absence of “solvent” and “solute” atoms, resulting in a breakdown of the textbook picture of dislocations moving through a solvent lattice and encountering discrete solute obstacles. Likewise, to clarify the mechanical behavior of this interesting new class of materials, we investigate here a family of equiatomic binary, ternary and quaternary alloys based on the elements Fe, Ni, Co, Cr and Mn that were previously shown to be single-phase face-centered cubic (fcc) solid solutions. The alloys were arc-melted, drop-cast, homogenized, cold-rolled and recrystallized to produce equiaxed microstructures with comparable grain sizes. Tensile tests were performed at an engineering strain rate of 10^−3 s^−1 at temperatures in the range 77–673 K. Unalloyed fcc Ni was processed similarly and tested for comparison. The flow stresses depend to varying degrees on temperature, with some (e.g. NiCoCr, NiCoCrMn and FeNiCoCr) exhibiting yield and ultimate strengths that increase strongly with decreasing temperature, while others (e.g. NiCo and Ni) exhibit very weak temperature dependencies. Moreover, to better understand this behavior, the temperature dependencies of the yield strength and strain hardening were analyzed separately. Lattice friction appears to be the predominant component of the temperature-dependent yield stress, possibly because the Peierls barrier height decreases with increasing temperature due to a thermally induced increase of dislocation width.
In the early stages of plastic flow (5–13% strain, depending on material), the temperature dependence of strain hardening is due mainly to the temperature dependence of the shear modulus. In all the equiatomic alloys, ductility and strength increase with decreasing temperature down to 77 K. --- paper_title: A Fracture-Resistant High-Entropy Alloy for Cryogenic Applications. paper_content: A CrMnFeCoNi alloy is prepared by arc melting the elements and drop casting into copper molds, followed by cold forging and cross rolling at room temperature into sheets roughly 10 mm thick. --- paper_title: The influences of temperature and microstructure on the tensile properties of a CoCrFeMnNi high-entropy alloy paper_content: Abstract An equiatomic CoCrFeMnNi high-entropy alloy, which crystallizes in the face-centered cubic (fcc) crystal structure, was produced by arc melting and drop casting. The drop-cast ingots were homogenized, cold rolled and recrystallized to obtain single-phase microstructures with three different grain sizes in the range 4–160 μm. Quasi-static tensile tests at an engineering strain rate of 10^−3 s^−1 were then performed at temperatures between 77 and 1073 K. Yield strength, ultimate tensile strength and elongation to fracture all increased with decreasing temperature. During the initial stages of plasticity (up to ∼2% strain), deformation occurs by planar dislocation glide on the normal fcc slip system, {1 1 1}〈1 1 0〉, at all the temperatures and grain sizes investigated. Undissociated 1/2〈1 1 0〉 dislocations were observed, as were numerous stacking faults, which imply the dissociation of several of these dislocations into 1/6〈1 1 2〉 Shockley partials. At later stages (∼20% strain), nanoscale deformation twins were observed after interrupted tests at 77 K, but not in specimens tested at room temperature, where plasticity occurred exclusively by the aforementioned dislocations which organized into cells. Deformation twinning, by continually introducing new interfaces and decreasing the mean free path of dislocations during tensile testing (“dynamic Hall–Petch”), produces a high degree of work hardening and a significant increase in the ultimate tensile strength. This increased work hardening prevents the early onset of necking instability and is a reason for the enhanced ductility observed at 77 K. A second reason is that twinning can provide an additional deformation mode to accommodate plasticity. However, twinning cannot explain the increase in yield strength with decreasing temperature in our high-entropy alloy since it was not observed in the early stages of plastic deformation. Since strong temperature dependencies of yield strength are also seen in binary fcc solid solution alloys, it may be an inherent solute effect, which needs further study. --- paper_title: Ab initio thermodynamics of the CoCrFeMnNi high entropy alloy: Importance of entropy contributions beyond the configurational one paper_content: We investigate the thermodynamic properties of the prototype equi-atomic high entropy alloy (HEA) CoCrFeMnNi by using finite-temperature ab initio methods. All relevant free energy contributions are considered for the hcp, fcc, and bcc structures, including electronic, vibrational, and magnetic excitations. We predict the paramagnetic fcc phase to be most stable above room temperature in agreement with experiment. The corresponding thermal expansion and bulk modulus agree likewise well with experimental measurements.
A careful analysis of the underlying entropy contributions allows us to identify that the originally postulated dominance of the configurational entropy is questionable. We show that vibrational, electronic, and magnetic entropy contributions must be considered on an equal footing to reliably predict phase stabilities in HEA systems. --- paper_title: Irreversible Phase Transformation in a CoCrFeMnNi High Entropy Alloy under Hydrostatic Compression paper_content: Abstract An equal-molar CoCrFeMnNi, face-centered-cubic high-entropy alloy system is investigated using in-situ angular-dispersive X-ray diffraction under hydrostatic compression up to 20 GPa via diamond anvil cell. The evolutions of multiple diffraction peaks are collected simultaneously to elucidate the phase stability field. The results indicated that an irreversible phase transformation had occurred in the high entropy alloy upon decompression to ambient pressure. A reference material (n-type silicon-doped gallium arsenide) was investigated following the same protocol to demonstrate the different deformation mechanisms. It is suggested that the atomic bonding characteristics on the phase stability may play an important role in the high entropy alloys. --- paper_title: Effects of non-hydrostaticity and grain size on the pressure-induced phase transition of the CoCrFeMnNi high-entropy alloy paper_content: Recently, an irreversible polymorphic transition from face-centered cubic to hexagonal close-packing was surprisingly observed under high pressure in the prototype CoCrFeMnNi high-entropy alloys (HEAs) by various research groups. This unexpected phase transition brings new insights into the stability of HEAs, and its irreversibility stimulates exploration for new HEAs via high-pressure compression synthesis. However, the onset pressure for the phase transition was reported to fluctuate over a vast range from ∼7 to above 49 GPa in the reported experiments. The reason for this inconsistency remains unclear and puzzles the HEA community. To address this problem, this work systematically investigates the effects of non-hydrostaticity and grain size. Our results demonstrate that larger deviatoric stress induced by the non-hydrostaticity of the pressure medium and larger grain size of the initial sample can both promote a phase transition and, therefore, considerably depress the onset pressure.
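Several abstracts in this reference set (for example, the structural-stability study quoted above) extract the bulk modulus K0 and its pressure derivative K0' by fitting pressure-volume data to the third-order Birch-Murnaghan equation of state. The sketch below simply evaluates that standard functional form; the example parameter values reuse the bcc-Hf25Nb25Zr25Ti25 numbers quoted above, and the code is illustrative rather than a reproduction of any paper's fitting procedure.

# Third-order Birch-Murnaghan equation of state, the form used in the
# high-pressure XRD abstracts above to extract K0 and K0'. Illustrative sketch only.

def birch_murnaghan_pressure(v_over_v0, k0_gpa, k0_prime):
    """Pressure (GPa) at compression ratio V/V0 for bulk modulus K0 and derivative K0'."""
    x = (1.0 / v_over_v0) ** (1.0 / 3.0)          # x = (V0/V)^(1/3)
    term = x ** 7 - x ** 5                        # (V0/V)^(7/3) - (V0/V)^(5/3)
    correction = 1.0 + 0.75 * (k0_prime - 4.0) * (x ** 2 - 1.0)
    return 1.5 * k0_gpa * term * correction

# Example with the bcc-Hf25Nb25Zr25Ti25 values quoted above (K0 = 88.3 GPa, K0' = 4):
for v in (1.00, 0.95, 0.90, 0.85, 0.80):
    print(f"V/V0 = {v:.2f}  ->  P = {birch_murnaghan_pressure(v, 88.3, 4.0):6.1f} GPa")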
--- paper_title: High pressure synthesis of a hexagonal close-packed phase of the high-entropy alloy CrMnFeCoNi paper_content: High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques. --- paper_title: Polymorphism in a high-entropy alloy paper_content: Polymorphism, which describes the occurrence of different lattice structures in a crystalline material, is a critical phenomenon in materials science and condensed matter physics. Recently, configuration disorder was compositionally engineered into single lattices, leading to the discovery of high-entropy alloys and high-entropy oxides. For these novel entropy-stabilized forms of crystalline matter with extremely high structural stability, is polymorphism still possible? Here by employing in situ high-pressure synchrotron radiation X-ray diffraction, we reveal a polymorphic transition from face-centred-cubic (fcc) structure to hexagonal-close-packing (hcp) structure in the prototype CoCrFeMnNi high-entropy alloy. The transition is irreversible, and our in situ high-temperature synchrotron radiation X-ray diffraction experiments at different pressures of the retained hcp high-entropy alloy reveal that the fcc phase is a stable polymorph at high temperatures, while the hcp structure is more thermodynamically favourable at lower temperatures. As pressure is increased, the critical temperature for the hcp-to-fcc transformation also rises. Whether a polymorphic transition exists in high entropy alloys or not had remained unclear since the discovery of these alloys more than a decade ago. Here the authors report an irreversible polymorphic transition from fcc to hcp in the prototype FeCoCrMnNi high entropy alloy and provide evidence for the fcc phase being more stable than the hcp phase only at high temperatures. --- paper_title: High-pressure high-temperature tailoring of High Entropy Alloys for extreme environments paper_content: Abstract The exceptional performance of some High Entropy Alloys (HEAs) under extreme conditions holds out the possibility of new and exciting materials for engineers to exploit in future applications. In this work, instead of focusing solely on the effects of high temperature on HEAs, the effects of combined high temperature and high pressure were observed. Phase transformations occurring in a pristine HEA, the as-cast bcc–Al2CoCrFeNi, are heavily influenced by temperature, pressure, and by scandium additions. As-cast bcc–Al2CoCrFeNi and fcc–Al0.3CoCrFeNi HEAs are structurally stable below 60 GPa and do not undergo phase transitions. 
Addition of scandium to bcc–Al2CoCrFeNi results in the precipitation of hexagonal AlScM intermetallic (W-phase), which dissolves in the matrix after high-pressure high-temperature treatment. Addition of scandium and high-pressure sintering improve hardness and thermal stability of well-investigated fcc- and bcc- HEAs. The dissolution of the intermetallic in the main phase at high pressure suggests a new strategy in the design and optimization of HEAs. --- paper_title: Pressure-induced ordering phase transition in high-entropy alloy paper_content: Abstract The order-disorder transition has long been viewed as an important key to study materials structures and properties. Especially, pressure-induced ordering attracted the intense research interest in the recent years. In the present work, the pressure-induced ordering phase transition (disordered-fcc to ordered-fcc structures) in a CoCrCuFeNiPr high-entropy alloy (HEA) was found by employing the in situ high-pressure energy-dispersive X-ray diffraction (EDXRD) technique. It is interesting to note that there exists a pressure-induced fast ordering transition at the pressure ranging from 7.8 GPa to 16.0 GPa, followed by a slow transition with the pressure increase up to 106.4 GPa. We suggest that this phenomenon is caused by the presence of some short-range-order (SRO) local structures in the CoCrCuFeNiPr HEA. These SRO structures can be regarded as embryos, which will develop into the ordered phase with increasing the pressure in the prototype materials. This pressure-induced ordering may provides a new technique for tuning structures and properties of HEAs. --- paper_title: Structural stability of high entropy alloys under pressure and temperature paper_content: The stability of high-entropy alloys (HEAs) is a key issue before their selection for industrial applications. In this study, in-situ high-pressure and high-temperature synchrotron radiation X-ray diffraction experiments have been performed on three typical HEAs Ni20Co20Fe20Mn20Cr20, Hf25Nb25Zr25Ti25, and Re25Ru25Co25Fe25 (at. %), having face-centered cubic (fcc), body-centered cubic (bcc), and hexagonal close-packed (hcp) crystal structures, respectively, up to the pressure of ∼80 GPa and temperature of ∼1262 K. Under the extreme conditions of the pressure and temperature, all three studied HEAs remain stable up to the maximum pressure and temperatures achieved. For these three types of studied HEAs, the pressure-dependence of the volume can be well described with the third order Birch-Murnaghan equation of state. The bulk modulus and its pressure derivative are found to be 88.3 GPa and 4 for bcc-Hf25Nb25Zr25Ti25, 193.9 GPa and 5.9 for fcc-Ni20Co20Fe20Mn20Cr20, and 304.6 GPa and 3.8 for hcp-Re25Ru25Co25F... --- paper_title: Effects of Al addition on the microstructure and mechanical property of AlxCoCrFeNi high-entropy alloys paper_content: A five-component AlxCoCrFeNi high-entropy alloy (HEA) system with finely-divided Al contents (x in molar ratio, x = 0–2.0) was prepared by vacuum arc melting and casting method. The effects of Al addition on the crystal structure, microstructure and mechanical property were investigated using X-ray diffraction (XRD), scanning electron microscopy (SEM), and Vickers hardness tester. The as-cast AlxCoCrFeNi alloys can possess face-centered cubic (FCC), body-centered cubic (BCC) or mixed crystal structure, depending on the aluminum content. 
The increase of aluminum content results in the formation of BCC structure which is a dominant factor of hardening. All the BCC phases in the as-cast alloys have a nano-scale two-phase structure formed by spinodal decomposition mechanism. The Al0.9CoCrFeNi alloy exhibits a finest spinodal structure consisting of alternating interconnected two-phase microstructure which explains its maximum hardness of Hv 527 among the alloys. The chemical composition analysis of FCC and BCC crystal structures, their lattice constants, overall hardness demonstrate that the formation of a single FCC solid solution should have Al addition <11 at.% and the formation of a single BCC solid solution requires Al addition at least 18.4 at.% in the AlxCoCrFeNi system. --- paper_title: Pressure-induced phase transition in the AlCoCrFeNi high-entropy alloy paper_content: Abstract The recently discovered pressure-induced polymorphic transitions (PIPT) in high-entropy alloys (HEAs) have opened an avenue towards understanding the phase stability and achieving atomic structural tuning of HEAs. So far, whether there is any PIPT in the body-centered cubic ( bcc ) HEAs remains unclear. Here, we studied an ordered bcc -structured (B2 phase) AlCoCrFeNi HEA using in situ synchrotron radiation X-ray diffraction (XRD) up to 42 GPa and ex situ transmission electron microscopy, a PIPT to a likely-distorted phase was observed. These results highlight the effect of the lattice distortion on the stability of HEAs and extend the polymorphism into ordered bcc -structured HEAs. --- paper_title: Searching for Next Single-Phase High-Entropy Alloy Compositions paper_content: There has been considerable technological interest in high-entropy alloys (HEAs) since the initial publications on the topic appeared in 2004. However, only several of the alloys investigated are truly single-phase solid solution compositions. These include the FCC alloys CoCrFeNi and CoCrFeMnNi based on 3d transition metals elements and BCC alloys NbMoTaW, NbMoTaVW, and HfNbTaTiZr based on refractory metals. The search for new single-phase HEAs compositions has been hindered by a lack of an effective scientific strategy for alloy design. This report shows that the chemical interactions and atomic diffusivities predicted from ab initio molecular dynamics simulations which are closely related to primary crystallization during solidification can be used to assist in identifying single phase high-entropy solid solution compositions. Further, combining these simulations with phase diagram calculations via the CALPHAD method and inspection of existing phase diagrams is an effective strategy to accelerate the discovery of new single-phase HEAs. This methodology was used to predict new single-phase HEA compositions. These are FCC alloys comprised of CoFeMnNi, CuNiPdPt and CuNiPdPtRh, and HCP alloys of CoOsReRu. --- paper_title: Structural stability of high entropy alloys under pressure and temperature paper_content: The stability of high-entropy alloys (HEAs) is a key issue before their selection for industrial applications. In this study, in-situ high-pressure and high-temperature synchrotron radiation X-ray diffraction experiments have been performed on three typical HEAs Ni20Co20Fe20Mn20Cr20, Hf25Nb25Zr25Ti25, and Re25Ru25Co25Fe25 (at. %), having face-centered cubic (fcc), body-centered cubic (bcc), and hexagonal close-packed (hcp) crystal structures, respectively, up to the pressure of ∼80 GPa and temperature of ∼1262 K. 
Under the extreme conditions of the pressure and temperature, all three studied HEAs remain stable up to the maximum pressure and temperatures achieved. For these three types of studied HEAs, the pressure-dependence of the volume can be well described with the third order Birch-Murnaghan equation of state. The bulk modulus and its pressure derivative are found to be 88.3 GPa and 4 for bcc-Hf25Nb25Zr25Ti25, 193.9 GPa and 5.9 for fcc-Ni20Co20Fe20Mn20Cr20, and 304.6 GPa and 3.8 for hcp-Re25Ru25Co25F... --- paper_title: High-Entropy Alloys in Hexagonal Close-Packed Structure paper_content: The microstructures and properties of high-entropy alloys (HEAs) based on the face-centered cubic and body-centered cubic structures have been studied extensively in the literature, but reports on HEAs in the hexagonal close-packed (HCP) structure are very limited. Using an efficient strategy in combining phase diagram inspection, CALPHAD modeling, and ab initio molecular dynamics simulations, a variety of new compositions are suggested that may hold great potentials in forming single-phase HCP HEAs that comprise rare earth elements and transition metals, respectively. Experimental verification was carried out on CoFeReRu and CoReRuV using X-ray diffraction, scanning electron microscopy, and energy dispersion spectroscopy. --- paper_title: Pressure-induced phase transitions in HoDyYGdTb high-entropy alloy paper_content: Abstract The HoDyYGdTb high-entropy alloy (HEA) was synthesized and its structural phase transitions were determined under compression up to 60.1 GPa at room temperature. Three transformations following the sequence hexagonal close-packed (hcp) → samarium type (Sm-type) → double hexagonal close-packed (dhcp) → distorted face-center cubic (dfcc) are observed at 4.4, 26.7, and 40.2 GPa, respectively. The high pressure equation of state was determined for the HEA according to third-order Birch-Murnaghan equation of state. The results show that the bulk modulus and atomic volume of the alloy obey the “additivity law”. --- paper_title: High-Entropy Alloys with a Hexagonal Close-Packed Structure Designed by Equi-Atomic Alloy Strategy and Binary Phase Diagrams paper_content: High-entropy alloys (HEAs) with an atomic arrangement of a hexagonal close-packed (hcp) structure were found in YGdTbDyLu and GdTbDyTmLu alloys as a nearly single hcp phase. The equi-atomic alloy design for HEAs assisted by binary phase diagrams started with selecting constituent elements with the hcp structure at room temperature by permitting allotropic transformation at a high temperature. The binary phase diagrams comprising the elements thus selected were carefully examined for the characteristics of miscibility in both liquid and solid phases as well as in both solids due to allotropic transformation. The miscibility in interest was considerably narrow enough to prevent segregation from taking place during casting around the equi-atomic composition. The alloy design eventually gave candidates of quinary equi-atomic alloys comprising heavy lanthanides principally. The XRD analysis revealed that YGdTbDyLu and GdTbDyTmLu alloys thus designed are formed into the hcp structure in a nearly single phase. It was found that these YGdTbDyLu and GdTbDyTmLu HEAs with an hcp structure have delta parameter (δ) values of 1.4 and 1.6, respectively, and mixing enthalpy (ΔHmix) = 0 kJ/mol for both alloys. These alloys were consistently plotted in zone S for disordered HEAs in a δ-ΔHmix diagram reported by Zhang et al. 
(Adv Eng Mater 10:534, 2008). The value of the valence electron concentration of the alloys was evaluated to be 3, the first such report for HEAs with an hcp structure. The finding of HEAs with the hcp structure is significant in that HEAs have been extended to covering all three simple metallic crystalline structures ultimately followed by the body- and face-centered cubic (bcc and fcc) phases and to all four simple solid solutions that contain the glassy phase from high-entropy bulk metallic glasses. --- paper_title: Rare-earth high entropy alloys with hexagonal close-packed structure paper_content: The formation of octonary DyErGdHoLuScTbY and senary DyGdHoLaTbY and ErGdHoLaTbY high-entropy alloys (HEAs) with the hexagonal close-packed (HCP) structure was reported in this study. Experiments using scanning electron microscopy and x-ray diffraction confirmed the single HCP solid solution in the as-cast state for these three HEAs if the presence of minor rare-earth oxides due to contamination from processing is ignored. The measured compressive yield stress values for these HEAs at room temperature are 245, 205, and 360 MPa for the ErGdHoLaTbY, DyGdHoLaTbY, and DyErGdHoLuScTbY HEAs, respectively. The corresponding solid solution strengthening contributions for these HEAs were estimated using a simple elastic model, and the resulting contributions were 28 MPa, 27 MPa, and 42 MPa for the three aforementioned HEAs. --- paper_title: First hexagonal close packed high-entropy alloy with outstanding stability under extreme conditions and electrocatalytic activity for methanol oxidation paper_content: Abstract High-entropy alloys containing 5 and 6 platinum group metals have been prepared by thermal decomposition of single-source precursors not requiring high temperature or mechanical alloying. The prepared Ir0.19Os0.22Re0.21Rh0.20Ru0.19 alloy is the first example of a single-phase hexagonal high-entropy alloy. Heat treatment up to 1500 K and compression up to 45 GPa do not result in phase changes, a record temperature and pressure stability for a single-phase high-entropy alloy. The alloys show pronounced electrocatalytic activity in methanol oxidation, which opens a route for the use of high-entropy alloys as materials for sustainable energy conversion. --- paper_title: Density functional theory based calculations for high pressure research paper_content: Abstract Density functional theory based calculations are commonly employed to complement experimental high pressure research. Here, a brief overview of the underlying theory and available codes is provided, followed by some applications. 
The influence of the choice of the exchange-correlation functional on predicted structural parameters and physical properties is discussed. --- paper_title: Ab initio thermodynamics of the CoCrFeMnNi high entropy alloy: Importance of entropy contributions beyond the configurational one paper_content: We investigate the thermodynamic properties of the prototype equi-atomic high entropy alloy (HEA) CoCrFeMnNi by using finite-temperature ab initio methods. All relevant free energy contributions are considered for the hcp, fcc, and bcc structures, including electronic, vibrational, and magnetic excitations. We predict the paramagnetic fcc phase to be most stable above room temperature in agreement with experiment. The corresponding thermal expansion and bulk modulus agree likewise well with experimental measurements. A careful analysis of the underlying entropy contributions allows us to identify that the originally postulated dominance of the configurational entropy is questionable. We show that vibrational, electronic, and magnetic entropy contributions must be considered on an equal footing to reliably predict phase stabilities in HEA systems. --- paper_title: Perspective: crystal structure prediction at high pressures. paper_content: Crystal structure prediction at high pressures unbiased by any prior known structure information has recently become a topic of considerable interest. We here present a short overview of recently developed structure prediction methods and propose current challenges for crystal structure prediction. We focus on first-principles crystal structure prediction at high pressures, paying particular attention to novel high pressure structures uncovered by efficient structure prediction methods. Finally, a brief perspective on the outstanding issues that remain to be solved and some directions for future structure prediction researches at high pressure are presented and discussed. --- paper_title: Materials discovery at high pressures paper_content: High pressure offers a unique degree of freedom for the creation of new materials, leading to new superconductors, superhard materials, high-energy-density materials and exotic chemical materials with unprecedented properties. This Review discusses these materials, along with recently developed theoretical and experimental methods for materials discovery at high pressures. --- paper_title: A critical review of high entropy alloys and related concepts paper_content: Abstract : High entropy alloys (HEAs) are barely 12 years old. The field has stimulated new ideas and has inspired the exploration of the vast composition space offered by multi-principal element alloys (MPEAs). Here we present a critical review of this field, with the intent of summarizing key findings, uncovering major trends and providing guidance for future efforts. Major themes in this assessment include definition of terms; thermodynamic analysis of complex, concentrated alloys (CCAs); taxonomy of current alloy families; microstructures; mechanical properties; potential applications; and future efforts. Based on detailed analyses, the following major results emerge. Although classical thermodynamic concepts are unchanged, trends in MPEAs can be different than in simpler alloys. Common thermodynamic perceptions can be misleading and new trends are described. From a strong focus on 3d transition metal alloys, there are now seven distinct CCA families. 
A new theme of designing alloy families by selecting elements to achieve a specific, intended purpose is starting to emerge. A comprehensive microstructural assessment is performed using three datasets: experimental data drawn from 408 different alloys and two computational datasets generated using the CALculated PHAse Diagram (CALPHAD) method. Each dataset emphasizes different elements and shows different microstructural trends. Trends in these three datasets are all predicted by a structure in - structure out (SISO) analysis developed here that uses the weighted fractions of the constituent element crystal structures in each dataset. A total of 13 distinct multi-principal element single-phase fields are found in this microstructural assessment. Relationships between composition, microstructure and properties are established for 3d transition metal MPEAs, including the roles of Al, Cr and Cu. Critical evaluation shows that commercial austenitic stainless steels and nickel alloys with 3 or more principal elements are MPEAs. ---
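As background for the configurational-entropy discussion in the references above, the ideal entropy of mixing that motivates the "high-entropy" designation is commonly written as follows (a standard thermodynamic expression, not taken from any single cited paper):

$$\Delta S_{\text{conf}} = -R \sum_{i=1}^{n} c_i \ln c_i,$$

where $c_i$ is the atomic fraction of component $i$ and $R$ is the gas constant. For an equi-atomic alloy with $n$ components ($c_i = 1/n$) this reduces to $\Delta S_{\text{conf}} = R \ln n$, e.g., $R \ln 5 \approx 1.61R$ for the five-component CoCrFeMnNi alloy discussed above.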
Title: High-Pressure Induced Phase Transitions in High-Entropy Alloys: A Review
Section 1: Introduction
Description 1: Introduce the concept of high-entropy alloys (HEAs) and their significance in material science. Mention the discovery of single solid-solution phase alloys with multi-principal elements and the resulting focus on high-entropy alloys.
Section 2: Experimental Methods
Description 2: Detail the experimental techniques used to study high-entropy alloys under high pressure, with a focus on diamond anvil cells (DACs) and in situ measurement techniques.
Section 3: Structural Stability and Evolution of HEAs under High Pressure
Description 3: Review the structural stability of high-entropy alloys under high pressure based on recent research. Discuss different initial crystal structures and their transitions.
Section 4: Fcc-Structured HEAs
Description 4: Highlight findings specific to fcc-structured high-entropy alloys, including phase transition behaviors and the effects of various factors like pressure environment, grain size, and alloying elements.
Section 5: Bcc-Structured HEAs
Description 5: Discuss the structural stability of bcc-structured high-entropy alloys under high pressure, including specific examples and any observed phase transitions.
Section 6: Hcp-Structured HEAs
Description 6: Provide an overview of high-pressure stability in hcp-structured high-entropy alloys, with specific cases and observed phase transitions during compression experiments.
Section 7: Conclusions and Outlooks
Description 7: Summarize the key findings from high-pressure studies on high-entropy alloys. Provide future outlooks for research directions, including synergic effects of pressure-composition-temperature, combining high-pressure techniques, and theoretical calculations.
5G: A Tutorial Overview of Standards, Trials, Challenges, Deployment, and Practice
13
--- paper_title: 5G-Enabled Tactile Internet paper_content: The long-term ambition of the Tactile Internet is to enable a democratization of skill, and how it is being delivered globally. An integral part of this is to be able to transmit touch in perceived real-time, which is enabled by suitable robotics and haptics equipment at the edges, along with an unprecedented communications network. The fifth generation (5G) mobile communications systems will underpin this emerging Internet at the wireless edge. This paper presents the most important technology concepts, which lay at the intersection of the larger Tactile Internet and the emerging 5G systems. The paper outlines the key technical requirements and architectural approaches for the Tactile Internet, pertaining to wireless access protocols, radio resource management aspects, next generation core networking capabilities, edge-cloud, and edge-AI capabilities. The paper also highlights the economic impact of the Tactile Internet as well as a major shift in business models for the traditional telecommunications ecosystem. --- paper_title: Wideband Millimeter-Wave Propagation Measurements and Channel Models for Future Wireless Communication System Design paper_content: The relatively unused millimeter-wave (mmWave) spectrum offers excellent opportunities to increase mobile capacity due to the enormous amount of available raw bandwidth. This paper presents experimental measurements and empirically-based propagation channel models for the 28, 38, 60, and 73 GHz mmWave bands, using a wideband sliding correlator channel sounder with steerable directional horn antennas at both the transmitter and receiver from 2011 to 2013. More than 15,000 power delay profiles were measured across the mmWave bands to yield directional and omnidirectional path loss models, temporal and spatial channel models, and outage probabilities. Models presented here offer side-by-side comparisons of propagation characteristics over a wide range of mmWave bands, and the results and models are useful for the research and standardization process of future mmWave systems. Directional and omnidirectional path loss models with respect to a 1 m close-in free space reference distance over a wide range of mmWave frequencies and scenarios using directional antennas in real-world environments are provided herein, and are shown to simplify mmWave path loss models, while allowing researchers to globally compare and standardize path loss parameters for emerging mmWave wireless networks. A new channel impulse response modeling framework, shown to agree with extensive mmWave measurements over several bands, is presented for use in link-layer simulations, using the observed fact that spatial lobes contain multipath energy that arrives at many different propagation time intervals. The results presented here may assist researchers in analyzing and simulating the performance of next-generation mmWave wireless networks that will rely on adaptive antennas and multiple-input and multiple-output (MIMO) antenna systems. --- paper_title: Understanding the Current Operation and Future Roles of Wireless Networks: Co-Existence, Competition and Co-Operation in the Unlicensed Spectrum Bands paper_content: Technology and policy are coming together to enable a paradigmatic change to the most widely used mechanism, exclusive rights, which allows mobile telecommunications operators to use the radio spectrum. 
Although spectrum sharing is not a new idea, the limited supply of spectrum and the enormous demand for mobile broadband services are forcing spectrum authorities to look more closely into a range of tools that might accelerate its adoption. This paper seeks to understand how co-existence and co-operation of Wi-Fi and cellular networks in the unlicensed spectrum can increase the overall capacity of heterogeneous wireless networks. It also reveals the challenges posed by new uses, such as machine-to-machine communications and the Internet of Things. It also brings together two major proposed regulatory approaches, such as those by the U.K.’s Ofcom and the European Commission, which currently represent leading efforts to provide spectrum authorities with robust spectrum sharing frameworks, to discuss policy tools likely to be implemented. --- paper_title: Spectrum Sharing in mmWave Cellular Networks via Cell Association, Coordination, and Beamforming paper_content: This paper investigates the extent to which spectrum sharing in millimeter-wave (mmWave) networks with multiple cellular operators is a viable alternative to traditional dedicated spectrum allocation. Specifically, we develop a general mathematical framework to characterize the performance gain that can be obtained when spectrum sharing is used, as a function of the underlying beamforming, operator coordination, bandwidth, and infrastructure sharing scenarios. The framework is based on joint beamforming and cell association optimization, with the objective of maximizing the long-term throughput of the users. Our asymptotic and non-asymptotic performance analyses reveal five key points: 1) spectrum sharing with light on-demand intra- and inter-operator coordination is feasible, especially at higher mmWave frequencies (for example, 73 GHz); 2) directional communications at the user equipment substantially alleviate the potential disadvantages of spectrum sharing (such as higher multiuser interference); 3) large numbers of antenna elements can reduce the need for coordination and simplify the implementation of spectrum sharing; 4) while inter-operator coordination can be neglected in the large-antenna regime, intra-operator coordination can still bring gains by balancing the network load; and 5) critical control signals among base stations, operators, and user equipment should be protected from the adverse effects of spectrum sharing, for example by means of exclusive resource allocation. The results of this paper, and their extensions obtained by relaxing some ideal assumptions, can provide important insights for future standardization and spectrum policy. --- paper_title: WiGiG: Multi-gigabit wireless communications in the 60 GHz band paper_content: The Wireless Gigabit Alliance [1] - commonly called WiGig - is an industry consortium devoted to the development and promotion of wireless communications in the 60 GHz band. Recent advances in 60 GHz technology and demand for higher-speed wireless connections are key drivers for WiGig. Among the unlicensed frequency bands available for wireless networks, 60 GHz is uniquely suited for carrying extremely high data rates (multiple gigabits per second) over short distances. WiGig has developed a medium access control (MAC) layer, a physical (PHY) layer, and several protocol adaptation layers (PALs) to enable interoperable devices that take advantage of these extremely high data rates. 
WiGig is also working closely with standards bodies, including IEEE 802.11, and other industry groups, such as the Wi-Fi Alliance, to help enable certification of standards-compliant devices. --- paper_title: A study on the coexistence of fixed satellite service and cellular networks in a mmWave scenario paper_content: The use of a larger bandwith in the millimeter wave (mmWave) spectrum is one of the key components of next generation cellular networks. Currently, part of this band is allocated on a co-primary basis to a number of other applications, such as the fixed satellite services (FSSs). In this paper, we investigate the coexistence between a cellular network and FSSs in a mmWave scenario. In light of the parameters recommended by the standard and the recent results presented in the literature on the mmWave channel model, we analyze different BSs deployments and different antenna configurations at the transmitters. Finally, we show how, exploiting the features of a mmWave scenario, the coexistence between cellular and satellite services is feasible and the interference at the FSS antenna can be kept below recommended levels. --- paper_title: Spectrum Pooling in MmWave Networks: Opportunities, Challenges, and Enablers paper_content: Motivated by the specific characteristics of mmWave technologies, we discuss the possibility of an authorization regime that allows spectrum sharing between multiple operators, also referred to as ... --- paper_title: Massive MIMO for Next Generation Wireless Systems paper_content: Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic. 
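The massive MIMO overview above argues that a large excess of base-station antennas focuses radiated energy toward the intended terminals. The short Python sketch below is only an illustration of that scaling and is not code from any cited paper: it applies maximum-ratio transmission (conjugate beamforming) to synthetic i.i.d. Rayleigh channels and prints how the ratio of intended-user power to cross-user leakage grows with the number of antennas M; the values of K, trials, and M are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4          # number of single-antenna terminals (arbitrary example value)
trials = 200   # Monte Carlo channel realizations

for M in (8, 32, 128, 512):  # base-station antenna counts
    sig_pow, leak_pow = 0.0, 0.0
    for _ in range(trials):
        # i.i.d. Rayleigh channel: rows are terminals, columns are BS antennas
        H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
        # maximum-ratio transmission: unit-norm conjugate of each terminal's channel vector
        W = H.conj().T / np.linalg.norm(H, axis=1)
        G = H @ W                                    # effective K x K channel after precoding
        sig_pow += np.mean(np.abs(np.diag(G)) ** 2)  # power delivered to intended users
        off_diag = G - np.diag(np.diag(G))
        leak_pow += np.mean(np.abs(off_diag) ** 2)   # cross-user leakage (diagonal zeroed)
    print(f"M = {M:4d}: signal-to-leakage ratio ~ {sig_pow / leak_pow:.1f}")
```

With these placeholder settings the printed ratio grows roughly in proportion to M, which is a simple numerical view of the energy-focusing argument made qualitatively in the abstract.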
--- paper_title: Coordinated multipoint: Concepts, performance, and field trial results paper_content: Coordinated multipoint or cooperative MIMO is one of the promising concepts to improve cell edge user data rate and spectral efficiency beyond what is possible with MIMOOFDM in the first versions of LTE or WiMAX. Interference can be exploited or mitigated by cooperation between sectors or different sites. Significant gains can be shown for both the uplink and downlink. A range of technical challenges were identified and partially addressed, such as backhaul traffic, synchronization and feedback design. This article also shows the principal feasibility of COMP in two field testbeds with multiple sites and different backhaul solutions between the sites. These activities have been carried out by a powerful consortium consisting of universities, chip manufacturers, equipment vendors, and network operators. --- paper_title: Scaling up MIMO: Opportunities and challenges with very large arrays paper_content: Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. --- paper_title: Interference coordination for dense wireless networks paper_content: The promise of ubiquitous and super-fast connectivity for the upcoming years will be in large part fulfilled by the addition of base stations and spectral aggregation. The resulting very dense networks (DenseNets) will face a number of technical challenges. Among others, the interference emerges as an old acquaintance with new significance. As a matter of fact, the interference conditions and the role of aggressor and victim depend to a large extent on the density and the scenario. To illustrate this, downlink interference statistics for different 3GPP simulation scenarios and a more irregular and dense deployment in Tokyo are compared. Evolution to DenseNets offers new opportunities for further development of downlink interference cooperation techniques. Various mechanisms in LTE and LTE-Advanced are revisited. Some techniques try to anticipate the future in a proactive way, whereas others simply react to an identified interference problem. As an example, we propose two algorithms to apply time domain and frequency domain small cell interference coordination in a DenseNet. --- paper_title: Hybrid Digital and Analog Beamforming Design for Large-Scale Antenna Arrays paper_content: The potential of using of millimeter wave (mmWave) frequency for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming methods which require one radio frequency (RF) chain per antenna element is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components in high frequencies. To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with much fewer number of RF chains. 
Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer number of RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves a performance close to the performance of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used. --- paper_title: Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G paper_content: With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. These results can be conveniently utilized to guide practical LSAS design for optimal energy/ spectrum efficiency trade-off. Finally, a reference signal design for the hybrid beamform structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G. --- paper_title: Downlink Cellular Network Analysis With Multi-Slope Path Loss Models paper_content: Existing cellular network analyses, and even simulations, typically use the standard path loss model where received power decays like $\|x\|^{-\alpha}$ over a distance $\|x\|$. This standard path loss model is quite idealized, and in most scenarios the path loss exponent $\alpha$ is itself a function of $\|x\|$, typically an increasing one. Enforcing a single path loss exponent can lead to orders of magnitude differences in average received and interference powers versus the true values. In this paper we study \emph{multi-slope} path loss models, where different distance ranges are subject to different path loss exponents. 
We focus on the dual-slope path loss function, which is a piece-wise power law and continuous and accurately approximates many practical scenarios. We derive the distributions of SIR, SNR, and finally SINR before finding the potential throughput scaling, which provides insight on the observed cell-splitting rate gain. The exact mathematical results show that the SIR monotonically decreases with network density, while the converse is true for SNR, and thus the network coverage probability in terms of SINR is maximized at some finite density. With ultra-densification (network density goes to infinity), there exists a \emph{phase transition} in the near-field path loss exponent $\alpha_0$: if $\alpha_0>1$ unbounded potential throughput can be achieved asymptotically; if $\alpha_0<1$, ultra-densification leads in the extreme case to zero throughput. --- paper_title: Large Scale Parameter for the WINNER II Channel Model at 2.53 GHz in Urban Macro Cell paper_content: This paper presents results of wide band channel measurements at 2.53 GHz for a representative urban macro cell environment in Ilmenau, Germany. The extensive channel sounding campaign covered the MIMO radio links from 22 mobile tracks to 3 different base stations and 1 relay station. The results presented in this paper provide insight into the large scale parameter analysis of the power, delay domain, including the transmission loss and the statistical distributions of the shadow fading, narrowband k-factor and delay spread. Large scale parameters from angle domain (azimuth and elevation) are derived based on the high resolution multipath parameter estimation (RIMAX). Furthermore the cross correlation of these parameters are investigated and compared to state-of-the-art channel models as from the IST-WINNER. --- paper_title: Cluster-based analysis of 3D MIMO channel measurement in an urban environment paper_content: Massive MIMO (multiple - input - multiple - output) and full-dimensional MIMO systems in next-generation cellular communications systems as well as high-data-rate military systems have garnered considerable attention recently. For the assessment of their performance, knowledge of the 3D propagation channel characteristics, i.e., azimuth and elevation of the multipath components (MPCs) is essential. In this paper, we present first results of a 3D outdoor propagation channel measurement campaign performed in an urban macro-cellular environment. The measurements were performed with a 20 MHz wideband polarimetric MIMO channel sounder centered at 2.53 GHz. At each measurement location, parameters of all MPCs observed were extracted with RIMAX, an iterative maximum likelihood high-resolution algorithm. It was observed that MPCs naturally grouped into clusters. We then present a cluster-based analysis of the propagation channel providing some results of the intra and inter cluster parameters and their relevant statistics. Correlation between all extracted cluster parameters are also provided. Parameters such as elevation and azimuth spread (at the base-station) in this work have been used as input to recent international channel model standardization. --- paper_title: The double-directional radio channel paper_content: We introduce the concept of the double-directional mobile radio channel. It is called this because it includes angular information at both link ends, e.g., at the base station and at the mobile station. We show that this angular information can be obtained with synchronized antenna arrays at both link ends. 
In wideband high-resolution measurements, we use a switched linear array at the receiver and a virtual-cross array at the transmitter. We evaluate the raw measurement data with a technique that alternately used estimation and beamforming, and that relied on ESPRIT (estimation of signal parameters via rotational invariance techniques) to obtain superresolution in both angular domains and in the delay domain. In sample microcellular scenarios (open and closed courtyard, line-of-sight and obstructed line-of-sight), up to 50 individual propagation paths are determined. The major multipath components are matched precisely to the physical environment by geometrical considerations. Up to three reflection/scattering points per propagation path are identified and localized, lending insight into the multipath spreading properties in a microcell. The extracted multipath parameters allow unambiguous scatterer identification and channel characterization, independently of a specific antenna, its configuration (single/array), and its pattern. The measurement results demonstrate a considerable amount of power being carried via multiply reflected components, thus suggesting revisiting the popular single-bounce propagation models. It turns out that the wideband double-directional evaluation is a most complete method for separating multipath components. Due to its excellent spatial resolution, the double-directional concept provides accurate estimates of the channel's multipath-richness, which is the important parameter for the capacity of multiple-input multiple-output (MIMO) channels. --- paper_title: Massive MIMO Channel Modeling - Extension of the COST 2100 Model paper_content: As massive MIMO is currently considered a leading 5G technology candidate, channel models that capture important massive MIMO channel characteristics are urgently needed. In this paper we present an attempt for massive MIMO channel modeling based on measurement campaigns at 2.6 GHz in both outdoor and indoor environments, using physically-large arrays and with closely-spaced users. The COST 2100 MIMO channel model is adopted as a general framework. We discuss modeling approaches and scopes for massive MIMO, based on which we suggest extensions to the COST 2100 model. The extensions include 3D propagation, polarization, cluster behavior at the base station side for physically-large arrays, and multi-path component gain functions for closely-spaced users. Model parameters for these extensions in massive MIMO scenarios are reported. Initial validation against the measurements is also performed, which shows that the model is capable of reproducing the channel statistics in terms of temporal behavior of the user separability, singular value spread and sum-rate/capacity. --- paper_title: Massive MIMO in Real Propagation Environments: Do All Antennas Contribute Equally? paper_content: Massive MIMO can greatly increase both spectral and transmit-energy efficiency. This is achieved by allowing the number of antennas and RF chains to grow very large. However, the challenges include high system complexity and hardware energy consumption. Here we investigate the possibilities to reduce the required number of RF chains, by performing antenna selection. While this approach is not a very effective strategy for theoretical independent Rayleigh fading channels, a substantial reduction in the number of RF chains can be achieved for real massive MIMO channels, without significant performance loss.
We evaluate antenna selection performance on measured channels at 2.6 GHz, using a linear and a cylindrical array, both having 128 elements. Sum-rate maximization is used as the criterion for antenna selection. A selection scheme based on convex optimization is nearly optimal and used as a benchmark. The achieved sum-rate is compared with that of a very simple scheme that selects the antennas with the highest received power. The power-based scheme gives performance close to the convex optimization scheme, for the measured channels. This observation indicates a potential for significant reductions of massive MIMO implementation complexity, by reducing the number of RF chains and performing antenna selection using simple algorithms. --- paper_title: Angular power distribution and mean effective gain of mobile antenna in different propagation environments paper_content: We measured the elevation angle distribution and cross-polarization power ratio of the incident power at a mobile station in different radio propagation environments at 2.15 GHz frequency. A novel measurement technique was utilized, based on a wideband channel sounder and a spherical dual-polarized antenna array at the receiver. Data were collected over 9 km of continuous measurement routes, both indoor and outdoor. Our results show that in non-line-of-sight situations, the power distribution in elevation has a shape of a double-sided exponential function, with different slopes on the negative and positive sides of the peak. The slopes and the peak elevation angle depend on the environment and base-station antenna height. The cross-polarization power ratio varied within 6.6 and 11.4 dB, being lowest for indoor and highest for urban microcell environments. We applied the experimental data for analysis of the mean effective gain (MEG) of several mobile handset antenna configurations, with and without the user's head. The obtained MEG values varied from approximately -5 dBi in free space to less than -11 dBi beside the head model. These values are considerably lower than what is typically used in system specifications. The result shows that considering only the maximum gain or total efficiency of the antenna is not enough to describe its performance in practical operating conditions. For most antennas, the environment type has little effect on the MEG, but clear differences exist between antennas. The effect of the user's head on the MEG depends on the antenna type and on which side of the head the user holds the handset. --- paper_title: Elevation Angle Characteristics of Urban Wireless Propagation Environment at 3.5 GHz paper_content: It is known that models of the propagation characteristics are crucial in the research and evaluation of three-dimensional multiple-input multiple-output (3D-MIMO) techniques, especially the elevation angle (EA) model. In this paper, the results of 3D-MIMO channel measurement in the urban macro-cell (UMA) scenario at 3.5 GHz are presented. Based on the measurement data, it is found that the mean elevation angle has an offset relative to the line-of-sight (LOS) angle, especially in the non-line-of-sight (NLOS) situation. The offset angle depends on the azimuth distance between BS and MS and can be modeled as a function that uses distance as a parameter. A novel way to model the mean elevation angle is proposed at the end of this paper.
The relation between circular angular spread (CAS) and distance is also studied; the measurement results show that the effects caused by distance can be neglected. --- paper_title: Correlation Properties of Large Scale Parameters from 2.66 GHz Multi-Site Macro Cell Measurements paper_content: Multi-site measurements for urban macro cells at 2.66 GHz are performed with three base stations and one mobile station. In order to analyze the correlation properties of large scale parameters, we split up the routes into subsets, where it can be assumed that wide-sense stationarity (WSS) applies. The autocorrelation distance and correlation properties of large scale parameters for each link are analyzed. By comparing these properties with the corresponding parameters from the COST 2100 and WINNER II models, we can see that the measured autocorrelation distance of the shadow fading as well as the autocorrelation distance of delay spread have similar properties as in the two models. The shadow fading and delay spread from the same link are negatively correlated and match the two models well. Based on the WSS subsets, we can see that large scale parameters for different links can be correlated, also when two BSs are far away from each other. In those cases the different links tend to be positively correlated when both base stations are in the same direction compared to the movement of the MS; otherwise the two links are usually negatively correlated. --- paper_title: A Radio Channel Sounder for Mobile Millimeter-Wave Communications: System Implementation and Measurement Assessment paper_content: We describe a state-of-the-art channel sounder to support channel-model development for mobile millimeter-wave (mm-wave) communications. The system can measure the complex amplitude, delay, and angle of arrival of the multipath components of indoor and outdoor channels. Specifically, a custom multiplexer (MUX) records the channel impulse response across a 16-element receive (RX) antenna array in 65.5 μs, while the channel is static. The delay resolution of the system is 1 ns and, because the elements are oriented in a 3-D space, both azimuth and elevation angles can be extracted. The robust link budget, comprising high-gain directional RX antennas, enables indoor link measurement beyond 150 m in line-of-sight and 20 m in non-line-of-sight conditions. The RX array is mounted on a location-aware robot, which is battery operated. Combined with the speed of the MUX, untethered acquisition of mobile-channel data is possible. To the best of our knowledge, this paper contributes the first sounder that is capable of mobile measurements at mm-wave frequencies. The hardware implementation of a functional 83.5-GHz system is described in this paper, and some illustrative results, including small-scale statistics and Doppler, are presented. --- paper_title: Wireless Device-to-Device Caching Networks: Basic Principles and System Performance paper_content: As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is the asynchronous content reuse, such that a few popular files account for a large part of the traffic but are viewed by users at different times.
Caching of content on wireless devices in conjunction with device-to-device (D2D) communications allows to exploit this property, and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somehow surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to provide those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains. --- paper_title: Environment Induced Shadowing of Urban Millimeter-Wave Access Links paper_content: In this letter, we investigate how environment induced shadowing influences a 28.5 GHz access link in an urban open square scenario. Shadowing is caused by cars, buses, lorries, and pedestrians passing between the small cell base station and the mobile user terminal. We isolate typical single shadowing events and provide a model for their description. We also present statistical evaluations that can be used as input to 5G access channel models. --- paper_title: Wideband Millimeter-Wave Propagation Measurements and Channel Models for Future Wireless Communication System Design paper_content: The relatively unused millimeter-wave (mmWave) spectrum offers excellent opportunities to increase mobile capacity due to the enormous amount of available raw bandwidth. This paper presents experimental measurements and empirically-based propagation channel models for the 28, 38, 60, and 73 GHz mmWave bands, using a wideband sliding correlator channel sounder with steerable directional horn antennas at both the transmitter and receiver from 2011 to 2013. More than 15,000 power delay profiles were measured across the mmWave bands to yield directional and omnidirectional path loss models, temporal and spatial channel models, and outage probabilities. Models presented here offer side-by-side comparisons of propagation characteristics over a wide range of mmWave bands, and the results and models are useful for the research and standardization process of future mmWave systems. 
Directional and omnidirectional path loss models with respect to a 1 m close-in free space reference distance over a wide range of mmWave frequencies and scenarios using directional antennas in real-world environments are provided herein, and are shown to simplify mmWave path loss models, while allowing researchers to globally compare and standardize path loss parameters for emerging mmWave wireless networks. A new channel impulse response modeling framework, shown to agree with extensive mmWave measurements over several bands, is presented for use in link-layer simulations, using the observed fact that spatial lobes contain multipath energy that arrives at many different propagation time intervals. The results presented here may assist researchers in analyzing and simulating the performance of next-generation mmWave wireless networks that will rely on adaptive antennas and multiple-input and multiple-output (MIMO) antenna systems. --- paper_title: A statistical model for the shadowing induced by human bodies in the proximity of a mmWaves radio link paper_content: 5G technology is a broad concept that describes the envisaged disruptive evolution of communication technology in the near future, with a dramatic increase in the network data-rate and capacity to support a variety of innovative services. The exploitation of the new spectrum available at mmWaves represents a key enabler for 5G. mmWaves are expected to revolutionize the indoor wireless connectivity providing a very large capacity at very high data rates. It is well known that mmWaves radio links are strongly influenced by human bodies and this issue is very relevant in indoor environments. Several models are available for ray-tracing investigations and line-of-sight blockage, whereas statistical models enabling tractable analytical studies and simulations of mmWaves wireless systems, accounting for people in the link's proximity, are still lacking. In this paper, measurements of 60 GHz channel impulse responses in static but “evolutionary” office scenarios that involve one, two and three individuals are presented. Regression fits are applied to the experimental responses to obtain an accurate characterization of human-induced shadowing events in both proximity and blockage situations. Tractable statistical models are provided for different scenarios and for eight carrier frequencies spanning the bands from 54 to 59 GHz and from 61 to 66 GHz. --- paper_title: Exploiting directionality for millimeter-wave wireless system improvement paper_content: This paper presents directional and omnidirectional RMS delay spread statistics obtained from 28 GHz and 73 GHz ultrawideband propagation measurements carried out in New York City using a 400 Megachips per second broadband sliding correlator channel sounder and highly directional steerable horn antennas. The 28 GHz measurements did not systematically seek the optimum antenna pointing angles and resulted in 33% outage for 39 T-R separation distances within 200 m. The 73 GHz measurements systematically found the best antenna pointing angles and resulted in 14.3% outage for 35 T-R separation distances within 200 m, all for mobile height receivers. Pointing the antennas to yield the strongest received power is shown to significantly reduce RMS delay spreads in line-of-sight (LOS) environments. 
A new term, distance extension exponent (DEE) is defined, and used to mathematically describe the increase in coverage distance that results by combining beams from angles with the strongest received power at a given location. These results suggest that employing directionality in millimeter-wave communications systems will reduce inter-symbol interference, improve link margin at cell edges, and enhance overall system performance. --- paper_title: Directional multipath propagation characteristics based on 28GHz outdoor channel measurements paper_content: Propagation characteristics of millimeter-wave are being widely studied for utilization of the fifth generation (5G) mobile communication systems. To overcome the severe path loss on the high frequency band, highly directive antennas or beamforming techniques using large array antennas can be used to establish a reliable communication link between a transmitter and a receiver. In this paper, a recently conducted outdoor channel measurement using directional horn antenna in the 28GHz frequency band is introduced. The measurement campaign has been conducted in typical urban low-rise and very high-rise environments in Korea. To investigate directional multipath characteristics, large-scale parameters such as root mean square (RMS) delay spread and angular spread of arrival corresponding to the beamwidth of antenna are extracted. Finally, we derived empirical models based on our measurement results to estimate these large-scale parameters. --- paper_title: Spatially consistent pathloss modeling for millimeter-wave channels in urban environments paper_content: This paper consider a fundamental issue of pathloss modeling in urban environments, namely the spatial consistency of the model as the mobile station (MS) moves along a trajectory through street canyons. We show that the traditional model of power law pathloss plus lognormally distributed variations can provide misleading results that can have serious implications for system simulations. Rather, the pathloss coefficient has to be modeled as a random variable that changes from street to street, and is also a function of the street orientation. Variations of the channel gain, taken over the ensemble of the whole cell (or multiple cells) thus consist of the compound effect of these pathloss coefficient variations together with traditional shadowing variations along the trajectory of movement. Ray tracing results demonstrate that ignoring this effect can lead to a severe overestimation of shadowing standard deviation. While the effect is irrelevant for “drop-based” simulations, it can have critical impact on system simulations that require spatial consistency for large-scale movement, such as most mm-wave systems. --- paper_title: An updated model for millimeter wave propagation in moist air paper_content: A practical atmospheric Millimeter-Wave Propagation Model (MPM) is formulated that predicts attenuation. delay, and noise properties of moist air for frequencies up to 1000 GHz. Input variables are height distributions (0-30 km) of pressure, temperature, humidity, and suspended droplet concentration along an anticipated radio path. Spectroscopic data consist of more than 450 parameters describing local O2 and H2O absorption lines complemented by continuum spectra for dry air, water vapor, and hydrosols. For a model (MPM*) limited to frequencies below 300 GHz, the number of spectroscopic parameters can be reduced to less than 200. 
Recent laboratory measurements by us at 138 GHz of absolute attenuation rates for simulated air with water vapor pressures up to saturation allow the formulation of an improved, though empirical, water vapor continuum. Model predictions are compared with selected (2.5-430 GHz) data from both laboratory and field experiments. In general, good agreement is obtained. --- paper_title: Wireless Communications paper_content: "Professor Andreas F. Molisch, renowned researcher and educator, has put together the comprehensive book, Wireless Communications. The second edition, which includes a wealth of new material on important topics, ensures the role of the text as the key resource for every student, researcher, and practitioner in the field." (Professor Moe Win, MIT, USA). Wireless communications has grown rapidly over the past decade from a niche market into one of the most important, fast moving industries. Fully updated to incorporate the latest research and developments, Wireless Communications, Second Edition provides an authoritative overview of the principles and applications of mobile communication technology. The author provides an in-depth analysis of current treatment of the area, addressing both the traditional elements, such as Rayleigh fading, BER in flat fading channels, and equalisation, and more recently emerging topics such as multi-user detection in CDMA systems, MIMO systems, and cognitive radio. The dominant wireless standards, including cellular, cordless and wireless LANs, are discussed. Topics featured include: wireless propagation channels, transceivers and signal processing, multiple access and advanced transceiver schemes, and standardised wireless systems. Combines mathematical descriptions with intuitive explanations of the physical facts, enabling readers to acquire a deep understanding of the subject. Includes new chapters on cognitive radio, cooperative communications and relaying, video coding, 3GPP Long Term Evolution, and WiMax; plus significant new sections on multi-user MIMO, 802.11n, and information theory. Companion website featuring: supplementary material on 'DECT', solutions manual and presentation slides for instructors, appendices, list of abbreviations and other useful resources. --- paper_title: Wideband characterisation of the propagation channel for outdoors at 60 GHz paper_content: This article presents a model for the analysis of the propagation channel for mobile communications in urban streets at the millimetre-wave band. A ray-tracing tool, based on geometrical optics, is used to calculate the impulse response at the mobile, when it moves along the street; the scenario is defined by the ground and wall surfaces, and may have obstructing objects within it. A comparison of the statistical distributions of the usual impulse response parameters resulting from the simulations with measured ones shows a good agreement. Further simulations show that the dimensions of the street and the roughness of its walls, the existence of trees, and other factors influence the average impulse response. --- paper_title: An Empirical Study of Urban Macro Propagation at 10, 18 and 28 GHz paper_content: This paper investigates the propagation characteristics of the urban macro cells at centimeter-wave (cmWave) frequencies, in particular at 10, 18 and 28 GHz. The measurements are performed at several transmitter (Tx) locations and heights, in both line-of-sight (LOS) and non line-of-sight (NLOS) conditions, and with distances up to 1,400 m.
The distance-dependent mean path loss and shadow fading standard deviation (std) are extracted for all cases based on a single-slope path loss model, and offered here for quick determination of link budget and system capacity. The results show the potential usage of the cmWave band for mobile cellular services in the years to come: the NLOS path loss slopes at 10 and 18 GHz are not much different from the 2 GHz reference, and the corresponding offsets are in the order of 20-23 dB for 25 m Tx height. This gap is expected to be overcome by the usage of high-gain miniaturized steerable antennas, which is feasible due to the reduced antenna aperture size at the cmWave band. Similar to the 2 GHz band, the NLOS shadow fading std for cmWave is within 6 dB. The effect of Tx height is clearly shown in the NLOS scenario: at 10 GHz, for example, 7.5 dB reduction in attenuation could be achieved by raising the Tx antenna from 15 m (below average roof-top) to 25 m (above roof-top), or 23.4 dB if the Tx height is elevated to 54 m. --- paper_title: Quasi-deterministic millimeter-wave channel models in MiWEBA paper_content: This article introduces a quasi-deterministic channel model and a link level-focused channel model, developed with a focus on millimeter-wave outdoor access channels. Channel measurements in an open square scenario at 60 GHz are introduced as a basis for the development of the model and its parameterization. The modeling approaches are explained, and their specific area of application is investigated. --- paper_title: Proposal on Millimeter-Wave Channel Modeling for 5G Cellular System paper_content: This paper presents 28 GHz wideband propagation channel characteristics for millimeter wave (mmWave) urban cellular communication systems. The mmWave spectrum is considered as a key-enabling feature of 5G cellular communication systems to provide an enormous capacity increment; however, mmWave channel models are lacking today. The paper compares measurements conducted with a spherical scanning 28 GHz channel sounder system in the urban street-canyon environments of Daejeon, Korea and NYU campus, Manhattan, with ray-tracing simulations made for the same areas. Since such scanning measurements are very costly and time-intensive, only a relatively small number of channel samples can be obtained. The measurements are thus used to quantify the accuracy of a ray-tracer; the ray-tracer is subsequently used to obtain a large number of channel samples to fill gaps in the measurements. A set of mmWave radio propagation parameters is presented based on both the measurement results and ray-tracing, and the corresponding channel models following the 3GPP spatial channel model (SCM) methodology are also described. --- paper_title: Directional Analysis of Measured 60 GHz Indoor Radio Channels Using SAGE paper_content: Directional properties of the radio channel are of high importance for the development of reliable wireless systems operating in the 60 GHz frequency band. Using transfer functions measured from 61 to 65 GHz in a conference room we have extracted estimates of the multi-path component parameters using the SAGE algorithm. In the paper we compare results for line-of-sight (LOS) scenarios and the corresponding non-line-of-sight (NLOS) scenarios and present values of the direction spread at the Tx and the Rx.
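Several of the abstracts above and below fit measured data to single-slope path loss models, including the close-in (CI) free-space reference distance form compared against the ABG and CIF models in a later entry. The sketch below is only a minimal illustration of how such a model is evaluated and is not taken from any cited paper; the path loss exponent n = 3.0 and the 6 dB shadowing standard deviation are placeholder values, not measured results from any campaign.

```python
import numpy as np

def fspl_1m_db(freq_ghz: float) -> float:
    """Friis free-space path loss at the 1 m close-in reference distance, in dB."""
    c = 3e8  # speed of light in m/s
    return 20.0 * np.log10(4.0 * np.pi * 1.0 * freq_ghz * 1e9 / c)

def ci_path_loss_db(d_m, freq_ghz, n, sigma_db=0.0, rng=None):
    """Close-in reference model: PL(d) = FSPL(f, 1 m) + 10 n log10(d) + X_sigma, d in meters."""
    d_m = np.asarray(d_m, dtype=float)
    shadowing = rng.normal(0.0, sigma_db, d_m.shape) if rng is not None else 0.0
    return fspl_1m_db(freq_ghz) + 10.0 * n * np.log10(d_m) + shadowing

# Placeholder example at 28 GHz: n and sigma are illustrative, not fitted values.
d = np.array([10.0, 50.0, 100.0, 500.0])
print(np.round(ci_path_loss_db(d, freq_ghz=28.0, n=3.0, sigma_db=6.0,
                               rng=np.random.default_rng(1)), 1))
```

The first term is fixed by physics (about 61.4 dB at 28 GHz for a 1 m reference), so only the exponent n and the shadowing spread need to be fitted to data, which is the parameter-stability argument made in the CI/CIF abstract below.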
--- paper_title: Propagation in ROF road-vehicle communication system using millimeter wave paper_content: Radio on fiber (ROF) technology has received attention due to advantages such as multimode service for intelligent transport systems (ITS) road-vehicle- communication. For this system, using millimeter wave has the advantages of its large capacity and multi-transmission capability. We measured the effect of shadowing by another vehicle in case of road vehicle communication system and the effect of position of mobile station antennas in 36 GHz band. --- paper_title: 5G 3GPP-like Channel Models for Outdoor Urban Microcellular and Macrocellular Environments paper_content: For the development of new 5G systems to operate in bands up to 100 GHz, there is a need for accurate radio propagation models at these bands that currently are not addressed by existing channel models developed for bands below 6 GHz. This document presents a preliminary overview of 5G channel models for bands up to 100 GHz. These have been derived based on extensive measurement and ray tracing results across a multitude of frequencies from 6 GHz to 100 GHz, and this document describes an initial 3D channel model which includes: 1) typical deployment scenarios for urban microcells (UMi) and urban macrocells (UMa), and 2) a baseline model for incorporating path loss, shadow fading, line of sight probability, penetration and blockage models for the typical scenarios. Various processing methodologies such as clustering and antenna decoupling algorithms are also presented. --- paper_title: Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications paper_content: This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha-betagamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss exponent (CIF). Each of these models has been recently studied for use in standards bodies such as 3rd Generation Partnership Project (3GPP) and for use in the design of fifth-generation wireless systems in urban macrocell, urban microcell, and indoor office and shopping mall scenarios. Here, we compare the accuracy and sensitivity of these models using measured data from 30 propagation measurement data sets from 2 to 73 GHz over distances ranging from 4 to 1238 m. A series of sensitivity analyses of the three models shows that the physically based two-parameter CI model and three-parameter CIF model offer computational simplicity, have very similar goodness of fit (i.e., the shadow fading standard deviation), exhibit more stable model parameter behavior across frequencies and distances, and yield smaller prediction error in sensitivity tests across distances and frequencies, when compared to the four-parameter ABG model. Results show the CI model with a 1-m reference distance is suitable for outdoor environments, while the CIF model is more appropriate for indoor modeling. The CI and CIF models are easily implemented in existing 3GPP models by making a very subtle modification — by replacing a floating non-physically based constant with a frequency-dependent constant that represents free-space path loss in the first meter of propagation. 
This paper shows this subtle change does not change the mathematical form of existing ITU/3GPP models and offers much easier analysis, intuitive appeal, better model parameter stability, and better accuracy in sensitivity tests over a vast range of microwave and mmWave frequencies, scenarios, and distances, while using a simpler model with fewer parameters. --- paper_title: Propagation measurements and simulations for millimeter-wave mobile access in a busy urban environment paper_content: The authors have performed 60 GHz wideband channel measurements in Berlin to gain knowledge on millimeter-wave outdoor propagation in dense urban environments. In this paper first results for a street canyon are presented, focusing on path loss analysis of the line-of-sight-dominant small-cell access channel. They reveal that the local area path loss is very close to free-space propagation. Reflected paths do not contribute significantly to the received power as long as the line of sight (LOS) is unobstructed. However, they are usually strong enough to maintain a link if the LOS is blocked. The dominant multipath contributions can also be predicted by ray tracing simulations, though further calibration is needed to accurately determine their strength. --- paper_title: Wideband spatial channel model in an urban cellular environments at 28 GHz paper_content: This paper presents channel propagation measurements and analysis of the channel characteristics of millimeter wave (mmWave) transmission for urban cellular communication systems, in particular in the promising 28 GHz band. For channel propagation analysis, the urban measurement campaign was conducted with a synchronously spherical scanning 28 GHz channel sounder system, from which omni-like channel measurements are obtained for channel modeling. From the measurements, we analyze the spatio-temporal channel characteristics such as multipath delay, angular statistics, and pathloss. The clustering analysis has been done including its power distribution. Then, a set of millimeter wave radio propagation parameters is presented, and the corresponding channel models based on the 3GPP spatial channel model (SCM) are also described. --- paper_title: Frequency-Agile Pathloss Models for Urban Street Canyons paper_content: Frequency-agile pathloss models for urban street canyons are discussed in this paper. The models are floating intercept (FI), fixed reference (FR), and ITU-R M.2135 urban microcellular (UMi) line-of-sight (LOS) and Manhattan-grid non-LOS (NLOS) models. These models are parameterized based on channel sounding campaigns in three cities covering radio frequencies ranging from 0.8 to 60 GHz. Fitting the models with measured pathloss reveals that the models are usable to cover the considered frequency range. The FI and FR models are equally simple and robust, with a slight advantage of the FI model in accuracy because of the larger number of model parameters. The original M.2135 LOS model is based on a two-ray model that includes a break point (BP).
The model is extended for a better fit with measurements by including new model parameters such as a pathloss offset and a BP scaling factor that represent local scattering conditions of surrounding environments. The new model parameters are found frequency dependent in many cases. The original M.2135 model is furthermore simplified in NLOS scenarios while maintaining the model accuracy. The model parameters are derived using maximum likelihood estimation, which also showed that the modified M.2135 model offers up to 50% better accuracy compared to the FI and FR models in terms of the employed log-likelihood function (LLF). The improvement in accuracy is particularly remarkable in NLOS scenarios. A full set of parameters is provided for the models, allowing a choice for any given requirements on accuracy and complexity. Finally, applicability of the proposed models to other street canyons is discussed using independent pathloss measurements. --- paper_title: 73 GHz Wideband Millimeter-Wave Foliage and Ground Reflection Measurements and Models paper_content: This paper presents 73 GHz wideband outdoor foliage and ground reflection measurements. Propagation measurements were made with a 400 Megachips-per-second sliding correlator channel sounder, with rotatable 27 dBi (7° half-power beamwidth) horn antennas at both the transmitter and receiver, to study foliage-induced scattering and de-polarization effects, to assist in developing future wireless systems that will use adaptive array antennas. Signal attenuation through foliage was measured to be 0.4 dB/m for both co- and cross-polarized antenna configurations. Measured ground reflection coefficients for dirt and gravel ranged from 0.02 to 0.34, for incident angles ranging from 60° to 81° (with respect to the normal incidence of the surface). These data are useful for link budget design and site-specific (ray-tracing) models for future millimeter-wave communications systems. --- paper_title: Centimeter and millimeter wave attenuation and brightness temperature due to atmospheric oxygen and water vapor paper_content: Calculated atmospheric absorption and emission of radiation for several atmospheric conditions and elevation angles are given for frequencies of 1 to 340 GHz. The emission curves are those accepted in 1982 by the International Radio Consultative Committee (CCIR). The absorption values from these calculations are compared with others appearing in the literature or in use elsewhere. --- paper_title: Study of the local multipoint distribution service radio channel paper_content: Millimeter wave communication systems in the 21.5 to 29.5 GHz band are being developed in the United States and Canada for use in a local multipoint distribution service (LMDS). This paper summarizes radiowave propagation impairments for the LMDS and reports measurement data for small cells. Results include area coverage estimates over a range of basic transmission losses for 0.5-, 1.0- and 2.0-km suburban cells with foliated trees. Multipath, signal attenuation, depolarization, and cell to cell coverage also are discussed. Data indicates a high probability of non-line-of-sight paths due to trees which can cause signal attenuation and signal variability when wind is present. Signal variability was studied using k factors and compared to the Rician cumulative distribution function. Depolarization caused by vegetation and other signal scatterers was found to be an order of magnitude greater than rain-induced depolarization. 
A simple tapped delay line model is presented to describe multipath for three channel states. --- paper_title: 28 GHz channel measurements and modeling in a ski resort town in pyeongchang for 5G cellular network systems paper_content: In this paper, radio propagation measurements and analysis are presented to investigate the channel characteristics of millimeter wave (mmWave) transmission for outdoor cellular communication systems. The measurement campaign is performed in a ski resort town placed in Pyeongchang, South Korea, which is the venue of the coming Winter Olympics in 2018. A 28 GHz synchronous channel sounder system is used in conducting the channel measurements. From the measurements, the spatio-temporal channel characteristics such as path loss, multipath delay, angular statistics are analyzed. The mmWave radio propagation parameters and the corresponding channel models based on the 3GPP spatial channel model (SCM) are also presented. Since a trial service is planned during the 2018 Pyeongchang Winter Olympic Games, this research is significant to demonstrate the emerging 5G technologies. Additionally, blocking tests, which measure the path loss variation when there are some obstacles, are discussed in this paper. --- paper_title: A Statistical Spatio-Temporal Radio Channel Model for Large Indoor Environments at 60 and 70 GHz paper_content: Millimeter-wave radios operating at unlicensed 60 GHz and licensed 70 GHz bands are attractive solutions to realize short-range backhaul links for flexible wireless network deployment. We present a measurement-based spatio-temporal statistical channel model for short-range millimeter-wave links in large office rooms, shopping mall, and station scenarios. Channel sounding in these scenarios at 60 and 70 GHz revealed that spatio-temporal channel characteristics of the two frequencies are similar, making it possible to use an identical channel model framework to cover the radio frequencies and scenarios. The sounding also revealed dominance of a line-of-sight and specular propagation paths over diffuse scattering because of weak reverberation of propagating energy in the scenarios. The main difference between 60 and 70 GHz channels lies in power levels of the specular propagation paths and diffuse scattering which affect their visibility over the noise level in the measurements, and the speed of power decay as the propagation delay increases. Having defined the channel model framework, a set of model parameters has been derived for each scenario at the two radio frequencies. After specifying the implementation recipe of the proposed channel model, channel model outputs are compared with the measurements to show validity of the channel model framework and implementation. Validity was demonstrated through objective parameters, i.e., pathloss and root-mean-square delay spread, which were not used as defining parameters of the channel model. --- paper_title: On mm-Wave Multipath Clustering and Channel Modeling paper_content: Efficient and realistic mm-wave channel models are of vital importance for the development of novel mm-wave wireless technologies. Though many of the current 60 GHz channel models are based on the useful concept of multipath clusters, only a limited number of 60 GHz channel measurements have been reported in the literature for this purpose. Therefore, there is still a need for further measurement based analyses of multipath clustering in the 60 GHz band. 
This paper presents clustering results for a double-directional 60 GHz MIMO channel model. Based on these results, we derive a model which is validated with measured data. Statistical cluster parameters are evaluated and compared with existing channel models. It is shown that the cluster angular characteristics are closely related to the room geometry and environment, making it infeasible to model the delay and angular domains independently. We also show that when using ray tracing to model the channel, it is insufficient to only consider walls, ceiling, floor and tables; finer structures such as ceiling lamps, chairs and bookshelves need to be taken into account as well. --- paper_title: Indoor mm-Wave Channel Measurements: Comparative Study of 2.9 GHz and 29 GHz paper_content: The millimeter-wave (mm-Wave) frequency band ~30-300 GHz has received significant attention lately as a prospective band for 5G systems. Millimeter-wave frequencies have traditionally been used for backhaul, satellite and other fixed services. While these bands offer substantial amount of bandwidth and opportunity for spatial multiplexing, the propagation characteristics for terrestrial mobile usage need to be fully understood prior to system design. Towards this end, this paper presents preliminary indoor measurement results obtained using a channel sounder equipped with omni- and directional antennas at 2.9 GHz and 29 GHz as a comparative study of the two bands. The measurements are made within a Qualcomm building in Bridgewater, NJ, USA, for two separate floors, each representing a different yet representative type of office plan. We present measurements and estimated parameters for path loss, excess delay, RMS delay and analyze the power profile of received paths. In addition, we present several spherical scans of particular links to illustrate the 3-D angular spread of the received paths. This work represents initial results of an ongoing effort for comprehensive indoor and outdoor channel measurements. The measurements presented here, along with cited references, offer interesting insights into propagation conditions (e.g. loss, delay/angular spread etc.), coverage and robustness for mobile use of millimeter-wave bands. We believe additional extensive measurement campaigns in diverse settings by academia and industry would help facilitate the generation of usable channel models. --- paper_title: Indoor 5G 3GPP-like Channel Models for Office and Shopping Mall Environments paper_content: Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. 
These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development. --- paper_title: Experimental investigations of 60 GHz WLAN systems in office environment paper_content: This paper presents the results of an experimental investigation of 60 GHz wireless local area network (WLAN) systems in an office environment. The measurement setup with highly directional mechanically steerable antennas and 800 MHz bandwidth was developed and experiments were performed for conference room and cubicle environments. Measurement results demonstrate that the 60 GHz propagation channel is quasioptical in nature and received signal power is obtained through line of sight (LOS) and reflected signal paths of the first and second orders. The 60 GHz WLAN system prototype using steerable directional antennas with 18 dB gain was able to achieve about 30 dB baseband SNR for LOS transmission, about 15-20 dB for communications through the first-order reflected path, and 2-6 dB SNR when using second-order reflection for the office environments. The intra cluster statistical parameters of the propagation channel were evaluated and a statistical model for reflected clusters is proposed. Experimental results demonstrating strong polarization impact on the characteristics of the propagation channel are presented. Cross-polarization discrimination (XPD) of the propagation channel was estimated as approximately 20 dB for LOS transmission and 10-20 dB for NLOS reflected paths. --- paper_title: Experimental Multipath-Cluster Characteristics of 28-GHz Propagation Channel paper_content: In this paper, a channel measurement campaign is introduced, which utilizes direction-scan-sounding to capture the spatial characteristics of 28-GHz wave propagation channels with 500-MHz sounding bandwidth in office environments. Both line-of-sight and non-line-of-sight scenarios were considered. Measurements were performed by fixing a transmit pyramidal horn antenna, and rotating another one in the receiver site at 10° steps in azimuth. The antenna outputs are viewed as array signals, and a space-alternating generalized expectation-maximization (SAGE) algorithm is applied to estimate delay and angular parameters of multipath components. Benefiting from high resolution achieved by using the SAGE and deembedding of antenna radiation pattern and system responses, more multipath clusters with less spreads in delay and azimuth are found per channel compared with existing works on 28-GHz propagation. The statistics of channel parameters extracted here constitute a preliminary stochastic multipath-cluster spatial channel model. 
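Several of the indoor measurement entries above report small-scale statistics such as mean excess delay and RMS delay spread of the measured power delay profiles. The short sketch below shows how those two standard moments are computed from a power delay profile; it is an illustrative aside only, and the tap delays and powers are made-up sample values rather than data from any of the cited campaigns.

```python
import numpy as np

def delay_spread(delays_ns, powers_linear):
    """Mean excess delay and RMS delay spread of a power delay profile (PDP)."""
    tau = np.asarray(delays_ns, dtype=float)        # multipath delays in ns
    p = np.asarray(powers_linear, dtype=float)      # received powers, linear scale
    mean_excess = np.sum(p * tau) / np.sum(p)       # first moment of the PDP
    second_moment = np.sum(p * tau**2) / np.sum(p)  # second moment of the PDP
    rms = np.sqrt(second_moment - mean_excess**2)   # RMS delay spread
    return mean_excess, rms

# Hypothetical three-tap PDP: a strong LOS tap followed by two weaker reflections
taus = [0.0, 15.0, 40.0]   # ns
pows = [1.0, 0.2, 0.05]    # linear power
print(delay_spread(taus, pows))  # approximately (4.0 ns, 9.2 ns)
```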
--- paper_title: Wideband channel sounder with measurements and model for the 60 GHz indoor radio channel paper_content: A wideband channel sounder and measurement results for the short range indoor 60 GHz channel are presented. The channel sounder is based on a 1 gigasamples/s dual channel arbitrary waveform generator and A/D converter/software demodulator, which synthesize and detect a baseband PN sequence with 500 MHz bandwidth. A heterodyne transmitter and receiver translate the baseband PN sequence to and from the 60 GHz band. Ten channel measurements taken across the 59 GHz to 64 GHz range are concatenated to provide a continuous channel measurement covering 5 GHz of bandwidth, resulting in 0.2 ns time domain channel impulse response resolution. The dynamic range and maximum sensitivity performance of the channel sounder are discussed in detail. Comparisons of results with a vector network analyzer based system are shown to verify the accuracy of the sounder. In an extensive measurement campaign with vertically polarized omnidirectional antennas, several different rooms (offices, labs, conference rooms and others) in four different buildings have been investigated. Over 700 channel measurements are the basis for a comprehensive characterization of the short range 60 GHz indoor radio channel with omnidirectional antennas. Finally, a simple stochastic static multipath channel model is derived from the measurement results. --- paper_title: Frequency-domain measurement of 60 GHz indoor channels: a measurement setup, literature data, and analysis paper_content: Despite the unique capability of 60-GHz technology to offer a multi-gigabit rate and a huge unlicensed bandwidth (up to 7 GHz), a number of technical challenges need to be overcome before its full deployment. The system performance of capacity, coverage, and throughput need to be well understood. All of these are based on characterizing the propagation channel and establishing realistic channel models of wireless systems [1]. Many researchers have reported on propagation studies of indoor channels at 60 GHz using frequency-domain measurements. However, details are scarce in the literature on whether different indoor environments and different frequency-domain measurement setups affect the measurement results in the millimeter-wave frequency band. This article explains the setup details of time resolution, spatial resolution, and windowing, then summarizes and analyzes frequency-domain measurement results selected from important research [2]-[9]. The mean path loss model and the average cumulative distribution function of a root mean squared (rms) delay spread are proposed and compared using measured results from the literature to describe the complete channel characteristics of indoor environments at 60 GHz. --- paper_title: 60 GHz channel directional characterization using extreme size virtual antenna array paper_content: In order to provide reliable knowledge about highly resolved directional properties of the radio propagation channel, straight forward beam-forming has been used in this study. Accurate measurement data based on an extreme size virtual antenna array (25×25×25 = 15625 elements) have been provided for an indoor scenario in both line of sight and non-line if sight conditions. The results indicate that the distinct spikes observed in the power-delay profile are caused mainly by specular reflections. 
There is however also a significant diffuse contribution due to scattering caused by the many smaller objects of the environment. The diffuse paths are spread out in essentially all azimuth and elevation directions except for the empty parts of the floor. --- paper_title: Statistical Characterization of 60-GHz Indoor Radio Channels paper_content: An extensive review of the statistical characterization of 60-GHz indoor radio channels is provided from a large number of published measurement and modeling results. First, the most prominent driver applications for 60 GHz are considered in order to identify those environment types that need to be characterized most urgently. Large-scale fading is addressed yielding path-loss parameter values for a generic 60-GHz indoor channel model as well as for the office environment in particular. In addition, the small-scale channel behavior is reviewed including the modeling of time-of-arrival and angle-of-arrival details and statistical parameters related to delay spread, angular spread and Doppler spread. Finally, some research directions for future channel characterization are given. --- paper_title: Indoor Office Wideband Millimeter-Wave Propagation Measurements and Channel Models at 28 and 73 GHz for Ultra-Dense 5G Wireless Networks paper_content: Ultra-wideband millimeter-wave (mmWave) propagation measurements were conducted in the 28- and 73-GHz frequency bands in a typical indoor office environment in downtown Brooklyn, New York, on the campus of New York University. The measurements provide large-scale path loss and temporal statistics that will be useful for ultra-dense indoor wireless networks for future mmWave bands. This paper presents the details of measurements that employed a 400 Megachips-per-second broadband sliding correlator channel sounder, using rotatable highly directional horn antennas for both co-polarized and crosspolarized antenna configurations. The measurement environment was a closed-plan in-building scenario that included a line-of-sight and non-line-of-sight corridor, a hallway, a cubicle farm, and adjacent-room communication links. Well-known and new single-frequency and multi-frequency directional and omnidirectional large-scale path loss models are presented and evaluated based on more than 14 000 directional power delay profiles acquired from unique transmitter and receiver antenna pointing angle combinations. Omnidirectional path loss models, synthesized from the directional measurements, are provided for the case of arbitrary polarization coupling, aswell as for the specific cases of co-polarized and cross-polarized antenna orientations. The results show that novel large-scale path loss models provided here are simpler and more physically based compared to previous 3GPP and ITU indoor propagation models that require more model parameters and offer very little additional accuracy and lack a physical basis. Multipath time dispersion statistics formmWave systems using directional antennas are presented for co-polarization, crosspolarization, and combined-polarization scenarios, and show that the multipath root mean square delay spread can be reduced when using transmitter and receiver antenna pointing angles that result in the strongest received power. Raw omnidirectional path loss data and closed-form optimization formulas for all path loss models are given in the Appendices. 
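The large-scale path loss entries in this block (the ABG/CI/CIF comparison, the frequency-agile street-canyon models, and the 28/73 GHz indoor office measurements above) all fit variants of a log-distance law. For orientation, the commonly used generic forms of the three model families are written out below; the notation is the conventional one and is not tied to any particular parameter values reported in the cited papers.

```latex
% Close-in (CI) free-space reference distance model, d >= 1 m
\mathrm{PL}^{\mathrm{CI}}(f,d)\,[\mathrm{dB}] = \mathrm{FSPL}(f,1\,\mathrm{m})
  + 10\,n\,\log_{10}\!\bigl(d/1\,\mathrm{m}\bigr) + \chi_{\sigma},
\qquad \mathrm{FSPL}(f,1\,\mathrm{m}) = 20\log_{10}\!\bigl(4\pi f/c\bigr)

% CI model with a frequency-weighted path loss exponent (CIF)
\mathrm{PL}^{\mathrm{CIF}}(f,d)\,[\mathrm{dB}] = \mathrm{FSPL}(f,1\,\mathrm{m})
  + 10\,n\Bigl(1 + b\,\tfrac{f-f_{0}}{f_{0}}\Bigr)\log_{10}\!\bigl(d/1\,\mathrm{m}\bigr) + \chi_{\sigma}

% Alpha-beta-gamma (ABG) model
\mathrm{PL}^{\mathrm{ABG}}(f,d)\,[\mathrm{dB}] = 10\,\alpha\,\log_{10}\!\bigl(d/1\,\mathrm{m}\bigr)
  + \beta + 10\,\gamma\,\log_{10}\!\bigl(f/1\,\mathrm{GHz}\bigr) + \chi_{\sigma}
```

Here n is the path loss exponent, b and f_0 weight the frequency dependence of the exponent, alpha, beta and gamma are the distance slope, floating intercept and frequency slope, and chi_sigma is a zero-mean lognormal shadow fading term; the floating-intercept (FI) model used in the street-canyon study is essentially the ABG form fitted per frequency, that is, without the explicit frequency term.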
--- paper_title: The COST259 Directional Channel Model-Part I: Overview and Methodology paper_content: This paper describes a model for mobile radio channels that includes consideration of directions of arrival and is thus suitable for simulations of the performance of wireless systems that use smart antennas. The model is specified for 13 different types of environments, covering macro- micro- and picocells. In this paper, a hierarchy of modeling concepts is described, as well as implementation aspects that are valid for all environments. The model is based on the specification of directional channel impulse response functions, from which the impulse response functions at all antenna elements can be obtained. A layered approach, which distinguishes between external (fixed), large-scale-, and small-scale- parameters allows an efficient parameterization. Different implementation methods, based on either a tapped-delay line or a geometrical model, are described. The paper also derives the transformation between those two approaches. Finally, the concepts of clusters and visibility regions are used to account for large delay and angular spreads that have been measured. In two companion papers, the environment-specific values of the model parameters are explained and justified. --- paper_title: A twin-cluster MIMO channel model paper_content: The paper presents a new approach to geometry-based stochastic channel models that can be used for simulating MIMO systems. We use twin-clusters to represent multiply reflected or diffracted multipath components. The location of the two twins can be chosen independently, in order to correctly reflect DoAs, DoDs, and delays. The model is thus more accurate than existing single-scatterer approaches. Simulation results, using a publicly available version of our model, show a very realistic behavior of our model. --- paper_title: "Virtual Cell Deployment Areas" and "Cluster Tracing" - new methods for directional channel modeling in microcells paper_content: We propose a new method for the simulation of microcellular propagation channels, which includes both deterministic and stochastic aspects.
First, the geometrical layouts of "typical" deployment areas, called "Virtual Cell Deployment Areas" (VCDA), as well as routes of the mobile station (MS) through the VCDA, are defined. Then, the delays and directions-of-arrival of multipath clusters can be traced by geometrical means. Finally, the small-scale fading within the clusters is modelled stochastically. This approach was adopted by the COST259 DCM, the standard model for directional wireless channels. --- paper_title: Quasi-deterministic millimeter-wave channel models in MiWEBA paper_content: This article introduces a quasi-deterministic channel model and a link level-focused channel model, developed with a focus on millimeter-wave outdoor access channels. Channel measurements in an open square scenario at 60 GHz are introduced as a basis for the development of the model and its parameterization. The modeling approaches are explained, and their specific area of application is investigated. --- paper_title: An ultra-wideband space-variant multipath indoor radio channel model paper_content: We present a generic ultra-wideband (UWB) multipath radio channel model that has been designed for link-level simulations and is suited for indoor environments, e.g. for WPAN applications. It covers the FCC frequency band [FCC, Feb. 2002] and is capable of producing space-variant impulse responses (IR) and transfer functions. The model is of a hybrid type and combines a statistical approach to model dense multipath and a simple quasi-deterministic method to model strong individual echoes. These echoes are important to obtain a realistic spatial (Doppler) behavior. The architecture of the model facilitates the application to UWB MIMO analyses. For a few scenarios (e.g., intra-office and inter-office transmission), site-specific parameter profiles are given that have been derived from measured data. --- paper_title: Scaling up MIMO: Opportunities and challenges with very large arrays paper_content: Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. --- paper_title: Enabling technologies and architectures for 5G wireless paper_content: The proliferation of smart devices and the resulting exponential growth in data traffic has increased the need for higher capacity wireless networks. The cellular systems industry is envisioning an increase in network capacity by a factor of 1000 over the next decade to meet this traffic demand. In addition, with the emergence of Internet of Things (IoT), billions of devices will be connected and managed by wireless networks. Future networks must satisfy the above mentioned requirements with high energy efficiency and at low cost. Hence, the industry attention is now shifting towards the next set of innovations in architecture and technologies that will address capacity and service demands envisioned for 2020, which cannot be met only with the evolution of 4G systems. These innovations are expected to form the so called fifth generation wireless communications systems, or 5G. Candidate 5G solutions include i) higher densification of heterogeneous networks with massive deployment of small base stations supporting various Radio Access Technologies (RATs), ii) use of very large Multiple Input Multiple Output (MIMO) arrays, iii) use of millimeter Wave spectrum where larger wider frequency bands are available, iv) direct device to device (D2D) communication, and v) simultaneous transmission and reception, among others. 
In this paper, we present the main 5G technologies. We also discuss the network and device evolution towards 5G. --- paper_title: Millimeter Wave Mobile Communications for 5G Cellular: It Will Work! paper_content: The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. --- paper_title: Multi-gigabit millimeter wave wireless communications for 5G: from fixed access to cellular networks paper_content: With the formidable growth of various booming wireless communication services that require ever increasing data throughputs, the conventional microwave band below 10 GHz, which is currently used by almost all mobile communication systems, is going to reach its saturation point within just a few years. Therefore, the attention of radio system designers has been pushed toward ever higher segments of the frequency spectrum in a quest for increased capacity. In this article we investigate the feasibility, advantages, and challenges of future wireless communications over the Eband frequencies. We start with a brief review of the history of the E-band spectrum and its light licensing policy as well as benefits/challenges. Then we introduce the propagation characteristics of E-band signals, based on which some potential fixed and mobile applications at the E-band are investigated. In particular, we analyze the achievability of a nontrivial multiplexing gain in fixed point-to-point E-band links, and propose an E-band mobile broadband (EMB) system as a candidate for the next generation mobile communication networks. The channelization and frame structure of the EMB system are discussed in detail. --- paper_title: Massive MIMO in the UL/DL of Cellular Networks: How Many Antennas Do We Need? paper_content: We consider the uplink (UL) and downlink (DL) of non-cooperative multi-cellular time-division duplexing (TDD) systems, assuming that the number N of antennas per base station (BS) and the number K of user terminals (UTs) per cell are large. Our system model accounts for channel estimation, pilot contamination, and an arbitrary path loss and antenna correlation for each link. We derive approximations of achievable rates with several linear precoders and detectors which are proven to be asymptotically tight, but accurate for realistic system dimensions, as shown by simulations. It is known from previous work assuming uncorrelated channels, that as N→∞ while K is fixed, the system performance is limited by pilot contamination, the simplest precoders/detectors, i.e., eigenbeamforming (BF) and matched filter (MF), are optimal, and the transmit power can be made arbitrarily small. We analyze to which extent these conclusions hold in the more realistic setting where N is not extremely large compared to K. 
In particular, we derive how many antennas per UT are needed to achieve η% of the ultimate performance limit with infinitely many antennas and how many more antennas are needed with MF and BF to achieve the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively. --- paper_title: Five Disruptive Technology Directions for 5G paper_content: New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter Wave, Massive-MIMO, smarter devices, and native support to machine-2-machine. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain. --- paper_title: Noncooperative Cellular Wireless with Unlimited Numbers of Base Station Antennas paper_content: A cellular base station serves a multiplicity of single-antenna terminals over the same time-frequency interval. Time-division duplex operation combined with reverse-link pilots enables the base station to estimate the reciprocal forward- and reverse-link channels. The conjugate-transpose of the channel estimates are used as a linear precoder and combiner respectively on the forward and reverse links. Propagation, unknown to both terminals and base station, comprises fast fading, log-normal shadow fading, and geometric attenuation. In the limit of an infinite number of antennas a complete multi-cellular analysis, which accounts for inter-cellular interference and the overhead and errors associated with channel-state information, yields a number of mathematically exact conclusions and points to a desirable direction towards which cellular wireless could evolve. In particular the effects of uncorrelated noise and fast fading vanish, throughput and the number of terminals are independent of the size of the cells, spectral efficiency is independent of bandwidth, and the required transmitted energy per bit vanishes. The only remaining impairment is inter-cellular interference caused by re-use of the pilot sequences in other cells (pilot contamination) which does not vanish with unlimited number of antennas. --- paper_title: Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective paper_content: The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). 
It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems. --- paper_title: Emerging Technologies and Research Challenges for 5G Wireless Networks paper_content: As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose. --- paper_title: Millimeter-wave access and backhauling: the solution to the exponential data traffic increase in 5G mobile communications systems? paper_content: The exponential increase of mobile data traffic requires disrupting approaches for the realization of future 5G systems. In this article, we overview the technologies that will pave the way for a novel cellular architecture that integrates high-data-rate access and backhaul networks based on millimeter-wave frequencies (57-66, 71-76, and 81-86 GHz). We evaluate the feasibility of short- and medium-distance links at these frequencies and analyze the requirements from the transceiver architecture and technology, antennas, and modulation scheme points of view. Technical challenges are discussed, and design options highlighted; finally, a performance evaluation quantifies the benefits of millimeter- wave systems with respect to current cellular technologies. --- paper_title: Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G paper_content: With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. 
These results can be conveniently utilized to guide practical LSAS design for optimal energy/ spectrum efficiency trade-off. Finally, a reference signal design for the hybrid beamform structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G. --- paper_title: Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results paper_content: The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage. --- paper_title: What Will 5G Be? paper_content: What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. --- paper_title: Full duplex techniques for 5G networks: self-interference cancellation, protocol design, and relay selection paper_content: The wireless research community aspires to conceive full duplex operation by supporting concurrent transmission and reception in a single time/frequency channel for the sake of improving the attainable spectral efficiency by a factor of two as compared to the family of conventional half duplex wireless systems. 
The main challenge encountered in implementing FD wireless devices is that of finding techniques for mitigating the performance degradation imposed by self-interference. In this article, we investigate the potential FD techniques, including passive suppression, active analog cancellation, and active digital cancellation, and highlight their pros and cons. Furthermore, the troubles of FD medium access control protocol design are discussed for addressing the problems such as the resultant end-to-end delay and network congestion. Additionally, an opportunistic decode-andforward- based relay selection scheme is analyzed in underlay cognitive networks communicating over independent and identically distributed Rayleigh and Nakagami-m fading channels in the context of FD relaying. We demonstrate that the outage probability of multi-relay cooperative communication links can be substantially reduced. Finally, we discuss the challenges imposed by the aforementioned techniques and a range of critical issues associated with practical FD implementations. It is shown that numerous open challenges, such as efficient SI suppression, high-performance FD MAC-layer protocol design, low power consumption, and hybrid FD/HD designs, have to be tackled before successfully implementing FD-based systems. --- paper_title: Device-to-device communication in 5G cellular networks: challenges, solutions, and future directions paper_content: In a conventional cellular system, devices are not allowed to directly communicate with each other in the licensed cellular bandwidth and all communications take place through the base stations. In this article, we envision a two-tier cellular network that involves a macrocell tier (i.e., BS-to-device communications) and a device tier (i.e., device-to-device communications). Device terminal relaying makes it possible for devices in a network to function as transmission relays for each other and realize a massive ad hoc mesh network. This is obviously a dramatic departure from the conventional cellular architecture and brings unique technical challenges. In such a two-tier cellular system, since the user data is routed through other users? devices, security must be maintained for privacy. To ensure minimal impact on the performance of existing macrocell BSs, the two-tier network needs to be designed with smart interference management strategies and appropriate resource allocation schemes. Furthermore, novel pricing models should be designed to tempt devices to participate in this type of communication. Our article provides an overview of these major challenges in two-tier networks and proposes some pricing schemes for different types of device relaying. --- paper_title: Enabling technologies and architectures for 5G wireless paper_content: The proliferation of smart devices and the resulting exponential growth in data traffic has increased the need for higher capacity wireless networks. The cellular systems industry is envisioning an increase in network capacity by a factor of 1000 over the next decade to meet this traffic demand. In addition, with the emergence of Internet of Things (IoT), billions of devices will be connected and managed by wireless networks. Future networks must satisfy the above mentioned requirements with high energy efficiency and at low cost. 
Hence, the industry attention is now shifting towards the next set of innovations in architecture and technologies that will address capacity and service demands envisioned for 2020, which cannot be met only with the evolution of 4G systems. These innovations are expected to form the so called fifth generation wireless communications systems, or 5G. Candidate 5G solutions include i) higher densification of heterogeneous networks with massive deployment of small base stations supporting various Radio Access Technologies (RATs), ii) use of very large Multiple Input Multiple Output (MIMO) arrays, iii) use of millimeter Wave spectrum where larger wider frequency bands are available, iv) direct device to device (D2D) communication, and v) simultaneous transmission and reception, among others. In this paper, we present the main 5G technologies. We also discuss the network and device evolution towards 5G. --- paper_title: The requirements, challenges, and technologies for 5G of terrestrial mobile telecommunication paper_content: In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented. --- paper_title: Cellular architecture and key technologies for 5G wireless communication networks paper_content: The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. --- paper_title: Five Disruptive Technology Directions for 5G paper_content: New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter Wave, Massive-MIMO, smarter devices, and native support to machine-2-machine. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain. 
--- paper_title: Enabling device-to-device communications in millimeter-wave 5G cellular networks paper_content: Millimeter-wave communication is a promising technology for future 5G cellular networks to provide very high data rate (multi-gigabits-persecond) for mobile devices. Enabling D2D communications over directional mmWave networks is of critical importance to efficiently use the large bandwidth to increase network capacity. In this article, the propagation features of mmWave communication and the associated impacts on 5G cellular networks are discussed. We introduce an mmWave+4G system architecture with TDMA-based MAC structure as a candidate for 5G cellular networks. We propose an effective resource sharing scheme by allowing non-interfering D2D links to operate concurrently. We also discuss neighbor discovery for frequent handoffs in 5G cellular networks. --- paper_title: Toward 5G densenets: architectural advances for effective machine-type communications over femtocells paper_content: Ubiquitous, reliable and low-latency machinetype communication, MTC, systems are considered to be value-adds of emerging 5G cellular networks. To meet the technical and economical requirements for exponentially growing MTC traffic, we advocate the use of small cells to handle the massive and dense MTC rollout. We introduce a novel 3GPP-compliant architecture that absorbs the MTC traffic via home evolved NodeBs, allowing us to significantly reduce congestion and overloading of radio access and core networks. A major design challenge has been to deal with the interference to human-type traffic and the large degree of freedom of the system, due to the unplanned deployments of small cells and the enormous amount of MTC devices. Simulation results in terms of MTC access delay, energy consumption, and delivery rate corroborate the superiority of the proposed working architecture. --- paper_title: Emerging Technologies and Research Challenges for 5G Wireless Networks paper_content: As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose. --- paper_title: Cognitive radio in 5G: a perspective on energy-spectral efficiency trade-off paper_content: A cognitive cellular network, which integrates conventional licensed cellular radio and cognitive radio into a holistic system, is a promising paradigm for the fifth generation mobile communication systems. Understanding the trade-off between energy efficiency, EE, and spectral efficiency, SE, in cognitive cellular networks is of fundamental importance for system design and optimization. This article presents recent research progress on the EE-SE trade-off of cognitive cellular networks. We show how EE-SE trade-off studies can be performed systematically with respect to different architectures, levels of analysis, and capacity metrics. Three representative examples are given to illustrate how EE-SE trade-off analysis can lead to important insights and useful design guidelines for future cognitive cellular networks. 
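The cognitive-radio entry that closes this block is built around the energy efficiency versus spectral efficiency (EE-SE) trade-off. As a minimal point of reference, and not a result from the cited article, the single-link AWGN relations below show why the two metrics pull in opposite directions once transmit power dominates the power budget.

```latex
% Single AWGN link: bandwidth B, transmit power P, circuit power P_c,
% noise power spectral density N_0
\mathrm{SE} = \log_{2}\!\Bigl(1 + \frac{P}{N_{0}B}\Bigr)\ \ [\mathrm{bit/s/Hz}],
\qquad
\mathrm{EE} = \frac{B\,\mathrm{SE}}{P + P_{c}}\ \ [\mathrm{bit/J}]
```

With P_c = 0, SE grows only logarithmically in P while the EE denominator grows linearly, so EE falls monotonically as SE increases; a nonzero P_c moves the EE-optimal operating point to a finite transmit power. This is the basic tension that the cited article studies at the network level for cognitive cellular deployments.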
--- paper_title: Applications of self-interference cancellation in 5G and beyond paper_content: Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond. --- paper_title: Scaling up MIMO: Opportunities and challenges with very large arrays paper_content: Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. --- paper_title: Massive MIMO performance evaluation based on measured propagation data paper_content: Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA) which has a physically large aperture, and a practical uniform cylindrical array (UCA) which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels. --- paper_title: Dynamic Subarrays for Hybrid Precoding in Wideband mmWave MIMO Systems paper_content: Hybrid analog/digital precoding architectures can address the trade-off between achievable spectral efficiency and power consumption in large-scale MIMO systems. This makes it a promising candidate for millimeter wave systems, which require deploying large antenna arrays at both the transmitter and receiver to guarantee sufficient received signal power. Most prior work on hybrid precoding focused on narrowband channels and assumed fully-connected hybrid architectures. 
MmWave systems, though, are expected to be wideband with frequency selectivity. In this paper, a closed-form solution for fully-connected OFDM-based hybrid analog/digital precoding is developed for frequency selective mmWave systems. This solution is then extended to partially-connected but fixed architectures in which each RF chain is connected to a specific subset of the antennas. The derived solutions give insights into how the hybrid subarray structures should be designed. Based on them, a novel technique that dynamically constructs the hybrid subarrays based on the long-term channel characteristics is developed. Simulation results show that the proposed hybrid precoding solutions achieve spectral efficiencies close to that obtained with fully-digital architectures in wideband mmWave channels. Further, the results indicate that the developed dynamic subarray solution outperforms the fixed hybrid subarray structures in various system and channel conditions. --- paper_title: Variable-phase-shift-based RF-baseband codesign for MIMO antenna selection paper_content: We introduce a novel soft antenna selection approach for multiple antenna systems through a joint design of both RF (radio frequency) and baseband signal processing. When only a limited number of frequency converters are available, conventional antenna selection schemes show severe performance degradation in most fading channels. To alleviate those degradations, we propose to adopt a transformation of the signals in the RF domain that requires only simple, variable phase shifters and combiners to reduce the number of RF chains. The constrained optimum design of these shifters, adapting to the channel state, is given in analytical form, which requires no search or iterations. The resulting system shows a significant performance advantage for both correlated and uncorrelated channels. The technique works for both transmitter and receiver design, which leads to the joint transceiver antenna selection. When only a single information stream is transmitted through the channel, the new design can achieve the same SNR gain as the full-complexity system while requiring, at most, two RF chains. With multiple information streams transmitted, it is demonstrated by computer experiments that the capacity performance is close to optimum. --- paper_title: Massive MIMO in the UL/DL of Cellular Networks: How Many Antennas Do We Need? paper_content: We consider the uplink (UL) and downlink (DL) of non-cooperative multi-cellular time-division duplexing (TDD) systems, assuming that the number N of antennas per base station (BS) and the number K of user terminals (UTs) per cell are large. Our system model accounts for channel estimation, pilot contamination, and an arbitrary path loss and antenna correlation for each link. We derive approximations of achievable rates with several linear precoders and detectors which are proven to be asymptotically tight, but accurate for realistic system dimensions, as shown by simulations. It is known from previous work assuming uncorrelated channels, that as N→∞ while K is fixed, the system performance is limited by pilot contamination, the simplest precoders/detectors, i.e., eigenbeamforming (BF) and matched filter (MF), are optimal, and the transmit power can be made arbitrarily small. We analyze to which extent these conclusions hold in the more realistic setting where N is not extremely large compared to K. 
In particular, we derive how many antennas per UT are needed to achieve η% of the ultimate performance limit with infinitely many antennas and how many more antennas are needed with MF and BF to achieve the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively. --- paper_title: Five Disruptive Technology Directions for 5G paper_content: New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter Wave, Massive-MIMO, smarter devices, and native support to machine-2-machine. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain. --- paper_title: Full-dimension MIMO (FD-MIMO) for next generation cellular technology paper_content: This article considers a practical implementation of massive MIMO systems [1]. Although the best performance can be achieved when a large number of active antennas are placed only in the horizontal domain, BS form factor limitation often makes horizontal array placement infeasible. To cope with this limitation, this article introduces full-dimension MIMO (FD-MIMO) cellular wireless communication system, where active antennas are placed in a 2D grid at BSs. For analysis of the FD-MIMO systems, a 3D spatial channel model is introduced, on which system-level simulations are conducted. The simulation results show that the proposed FD-MIMO system with 32 antenna ports achieves 2-3.6 times cell average throughput gain and 1.5-5 times cell edge throughput gain compared to the 4G LTE system of two antenna ports at the BS. --- paper_title: Near-Optimal Hybrid Processing for Massive MIMO Systems via Matrix Decomposition paper_content: For the practical implementation of massive multiple-input multiple-output (MIMO) systems, the hybrid processing (precoding/combining) structure is promising to reduce the high cost rendered by large number of RF chains of the traditional processing structure. The hybrid processing is performed through low-dimensional digital baseband processing combined with analog RF processing enabled by phase shifters. We propose to design hybrid RF and baseband precoders/combiners for multi-stream transmission in point-to-point massive MIMO systems, by directly decomposing the pre-designed unconstrained digital precoder/combiner of a large dimension. The constant amplitude constraint of analog RF processing results in the matrix decomposition problem non-convex. Based on an alternate optimization technique, the non-convex matrix decomposition problem can be decoupled into a series of convex sub-problems and effectively solved by restricting the phase increment of each entry in the RF precoder/combiner within a small vicinity of its preceding iterate. A singular value decomposition based technique is proposed to secure an initial point sufficiently close to the global solution of the original non-convex problem. Through simulation, the convergence of the alternate optimization for such a matrix decomposition based hybrid processing (MD-HP) scheme is examined, and the performance of the MD-HP scheme is demonstrated to be near-optimal. 
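The hybrid analog/digital processing entries above, in particular the matrix-decomposition-based hybrid processing (MD-HP) entry directly above, revolve around approximating a pre-designed fully digital precoder by the product of a unit-modulus analog matrix (phase shifters) and a low-dimensional digital matrix. The sketch below uses a simple alternating least-squares / phase-projection heuristic for that factorization; the update rule, initialization, and dimensions are generic assumptions for illustration and do not reproduce the exact algorithms of the cited papers.

```python
import numpy as np

def hybrid_factorization(F_opt, n_rf, n_iter=50, seed=0):
    """Approximate a fully digital precoder F_opt (Nt x Ns) as F_rf @ F_bb,
    with F_rf (Nt x n_rf) restricted to unit-modulus entries (analog phase
    shifters) and F_bb (n_rf x Ns) left unconstrained (digital baseband)."""
    n_t, n_s = F_opt.shape
    rng = np.random.default_rng(seed)
    # Random-phase initialization of the analog stage.
    F_rf = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(n_t, n_rf)))
    for _ in range(n_iter):
        # Digital stage: least-squares fit for the current analog stage.
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
        # Analog stage: keep only the phases (phase-projection heuristic).
        F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))
    # Normalize total transmit power to Ns.
    F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, "fro")
    return F_rf, F_bb

# Toy usage: approximate the dominant right singular vectors of a random channel.
rng = np.random.default_rng(1)
H = (rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))) / np.sqrt(2)
F_opt = np.linalg.svd(H)[2].conj().T[:, :2]        # unconstrained precoder, Ns = 2
F_rf, F_bb = hybrid_factorization(F_opt, n_rf=4)
print(np.linalg.norm(F_opt - F_rf @ F_bb))          # approximation residual
```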
--- paper_title: Spatially Sparse Precoding in Millimeter Wave MIMO Systems paper_content: Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered. --- paper_title: Alternating beamforming methods for hybrid analog and digital MIMO transmission paper_content: Hybrid beamforming has drawn attention with the increasing concern on high frequency band communication and the appearance of flexible structures of base stations. This paper considers the practical transmitter structure that each antenna is only connected to a unique RF chain and optimizes the analog and digital beamforming matrices to maximize the achievable rate with transmit power constraint. Following alternating optimization principle, closed-form relationships between analog and digital precoders are obtained when both amplitude and phase or only phase can be adjusted to form analog beams. Simulation results demonstrate the advantage of our proposed methods over the existing beam steering method in terms of achievable rate with different scale antenna array. This reveals that the method can be applied to both low and high frequency band communications. When increasing the number of propagation paths, contrast to the traditional beam steering method, the performance of proposed methods becomes better while the complexity almost keeps at an acceptable constant level. --- paper_title: Hybrid Digital and Analog Beamforming Design for Large-Scale Antenna Arrays paper_content: The potential of using of millimeter wave (mmWave) frequency for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming methods which require one radio frequency (RF) chain per antenna element is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components in high frequencies. 
To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with much fewer number of RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer number of RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves a performance close to the performance of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used. --- paper_title: Joint Spatial Division and Multiplexing—The Large-Scale Array Regime paper_content: We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink/downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users. 
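The JSDM entry above describes a two-stage downlink precoder: a prebeamformer computed from long-term channel covariance, followed by a conventional multiuser precoder applied to the reduced-dimensional effective channel. The following numpy sketch shows that structure for a single user group, assuming a shared transmit covariance and plain zero-forcing in the second stage; the array size, group size, and covariance model are hypothetical, and the code is an illustration rather than the paper's algorithm.

```python
import numpy as np

def jsdm_two_stage(H, R, b):
    """Two-stage precoder in the spirit of JSDM for one user group.
    H: K x M instantaneous channel, R: M x M long-term transmit covariance,
    b: number of prebeamforming dimensions (K <= b <= M).
    Returns an M x K precoder P = B @ W with unit Frobenius norm."""
    # Stage 1: prebeamformer from the b dominant eigenvectors of R
    # (depends only on second-order statistics, not on H).
    eigval, eigvec = np.linalg.eigh(R)
    B = eigvec[:, np.argsort(eigval)[::-1][:b]]                  # M x b
    # Stage 2: zero-forcing on the reduced effective channel H_eff = H @ B.
    H_eff = H @ B                                                # K x b
    W = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)   # b x K
    P = B @ W
    return P / np.linalg.norm(P, "fro")

# Toy usage with a low-rank transmit covariance (hypothetical sizes).
M, K, b = 64, 4, 8
rng = np.random.default_rng(0)
A = rng.standard_normal((M, b)) + 1j * rng.standard_normal((M, b))
R = A @ A.conj().T / b                                  # rank-b covariance
L = np.linalg.cholesky(R + 1e-6 * np.eye(M))            # small jitter keeps it PD
G = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
H = G @ L.conj().T                                      # rows have covariance ~R
print(np.round(np.abs(H @ jsdm_two_stage(H, R, b)), 3)) # close to a scaled identity
```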
--- paper_title: Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G paper_content: With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. These results can be conveniently utilized to guide practical LSAS design for optimal energy/ spectrum efficiency trade-off. Finally, a reference signal design for the hybrid beamform structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G. --- paper_title: Massive MIMO: An Introduction paper_content: Demand for wireless throughput, both mobile and fixed, will always increase. One can anticipate that, in five or ten years, millions of augmented reality users in a large city will want to transmit and receive 3D personal high-definition video more or less continuously, say 100 megabits per second per user in each direction. Massive MIMO-also called Large-Scale Antenna Systems-is a promising candidate technology for meeting this demand. Fifty-fold or greater spectral efficiency improvements over fourth generation (4G) technology are frequently mentioned. A multiplicity of physically small, individually controlled antennas performs aggressive multiplexing/demultiplexing for all active users, utilizing directly measured channel characteristics. Unlike today's Point-to-Point MIMO, by leveraging time-division duplexing (TDD), Massive MIMO is scalable to any desired degree with respect to the number of service antennas. Adding more antennas is always beneficial for increased throughput, reduced radiated power, uniformly great service everywhere in the cell, and greater simplicity in signal processing. Massive MIMO is a brand new technology that has yet to be reduced to practice. Notwithstanding, its principles of operation are well understood, and surprisingly simple to elucidate. --- paper_title: Channel Estimation and Hybrid Precoding for Millimeter Wave Cellular Systems paper_content: Millimeter wave (mmWave) cellular systems will enable gigabit-per-second data rates thanks to the large bandwidth available at mmWave frequencies. 
To realize sufficient link margin, mmWave systems will employ directional beamforming with large antenna arrays at both the transmitter and receiver. Due to the high cost and power consumption of gigasample mixed-signal devices, mmWave precoding will likely be divided among the analog and digital domains. The large number of antennas and the presence of analog beamforming requires the development of mmWave-specific channel estimation and precoding algorithms. This paper develops an adaptive algorithm to estimate the mmWave channel parameters that exploits the poor scattering nature of the channel. To enable the efficient operation of this algorithm, a novel hierarchical multi-resolution codebook is designed to construct training beamforming vectors with different beamwidths. For single-path channels, an upper bound on the estimation error probability using the proposed algorithm is derived, and some insights into the efficient allocation of the training power among the adaptive stages of the algorithm are obtained. The adaptive channel estimation algorithm is then extended to the multi-path case relying on the sparse nature of the channel. Using the estimated channel, this paper proposes a new hybrid analog/digital precoding algorithm that overcomes the hardware constraints on the analog-only beamforming, and approaches the performance of digital solutions. Simulation results show that the proposed low-complexity channel estimation algorithm achieves comparable precoding gains compared to exhaustive channel training algorithms. The results illustrate that the proposed channel estimation and precoding algorithms can approach the coverage probability achieved by perfect channel knowledge even in the presence of interference. --- paper_title: What Will 5G Be? paper_content: What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. --- paper_title: Design and analysis of a reduced complexity MRC V-BLAST receiver for massive MIMO paper_content: In this paper, we take a proposed low-complexity Vertical Bell Laboratories Layered Space Time receiver based on maximal ratio combining and further simplify the algorithm by replacing the channel norm ordering by a power based ordering. The receiver operates in an uplink massive multiple-input-multiple-output deployment with distributed single-antenna users and a large base station (BS) array. 
The novel receiver is compared with other more complex detection schemes such as linear zero forcing. Moreover, an explicit closed form analysis for error probability for both co-located and distributed BSs is provided. It is shown that the error performance of the distributed scenario is well approximated by a modified version of a co-located scenario. The simulation study demonstrates the performance of the proposed scheme and confirms the accuracy of analytical results. --- paper_title: Network densification: the dominant theme for wireless evolution into 5G paper_content: This article explores network densification as the key mechanism for wireless evolution over the next decade. Network densification includes densification over space (e.g, dense deployment of small cells) and frequency (utilizing larger portions of radio spectrum in diverse bands). Large-scale cost-effective spatial densification is facilitated by self-organizing networks and intercell interference management. Full benefits of network densification can be realized only if it is complemented by backhaul densification, and advanced receivers capable of interference cancellation.
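The reduced-complexity MRC V-BLAST entry above builds its detector from per-stream maximal ratio combining and ordered successive interference cancellation, replacing the channel-norm ordering with a power-based ordering. The sketch below is only a generic illustration of ordered MRC-SIC; it uses a simple column-norm ordering as a stand-in, and the constellation, dimensions, and noise level are hypothetical rather than taken from the cited paper.

```python
import numpy as np

def mrc_sic_detect(y, H, constellation):
    """Ordered successive interference cancellation with per-stream MRC.
    y: length-N received vector, H: N x K channel (one column per stream).
    Streams are detected from strongest to weakest column norm; each hard
    decision is re-modulated and subtracted from the running residual."""
    order = np.argsort(-np.linalg.norm(H, axis=0))       # strongest stream first
    residual = np.array(y, dtype=complex)
    s_hat = np.zeros(H.shape[1], dtype=complex)
    for k in order:
        h = H[:, k]
        z = (h.conj() @ residual) / np.linalg.norm(h) ** 2   # MRC combining
        s_hat[k] = constellation[np.argmin(np.abs(constellation - z))]
        residual -= h * s_hat[k]                             # cancel this stream
    return s_hat

# Toy usage: 4 single-antenna users, 32 BS antennas, QPSK (hypothetical sizes).
rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((32, 4)) + 1j * rng.standard_normal((32, 4))) / np.sqrt(2)
s = rng.choice(qpsk, size=4)
y = H @ s + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
print(mrc_sic_detect(y, H, qpsk))
print(s)    # the two printouts should typically coincide at this noise level
```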
--- paper_title: Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective paper_content: The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems. --- paper_title: Orthogonal Frequency Division Multiplexing With Index Modulation paper_content: In this paper, a novel orthogonal frequency division multiplexing (OFDM) scheme, which is called OFDM with index modulation (OFDM-IM), is proposed for frequency-selective fading channels. In this scheme, inspiring from the recently introduced spatial modulation concept for multiple-input multiple-output (MIMO) channels, the information is conveyed not only by M-ary signal constellations as in classical OFDM, but also by the indices of the subcarriers, which are activated according to the incoming bit stream. Different transceiver structures are proposed and a theoretical error performance analysis is provided for the new scheme. It is shown via computer simulations that the proposed scheme achieves significantly better error performance than classical OFDM due to the information bits carried in the spatial domain by the indices of OFDM subcarriers. --- paper_title: Spatial Modulation for Generalized MIMO: Challenges, Opportunities, and Implementation paper_content: A key challenge of future mobile communication research is to strike an attractive compromise between wireless network's area spectral efficiency and energy efficiency. This necessitates a clean-slate approach to wireless system design, embracing the rich body of existing knowledge, especially on multiple-input-multiple-output (MIMO) technologies. This motivates the proposal of an emerging wireless communications concept conceived for single-radio-frequency (RF) large-scale MIMO communications, which is termed as SM. The concept of SM has established itself as a beneficial transmission paradigm, subsuming numerous members of the MIMO system family.
The research of SM has reached sufficient maturity to motivate its comparison to state-of-the-art MIMO communications, as well as to inspire its application to other emerging wireless systems such as relay-aided, cooperative, small-cell, optical wireless, and power-efficient communications. Furthermore, it has received sufficient research attention to be implemented in testbeds, and it holds the promise of stimulating further vigorous interdisciplinary research in the years to come. This tutorial paper is intended to offer a comprehensive state-of-the-art survey on SM-MIMO research, to provide a critical appraisal of its potential advantages, and to promote the discussion of its beneficial application areas and their research challenges leading to the analysis of the technological issues associated with the implementation of SM-MIMO. The paper is concluded with the description of the world's first experimental activities in this vibrant research field. --- paper_title: Massive MIMO with 1-bit ADC paper_content: We investigate massive multiple-input-multiple output (MIMO) uplink systems with 1-bit analog-to-digital converters (ADCs) on each receiver antenna. Receivers that rely on 1-bit ADC do not need energy-consuming interfaces such as automatic gain control (AGC). This decreases both ADC building and operational costs. Our design is based on maximal ratio combining (MRC), zero-forcing (ZF), and least squares (LS) detection, taking into account the effects of the 1-bit ADC on channel estimation. Through numerical results, we show good performance of the system in terms of mutual information and symbol error rate (SER). Furthermore, we provide an analytical approach to calculate the mutual information and SER of the MRC receiver. The analytical approach reduces complexity in the sense that a symbol and channel noise vectors Monte Carlo simulation is avoided. --- paper_title: On the comparison between code-index modulation and spatial modulation techniques paper_content: Recently, two promising modulation techniques have been developed aiming to increase data rate and save energy while being simple to implement. These modulation schemes belong to two different communication methods, however, they share the common structure of using an index as an additional parameter to convey information. The first scheme known as spatial modulation (SM), is a scheme that uses multiple antennas at the transmitter side where just one antenna is activated at a time and its index is used as means to convey information. The second is known as code-index modulation (CIM), a system that uses multiple spreading codes, where a certain code is selected and its index is used as a mechanism to ferry data. In this paper, we present these two modulation techniques and we discuss the associated set of challenges for each scheme. Moreover, in order to evaluate the advantages and disadvantages of each technique, we compare the energy efficiency, the system complexity, and the bit error rate performance of the SM and CIM schemes. --- paper_title: MMSE precoder for massive MIMO using 1-bit quantization paper_content: We propose a novel linear minimum-mean-squared-error (MMSE) precoder design for a downlink (DL) massive multiple-input-multiple-output (MIMO) scenario. For economical and computational efficiency reasons low resolution 1-bit digital-to-analog (DAC) and analog-to-digital (ADC) converters are used.
This comes at the cost of a performance loss that can be recovered by the large number of antennas deployed at the base station (BS) and an appropriate precoder design to mitigate the distortions due to the coarse quantization. The proposed precoder takes the quantization non-linearities into account and is split into a digital precoder and an analog precoder. We formulate the two-stage precoding problem such that the MSE of the users is minimized under the 1-bit constraint. In the simulations, we compare the new optimized precoding scheme with previously proposed linear precoders in terms of uncoded bit error ratio (BER). --- paper_title: On one-bit quantized ZF precoding for the multiuser massive MIMO downlink paper_content: We study low complexity precoding for a downlink massive MIMO multiuser system assuming a base station that employs one-bit digital-to-analog converters (DACs) in order to mitigate power usage. The use of one-bit DACs is equivalent to constraining the transmit signal to be drawn from a QPSK alphabet. While the precoding problem can be formulated using a standard maximum likelihood (ML) encoder, the implementation cost is prohibitive for massive numbers of antennas, even if a sphere encoding approach is used. Instead, we study the performance of a one-bit quantized zero-forcing precoder, and we show that it asymptotically provides the desired downlink vector with low complexity. Simulations show that the quantized ZF precoder can actually outperform the ML encoder for low to moderate signal-to-noise ratios. --- paper_title: User selection and power schedule for downlink non-orthogonal multiple access (NOMA) system paper_content: As a promising multiple access technique for 5th generation (5G) wireless systems, non-orthogonal multiple access (NOMA) has received much attention recently. In this paper, we study the NOMA based downlink multi-user beamforming system, where the base station (BS) tries to transmit information to multiple user clusters and each beam serves one user cluster comprising two users simultaneously. User selection algorithm is proposed to reduce the interference and improve the system information rate. Moreover, we also provide the user power schedule scheme to guarantee the advantage of the proposed NOMA downlink multi-user beamforming system. Simulation results show that the proposed user selection algorithm and the power schedule scheme can improve the sum-rate of NOMA based downlink multi-user system. --- paper_title: On the Performance of Non-Orthogonal Multiple Access in 5G Systems with Randomly Deployed Users paper_content: In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met. --- paper_title: Pseudo-random phase precoded spatial modulation and precoder index modulation paper_content: Spatial modulation (SM) is a transmission scheme that uses multiple transmit antennas but only one transmit RF chain.
At each time instant, only one among the transmit antennas will be active and the others remain silent. The index of the active transmit antenna will also convey information bits in addition to the information bits conveyed through modulation symbols. Pseudo-random phase precoding (PRPP) is a technique that can achieve high diversity orders even in single antenna systems without the need for channel state information at the transmitter and transmit power control. In this paper, we exploit the advantages of both SM and PRPP simultaneously. We propose a pseudo-random phase precoded SM (PRPP-SM) scheme, where both the modulation bits and the antenna index bits are precoded by pseudo-random phases. The proposed PRPP-SM system gives significant performance gains over SM system without PRPP and PRPP system without SM. Since maximum likelihood (ML) detection becomes exponentially complex in large dimensions, we propose a low complexity local search based detection (LSD) algorithm suited for PRPP-SM systems with large precoder sizes. Our simulation results show that with 4 transmit antennas, 1 receive antenna, 5 × 20 pseudo-random phase precoder matrix and BPSK modulation, the performance of PRPP-SM using ML detection is better than SM without PRPP with ML detection by about 9 dB at 10^-2 BER. This performance advantage gets even better for large precoding sizes. We also propose a precoder index modulation (PIM) scheme, which conveys additional information bits through the choice of a precoding matrix among a set of pre-determined PRPP matrices. Finally, combining the PIM and PRPP-SM schemes, we propose a PIM-SM scheme which conveys bits through both antenna index as well as precoder index. --- paper_title: Cellular architecture and key technologies for 5G wireless communication networks paper_content: The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. --- paper_title: Emerging Technologies and Research Challenges for 5G Wireless Networks paper_content: As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose.
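Several of the NOMA entries above (the user selection and power scheduling scheme, and the analysis with randomly deployed users) build on the standard two-user power-domain NOMA rate expressions: the weaker user decodes its own signal while treating the stronger user's signal as noise, and the stronger user removes the weaker user's signal by SIC before decoding its own. The snippet below evaluates exactly those textbook expressions; the channel gains, total power, and power split are assumed values for illustration only.

```python
import numpy as np

def noma_two_user_rates(g_weak, g_strong, p_total, alpha_weak, noise=1.0):
    """Achievable rates (bit/s/Hz) of two-user power-domain downlink NOMA.
    g_weak, g_strong: channel power gains |h|^2 with g_weak <= g_strong.
    alpha_weak: fraction of p_total allocated to the weak (far) user.
    The weak user treats the strong user's signal as noise; the strong user
    first removes the weak user's signal via SIC and then decodes its own."""
    p_weak = alpha_weak * p_total
    p_strong = (1.0 - alpha_weak) * p_total
    r_weak = np.log2(1.0 + p_weak * g_weak / (p_strong * g_weak + noise))
    r_strong = np.log2(1.0 + p_strong * g_strong / noise)
    return r_weak, r_strong

# Hypothetical example: 10 dB power budget, 80% of the power to the weak user.
r_w, r_s = noma_two_user_rates(g_weak=0.2, g_strong=1.5, p_total=10.0, alpha_weak=0.8)
print(f"weak user: {r_w:.2f} bit/s/Hz, strong user: {r_s:.2f} bit/s/Hz")
```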
--- paper_title: Cognitive radio in 5G: a perspective on energy-spectral efficiency trade-off paper_content: A cognitive cellular network, which integrates conventional licensed cellular radio and cognitive radio into a holistic system, is a promising paradigm for the fifth generation mobile communication systems. Understanding the trade-off between energy efficiency, EE, and spectral efficiency, SE, in cognitive cellular networks is of fundamental importance for system design and optimization. This article presents recent research progress on the EE-SE trade-off of cognitive cellular networks. We show how EE-SE trade-off studies can be performed systematically with respect to different architectures, levels of analysis, and capacity metrics. Three representative examples are given to illustrate how EE-SE trade-off analysis can lead to important insights and useful design guidelines for future cognitive cellular networks. --- paper_title: Full duplex techniques for 5G networks: self-interference cancellation, protocol design, and relay selection paper_content: The wireless research community aspires to conceive full duplex operation by supporting concurrent transmission and reception in a single time/frequency channel for the sake of improving the attainable spectral efficiency by a factor of two as compared to the family of conventional half duplex wireless systems. The main challenge encountered in implementing FD wireless devices is that of finding techniques for mitigating the performance degradation imposed by self-interference. In this article, we investigate the potential FD techniques, including passive suppression, active analog cancellation, and active digital cancellation, and highlight their pros and cons. Furthermore, the troubles of FD medium access control protocol design are discussed for addressing the problems such as the resultant end-to-end delay and network congestion. Additionally, an opportunistic decode-and-forward based relay selection scheme is analyzed in underlay cognitive networks communicating over independent and identically distributed Rayleigh and Nakagami-m fading channels in the context of FD relaying. We demonstrate that the outage probability of multi-relay cooperative communication links can be substantially reduced. Finally, we discuss the challenges imposed by the aforementioned techniques and a range of critical issues associated with practical FD implementations. It is shown that numerous open challenges, such as efficient SI suppression, high-performance FD MAC-layer protocol design, low power consumption, and hybrid FD/HD designs, have to be tackled before successfully implementing FD-based systems. --- paper_title: Wideband Millimeter-Wave Propagation Measurements and Channel Models for Future Wireless Communication System Design paper_content: The relatively unused millimeter-wave (mmWave) spectrum offers excellent opportunities to increase mobile capacity due to the enormous amount of available raw bandwidth. This paper presents experimental measurements and empirically-based propagation channel models for the 28, 38, 60, and 73 GHz mmWave bands, using a wideband sliding correlator channel sounder with steerable directional horn antennas at both the transmitter and receiver from 2011 to 2013. More than 15,000 power delay profiles were measured across the mmWave bands to yield directional and omnidirectional path loss models, temporal and spatial channel models, and outage probabilities. Models presented here offer side-by-side comparisons of propagation characteristics over a wide range of mmWave bands, and the results and models are useful for the research and standardization process of future mmWave systems. Directional and omnidirectional path loss models with respect to a 1 m close-in free space reference distance over a wide range of mmWave frequencies and scenarios using directional antennas in real-world environments are provided herein, and are shown to simplify mmWave path loss models, while allowing researchers to globally compare and standardize path loss parameters for emerging mmWave wireless networks.
A new channel impulse response modeling framework, shown to agree with extensive mmWave measurements over several bands, is presented for use in link-layer simulations, using the observed fact that spatial lobes contain multipath energy that arrives at many different propagation time intervals. The results presented here may assist researchers in analyzing and simulating the performance of next-generation mmWave wireless networks that will rely on adaptive antennas and multiple-input and multiple-output (MIMO) antenna systems. --- paper_title: Large scale antenna arrays with increasing antennas in limited physical space paper_content: Large Scale multiple input multiple output (MIMO) systems have recently emerged as a promising technology for 5G communications. While they have been shown to offer significant performance benefits in theoretical studies, the large scale MIMO transmitters will have to be deployed in the limited physical space of today's base stations (BSs). Accordingly, this paper examines effects of deploying increasing numbers of antennas in fixed physical space, by reducing the antenna spacing. We focus on the resulting performance of large-scale MIMO transmitters using low complexity closed form precoding techniques. In particular, we investigate the combined effect of reducing the distance between the antenna elements with increasing the number of elements in a fixed transmitter space. This gives rise to two contradicting phenomena: the reduction of spatial diversity due to reducing the separation between antennas and the increase in transmit diversity by increasing the number of elements. To quantify this tradeoff, we investigate densely deployed uniform antenna arrays modelled by detailed electromagnetic simulation. Our results show the somewhat surprising result that, by reducing the separations between the antennas to significantly less than the transmit wavelength to fit more antennas, the resulting system performance improves. --- paper_title: Towards Very Large Aperture Massive MIMO: a measurement based study paper_content: Massive MIMO is a new technique for wireless communications that claims to offer very high system throughput and energy efficiency in multi-user scenarios. The cost is to add a very large number of antennas at the base station. Theoretical research has probed these benefits, but very few measurements have showed the potential of Massive MIMO in practice. We investigate the properties of measured Massive MIMO channels in a large indoor venue. We describe a measurement campaign using 3 arrays having different shape and aperture, with 64 antennas and 8 users with 2 antennas each. We focus on the impact of the array aperture which is the main limiting factor in the degrees of freedom available in the multiple antenna channel. We find that performance is improved as the aperture increases, with an impact mostly visible in crowded scenarios where the users are closely spaced. We also test MIMO capability within a same user device with user proximity effect. We see a good channel resolvability with confirmation of the strong effect of the user hand grip. At last, we highlight that propagation conditions where line-of-sight is dominant can be favorable.
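The wideband mmWave measurement entry above expresses its directional and omnidirectional path loss results relative to a 1 m close-in free-space reference distance. The function below evaluates that generic close-in (CI) model form, PL(d) = FSPL(1 m, f) + 10 n log10(d) + X_sigma; the path loss exponent, carrier frequency, and shadowing value used in the example are assumed for illustration and are not the fitted parameters reported by the paper.

```python
import numpy as np

def ci_path_loss_db(freq_ghz, dist_m, ple, shadow_sigma_db=0.0, rng=None):
    """Close-in (CI) free-space reference distance path loss model with d0 = 1 m:
        PL(d) = FSPL(1 m, f) + 10 * n * log10(d / 1 m) + X_sigma,
    where FSPL(1 m, f) ~= 32.4 + 20*log10(f_GHz) dB and n is the path loss exponent."""
    fspl_1m = 32.4 + 20.0 * np.log10(freq_ghz)
    shadowing = rng.normal(0.0, shadow_sigma_db) if rng is not None else 0.0
    return fspl_1m + 10.0 * ple * np.log10(dist_m) + shadowing

# Assumed example: 28 GHz carrier, 100 m link, path loss exponent n = 3.0.
print(f"{ci_path_loss_db(28.0, 100.0, ple=3.0):.1f} dB")    # about 121.3 dB
```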
--- paper_title: Channel measurements and analysis for very large array systems at 2.6 GHz paper_content: Very large MIMO is a technique that potentially can offer large network capacities in multi-user scenarios where the users are equipped only with single antennas. In this paper we are investigating channel properties for a realistic, though somewhat extreme, outdoor base station scenario using a large array. We present measurement results using a 128 element linear array base station and 26 different user positions in line-of-sight (LOS) and 10 different user positions in non line-of-sight (NLOS). We analyze the Ricean K-factor, received power levels over the array, antenna correlation and eigenvalue distributions. We show that the statistical properties of the received signal vary significantly over the large array. Near field effects and the non-stationarities over the array help decorrelating the channel for different users, thereby providing favorable channel conditions with stable channels and low interference for the considered single antenna users. --- paper_title: Multidimensional high-resolution channel sounding in mobile radio paper_content: Multidimensional high resolution channel sounding is a method of wave propagation analysis, channel modeling, and link performance evaluation in mobile radio. For this application, antenna array architecture design, calibration, and a maximum likelihood channel parameter estimation framework (RIMAX) are described. Depending on the available measured dimensions, the algorithm estimates the four coefficients of the polarimetric path weight matrix, the direction of arrival, the direction of departure, the time delay of arrival, and the Doppler-shift of the specular paths. Moreover, the parameters of a statistical model of the dense multipath components resulting from distributed diffuse scattering are estimated. Additional reliability measures are calculated to enhance the robustness of the estimate. --- paper_title: Base station polarization diversity reception for mobile radio paper_content: Base station polarization diversity reception in which signals are received by dual polarization (ex. ±45° polarization) is discussed. A theoretical analysis is presented on correlation coefficient ρ between diversity branches, and received signal level decrease L caused by polarization difference at the base and mobile station. The generalized expressions of ρ and L are then derived. Measurements were also carried out at 900 MHz in an urban area. Consequently, it was found that the ρ and L values are expressed by three factors, ρ is lower than 0.6 and L is smaller than 2.5 dB. It is concluded that this polarization diversity reception can be used as an effective diversity reception. --- paper_title: Wireless Communication for Factory Automation: an opportunity for LTE and 5G systems paper_content: The evolution of wireless communication from 4G toward 5G is driven by application demands and business models envisioned for 2020 and beyond. This requires network support for novel use cases in addition to classical mobile broadband services.
Wireless factory automation is an application area with highly demanding communication requirements. We classify these requirements and identify the opportunities for the current LTE air interface for factory automation applications. Moreover, we give an outlook on the relevant design considerations to be addressed by 5G communication systems. --- paper_title: 5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications paper_content: This article provides some fundamental indications about wireless communications beyond LTE/LTE-A (5G), representing the key findings of the European research project 5GNOW. We start with identifying the drivers for making the transition to 5G networks. Just to name one, the advent of the Internet of Things and its integration with conventional human-initiated transmissions creates a need for a fundamental system redesign. Then we make clear that the strict paradigm of synchronism and orthogonality as applied in LTE prevents efficiency and scalability. We challenge this paradigm and propose new key PHY layer technology components such as a unified frame structure, multicarrier waveform design including a filtering functionality, sparse signal processing mechanisms, a robustness framework, and transmissions with very short latency. These components enable indeed an efficient and scalable air interface supporting the highly varying set of requirements originating from the 5G drivers. --- paper_title: FANTASTIC-5G: flexible air interface for scalable service delivery within wireless communication networks of the 5th generation paper_content: 5th generation mobile networks will have to cope with a high degree of heterogeneity in terms of services, mobility, number of devices and so on. Thus, diverse and often contradicting key performance indicators need to be supported, but having multiple radio access technologies for multi-service support below 6 GHz will be too costly. FANTASTIC-5G will develop a new multi-service air interface through a modular design. To allow the system to adapt to the anticipated heterogeneity, some properties need to be pursued, like simplicity, flexibility, scalability, versatility, efficiency and future proofness. Based on these properties, a selected set of use cases and link and network design will be presented. The paper will also comprise validation and system level simulations through some indicative results and will conclude with the overall impact to 5G standardisation. --- paper_title: Understanding the Current Operation and Future Roles of Wireless Networks: Co-Existence, Competition and Co-Operation in the Unlicensed Spectrum Bands paper_content: Technology and policy are coming together to enable a paradigmatic change to the most widely used mechanism, exclusive rights, which allows mobile telecommunications operators to use the radio spectrum. Although spectrum sharing is not a new idea, the limited supply of spectrum and the enormous demand for mobile broadband services are forcing spectrum authorities to look more closely into a range of tools that might accelerate its adoption. This paper seeks to understand how co-existence and co-operation of Wi-Fi and cellular networks in the unlicensed spectrum can increase the overall capacity of heterogeneous wireless networks. It also reveals the challenges posed by new uses, such as machine-to-machine communications and the Internet of Things.
It also brings together two major proposed regulatory approaches, such as those by the U.K.’s Ofcom and the European Commission, which currently represent leading efforts to provide spectrum authorities with robust spectrum sharing frameworks, to discuss policy tools likely to be implemented. --- paper_title: Filtered-OFDM - Enabler for Flexible Waveform in the 5th Generation Cellular Networks paper_content: The underlying waveform has always been a shaping factor for each generation of the cellular networks, such as orthogonal frequency division multiplexing (OFDM) for the 4th generation cellular networks (4G). To meet the diversified and pronounced expectations upon the upcoming 5G cellular networks, here we present an enabler for flexible waveform configuration, named as filtered-OFDM (f-OFDM). With the conventional OFDM, a unified numerology is applied across the bandwidth provided, balancing among the channel characteristics and the service requirements, and the spectrum efficiency is limited by the compromise we made. In contrast, with f-OFDM, the assigned bandwidth is split up into several subbands, and different types of services are accommodated in different subbands with the most suitable waveform and numerology, leading to an improved spectrum utilization. After outlining the general framework of f-OFDM, several important design aspects are also discussed, including filter design and guard tone arrangement. In addition, an extensive comparison among the existing 5G waveform candidates is also included to illustrate the advantages of f-OFDM. Our simulations indicate that, in a specific scenario with four distinct types of services, f-OFDM provides up to 46% of throughput gains over the conventional OFDM scheme. --- paper_title: Relaxed synchronization support of universal filtered multi-carrier including autonomous timing advance paper_content: 5G wireless systems may benefit by waveforms supporting relaxed synchronization, as this enables reduced energy consumption, better support of low-end devices and reduction of signaling overhead. In this paper we evaluate UFMC (Universal Filtered Multi-Carrier), also known as UF-OFDM (universal filtered OFDM) — the recently appeared waveform option for 5G — with respect to its performance in scenarios with relaxed synchronization. Both carrier frequency offset, e.g. due to low-cost oscillators used in low-end devices, and relative fractional delay, e.g. due to the absence of an energy consuming closed-loop ranging mechanism, is considered. We introduce a concept called autonomous timing advance (ATA) improving the overall system performance. With ATA the system can operate purely based on open-loop synchronization. For comparing UFMC with CP-OFDM, we evaluate the mean squared error (MSE) in the receiver after frequency conversion. With applying a limit regarding the tolerable amount of distortion, we calculate the supported link distance for a system applying either UFMC or CP-OFDM for LTE-like settings. With applying UFMC, higher link distances are supported than with CP-OFDM, if the system applies open-loop synchronization. --- paper_title: Spectrum Sharing in mmWave Cellular Networks via Cell Association, Coordination, and Beamforming paper_content: This paper investigates the extent to which spectrum sharing in millimeter-wave (mmWave) networks with multiple cellular operators is a viable alternative to traditional dedicated spectrum allocation. 
Specifically, we develop a general mathematical framework to characterize the performance gain that can be obtained when spectrum sharing is used, as a function of the underlying beamforming, operator coordination, bandwidth, and infrastructure sharing scenarios. The framework is based on joint beamforming and cell association optimization, with the objective of maximizing the long-term throughput of the users. Our asymptotic and non-asymptotic performance analyses reveal five key points: 1) spectrum sharing with light on-demand intra- and inter-operator coordination is feasible, especially at higher mmWave frequencies (for example, 73 GHz); 2) directional communications at the user equipment substantially alleviate the potential disadvantages of spectrum sharing (such as higher multiuser interference); 3) large numbers of antenna elements can reduce the need for coordination and simplify the implementation of spectrum sharing; 4) while inter-operator coordination can be neglected in the large-antenna regime, intra-operator coordination can still bring gains by balancing the network load; and 5) critical control signals among base stations, operators, and user equipment should be protected from the adverse effects of spectrum sharing, for example by means of exclusive resource allocation. The results of this paper, and their extensions obtained by relaxing some ideal assumptions, can provide important insights for future standardization and spectrum policy. --- paper_title: Pulse shaped OFDM for asynchronous uplink access paper_content: This paper considers the scenario of massive machine type communication (MTC) in cellular uplink for short packet traffic. For this traffic type, the closed loop Timing Advance (TA) adjustment process applied in current LTE uplink introduces significant overhead in signaling and energy consumption. We propose to pulse shape the OFDM signals using time-frequency localized prototype filters. This improves the robustness of the waveform against timing offsets, so that the uplink transmission procedure can be simplified. Building on that property, an asynchronous uplink access scheme is developed. In the performance evaluation, we compare with both DFTs-OFDM and CP-OFDM schemes and demonstrate that higher spectral efficiency can be achieved by the pulse shaped OFDM scheme, highlighting the benefits in properly designing the pulse shapes of a multicarrier waveform. --- paper_title: Orthogonal Time Frequency Space Modulation paper_content: A new two-dimensional modulation technique called Orthogonal Time Frequency Space (OTFS) modulation designed in the delay-Doppler domain is introduced. Through this design, which exploits full diversity over time and frequency, OTFS coupled with equalization converts the fading, time-varying wireless channel experienced by modulated signals such as OFDM into a time-independent channel with a complex channel gain that is roughly constant for all symbols. Thus, transmitter adaptation is not needed. This extraction of the full channel diversity allows OTFS to greatly simplify system operation and significantly improves performance, particular in systems with high Doppler, short packets, and large antenna arrays. Simulation results indicate at least several dB of block error rate performance improvement for OTFS over OFDM in all of these settings. 
In addition, these results show that even at very high Dopplers (500 km/h), OTFS approaches channel capacity through linear scaling of throughput with the MIMO order, whereas the performance of OFDM under typical design parameters breaks down completely. --- paper_title: FBMC receiver for multi-user asynchronous transmission on fragmented spectrum paper_content: Relaxed synchronization and access to fragmented spectrum are considered for future generations of wireless networks. Frequency division multiple access for filter bank multicarrier (FBMC) modulation provides promising performance without strict synchronization requirements contrary to conventional orthogonal frequency division multiplexing (OFDM). The architecture of a FBMC receiver suitable for this scenario is considered. Carrier frequency offset (CFO) compensation is combined with intercarrier interference (ICI) cancellation and performs well under very large frequency offsets. Channel estimation and interpolation had to be adapted and proved effective even for heavily fragmented spectrum usage. Channel equalization can sustain large delay spread. Because all the receiver baseband signal processing functionalities are proposed in the frequency domain, the overall architecture is suitable for multiuser asynchronous transmission on fragmented spectrum. --- paper_title: Flexible Configured OFDM for 5G Air Interface paper_content: A flexible orthogonal frequency division multiplex (OFDM)-based modulation scheme is proposed under the name of Flexible Configured OFDM (FC-OFDM). It enables a flexible subband configuration and targets a multi-service scenario, which is envisioned for future 5G networks. The proposed FC-OFDM scheme provides a good compromise between the filter bank multi-carrier with offset quadrature amplitude modulation and the classical cyclic prefix-based OFDM system. The detailed system structure is illustrated in this paper, together with efficiency evaluations. --- paper_title: Zero-tail DFT-spread-OFDM signals paper_content: In the existing scheduled radio standards using Orthogonal Frequency Division Multiplexing (OFDM) or Discrete Fourier Transform-spread-OFDM (DFT-s-OFDM) modulation, the Cyclic Prefix (CP) duration is usually hard-coded and set as a compromise between the expected channel characteristics and the necessity of fitting a predefined frame duration. This may lead to system inefficiencies as well as bad coexistence with networks using different CP settings. In this paper, we propose the usage of zero-tail DFT-s-OFDM signals as a solution for decoupling the radio numerology from the expected channel characteristics. Zero-tail DFT-s-OFDM modulation allows to adapt the overhead to the estimated delay spread/propagation delay. Moreover, it enables networks operating over channels with different characteristics to adopt the same numerology, thus improving their coexistence. An analytical description of the zero-tail DFT-s-OFDM signals is provided, as well as a numerical performance evaluation with Monte Carlo simulations. Zero-tail DFT-s-OFDM signals are shown to have approximately the same Block Error Rate (BLER) performance of traditional OFDM, with the further benefit of lower out-of-band (OOB) emissions. --- paper_title: A study on the coexistence of fixed satellite service and cellular networks in a mmWave scenario paper_content: The use of a larger bandwidth in the millimeter wave (mmWave) spectrum is one of the key components of next generation cellular networks.
Currently, part of this band is allocated on a co-primary basis to a number of other applications, such as the fixed satellite services (FSSs). In this paper, we investigate the coexistence between a cellular network and FSSs in a mmWave scenario. In light of the parameters recommended by the standard and the recent results presented in the literature on the mmWave channel model, we analyze different BSs deployments and different antenna configurations at the transmitters. Finally, we show how, exploiting the features of a mmWave scenario, the coexistence between cellular and satellite services is feasible and the interference at the FSS antenna can be kept below recommended levels. --- paper_title: QAM-FBMC: A New Multi-Carrier System for Post-OFDM Wireless Communications paper_content: Recently, as asynchronous heterogeneous network scenario becoming one of the key features for next generation wireless communications, superior spectrum confinement as well as higher spectral efficiency compared to cyclic prefixed orthogonal frequency division multiplexing (CP-OFDM) has been taken into consideration for future radio access technologies. In this paper, we propose a new quadrature amplitude modulation based filter-bank multi-carrier (QAM-FBMC) system that provides with an inherent spectral efficiency gain against CP-OFDM, which comes from reduction of the redundancies such as cyclic prefix (CP) and guard-band, while keeping the symbol rate same as in CP-less OFDM. We propose a new transceiver structure consisting of at least two different filter-bank bases at both transmitter and receiver sides. Practical algorithms including channel estimation and equalization to mitigate multi-path fading channel without CP are proposed. Various evaluation results show that the proposed system performs comparable to the CP-OFDM system even without CP and guard-band reduction is also available from the well-confined spectrum. --- paper_title: FS-FBMC: A flexible robust scheme for efficient multicarrier broadband wireless access paper_content: An alternative implementation of the filter bank multicarrier (FBMC) concept is introduced. It is based on an FFT whose size is the length of the prototype filter. The approach clarifies the connection with OFDM and its main benefit is in the receiver, where high performance sub-channel equalization and timing offset compensation are achieved in a straightforward manner without additional delay. The scheme is particularly appropriate for broadband wireless access, to cope with fragmented frequency bands and to optimize the utilization of the spectrum, for example with the help of water-filling based sub-channel loading algorithms. The context of TV white spaces is taken for illustration. An issue with the proposed scheme is the computational complexity in the receiver and an approach having the potential for substantial savings is mentioned. --- paper_title: Investigating Spectrum Sharing between 5G Millimeter Wave Networks and Fixed Satellite Systems paper_content: In this paper we study coexistence of 5G small cells with fixed satellite systems (FSSs) in a scenario where both systems operate co-channel in the large spectrum bandwidth available around 28 GHz. Such studies are of great importance to inform the research community,industry and regulators which are currently investigating spectrum requirements and technology options for 5G systems. 
Focusing on the FSS uplink scenario, we use realistic FSS parameters and radiation pattern, combined with very recent channel models from the literature, and analyze the impact of interference resulting from FSS radiation on the achievable capacity and throughput of 5G small cells considering various multiple antenna configurations at the base stations (BSs) and different deployments of the mobile transmitters when no cooperation is allowed between the BSs. Starting from the lower bound, represented by an omnidirectional configuration of the transmitters, we extend our work to the analysis of large antenna arrays that will be used in the new generation of mobile cellular systems. Our results indicate that by exploiting a large number of antennas at the BSs and properly setting the protection distance between FSS and cellular BS, co-channel deployment of 5G small cells with FSS earth stations is possible, in the sense that adequate user data rates could be provided to the majority of mobile users. --- paper_title: A flexible 100-antenna testbed for Massive MIMO paper_content: Massive multiple-input multiple-output (MIMO) is one of the main candidates to be included in the fifth generation (5G) cellular systems. For further system development it is desirable to have real-time testbeds showing possibilities and limitations of the technology. In this paper we describe the Lund University Massive MIMO testbed — LuMaMi. It is a flexible testbed where the base station operates with up to 100 coherent radio-frequency transceiver chains based on software radio technology. Orthogonal Frequency Division Multiplex (OFDM) based signaling is used for each of the 10 simultaneous users served in the 20 MHz bandwidth. Real time MIMO precoding and decoding is distributed across 50 Xilinx Kintex-7 FPGAs with PCI-Express interconnects. The unique features of this system are: (i) high throughput processing of 384 Gbps of real time baseband data in both the transmit and receive directions, (ii) low-latency architecture with channel estimate to precoder turnaround of less than 500 micro seconds, and (iii) a flexible extension up to 128 antennas. We detail the design goals of the testbed, discuss the signaling and system architecture, and show initial measured results for an uplink Massive MIMO over-the-air transmission from four single-antenna UEs to 100 BS antennas. --- paper_title: Filtered OFDM: A new waveform for future wireless systems paper_content: A spectrally-localized waveform is proposed based on filtered orthogonal frequency division multiplexing (f-OFDM). By allowing the filter length to exceed the cyclic prefix (CP) length of OFDM and designing the filter appropriately, the proposed f-OFDM waveform can achieve a desirable frequency localization for bandwidths as narrow as a few tens of subcarriers, while keeping the inter-symbol interference/inter-carrier interference (ISI/ICI) within an acceptable limit. Enabled by the proposed f-OFDM, an asynchronous filtered orthogonal frequency division multiple access (f-OFDMA)/filtered discrete-Fourier transform-spread OFDMA (f-DFT-S-OFDMA) scheme is introduced, which uses the spectrum shaping filter at each transmitter for side lobe leakage elimination and a bank of filters at the receiver for inter-user interference rejection. Per-user downsampling and short fast Fourier transform (FFT) are used at the receiver to ensure a reasonable complexity of implementation.
The proposed scheme removes the inter-user time-synchronization overhead required in the synchronous OFDMA/DFT-S-OFDMA. The performance of the asynchronous f-OFDMA is evaluated and compared with that of the universal-filtered OFDM (UF-OFDM), proposed in [1], [2]. --- paper_title: 5G air interface design based on Universal Filtered (UF-)OFDM paper_content: In this paper we discuss 5G air interface design with respect to waveforms, multiple access and frame structure. We start by introducing the 5G system level requirements and expected scenarios. 5G will be driven by supporting very heterogeneous service and device classes. A unified frame structure for handling those heterogeneous traffic types is presented. Multiple access for 5G will make use of strict synchronicity where it is justifiable and will drop it where signalling overhead and energy consumption will demand for. In order to serve this unified frame structure best, the choice of the underlying waveform is discussed. CP-OFDM has its limitations in spectral properties and in conjunction with relaxed time-frequency alignment. The most discussed contender so far is Filter-Bank based Multi-Carrier (FBMC), with better spectral properties but new drawbacks introduced by offset-QAM and long filter lengths. Hence, a new alternative is required: Universal-Filtered OFDM (UF-OFDM), also known as Universal Filtered Multi-Carrier (UFMC), is a recent technology close to OFDM. UF-OFDM, according to encouraging results so far, summarized in this paper, fits best to the 5G system requirements. A further feature of the Unified Frame Structure is the usage of multiple signal layers. Here, users can be separated e.g. based on their interleavers, as done in Interleave-Division Multiple-Access (IDMA). This will introduce an additional degree of freedom for the system, improve robustness against crosstalk and helps to exploit the capacity of the multiple access channel (MAC). Altogether, the proposed new concepts offer an emboldening approach for dealing with the new challenges, faced by 5G wireless system designers. --- paper_title: Filtered-OFDM - Enabler for Flexible Waveform in the 5th Generation Cellular Networks paper_content: The underlying waveform has always been a shaping factor for each generation of the cellular networks, such as orthogonal frequency division multiplexing (OFDM) for the 4th generation cellular networks (4G). To meet the diversified and pronounced expectations upon the upcoming 5G cellular networks, here we present an enabler for flexible waveform configuration, named as filtered-OFDM (f-OFDM). With the conventional OFDM, a unified numerology is applied across the bandwidth provided, balancing among the channel characteristics and the service requirements, and the spectrum efficiency is limited by the compromise we made. In contrast, with f-OFDM, the assigned bandwidth is split up into several subbands, and different types of services are accommodated in different subbands with the most suitable waveform and numerology, leading to an improved spectrum utilization. After outlining the general framework of f-OFDM, several important design aspects are also discussed, including filter design and guard tone arrangement. In addition, an extensive comparison among the existing 5G waveform candidates is also included to illustrate the advantages of f-OFDM. Our simulations indicate that, in a specific scenario with four distinct types of services, f-OFDM provides up to 46% of throughput gains over the conventional OFDM scheme. 
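To make the subband-filtering idea in the filtered-OFDM abstracts above concrete, the following minimal NumPy sketch builds an independent CP-OFDM signal for each subband, shapes it with its own band-pass filter, and sums the results. The subband boundaries, the Hann-windowed sinc prototype filter, and the single shared numerology are illustrative assumptions, not the filter designs or numerologies used in the cited papers.

```python
import numpy as np

def f_ofdm_tx(n_fft=512, cp_len=36, n_symbols=4, filt_len=257,
              subbands=((50, 120), (200, 260))):
    """Toy f-OFDM transmitter: one CP-OFDM signal and one spectrum-shaping
    band-pass filter per subband, then a sum of the filtered subband signals."""
    tx = None
    for start, stop in subbands:
        n_sc = stop - start
        bits = np.random.randint(0, 2, (n_symbols, 2 * n_sc))
        qpsk = ((1 - 2 * bits[:, ::2]) + 1j * (1 - 2 * bits[:, 1::2])) / np.sqrt(2)
        grid = np.zeros((n_symbols, n_fft), dtype=complex)
        grid[:, start:stop] = qpsk                       # this subband only
        time = np.fft.ifft(grid, axis=1)
        with_cp = np.hstack([time[:, -cp_len:], time])   # prepend cyclic prefix
        sig = with_cp.ravel()
        # Hann-windowed sinc low-pass, shifted to the subband centre frequency.
        n = np.arange(filt_len) - (filt_len - 1) / 2
        bw = n_sc / n_fft
        centre = (start + stop) / (2 * n_fft)
        h = bw * np.sinc(bw * n) * np.hanning(filt_len) * np.exp(2j * np.pi * centre * n)
        filtered = np.convolve(sig, h)
        tx = filtered if tx is None else tx + filtered
    return tx

print(f_ofdm_tx().shape)   # (n_symbols * (n_fft + cp_len) + filt_len - 1,)
```

A receiver would mirror this structure with a bank of matched subband filters followed by per-subband CP-OFDM demodulation, which is broadly the bank-of-filters receiver the f-OFDMA abstract above describes.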
--- paper_title: A survey: Several technologies of non-orthogonal transmission for 5G paper_content: One key advantage of 4G OFDM system is the relatively simple receiver implementation due to the orthogonal resource allocation. However, from sum-capacity and spectral efficiency points of view, orthogonal systems are never the achieving schemes. With the rapid development of mobile communication systems, a novel concept of non-orthogonal transmission for 5G mobile communications has attracted researches all around the world. In this trend, many new multiple access schemes and waveform modulation technologies were proposed. In this paper, some promising ones of them were discussed which include Non-orthogonal Multiple Access (NOMA), Sparse Code Multiple Access (SCMA), Multi-user Shared Access (MUSA), Pattern Division Multiple Access (PDMA) and some main new waveforms including Filter-bank based Multicarrier (FBMC), Universal Filtered Multi-Carrier (UFMC), Generalized Frequency Division Multiplexing (GFDM). By analyzing and comparing features of these technologies, a research direction of guiding on future 5G multiple access and waveform are given. --- paper_title: Pattern division multiple access (PDMA) for cellular future radio access paper_content: This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity. --- paper_title: FB-OFDM: A novel multicarrier scheme for 5G paper_content: In this paper, a novel multicarrier waveform called filter bank OFDM (FB-OFDM) is proposed for use in 5G. FB-OFDM adopts complex modulation and filtering on subcarrier level. It is compatible to CP-OFDM utilized by LTE and its transceiver can be implemented by simple polyphase filter. Moreover, FB-OFDM system can support asynchrony between subcarriers and choose pulse function adaptively under different scenarios. A pulse function that has good locality in both time and frequency domain is also constructed based on root raised cosine function. Simulation results are provided to validate the performance of FB-OFDM and the proposed pulse function. --- paper_title: Argos: practical many-antenna base stations paper_content: Multi-user multiple-input multiple-output theory predicts manyfold capacity gains by leveraging many antennas on wireless base stations to serve multiple clients simultaneously through multi-user beamforming (MUBF). However, realizing a base station with a large number antennas is non-trivial, and has yet to be achieved in the real-world. We present the design, realization, and evaluation of Argos, the first reported base station architecture that is capable of serving many terminals simultaneously through MUBF with a large number of antennas (M >> 10). Designed for extreme flexibility and scalability, Argos exploits hierarchical and modular design principles, properly partitions baseband processing, and holistically considers real-time requirements of MUBF. 
Argos employs a novel, completely distributed, beamforming technique, as well as an internal calibration procedure to enable implicit beamforming with channel estimation cost independent of the number of base station antennas. We report an Argos prototype with 64 antennas and capable of serving 15 clients simultaneously. We experimentally demonstrate that by scaling from 1 to 64 antennas the prototype can achieve up to 6.7 fold capacity gains while using a mere 1/64th of the transmission power. ---
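A conjugate (matched-filter) multi-user beamformer has exactly the per-antenna locality that the Argos abstract exploits: row m of the weight matrix needs only antenna m's own channel estimates, so the computation can be fully distributed. The NumPy sketch below illustrates that property; the 64-antenna, 15-user sizes mirror the prototype, but the i.i.d. Rayleigh channel and per-antenna power normalization are simplifying assumptions rather than details taken from the paper.

```python
import numpy as np

def conjugate_beamforming_weights(H):
    """H[m, k]: channel from base-station antenna m to user k. Row m of the
    weight matrix depends only on row m of H, i.e. only on channel estimates
    held locally at antenna m, which allows fully distributed computation."""
    W = np.conj(H)
    return W / np.linalg.norm(W, axis=1, keepdims=True)   # per-antenna scaling

M, K = 64, 15                                  # sizes of the Argos prototype
H = (np.random.randn(M, K) + 1j * np.random.randn(M, K)) / np.sqrt(2)
W = conjugate_beamforming_weights(H)

G = H.T @ W                                    # K x K effective downlink channel
desired = np.abs(np.diag(G)) ** 2
interference = np.sum(np.abs(G) ** 2, axis=1) - desired
print(np.round(10 * np.log10(desired / interference), 1))   # per-user SIR (dB)
```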
Title: 5G: A Tutorial Overview of Standards, Trials, Challenges, Deployment, and Practice
Section 1: INTRODUCTION
Description 1: Introduce the evolution and necessity of 5G technology, including its usage scenarios and key performance parameters.
Section 2: 5G Requirements
Description 2: Discuss the minimum technical performance requirements for 5G and relevant key performance parameters.
Section 3: Spectrum Regulation
Description 3: Explain the frequency bands approved for 5G, including existing and new mm-wave bands, spectrum management, and coexistence.
Section 4: Standardisation for 5G
Description 4: Provide details on the standardization efforts by ITU-R and 3GPP, including the phased standardization process and key decisions.
Section 5: Key Technologies for RF Interface(s) of 5G
Description 5: Outline the significant technologies contributing to the 1000 times gain in capacity, including increased bandwidth, massive MIMO, network densification, and new waveforms.
Section 6: Channel Characteristics
Description 6: Summarize the propagation channel characteristics for 5G, including massive MIMO, distributed systems, and mm-wave channels.
Section 7: Signal Processing Techniques for 5G
Description 7: Discuss the major signal processing methods for 5G, including linear precoding, hybrid precoding, MU-MIMO receiver processing, and other important approaches.
Section 8: Antenna Layouts
Description 8: Detail the design and arrangement of large antenna arrays for 5G systems, including uniform linear arrays, rectangular arrays, and circular arrays.
Section 9: 5G New Waveforms and Channel Access
Description 9: Describe the waveforms and multi-user access schemes proposed for 5G, emphasizing flexibility and various requirements to support diverse use cases.
Section 10: Trials, Test-Beds, and Deployment
Description 10: Provide an overview of current 5G prototypes, test-beds, experimental trials, and announced early commercial deployments.
Section 11: 5G Core Network and Cloud RAN Architectures
Description 11: Explain the significant challenges and innovations in the core network and Cloud RAN architectures for 5G, focusing on SDN, NFV, network slicing, and fronthaul solutions.
Section 12: Deployment
Description 12: Discuss the deployment challenges for 5G networks, including transport and core network issues, beamforming strategies, and the implications of different deployment architectures.
Section 13: SUMMARY AND CHALLENGES
Description 13: Summarize the progression towards 5G commercialization, the technical challenges ahead, and key questions relating to its deployment.
Short-term real-time traffic prediction methods: A survey
7
--- paper_title: Traffic Flow Forecast Survey paper_content: Short-term traffic flow forecasting is an important aspect of the ITS as traffic predication can alleviate congestion, which causes drivers to incur a longer traveling time and economical loses. In addition, traffic congestion increases the pollution and the fuel usage. Thus, it is one of the severe problems in Metropolitan areas. Further, in tunnels the forecasting may help scheduling the ventilation fans. This way, the ventilation cost might be decreased while the air quality increased. Additional aspect of traffic prediction is that it may enable the drivers to plan their departure time and traveling path, as they posses the predictive information. In this paper, we survey the different techniques used for traffic forecasting, the input data for these techniques, the output provided by them, as well as some general insights. --- paper_title: Adaptive Kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification paper_content: Short term traffic flow forecasting has received sustained attention for its ability to provide the anticipatory traffic condition required for proactive traffic control and management. Recently, a stochastic seasonal autoregressive integrated moving average plus generalized autoregressive conditional heteroscedasticity (SARIMA + GARCH) process has gained increasing notice for its ability to jointly generate traffic flow level prediction and associated prediction interval. Considering the need for real time processing, Kalman filters have been utilized to implement this SARIMA + GARCH structure. Since conventional Kalman filters assume constant process variances, adaptive Kalman filters that can update the process variances are investigated in this paper. Empirical comparisons using real world traffic flow data aggregated at 15-min interval showed that the adaptive Kalman filter approach can generate workable level forecasts and prediction intervals; in particular, the adaptive Kalman filter approach demonstrates improved adaptability when traffic is highly volatile. Sensitivity analyses show that the performance of the adaptive Kalman filter stabilizes with the increase of its memory size. Remarks are provided on improving the performance of short term traffic flow forecasting. --- paper_title: Microscopic Traffic Simulation with Intelligent Agents paper_content: The subject of microscopic traffic simulation has gained increasing significance in recent years. It enables the testing of traffic scenarios in the laboratory and the evaluation of changes in the infrastructure prior to their physical realization, which saves time and cost. Additionally, agents play an important role in artificial intelligence and are emerging in other fields of science as well, including microscopic simulation of traffic networks. Using agents, it is possible to simulate different driver characteristics and hence it enables a realistic simulation of human driving behaviour. In this book a suitable architecture for a microscopic traffic simulation with intelligent agents is developed and the necessary simulation parameters and components are discussed. Simulation parameters are a very important part of the simulation, since they are the factors that influence vehicle drivers. Fuzzy logic is used to model these parameters to assure a human-like flow of information and to enable human reasoning. 
On this basis, a conceptual architecture that represents the interrelation between the single simulation components is developed. --- paper_title: Active Traffic Management as a Tool for Addressing Traffic Congestion paper_content: Recurrent and non-recurrent congestion in urban areas continues to be a major concern due to its adverse impacts on delays, fuel consumption and pollution, driver frustration, and traffic safety. In the U.S., limited public funding for roadway expansion and improvement projects, coupled with continued growth in travel along congested urban freeway corridors, creates a pressing need for innovative congestion management approaches. --- paper_title: Traffic Flow Forecast Survey paper_content: Short-term traffic flow forecasting is an important aspect of the ITS as traffic predication can alleviate congestion, which causes drivers to incur a longer traveling time and economical loses. In addition, traffic congestion increases the pollution and the fuel usage. Thus, it is one of the severe problems in Metropolitan areas. Further, in tunnels the forecasting may help scheduling the ventilation fans. This way, the ventilation cost might be decreased while the air quality increased. Additional aspect of traffic prediction is that it may enable the drivers to plan their departure time and traveling path, as they posses the predictive information. In this paper, we survey the different techniques used for traffic forecasting, the input data for these techniques, the output provided by them, as well as some general insights. --- paper_title: Real-time road traffic prediction with spatio-temporal correlations paper_content: Real-time road traffic prediction is a fundamental capability needed to make use of advanced, smart transportation technologies. Both from the point of view of network operators as well as from the point of view of travelers wishing real-time route guidance, accurate short-term traffic prediction is a necessary first step. While techniques for short-term traffic prediction have existed for some time, emerging smart transportation technologies require the traffic prediction capability to be both fast and scalable to full urban networks. We present a method that has proven to be able to meet this challenge. The method presented provides predictions of speed and volume over 5-min intervals for up to 1 h in advance. --- paper_title: The Geography of Transport Systems paper_content: 1. Transportation and Geography 2. Transportation Systems and Networks 3. Economic and Spatial Structure of Transport Systems 4. Transportation Modes 5. Transportation Terminals 6. International and Regional Transportation 7. Urban Transportation 8. Transport and Environment 9. Transport Planning and Policy Conclusion: Issues and Challenges in Transport Geography. Glossary and Index --- paper_title: Microscopic Traffic Simulation with Intelligent Agents paper_content: The subject of microscopic traffic simulation has gained increasing significance in recent years. It enables the testing of traffic scenarios in the laboratory and the evaluation of changes in the infrastructure prior to their physical realization, which saves time and cost. Additionally, agents play an important role in artificial intelligence and are emerging in other fields of science as well, including microscopic simulation of traffic networks. Using agents, it is possible to simulate different driver characteristics and hence it enables a realistic simulation of human driving behaviour. 
In this book a suitable architecture for a microscopic traffic simulation with intelligent agents is developed and the necessary simulation parameters and components are discussed. Simulation parameters are a very important part of the simulation, since they are the factors that influence vehicle drivers. Fuzzy logic is used to model these parameters to assure a human-like flow of information and to enable human reasoning. On this basis, a conceptual architecture that represents the interrelation between the single simulation components is developed. --- paper_title: A Hidden Markov Model for short term prediction of traffic conditions on freeways paper_content: Accurate short-term prediction of traffic conditions on freeways and major arterials has recently become increasingly important because of its vital role in the basic traffic management functions and trip decision making processes. Given the dynamic and stochastic nature of freeway traffic, this study proposes a stochastic approach, Hidden Markov Model (HMM), for short-term freeway traffic prediction during peak periods. The data used in the study was gathered from real-time traffic monitoring devices over six years on a 60.8-km (38-mile) corridor of Interstate-4 in Orlando, Florida. The HMM defines traffic states in a two-dimensional space using first-order statistics (Mean) and second-order statistics (Contrast) of speed observations. The dynamic changes of freeway traffic conditions are addressed with state transition probabilities. For a sequence of traffic speed observations, HMMs estimate the most likely sequence of traffic states. The model performance was evaluated using prediction errors, which are measured by the relative length of the distance between the predicted state and the observed state in the two-dimensional space. Reasonable prediction errors lower than or around 10% were obtained from HMMs. Also, the model performance was not remarkably affected by location, travel direction, and peak period time. The HMMs were compared to two naive predication methods. The results showed that HMMs perform better and are more robust than the naive methods. Therefore, the study concludes that the HMM approach was successful in modeling short-term traffic condition prediction during peak periods and in accounting for the inherent stochastic nature of traffic conditions. --- paper_title: Traffic Flow Forecast Survey paper_content: Short-term traffic flow forecasting is an important aspect of the ITS as traffic predication can alleviate congestion, which causes drivers to incur a longer traveling time and economical loses. In addition, traffic congestion increases the pollution and the fuel usage. Thus, it is one of the severe problems in Metropolitan areas. Further, in tunnels the forecasting may help scheduling the ventilation fans. This way, the ventilation cost might be decreased while the air quality increased. Additional aspect of traffic prediction is that it may enable the drivers to plan their departure time and traveling path, as they posses the predictive information. In this paper, we survey the different techniques used for traffic forecasting, the input data for these techniques, the output provided by them, as well as some general insights. --- paper_title: Pattern-based short-term prediction of urban congestion propagation and automatic response paper_content: This paper presents a method for the online prediction of urban congestion patterns including their spatio-temporal propagation based on historic state data. 
Traffic state data for each link and time interval within the Berlin street network comes from a dynamic route choice and traffic assignment model. From extensive historic traffic state data congestion patterns are generated and classified in an appropriate manner. Based on this analysis, a method was developed to predict the propagation of congestion within the network based on pattern recognition. Significant parts of the network-wide prognosis are selected and sent as messages to the operator of the traffic management centre. A further step identifies actuators at in- and outflow areas of current and predicted congestion in order to increase the outflow from and decrease the inflow to the congested area. Messages for variable message signs are generated automatically and displayed to the operator with other appropriate measures. The work presented was carried out within the German research project IQ Mobility, which was funded by the initiative Verkehrsmanagement 2010 (Traffic Management 2010). --- paper_title: Real-time travel time prediction using particle filtering with a non-explicit state-transition model paper_content: The research presented in this paper develops a particle filter approach for the real-time short to medium-term travel time prediction using real-time and historical data. Given the challenges in defining the particle filter time update process, the proposed algorithm selects particles from a historical database and propagates particles using historical data sequences as opposed to using a state-transition model. A partial resampling strategy is then developed to address the degeneracy problem by replacing invalid or low weighted particles with historical data that provide similar data sequences to real-time traffic measurements. As a result, each particle generates a predicted travel time with a corresponding weight that represents the level of confidence in the prediction. Consequently, the prediction can produce a distribution of travel times by aggregating all weighted particles. A 95-mile freeway stretch from Richmond to Virginia Beach along I-64 and I-264 is used to test the proposed algorithm. Both the absolute and relative prediction errors using the leave-one-out cross validation concept demonstrate that the proposed method produces the least deviation from ground truth travel times, compared to instantaneous travel times, two Kalman filter algorithms and a K nearest neighbor (k-NN) method. Moreover, the maximum prediction error for the proposed method is the least of all the algorithms and maintains a stable performance for all test days. The confidence boundaries of the predicted travel times demonstrate that the proposed approach provides good accuracy in predicting travel time reliability. Lastly, the fast computation time and online processing ensure the method can be used in real-time applications. --- paper_title: DynaMIT 2.0: The Next Generation Real-Time Dynamic Traffic Assignment System paper_content: Real-time transportation models are proven to be highly useful for traffic management and generation of traveler guidance information. The current state of the practice in real-time transportation modeling is represented by DynaMIT, which generates consistent anticipatory information about the future state of the transportation network based on current real-time data. DynaMIT has been effectively applied across a variety of locations and sensor configurations. 
The next generation of real-time models will be multi-modal and include representation of dynamic pricing and commercial vehicles. To support this, these models will be based on activity-based demand and will make use of the latest software design strategies, enhanced data availability and personal/vehicle connectivity. --- paper_title: Integration of Activity-Based Modeling and Dynamic Traffic Assignment paper_content: The traditional trip-based approach to transportation modeling has been used for the past 30 years. Because of limitations of traditional planning for short-term policy analysis, researchers have explored alternative paradigms for incorporating more behavioral realism in planning methodologies. On the demand side, activity-based approaches have evolved as an alternative to traditional trip-based transportation demand forecasting. On the supply side, dynamic traffic assignment models have been developed as an alternative to static assignment procedures. Much of the research effort in activity-based approaches (the demand side) and dynamic traffic assignment techniques (the supply side) has been undertaken relatively independently. To maximize benefits from these advanced methodologies, it is essential to combine them through a unified framework. The objective of this paper is to develop a conceptual framework and explore practical integration issues for combining the two streams of research. Technical, com... --- paper_title: AN EVALUATION OF TRAFFIC SIMULATION MODELS FOR SUPPORTING ITS DEVELOPMENT paper_content: In this report, the authors evaluate existing traffic simulation models in order to identify those models that can be potentially applied in Intelligent Transportation Systems (ITS) equipped networks. The models are categorized according to type (macroscopic microscopic, or mesoscopic), as well as functionality (highway signal, integrated). The evaluation is conducted through two steps: initial screening and in-depth evaluation. From the research, it is concluded that the CORSIM and INTEGRATION models appear to have the highest probability of success in real-world applications. The authors also find that by adding more calibration and validation in the U.S., the AIMSUN 2 and the PARAMICS models will be brought to the forefront in the near term for use with ITS applications. --- paper_title: Calibration and validation of TRANSIMS microsimulator for an urban arterial network paper_content: TRANSIMS, a travel demand modeling software package initially developed by the Los Alamos National Laboratory (LANL), enables modeling of individual activities and provides second-by-second simulation results on vehicular movements. TRANSIMS has been applied more than a decade, but calibration and validation of TRANSIMS Microsimulator have not received proper attention from transportation engineering community. This paper presents a case study of a TRANSIMS Microsimulator calibration and validation using an experimental design approach. An urban arterial network consisted of four signalized intersections was calibrated against field measured travel times and traffic count data. The distributions of travel times and traffic count obtained from the multiple replications of the default parameter values were not able to replicate field conditions, while the proposed approach did well. 
Even though this case study used a small corridor network in urban area, the calibration and validation procedure could be extended to a larger scale network in TRANSIMS modeling as well as transportation planning applications. --- paper_title: A cellular automaton model for freeway traffic paper_content: We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stop- waves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained. --- paper_title: Parallel DYNEMO : meso-scopic traffic flow simulation on large networks paper_content: Traffic flow simulation is recently being applied to types of studies which require at the same time a large study area and models deeper than macroscopic assignments. Examples are investigations of the effects dynamic route guidance over large networks and the coupling with environmental models on large domains. The computational complexity can be reduced first by moving from microscopic to mesoscopic traffic flow models of the Payne-Cremer type. Where still more computing power is needed, parallelisation of the model may offer a solution. This paper describes a particular mesoscopic traffic flow model and its parallelisation undertaken in the ESPRIT project SIMTRAP. A new two step decomposition strategy is detailed along with its implementation using the message passing model PVM. The resulting speed-up is reported, drawing on both theoretical considerations and on measurements taken in the SIMTRAP demonstrator applications. The paper concludes with lessons learned from the experiments and directions for future work. --- paper_title: A Review of Models of Urban Traffic Networks (With Particular reference to the Requirements for Modelling Dynamic Route Guidance Systems) paper_content: This paper reviews a number of existing models of urban traffic networks developed in Europe and North America. The primary intention is to evaluate the various models with regard to their suitability to simulate traffic conditions and driver behavior when a dynamic route guidance system is in operation. --- paper_title: DYNEMO: A MODEL FOR THE SIMULATION OF TRAFFIC FLOW IN MOTORWAY NETWORKS paper_content: The paper presents the simulation model dynemo which has been designed for the development, evaluation and optimization of traffic control systems for motorway networks. A new traffic flow model included with the simulation package combines the advantages of a macroscopic model (computational simplicity) with the advantages of a microscopic model (output statistics relating to individual vehicles). For each stretch in the network, the model needs as input a relationship between traffic density and mean speed and the distribution of free flow speeds. The new traffic flow model is validated by use of an example. The simulation package is implemented on a 16-bit microcomputer. A real network with a traffic control system has been simulated with the model. (Author/TRRL) --- paper_title: VISUM-ONLINE - TRAFFIC MANAGEMENT FOR THE EXPO 2000 BASED ON A TRAFFIC MODEL paper_content: This year the World Exposition is being held in Hanover, Germany. A number of measures were proposed to handle the expected surplus traffic. This article describes the core system for the traffic control and traffic information centre as it is being implemented in Hanover. 
The objectives of the traffic control centre include providing reliable prediction of travel times, alternative routing advice, mode choice information, parking guidance as well as reducing disruption caused by incidents. These objectives are tackled by linking various traffic data measurements techniques, a number of ITS traffic control measures and a realtime traffic simulation model. A simulation model called VISUM-online considers all available historical and present traffic data of the Hanover region and uses traffic flow models to predict the current and future situation. The Path Flow Estimator is used to update historical origin-destination-matrices using current traffic counts and assignment modelling. For the covering abstract see ITRD E114174. --- paper_title: An Improved K-nearest Neighbor Model for Short-term Traffic Flow Prediction paper_content: Abstract In order to accurately predict the short-term traffic flow, this paper presents a k-nearest neighbor (KNN) model. Short-term urban expressway flow prediction system based on k-NN is established in three aspects: the historical database, the search mechanism and algorithm parameters, and the predication plan. At first, preprocess the original data and then standardized the effective data in order to avoid the magnitude difference of the sample data and improve the prediction accuracy. At last, a short-term traffic prediction based on k-NN nonparametric regression model is developed in the Matlab platform. Utilizing the Shanghai urban expressway section measured traffic flow data, the comparison of average and weighted k-NN nonparametric regression model is discussed and the reliability of the predicting result is analyzed. Results show that the accuracy of the proposed method is over 90 percent and it also rereads that the feasibility of the methods is used in short-term traffic flow prediction. --- paper_title: A Hidden Markov Model for short term prediction of traffic conditions on freeways paper_content: Accurate short-term prediction of traffic conditions on freeways and major arterials has recently become increasingly important because of its vital role in the basic traffic management functions and trip decision making processes. Given the dynamic and stochastic nature of freeway traffic, this study proposes a stochastic approach, Hidden Markov Model (HMM), for short-term freeway traffic prediction during peak periods. The data used in the study was gathered from real-time traffic monitoring devices over six years on a 60.8-km (38-mile) corridor of Interstate-4 in Orlando, Florida. The HMM defines traffic states in a two-dimensional space using first-order statistics (Mean) and second-order statistics (Contrast) of speed observations. The dynamic changes of freeway traffic conditions are addressed with state transition probabilities. For a sequence of traffic speed observations, HMMs estimate the most likely sequence of traffic states. The model performance was evaluated using prediction errors, which are measured by the relative length of the distance between the predicted state and the observed state in the two-dimensional space. Reasonable prediction errors lower than or around 10% were obtained from HMMs. Also, the model performance was not remarkably affected by location, travel direction, and peak period time. The HMMs were compared to two naive predication methods. The results showed that HMMs perform better and are more robust than the naive methods. 
Therefore, the study concludes that the HMM approach was successful in modeling short-term traffic condition prediction during peak periods and in accounting for the inherent stochastic nature of traffic conditions. --- paper_title: Freeway traffic estimation in Beijing based on particle filter paper_content: Short-term traffic flow data is characterized by rapid and dramatic fluctuations. It reflects the nature of the frequent congestion in the lane, which shows a strong nonlinear feature. Traffic state estimation based on the data gained by electronic sensors is critical for much intelligent traffic management and the traffic control. In this paper, a solution to freeway traffic estimation in Beijing is proposed using a particle filter, based on macroscopic traffic flow model, which estimates both traffic density and speed. Particle filter is a nonlinear prediction method, which has obvious advantages for traffic flows prediction. However, with the increase of sampling period, the volatility of the traffic state curve will be much dramatic. Therefore, the prediction accuracy will be affected and difficulty of forecasting is raised. In this paper, particle filter model is applied to estimate the short-term traffic flow. Numerical study is conducted based on the Beijing freeway data with the sampling period of 2 min. The relatively high accuracy of the results indicates the superiority of the proposed model. --- paper_title: Short-Term Freeway Traffic Flow Prediction: Bayesian Combined Neural Network Approach paper_content: Short-term traffic flow prediction has long been regarded as a critical concern for intelligent transportation systems. On the basis of many existing prediction models, each having good performance only in a particular period, an improved approach is to combine these single predictors together for prediction in a span of periods. In this paper, a neural network model is introduced that combines the prediction from single neural network predictors according to an adaptive and heuristic credit assignment algorithm based on the theory of conditional probability and Bayes' rule. Two single predictors, i.e., the back propagation and the radial basis function neural networks are designed and combined linearly into a Bayesian combined neural network model. The credit value for each predictor in the combined model is calculated according to the proposed credit assignment algorithm and largely depends on the accumulative prediction perfor- mance of these predictors during the previous prediction intervals. For experimental test, two data sets comprising traffic flow rates in 15-min time intervals have been collected from Singapore's Ayer Rajah Expressway. One data set is used to train the two single neural networks and the other to test and compare the performances between the combined and singular models. Three indices, i.e., the mean absolute percentage error, the variance of absolute percentage error, and the probability of percentage error, are employed to compare the forecasting performance. It is found that most of the time, the combined model outperforms the singular predictors. More importantly, for a given time period, it is the role of this newly proposed model to track the predictors' performance online, so as to always select and combine the best-performing predictors for prediction. 
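The credit-assignment idea in the Bayesian combined neural network abstract above can be sketched with a generic Bayesian weighting of two point forecasts: each predictor's weight is its prior credit multiplied by a Gaussian likelihood of its latest error, then renormalized. The two component predictors here (last observed value and a short moving average) and the fixed error scale sigma are illustrative stand-ins for the paper's back-propagation and radial basis function networks and its exact credit-assignment algorithm.

```python
import numpy as np

def bayesian_combined_forecast(series, window=4, sigma=10.0):
    """Combine two simple predictors, re-weighting them at every step by
    Bayes' rule: posterior credit = prior credit x Gaussian likelihood of
    the latest prediction error."""
    w = np.array([0.5, 0.5])                             # prior credit per predictor
    out = []
    for t in range(window, len(series)):
        preds = np.array([series[t - 1],                 # naive last-value predictor
                          series[t - window:t].mean()])  # short moving average
        out.append(float(w @ preds))                     # combined forecast
        lik = np.exp(-0.5 * ((series[t] - preds) / sigma) ** 2)
        w = w * lik
        w = w / w.sum()                                  # renormalize the credits
    return np.array(out)

t = np.arange(96)
flow = 300 + 80 * np.sin(2 * np.pi * t / 48) + np.random.randn(96) * 15
print(bayesian_combined_forecast(flow)[:5].round(1))
```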
--- paper_title: Real-time travel time prediction using particle filtering with a non-explicit state-transition model paper_content: The research presented in this paper develops a particle filter approach for the real-time short to medium-term travel time prediction using real-time and historical data. Given the challenges in defining the particle filter time update process, the proposed algorithm selects particles from a historical database and propagates particles using historical data sequences as opposed to using a state-transition model. A partial resampling strategy is then developed to address the degeneracy problem by replacing invalid or low weighted particles with historical data that provide similar data sequences to real-time traffic measurements. As a result, each particle generates a predicted travel time with a corresponding weight that represents the level of confidence in the prediction. Consequently, the prediction can produce a distribution of travel times by aggregating all weighted particles. A 95-mile freeway stretch from Richmond to Virginia Beach along I-64 and I-264 is used to test the proposed algorithm. Both the absolute and relative prediction errors using the leave-one-out cross validation concept demonstrate that the proposed method produces the least deviation from ground truth travel times, compared to instantaneous travel times, two Kalman filter algorithms and a K nearest neighbor (k-NN) method. Moreover, the maximum prediction error for the proposed method is the least of all the algorithms and maintains a stable performance for all test days. The confidence boundaries of the predicted travel times demonstrate that the proposed approach provides good accuracy in predicting travel time reliability. Lastly, the fast computation time and online processing ensure the method can be used in real-time applications. --- paper_title: Adaptive Kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification paper_content: Short term traffic flow forecasting has received sustained attention for its ability to provide the anticipatory traffic condition required for proactive traffic control and management. Recently, a stochastic seasonal autoregressive integrated moving average plus generalized autoregressive conditional heteroscedasticity (SARIMA + GARCH) process has gained increasing notice for its ability to jointly generate traffic flow level prediction and associated prediction interval. Considering the need for real time processing, Kalman filters have been utilized to implement this SARIMA + GARCH structure. Since conventional Kalman filters assume constant process variances, adaptive Kalman filters that can update the process variances are investigated in this paper. Empirical comparisons using real world traffic flow data aggregated at 15-min interval showed that the adaptive Kalman filter approach can generate workable level forecasts and prediction intervals; in particular, the adaptive Kalman filter approach demonstrates improved adaptability when traffic is highly volatile. Sensitivity analyses show that the performance of the adaptive Kalman filter stabilizes with the increase of its memory size. Remarks are provided on improving the performance of short term traffic flow forecasting. 
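A minimal sketch of the adaptive Kalman filtering idea in the abstract above: a scalar filter tracks the flow level with a random-walk state model, and the measurement-noise variance is re-estimated from the innovations kept in a sliding memory window. The random-walk model, the particular innovation-based variance update, and the synthetic data are assumptions made for illustration; the paper's filter is built around a SARIMA + GARCH structure instead.

```python
import numpy as np

def adaptive_kalman_flow(z, memory=12, q=25.0, r0=100.0):
    """Scalar Kalman filter for a random-walk flow level whose measurement-noise
    variance R is re-estimated from the last `memory` innovations."""
    x, p, r = float(z[0]), 1.0, r0
    innovations, estimates = [], []
    for zk in z[1:]:
        p_prior = p + q                          # time update (random-walk model)
        nu = zk - x                              # innovation
        innovations.append(nu)
        recent = np.array(innovations[-memory:])
        # Innovation-based adaptation: R ~ mean(innovation^2) - prior variance.
        r = max(float(np.mean(recent ** 2)) - p_prior, 1e-3)
        k = p_prior / (p_prior + r)              # Kalman gain
        x = x + k * nu                           # measurement update
        p = (1 - k) * p_prior
        estimates.append(x)
    return np.array(estimates)

level = 400 + np.cumsum(np.random.randn(60) * 5)     # slowly drifting true flow
obs = level + np.random.randn(60) * 12               # noisy 15-min flow counts
print(adaptive_kalman_flow(obs)[:5].round(1))
```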
--- paper_title: An Aggregation Approach to Short-Term Traffic Flow Prediction paper_content: In this paper, an aggregation approach is proposed for traffic flow prediction that is based on the moving average (MA), exponential smoothing (ES), autoregressive MA (ARIMA), and neural network (NN) models. The aggregation approach assembles information from relevant time series. The source time series is the traffic flow volume that is collected 24 h/day over several years. The three relevant time series are a weekly similarity time series, a daily similarity time series, and an hourly time series, which can be directly generated from the source time series. The MA, ES, and ARIMA models are selected to give predictions of the three relevant time series. The predictions that result from the different models are used as the basis of the NN in the aggregation stage. The output of the trained NN serves as the final prediction. To assess the performance of the different models, the naive, ARIMA, nonparametric regression, NN, and data aggregation (DA) models are applied to the prediction of a real vehicle traffic flow, from which data have been collected at a data-collection point that is located on National Highway 107, Guangzhou, Guangdong, China. The outcome suggests that the DA model obtains a more accurate forecast than any individual model alone. The aggregation strategy can offer substantial benefits in terms of improving operational forecasting. --- paper_title: Real-time road traffic prediction with spatio-temporal correlations paper_content: Real-time road traffic prediction is a fundamental capability needed to make use of advanced, smart transportation technologies. Both from the point of view of network operators as well as from the point of view of travelers wishing real-time route guidance, accurate short-term traffic prediction is a necessary first step. While techniques for short-term traffic prediction have existed for some time, emerging smart transportation technologies require the traffic prediction capability to be both fast and scalable to full urban networks. We present a method that has proven to be able to meet this challenge. The method presented provides predictions of speed and volume over 5-min intervals for up to 1 h in advance. --- paper_title: THE IMPACT OF TRAFFIC INFORMATION ON DRIVERS' ROUTE CHOICE-USING COMPETENCE SETS ANALYSIS paper_content: Generally speaking, drivers' route choice is a fuzzy problem. However, if drivers' habitual domain becomes stable and without significant stimuli, route choice becomes a routine problem. Route choice can become fuzzy again if drivers perceive information stimuli. Intuitively, traffic information should help drivers to reach destination in an efficient way, but the abundant and complex information could overwhelm the drivers. In this work, a different and novel approach to complement the driver route choice decision models is exploited. The concept of habitual domains and competence sets proposed by Yu in 1980 is applied to the route choice problem. The effect of traffic information on route choice is isolated to analysis the behavior of route choice decision. Performance indexes of route choice decision are developed to help the drivers or the traffic information providers in expanding their competence sets to fully address the needs of route choice decision. 
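The aggregation strategy in the "An Aggregation Approach to Short-Term Traffic Flow Prediction" abstract above (separate forecasts of the weekly-similarity, daily-similarity, and hourly series, fused into one final prediction) can be sketched as follows. Ordinary least squares stands in for the paper's neural-network combiner, a crude trend extrapolation stands in for the ARIMA model, and the hourly volumes, lag choices, and window lengths are synthetic and illustrative.

```python
import numpy as np

def relevant_series(v, t, lag, n):
    """Values of v at t-lag, t-2*lag, ..., most recent first."""
    return np.array([v[t - i * lag] for i in range(1, n + 1)])

def exp_smooth(x_newest_first, alpha=0.5):
    s = x_newest_first[-1]                     # start from the oldest value
    for val in x_newest_first[::-1][1:]:
        s = alpha * val + (1 - alpha) * s
    return s

def base_forecasts(v, t):
    weekly = relevant_series(v, t, 168, 4)     # same hour, past 4 weeks
    daily = relevant_series(v, t, 24, 5)       # same hour, past 5 days
    hourly = relevant_series(v, t, 1, 6)       # previous 6 hours
    return np.array([weekly.mean(),            # moving average
                     exp_smooth(daily),        # exponential smoothing
                     2 * hourly[0] - hourly[1]])   # crude trend, in place of ARIMA

# Synthetic hourly volumes with a daily cycle, six weeks long.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 6)
volume = 300 + 150 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, hours.size)

# Least-squares combiner fitted on week 5, then used to predict an hour in week 6.
train_t = np.arange(24 * 7 * 4, 24 * 7 * 5)
X = np.array([base_forecasts(volume, t) for t in train_t])
w, *_ = np.linalg.lstsq(X, volume[train_t], rcond=None)
t_pred = 24 * 7 * 5 + 8
print(round(float(base_forecasts(volume, t_pred) @ w), 1),
      round(float(volume[t_pred]), 1))
```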
--- paper_title: Short-Term Freeway Traffic Flow Prediction: Bayesian Combined Neural Network Approach paper_content: Short-term traffic flow prediction has long been regarded as a critical concern for intelligent transportation systems. On the basis of many existing prediction models, each having good performance only in a particular period, an improved approach is to combine these single predictors together for prediction in a span of periods. In this paper, a neural network model is introduced that combines the prediction from single neural network predictors according to an adaptive and heuristic credit assignment algorithm based on the theory of conditional probability and Bayes' rule. Two single predictors, i.e., the back propagation and the radial basis function neural networks are designed and combined linearly into a Bayesian combined neural network model. The credit value for each predictor in the combined model is calculated according to the proposed credit assignment algorithm and largely depends on the accumulative prediction performance of these predictors during the previous prediction intervals. For experimental test, two data sets comprising traffic flow rates in 15-min time intervals have been collected from Singapore's Ayer Rajah Expressway. One data set is used to train the two single neural networks and the other to test and compare the performances between the combined and singular models. Three indices, i.e., the mean absolute percentage error, the variance of absolute percentage error, and the probability of percentage error, are employed to compare the forecasting performance. It is found that most of the time, the combined model outperforms the singular predictors. More importantly, for a given time period, it is the role of this newly proposed model to track the predictors' performance online, so as to always select and combine the best-performing predictors for prediction. --- paper_title: Adaptive Kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification paper_content: Short term traffic flow forecasting has received sustained attention for its ability to provide the anticipatory traffic condition required for proactive traffic control and management. Recently, a stochastic seasonal autoregressive integrated moving average plus generalized autoregressive conditional heteroscedasticity (SARIMA + GARCH) process has gained increasing notice for its ability to jointly generate traffic flow level prediction and associated prediction interval. Considering the need for real time processing, Kalman filters have been utilized to implement this SARIMA + GARCH structure. Since conventional Kalman filters assume constant process variances, adaptive Kalman filters that can update the process variances are investigated in this paper. Empirical comparisons using real world traffic flow data aggregated at 15-min interval showed that the adaptive Kalman filter approach can generate workable level forecasts and prediction intervals; in particular, the adaptive Kalman filter approach demonstrates improved adaptability when traffic is highly volatile. Sensitivity analyses show that the performance of the adaptive Kalman filter stabilizes with the increase of its memory size. Remarks are provided on improving the performance of short term traffic flow forecasting. --- paper_title: Real-time travel time prediction using particle filtering with a non-explicit state-transition model paper_content: The research presented in this paper develops a particle filter approach for the real-time short to medium-term travel time prediction using real-time and historical data. Given the challenges in defining the particle filter time update process, the proposed algorithm selects particles from a historical database and propagates particles using historical data sequences as opposed to using a state-transition model. A partial resampling strategy is then developed to address the degeneracy problem by replacing invalid or low weighted particles with historical data that provide similar data sequences to real-time traffic measurements. As a result, each particle generates a predicted travel time with a corresponding weight that represents the level of confidence in the prediction. Consequently, the prediction can produce a distribution of travel times by aggregating all weighted particles. A 95-mile freeway stretch from Richmond to Virginia Beach along I-64 and I-264 is used to test the proposed algorithm. 
Both the absolute and relative prediction errors using the leave-one-out cross validation concept demonstrate that the proposed method produces the least deviation from ground truth travel times, compared to instantaneous travel times, two Kalman filter algorithms and a K nearest neighbor (k-NN) method. Moreover, the maximum prediction error for the proposed method is the least of all the algorithms and maintains a stable performance for all test days. The confidence boundaries of the predicted travel times demonstrate that the proposed approach provides good accuracy in predicting travel time reliability. Lastly, the fast computation time and online processing ensure the method can be used in real-time applications. --- paper_title: Freeway traffic estimation in Beijing based on particle filter paper_content: Short-term traffic flow data is characterized by rapid and dramatic fluctuations. It reflects the nature of the frequent congestion in the lane, which shows a strong nonlinear feature. Traffic state estimation based on data gained by electronic sensors is critical for many intelligent traffic management and control applications. In this paper, a solution to freeway traffic estimation in Beijing is proposed using a particle filter, based on a macroscopic traffic flow model, which estimates both traffic density and speed. The particle filter is a nonlinear prediction method, which has clear advantages for traffic flow prediction. However, as the sampling period increases, the traffic state curve becomes much more volatile; prediction accuracy is therefore affected and forecasting becomes more difficult. In this paper, the particle filter model is applied to estimate the short-term traffic flow. A numerical study is conducted based on Beijing freeway data with a sampling period of 2 min. The relatively high accuracy of the results indicates the superiority of the proposed model. ---
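To make the particle-filter idea in the studies above more concrete, here is a minimal bootstrap particle filter sketch. It assumes a random-walk model for link speed and Gaussian measurement noise, which is far simpler than the historical-data propagation or the macroscopic traffic-flow model used in those papers; the detector readings and noise levels are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=2.0, obs_std=5.0):
    # State: mean link speed (km/h), assumed to follow a random walk.
    particles = rng.normal(observations[0], obs_std, size=n_particles)
    estimates = []
    for z in observations:
        # Time update: propagate particles through the assumed random walk.
        particles = particles + rng.normal(0.0, process_std, size=n_particles)
        # Measurement update: weight particles by the Gaussian likelihood of z.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Multinomial resampling to counter particle degeneracy.
        particles = particles[rng.choice(n_particles, size=n_particles, p=weights)]
    return estimates

# Hypothetical noisy speed measurements from a loop detector (km/h).
print(bootstrap_particle_filter([88, 85, 80, 72, 60, 55, 58, 65]))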
Title: Short-term real-time traffic prediction methods: A survey Section 1: INTRODUCTION Description 1: Introduce the background and motivation for short-term real-time traffic prediction methods and outline the contributions of the paper. Section 2: PRELIMINARIES Description 2: Discuss important concepts related to traffic prediction and contextualize forecasting systems. Section 3: CHARACTERIZATION OF PREDICTIVE METHODS Description 3: Detail the characteristics relevant for estimation in real-time systems and describe the application context and motivation, including the collection of real-time data, prediction targets, and measuring prediction accuracy. Section 4: MODEL DRIVEN Description 4: Present and analyze model-driven approaches and simulators used for traffic prediction. Section 5: DATA DRIVEN Description 5: Describe recent data-driven approaches to short-term traffic prediction. Section 6: DISCUSSION Description 6: Summarize the main findings, observations, and provide insights and future directions for developing short-term traffic prediction systems. Section 7: CONCLUSION Description 7: Summarize the literature review and related work about traffic prediction models, comparing model-driven and data-driven approaches, and suggest areas for future work.
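Since the outline above includes a step on measuring prediction accuracy, and several of the cited studies report indices such as the mean absolute percentage error, a small sketch of two common error metrics may be useful. The observed and predicted values below are invented purely for illustration.

import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

def rmse(actual, predicted):
    # Root mean squared error, in the units of the series.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

observed = [400, 420, 415, 390]   # hypothetical observed flows
predicted = [390, 430, 405, 400]  # hypothetical model outputs
print(mape(observed, predicted), rmse(observed, predicted))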
Software development outsourcing relationships trust: a systematic literature review protocol
8
--- paper_title: Establishing and maintaining trust in software outsourcing relationships: An empirical investigation paper_content: Our research objective is to understand software outsourcing practitioners' perceptions of the role of trust in managing client-vendor relationships and the factors that are critical to trust in off-shore software outsourcing relationships. Participants were 12 Vietnamese software development practitioners developing software for Far Eastern, European, and American clients. They identified that cultural understanding, creditability, capabilities, and personal visits are important factors in gaining the initial trust of a client, while cultural understanding, communication strategies, contract conformance, and timely delivery are vital factors in maintaining that trust. We contrast Vietnamese and Indian practitioners' views on factors affecting trust relationships. --- paper_title: Dynamic nature of trust in virtual teams paper_content: We empirically examine the dynamic nature of trust and the differences between high- and low-performing virtual teams in the changing patterns in cognition- and affect-based trust over time (early, middle, and late stages of project). Using data from 36 four-person MBA student teams from six universities competing in a web-based business simulation game over an 8-week period, we found that both high- and low-performing teams started with similar levels of trust in both cognitive and affective dimensions. However, high-performing teams were better at developing and maintaining the trust level throughout the project life. Moreover, virtual teams relied more on a cognitive than an affective element of trust. These findings provide a preliminary step toward understanding the dynamic nature and relative importance of cognition- and affect-based trust over time. --- paper_title: Effects of offshore outsourcing of information technology work on client project management paper_content: Purpose – While strategic outsourcing decisions are crafted by senior executives, they are executed by middle managers and staff who may not share the vision or enthusiasm of their senior leadership team. The purpose of this paper is to provide a deep understanding of the effects of outsourcing on one of those stakeholder groups – the client project managers – responsible for the implementation of outsourcing strategies, and to identify practices to better empower and enable them. Design/methodology/approach – Interviews with 67 client project managers in 25 organizations responsible for integrating suppliers into project teams. Findings – Client project managers report 27 effects of outsourcing on their roles, including six positive effects and 21 negative effects. Practical implications – Senior executives who implemented the following practices had more success with their outsourcing decisions: provide enough resources to implement the sourcing strategy, be willing to change internal work practices, build... --- paper_title: Outsourcing Decisions and Models - Some Practical Considerations for Large Organizations paper_content: Outsourcing has recently spurred broad discussions due to the relatively high failure rate of outsourced activities. To analyze how organizations can increase their success rate of outsourcing activities, the authors take a two-prong approach to the outsourcing decision and execution process, covering the "why" and "how to" outsource. 
To determine the optimal setup, the authors introduce six outsourcing dimensions, which trigger the decision process and the subsequent procurement and execution processes. Strategic and operational considerations as well as risk implications are further elaborated. --- paper_title: Trust in Software Outsourcing Relationships: An Analysis of Vietnamese Practitioners' Views paper_content: Trust is considered one of the most important factors for successfully managing software outsourcing relationships. However, there is a lack of research into factors that are considered important in establishing and maintaining trust between clients and vendors. The goal of this research is to gain an understanding of vendors' perceptions of the importance of factors that are critical to the establishment and maintenance of trust in software outsourcing projects in Vietnam. We used a multiple case study methodology to guide our research and in-depth interviews to collect qualitative data. The participants of the study were 12 Vietnamese software development practitioners drawn from 8 companies that have been developing software for offshore clients. Vendor companies identified that cultural understanding, creditability, capabilities, and personal visits are important factors in gaining the initial trust of a client, while cultural understanding, communication strategies, contract conformance, and timely delivery are vital factors in maintaining that trust. We also identify similarities and differences between Vietnamese and Indian practitioners' views on factors affecting trust relationships. --- paper_title: Critical Success Factors for Offshore Software Development Outsourcing Vendors: A Systematic Literature Review paper_content: CONTEXT – Offshore software development outsourcing is a modern business strategy for producing high quality software at low cost. OBJECTIVE – To identify various Critical Success Factors (CSFs) that have a positive impact on software outsourcing clients in the selection process of offshore software development outsourcing vendors. METHOD – We have performed a Systematic Literature Review process for the identification of factors in the selection process of offshore software development outsourcing vendors. 
RESULTS – We have identified factors ‘cost-saving’, ‘skilled human resource’, ‘appropriate infrastructure’ and ‘quality of product and services’ that are generally considered important by the outsourcing clients. The results also reveal the similarities and differences in the factors identified in different continents. CONCLUSIONS – Cost-saving should not be considered as the only prime factor in the selection process of software development outsourcing vendors. Vendors should have to address other factors in order to compete in the offshore outsourcing business. ---
Title: Software Development Outsourcing Relationships Trust: A Systematic Literature Review Protocol Section 1: INTRODUCTION Description 1: Introduce the importance of trust in managing software development outsourcing relationships and outline the main research questions. Section 2: BACKGROUND Description 2: Provide context regarding software development outsourcing practices and highlight previous research in the area, focusing on the lack of systematic literature reviews on trust. Section 3: SYSTEMATIC LITERATURE REVIEW PROTOCOL FOR SOFTWARE DEVELOPMENT OUTSOURCING TRUST Description 3: Describe the systematic review process, including planning, conducting, and reporting phases. Section 4: Search Strategy Description 4: Explain the strategy for identifying search terms, resources to be searched, and search constraints and validation. Section 5: Publication Selection Description 5: Specify the inclusion and exclusion criteria for selecting relevant literature and detail the process for selecting primary sources. Section 6: Publication Quality Assessment Description 6: Outline the criteria and process for assessing the quality of selected publications. Section 7: Data Extraction Strategy Description 7: Describe the process for extracting data from the selected literature, including the role of primary and secondary reviewers. Section 8: Data Synthesis Description 8: Detail the methods for synthesizing data to address the research questions, including creating summary tables for factors and their frequencies and percentages.
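The data synthesis step described above (summary tables of factors with their frequencies and percentages) can be illustrated with a few lines of Python. The factor names and the per-study lists below are hypothetical placeholders, not data from the protocol or from any primary study.

from collections import Counter

# Hypothetical factors extracted from three primary studies (one list per study).
factors_per_study = [
    ["communication", "cultural understanding", "contract conformance"],
    ["communication", "timely delivery"],
    ["cultural understanding", "communication", "personal visits"],
]

counts = Counter(f for study in factors_per_study for f in study)
n_studies = len(factors_per_study)
for factor, freq in counts.most_common():
    print(f"{factor}: {freq}/{n_studies} studies ({100.0 * freq / n_studies:.0f}%)")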
A review of modularization techniques in artificial neural networks
9
--- paper_title: Gradient-Based Learning Applied to Document Recognition paper_content: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day. --- paper_title: The columnar organization of the neocortex paper_content: The modular organization of nervous systems is a widely documented principle of design for both vertebrate and invertebrate brains of which the columnar organization of the neocortex is an example. The classical cytoarchitectural areas of the neocortex are composed of smaller units, local neural circuits repeated iteratively within each area. Modules may vary in cell type and number, in internal and external connectivity, and in mode of neuronal processing between different large entities; within any single large entity they have a basic similarity of internal design and operation. Modules are most commonly grouped into entities by sets of dominating external connections. This unifying factor is most obvious for the heterotypical sensory and motor areas of the neocortex. Columnar defining factors in homotypical areas are generated, in part, within the cortex itself. The set of all modules composing such an entity may be fractionated into different modular subsets by different extrinsic connections. Linkages between them and subsets in other large entities form distributed systems. The neighborhood relations between connected subsets of modules in different entities result in nested distributed systems that serve distributed functions. A cortical area defined in classical cytoarchitectural terms may belong to more than one and sometimes to several distributed systems. Columns in cytoarchitectural areas located at some distance from one another, but with some common properties, may be linked by long-range, intracortical connections. --- paper_title: The Design and Evolution of Modular Neural Network Architectures paper_content: To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. 
We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules, standard subnetworks that serve as higher order units with a distinct structure and function. The simulations rely on a particular network module called the categorizing and learning module. This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is presented. A number of small-scale simulation studies shows how intermodule connectivity patterns implement ''neural assemblies'' that induce a particular category structure in the network. Learning and categorization improves because the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other principles of design are proposed that underlie information processing in interactive activation networks: replication and recurrence. Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks, by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best performing network architectures seemed to have reproduced some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture cannot only enhance learning and recognition performance, but it can also induce a system to better generalize its learned behaviour to instances never encountered before. This may explain why for many vital learning tasks in organisms only a minimal exposure to relevant stimuli is necessary. --- paper_title: Modularity in neural computing paper_content: This paper considers neural computing models for information processing in terms of collections of subnetwork modules. Two approaches to generating such networks are studied. The first approach includes networks with functionally independent subnetworks, where each subnetwork is designed to have specific functions, communication, and adaptation characteristics. The second approach is based on algorithms that can actually generate network and subnetwork topologies, connections, and weights to satisfy specific constraints. Associated algorithms to attain these goals include evolutionary computation and self-organizing maps. We argue that this modular approach to neural computing is more in line with the neurophysiology of the vertebrate cerebral cortex, particularly with respect to sensation and perception. We also argue that this approach has the potential to aid in solutions to large-scale network computational problems - an identified weakness of simply defined artificial neural networks. 
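To give the "specialist modules plus integration" theme of these papers a concrete shape, below is a minimal gated modular (mixture-of-experts style) forward pass in plain numpy. It is an untrained structural sketch only: it is not the categorizing-and-learning module or any architecture from the cited work, and the layer sizes, number of experts, and weight initialization are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyExpert:
    # A one-hidden-layer subnetwork acting as one module.
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        return np.tanh(x @ self.w1) @ self.w2

class GatedModularNet:
    # Several expert modules; a gating layer weights their outputs per input.
    def __init__(self, n_in, n_out, n_experts=3, n_hidden=8):
        self.experts = [TinyExpert(n_in, n_hidden, n_out) for _ in range(n_experts)]
        self.gate_w = rng.normal(0.0, 0.5, (n_in, n_experts))

    def forward(self, x):
        gate = softmax(x @ self.gate_w)                        # (batch, n_experts)
        outs = np.stack([e.forward(x) for e in self.experts])  # (n_experts, batch, n_out)
        return np.einsum("be,ebo->bo", gate, outs)

x = rng.normal(size=(5, 4))         # a batch of 5 four-dimensional inputs
net = GatedModularNet(n_in=4, n_out=2)
print(net.forward(x).shape)         # -> (5, 2)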
--- paper_title: Recurrent neuronal circuits in the neocortex paper_content: We thank our colleagues John Anderson and Tom Binzegger for their collaboration; and EU grant DAISY (FP6-2005-015803) for financial support. --- paper_title: On combining artificial neural nets paper_content: This paper reviews research on combining artificial neural nets, and provides an overview of, and an introduction to, the papers contained in this special issue, and its companion (Connection Science, 9, 1). Two main approaches, ensemble-based, and modular, are identified and considered. An ensemble, or committee, is made up of a set of nets, each of which is a general function approximator. The members of the ensemble are combined in order to obtain better generalization performance than would be achieved by any of the individual nets. The main issues considered here under the heading of ensemble-based approaches are (a) how to combine the outputs of the ensemble members, (b) how to create candidate ensemble members, and (c) which methods lead to the most effective ensembles? Under the heading of modular approaches, we begin by considering a divide-and-conquer approach by which a function is automatically decomposed into a number of subfunctions which are treated by specialist modules. Other modular approaches ... --- paper_title: Community structure and modularity in networks of correlated brain activity paper_content: Functional connectivity patterns derived from neuroimaging data may be represented as graphs or networks, with individual image voxels or anatomically-defined structures representing the nodes, and a measure of correlation between the responses in each pair of nodes determining the edges. This explicit network representation allows network-analysis approaches to be applied to the characterization of functional connections within the brain. Much recent research in complex networks has focused on methods to identify community structure, i.e. cohesive clusters of strongly interconnected nodes. One class of such algorithms determines a partition of a network into 'sub-networks' based on the optimization of a modularity parameter, thus also providing a measure of the degree of segregation versus integration in the full network. Here, we demonstrate that a community structure algorithm based on the maximization of modularity, applied to a functional connectivity network calculated from the responses to acute fluoxetine challenge in the rat, can identify communities whose distributions correspond to anatomically meaningful structures and include compelling functional subdivisions in the brain. We also discuss the biological interpretation of the modularity parameter in terms of segregation and integration of brain function. --- paper_title: Spontaneous evolution of modularity and network motifs paper_content: Biological networks have an inherent simplicity: they are modular with a design that can be separated into units that perform almost independently. Furthermore, they show reuse of recurring patterns termed network motifs. Little is known about the evolutionary origin of these properties. Current models of biological evolution typically produce networks that are highly nonmodular and lack understandable motifs. Here, we suggest a possible explanation for the origin of modularity and network motifs in biology. We use standard evolutionary algorithms to evolve networks. A key feature in this study is evolution under an environment (evolutionary goal) that changes in a modular fashion. 
That is, we repeatedly switch between several goals, each made of a different combination of subgoals. We find that such “modularly varying goals” lead to the spontaneous evolution of modular network structure and network motifs. The resulting networks rapidly evolve to satisfy each of the different goals. Such switching between related goals may represent biological evolution in a changing environment that requires different combinations of a set of basic biological functions. The present study may shed light on the evolutionary forces that promote structural simplicity in biological networks and offers ways to improve the evolutionary design of engineered systems. --- paper_title: The evolutionary origins of modularity paper_content: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks—their organization as functional, sparsely connected subunits—but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. --- paper_title: Eye Smarter than Scientists Believed: Neural Computations in Circuits of the Retina paper_content: We rely on our visual system to cope with the vast barrage of incoming light patterns and to extract features from the scene that are relevant to our well-being. The necessary reduction of visual information already begins in the eye. In this review, we summarize recent progress in understanding the computations performed in the vertebrate retina and how they are implemented by the neural circuitry. A new picture emerges from these findings that helps resolve a vexing paradox between the retina's structure and function. Whereas the conventional wisdom treats the eye as a simple prefilter for visual images, it now appears that the retina solves a diverse set of specific tasks and provides the results explicitly to downstream brain areas. --- paper_title: Revealing Modular Architecture of Human Brain Structural Networks by Using Cortical Thickness from MRI paper_content: Modularity, presumably shaped by evolutionary constraints, underlies the functionality of most complex networks ranged from social to biological networks. However, it remains largely unknown in human cortical networks. In a previous study, we demonstrated a network of correlations of cortical thickness among specific cortical areas and speculated that these correlations reflected an underlying structural connectivity among those brain regions. Here, we further investigated the intrinsic modular architecture of the human brain network derived from cortical thickness measurement. Modules were defined as groups of cortical regions that are connected morphologically to achieve the maximum network modularity. 
We show that the human cortical network is organized into 6 topological modules that closely overlap known functional domains such as auditory/language, strategic/executive, sensorimotor, visual, and mnemonic processing. The identified structure-based modular architecture may provide new insights into the functionality of cortical regions and connections between structural brain modules. This study provides the first report of modular architecture of the structural network in the human brain using cortical thickness measurements. --- paper_title: Neocognitron: A neural network model for a mechanism of visual pattern recognition paper_content: Recognition with a large-scale network is simulated on a PDP-11/34 minicomputer and is shown to have a great capability for visual pattern recognition. The model consists of nine layers of cells. The authors demonstrate that the model can be trained to recognize handwritten Arabic numerals even with considerable deformations in shape. A learning-with-a-teacher process is used for the reinforcement of the modifiable synapses in the new large-scale model, instead of the learning-without-a-teacher process applied to a previous model. The authors focus on the mechanism for pattern recognition rather than that for self-organization. --- paper_title: Modular neural networks with applications to pattern profiling problems paper_content: We study the feasibility and performance of the modular design concept as applied to pattern profiling problems using artificial neural networks. By decomposing the given pattern profiling problem into smaller modules, it is shown that comparable performance can be achieved with improvements in computation and design complexity. A survey of typical modular neural networks shows that large-scale nonlinear problems can alleviate their dimensionality curse with modular techniques. An overview of modular neural networks is given, organized by how the problem is modularized through various decomposition schemes and subsequent aggregation. A pattern recognition problem for aircraft trajectory prediction using NeuroFuzzy learning with a two-stage modular learning design is presented. Decoupled data are used to train respective neural network modules. A genetic algorithm is used to aggregate all the learned modules so that it is ready for online pattern recognition purposes. As compared with the non-modular approach, the modular approach offers comparable prediction performance with significantly lower overall computation time. This study validates that modular design is a promising solution for large-scale soft computing problems. --- paper_title: Modular Construction of Time-Delay Neural Networks for Speech Recognition paper_content: Several strategies are described that overcome limitations of basic network models as steps towards the design of large connectionist speech recognition systems. The two major areas of concern are the problem of time and the problem of scaling. Speech signals continuously vary over time and encode and transmit enormous amounts of human knowledge. To decode these signals, neural networks must be able to use appropriate representations of time and it must be possible to extend these nets to almost arbitrary sizes and complexity within finite resources. The problem of time is addressed by the development of a Time-Delay Neural Network; the problem of scaling by Modularity and Incremental Design of large nets based on smaller subcomponent nets. 
It is shown that small networks trained to perform limited tasks develop time invariant, hidden abstractions that can subsequently be exploited to train larger, more complex nets efficiently. Using these techniques, phoneme recognition networks of increasing complexity... --- paper_title: MODULAR NEURAL NETWORKS: A SURVEY paper_content: Modular Neural Networks (MNNs) is a rapidly growing field in artificial Neural Networks (NNs) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. Then, the general stages of MNN design are outlined and surveyed as well, viz., task decomposition techniques, learning schemes and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented. --- paper_title: Synaptic clustering within dendrites: An emerging theory of memory formation paper_content: It is generally accepted that complex memories are stored in distributed representations throughout the brain, however the mechanisms underlying these representations are not understood. Here, we review recent findings regarding the subcellular mechanisms implicated in memory formation, which provide evidence for a dendrite-centered theory of memory. Plasticity-related phenomena which affect synaptic properties, such as synaptic tagging and capture, synaptic clustering, branch strength potentiation and spinogenesis provide the foundation for a model of memory storage that relies heavily on processes operating at the dendrite level. The emerging picture suggests that clusters of functionally related synapses may serve as key computational and memory storage units in the brain. We discuss both experimental evidence and theoretical models that support this hypothesis and explore its advantages for neuronal function. --- paper_title: Optimal hierarchical modular topologies for producing limited sustained activation of neural networks paper_content: An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show a wider parameter range for LSA than random or small-world networks not possessing hierarchical organization or multiple modules. Here we explored how variation in the number of hierarchical levels and modules per level influenced network dynamics and occurrence of LSA. We tested hierarchical configurations of different network sizes, approximating the large-scale networks linking cortical columns in one hemisphere of the rat, cat, or macaque monkey brain. Scaling of the network size affected the number of hierarchical levels and modules in the optimal networks, also depending on whether global edge density or the numbers of connections per node were kept constant. For constant edge density, only few network configurations, possessing an intermediate number of levels and a large number of modules, led to a large range of LSA independent of brain size. 
For a constant number of node connections, there was a trend for optimal configurations in larger-size networks to possess a larger number of hierarchical levels or more modules. These results may help to explain the trend to greater network complexity apparent in larger brains and may indicate that this complexity is required for maintaining stable levels of neural activation. --- paper_title: Deep learning paper_content: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. --- paper_title: A parallel circuit approach for improving the speed and generalization properties of neural networks paper_content: One of the common problems of neural networks, especially those with many layers consists of their lengthy training times. We attempted to solve this problem at the algorithmic (not hardware) level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the BP algorithm flow in each subnetwork independently. Experimental results have shown the speed advantage of the proposed approach but also pointed out that the reduction is affected by multiple dependencies. The results also suggest that parallel circuits improve the generalization ability of neural networks presumably due to automatic problem decomposition. --- paper_title: Methods of combining multiple classifiers and their applications to handwriting recognition paper_content: Possible solutions to the problem of combining classifiers can be divided into three categories according to the levels of information available from the various classifiers. Four approaches based on different methodologies are proposed for solving this problem. One is suitable for combining individual classifiers such as Bayesian, k-nearest-neighbor, and various distance classifiers. 
The other three could be used for combining any kind of individual classifiers. On applying these methods to combine several classifiers for recognizing totally unconstrained handwritten numerals, the experimental results show that the performance of individual classifiers can be improved significantly. For example, on the US zipcode database, 98.9% recognition with 0.90% substitution and 0.2% rejection can be obtained, as well as high reliability with 95% recognition, 0% substitution, and 5% rejection. --- paper_title: Modular Neural Network Classifiers: A Comparative Study paper_content: There is a wide variety of Modular Neural Network (MNN) classifiers in the literature. They differ according to the design of their architecture, task-decomposition scheme, learning procedure, and multi-module decision-making strategy. Meanwhile, there is a lack of comparative studies in the MNN literature. This paper compares ten MNN classifiers which give a good representation of design varieties, viz., Decoupled; Other-output; ART-BP; Hierarchical; Multiple-experts; Ensemble (majority vote); Ensemble (average vote); Merge-glue; Hierarchical Competitive Neural Net; and Cooperative Modular Neural Net. Two benchmark applications of different degree and nature of complexity are used for performance comparison, and the strength-points and drawbacks of the different networks are outlined. The aim is to help a potential user to choose an appropriate model according to the application in hand and the available computational resources. --- paper_title: Detecting community structure in networks paper_content: There has been considerable recent interest in algorithms for finding communities in networks— groups of vertices within which connections are dense, but between which connections are sparser. Here we review the progress that has been made towards this end. We begin by describing some traditional methods of community detection, such as spectral bisection, the Kernighan-Lin algorithm and hierarchical clustering based on similarity measures. None of these methods, however, is ideal for the types of real-world network data with which current research is concerned, such as Internet and web data and biological and social networks. We describe a number of more recent algorithms that appear to work well with these data, including algorithms based on edge betweenness scores, on counts of short loops in networks and on voltage differences in resistor networks. --- paper_title: Email as Spectroscopy: Automated Discovery of Community Structure within Organizations paper_content: We describe a methodology for the automatic identification of communities of practice from email logs within an organization. We use a betweenness centrality algorithm that can rapidly find communities within a graph representing information flows. We apply this algorithm to an email corpus of nearly one million messages collected over a two-month span, and show that the method is effective at identifying true communities, both formal and informal, within these scale-free graphs. This approach also enables the identification of leadership roles within the communities. These studies are complemented by a qualitative evaluation of the results in the field. --- paper_title: The minicolumn hypothesis in neuroscience paper_content: The minicolumn is a continuing source of research and debate more than half a century after it was identified as a component of brain organization. 
The minicolumn is a sophisticated local network that contains within it the elements for redundancy and plasticity. Although it is sometimes compared to subcortical nuclei, the design of the minicolumn is a distinctive form of module that has evolved specifically in the neocortex. It unites the horizontal and vertical components of cortex within the same cortical space. Minicolumns are often considered highly repetitive, even clone‐like, units. However, they display considerable heterogeneity between areas and species, perhaps even within a given macrocolumn. Despite a growing recognition of the anatomical basis of the cortical minicolumn, as well as its physiological properties, the potential of the minicolumn has not been exploited in fields such as comparative neuroanatomy, abnormalities of the brain and mind, and evolution. --- paper_title: The small world of the cerebral cortex paper_content: While much information is available on the structural connectivity of the cerebral cortex, especially in the primate, the main organizational principles of the connection patterns linking brain areas, columns and individual cells have remained elusive. We attempt to characterize a wide variety of cortical connectivity data sets using a specific set of graph theory methods. We measure global aspects of cortical graphs including the abundance of small structural motifs such as cycles, the degree of local clustering of connections and the average path length. We examine large-scale cortical connection matrices obtained from neuroanatomical data bases, as well as probabilistic connection matrices at the level of small cortical neuronal populations linked by intra-areal and interareal connections. All cortical connection matrices examined in this study exhibit “small-world” attributes, characterized by the presence of abundant clustering of connections combined with short average distances between neuronal elements. We discuss the significance of these universal organizational features of cortex in light of functional brain anatomy. Supplementary materials are at www.indiana.edu/∼cortex/lab.htm. --- paper_title: Brain Graphs: Graphical Models of the Human Brain Connectome paper_content: Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions. Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator. 
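The modularity measure that several of the network papers in this list optimize can be computed directly from an adjacency matrix. The sketch below is a from-scratch illustration of Newman-Girvan modularity Q on a toy graph (two triangles joined by one edge); it is not code from any of the cited studies, and the graph and partition are invented for the example.

import numpy as np

def modularity(adjacency, communities):
    # Newman-Girvan modularity Q for an undirected, unweighted graph:
    # Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    A = np.asarray(adjacency, dtype=float)
    k = A.sum(axis=1)          # node degrees
    two_m = A.sum()            # 2m: every edge is counted twice
    labels = np.asarray(communities)
    same = (labels[:, None] == labels[None, :]).astype(float)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # ~0.357 for the two-triangle split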
--- paper_title: Community detection in networks: Modularity optimization and maximum likelihood are equivalent paper_content: We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter. --- paper_title: The human connectome: a complex network. paper_content: The human brain is a complex network. An important first step toward understanding the function of such a network is to map its elements and connections, to create a comprehensive structural description of the network architecture. This paper reviews current empirical efforts toward generating a network map of the human brain, the human connectome, and explores how the connectome can provide new insights into the organization of the brain's structural connections and their role in shaping functional dynamics. Network studies of structural connectivity obtained from noninvasive neuroimaging have revealed a number of highly nonrandom network attributes, including high clustering and modularity combined with high efficiency and short path length. The combination of these attributes simultaneously promotes high specialization and high integration within a modular small-world architecture. Structural and functional networks share some of the same characteristics, although their relationship is complex and nonlinear. Future studies of the human connectome will greatly expand our knowledge of network topology and dynamics in the healthy, developing, aging, and diseased brain. --- paper_title: Defining and identifying communities in networks paper_content: The investigation of community structures in networks is an important issue in many domains and disciplines. This problem is relevant for social tasks (objective analysis of relationships on the web), biological inquiries (functional studies in metabolic and protein networks), or technological problems (optimization of large infrastructures). Several types of algorithms exist for revealing the community structure in networks, but a general and quantitative definition of community is not implemented in the algorithms, leading to an intrinsic difficulty in the interpretation of the results without any additional nontopological information. In this article we deal with this problem by showing how quantitative definitions of community are implemented in practice in the existing algorithms. In this way the algorithms for the identification of the community structure become fully self-contained. Furthermore, we propose a local algorithm to detect communities which outperforms the existing algorithms with respect to computational cost, keeping the same level of reliability. The algorithm is tested on artificial and real-world graphs. In particular, we show how the algorithm applies to a network of scientific collaborations, which, for its size, cannot be attacked with the usual methods. This type of local algorithm could open the way to applications to large-scale technological and biological systems. --- paper_title: On Modularity Clustering paper_content: Modularity is a recently introduced quality measure for graph clusterings. It has immediately received considerable attention in several disciplines, particularly in the complex systems literature, although its properties are not well understood. We study the problem of finding clusterings with maximum modularity, thus providing theoretical foundations for past and present work based on this measure. More precisely, we prove the conjectured hardness of maximizing modularity both in the general case and with the restriction to cuts and give an Integer Linear Programming formulation. This is complemented by first insights into the behavior and performance of the commonly applied greedy agglomerative approach. --- paper_title: Complex brain networks: graph theoretical analysis of structural and functional systems paper_content: In recent years, the principles of network science have increasingly been applied to the study of the brain's structural and functional organization. Bullmore and Sporns review this growing field of research and discuss its contributions to our understanding of brain function. --- paper_title: Neuron theory, the cornerstone of neuroscience, on the centenary of the Nobel Prize award to Santiago Ramón y Cajal paper_content: Exactly 100 years ago, the Nobel Prize for Physiology and Medicine was awarded to Santiago Ramon y Cajal, “in recognition of his meritorious work on the structure of the nervous system”. Cajal's great contribution to the history of science is undoubtedly the postulate of neuron theory. The present work makes a historical analysis of the circumstances in which Cajal formulated his theory, considering the authors and works that influenced his postulate, the difficulties he encountered for its dissemination, and the way it finally became established. At the time when Cajal began his neurohistological studies, in 1887, Gerlach's reticular theory (a diffuse protoplasmic network of the grey matter of the nerve centres), also defended by Golgi, prevailed among the scientific community. In the first issue of the Revista Trimestral de Histologia Normal y Patologica (May, 1888), Cajal presented the definitive evidence underpinning neuron theory, thanks to staining of the axon of the small, star-shaped cells of the molecular layer of the cerebellum of birds, whose collaterals end up surrounding the Purkinje cell bodies, in the form of baskets or nests. He thus demonstrated once and for all that the relationship between nerve cells was not one of continuity, but rather of contiguity. Neuron theory is one of the principal scientific conquests of the 20th century, and has withstood, with scarcely any modifications, the passage of more than 100 years, being reaffirmed by new technologies such as electron microscopy. Today, no neuroscientific discipline could be understood without recourse to the concept of neuronal individuality and nervous transmission at a synaptic level, as basic units of the nervous system. 
--- paper_title: Multi-column Deep Neural Networks for Image Classification paper_content: Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks. --- paper_title: Multi-path Convolutional Neural Networks for Complex Image Classification paper_content: Convolutional Neural Networks demonstrate high performance on ImageNet Large-Scale Visual Recognition Challenges contest. Nevertheless, the published results only show the overall performance for all image classes. There is no further analysis why certain images get worse results and how they could be improved. In this paper, we provide deep performance analysis based on different types of images and point out the weaknesses of convolutional neural networks through experiment. We design a novel multiple paths convolutional neural network, which feeds different versions of images into separated paths to learn more comprehensive features. This model has better presentation for image than the traditional single path model. We acquire better classification results on complex validation set on both top 1 and top 5 scores than the best ILSVRC 2013 classification model. --- paper_title: Multi-column Deep Neural Networks for Image Classification paper_content: Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks. --- paper_title: Curriculum learning paper_content: Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Here, we formalize such training strategies in the context of machine learning, and call them "curriculum learning". 
In the context of recent research studying the difficulty of training in the presence of non-convex training criteria (for deep deterministic and stochastic neural networks), we explore curriculum learning in various set-ups. The experiments show that significant improvements in generalization can be achieved. We hypothesize that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and, in the case of non-convex criteria, on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions). --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks paper_content: The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q3) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. --- paper_title: Type-1 and type-2 fuzzy inference systems as integration methods in modular neural networks for multimodal biometry and its optimization with genetic algorithms paper_content: We describe in this paper a comparative study between fuzzy inference systems as methods of integration in modular neural networks for multimodal biometry.
These methods of integration are based on techniques of type-1 fuzzy logic and type-2 fuzzy logic. Also, the fuzzy systems are optimized with simple genetic algorithms with the goal of having optimized versions of both types of fuzzy systems. First, we considered the use of type-1 fuzzy logic and later the approach with type-2 fuzzy logic. The fuzzy systems were developed using genetic algorithms to handle fuzzy inference systems with different membership functions, like the triangular, trapezoidal and Gaussian; since these algorithms can generate fuzzy systems automatically. Then the response integration of the modular neural network was tested with the optimized fuzzy systems of integration. The comparative study of the type-1 and type-2 fuzzy inference systems was made to observe the behavior of the two different integration methods for modular neural networks for multimodal biometry. --- paper_title: Spatio-Temporal Short-Term Urban Traffic Volume Forecasting Using Genetically Optimized Modular Networks paper_content: Current interest in short-term traffic volume forecasting focuses on incorporating temporal and spatial volume characteristics in the forecasting process. This paper addresses the problem of integrating and optimizing predictive information from multiple locations of an urban signalized arterial roadway and proposes a modular neural predictor consisting of temporal genetically optimized structures of multilayer perceptrons that are fed with volume data from sequential locations to improve the accuracy of short-term forecasts. Results show that the proposed methodology provides more accurate forecasts compared to the conventional statistical methodologies applied, as well as to the static forms of neural networks. --- paper_title: Multi-column Deep Neural Networks for Image Classification paper_content: Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks. --- paper_title: Multi-path Convolutional Neural Networks for Complex Image Classification paper_content: Convolutional Neural Networks demonstrate high performance on ImageNet Large-Scale Visual Recognition Challenges contest. Nevertheless, the published results only show the overall performance for all image classes. There is no further analysis why certain images get worse results and how they could be improved. In this paper, we provide deep performance analysis based on different types of images and point out the weaknesses of convolutional neural networks through experiment. We design a novel multiple paths convolutional neural network, which feeds different versions of images into separated paths to learn more comprehensive features. 
This model provides a better representation of images than the traditional single-path model. We acquire better classification results on the complex validation set in both top-1 and top-5 scores than the best ILSVRC 2013 classification model. --- paper_title: Text-independent talker identification with neural networks paper_content: The authors introduce a novel method for partitioning a large classification problem using N*(N-1)/2 binary pair classifiers. The binary pair classifier has been applied to a speaker identification problem using neural networks for the binary classifiers. The binary partitioned approach was used to develop an identification system for the 47 male speakers belonging to the Northern dialect region of the TIMIT database. The system performs with 100% accuracy in a text-independent mode when trained with about 9 to 14 s of speech and tested with 8 s of speech. The partitioned approach performs comparably to, or even better than, a single large neural network. For large values of N (>10), the partitioned approach requires only a fraction of the training time required for a single large network. For N=47, the training time for the partitioned network would be about two orders of magnitude less than for the single large network. --- paper_title: Multi-class pattern classification using neural networks paper_content: Multi-class pattern classification has many applications including text document classification, speech recognition, object recognition, etc. Multi-class pattern classification using neural networks is not a trivial extension from two-class neural networks. This paper presents a comprehensive and competitive study in multi-class neural learning with focuses on issues including neural network architecture, encoding schemes, training methodology and training time complexity. Our study includes multi-class pattern classification using either a system of multiple neural networks or a single neural network, and modeling pattern classes using one-against-all, one-against-one, one-against-higher-order, and P-against-Q. We also discuss implementations of these approaches and analyze training time complexity associated with each approach. We evaluate six different neural network system architectures for multi-class pattern classification along the dimensions of imbalanced data, large number of pattern classes, large vs. small training data through experiments conducted on well-known benchmark data. --- paper_title: A Modular Fault-Diagnostic System for Analog Electronic Circuits Using Neural Networks With Wavelet Transform as a Preprocessor paper_content: We have developed a modular analog circuit fault-diagnostic system based on neural networks using wavelet decomposition, principal component analysis, and data normalization as preprocessors. Our proposed system has the ability to identify faulty components or modules in an analog circuit by analyzing its impulse response. In this approach, the circuit is divided into modules, which, in turn, are divided into smaller submodules successively. At each level, where a module is divided into submodules, a neural network is trained to identify the submodule that inherits the fault of interest from the parent module. This procedure finds the faulty component or module of any desirable size in an analog circuit by consecutive divisions of modules as many times as necessary.
Our proposed approach has three advantages over the traditional neural-network-based diagnostic systems, which directly look for faulty components in the entire circuit. First, the performance of the modular systems is reliable and robust independent of the circuit size and can successfully classify similar fault classes with a significant overlap in the feature space where the traditional approach completely fails. Second, the modular approach requires significantly smaller neural network architectures, leading to much more efficient training. Third, for large real circuit boards, our diagnostic system proceeds to systematically reduce the size of the faulty modules until it is feasible to replace it. --- paper_title: Efficient classification for multiclass problems using modular neural networks paper_content: The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the backpropagation algorithm. While backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the increased error in some components in the initial iterations. Our approach is to use a modular network architecture, reducing a K-class problem to a set of K two-class problems, with a separately trained network for each of the simpler problems. Speedups of one order of magnitude have been obtained experimentally, and in some cases convergence was possible using the modular approach but not using a nonmodular network. --- paper_title: Learning capability and storage capacity of two-hidden-layer feedforward networks paper_content: The problem of the necessary complexity of neural networks is of interest in applications. In this paper, learning capability and storage capacity of feedforward neural networks are considered. We markedly improve the recent results by introducing neural-network modularity logically. This paper rigorously proves by a constructive method that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with any arbitrarily small error, where m is the required number of output neurons. It implies that the required number of hidden neurons needed in feedforward networks can be decreased significantly, compared with previous results. Conversely, a TLFN with Q hidden neurons can store at least Q^2/(4(m+2)) distinct samples (x_i, t_i) with any desired precision. --- paper_title: A Survey of 2D Face Recognition Techniques paper_content: Despite the existence of various biometric techniques, like fingerprints, iris scan, as well as hand geometry, the most efficient and most widely used one is face recognition. This is because it is inexpensive, non-intrusive and natural. Therefore, researchers have developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology. There are methods that use the entire face as input data for the proposed recognition system, methods that do not consider the whole face, but only some features or areas of the face and methods that use global and local face characteristics simultaneously.
In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods by expressing each method’s principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of the applications of these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared. --- paper_title: Detection and classification of power quality disturbances using S-transform and modular neural network paper_content: This paper presents an S-transform based modular neural network (NN) classifier for recognition of power quality disturbances. The excellent time-frequency resolution characteristics of the S-transform makes it an attractive candidate for the analysis of power quality (PQ) disturbances under noisy condition and has the ability to detect the disturbance correctly. On the other hand, the performance of wavelet transform (WT) degrades while detecting and localizing the disturbances in the presence of noise. Features extracted by using the S-transform are applied to a modular NN for automatic classification of the PQ disturbances that solves a relatively complex problem by decomposing it into simpler subtasks. Modularity of neural network provides better classification, model complexity reduction and better learning capability, etc. Eleven types of PQ disturbances are considered for the classification. The simulation results show that the combination of the S-transform and a modular NN can effectively detect and classify different power quality disturbances. --- paper_title: Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks paper_content: The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q3) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. --- paper_title: Multiclass Pattern Recognition Extension for the New C-Mantec Constructive Neural Network Algorithm paper_content: The new C-Mantec algorithm constructs compact neural network architectures for classification problems, incorporating new features like competition between neurons and a built-in filtering stage of noisy examples. It was originally designed for tackling two class problems and in this work the extension of the algorithm to multiclass problems is analyzed.
Three different approaches are investigated for the extension of the algorithm to multi-category pattern classification tasks: One-Against-All (OAA), One-Against-One (OAO), and P-against-Q (PAQ). A set of different sizes benchmark problems is used in order to analyze the prediction accuracy of the three multi-class implemented schemes and to compare the results to those obtained using other three standard classification algorithms. --- paper_title: Spatio-Temporal Short-Term Urban Traffic Volume Forecasting Using Genetically Optimized Modular Networks paper_content: Current interest in short-term traffic volume forecasting focuses on incorporating temporal and spatial volume characteristics in the forecasting process. This paper addresses the problem of integrating and optimizing predictive information from multiple locations of an urban signalized arterial roadway and proposes a modular neural predictor consisting of temporal genetically optimized structures of multilayer perceptrons that are fed with volume data from sequential locations to improve the accuracy of short-term forecasts. Results show that the proposed methodology provides more accurate forecasts compared to the conventional statistical methodologies applied, as well as to the static forms of neural networks. --- paper_title: Ten-Microsecond Molecular Dynamics Simulation of a Fast-Folding WW Domain paper_content: All-atom molecular dynamics (MD) simulations of protein folding allow analysis of the folding process at an unprecedented level of detail. Unfortunately, such simulations have not yet reached their full potential both due to difficulties in sufficiently sampling the microsecond timescales needed for folding, and because the force field used may yield neither the correct dynamical sequence of events nor the folded structure. The ongoing study of protein folding through computational methods thus requires both improvements in the performance of molecular dynamics programs to make longer timescales accessible, and testing of force fields in the context of folding simulations. We report a ten-microsecond simulation of an incipient downhill-folding WW domain mutant along with measurement of a molecular time and activated folding time of 1.5 microseconds and 13.3 microseconds, respectively. The protein simulated in explicit solvent exhibits several metastable states with incorrect topology and does not assume the native state during the present simulations. --- paper_title: Swapout: Learning an ensemble of deep architectures paper_content: We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout, stochastic depth and residual architectures as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. 
We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model. --- paper_title: Modular neural networks to predict the nitrate distribution in ground water using the on-ground nitrogen loading and recharge data paper_content: Artificial neural networks have proven to be an attractive mathematical tool to represent complex relationships in many branches of hydrology. Due to this attractive feature, neural networks are increasingly being applied in subsurface modeling where intricate physical processes and lack of detailed field data prevail. In this paper, a methodology using modular neural networks (MNN) is proposed to simulate the nitrate concentrations in an agriculture-dominated aquifer. The methodology relies on geographic information system (GIS) tools in the preparation and processing of the MNN input–output data. The basic premise followed in developing the MNN input–output response patterns is to designate the optimal radius of a specified circular-buffered zone centered by the nitrate receptor so that the input parameters at the upgradient areas correlate with nitrate concentrations in ground water. A three-step approach that integrates the on-ground nitrogen loadings, soil nitrogen dynamics, and fate and transport in ground water is described and the critical parameters to predict nitrate concentration using MNN are selected. The sensitivity of MNN performance to different MNN architecture is assessed. The applicability of MNN is considered for the Sumas-Blaine aquifer of Washington State using two scenarios corresponding to current land use practices and a proposed protection alternative. The results of MNN are further analyzed and compared to those obtained from a physically-based fate and transport model to evaluate the overall applicability of MNN. --- paper_title: Beyond Fine Tuning: A Modular Approach to Learning on Small Data paper_content: In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data. --- paper_title: Weight Uncertainty in Neural Networks paper_content: We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification.
We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning. --- paper_title: Dropout: a simple way to prevent neural networks from overfitting paper_content: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets. --- paper_title: Hybrid model building methodology using unsupervised fuzzy clustering and supervised neural networks paper_content: This paper suggests a model building methodology for dealing with new processes. The methodology, called Hybrid Fuzzy Neural Networks (HFNN), combines unsupervised fuzzy clustering and supervised neural networks in order to create simple and flexible models. Fuzzy clustering was used to define relevant domains on the input space. Then, sets of multilayer perceptrons (MLP) were trained (one for each domain) to map input-output relations, creating, in the process, a set of specified sub-models. The estimated output of the model was obtained by fusing the different sub-model outputs weighted by their predicted possibilities. On-line reinforcement learning enabled improvement of the model. The determination of the optimal number of clusters is fundamental to the success of the HFNN approach. The effectiveness of several validity measures was compared to the generalization capability of the model and information criteria. The validity measures were tested with fermentation simulations and real fermentations of a yeast-like fungus, Aureobasidium pullulans. The results outline the criteria limitations. The learning capability of the HFNN was tested with the fermentation data. The results underline the advantages of HFNN over a single neural network. --- paper_title: Short-Term Freeway Traffic Flow Prediction: Bayesian Combined Neural Network Approach paper_content: Short-term traffic flow prediction has long been regarded as a critical concern for intelligent transportation systems. On the basis of many existing prediction models, each having good performance only in a particular period, an improved approach is to combine these single predictors together for prediction in a span of periods. 
In this paper, a neural network model is introduced that combines the prediction from single neural network predictors according to an adaptive and heuristic credit assignment algorithm based on the theory of conditional probability and Bayes' rule. Two single predictors, i.e., the back propagation and the radial basis function neural networks are designed and combined linearly into a Bayesian combined neural network model. The credit value for each predictor in the combined model is calculated according to the proposed credit assignment algorithm and largely depends on the accumulative prediction performance of these predictors during the previous prediction intervals. For experimental test, two data sets comprising traffic flow rates in 15-min time intervals have been collected from Singapore's Ayer Rajah Expressway. One data set is used to train the two single neural networks and the other to test and compare the performances between the combined and singular models. Three indices, i.e., the mean absolute percentage error, the variance of absolute percentage error, and the probability of percentage error, are employed to compare the forecasting performance. It is found that most of the time, the combined model outperforms the singular predictors. More importantly, for a given time period, it is the role of this newly proposed model to track the predictors' performance online, so as to always select and combine the best-performing predictors for prediction. --- paper_title: Deep Networks with Stochastic Depth paper_content: Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10). --- paper_title: Learning to Compose Neural Networks for Question Answering paper_content: We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
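The stochastic-depth entry above describes randomly dropping whole residual blocks during training and bypassing them with the identity, while keeping the full depth at test time. Below is a minimal PyTorch-style sketch of that idea, assuming a simple two-convolution residual block; the channel count and survival probability are illustrative choices, not the configuration used in the cited paper.

```python
# Minimal sketch of a residual block with stochastic depth: during training
# the residual branch is skipped entirely with probability 1 - p_survive and
# bypassed by the identity; at test time the branch is kept but scaled by
# p_survive. Channel sizes and p_survive here are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticDepthBlock(nn.Module):
    def __init__(self, channels, p_survive=0.8):
        super().__init__()
        self.p_survive = p_survive
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        if self.training and torch.rand(1).item() > self.p_survive:
            return x  # whole residual branch dropped for this mini-batch
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        if not self.training:
            out = out * self.p_survive  # expected-value rescaling at test time
        return F.relu(x + out)

if __name__ == "__main__":
    block = StochasticDepthBlock(channels=16)
    block.train()
    y = block(torch.randn(4, 16, 8, 8))
    print(y.shape)  # torch.Size([4, 16, 8, 8])
```

Stacking many such blocks (typically with a survival probability that decays linearly with depth) is what gives the implicit ensemble of shallower sub-networks described in the abstract.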
--- paper_title: Improving Neural Network Generalization by Combining Parallel Circuits with Dropout paper_content: In an attempt to solve the lengthy training times of neural networks, we proposed Parallel Circuits (PCs), a biologically inspired architecture. Previous work has shown that this approach fails to maintain generalization performance in spite of achieving sharp speed gains. To address this issue, and motivated by the way Dropout prevents node co-adaption, in this paper, we suggest an improvement by extending Dropout to the PC architecture. The paper provides multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results in which improved error rates are achieved in most cases, whilst maintaining the speed advantage of the PC approach. --- paper_title: Blue Gene: a vision for protein science using a petaflop supercomputer paper_content: In December 1999, IBM announced the start of a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. This project should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include: how to most effectively utilize this novel platform to meet our scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets, with reasonable cost, through novel machine architectures. This paper provides an overview of the Blue Gene project at IBM Research. It includes some of the plans that have been made, the intended goals, and the anticipated challenges regarding the scientific work, the software application, and the hardware design. --- paper_title: Modeling Relationships in Referential Expressions with Compositional Modular Networks paper_content: People often refer to entities in an image in terms of their relationships with other entities. For example, the black cat sitting under the table refers to both a black cat entity and its relationship with another table entity. Understanding these relationships is essential for interpreting and grounding such natural language expressions. Most prior work focuses on either grounding entire referential expressions holistically to one region, or localizing relationships based on a fixed set of categories. In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene. We call this approach Compositional Modular Networks (CMNs): a novel architecture that learns linguistic analysis and visual inference end-to-end. Our approach is built around two types of neural modules that inspect local regions and pairwise interactions between regions. We evaluate CMNs on multiple referential expression datasets, outperforming state-of-the-art approaches on all tasks. --- paper_title: A hybrid modular neural network architecture with fuzzy Sugeno integration for time series forecasting paper_content: We describe in this paper the application of a modular neural network architecture to the problem of simulating and predicting the dynamic behavior of complex economic time series. 
We use several neural network models and training algorithms to compare the results and decide at the end, which one is best for this application. We also compare the simulation results with the traditional approach of using a statistical model. In this case, we use real time series of prices of consumer goods to test our models. Real prices of tomato in the U.S. show complex fluctuations in time and are very complicated to predict with traditional statistical approaches. For this reason, we have chosen a neural network approach to simulate and predict the evolution of these prices in the U.S. market. --- paper_title: Face Recognition With an Improved Interval Type-2 Fuzzy Logic Sugeno Integral and Modular Neural Networks paper_content: In this paper, a modification of the Sugeno integral with interval type-2 fuzzy logic is proposed. The modification includes changing the original equations of the Sugeno Measures and the Sugeno integral that were initially proposed for type-1 fuzzy logic. The proposed modification enables calculation of the interval type-2 Sugeno integral for combining multiple source of information with a higher degree of uncertainty than with the traditional type-1 Sugeno integral. The advantages of the interval type-2 Sugeno integral are illustrated by reporting improved recognition rates in benchmark face databases. This new concept could also be a useful tool in other areas of applications. Also, the improvement provided by the type-2 integral is verified to be statistically significant in the recognition results for complex face databases (like the FERET database) when compared with the type-1 Sugeno integral. The proposed Sugeno integral is used to combine the modules' outputs of a modular neural network for face recognition. Simulation results show that the interval type-2 Sugeno integral is able to improve the recognition rate for the benchmark face databases. Recognition results are better or comparable to results produced by alternative approaches present in the literature reported for the same benchmark problems. --- paper_title: FractalNet: Ultra-Deep Neural Networks without Residuals paper_content: We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer. 
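Several of the modular-network entries above (the hybrid Sugeno-integration forecaster and the type-2 Sugeno face recogniser) fuse module outputs with a fuzzy Sugeno integral rather than a simple average. The sketch below is a minimal type-1 version of that idea, assuming per-module "densities" combined through a Sugeno lambda-measure; the density and confidence values are invented for illustration, and this is not the exact formulation of the cited papers.

```python
# Minimal sketch of Sugeno-integral fusion of the outputs of several network
# modules. Each module i has a fuzzy density g_i (how much it is trusted) and,
# for a given input, a confidence h_i in some class. The subset measure is a
# Sugeno lambda-measure built from the densities. All numbers are illustrative.

def lambda_measure_parameter(densities, tol=1e-9):
    """Solve lambda + 1 = prod(1 + lambda * g_i) by bisection."""
    s = sum(densities)
    if abs(s - 1.0) < tol:
        return 0.0  # densities already sum to 1: the measure is additive
    def f(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)
    lo, hi = (tol, 1e6) if s < 1.0 else (-1.0 + tol, -tol)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(confidences, densities):
    lam = lambda_measure_parameter(densities)
    # Sort modules by decreasing confidence; A_i is the set of the i best ones.
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    result, g_subset = 0.0, 0.0
    for i in order:
        if lam == 0.0:
            g_subset += densities[i]
        else:
            # g(A ∪ {i}) = g(A) + g_i + lambda * g(A) * g_i
            g_subset = g_subset + densities[i] + lam * g_subset * densities[i]
        result = max(result, min(confidences[i], g_subset))
    return result

if __name__ == "__main__":
    h = [0.9, 0.6, 0.3]   # per-module confidence for one class (illustrative)
    g = [0.4, 0.35, 0.2]  # per-module fuzzy densities (illustrative)
    print(round(sugeno_integral(h, g), 3))  # fused confidence, here 0.6
```

In a modular classifier, the integral is evaluated once per class and the class with the largest fused value is selected; the type-2 variants discussed above replace the crisp densities and measures with interval-valued ones.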
--- paper_title: Designing architectures of convolutional neural networks to solve practical problems paper_content: The Convolutional Neural Network (CNN) figures among the state-of-the-art Deep Learning (DL) algorithms due to its robustness to support data shift, scale variations, and its capability of extracting relevant information from large-scale input data. However, setting appropriate parameters to define CNN architectures is still a challenging issue, mainly to tackle real-world problems. A typical approach consists in empirically assessing different CNN settings in order to select the most appropriate one. This procedure has clear limitations, including the choice of suitable predefined configurations as well as the high computational cost involved in evaluating each of them. This work presents a novel methodology to tackle the previously mentioned issues, providing mechanisms to estimate effective CNN configurations, including the size of convolutional masks (convolutional kernels) and the number of convolutional units (CNN neurons) per layer. Based on the False Nearest Neighbors (FNN), a well-known tool from the area of Dynamical Systems, the proposed method helps estimating CNN architectures that are less complex and produce good results. Our experiments confirm that architectures estimated through the proposed approach are as effective as the complex ones defined by empirical and computationally intensive strategies. --- paper_title: DropCircuit: A Modular Regularizer for Parallel Circuit Networks paper_content: How to design and train increasingly large neural network models is a topic that has been actively researched for several years. However, while there exists a large number of studies on training deeper and/or wider models, there is relatively little systematic research particularly on the effective usage of wide modular neural networks. Addressing this gap, and in an attempt to solve the problem of lengthy training times, we proposed Parallel Circuits (PCs), a biologically inspired architecture based on the design of the retina. In previous work we showed that this approach fails to maintain generalization performance in spite of achieving sharp speed gains. To address this issue, and motivated by the way dropout prevents node co-adaptation, in this paper, we suggest an improvement by extending dropout to the parallel-circuit architecture. The paper provides empirical proof and multiple insights into this combination. Experiments show promising results in which improved error rates are achieved in most cases, whilst maintaining the speed advantage of the PC approach. --- paper_title: A divide-and-conquer methodology for modular supervised neural network design paper_content: A novel learning strategy based on the divide-and-conquer concept is proposed to effectively overcome the slow learning speed and hard-determined network size problems in supervised learning neural networks. The proposed method first partitions the whole complex training set into several manageable subsets and then generates small size networks to 'conquer' (or learn) all these training subsets. In order to achieve efficient partition on a train set, we have proposed an error correlation partitioning (ECP) scheme such that sub-training-sets are formed with small (acceptable) training error. Based on this learning strategy, a self-growing modular neural network system can be developed.
By applying the proposed learning strategy, a neural network is not only useful for pattern classification problems but also for continuous valued function approximation problems. --- paper_title: Type-1 and type-2 fuzzy inference systems as integration methods in modular neural networks for multimodal biometry and its optimization with genetic algorithms paper_content: We describe in this paper a comparative study between fuzzy inference systems as methods of integration in modular neural networks for multimodal biometry. These methods of integration are based on techniques of type-1 fuzzy logic and type-2 fuzzy logic. Also, the fuzzy systems are optimized with simple genetic algorithms with the goal of having optimized versions of both types of fuzzy systems. First, we considered the use of type-1 fuzzy logic and later the approach with type-2 fuzzy logic. The fuzzy systems were developed using genetic algorithms to handle fuzzy inference systems with different membership functions, like the triangular, trapezoidal and Gaussian; since these algorithms can generate fuzzy systems automatically. Then the response integration of the modular neural network was tested with the optimized fuzzy systems of integration. The comparative study of the type-1 and type-2 fuzzy inference systems was made to observe the behavior of the two different integration methods for modular neural networks for multimodal biometry. --- paper_title: Knowledge transfer in deep block-modular neural networks paper_content: Although deep neural networks (DNNs) have demonstrated impressive results during the last decade, they remain highly specialized tools, which are trained, often from scratch, to solve each particular task. The human brain, in contrast, significantly re-uses existing capacities when learning to solve new tasks. In the current study we explore a block-modular architecture for DNNs, which allows parts of the existing network to be re-used to solve a new task without a decrease in performance when solving the original task. We show that networks with such architectures can outperform networks trained from scratch, or perform comparably, while having to learn nearly 10 times fewer weights than the networks trained from scratch. --- paper_title: Modular Representation of Layered Neural Networks paper_content: Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index.
And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. --- paper_title: The evolutionary origins of modularity paper_content: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks—their organization as functional, sparsely connected subunits—but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. --- paper_title: FractalNet: Ultra-Deep Neural Networks without Residuals paper_content: We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer. --- paper_title: Aggregated Residual Transformations for Deep Neural Networks paper_content: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. 
Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. --- paper_title: Complex brain networks: graph theoretical analysis of structural and functional systems paper_content: In recent years, the principles of network science have increasingly been applied to the study of the brain's structural and functional organization. Bullmore and Sporns review this growing field of research and discuss its contributions to our understanding of brain function. --- paper_title: Highway Networks paper_content: There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on "information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. --- paper_title: Relational inductive biases, deep learning, and graph networks paper_content: Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning.
As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice. --- paper_title: Efficiency and Cost of Economical Brain Functional Networks paper_content: Brain anatomical networks are sparse, complex, and have economical small-world properties. We investigated the efficiency and cost of human brain functional networks measured using functional magnetic resonance imaging (fMRI) in a factorial design: two groups of healthy old (N = 11; mean age = 66.5 years) and healthy young (N = 15; mean age = 24.7 years) volunteers were each scanned twice in a no-task or “resting” state following placebo or a single dose of a dopamine receptor antagonist (sulpiride 400 mg). Functional connectivity between 90 cortical and subcortical regions was estimated by wavelet correlation analysis, in the frequency interval 0.06–0.11 Hz, and thresholded to construct undirected graphs. These brain functional networks were small-world and economical in the sense of providing high global and local efficiency of parallel information processing for low connection cost. Efficiency was reduced disproportionately to cost in older people, and the detrimental effects of age on efficiency were localised to frontal and temporal cortical and subcortical regions. Dopamine antagonism also impaired global and local efficiency of the network, but this effect was differentially localised and did not interact with the effect of age. Brain functional networks have economical small-world properties—supporting efficient parallel information transfer at relatively low cost—which are differently impaired by normal aging and pharmacological blockade of dopamine transmission. --- paper_title: Neural Message Passing for Quantum Chemistry paper_content: Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels. --- paper_title: COVNET: a cooperative coevolutionary model for evolving artificial neural networks paper_content: This paper presents COVNET, a new cooperative coevolutionary model for evolving artificial neural networks. This model is based on the idea of coevolving subnetworks that must cooperate to form a solution for a specific problem, instead of evolving complete networks. The combination of these subnetworks is part of a coevolutionary process. The best combinations of subnetworks must be evolved together with the coevolution of the subnetworks. Several subpopulations of subnetworks coevolve cooperatively and genetically isolated.
The individual of every subpopulation are combined to form whole networks. This is a different approach from most current models of evolutionary neural networks which try to develop whole networks. COVNET places as few restrictions as possible over the network structure, allowing the model to reach a wide variety of architectures during the evolution and to be easily extensible to other kind of neural networks. The performance of the model in solving three real problems of classification is compared with a modular network, the adaptive mixture of experts and with the results presented in the bibliography. COVNET has shown better generalization and produced smaller networks than the adaptive mixture of experts and has also achieved results, at least, comparable with the results in the bibliography. --- paper_title: Spontaneous evolution of modularity and network motifs paper_content: Biological networks have an inherent simplicity: they are modular with a design that can be separated into units that perform almost independently. Furthermore, they show reuse of recurring patterns termed network motifs. Little is known about the evolutionary origin of these properties. Current models of biological evolution typically produce networks that are highly nonmodular and lack understandable motifs. Here, we suggest a possible explanation for the origin of modularity and network motifs in biology. We use standard evolutionary algorithms to evolve networks. A key feature in this study is evolution under an environment (evolutionary goal) that changes in a modular fashion. That is, we repeatedly switch between several goals, each made of a different combination of subgoals. We find that such “modularly varying goals” lead to the spontaneous evolution of modular network structure and network motifs. The resulting networks rapidly evolve to satisfy each of the different goals. Such switching between related goals may represent biological evolution in a changing environment that requires different combinations of a set of basic biological functions. The present study may shed light on the evolutionary forces that promote structural simplicity in biological networks and offers ways to improve the evolutionary design of engineered systems. --- paper_title: The evolutionary origins of modularity paper_content: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks—their organization as functional, sparsely connected subunits—but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. 
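To make the selection pressure described in the "evolutionary origins of modularity" entry above concrete, the following minimal Python sketch evolves random adjacency matrices under a scalar objective of the form performance minus a connection-cost penalty. It is an illustration only, not the authors' experimental setup: the `performance` stand-in, the population size, and the penalty weight `LAMBDA_COST` are all hypothetical choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 16        # hypothetical network size
LAMBDA_COST = 0.05  # hypothetical weight of the connection-cost term


def performance(adj):
    """Placeholder objective rewarding two weakly coupled blocks.

    This is NOT the task used in the cited study; it only stands in for
    'task performance' so that the selection loop below runs end to end.
    """
    half = N_NODES // 2
    within = adj[:half, :half].sum() + adj[half:, half:].sum()
    between = adj[:half, half:].sum() + adj[half:, :half].sum()
    return float(within - 2.0 * between)


def fitness(adj):
    # Scalarized version of the idea: performance minus a penalty
    # proportional to the number of connections (the "connection cost").
    return performance(adj) - LAMBDA_COST * float(adj.sum())


def mutate(adj, flips=2):
    child = adj.copy()
    for _ in range(flips):
        i, j = rng.integers(0, N_NODES, size=2)
        child[i, j] = 1 - child[i, j]  # add or remove one potential connection
    return child


# Simple truncation-selection evolutionary loop over random wiring diagrams.
population = [rng.integers(0, 2, size=(N_NODES, N_NODES)) for _ in range(20)]
for _generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]
    population = parents + [mutate(p) for p in parents for _ in range(3)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 2), "| connections:", int(best.sum()))
```

In the original study the performance term comes from an actual task and the performance/cost trade-off is handled multi-objectively; the scalarized form here is used only for brevity.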
--- paper_title: Constraining connectivity to encourage modularity in HyperNEAT paper_content: A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity. In particular, an extension is introduced called a Link Expression Output (HyperNEAT-LEO) that allows HyperNEAT to evolve the pattern of weights independently from the pattern of connection expression. Because HyperNEAT evolves such patterns as functions of geometry, important general topographic principles for organizing connectivity can be seeded into the initial population. For example, a key topographic concept in nature that encourages modularity is locality, that is, components of a module are located near each other. As experiments in this paper show, by seeding HyperNEAT with a bias towards local connectivity implemented through the LEO, modular structures arise naturally. Thus this paper provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity. --- paper_title: Evolving neural networks that are both modular and regular: HyperNEAT plus the connection cost technique paper_content: One of humanity's grand scientific challenges is to create artificially intelligent robots that rival natural animals in intelligence and agility. A key enabler of such animal complexity is the fact that animal brains are structurally organized in that they exhibit modularity and regularity, amongst other attributes. Modularity is the localization of function within an encapsulated unit. Regularity refers to the compressibility of the information describing a structure, and typically involves symmetries and repetition. These properties improve evolvability, but they rarely emerge in evolutionary algorithms without specific techniques to encourage them. It has been shown that (1) modularity can be evolved in neural networks by adding a cost for neural connections and, separately, (2) that the HyperNEAT algorithm produces neural networks with complex, functional regularities. In this paper we show that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity. Our results represent a stepping stone towards the goal of producing artificial neural networks that share key organizational properties with the brains of natural animals. --- paper_title: A simple neural network module for relational reasoning paper_content: Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. 
We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations. --- paper_title: Complex brain networks: graph theoretical analysis of structural and functional systems paper_content: In recent years, the principles of network science have increasingly been applied to the study of the brain's structural and functional organization. Bullmore and Sporns review this growing field of research and discuss its contributions to our understanding of brain function. --- paper_title: Efficient associative memory using small-world architecture paper_content: Abstract Most models of neural associative memory have used networks with broad connectivity. However, from both a neurobiological viewpoint and an implementation perspective, it is logical to minimize the length of inter-neural connections and consider networks whose connectivity is predominantly local. The “small-world networks” model described recently by Watts and Strogatz provides an interesting approach to this issue. In this paper, we show that associative memory networks with small-world architectures can provide the same retrieval performance as randomly connected networks while using a fraction of the total connection length. --- paper_title: Block-Based Neural Networks for Personalized ECG Signal Classification paper_content: This paper presents evolvable block-based neural networks (BbNNs) for personalized ECG heartbeat pattern classification. A BbNN consists of a 2-D array of modular component NNs with flexible structures and internal configurations that can be implemented using reconfigurable digital hardware such as field-programmable gate arrays (FPGAs). Signal flow between the blocks determines the internal configuration of a block as well as the overall structure of the BbNN. Network structure and the weights are optimized using local gradient-based search and evolutionary operators with the rates changing adaptively according to their effectiveness in the previous evolution period. Such adaptive operator rate update scheme ensures higher fitness on average compared to predetermined fixed operator rates. The Hermite transform coefficients and the time interval between two neighboring R-peaks of ECG signals are used as inputs to the BbNN. A BbNN optimized with the proposed evolutionary algorithm (EA) makes a personalized heartbeat pattern classifier that copes with changing operating environments caused by individual difference and time-varying characteristics of ECG signals. Simulation results using the Massachusetts Institute of Technology/Beth Israel Hospital (MIT-BIH) arrhythmia database demonstrate high average detection accuracies of ventricular ectopic beats (98.1%) and supraventricular ectopic beats (96.6%) patterns for heartbeat monitoring, being a significant improvement over previously reported electrocardiogram (ECG) classification results. 
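The block-based neural network entry above (and the companion BBNN paper that follows) describe the classifier as a 2-D array of blocks, each taking one of four internal configurations, with the structure encoded as configuration bits that a genetic algorithm can manipulate. The short Python sketch below illustrates only that encode/decode idea; the 3x3 grid, the two-bits-per-block scheme, and the configuration labels are assumptions made for illustration rather than the exact encoding used in the cited work.

```python
import random

random.seed(0)

# Hypothetical 3x3 grid of blocks; with four possible internal configurations,
# two structure bits per block are enough to select a configuration.
ROWS, COLS = 3, 3
CONFIGS = ("1-in/3-out", "3-in/1-out", "2-in/2-out (A)", "2-in/2-out (B)")


def random_genome():
    # One flat bit string, mirroring the idea that BBNN structure settings
    # map onto configuration bits of reconfigurable hardware.
    return [random.randint(0, 1) for _ in range(2 * ROWS * COLS)]


def decode(genome):
    """Turn the bit string back into a grid of block configurations."""
    grid = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            idx = 2 * (r * COLS + c)
            row.append(CONFIGS[2 * genome[idx] + genome[idx + 1]])
        grid.append(row)
    return grid


def mutate(genome, flips=1):
    child = list(genome)
    for _ in range(flips):
        pos = random.randrange(len(child))
        child[pos] ^= 1  # a genetic operator acting directly on structure bits
    return child


genome = random_genome()
for row in decode(genome):
    print(row)
print("mutant differs at", sum(a != b for a, b in zip(genome, mutate(genome))), "bit(s)")
```

A genetic algorithm would then operate directly on such bit strings, for example by flipping individual bits, which is what makes this representation convenient for FPGA-style reconfigurable hardware.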
--- paper_title: Block-Based Neural Networks paper_content: This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions such as 2D array and integer weights in order to allow easier implementation with reconfigurable hardware such as field programmable logic arrays (FPGA). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. --- paper_title: Multi-path Convolutional Neural Networks for Complex Image Classification paper_content: Convolutional Neural Networks demonstrate high performance on the ImageNet Large-Scale Visual Recognition Challenge contest. Nevertheless, the published results only show the overall performance for all image classes. There is no further analysis of why certain images get worse results and how they could be improved. In this paper, we provide deep performance analysis based on different types of images and point out the weaknesses of convolutional neural networks through experiment. We design a novel multiple-path convolutional neural network, which feeds different versions of images into separate paths to learn more comprehensive features. This model provides a better representation of images than the traditional single-path model. We achieve better classification results on the complex validation set, on both top-1 and top-5 scores, than the best ILSVRC 2013 classification model. --- paper_title: Going deeper with convolutions paper_content: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. --- paper_title: Rethinking the Inception Architecture for Computer Vision paper_content: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios.
Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set. --- paper_title: Modular Cellular Neural Network Structure for Wave-Computing-Based Image Processing paper_content: This paper introduces the modular cellular neural network (CNN), which is a new CNN structure constructed from nine one-layer modules with intercellular interactions between different modules. The new network is suitable for implementing many image processing operations. Inputting an image into the modules results in nine outputs. The topographic characteristic of the cell interactions allows the outputs to introduce new properties for image processing tasks. The stability of the system is proven and the performance is evaluated in several image processing applications. Experiment results on texture segmentation show the power of the proposed structure. The performance of the structure in a real edge detection application using the Berkeley dataset BSDS300 is also evaluated. --- paper_title: Parallel Multi-Dimensional LSTM, With Application to Fast Biomedical Volumetric Image Segmentation paper_content: Convolutional Neural Networks (CNNs) can be shifted across 2D images or 3D videos to segment them. They have a fixed input size and typically perceive only small local contexts of the pixels to be classified as foreground or background. In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive the entire spatio-temporal context of each pixel in a few sweeps through all pixels, especially when the RNN is a Long Short-Term Memory (LSTM). Despite these theoretical advantages, however, unlike CNNs, previous MD-LSTM variants were hard to parallelise on GPUs. Here we re-arrange the traditional cuboid order of computations in MD-LSTM in pyramidal fashion. The resulting PyraMiD-LSTM is easy to parallelise, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM achieved best known pixel-wise brain image segmentation results on MRBrainS13 (and competitive results on EM-ISBI12). --- paper_title: Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations paper_content: Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli.
At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of the critical state and the predictability and timing of oscillations for efficient information processing. --- paper_title: Parallel growing and training of neural networks using output parallelism paper_content: In order to find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural method is to divide the original problem into a set of subproblems. In this paper, we propose a simple neural-network task decomposition method based on output parallelism. By using this method, a problem can be divided flexibly into several subproblems as chosen, each of which is composed of the whole input vector and a fraction of the output vector. Each module (for one subproblem) is responsible for producing a fraction of the output vector of the original problem. The hidden structures for the original problem's output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Incorporated with a constructive learning algorithm, our method does not require excessive computation or any prior knowledge concerning decomposition. The feasibility of output parallelism is analyzed and proved. Some benchmarks are implemented to test the validity of this method. Their results show that this method can reduce computational time, increase learning speed and improve generalization accuracy for both classification and regression problems. --- paper_title: Modular and Hierarchically Modular Organization of Brain Networks paper_content: Brain networks are increasingly understood as one of a large class of information processing systems that share important organizational principles in common, including the property of a modular community structure. A module is topologically defined as a subset of highly inter-connected nodes which are relatively sparsely connected to nodes in other modules. In brain networks, topological modules are often made up of anatomically neighboring and/or functionally related cortical regions, and inter-modular connections tend to be relatively long distance.
Moreover, brain networks and many other complex systems demonstrate the property of hierarchical modularity, or modularity on several topological scales: within each module there will be a set of sub-modules, and within each sub-module a set of sub-sub-modules, etc. There are several general advantages to modular and hierarchically modular network organization, including greater robustness, adaptivity, and evolvability of network function. In this context, we review some of the mathematical concepts available for quantitative analysis of (hierarchical) modularity in brain networks and we summarize some of the recent work investigating modularity of structural and functional brain networks derived from analysis of human neuroimaging data. --- paper_title: Block based neural network for hypoglycemia detection paper_content: In this paper, an evolvable block-based neural network (BBNN) is presented for the detection of hypoglycemia episodes. The structure of the BBNN consists of a two-dimensional (2D) array of fundamental blocks with four variable input-output nodes and weight connections. Depending on the structure settings, each block can have one of four different internal configurations. To provide early detection of hypoglycemia episodes, physiological parameters such as the heart rate (HR) and the corrected QT interval (QTc) of the electrocardiogram (ECG) signal are used as the inputs of the BBNN. The overall structure and weights of the BBNN are optimized by an evolutionary algorithm called hybrid particle swarm optimization with wavelet mutation (HPSOWM). The optimized structures and weights of the BBNN are able to compensate for large variations of ECG patterns caused by individual and temporal differences, since fixed-structure classifiers easily fail to trace ECG signals with large variations. The ECG data of 15 patients are organized into a training set, a testing set and a validation set, each of which has 5 randomly selected patients. The simulation results show that the proposed algorithm, BBNN with HPSOWM, can successfully detect hypoglycemic episodes in T1DM, with a test sensitivity of 76.74% and a test specificity of 50.91%. --- paper_title: Real-life voice activity detection with LSTM Recurrent Neural Networks and an application to Hollywood movies paper_content: A novel, data-driven approach to voice activity detection is presented. The approach is based on Long Short-Term Memory Recurrent Neural Networks trained on standard RASTA-PLP frontend features. To approximate real-life scenarios, large amounts of noisy speech instances are mixed by using both read and spontaneous speech from the TIMIT and Buckeye corpora, and adding real long term recordings of diverse noise types. The approach is evaluated on unseen synthetically mixed test data as well as a real-life test set consisting of four full-length Hollywood movies. A frame-wise Equal Error Rate (EER) of 33.2% is obtained for the four movies and an EER of 9.6% is obtained for the synthetic test data at a peak SNR of 0 dB, clearly outperforming three state-of-the-art reference algorithms under the same conditions. --- paper_title: Eye Smarter than Scientists Believed: Neural Computations in Circuits of the Retina paper_content: We rely on our visual system to cope with the vast barrage of incoming light patterns and to extract features from the scene that are relevant to our well-being. The necessary reduction of visual information already begins in the eye.
In this review, we summarize recent progress in understanding the computations performed in the vertebrate retina and how they are implemented by the neural circuitry. A new picture emerges from these findings that helps resolve a vexing paradox between the retina's structure and function. Whereas the conventional wisdom treats the eye as a simple prefilter for visual images, it now appears that the retina solves a diverse set of specific tasks and provides the results explicitly to downstream brain areas. --- paper_title: Residual Networks Behave Like Ensembles of Relatively Shallow Networks paper_content: In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks. --- paper_title: Improving Neural Network Generalization by Combining Parallel Circuits with Dropout paper_content: In an attempt to solve the problem of lengthy training times in neural networks, we proposed Parallel Circuits (PCs), a biologically inspired architecture. Previous work has shown that this approach fails to maintain generalization performance in spite of achieving sharp speed gains. To address this issue, and motivated by the way Dropout prevents node co-adaptation, in this paper, we suggest an improvement by extending Dropout to the PC architecture. The paper provides multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results in which improved error rates are achieved in most cases, whilst maintaining the speed advantage of the PC approach. --- paper_title: Application of LSTM Neural Networks in Language Modelling paper_content: Artificial neural networks have become state-of-the-art in the task of language modelling on small corpora. While feed-forward networks are able to take into account only a fixed context length to predict the next word, recurrent neural networks (RNN) can take advantage of all previous words. Due to the difficulties in training RNNs, one way forward is to use the Long Short-Term Memory (LSTM) neural network architecture. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM).
Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. --- paper_title: Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models paper_content: We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings. --- paper_title: Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks paper_content: We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. --- paper_title: FractalNet: Ultra-Deep Neural Networks without Residuals paper_content: We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers.
In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer. --- paper_title: A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks paper_content: Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution. --- paper_title: Hierarchical Recurrent Neural Encoder for Video Representation with Application to Captioning paper_content: Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. 
Third, HRNE is able to uncover temporal transitions between frame chunks with different granularities, i.e., it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks. Notably, even using a single network with only RGB stream as input, HRNE beats all the recent systems which combine multiple inputs, such as RGB ConvNet plus 3D ConvNet. --- paper_title: DropCircuit : A Modular Regularizer for Parallel Circuit Networks paper_content: How to design and train increasingly large neural network models is a topic that has been actively researched for several years. However, while there exists a large number of studies on training deeper and/or wider models, there is relatively little systematic research particularly on the effective usage of wide modular neural networks. Addressing this gap, and in an attempt to solve the problem of lengthy training times, we proposed Parallel Circuits (PCs), a biologically inspired architecture based on the design of the retina. In previous work we showed that this approach fails to maintain generalization performance in spite of achieving sharp speed gains. To address this issue, and motivated by the way dropout prevents node co-adaptation, in this paper, we suggest an improvement by extending dropout to the parallel-circuit architecture. The paper provides empirical proof and multiple insights into this combination. Experiments show promising results in which improved error rates are achieved in most cases, whilst maintaining the speed advantage of the PC approach. --- paper_title: Aggregated Residual Transformations for Deep Neural Networks paper_content: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. --- paper_title: Modular neural networks with radial neural columnar architecture paper_content: A new radial columnar architecture for the modular assembly neural network is proposed together with a modification of this architecture which has fewer learning connections than the former fully connected modular assembly neural networks. Validation of the latter architecture has been done in experiments on recognition of handwritten digits of the MNIST database.
The experiments allow us to conclude that the efficiency of the modular neural network with a reduced number of learning connections is only slightly lower than that of the fully connected modular neural network. Also, the experiments have demonstrated that its recognition capability is higher than that of the LiRA classifier. The main result of the work is that the fully connected network can be successfully replaced by its reduced version while retaining almost the same performance and achieving a much higher speed of image processing. --- paper_title: A parallel circuit approach for improving the speed and generalization properties of neural networks paper_content: One of the common problems of neural networks, especially those with many layers, is their lengthy training times. We attempted to solve this problem at the algorithmic (not hardware) level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the BP algorithm flow in each subnetwork independently. Experimental results have shown the speed advantage of the proposed approach but also pointed out that the reduction is affected by multiple dependencies. The results also suggest that parallel circuits improve the generalization ability of neural networks presumably due to automatic problem decomposition. --- paper_title: Dynamic Routing Between Capsules paper_content: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule. --- paper_title: Experimentally Induced Retinal Projections to the Ferret Auditory Thalamus: Development of Clustered Eye-Specific Patterns in a Novel Target paper_content: We have examined the relative role of afferents and targets in pattern formation using a novel preparation, in which retinal projections in ferrets are induced to innervate the medial geniculate nucleus (MGN). We find that retinal projections to the MGN are arranged in scattered clusters. Clusters arising from the ipsilateral eye are frequently adjacent to, but spatially segregated from, clusters arising from the contralateral eye. Both clustering and eye-specific segregation in the MGN arise as a refinement of initially diffuse and overlapped projections. The shape, size, and orientation of retinal terminal clusters in the MGN closely match those of relay cell dendrites arrayed within fibrodendritic laminae in the MGN. We conclude that specific aspects of a projection system are regulated by afferents and others by targets.
Clustering of retinal projections within the MGN and eye-specific segregation involve progressive remodeling of retinal axon arbors, over a time period that closely parallels pattern formation by retinal afferents within their normal target, the lateral geniculate nucleus (LGN). Thus, afferent-driven mechanisms are implicated in these events. However, the termination zones are aligned within the normal cellular organization of the MGN, which does not differentiate into eye-specific cell layers similar to the LGN. Thus, target-driven mechanisms are implicated in lamina formation and cellular differentiation. --- paper_title: Generating Neuronal Diversity in the Mammalian Cerebral Cortex paper_content: The neocortex is the part of the brain responsible for execution of higher-order brain functions, including cognition, sensory perception, and sophisticated motor control. During evolution, the neocortex has developed an unparalleled neuronal diversity, which still remains partly unclassified and unmapped at the functional level. Here, we broadly review the structural blueprint of the neocortex and discuss the current classification of its neuronal diversity. We then cover the principles and mechanisms that build neuronal diversity during cortical development and consider the impact of neuronal class-specific identity in shaping cortical connectivity and function. --- paper_title: Highway Networks paper_content: There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on"information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. --- paper_title: Multi-column Deep Neural Networks for Image Classification paper_content: Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks. 
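At inference time, the multi-column scheme described in the entry above reduces to a simple rule: run several independently trained columns, each on its own preprocessed view of the input, and average their class probabilities. The Python sketch below shows only that averaging step; the `LinearColumn` stand-ins and the preprocessing functions are hypothetical placeholders, whereas real multi-column DNN columns are deep convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


class LinearColumn:
    """Stand-in for one trained 'column' (expert); a real multi-column DNN
    column would be a deep convolutional network trained on its own
    preprocessing variant."""

    def __init__(self, in_dim, n_classes):
        self.W = rng.normal(scale=0.1, size=(in_dim, n_classes))
        self.b = np.zeros(n_classes)

    def predict_proba(self, x):
        return softmax(x @ self.W + self.b)


def ensemble_predict(columns, preprocessors, x):
    # Each column sees its own preprocessed view of the input; the final
    # prediction is the plain average of the per-column class probabilities.
    probs = [col.predict_proba(pre(x)) for col, pre in zip(columns, preprocessors)]
    return np.mean(probs, axis=0)


# Toy usage: three columns, three hypothetical preprocessing variants.
columns = [LinearColumn(in_dim=64, n_classes=10) for _ in range(3)]
preprocessors = [
    lambda x: x,
    lambda x: x / (np.linalg.norm(x) + 1e-8),
    lambda x: x - x.mean(),
]
x = rng.normal(size=64)
print(ensemble_predict(columns, preprocessors, x).round(3))
```

Averaging the columns' softmax outputs, rather than selecting a single best column, is the design choice that lets differently biased columns compensate for one another.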
--- paper_title: Adaptive Mixtures of Local Experts paper_content: We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network. --- paper_title: MAttNet: Modular Attention Network for Referring Expression Comprehension paper_content: In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. --- paper_title: Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies paper_content: While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuo-motor policies for real robotic systems without relying entirely on large real-world robot datasets. --- paper_title: Video captioning with recurrent networks based on frame- and video-level features and visual content classification paper_content: In this paper, we describe the system for generating textual descriptions of short video clips using recurrent neural networks (RNN), which we used while participating in the Large Scale Movie Description Challenge 2015 in ICCV 2015. Our work builds on static image captioning systems with RNN based language models and extends this framework to videos utilizing both static image features and video-specific features. 
In addition, we study the usefulness of visual content classifiers as a source of additional information for caption generation. With experimental results we show that utilizing keyframe based features, dense trajectory video features and content classifier outputs together gives better performance than any one of them individually. --- paper_title: Learning to Discover Cross-Domain Relations with Generative Adversarial Networks paper_content: While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. --- paper_title: Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks paper_content: The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q3) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. --- paper_title: Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks paper_content: We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.
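The sentence-generator/paragraph-generator split described in the captioning entry above can be sketched as two coupled recurrent modules: a sentence-level RNN emits words, and a paragraph-level RNN consumes each sentence's final state and provides the initial state for the next sentence. The Python (PyTorch) skeleton below is a hedged illustration of that wiring only; the class name, the dimensions, and the choice of GRUs are ours, and the attention mechanisms and video features of the actual model are omitted.

```python
import torch
import torch.nn as nn


class HierarchicalCaptioner(nn.Module):
    """Skeleton of a two-level recurrent captioner: a sentence-level GRU emits
    words, and a paragraph-level GRU cell consumes each finished sentence
    state and produces the initial state for the next sentence."""

    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.sentence_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.paragraph_rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.init_state = nn.Linear(hidden_dim, hidden_dim)
        self.word_logits = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sentences):
        # sentences: list of LongTensors, each of shape (batch, sentence_len).
        batch = sentences[0].size(0)
        para_h = torch.zeros(batch, self.paragraph_rnn.hidden_size)
        all_logits = []
        for words in sentences:
            sent_h0 = self.init_state(para_h).unsqueeze(0)   # (1, batch, hidden)
            out, sent_hT = self.sentence_rnn(self.embed(words), sent_h0)
            all_logits.append(self.word_logits(out))         # per-word logits
            # The last sentence state summarizes the sentence and updates
            # the paragraph-level memory for the next sentence.
            para_h = self.paragraph_rnn(sent_hT.squeeze(0), para_h)
        return all_logits


# Toy usage: a "paragraph" of two sentences, batch of 2, sentence length 5.
model = HierarchicalCaptioner()
paragraph = [torch.randint(0, 1000, (2, 5)) for _ in range(2)]
logits = model(paragraph)
print([t.shape for t in logits])
```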
--- paper_title: Hierarchical Recurrent Neural Encoder for Video Representation with Application to Captioning paper_content: Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal transitions between frame chunks with different granularities, i.e., it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks. Notably, even using a single network with only RGB stream as input, HRNE beats all the recent systems which combine multiple inputs, such as RGB ConvNet plus 3D ConvNet. --- paper_title: Generative Adversarial Nets paper_content: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. --- paper_title: COVNET: a cooperative coevolutionary model for evolving artificial neural networks paper_content: This paper presents COVNET, a new cooperative coevolutionary model for evolving artificial neural networks. This model is based on the idea of coevolving subnetworks that must cooperate to form a solution for a specific problem, instead of evolving complete networks. The combination of this subnetworks is part of a coevolutionary process. The best combinations of subnetworks must be evolved together with the coevolution of the subnetworks. Several subpopulations of subnetworks coevolve cooperatively and genetically isolated. The individual of every subpopulation are combined to form whole networks. This is a different approach from most current models of evolutionary neural networks which try to develop whole networks. 
COVNET places as few restrictions as possible over the network structure, allowing the model to reach a wide variety of architectures during the evolution and to be easily extensible to other kind of neural networks. The performance of the model in solving three real problems of classification is compared with a modular network, the adaptive mixture of experts and with the results presented in the bibliography. COVNET has shown better generalization and produced smaller networks than the adaptive mixture of experts and has also achieved results, at least, comparable with the results in the bibliography. --- paper_title: PathNet: Evolution Channels Gradient Descent in Super Neural Networks paper_content: For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C). --- paper_title: Evolving Deep Neural Networks paper_content: The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to the best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future. --- paper_title: Duplication of Modules Facilitates the Evolution of Functional Specialization paper_content: The evolution of simulated robots with three different architectures is studied in this article. We compare a nonmodular feed-forward network, a hardwired modular, and a duplication-based modular motor control network.
We conclude that both modular architectures outperform the non-modular architecture, both in terms of rate of adaptation as well as the level of adaptation achieved. The main difference between the hardwired and duplication-based modular architectures is that in the latter the modules reached a much higher degree of functional specialization of their motor control units with regard to high-level behavioral functions. The hardwired architectures reach the same level of performance, but have a more distributed assignment of functional tasks to the motor control units. We conclude that the mechanism through which functional specialization is achieved is similar to the mechanism proposed for the evolution of duplicated genes. It is found that the duplication of multifunctional modules first leads to a change in the regulation of the module, leading to a differentiation of the functional context in which the module is used. Then the module adapts to the new functional context. After this second step the system is locked into a functionally specialized state. We suggest that functional specialization may be an evolutionary absorption state. --- paper_title: Analysis of biologically inspired Small-World networks paper_content: Small-World networks are highly clusterized networks with small distances between their nodes. There are some well known biological networks that present this kind of connectivity. On the other hand, the usual models of Small-World networks make use of undirected and unweighted graphs in order to represent the connectivity between the nodes of the network. These kind of graphs cannot model some essential characteristics of neural networks as, for example, the direction or the weight of the synaptic connections. In this paper we analyze different kinds of directed graphs and show that they can also present a Small-World topology when they are shifted from regular to random. Also analytical expressions are given for the cluster coefficient and the characteristic path of these graphs. 
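The two small-world measures discussed in the abstract above, the cluster coefficient and the characteristic path length, are straightforward to compute for a given adjacency matrix. The following is a minimal Python sketch of those metrics under simplifying assumptions (unweighted edges, direction ignored by symmetrizing the adjacency); it only illustrates the quantities and does not reproduce the analytical expressions of the cited paper.

```python
from collections import deque
import numpy as np

def clustering_coefficient(adj):
    """Average local clustering coefficient of an unweighted graph.

    adj: square 0/1 NumPy array; treated as undirected (symmetrized).
    """
    a = np.maximum(adj, adj.T)                 # ignore edge direction
    np.fill_diagonal(a, 0)
    coeffs = []
    for i in range(len(a)):
        nbrs = np.flatnonzero(a[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = a[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

def characteristic_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    a = np.maximum(adj, adj.T)
    lengths = []
    for s in range(len(a)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(a[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        lengths.extend(d for node, d in dist.items() if node != s)
    return float(np.mean(lengths))

# Toy example: a directed ring lattice with one "small-world" shortcut.
adj = np.zeros((6, 6), dtype=int)
for i in range(6):
    adj[i, (i + 1) % 6] = 1
adj[0, 3] = 1
print(clustering_coefficient(adj), characteristic_path_length(adj))
```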
--- paper_title: The evolutionary origins of modularity paper_content: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks—their organization as functional, sparsely connected subunits—but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. --- paper_title: Evolving Neural Networks through Augmenting Topologies paper_content: An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution. --- paper_title: Evolving Modular Architectures for Neural Networks paper_content: Neural networks that learn the What and Where task perform better if they possess a modular architecture for separately processing the identity and spatial location of objects. In previous simulations the modular architecture either was hardwired or it developed during an individual’s life based on a preference for short connections given a set of hardwired unit locations. We present two sets of simulations in which the network architecture is genetically inherited and it evolves in a population of neural networks in two different conditions: (1) both the architecture and the connection weights evolve; (2) the network architecture is inherited and it evolves but the connection weights are learned during life. The best results are obtained in condition (2). Condition (1) gives unsatisfactory results because (a) adapted sets of weights can suddenly become maladaptive if the architecture changes, (b) evolution fails to properly assign computational resources (hidden units) to the two tasks, (c) genetic linkage between sets of weights for different modules can result in a favourable mutation in one set of weights being accompanied by an unfavourable mutation in another set of weights. 
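Several of the papers summarized above evolve network architectures, in some cases with an explicit connection cost to encourage modularity, while weights are learned during an individual's lifetime. The sketch below is a toy illustration of that general recipe, not the algorithm of any cited paper: genomes are binary input-to-hidden connectivity masks, a least-squares readout stands in for lifetime learning, fitness is task accuracy minus a connection-cost term, and selection is simple truncation. The task, population size, and mutation rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "modular" task: two outputs, each depending on a different input pair.
X = rng.normal(size=(300, 4))
Y = np.stack([(X[:, 0] * X[:, 1] > 0), (X[:, 2] * X[:, 3] > 0)], axis=1).astype(float)

N_IN, N_HID = 4, 12
W_HID = rng.normal(size=(N_IN, N_HID))            # fixed random hidden weights

def fitness(mask, lam=0.01):
    """Task accuracy minus a connection cost for a binary input->hidden mask."""
    H = np.tanh(X @ (W_HID * mask))               # masked hidden activations
    W_out, *_ = np.linalg.lstsq(H, Y, rcond=None) # least-squares readout ("lifetime learning")
    acc = (((H @ W_out) > 0.5) == (Y > 0.5)).mean()
    return acc - lam * mask.sum()

def mutate(mask, p=0.05):
    flip = rng.random(mask.shape) < p
    return np.where(flip, 1 - mask, mask)

# Simple generational loop with truncation (elitist) selection.
pop = [rng.integers(0, 2, size=(N_IN, N_HID)).astype(float) for _ in range(20)]
for gen in range(30):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:5]
    pop = parents + [mutate(parents[rng.integers(len(parents))]) for _ in range(15)]

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 3), "connections:", int(best.sum()))
```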
--- paper_title: Task-dependent evolution of modularity in neural networks paper_content: There exist many ideas and assumptions about the development and meaning of modularity in biological and technical neural systems. We empirically study the evolution of connectionist models in the context of modular problems. For this purpose, we define quantitative measures for the degree of modularity and monitor them during evolutionary processes under different constraints. It turns out that the modularity of the problem is reflected by the architecture of adapted systems, although learning can counterbalance some imperfection of the architecture. The demand for fast learning systems increases the selective pressure towards modularity. --- paper_title: Evolving neural networks that are both modular and regular: HyperNEAT plus the connection cost technique paper_content: One of humanity's grand scientific challenges is to create artificially intelligent robots that rival natural animals in intelligence and agility. A key enabler of such animal complexity is the fact that animal brains are structurally organized in that they exhibit modularity and regularity, amongst other attributes. Modularity is the localization of function within an encapsulated unit. Regularity refers to the compressibility of the information describing a structure, and typically involves symmetries and repetition. These properties improve evolvability, but they rarely emerge in evolutionary algorithms without specific techniques to encourage them. It has been shown that (1) modularity can be evolved in neural networks by adding a cost for neural connections and, separately, (2) that the HyperNEAT algorithm produces neural networks with complex, functional regularities. In this paper we show that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity. Our results represent a stepping stone towards the goal of producing artificial neural networks that share key organizational properties with the brains of natural animals. --- paper_title: Complex brain networks: graph theoretical analysis of structural and functional systems paper_content: In recent years, the principles of network science have increasingly been applied to the study of the brain's structural and functional organization. Bullmore and Sporns review this growing field of research and discuss its contributions to our understanding of brain function. --- paper_title: Option Pricing With Modular Neural Networks paper_content: This paper investigates a nonparametric modular neural network (MNN) model to price the S&P-500 European call options. The modules are based on time to maturity and moneyness of the options. The option price function of interest is homogeneous of degree one with respect to the underlying index price and the strike price. When compared to an array of parametric and nonparametric models, the MNN method consistently exerts superior out-of-sample pricing performance. We conclude that modularity improves the generalization properties of standard feedforward neural network option pricing models (with and without the homogeneity hint). 
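The option-pricing study above builds its modules from a partition of the input space by moneyness and time to maturity. A minimal sketch of that domain-partitioning idea follows, with small linear models standing in for the neural-network experts and entirely synthetic option-like data; it is meant only to show the route-and-combine pattern, not the cited MNN model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy option-like data: features are moneyness m and time to maturity t.
m = rng.uniform(0.8, 1.2, 2000)
t = rng.uniform(0.05, 1.0, 2000)
price = np.maximum(m - 1.0, 0.0) + 0.4 * np.sqrt(t) * m + rng.normal(0, 0.01, 2000)

def module_id(m, t):
    """Hard routing: 2 moneyness bins x 2 maturity bins -> 4 modules."""
    return (m >= 1.0).astype(int) * 2 + (t >= 0.5).astype(int)

def fit_module(Xm, ym):
    # Each module is a small linear model on [1, m, t, m*t]; a stand-in for an expert net.
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

X = np.column_stack([np.ones_like(m), m, t, m * t])
ids = module_id(m, t)
modules = {k: fit_module(X[ids == k], price[ids == k]) for k in np.unique(ids)}

# Prediction routes each query to its module.
pred = np.array([X[i] @ modules[ids[i]] for i in range(len(m))])
print("RMSE, per-module experts:", round(float(np.sqrt(np.mean((pred - price) ** 2))), 4))
print("RMSE, single global model:",
      round(float(np.sqrt(np.mean((X @ fit_module(X, price) - price) ** 2))), 4))
```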
--- paper_title: Evolution of Neural Networks for Helicopter Control: Why Modularity Matters paper_content: The problem of the automatic development of controllers for vehicles for which the exact characteristics are not known is considered in the context of miniature helicopter flocking. A methodology is proposed in which neural network based controllers are evolved in a simulation using a dynamic model qualitatively similar to the physical helicopter. Several network architectures and evolutionary sequences are investigated, and two approaches are found that can evolve very competitive controllers. The division of the neural network into modules and of the task into incremental steps seems to be a precondition for success, and we analyse why this might be so. --- paper_title: End-to-end text recognition with convolutional neural networks paper_content: Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003. --- paper_title: Hierarchical semantic segmentation using modular convolutional neural networks paper_content: Image recognition tasks that involve identifying parts of an object or the contents of a vessel can be viewed as a hierarchical problem, which can be solved by initial recognition of the main object, followed by recognition of its parts or contents. To achieve such modular recognition, it is necessary to use the output of one recognition method (which identifies the general object) as the input for a second method (which identifies the parts or contents). In recent years, convolutional neural networks have emerged as the dominant method for segmentation and classification of images. This work examines a method for serially connecting convolutional neural networks for semantic segmentation of materials inside transparent vessels. It applies one fully convolutional neural net to segment the image into vessel and background, and the vessel region is used as an input for a second net which recognizes the contents of the glass vessel. Transferring the segmentation map generated by the first net to the second net was performed using the valve filter attention method that involves using different filters on different segments of the image. This modular semantic segmentation method outperforms a single step method in which both the vessel and its contents are identified using a single net. An advantage of the modular neural net is that it allows networks to be built from existing trained modules, as well as the transfer and reuse of trained net modules without the need for any retraining of the assembled net. 
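The text-recognition and semantic-segmentation systems above chain two trained modules, with the first module's output gating the input of the second. The toy sketch below shows only that pipeline pattern: a fixed threshold stands in for the segmentation network and a mean-intensity rule stands in for the content classifier; both are hypothetical placeholders rather than the cited models.

```python
import numpy as np

rng = np.random.default_rng(2)

def stage1_segment_vessel(img):
    """First module: coarse vessel-vs-background mask (stand-in for a segmentation net)."""
    return (img > 0.5).astype(float)

def stage2_classify_contents(img, vessel_mask):
    """Second module: operates only inside the region proposed by the first module."""
    gated = img * vessel_mask            # the mask gates the input, "valve"-style
    inside = gated[vessel_mask > 0]
    return "liquid" if inside.mean() > 0.6 else "empty"

# Synthetic image: a bright "vessel" blob on a dark background.
img = rng.uniform(0.0, 0.3, size=(64, 64))
img[20:44, 24:40] += 0.7                 # vessel region with bright contents

mask = stage1_segment_vessel(img)
print(stage2_classify_contents(img, mask))
```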
--- paper_title: Dense and Diverse Capsule Networks: Making the Capsules Learn Better paper_content: Past few years have witnessed exponential growth of interest in deep learning methodologies with rapidly improving accuracies and reduced computational complexity. In particular, architectures using Convolutional Neural Networks (CNNs) have produced state-of-the-art performances for image classification and object recognition tasks. Recently, Capsule Networks (CapsNet) achieved significant increase in performance by addressing an inherent limitation of CNNs in encoding pose and deformation. Inspired by such advancement, we asked ourselves, can we do better? We propose Dense Capsule Networks (DCNet) and Diverse Capsule Networks (DCNet++). The two proposed frameworks customize the CapsNet by replacing the standard convolutional layers with densely connected convolutions. This helps in incorporating feature maps learned by different layers in forming the primary capsules. DCNet, essentially adds a deeper convolution network, which leads to learning of discriminative feature maps. Additionally, DCNet++ uses a hierarchical architecture to learn capsules that represent spatial information in a fine-to-coarser manner, which makes it more efficient for learning complex data. Experiments on image classification task using benchmark datasets demonstrate the efficacy of the proposed architectures. DCNet achieves state-of-the-art performance (99.75%) on MNIST dataset with twenty fold decrease in total training iterations, over the conventional CapsNet. Furthermore, DCNet++ performs better than CapsNet on SVHN dataset (96.90%), and outperforms the ensemble of seven CapsNet models on CIFAR-10 by 0.31% with seven fold decrease in number of parameters. --- paper_title: Going deeper with convolutions paper_content: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. --- paper_title: Hierarchical Representations for Efficient Architecture Search paper_content: We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. 
We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour. --- paper_title: FractalNet: Ultra-Deep Neural Networks without Residuals paper_content: We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer. --- paper_title: Dynamic Routing Between Capsules paper_content: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule. ---
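The routing-by-agreement mechanism described in the capsule-network abstract above can be written compactly. The NumPy sketch below follows the usual formulation (softmax coupling coefficients over upper-level capsules, a squash non-linearity, and an agreement update of the routing logits); the tensor shapes and the number of iterations are illustrative choices, not values from the cited paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: shrinks short vectors toward 0 and long vectors to length < 1."""
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def routing_by_agreement(u_hat, n_iter=3):
    """u_hat: prediction vectors with shape (n_lower, n_upper, dim).

    Returns upper-level capsule outputs with shape (n_upper, dim).
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                            # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)    # coupling coefficients (softmax over upper capsules)
        s = (c[:, :, None] * u_hat).sum(axis=0)                 # weighted sum per upper capsule
        v = squash(s)                                           # upper capsule output
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)            # agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 4))    # 6 lower capsules, 3 upper capsules, dimension 4
print(routing_by_agreement(u_hat).shape)   # (3, 4)
```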
<format> Title: A Review of Modularization Techniques in Artificial Neural Networks Section 1: Introduction Description 1: Provide an overview of modularity in artificial neural networks, including its benefits and biological inspirations. Section 2: Modularity Description 2: Explain the concept of modularity within neural networks and its importance in the context of both artificial and biological neural systems. Section 3: Modularization Techniques Description 3: Present a taxonomy of modularization techniques, categorized based on levels of abstraction and analyze each technique in detail. Section 4: Domain Description 4: Discuss how domain modularization techniques partition the input space, including manual and learned approaches. Section 5: Topology Description 5: Describe various topological modular structures, such as highly-clustered non-regular, repeated block, multi-path, modular node, sequential, recursive, and multi-architectural. Section 6: Formation Description 6: Explain the methods used to construct the topology of modular neural networks, including manual, evolutionary, and learned formation techniques. Section 7: Integration Description 7: Detail the techniques for integrating module outputs, including cooperative and competitive strategies, and arithmetical-logical approaches. Section 8: Case Studies Description 8: Present case studies of state-of-the-art modular neural networks, emphasizing the applied modular techniques. Section 9: Conclusion Description 9: Summarize the main findings, discuss the advantages and challenges of modularity, and predict future directions in the field of modular neural networks. </format> The outline:
A frame semantic overview of NLP-based information extraction for cancer-related EHR notes
15
--- paper_title: Hierarchical attention networks for information extraction from cancer pathology reports paper_content: Objective ::: We explored how a deep learning (DL) approach based on hierarchical attention networks (HANs) can improve model performance for multiple information extraction tasks from unstructured cancer pathology reports compared to conventional methods that do not sufficiently capture syntactic and semantic contexts from free-text documents. ::: ::: ::: Materials and Methods ::: Data for our analyses were obtained from 942 deidentified pathology reports collected by the National Cancer Institute Surveillance, Epidemiology, and End Results program. The HAN was implemented for 2 information extraction tasks: (1) primary site, matched to 12 International Classification of Diseases for Oncology topography codes (7 breast, 5 lung primary sites), and (2) histological grade classification, matched to G1-G4. Model performance metrics were compared to conventional machine learning (ML) approaches including naive Bayes, logistic regression, support vector machine, random forest, and extreme gradient boosting, and other DL models, including a recurrent neural network (RNN), a recurrent neural network with attention (RNN w/A), and a convolutional neural network. ::: ::: ::: Results ::: Our results demonstrate that for both information tasks, HAN performed significantly better compared to the conventional ML and DL techniques. In particular, across the 2 tasks, the mean micro and macro F-scores for the HAN with pretraining were (0.852,0.708), compared to naive Bayes (0.518, 0.213), logistic regression (0.682, 0.453), support vector machine (0.634, 0.434), random forest (0.698, 0.508), extreme gradient boosting (0.696, 0.522), RNN (0.505, 0.301), RNN w/A (0.637, 0.471), and convolutional neural network (0.714, 0.460). ::: ::: ::: Conclusions ::: HAN-based DL models show promise in information abstraction tasks within unstructured clinical pathology reports. --- paper_title: Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer paper_content: Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. 
Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting. --- paper_title: Natural Language Processing Improves Identification of Colorectal Cancer Testing in the Electronic Medical Record paper_content: Background. Difficulty identifying patients in need of colorectal cancer (CRC) screening contributes to low screening rates. Objective. To use Electronic Health Record (EHR) data to identify patients with prior CRC testing. Design. A clinical natural language processing (NLP) system was modified to identify 4 CRC tests (colonoscopy, flexible sigmoidoscopy, fecal occult blood testing, and double contrast barium enema) within electronic clinical documentation. Text phrases in clinical notes referencing CRC tests were interpreted by the system to determine whether testing was planned or completed and to estimate the date of completed tests. Setting. Large academic medical center. Patients. 200 patients ≥50 years old who had completed ≥2 non-acute primary care visits within a 1-year period. Measures. Recall and precision of the NLP system, billing records, and human chart review were compared to a reference standard of human review of all available information sources. Results. For identification of all CRC t... --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. 
The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . --- paper_title: Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports paper_content: Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports. --- paper_title: Integrating Feature Selection and Feature Extraction Methods With Deep Learning to Predict Clinical Outcome of Breast Cancer paper_content: In many microarray studies, classifiers have been constructed based on gene signatures to predict clinical outcomes for various cancer sufferers. However, signatures originating from different studies often suffer from poor robustness when used in the classification of data sets independent from which they were generated from. In this paper, we present an unsupervised feature learning framework by integrating a principal component analysis algorithm and autoencoder neural network to identify different characteristics from gene expression profiles. As the foundation for the obtained features, an ensemble classifier based on the AdaBoost algorithm (PCA-AE-Ada) was constructed to predict clinical outcomes in breast cancer. During the experiments, we established an additional classifier with the same classifier learning strategy (PCA-Ada) in order to perform as a baseline to the proposed method, where the only difference is the training inputs. The area under the receiver operating characteristic curve index, Matthews correlation coefficient index, accuracy, and other evaluation parameters of the proposed method were tested on several independent breast cancer data sets and compared with representative gene signature-based algorithms including the baseline method. Experimental results demonstrate that the proposed method using deep learning techniques performs better than others. 
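Several of the studies above compare deep models against a conventional term-frequency baseline for assigning ICD-O-3 topography codes to pathology reports. A minimal scikit-learn sketch of such a baseline is shown below; the report snippets and code labels are synthetic placeholders rather than data from the cited work, and a real system would train on hundreds of annotated reports per class.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny synthetic stand-ins for de-identified pathology report snippets.
reports = [
    "invasive ductal carcinoma of the left breast upper outer quadrant",
    "infiltrating lobular carcinoma right breast nipple margin uninvolved",
    "adenocarcinoma of the right upper lobe of lung with visceral pleural invasion",
    "squamous cell carcinoma left lower lobe lung bronchial margin negative",
]
labels = ["C50.4", "C50.0", "C34.1", "C34.3"]    # illustrative ICD-O-3 topography codes

# Term-frequency (TF-IDF) features with a linear classifier as the conventional baseline.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, labels)
print(clf.predict(["lobular carcinoma in situ of the breast, lower inner quadrant"]))
```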
--- paper_title: Deep learning enabled national cancer surveillance paper_content: Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this talk I will discuss the latest deep learning technology, presenting both theoretical and practical perspectives that are relevant to natural language processing of clinical pathology reports. Using different deep learning architectures, I will present benchmark studies for various information extraction tasks and discuss their importance in supporting a comprehensive and scalable national cancer surveillance program. --- paper_title: FrameNet: A Knowledge Base for Natural Language Processing paper_content: Prof. Charles J. Fillmore had a lifelong interest in lexical semantics, and this culminated in the latter part of his life in a major research project, the FrameNet Project at the International Computer Science Institute in Berkeley, California (http://framenet.icsi.berkeley.edu). This paper reports on the background of this ongoing project, its connections to Fillmore’s other research interests, and briefly outlines applications and current directions of growth for FrameNet, including FrameNets in languages other than English. --- paper_title: Extracting timing and status descriptors for colonoscopy testing from electronic medical records paper_content: Colorectal cancer (CRC) screening rates are low despite confirmed benefits. The authors investigated the use of natural language processing (NLP) to identify previous colonoscopy screening in electronic records from a random sample of 200 patients at least 50 years old. The authors developed algorithms to recognize temporal expressions and ‘status indicators’, such as ‘patient refused’, or ‘test scheduled’. The new methods were added to the existing KnowledgeMap concept identifier system, and the resulting system was used to parse electronic medical records (EMR) to detect completed colonoscopies. Using as the ‘gold standard’ expert physicians’ manual review of EMR notes, the system identified timing references with a recall of 0.91 and precision of 0.95, colonoscopy status indicators with a recall of 0.82 and precision of 0.95, and references to actually completed colonoscopies with recall of 0.93 and precision of 0.95. The system was superior to using colonoscopy billing codes alone. Health services researchers and clinicians may find NLP a useful adjunct to traditional methods to detect CRC screening status. Further investigations must validate extension of NLP approaches for other types of CRC screening applications. --- paper_title: caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research. paper_content: The authors report on the development of the Cancer Tissue Information Extraction System (caTIES), an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. 
The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multicenter collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multiinstitutional translational research programs. --- paper_title: Automated Information Extraction on Treatment and Prognosis for Non–Small Cell Lung Cancer Radiotherapy Patients: Clinical Study paper_content: Background: In outcome studies of oncology patients undergoing radiation, researchers extract valuable information from medical records generated before, during, and after radiotherapy visits, such as survival data, toxicities, and complications. Clinical studies rely heavily on these data to correlate the treatment regimen with the prognosis to develop evidence-based radiation therapy paradigms. These data are available mainly in forms of narrative texts or table formats with heterogeneous vocabularies. Manual extraction of the related information from these data can be time consuming and labor intensive, which is not ideal for large studies. Objective: The objective of this study was to adapt the interactive information extraction platform Information and Data Extraction using Adaptive Learning (IDEAL-X) to extract treatment and prognosis data for patients with locally advanced or inoperable non–small cell lung cancer (NSCLC). Methods: We transformed patient treatment and prognosis documents into normalized structured forms using the IDEAL-X system for easy data navigation. The adaptive learning and user-customized controlled toxicity vocabularies were applied to extract categorized treatment and prognosis data, so as to generate structured output. Results: In total, we extracted data from 261 treatment and prognosis documents relating to 50 patients, with overall precision and recall more than 93% and 83%, respectively. For toxicity information extractions, which are important to study patient posttreatment side effects and quality of life, the precision and recall achieved 95.7% and 94.5% respectively. Conclusions: The IDEAL-X system is capable of extracting study data regarding NSCLC chemoradiation patients with significant accuracy and effectiveness, and therefore can be used in large-scale radiotherapy clinical data studies. [JMIR Med Inform 2018;6(1):e8] --- paper_title: Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning paper_content: Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. 
Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer. --- paper_title: Automated extraction of Biomarker information from pathology reports paper_content: Pathology reports are written in free-text form, which precludes efficient data gathering. We aimed to overcome this limitation and design an automated system for extracting biomarker profiles from accumulated pathology reports. We designed a new data model for representing biomarker knowledge. The automated system parses immunohistochemistry reports based on a “slide paragraph” unit defined as a set of immunohistochemistry findings obtained for the same tissue slide. Pathology reports are parsed using context-free grammar for immunohistochemistry, and using a tree-like structure for surgical pathology. The performance of the approach was validated on manually annotated pathology reports of 100 randomly selected patients managed at Seoul National University Hospital. High F-scores were obtained for parsing biomarker name and corresponding test results (0.999 and 0.998, respectively) from the immunohistochemistry reports, compared to relatively poor performance for parsing surgical pathology findings. However, applying the proposed approach to our single-center dataset revealed information on 221 unique biomarkers, which represents a richer result than biomarker profiles obtained based on the published literature. Owing to the data representation model, the proposed approach can associate biomarker profiles extracted from an immunohistochemistry report with corresponding pathology findings listed in one or more surgical pathology reports. Term variations are resolved by normalization to corresponding preferred terms determined by expanded dictionary look-up and text similarity-based search. Our proposed approach for biomarker data extraction addresses key limitations regarding data representation and can handle reports prepared in the clinical setting, which often contain incomplete sentences, typographical errors, and inconsistent formatting. --- paper_title: Natural Language Processing in Oncology: A Review. paper_content: IMPORTANCE ::: Natural language processing (NLP) has the potential to accelerate translation of cancer treatments from the laboratory to the clinic and will be a powerful tool in the era of personalized medicine. This technology can harvest important clinical variables trapped in the free-text narratives within electronic medical records. ::: ::: ::: OBSERVATIONS ::: Natural language processing can be used as a tool for oncological evidence-based research and quality improvement. Oncologists interested in applying NLP for clinical research can play pivotal roles in building NLP systems and, in doing so, contribute to both oncological and clinical NLP research. 
Herein, we provide an introduction to NLP and its potential applications in oncology, a description of specific tools available, and a review on the state of the current technology with respect to cancer case identification, staging, and outcomes quantification. ::: ::: ::: CONCLUSIONS AND RELEVANCE ::: More automated means of leveraging unstructured data from daily clinical practice is crucial as therapeutic options and access to individual-level health information increase. Research-minded oncologists may push the avenues of evidence-based research by taking advantage of the new technologies available with clinical NLP. As continued progress is made with applying NLP toward oncological research, incremental gains will lead to large impacts, building a cost-effective infrastructure for advancing cancer care. --- paper_title: Classifying Lung Cancer Knowledge in PubMed According to GO Terms Using Extreme Learning Machine paper_content: For a well-established digital library e.g., PubMed, searching in terms of a newly established ontology e.g., Gene Ontology GO is an extremely difficult task. Making such a digital library adaptive to any new ontology or to reorganize knowledge automatically is our main objective. The decomposition of the knowledge base into classes is a first step toward our main objective. In this paper, we will demonstrate an automated linking scheme for PubMed citations with GO terms using an improved version of extreme learning machine ELM type algorithms. ELM is an emergent technology, which has shown excellent performance in large data classification problems, with fast learning speeds. --- paper_title: Effective Mapping of Biomedical Text to the UMLS Metathesaurus: The MetaMap Program paper_content: Abstract ::: The UMLS Metathesaurus, the largest thesaurus in the biomedical domain, provides a representation of biomedical knowledge consisting of concepts classified by semantic type and both hierarchical and non-hierarchical relationships among the concepts. This knowledge has proved useful for many applications including decision support systems, management of patient records, information retrieval (IR) and data mining. Gaining effective access to the knowledge is critical to the success of these applications. This paper describes MetaMap, a program developed at the National Library of Medicine (NLM) to map biomedical text to the Metathesaurus or, equivalently, to discover Metathesaurus concepts referred to in text. MetaMap uses a knowledge intensive approach based on symbolic, natural language processing (NLP) and computational linguistic techniques. Besides being applied for both IR and data mining applications, MetaMap is one of the foundations of NLM's Indexing Initiative System which is being applied to both semi-automatic and fully automatic indexing of the biomedical literature at the library. --- paper_title: Application of Text Information Extraction System for Real-Time Cancer Case Identification in an Integrated Healthcare Organization paper_content: Background: Surgical pathology reports (SPR) contain rich clinical diagnosis information. The text information extraction system (TIES) is an end-to-end application leveraging natural language processing technologies and focused on the processing of pathology and/or radiology reports. Methods: We deployed the TIES system and integrated SPRs into the TIES system on a daily basis at Kaiser Permanente Southern California. 
The breast cancer cases diagnosed in December 2013 from the Cancer Registry (CANREG) were used to validate the performance of the TIES system. The National Cancer Institute Metathesaurus (NCIM) concept terms and codes to describe breast cancer were identified through the Unified Medical Language System Terminology Service (UTS) application. The identified NCIM codes were used to search for the coded SPRs in the back-end datastore directly. The identified cases were then compared with the breast cancer patients pulled from CANREG. Results: A total of 437 breast cancer concept terms and 14 combinations of “breast” and “cancer” terms were identified from the UTS application. A total of 249 breast cancer cases diagnosed in December 2013 was pulled from CANREG. Out of these 249 cases, 241 were successfully identified by the TIES system from a total of 457 reports. The TIES system also identified an additional 277 cases that were not part of the validation sample. Out of the 277 cases, 11% were determined as highly likely to be cases after manual examinations, and 86% were in CANREG but were diagnosed in months other than December of 2013. Conclusions: The study demonstrated that the TIES system can effectively identify potential breast cancer cases in our care setting. Identified potential cases can be easily confirmed by reviewing the corresponding annotated reports through the front-end visualization interface. The TIES system is a great tool for identifying potential various cancer cases in a timely manner and on a regular basis in support of clinical research studies. --- paper_title: Efficient identification of nationally mandated reportable cancer cases using natural language processing and machine learning paper_content: Objective To help cancer registrars efficiently and accurately identify reportable cancer cases. ::: ::: Material and Methods The Cancer Registry Control Panel (CRCP) was developed to detect mentions of reportable cancer cases using a pipeline built on the Unstructured Information Management Architecture – Asynchronous Scaleout (UIMA-AS) architecture containing the National Library of Medicine’s UIMA MetaMap annotator as well as a variety of rule-based UIMA annotators that primarily act to filter out concepts referring to nonreportable cancers. CRCP inspects pathology reports nightly to identify pathology records containing relevant cancer concepts and combines this with diagnosis codes from the Clinical Electronic Data Warehouse to identify candidate cancer patients using supervised machine learning. Cancer mentions are highlighted in all candidate clinical notes and then sorted in CRCP’s web interface for faster validation by cancer registrars. ::: ::: Results CRCP achieved an accuracy of 0.872 and detected reportable cancer cases with a precision of 0.843 and a recall of 0.848. CRCP increases throughput by 22.6% over a baseline (manual review) pathology report inspection system while achieving a higher precision and recall. Depending on registrar time constraints, CRCP can increase recall to 0.939 at the expense of precision by incorporating a data source information feature. ::: ::: Conclusion CRCP demonstrates accurate results when applying natural language processing features to the problem of detecting patients with cases of reportable cancer from clinical notes. 
We show that implementing only a portion of cancer reporting rules in the form of regular expressions is sufficient to increase the precision, recall, and speed of the detection of reportable cancer cases when combined with off-the-shelf information extraction software and machine learning. --- paper_title: Talking About My Care: Detecting Mentions of Hormonal Therapy Adherence Behavior in an Online Breast Cancer Community. paper_content: Hormonal therapy adherence is challenging for many patients with hormone-receptor-positive breast cancer. Gaining intuition into their adherence behavior would assist in improving outcomes by pinpointing, and eventually addressing, why patients fail to adhere. While traditional adherence studies rely on survey-based methods or electronic medical records, online health communities provide a supplemental data source to learn about such behavior and often on a much larger scale. In this paper, we focus on an online breast cancer discussion forum and propose a framework to automatically extract hormonal therapy adherence behavior (HTAB) mentions. The framework compares medical term usage when describing when a patient is taking hormonal therapy medication and interrupting their treatment (e.g., stop/pause taking medication). We show that by using shallow neural networks, in the form of word2vec, the learned features can be applied to build efficient HTAB mention classifiers. Through medical term comparison, we find that patients who exhibit an interruption behavior are more likely to mention depression and their care providers, while patients with continuation behavior are more likely to mention common side effects (e.g., hot flashes, nausea and osteoporosis), vitamins and exercise. --- paper_title: Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month paper_content: Background: One in eight women will develop breast cancer in her lifetime. The best-known awareness event is breast cancer awareness month (BCAM). BCAM month outreach efforts have been associated with increased media coverage, screening mammography and online information searching. Traditional mass media coverage has been enhanced by social media. However, there is a dearth of literature about how social media is used during awareness-related events. The purpose of this research was to understand how Twitter is being used during BCAM. Methods: This was a cross-sectional, descriptive study. We collected breast cancer-related tweets from 26 September - 12 November 2012, using Twitter’s application programming interface. We classified Twitter users into organizations, individuals, and celebrities; each tweet was classified as an original or a retweet, and inclusion of a mention, meaning a reference to another Twitter user with @username. Statistical methods included ANOVA and chi square. For content analysis, we used computational linguistics techniques, specifically the MALLET implementation of the unsupervised topic modeling algorithm Latent Dirichlet Allocation. Results: There were 1,351,823 tweets by 797,827 unique users. Tweets spiked dramatically the first few days then tapered off. There was an average of 1.69 tweets per user. The majority of users were individuals. Nearly all of the tweets were original. Organizations and celebrities posted more often than individuals. On average celebrities made far more impressions; they were also retweeted more often and their tweets were more likely to include mentions. 
Individuals were more likely to direct a tweet to a specific person. Organizations and celebrities emphasized fundraisers, early detection, and diagnoses while individuals tweeted about wearing pink. Conclusions: Tweeting about breast cancer was a singular event. The majority of tweets did not promote any specific preventive behavior. Twitter is being used mostly as a one-way communication tool. To expand the reach of the message and maximize the potential for word-of-mouth marketing using Twitter, organizations need a strategic communications plan to ensure on-going social media conversations. Organizations may consider collaborating with individuals and celebrities in these conversations. Social media communication strategies that emphasize fundraising for breast cancer research seem particularly appropriate. --- paper_title: Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications paper_content: We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at . The cTAKES builds on existing open-source technologies—the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. 
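Systems such as the ones summarized above combine dictionary lookup against a controlled vocabulary with rules for context such as negation. The sketch below is a deliberately small stand-in for that pattern, with tiny hand-made dictionaries and a fixed look-behind window for negation cues; production systems instead map to UMLS or NCI Metathesaurus concepts and use trained components, so none of this code reflects the cited implementations.

```python
import re

# Toy dictionaries; the codes are illustrative ICD-O-3 style identifiers.
SITES = {"colon": "C18", "sigmoid colon": "C18.7", "breast": "C50"}
HISTOLOGIES = {"adenocarcinoma": "8140/3", "ductal carcinoma": "8500/3"}
NEGATION_CUES = ("no evidence of", "negative for", "without")

def extract_concepts(text, vocab):
    """Longest-match dictionary lookup with a crude look-behind negation check."""
    found, used = [], []
    lowered = text.lower()
    for term, code in sorted(vocab.items(), key=lambda kv: -len(kv[0])):
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            if any(m.start() < e and m.end() > s for s, e in used):  # skip overlapping shorter terms
                continue
            used.append((m.start(), m.end()))
            window = lowered[max(0, m.start() - 40):m.start()]       # text just before the mention
            negated = any(cue in window for cue in NEGATION_CUES)
            found.append({"term": term, "code": code, "negated": negated})
    return found

report = ("Sigmoid colon, biopsy: adenocarcinoma, moderately differentiated. "
          "No evidence of ductal carcinoma.")
for hit in extract_concepts(report, {**SITES, **HISTOLOGIES}):
    print(hit)
```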
--- paper_title: Using Clinical Narratives and Structured Data to Identify Distant Recurrences in Breast Cancer paper_content: Accurately identifying distant recurrences in breast cancer from the Electronic Health Records (EHR) is important for both clinical care and secondary analysis. Although multiple applications have been developed for computational phenotyping in breast cancer, distant recurrence identification still relies heavily on manual chart review. In this study, we aim to develop a model that identifies distant recurrences in breast cancer using clinical narratives and structured data from EHR. We apply MetaMap to extract features from clinical narratives and also retrieve structured clinical data from EHR. Using these features, we train a support vector machine model to identify distant recurrences in breast cancer patients. We train the model using 1,396 double-annotated subjects and validate the model using 599 double-annotated subjects. In addition, we validate the model on a set of 4,904 single-annotated subjects as a generalization test. We obtained a high area under curve (AUC) score of 0.92 (SD=0.01) in the cross-validation using the training dataset, then obtained AUC scores of 0.95 and 0.93 in the held-out test and generalization test using 599 and 4,904 samples respectively. Our model can accurately and efficiently identify distant recurrences in breast cancer by combining features extracted from unstructured clinical narratives and structured clinical data. --- paper_title: Automating the Determination of Prostate Cancer Risk Strata From Electronic Medical Records. paper_content: Purpose ::: Risk stratification underlies system-wide efforts to promote the delivery of appropriate prostate cancer care. Although the elements of risk stratum are available in the electronic medical record, manual data collection is resource intensive. Therefore, we investigated the feasibility and accuracy of an automated data extraction method using natural language processing (NLP) to determine prostate cancer risk stratum. ::: ::: ::: Methods ::: Manually collected clinical stage, biopsy Gleason score, and preoperative prostate-specific antigen (PSA) values from our prospective prostatectomy database were used to categorize patients as low, intermediate, or high risk by D'Amico risk classification. NLP algorithms were developed to automate the extraction of the same data points from the electronic medical record, and risk strata were recalculated. The ability of NLP to identify elements sufficient to calculate risk (recall) was calculated, and the accuracy of NLP was compared with that of manually collected data using the weighted Cohen's κ statistic. ::: ::: ::: Results ::: Of the 2,352 patients with available data who underwent prostatectomy from 2010 to 2014, NLP identified sufficient elements to calculate risk for 1,833 (recall, 78%). NLP had a 91% raw agreement with manual risk stratification (κ = 0.92; 95% CI, 0.90 to 0.93). The κ statistics for PSA, Gleason score, and clinical stage extraction by NLP were 0.86, 0.91, and 0.89, respectively; 91.9% of extracted PSA values were within ± 1.0 ng/mL of the manually collected PSA levels. ::: ::: ::: Conclusion ::: NLP can achieve more than 90% accuracy on D'Amico risk stratification of localized prostate cancer, with adequate recall. This figure is comparable to other NLP tasks and illustrates the known trade off between recall and accuracy. 
Automating the collection of risk characteristics could be used to power real-time decision support tools and scale up quality measurement in cancer care. --- paper_title: Information Extraction for Tracking Liver Cancer Patients' Statuses: From Mixture of Clinical Narrative Report Types paper_content: Objective: To provide an efficient way for tracking patients' condition over long periods of time and to facilitate the collection of clinical data from different types of narrative reports, it is critical to develop an efficient method for smoothly analyzing the clinical data accumulated in narrative reports. Materials and Methods: To facilitate liver cancer clinical research, a method was developed for extracting clinical factors from various types of narrative clinical reports, including ultrasound reports, radiology reports, pathology reports, operation notes, admission notes, and discharge summaries. An information extraction (IE) module was developed for tracking disease progression in liver cancer patients over time, and a rule-based classifier was developed for answering whether patients met the clinical research eligibility criteria. The classifier provided the answers and direct/indirect evidence (evidence sentences) for the clinical questions. To evaluate the implemented IE module and ... --- paper_title: Automatic Processing of Anatomic Pathology Reports in the Italian Language to Enhance the Reuse of Clinical Data. paper_content: Medical reports often contain a lot of relevant information in the form of free text. To reuse these unstructured texts for biomedical research, it is important to extract structured data from them. In this work, we adapted a previously developed information extraction system to the oncology domain, to process a set of anatomic pathology reports in the Italian language. The information extraction system relies on a domain ontology, which was adapted and refined in an iterative way. The final output was evaluated by a domain expert, with promising results. --- paper_title: Using machine learning to parse breast pathology reports paper_content: PURPOSE: Extracting information from electronic medical record is a time-consuming and expensive process when done manually. Rule-based and machine learning techniques are two approaches to solving this problem. In this study, we trained a machine learning model on pathology reports to extract pertinent tumor characteristics, which enabled us to create a large database of attribute searchable pathology reports. This database can be used to identify cohorts of patients with characteristics of interest. METHODS: We collected a total of 91,505 breast pathology reports from three Partners hospitals: Massachusetts General Hospital, Brigham and Women's Hospital, and Newton-Wellesley Hospital, covering the period from 1978 to 2016. We trained our system with annotations from two datasets, consisting of 6295 and 10,841 manually annotated reports. The system extracts 20 separate categories of information, including atypia types and various tumor characteristics such as receptors. We also report a learning curve analysis to show how much annotation our model needs to perform reasonably. RESULTS: The model accuracy was tested on 500 reports that did not overlap with the training set. The model achieved accuracy of 90% for correctly parsing all carcinoma and atypia categories for a given patient. The average accuracy for individual categories was 97%.
Using this classifier, we created a database of 91,505 parsed pathology reports. CONCLUSIONS: Our learning curve analysis shows that the model can achieve reasonable results even when trained on a few annotations. We developed a user-friendly interface to the database that allows physicians to easily identify patients with target characteristics and export the matching cohort. This model has the potential to reduce the effort required for analyzing large amounts of data from medical records, and to minimize the cost and time required to glean scientific insight from these data. --- paper_title: Pathologic findings in reduction mammoplasty procedures identified by natural language processing of breast pathology reports: A surrogate for the population incidence of cancer and high risk lesions. paper_content: e13569. Background: Breast reduction surgery removes a random sample of breast tissue in otherwise asymptomatic women and thus provides a method to evaluate the background incidence of breast patholo... --- paper_title: Development of a Natural Language Processing Engine to Generate Bladder Cancer Pathology Data for Health Services Research paper_content: OBJECTIVE: To take the first step toward assembling population-based cohorts of patients with bladder cancer with longitudinal pathology data, we developed and validated a natural language processing (NLP) engine that abstracts pathology data from full-text pathology reports. METHODS: Using 600 bladder pathology reports randomly selected from the Department of Veterans Affairs, we developed and validated an NLP engine to abstract data on histology, invasion (presence vs absence and depth), grade, the presence of muscularis propria, and the presence of carcinoma in situ. Our gold standard was based on an independent review of reports by 2 urologists, followed by adjudication. We assessed the NLP performance by calculating the accuracy, the positive predictive value, and the sensitivity. We subsequently applied the NLP engine to pathology reports from 10,725 patients with bladder cancer. RESULTS: When comparing the NLP output to the gold standard, NLP achieved the highest accuracy (0.98) for the presence vs the absence of carcinoma in situ. Accuracy for histology, invasion (presence vs absence), grade, and the presence of muscularis propria ranged from 0.83 to 0.96. The most challenging variable was depth of invasion (accuracy 0.68), with an acceptable positive predictive value for lamina propria (0.82) and for muscularis propria (0.87) invasion. The validated engine was capable of abstracting pathologic characteristics for 99% of the patients with bladder cancer. CONCLUSION: NLP had high accuracy for 5 of 6 variables and abstracted data for the vast majority of the patients. This now allows for the assembly of population-based cohorts with longitudinal pathology data. --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem.
The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . --- paper_title: Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer paper_content: Importance Cancer care guidelines recommend aligning surveillance frequency with underlying cancer risk, ie, more frequent surveillance for patients at high vs low risk of cancer recurrence. Objective To assess the extent to which such risk-aligned surveillance is practiced within US Department of Veterans Affairs facilities by classifying surveillance patterns for low- vs high-risk patients with early-stage bladder cancer. Design, Setting, and Participants US national retrospective cohort study of a population-based sample of patients diagnosed with low-risk or high-risk early-stage bladder between January 1, 2005, and December 31, 2011, with follow-up through December 31, 2014. Analyses were performed March 2017 to April 2018. The study included all Veterans Affairs facilities (n = 85) where both low- and high-risk patients were treated. Exposures Low-risk vs high-risk cancer status, based on definitions from the European Association of Urology risk stratification guidelines and on data extracted from diagnostic pathology reports via validated natural language processing algorithms. Main Outcomes and Measures Adjusted cystoscopy frequency for low-risk and high-risk patients for each facility, estimated using multilevel modeling. Results The study included 1278 low-risk and 2115 high-risk patients (median [interquartile range] age, 77 [71-82] years; 99% [3368 of 3393] male). Across facilities, the adjusted frequency of surveillance cystoscopy ranged from 3.7 to 6.2 (mean, 4.8) procedures over 2 years per patient for low-risk patients and from 4.6 to 6.0 (mean, 5.4) procedures over 2 years per patient for high-risk patients. In 70 of 85 facilities, surveillance was performed at a comparable frequency for low- and high-risk patients, differing by less than 1 cystoscopy over 2 years. Surveillance frequency among high-risk patients statistically significantly exceeded surveillance among low-risk patients at only 4 facilities. Across all facilities, surveillance frequencies for low- vs high-risk patients were moderately strongly correlated (r = 0.52;P Conclusions and Relevance Patients with early-stage bladder cancer undergo cystoscopic surveillance at comparable frequencies regardless of risk. This finding highlights the need to understand barriers to risk-aligned surveillance with the goal of making it easier for clinicians to deliver it in routine practice. --- paper_title: Automated Cancer Registry Notifications: Validation of a Medical Text Analytics System for Identifying Patients with Cancer from a State-Wide Pathology Repository. 
paper_content: The paper assesses the utility of Medtex on automating Cancer Registry notifications from narrative histology and cytology reports from the Queensland state-wide pathology information system. A corpus of 45.3 million pathology HL7 messages (including 119,581 histology and cytology reports) from a Queensland pathology repository for the year of 2009 was analysed by Medtex for cancer notification. Reports analysed by Medtex were consolidated at a patient level and compared against patients with notifiable cancers from the Queensland Oncology Repository (QOR). A stratified random sample of 1,000 patients was manually reviewed by a cancer clinical coder to analyse agreements and discrepancies. Sensitivity of 96.5% (95% confidence interval: 94.5-97.8%), specificity of 96.5% (95.3-97.4%) and positive predictive value of 83.7% (79.6-86.8%) were achieved for identifying cancer notifiable patients. Medtex achieved high sensitivity and specificity across the breadth of cancers, report types, pathology laboratories and pathologists throughout the State of Queensland. The high sensitivity also resulted in the identification of cancer patients that were not found in the QOR. High sensitivity was at the expense of positive predictive value; however, these cases may be considered as lower priority to Cancer Registries as they can be quickly reviewed. Error analysis revealed that system errors tended to be tumour stream dependent. Medtex is proving to be a promising medical text analytic system. High value cancer information can be generated through intelligent data classification and extraction on large volumes of unstructured pathology reports. --- paper_title: Automatic Extraction of Breast Cancer Information from Clinical Reports paper_content: The majority of clinical data is only available in unstructured text documents. Thus, their automated usage in data-based clinical application scenarios, like quality assurance and clinical decision support by treatment suggestions, is hindered because it requires high manual annotation efforts. In this work, we introduce a system for the automated processing of clinical reports of mamma carcinoma patients that allows for the automatic extraction and seamless processing of relevant textual features. Its underlying information extraction pipeline employs a rule-based grammar approach that is integrated with semantic technologies to determine the relevant information from the patient record. The accuracy of the system, developed with nine thousand clinical documents, reaches accuracy levels of 90% for lymph node status and 69% for the structurally most complex feature, the hormone status. --- paper_title: Machine learning to parse breast pathology reports in Chinese paper_content: Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. 
After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The Natural language processing algorithm was tested by using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages. --- paper_title: Structured Pathology Reporting for Cancer from Free Text: Lung Cancer Case Study paper_content: Objective: To automatically generate structured reports for cancer, including TNM (Tumour-Node-Metastases) staging information, from free-text (non-structured) pathology reports. Method: A symbolic rule-based classification approach was proposed to identify symbols (or clinical concepts) in free-text reports that were subsumed by items specified in a structured report. Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) was used as a base ontology to provide the semantics and relationships between concepts for subsumption querying. Synthesised values from the structured report such as TNM stages were also classified by building logic from relevant structured report items. The College of American Pathologists’ (CAP) surgical lung resection cancer checklist was used to demonstrate the methodology. Results: Checklist items were identified in the free text report and used for structured reporting. The synthesised TNM staging values classified by the system were evaluated against explicitly mentioned TNM stages from 487 reports and achieved an overall accuracy of 78%, 89% and 95% for T, N and M stages respectively. Conclusion: A system to generate structured cancer case reports from free-text pathology reports using symbolic rule-based classification techniques was developed and shows promise. The approach can be easily adapted for other cancer case structured reports. --- paper_title: Pathologic findings in reduction mammoplasty specimens: a surrogate for the population prevalence of breast cancer and high-risk lesions paper_content: Mammoplasty removes random samples of breast tissue from asymptomatic women providing a unique method for evaluating background prevalence of breast pathology in normal population. Our goal was to identify the rate of atypical breast lesions and cancers in women of various ages in the largest mammoplasty cohort reported to date. We analyzed pathologic reports from patients undergoing bilateral mammoplasty, using natural language processing algorithm, verified by human review. Patients with a prior history of breast cancer or atypia were excluded. A total of 4775 patients were deemed eligible. Median age was 40 (range 13–86) and was higher in patients with any incidental finding compared to patients with normal reports (52 vs. 39 years, p = 0.0001). 
Pathological findings were detected in 7.06% (337) of procedures. Benign high-risk lesions were found in 299 patients (6.26%). Invasive carcinoma and ductal carcinoma in situ were detected in 15 (0.31%) and 23 (0.48%) patients, respectively. The rate of atypias and cancers increased with age. The overall rate of abnormal findings in asymptomatic patients undergoing mammoplasty was 7.06%, increasing with age. As these results are based on random sample of breast tissue, they likely underestimate the prevalence of abnormal findings in asymptomatic women. --- paper_title: Filter pruning of Convolutional Neural Networks for text classification: A case study of cancer pathology report comprehension paper_content: Convolutional Neural Networks (CNN) have recently demonstrated effective performance in many Natural Language Processing tasks. In this study, we explore a novel approach for pruning a CNN's convolution filters using our new data-driven utility score. We have applied this technique to an information extraction task of classifying a dataset of cancer pathology reports by cancer type, a highly imbalanced dataset. Compared to standard CNN training, our new algorithm resulted in a nearly .07 increase in the micro-averaged F1-score and a strong .22 increase in the macro-averaged F1-score using a model with nearly a third fewer network weights. We show how directly utilizing a network's interpretation of data can result in strong performance gains, particularly with severely imbalanced datasets. --- paper_title: Creating a rule based system for text mining of Norwegian breast cancer pathology reports paper_content: National cancer registries collect cancer related information from multiple sources and make it available for research. Part of this information originates from pathology reports, and in this pre-study the possibility of a system for automatic extraction of information from Norwegian pathology reports is investigated. A set of 40 pathology reports describing breast cancer tissue samples has been used to develop a rule based system for information extraction. To validate the performance of this system its output has been compared to the data produced by experts doing manual encoding of the same pathology reports. On average, a precision of 80%, a recall of 98% and an F-score of 86% has been achieved, showing that such a system is indeed feasible. --- paper_title: Deep learning enabled national cancer surveillance paper_content: Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this talk I will discuss the latest deep learning technology, presenting both theoretical and practical perspectives that are relevant to natural language processing of clinical pathology reports. Using different deep learning architectures, I will present benchmark studies for various information extraction tasks and discuss their importance in supporting a comprehensive and scalable national cancer surveillance program. --- paper_title: Assessing the natural language processing capabilities of IBM Watson for oncology using real Australian lung cancer cases. paper_content: e18229Background: Optimal treatment decisions require information from multiple sources and formats even in the context of electronic medical records. In addition, structured data such as that coll... 
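Several entries above, such as the Norwegian rule-based breast pathology system and the lung cancer structured-reporting study, rely on hand-written rules to pull staging and receptor information out of free-text reports. The snippet below is a deliberately small, hypothetical illustration of that general approach using regular expressions; the patterns, field names, and sample report are invented and do not reproduce the rules of any cited system.

```python
# Toy rule-based extractor, loosely in the spirit of the rule-based pathology
# text-mining systems cited above; the patterns and report text are invented
# for illustration and are not the rules used by any of those systems.
import re

PATTERNS = {
    # TNM components as they often appear in free text, e.g. "pT2 pN1 M0"
    "T_stage": re.compile(r"\bp?T\s?([0-4][a-c]?|is|x)\b", re.IGNORECASE),
    "N_stage": re.compile(r"\bp?N\s?([0-3][a-c]?|x)\b", re.IGNORECASE),
    "M_stage": re.compile(r"\bp?M\s?([01]|x)\b", re.IGNORECASE),
    # receptor status, e.g. "ER positive", "HER2: negative"
    "ER": re.compile(r"\bER\s*[:\-]?\s*(positive|negative)\b", re.IGNORECASE),
    "PR": re.compile(r"\bPR\s*[:\-]?\s*(positive|negative)\b", re.IGNORECASE),
    "HER2": re.compile(r"\bHER2\s*[:\-]?\s*(positive|negative)\b", re.IGNORECASE),
    # tumor size in cm, e.g. "tumour size 2.3 cm"
    "size_cm": re.compile(r"tumou?r\s+size[^0-9]{0,10}(\d+(?:\.\d+)?)\s*cm", re.IGNORECASE),
}

def extract_fields(report_text):
    """Return the first match for each field, or None if the field is absent."""
    out = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(report_text)
        out[field] = m.group(1).lower() if m else None
    return out

if __name__ == "__main__":
    report = ("Invasive ductal carcinoma, tumour size 2.3 cm. "
              "ER positive, PR negative, HER2: negative. Staging: pT2 pN0 M0.")
    print(extract_fields(report))
```

Real systems of this kind layer many more rules, section detection, and terminology lookups (for example against SNOMED CT) on top of such patterns; the sketch only shows the basic matching step.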
--- paper_title: FrameNet: A Knowledge Base for Natural Language Processing paper_content: Prof. Charles J. Fillmore had a lifelong interest in lexical semantics, and this culminated in the latter part of his life in a major research project, the FrameNet Project at the International Computer Science Institute in Berkeley, California (http://framenet. icsi.berkeley.edu). This paper reports on the background of this ongoing project, its connections to Fillmore’s other research interests, and briefly outlines applications and current directions of growth for FrameNet, including FrameNets in languages other than English. --- paper_title: Abstract PR05: Using medical informatics to evaluate the risk of colorectal cancer in patients with clinically diagnosed sessile serrated polyps paper_content: Background : Recent research suggests that in addition to advanced adenomas, sessile serrated polyps (SSPs) may be important precursors for proximal colon cancer. In this study, we conducted the first large cohort study to evaluate the risk of colorectal cancer (CRC) in patients diagnosed with SSPs through usual care. Methods : The University of Washington Medical Center (UWMC) uses a comprehensive electronic medical records (EMR) system to track patient demographics and health-related information, including diagnoses and procedures for all patients. We used procedure codes to identify a cohort of patients receiving colonoscopies at UWMC from 2003-2013. Natural language processing of text in the final diagnosis section of the pathology report was used to characterize the type of polyps present at each colonoscopy procedure, including non-advanced conventional adenomas, advanced conventional adenomas (defined as having villous histology, high-grade dysplasia, or a diameter ≥ 10 mm), and sessile serrated polyps. These colonoscopy records were then linked to the Puget Sound Surveillance, Epidemiology, and End Results Cancer Registry (SEER) and subsequent EMR data to identify incident CRCs occurring through December 31, 2014 within this cohort. Those who lived outside of the SEER catchment area or who had prior colectomy, inflammatory bowel disease, or CRC were excluded from analyses. Cox proportional hazards regression models were used to calculate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) comparing the risk of CRC in each polyp group to those who were polyp-free at an index colonoscopy. HR estimates were adjusted for age, sex, race/ethnicity, smoking status, and body mass index. Results : From 2003 through 2013, 32,136 colonoscopies were performed at UWMC, and 17,424 colonoscopies from 14,846 patients met the study inclusion criteria. Of these patients, 8,908 were polyp-free, 4,145 had only non-advanced conventional adenomas, 927 had advanced conventional adenomas and no SSPs, 314 had ≥1 SSP and ≥1 conventional adenoma, and 552 had SSPs and no conventional adenomas at an index colonoscopy. Median follow-up time in the study cohort was 5.8 years, and 66 incident colorectal cancers occurred during the follow-up period. The risk of incident CRC in those with advanced conventional adenomas at their index colonoscopy was significantly higher than those who were polyp-free (HR=3.9; CI: 1.9-7.7). 
However, there was not a statistically significant difference in the risk of incident CRC between those who were polyp-free at their index colonoscopy and those who had only non-advanced conventional adenomas (HR=1.5; CI: 0.8-2.6), SSPs with conventional adenomas (HR=2.0; CI: 0.5-8.7), or SSPs without conventional adenomas (HR=1.3; CI: 0.3-5.5). Discussion: Despite recent evidence from cross-sectional studies suggesting that SSPs are high-risk precursors for a subset of colorectal cancers, our results indicate that the risk of CRC in patients with clinically-diagnosed SSPs is similar to the risk of CRC in those with non-advanced adenomas. Additional longitudinal studies of SSPs diagnosed through usual care are needed to inform guidelines for the surveillance of patients with SSPs. This abstract is also being presented as Poster B01. Citation Format: Andrea Burnett-Hartman, Polly A. Newcomb, Chan X. Zeng, Yingye Zheng, John M. Inadomi, Christine Fong, Melissa P. Upton, William M. Grady. Using medical informatics to evaluate the risk of colorectal cancer in patients with clinically diagnosed sessile serrated polyps. [abstract]. In: Proceedings of the AACR Special Conference on Colorectal Cancer: From Initiation to Outcomes; 2016 Sep 17-20; Tampa, FL. Philadelphia (PA): AACR; Cancer Res 2017;77(3 Suppl):Abstract nr PR05. --- paper_title: Identifying primary and recurrent cancers using a SAS-based natural language processing algorithm paper_content: Objective: Significant limitations exist in the timely and complete identification of primary and recurrent cancers for clinical and epidemiologic research. A SAS-based coding, extraction, and nomenclature tool (SCENT) was developed to address this problem. Materials and methods: SCENT employs hierarchical classification rules to identify and extract information from electronic pathology reports. Reports are analyzed and coded using a dictionary of clinical concepts and associated SNOMED codes. To assess the accuracy of SCENT, validation was conducted using manual review of pathology reports from a random sample of 400 breast and 400 prostate cancer patients diagnosed at Kaiser Permanente Southern California. Trained abstractors classified the malignancy status of each report. Results: Classifications of SCENT were highly concordant with those of abstractors, achieving κ of 0.96 and 0.95 in the breast and prostate cancer groups, respectively. SCENT identified 51 of 54 new primary and 60 of 61 recurrent cancer cases across both groups, with only three false positives in 792 true benign cases. Measures of sensitivity, specificity, positive predictive value, and negative predictive value exceeded 94% in both cancer groups. Discussion: Favorable validation results suggest that SCENT can be used to identify, extract, and code information from pathology report text. Consequently, SCENT has wide applicability in research and clinical care. Further assessment will be needed to validate performance with other clinical text sources, particularly those with greater linguistic variability. Conclusion: SCENT is proof of concept for SAS-based natural language processing applications that can be easily shared between institutions and used to support clinical and epidemiologic research. --- paper_title: Contralateral Breast Cancer Event Detection Using Nature Language Processing.
paper_content: To facilitate the identification of contralateral breast cancer events for large cohort study, we proposed and implemented a new method based on features extracted from narrative text in progress notes and features from numbers of pathology reports for each side of breast cancer. Our method collects medical concepts and their combinations to detect contralateral events in progress notes. In addition, the numbers of pathology reports generated for either left or right side of breast cancer were derived as additional features. We experimented with support vector machine using the derived features to detect contralateral events. In the cross-validation and held-out tests, the area under curve score is 0.93 and 0.89 respectively. This method can be replicated due to the simplicity of feature generation. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. --- paper_title: Automating the Determination of Prostate Cancer Risk Strata From Electronic Medical Records. paper_content: Purpose: Risk stratification underlies system-wide efforts to promote the delivery of appropriate prostate cancer care. Although the elements of risk stratum are available in the electronic medical record, manual data collection is resource intensive. Therefore, we investigated the feasibility and accuracy of an automated data extraction method using natural language processing (NLP) to determine prostate cancer risk stratum. Methods: Manually collected clinical stage, biopsy Gleason score, and preoperative prostate-specific antigen (PSA) values from our prospective prostatectomy database were used to categorize patients as low, intermediate, or high risk by D'Amico risk classification. NLP algorithms were developed to automate the extraction of the same data points from the electronic medical record, and risk strata were recalculated. The ability of NLP to identify elements sufficient to calculate risk (recall) was calculated, and the accuracy of NLP was compared with that of manually collected data using the weighted Cohen's κ statistic. Results: Of the 2,352 patients with available data who underwent prostatectomy from 2010 to 2014, NLP identified sufficient elements to calculate risk for 1,833 (recall, 78%). NLP had a 91% raw agreement with manual risk stratification (κ = 0.92; 95% CI, 0.90 to 0.93).
The κ statistics for PSA, Gleason score, and clinical stage extraction by NLP were 0.86, 0.91, and 0.89, respectively; 91.9% of extracted PSA values were within ± 1.0 ng/mL of the manually collected PSA levels. Conclusion: NLP can achieve more than 90% accuracy on D'Amico risk stratification of localized prostate cancer, with adequate recall. This figure is comparable to other NLP tasks and illustrates the known trade off between recall and accuracy. Automating the collection of risk characteristics could be used to power real-time decision support tools and scale up quality measurement in cancer care. --- paper_title: A natural language processing algorithm to measure quality prostate cancer care. paper_content: 232. Background: Electronic health records (EHRs) are a widely adopted but underutilized source of data for systematic assessment of healthcare quality. Barriers for use of this data source include its vast complexity, lack of structure, and the lack of use of standardized vocabulary and terminology by clinicians. This project aims to develop generalizable algorithms to extract useful knowledge regarding prostate cancer quality metrics from EHRs. Methods: We used EHR ICD-9/10 codes to identify prostate cancer patients receiving care at our academic medical center. Patients were confirmed in the California Cancer Registry (CCR), which provided data on tumor characteristics, treatment data, treatment outcomes and survival. We focused on three potential pretreatment process quality measures, which included documentation within 6 months prior to initial treatment of prostate-specific antigen (PSA), digital rectal exam (DRE) performance, and Gleason score. Each quality metric was defined using target terms and c... --- paper_title: Automatic Processing of Anatomic Pathology Reports in the Italian Language to Enhance the Reuse of Clinical Data. paper_content: Medical reports often contain a lot of relevant information in the form of free text. To reuse these unstructured texts for biomedical research, it is important to extract structured data from them. In this work, we adapted a previously developed information extraction system to the oncology domain, to process a set of anatomic pathology reports in the Italian language.
The information extraction system relies on a domain ontology, which was adapted and refined in an iterative way. The final output was evaluated by a domain expert, with promising results. --- paper_title: Natural Language Processing Improves Identification of Colorectal Cancer Testing in the Electronic Medical Record paper_content: Background. Difficulty identifying patients in need of colorectal cancer (CRC) screening contributes to low screening rates. Objective. To use Electronic Health Record (EHR) data to identify patients with prior CRC testing. Design. A clinical natural language processing (NLP) system was modified to identify 4 CRC tests (colonoscopy, flexible sigmoidoscopy, fecal occult blood testing, and double contrast barium enema) within electronic clinical documentation. Text phrases in clinical notes referencing CRC tests were interpreted by the system to determine whether testing was planned or completed and to estimate the date of completed tests. Setting. Large academic medical center. Patients. 200 patients ≥50 years old who had completed ≥2 non-acute primary care visits within a 1-year period. Measures. Recall and precision of the NLP system, billing records, and human chart review were compared to a reference standard of human review of all available information sources. Results. For identification of all CRC t... --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . --- paper_title: Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer paper_content: Importance Cancer care guidelines recommend aligning surveillance frequency with underlying cancer risk, ie, more frequent surveillance for patients at high vs low risk of cancer recurrence. Objective To assess the extent to which such risk-aligned surveillance is practiced within US Department of Veterans Affairs facilities by classifying surveillance patterns for low- vs high-risk patients with early-stage bladder cancer. Design, Setting, and Participants US national retrospective cohort study of a population-based sample of patients diagnosed with low-risk or high-risk early-stage bladder between January 1, 2005, and December 31, 2011, with follow-up through December 31, 2014. Analyses were performed March 2017 to April 2018. 
The study included all Veterans Affairs facilities (n = 85) where both low- and high-risk patients were treated. Exposures Low-risk vs high-risk cancer status, based on definitions from the European Association of Urology risk stratification guidelines and on data extracted from diagnostic pathology reports via validated natural language processing algorithms. Main Outcomes and Measures Adjusted cystoscopy frequency for low-risk and high-risk patients for each facility, estimated using multilevel modeling. Results The study included 1278 low-risk and 2115 high-risk patients (median [interquartile range] age, 77 [71-82] years; 99% [3368 of 3393] male). Across facilities, the adjusted frequency of surveillance cystoscopy ranged from 3.7 to 6.2 (mean, 4.8) procedures over 2 years per patient for low-risk patients and from 4.6 to 6.0 (mean, 5.4) procedures over 2 years per patient for high-risk patients. In 70 of 85 facilities, surveillance was performed at a comparable frequency for low- and high-risk patients, differing by less than 1 cystoscopy over 2 years. Surveillance frequency among high-risk patients statistically significantly exceeded surveillance among low-risk patients at only 4 facilities. Across all facilities, surveillance frequencies for low- vs high-risk patients were moderately strongly correlated (r = 0.52;P Conclusions and Relevance Patients with early-stage bladder cancer undergo cystoscopic surveillance at comparable frequencies regardless of risk. This finding highlights the need to understand barriers to risk-aligned surveillance with the goal of making it easier for clinicians to deliver it in routine practice. --- paper_title: Automatic Extraction of Breast Cancer Information from Clinical Reports paper_content: The majority of clinical data is only available in unstructured text documents. Thus, their automated usage in data-based clinical application scenarios, like quality assurance and clinical decision support by treatment suggestions, is hindered because it requires high manual annotation efforts. In this work, we introduce a system for the automated processing of clinical reports of mamma carcinoma patients that allows for the automatic extraction and seamless processing of relevant textual features. Its underlying information extraction pipeline employs a rule-based grammar approach that is integrated with semantic technologies to determine the relevant information from the patient record. The accuracy of the system, developed with nine thousand clinical documents, reaches accuracy levels of 90% for lymph node status and 69% for the structurally most complex feature, the hormone status. --- paper_title: Validation of natural language processing (NLP) for automated ascertainment of EGFR and ALK tests in SEER cases of non-small cell lung cancer (NSCLC). paper_content: 6528Background: The Surveillance, Epidemiology, and End Results (SEER) registries lack information on the Epidermal Growth Factor Receptor (EGFR) mutation and Anaplastic Lymphoma Kinase (ALK) gene ... --- paper_title: Development and Validation of an Automated Method to Identify Patients Undergoing Radical Cystectomy for Bladder Cancer Using Natural Language Processing paper_content: Abstract Introduction Measurement for quality improvement relies on accurate case identification and characterization. With electronic health records now widely deployed, natural language processing, the use of software to transform text into structured data, may enrich quality measurement. 
Accordingly we evaluated the application of natural language processing to radical cystectomy procedures for patients with bladder cancer. Methods From a sample of 497 procedures performed from March 2013 to October 2014 we identified radical cystectomy for primary bladder cancer using the approaches of 1) a natural language processing enhanced algorithm, 2) an administrative claims based algorithm and 3) manual chart review. We also characterized treatment with robotic surgery and continent urinary diversion. Using chart review as the reference standard we calculated the observed agreement (kappa statistic), sensitivity, specificity, positive predictive value and negative predictive value for natural language processing and administrative claims. Results We confirmed 84 radical cystectomies were performed for bladder cancer, with 50.0% robotic and 38.6% continent diversions. The natural language processing enhanced and claims based algorithms demonstrated 99.8% (κ=0.993, 95% CI 0.979–1.000) and 98.6% (κ=0.951, 95% CI 0.915–0.987) agreement with manual review, respectively. Both approaches accurately characterized robotic vs open surgery, with natural language processing enhanced algorithms showing 98.8% (κ=0.976, 95% CI 0.930–1.000) and claims based 90.5% (κ=0.810, 95% CI 0.686–0.933) agreement. For urinary diversion natural language processing enhanced algorithms correctly specified 96.4% of cases (κ=0.924, 95% CI 0.839–1.000) compared with 83.3% (κ=0.655, 95% CI 0.491–0.819). Conclusions Natural language processing enhanced and claims based algorithms accurately identified radical cystectomy cases at our institution. However, natural language processing appears to better classify specific aspects of cystectomy surgery, highlighting a potential advantage of this emerging methodology. --- paper_title: Automated Extraction of Grade, Stage, and Quality Information From Transurethral Resection of Bladder Tumor Pathology Reports Using Natural Language Processing paper_content: PurposeBladder cancer is initially diagnosed and staged with a transurethral resection of bladder tumor (TURBT). Patient survival is dependent on appropriate sampling of layers of the bladder, but pathology reports are dictated as free text, making large-scale data extraction for quality improvement challenging. We sought to automate extraction of stage, grade, and quality information from TURBT pathology reports using natural language processing (NLP).MethodsPatients undergoing TURBT were retrospectively identified using the Northwestern Enterprise Data Warehouse. An NLP algorithm was then created to extract information from free-text pathology reports and was iteratively improved using a training set of manually reviewed TURBTs. NLP accuracy was then validated using another set of manually reviewed TURBTs, and reliability was calculated using Cohen’s κ.ResultsOf 3,042 TURBTs identified from 2006 to 2016, 39% were classified as benign, 35% as Ta, 11% as T1, 4% as T2, and 10% as isolated carcinoma in situ... --- paper_title: Assessing the natural language processing capabilities of IBM Watson for oncology using real Australian lung cancer cases. paper_content: e18229Background: Optimal treatment decisions require information from multiple sources and formats even in the context of electronic medical records. In addition, structured data such as that coll... 
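The radical cystectomy and TURBT studies above quantify agreement between NLP output and manual review with Cohen's κ. The function below is a generic, textbook-style sketch of the unweighted κ computation from paired labels; it is not the statistical code used in those studies, it does not implement the weighted variant reported in the prostate risk-stratification paper further above, and the example labels are hypothetical.

```python
# Minimal unweighted Cohen's kappa between two raters (e.g., NLP output vs manual review).
# Generic illustration; the cited studies' analyses (including weighted kappa) are not reproduced here.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a, "need paired, non-empty label lists"
    n = len(labels_a)
    # observed agreement: fraction of cases where the two raters assign the same label
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement: product of each rater's marginal label frequencies, summed over labels
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    if expected == 1.0:  # degenerate case: agreement is fully explained by chance
        return 1.0
    return (observed - expected) / (1.0 - expected)

if __name__ == "__main__":
    # hypothetical surgical-approach labels assigned by an NLP system and by chart review
    nlp_out = ["open", "robotic", "robotic", "open", "robotic", "open"]
    manual  = ["open", "robotic", "open",    "open", "robotic", "open"]
    print(round(cohens_kappa(nlp_out, manual), 3))
```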
--- paper_title: Extracting timing and status descriptors for colonoscopy testing from electronic medical records paper_content: Colorectal cancer (CRC) screening rates are low despite confirmed benefits. The authors investigated the use of natural language processing (NLP) to identify previous colonoscopy screening in electronic records from a random sample of 200 patients at least 50 years old. The authors developed algorithms to recognize temporal expressions and ‘status indicators’, such as ‘patient refused’, or ‘test scheduled’. The new methods were added to the existing KnowledgeMap concept identifier system, and the resulting system was used to parse electronic medical records (EMR) to detect completed colonoscopies. Using as the ‘gold standard’ expert physicians’ manual review of EMR notes, the system identified timing references with a recall of 0.91 and precision of 0.95, colonoscopy status indicators with a recall of 0.82 and precision of 0.95, and references to actually completed colonoscopies with recall of 0.93 and precision of 0.95. The system was superior to using colonoscopy billing codes alone. Health services researchers and clinicians may find NLP a useful adjunct to traditional methods to detect CRC screening status. Further investigations must validate extension of NLP approaches for other types of CRC screening applications. --- paper_title: Automated Information Extraction on Treatment and Prognosis for Non–Small Cell Lung Cancer Radiotherapy Patients: Clinical Study paper_content: Background: In outcome studies of oncology patients undergoing radiation, researchers extract valuable information from medical records generated before, during, and after radiotherapy visits, such as survival data, toxicities, and complications. Clinical studies rely heavily on these data to correlate the treatment regimen with the prognosis to develop evidence-based radiation therapy paradigms. These data are available mainly in forms of narrative texts or table formats with heterogeneous vocabularies. Manual extraction of the related information from these data can be time consuming and labor intensive, which is not ideal for large studies. Objective: The objective of this study was to adapt the interactive information extraction platform Information and Data Extraction using Adaptive Learning (IDEAL-X) to extract treatment and prognosis data for patients with locally advanced or inoperable non–small cell lung cancer (NSCLC). Methods: We transformed patient treatment and prognosis documents into normalized structured forms using the IDEAL-X system for easy data navigation. The adaptive learning and user-customized controlled toxicity vocabularies were applied to extract categorized treatment and prognosis data, so as to generate structured output. Results: In total, we extracted data from 261 treatment and prognosis documents relating to 50 patients, with overall precision and recall more than 93% and 83%, respectively. For toxicity information extractions, which are important to study patient posttreatment side effects and quality of life, the precision and recall achieved 95.7% and 94.5% respectively. Conclusions: The IDEAL-X system is capable of extracting study data regarding NSCLC chemoradiation patients with significant accuracy and effectiveness, and therefore can be used in large-scale radiotherapy clinical data studies. 
[JMIR Med Inform 2018;6(1):e8] --- paper_title: Using Natural Language Processing to Extract Abnormal Results From Cancer Screening Reports paper_content: OBJECTIVES: Numerous studies show that follow-up of abnormal cancer screening results, such as mammography and Papanicolaou (Pap) smears, is frequently not performed in a timely manner. A contributing factor is that abnormal results may go unrecognized because they are buried in free-text documents in electronic medical records (EMRs), and, as a result, patients are lost to follow-up. By identifying abnormal results from free-text reports in EMRs and generating alerts to clinicians, natural language processing (NLP) technology has the potential for improving patient care. The goal of the current study was to evaluate the performance of NLP software for extracting abnormal results from free-text mammography and Pap smear reports stored in an EMR. METHODS: A sample of 421 and 500 free-text mammography and Pap reports, respectively, were manually reviewed by a physician, and the results were categorized for each report. We tested the performance of NLP to extract results from the reports. The 2 assessments (criterion standard versus NLP) were compared to determine the precision, recall, and accuracy of NLP. RESULTS: When NLP was compared with manual review for mammography reports, the results were as follows: precision, 98% (96%-99%); recall, 100% (98%-100%); and accuracy, 98% (96%-99%). For Pap smear reports, the precision, recall, and accuracy of NLP were all 100%. CONCLUSIONS: Our study developed NLP models that accurately extract abnormal results from mammography and Pap smear reports. Plans include using NLP technology to generate real-time alerts and reminders for providers to facilitate timely follow-up of abnormal results. --- paper_title: Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry paper_content: Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute’s Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a “gold standard” based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings vary by data element and modality (e.g., suspicious calcification noted in 2.6 % of screening mammograms, 12.1 % of diagnostic mammograms, and 9.4 % of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), range from 0.8 to 1.0 for recall, precision, and F-measure.
Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. --- paper_title: Validation of natural language processing to extract breast cancer pathology procedures and results paper_content: Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%), and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad of a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of the gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. 
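Many of the extraction tasks described above reduce to matching a terminology of findings in report text while handling negation (cTAKES, for instance, assigns negation attributes, and the screening-registry study extracts findings such as masses and suspicious calcifications). The sketch below is a deliberately simplified, NegEx-inspired illustration of that idea; the mini-dictionary, trigger list, window size, and sample report are invented for this example and are far smaller than real clinical vocabularies, and the code does not reproduce any cited tool.

```python
# Deliberately simplified, NegEx-inspired sketch of dictionary-based finding
# detection with negation handling. The dictionary and trigger lists are
# invented for illustration only.
import re

FINDINGS = {
    "mass": ["mass", "masses"],
    "suspicious_calcification": ["suspicious calcification", "suspicious calcifications"],
    "architectural_distortion": ["architectural distortion"],
    "cyst": ["cyst", "cysts"],
}
NEGATION_TRIGGERS = ["no ", "no evidence of ", "without ", "negative for "]

def detect_findings(report_text, window=60):
    """Return {finding: 'present' | 'negated'} for findings mentioned in the text."""
    text = report_text.lower()
    results = {}
    for finding, terms in FINDINGS.items():
        for term in terms:
            for m in re.finditer(re.escape(term), text):
                # look backwards over a fixed character window for a negation trigger
                preceding = text[max(0, m.start() - window):m.start()]
                negated = any(trigger in preceding for trigger in NEGATION_TRIGGERS)
                # "present" wins over "negated" if the finding appears both ways
                if results.get(finding) != "present":
                    results[finding] = "negated" if negated else "present"
    return results

if __name__ == "__main__":
    report = ("There is a 1.2 cm mass in the upper outer quadrant. "
              "No evidence of suspicious calcifications or architectural distortion.")
    print(detect_findings(report))
```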
--- paper_title: Information Extraction for Tracking Liver Cancer Patients' Statuses: From Mixture of Clinical Narrative Report Types paper_content: Objective: To provide an efficient way for tracking patients' condition over long periods of time and to facilitate the collection of clinical data from different types of narrative reports, it is critical to develop an efficient method for smoothly analyzing the clinical data accumulated in narrative reports. Materials and Methods: To facilitate liver cancer clinical research, a method was developed for extracting clinical factors from various types of narrative clinical reports, including ultrasound reports, radiology reports, pathology reports, operation notes, admission notes, and discharge summaries. An information extraction (IE) module was developed for tracking disease progression in liver cancer patients over time, and a rule-based classifier was developed for answering whether patients met the clinical research eligibility criteria. The classifier provided the answers and direct/indirect evidence (evidence sentences) for the clinical questions. To evaluate the implemented IE module and ... --- paper_title: Automatic Processing of Anatomic Pathology Reports in the Italian Language to Enhance the Reuse of Clinical Data. paper_content: Medical reports often contain a lot of relevant information in the form of free text. To reuse these unstructured texts for biomedical research, it is important to extract structured data from them. In this work, we adapted a previously developed information extraction system to the oncology domain, to process a set of anatomic pathology reports in the Italian language. The information extraction system relies on a domain ontology, which was adapted and refined in an iterative way. The final output was evaluated by a domain expert, with promising results. --- paper_title: Using machine learning to parse breast pathology reports paper_content: PURPOSE: Extracting information from electronic medical record is a time-consuming and expensive process when done manually. Rule-based and machine learning techniques are two approaches to solving this problem. In this study, we trained a machine learning model on pathology reports to extract pertinent tumor characteristics, which enabled us to create a large database of attribute searchable pathology reports. This database can be used to identify cohorts of patients with characteristics of interest. METHODS: We collected a total of 91,505 breast pathology reports from three Partners hospitals: Massachusetts General Hospital, Brigham and Women's Hospital, and Newton-Wellesley Hospital, covering the period from 1978 to 2016. We trained our system with annotations from two datasets, consisting of 6295 and 10,841 manually annotated reports. The system extracts 20 separate categories of information, including atypia types and various tumor characteristics such as receptors. We also report a learning curve analysis to show how much annotation our model needs to perform reasonably. RESULTS: The model accuracy was tested on 500 reports that did not overlap with the training set. The model achieved accuracy of 90% for correctly parsing all carcinoma and atypia categories for a given patient. The average accuracy for individual categories was 97%. Using this classifier, we created a database of 91,505 parsed pathology reports.
CONCLUSIONS: Our learning curve analysis shows that the model can achieve reasonable results even when trained on a few annotations. We developed a user-friendly interface to the database that allows physicians to easily identify patients with target characteristics and export the matching cohort. This model has the potential to reduce the effort required for analyzing large amounts of data from medical records, and to minimize the cost and time required to glean scientific insight from these data. --- paper_title: Facilitating cancer research using natural language processing of pathology reports. paper_content: Many ongoing clinical research projects, such as projects involving studies associated with cancer, involve manual capture of information in surgical pathology reports so that the information can be used to determine the eligibility of recruited patients for the study and to provide other information, such as cancer prognosis. Natural language processing (NLP) systems offer an alternative to automated coding, but pathology reports have certain features that are difficult for NLP systems. This paper describes how a preprocessor was integrated with an existing NLP system (MedLEE) in order to reduce modification to the NLP system and to improve performance. The work was done in conjunction with an ongoing clinical research project that assesses disparities and risks of developing breast cancer for minority women. An evaluation of the system was performed using manually coded data from the research project's database as a gold standard. The evaluation outcome showed that the extended NLP system had a sensitivity of 90.6% and a precision of 91.6%. Results indicated that this system performed satisfactorily for capturing information for the cancer research project. --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . --- paper_title: Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer paper_content: Importance Cancer care guidelines recommend aligning surveillance frequency with underlying cancer risk, ie, more frequent surveillance for patients at high vs low risk of cancer recurrence.
Objective To assess the extent to which such risk-aligned surveillance is practiced within US Department of Veterans Affairs facilities by classifying surveillance patterns for low- vs high-risk patients with early-stage bladder cancer. Design, Setting, and Participants US national retrospective cohort study of a population-based sample of patients diagnosed with low-risk or high-risk early-stage bladder between January 1, 2005, and December 31, 2011, with follow-up through December 31, 2014. Analyses were performed March 2017 to April 2018. The study included all Veterans Affairs facilities (n = 85) where both low- and high-risk patients were treated. Exposures Low-risk vs high-risk cancer status, based on definitions from the European Association of Urology risk stratification guidelines and on data extracted from diagnostic pathology reports via validated natural language processing algorithms. Main Outcomes and Measures Adjusted cystoscopy frequency for low-risk and high-risk patients for each facility, estimated using multilevel modeling. Results The study included 1278 low-risk and 2115 high-risk patients (median [interquartile range] age, 77 [71-82] years; 99% [3368 of 3393] male). Across facilities, the adjusted frequency of surveillance cystoscopy ranged from 3.7 to 6.2 (mean, 4.8) procedures over 2 years per patient for low-risk patients and from 4.6 to 6.0 (mean, 5.4) procedures over 2 years per patient for high-risk patients. In 70 of 85 facilities, surveillance was performed at a comparable frequency for low- and high-risk patients, differing by less than 1 cystoscopy over 2 years. Surveillance frequency among high-risk patients statistically significantly exceeded surveillance among low-risk patients at only 4 facilities. Across all facilities, surveillance frequencies for low- vs high-risk patients were moderately strongly correlated (r = 0.52;P Conclusions and Relevance Patients with early-stage bladder cancer undergo cystoscopic surveillance at comparable frequencies regardless of risk. This finding highlights the need to understand barriers to risk-aligned surveillance with the goal of making it easier for clinicians to deliver it in routine practice. --- paper_title: Automated Cancer Registry Notifications: Validation of a Medical Text Analytics System for Identifying Patients with Cancer from a State-Wide Pathology Repository. paper_content: The paper assesses the utility of Medtex on automating Cancer Registry notifications from narrative histology and cytology reports from the Queensland state-wide pathology information system. A corpus of 45.3 million pathology HL7 messages (including 119,581 histology and cytology reports) from a Queensland pathology repository for the year of 2009 was analysed by Medtex for cancer notification. Reports analysed by Medtex were consolidated at a patient level and compared against patients with notifiable cancers from the Queensland Oncology Repository (QOR). A stratified random sample of 1,000 patients was manually reviewed by a cancer clinical coder to analyse agreements and discrepancies. Sensitivity of 96.5% (95% confidence interval: 94.5-97.8%), specificity of 96.5% (95.3-97.4%) and positive predictive value of 83.7% (79.6-86.8%) were achieved for identifying cancer notifiable patients. Medtex achieved high sensitivity and specificity across the breadth of cancers, report types, pathology laboratories and pathologists throughout the State of Queensland. 
The high sensitivity also resulted in the identification of cancer patients that were not found in the QOR. High sensitivity was at the expense of positive predictive value; however, these cases may be considered as lower priority to Cancer Registries as they can be quickly reviewed. Error analysis revealed that system errors tended to be tumour stream dependent. Medtex is proving to be a promising medical text analytic system. High value cancer information can be generated through intelligent data classification and extraction on large volumes of unstructured pathology reports. --- paper_title: Machine learning to parse breast pathology reports in Chinese paper_content: Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The Natural language processing algorithm was tested by using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages. --- paper_title: Automated extraction and normalization of findings from cancer-related free-text radiology reports. paper_content: We describe the performance of a particular natural language processing system that uses knowledge vectors to extract findings from radiology reports. LifeCode® (A-Life Medical, Inc.) has been successfully coding reports for billing purposes for several years. In this study, we describe the use of LifeCode® to code all findings within a set of 500 cancer-related radiology reports against a test set in which all findings were manually tagged. The system was trained with 1400 reports prior to running the test set. Results: LifeCode® had a recall of 84.5% and precision of 95.7% in the coding of cancer-related radiology report findings. Conclusion: Despite the use of a modest sized training set and minimal training iterations, when applied to cancer-related reports the system achieved recall and precision measures comparable to other reputable natural language processors in this domain. 
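The evaluations reported above (e.g., Medtex and LifeCode) compare system output against manually coded gold standards using sensitivity, specificity, precision/positive predictive value, and related measures. A minimal sketch of that comparison is given below; the labels are toy data, not results from any cited study.

```python
def confusion_counts(gold, predicted):
    """Count TP/FP/TN/FN for binary labels (True/1 = notifiable cancer case)."""
    tp = sum(1 for g, p in zip(gold, predicted) if g and p)
    fp = sum(1 for g, p in zip(gold, predicted) if not g and p)
    tn = sum(1 for g, p in zip(gold, predicted) if not g and not p)
    fn = sum(1 for g, p in zip(gold, predicted) if g and not p)
    return tp, fp, tn, fn

def evaluation_metrics(gold, predicted):
    """Return the metrics typically reported in these validation studies."""
    tp, fp, tn, fn = confusion_counts(gold, predicted)
    return {
        "sensitivity/recall": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "ppv/precision": tp / (tp + fp) if tp + fp else 0.0,
        "npv": tn / (tn + fn) if tn + fn else 0.0,
    }

if __name__ == "__main__":
    # Toy example: 10 manually coded patients vs. hypothetical system output.
    gold      = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    predicted = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
    for name, value in evaluation_metrics(gold, predicted).items():
        print(f"{name}: {value:.2f}")
```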
--- paper_title: Abstract P5-08-20: A real-world evidence study to define the prevalence of endocrine therapy-naïve hormone receptor-positive locally advanced or metastatic breast cancer in the US paper_content: BACKGROUND The recently completed FALCON trial (NCT01602380) compared the efficacy of the selective estrogen receptor degrader (SERD) fulvestrant 500 mg with anastrozole in postmenopausal women with hormone receptor (HR)-positive locally advanced or metastatic breast cancer (LA/MBC) who had received no prior endocrine therapy (ET). To better understand the size of the US population to which the results of the FALCON trial are applicable, this study estimated the proportion of postmenopausal patients with HR-positive, human epidermal growth factor receptor (HER)2-negative LA/MBC who had received no prior ET, using data from a US medical record database. METHODS This observational study retrospectively analyzed data from the Optum Electronic Health Record Database, obtained from provider groups across the US. Women over 40 years of age with breast cancer diagnoses (January 2008–March 2015) were included, provided they had at least 12 months of recorded medical history prior to index, and at least one recorded physician office visit in that time. Incident (newly diagnosed) and prevalent (previous diagnosis of early or advanced breast cancer) cases were identified. Free-text clinical notes were reviewed using natural language processing to identify a target patient population of postmenopausal women with HR-positive, HER2-negative LA/MBC who had received no prior ET (similar to the FALCON entry criteria); additionally, diagnostic codes or treatment history were also used to identify HR status in the absence of confirmation from the free-text notes. The results of this analysis were extrapolated to estimate the size of the target population at a national level, using statistics from the National Cancer Institute9s Surveillance, Epidemiology and End Results (SEER) database. Results are presented descriptively. RESULTS Overall, 63,962 women with breast cancer were identified, of whom 11,831 had discernible information on menopausal status, HER2 status, HR status, and disease stage. Of these, 1,923 patients were identified with postmenopausal, HR-positive, HER2-negative (or unknown) LA/MBC. Within this population of patients, 70.7% (1,360/1,923) had not previously received ET (88.5% [920/1,040] incident cases; 49.8% [440/883] prevalent cases), representing 11.5% (1,360/11,831) of the total breast cancer population with known menopausal status, HER2 status, HR status, and cancer disease stage information. The proportion of patients with postmenopausal, HR-positive, HER2-negative LA/MBC who had received no prior ET in this sample was extrapolated using US national estimates of the size of the postmenopausal, HR-positive, HER2-negative LA/MBC population taken from SEER. This approach suggests a 5-year limited-duration prevalence of postmenopausal patients with HR-positive, HER2-negative LA/MBC who have received no prior ET of approximately 50,000 cases and an annual incidence of about 15,000 patients. CONCLUSIONS These real-world data provide an estimate of the number of postmenopausal patients with HR-positive, HER2-negative, LA/MBC in the US who have not previously received ET. This population corresponds to the recently completed Phase 3 FALCON trial of fulvestrant compared with anastrozole. Citation Format: Nunes AP, Green E, Dalvi T, Lewis J, Jones N, Seeger JD. 
A real-world evidence study to define the prevalence of endocrine therapy-naive hormone receptor-positive locally advanced or metastatic breast cancer in the US [abstract]. In: Proceedings of the 2016 San Antonio Breast Cancer Symposium; 2016 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2017;77(4 Suppl):Abstract nr P5-08-20. --- paper_title: Assessing the natural language processing capabilities of IBM Watson for oncology using real Australian lung cancer cases. paper_content: Background: Optimal treatment decisions require information from multiple sources and formats even in the context of electronic medical records. In addition, structured data such as that coll... --- paper_title: caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research. paper_content: The authors report on the development of the Cancer Tissue Information Extraction System (caTIES), an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multicenter collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs. --- paper_title: Automated extraction of Biomarker information from pathology reports paper_content: Pathology reports are written in free-text form, which precludes efficient data gathering. We aimed to overcome this limitation and design an automated system for extracting biomarker profiles from accumulated pathology reports. We designed a new data model for representing biomarker knowledge. The automated system parses immunohistochemistry reports based on a “slide paragraph” unit defined as a set of immunohistochemistry findings obtained for the same tissue slide. Pathology reports are parsed using context-free grammar for immunohistochemistry, and using a tree-like structure for surgical pathology. The performance of the approach was validated on manually annotated pathology reports of 100 randomly selected patients managed at Seoul National University Hospital. High F-scores were obtained for parsing biomarker name and corresponding test results (0.999 and 0.998, respectively) from the immunohistochemistry reports, compared to relatively poor performance for parsing surgical pathology findings. However, applying the proposed approach to our single-center dataset revealed information on 221 unique biomarkers, which represents a richer result than biomarker profiles obtained based on the published literature.
Owing to the data representation model, the proposed approach can associate biomarker profiles extracted from an immunohistochemistry report with corresponding pathology findings listed in one or more surgical pathology reports. Term variations are resolved by normalization to corresponding preferred terms determined by expanded dictionary look-up and text similarity-based search. Our proposed approach for biomarker data extraction addresses key limitations regarding data representation and can handle reports prepared in the clinical setting, which often contain incomplete sentences, typographical errors, and inconsistent formatting. --- paper_title: Identifying primary and recurrent cancers using a SAS-based natural language processing algorithm paper_content: Objective: Significant limitations exist in the timely and complete identification of primary and recurrent cancers for clinical and epidemiologic research. A SAS-based coding, extraction, and nomenclature tool (SCENT) was developed to address this problem. Materials and methods: SCENT employs hierarchical classification rules to identify and extract information from electronic pathology reports. Reports are analyzed and coded using a dictionary of clinical concepts and associated SNOMED codes. To assess the accuracy of SCENT, validation was conducted using manual review of pathology reports from a random sample of 400 breast and 400 prostate cancer patients diagnosed at Kaiser Permanente Southern California. Trained abstractors classified the malignancy status of each report. Results: Classifications of SCENT were highly concordant with those of abstractors, achieving κ of 0.96 and 0.95 in the breast and prostate cancer groups, respectively. SCENT identified 51 of 54 new primary and 60 of 61 recurrent cancer cases across both groups, with only three false positives in 792 true benign cases. Measures of sensitivity, specificity, positive predictive value, and negative predictive value exceeded 94% in both cancer groups. Discussion: Favorable validation results suggest that SCENT can be used to identify, extract, and code information from pathology report text. Consequently, SCENT has wide applicability in research and clinical care. Further assessment will be needed to validate performance with other clinical text sources, particularly those with greater linguistic variability. Conclusion: SCENT is proof of concept for SAS-based natural language processing applications that can be easily shared between institutions and used to support clinical and epidemiologic research. --- paper_title: Validation of natural language processing to extract breast cancer pathology procedures and results paper_content: Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%) and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard.
Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad of a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of the gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance. --- paper_title: Automatic Processing of Anatomic Pathology Reports in the Italian Language to Enhance the Reuse of Clinical Data. paper_content: Medical reports often contain a lot of relevant information in the form of free text. To reuse these unstructured texts for biomedical research, it is important to extract structured data from them. In this work, we adapted a previously developed information extraction system to the oncology domain, to process a set of anatomic pathology reports in the Italian language. The information extraction system relies on a domain ontology, which was adapted and refined in an iterative way. The final output was evaluated by a domain expert, with promising results. --- paper_title: Using machine learning to parse breast pathology reports paper_content: PURPOSE: Extracting information from electronic medical records is a time-consuming and expensive process when done manually. Rule-based and machine learning techniques are two approaches to solving this problem. In this study, we trained a machine learning model on pathology reports to extract pertinent tumor characteristics, which enabled us to create a large database of attribute-searchable pathology reports. This database can be used to identify cohorts of patients with characteristics of interest. METHODS: We collected a total of 91,505 breast pathology reports from three Partners hospitals: Massachusetts General Hospital, Brigham and Women's Hospital, and Newton-Wellesley Hospital, covering the period from 1978 to 2016. We trained our system with annotations from two datasets, consisting of 6295 and 10,841 manually annotated reports. The system extracts 20 separate categories of information, including atypia types and various tumor characteristics such as receptors. We also report a learning curve analysis to show how much annotation our model needs to perform reasonably. RESULTS: The model accuracy was tested on 500 reports that did not overlap with the training set. The model achieved accuracy of 90% for correctly parsing all carcinoma and atypia categories for a given patient. The average accuracy for individual categories was 97%. Using this classifier, we created a database of 91,505 parsed pathology reports. CONCLUSIONS: Our learning curve analysis shows that the model can achieve reasonable results even when trained on a few annotations.
We developed a user-friendly interface to the database that allows physicians to easily identify patients with target characteristics and export the matching cohort. This model has the potential to reduce the effort required for analyzing large amounts of data from medical records, and to minimize the cost and time required to glean scientific insight from these data. --- paper_title: Electronic Health Record Phenotypes for Precision Medicine: Perspectives and Caveats From Treatment of Breast Cancer at a Single Institution paper_content: Precision medicine is at the forefront of biomedical research. Cancer registries provide rich perspectives and electronic health records (EHRs) are commonly utilized to gather additional clinical data elements needed for translational research. However, manual annotation is resource-intense and not readily scalable. Informatics-based phenotyping presents an ideal solution, but perspectives obtained can be impacted by both data source and algorithm selection. We derived breast cancer (BC) receptor status phenotypes from structured and unstructured EHR data using rule-based algorithms, including natural language processing (NLP). Overall, use of NLP increased BC receptor status coverage by 39.2% from 69.1% with structured medication information alone. Using all available EHR data, estrogen receptor-positive BC cases were ascertained with high precision (P = 0.976) and recall (R = 0.987) compared to gold standard chart-reviewed patients. However, recall for status negation (R = 0.591) decreased by 40.2% when relying on structured medications alone. Using multiple EHR data types (and a thorough understanding of the perspectives each offers) is necessary to derive robust EHR-based precision medicine phenotypes. --- paper_title: Pathologic findings in reduction mammoplasty procedures identified by natural language processing of breast pathology reports: A surrogate for the population incidence of cancer and high risk lesions. paper_content: Background: Breast reduction surgery removes a random sample of breast tissue in otherwise asymptomatic women and thus provides a method to evaluate the background incidence of breast patholo... --- paper_title: Development of a Natural Language Processing Engine to Generate Bladder Cancer Pathology Data for Health Services Research paper_content: OBJECTIVE: To take the first step toward assembling population-based cohorts of patients with bladder cancer with longitudinal pathology data, we developed and validated a natural language processing (NLP) engine that abstracts pathology data from full-text pathology reports. METHODS: Using 600 bladder pathology reports randomly selected from the Department of Veterans Affairs, we developed and validated an NLP engine to abstract data on histology, invasion (presence vs absence and depth), grade, the presence of muscularis propria, and the presence of carcinoma in situ. Our gold standard was based on an independent review of reports by 2 urologists, followed by adjudication. We assessed the NLP performance by calculating the accuracy, the positive predictive value, and the sensitivity. We subsequently applied the NLP engine to pathology reports from 10,725 patients with bladder cancer. RESULTS: When comparing the NLP output to the gold standard, NLP achieved the highest accuracy (0.98) for the presence vs the absence of carcinoma in situ.
Accuracy for histology, invasion (presence vs absence), grade, and the presence of muscularis propria ranged from 0.83 to 0.96. The most challenging variable was depth of invasion (accuracy 0.68), with an acceptable positive predictive value for lamina propria (0.82) and for muscularis propria (0.87) invasion. The validated engine was capable of abstracting pathologic characteristics for 99% of the patients with bladder cancer. CONCLUSION: NLP had high accuracy for 5 of 6 variables and abstracted data for the vast majority of the patients. This now allows for the assembly of population-based cohorts with longitudinal pathology data. --- paper_title: Facilitating cancer research using natural language processing of pathology reports. paper_content: Many ongoing clinical research projects, such as projects involving studies associated with cancer, involve manual capture of information in surgical pathology reports so that the information can be used to determine the eligibility of recruited patients for the study and to provide other information, such as cancer prognosis. Natural language processing (NLP) systems offer an alternative to automated coding, but pathology reports have certain features that are difficult for NLP systems. This paper describes how a preprocessor was integrated with an existing NLP system (MedLEE) in order to reduce modification to the NLP system and to improve performance. The work was done in conjunction with an ongoing clinical research project that assesses disparities and risks of developing breast cancer for minority women. An evaluation of the system was performed using manually coded data from the research project's database as a gold standard. The evaluation outcome showed that the extended NLP system had a sensitivity of 90.6% and a precision of 91.6%. Results indicated that this system performed satisfactorily for capturing information for the cancer research project. --- paper_title: Deep learning analytics for diagnostic support of breast cancer disease management paper_content: Breast cancer continues to be one of the leading causes of cancer death among women. Mammogram is the standard of care for screening and diagnosis of breast cancer. The American College of Radiology developed the Breast Imaging Reporting and Data System (BI-RADS) lexicon to standardize mammographic reporting to assess cancer risk and facilitate biopsy decision-making. However, because substantial inter-observer variability remains in the application of the BI-RADS lexicon, including inappropriate term usage and missing data, current biopsy decision-making accuracy using the unstructured free text or semi-structured reports varies greatly. Hence, incorporating novel and accurate technique into breast cancer decision-making data is critical. Here, we combined natural language processing and deep learning methods to develop an analytic model that targets well-characterized and defined specific breast suspicious patient subgroups rather than a broad heterogeneous group for diagnostic support of breast cancer management. --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data.
In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . --- paper_title: Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer paper_content: Importance Cancer care guidelines recommend aligning surveillance frequency with underlying cancer risk, ie, more frequent surveillance for patients at high vs low risk of cancer recurrence. Objective To assess the extent to which such risk-aligned surveillance is practiced within US Department of Veterans Affairs facilities by classifying surveillance patterns for low- vs high-risk patients with early-stage bladder cancer. Design, Setting, and Participants US national retrospective cohort study of a population-based sample of patients diagnosed with low-risk or high-risk early-stage bladder between January 1, 2005, and December 31, 2011, with follow-up through December 31, 2014. Analyses were performed March 2017 to April 2018. The study included all Veterans Affairs facilities (n = 85) where both low- and high-risk patients were treated. Exposures Low-risk vs high-risk cancer status, based on definitions from the European Association of Urology risk stratification guidelines and on data extracted from diagnostic pathology reports via validated natural language processing algorithms. Main Outcomes and Measures Adjusted cystoscopy frequency for low-risk and high-risk patients for each facility, estimated using multilevel modeling. Results The study included 1278 low-risk and 2115 high-risk patients (median [interquartile range] age, 77 [71-82] years; 99% [3368 of 3393] male). Across facilities, the adjusted frequency of surveillance cystoscopy ranged from 3.7 to 6.2 (mean, 4.8) procedures over 2 years per patient for low-risk patients and from 4.6 to 6.0 (mean, 5.4) procedures over 2 years per patient for high-risk patients. In 70 of 85 facilities, surveillance was performed at a comparable frequency for low- and high-risk patients, differing by less than 1 cystoscopy over 2 years. Surveillance frequency among high-risk patients statistically significantly exceeded surveillance among low-risk patients at only 4 facilities. Across all facilities, surveillance frequencies for low- vs high-risk patients were moderately strongly correlated (r = 0.52;P Conclusions and Relevance Patients with early-stage bladder cancer undergo cystoscopic surveillance at comparable frequencies regardless of risk. This finding highlights the need to understand barriers to risk-aligned surveillance with the goal of making it easier for clinicians to deliver it in routine practice. --- paper_title: Machine learning to parse breast pathology reports in Chinese paper_content: Large structured databases of pathology findings are valuable in deriving new clinical insights. 
However, they are labor intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The Natural language processing algorithm was tested by using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages. --- paper_title: Pathologic findings in reduction mammoplasty specimens: a surrogate for the population prevalence of breast cancer and high-risk lesions paper_content: Mammoplasty removes random samples of breast tissue from asymptomatic women providing a unique method for evaluating background prevalence of breast pathology in normal population. Our goal was to identify the rate of atypical breast lesions and cancers in women of various ages in the largest mammoplasty cohort reported to date. We analyzed pathologic reports from patients undergoing bilateral mammoplasty, using natural language processing algorithm, verified by human review. Patients with a prior history of breast cancer or atypia were excluded. A total of 4775 patients were deemed eligible. Median age was 40 (range 13–86) and was higher in patients with any incidental finding compared to patients with normal reports (52 vs. 39 years, p = 0.0001). Pathological findings were detected in 7.06% (337) of procedures. Benign high-risk lesions were found in 299 patients (6.26%). Invasive carcinoma and ductal carcinoma in situ were detected in 15 (0.31%) and 23 (0.48%) patients, respectively. The rate of atypias and cancers increased with age. The overall rate of abnormal findings in asymptomatic patients undergoing mammoplasty was 7.06%, increasing with age. As these results are based on random sample of breast tissue, they likely underestimate the prevalence of abnormal findings in asymptomatic women. 
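The breast pathology parsing studies above train supervised models on manually annotated reports, one binary entity at a time, and show that the same approach carries over to Chinese-language text. The sketch below shows one plausible way to set up such a per-entity classifier with scikit-learn; the training snippets and the character n-gram configuration are illustrative assumptions, not the cited authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated reports (invented for illustration): text plus one binary entity,
# e.g. "estrogen receptor positivity mentioned". Real systems train one classifier
# per entity on thousands of manually annotated reports.
train_texts = [
    "invasive ductal carcinoma, er positive, pr positive, her2 negative",
    "benign breast tissue, no atypia identified",
    "ductal carcinoma in situ, er positive",
    "fibroadenoma, no malignancy seen",
]
train_labels = [1, 0, 1, 0]  # 1 = entity present, 0 = absent

# Character n-grams sidestep word segmentation, which also makes the same
# pipeline usable for languages without whitespace tokenization (e.g. Chinese).
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

test_report = "core biopsy: invasive carcinoma, er positive"
print(model.predict([test_report])[0])  # expected: 1
```

In practice one such model would be fitted per extracted entity, with held-out annotated reports used to report per-entity accuracy as in the studies above.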
--- paper_title: Abstract P5-08-20: A real-world evidence study to define the prevalence of endocrine therapy-naïve hormone receptor-positive locally advanced or metastatic breast cancer in the US paper_content: BACKGROUND The recently completed FALCON trial (NCT01602380) compared the efficacy of the selective estrogen receptor degrader (SERD) fulvestrant 500 mg with anastrozole in postmenopausal women with hormone receptor (HR)-positive locally advanced or metastatic breast cancer (LA/MBC) who had received no prior endocrine therapy (ET). To better understand the size of the US population to which the results of the FALCON trial are applicable, this study estimated the proportion of postmenopausal patients with HR-positive, human epidermal growth factor receptor (HER)2-negative LA/MBC who had received no prior ET, using data from a US medical record database. METHODS This observational study retrospectively analyzed data from the Optum Electronic Health Record Database, obtained from provider groups across the US. Women over 40 years of age with breast cancer diagnoses (January 2008–March 2015) were included, provided they had at least 12 months of recorded medical history prior to index, and at least one recorded physician office visit in that time. Incident (newly diagnosed) and prevalent (previous diagnosis of early or advanced breast cancer) cases were identified. Free-text clinical notes were reviewed using natural language processing to identify a target patient population of postmenopausal women with HR-positive, HER2-negative LA/MBC who had received no prior ET (similar to the FALCON entry criteria); additionally, diagnostic codes or treatment history were also used to identify HR status in the absence of confirmation from the free-text notes. The results of this analysis were extrapolated to estimate the size of the target population at a national level, using statistics from the National Cancer Institute9s Surveillance, Epidemiology and End Results (SEER) database. Results are presented descriptively. RESULTS Overall, 63,962 women with breast cancer were identified, of whom 11,831 had discernible information on menopausal status, HER2 status, HR status, and disease stage. Of these, 1,923 patients were identified with postmenopausal, HR-positive, HER2-negative (or unknown) LA/MBC. Within this population of patients, 70.7% (1,360/1,923) had not previously received ET (88.5% [920/1,040] incident cases; 49.8% [440/883] prevalent cases), representing 11.5% (1,360/11,831) of the total breast cancer population with known menopausal status, HER2 status, HR status, and cancer disease stage information. The proportion of patients with postmenopausal, HR-positive, HER2-negative LA/MBC who had received no prior ET in this sample was extrapolated using US national estimates of the size of the postmenopausal, HR-positive, HER2-negative LA/MBC population taken from SEER. This approach suggests a 5-year limited-duration prevalence of postmenopausal patients with HR-positive, HER2-negative LA/MBC who have received no prior ET of approximately 50,000 cases and an annual incidence of about 15,000 patients. CONCLUSIONS These real-world data provide an estimate of the number of postmenopausal patients with HR-positive, HER2-negative, LA/MBC in the US who have not previously received ET. This population corresponds to the recently completed Phase 3 FALCON trial of fulvestrant compared with anastrozole. Citation Format: Nunes AP, Green E, Dalvi T, Lewis J, Jones N, Seeger JD. 
A real-world evidence study to define the prevalence of endocrine therapy-naive hormone receptor-positive locally advanced or metastatic breast cancer in the US [abstract]. In: Proceedings of the 2016 San Antonio Breast Cancer Symposium; 2016 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2017;77(4 Suppl):Abstract nr P5-08-20. --- paper_title: Pattern-based information extraction from pathology reports for cancer registration paper_content: OBJECTIVE: To evaluate precision and recall rates for the automatic extraction of information from free-text pathology reports. To assess the impact that implementation of pattern-based methods would have on cancer registration completeness. METHOD: Over 300,000 electronic pathology reports were scanned for the extraction of Gleason score, Clark level and Breslow depth, by a number of Perl routines progressively enhanced by a trial-and-error method. An additional test set of 915 reports potentially containing Gleason score was used for evaluation. RESULTS: Values for recall and precision of over 98 and 99%, respectively, were easily reached. Potential increase in cancer staging completeness of up to 32% was proved. CONCLUSIONS: In cancer registration, simple pattern matching applied to free-text documents can be effectively used to improve completeness and accuracy of pathology information. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P, which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. --- paper_title: Automating the Determination of Prostate Cancer Risk Strata From Electronic Medical Records. paper_content: Purpose: Risk stratification underlies system-wide efforts to promote the delivery of appropriate prostate cancer care. Although the elements of risk stratum are available in the electronic medical record, manual data collection is resource intensive. Therefore, we investigated the feasibility and accuracy of an automated data extraction method using natural language processing (NLP) to determine prostate cancer risk stratum. Methods: Manually collected clinical stage, biopsy Gleason score, and preoperative prostate-specific antigen (PSA) values from our prospective prostatectomy database were used to categorize patients as low, intermediate, or high risk by D'Amico risk classification.
NLP algorithms were developed to automate the extraction of the same data points from the electronic medical record, and risk strata were recalculated. The ability of NLP to identify elements sufficient to calculate risk (recall) was calculated, and the accuracy of NLP was compared with that of manually collected data using the weighted Cohen's κ statistic. Results: Of the 2,352 patients with available data who underwent prostatectomy from 2010 to 2014, NLP identified sufficient elements to calculate risk for 1,833 (recall, 78%). NLP had a 91% raw agreement with manual risk stratification (κ = 0.92; 95% CI, 0.90 to 0.93). The κ statistics for PSA, Gleason score, and clinical stage extraction by NLP were 0.86, 0.91, and 0.89, respectively; 91.9% of extracted PSA values were within ± 1.0 ng/mL of the manually collected PSA levels. Conclusion: NLP can achieve more than 90% accuracy on D'Amico risk stratification of localized prostate cancer, with adequate recall. This figure is comparable to other NLP tasks and illustrates the known trade off between recall and accuracy. Automating the collection of risk characteristics could be used to power real-time decision support tools and scale up quality measurement in cancer care. --- paper_title: Information Extraction for Tracking Liver Cancer Patients' Statuses: From Mixture of Clinical Narrative Report Types paper_content: Objective: To provide an efficient way for tracking patients' condition over long periods of time and to facilitate the collection of clinical data from different types of narrative reports, it is critical to develop an efficient method for smoothly analyzing the clinical data accumulated in narrative reports. Materials and Methods: To facilitate liver cancer clinical research, a method was developed for extracting clinical factors from various types of narrative clinical reports, including ultrasound reports, radiology reports, pathology reports, operation notes, admission notes, and discharge summaries. An information extraction (IE) module was developed for tracking disease progression in liver cancer patients over time, and a rule-based classifier was developed for answering whether patients met the clinical research eligibility criteria. The classifier provided the answers and direct/indirect evidence (evidence sentences) for the clinical questions. To evaluate the implemented IE module and ... --- paper_title: An ICT infrastructure to integrate clinical and molecular data in oncology research paper_content: Background: The ONCO-i2b2 platform is a bioinformatics tool designed to integrate clinical and research data and support translational research in oncology. It is implemented by the University of Pavia and the IRCCS Fondazione Maugeri hospital (FSM), and grounded on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) research center. I2b2 has delivered an open source suite based on a data warehouse, which is efficiently interrogated to find sets of interesting patients through a query tool interface. Methods: Onco-i2b2 integrates data coming from multiple sources and allows the users to jointly query them. I2b2 data are then stored in a data warehouse, where facts are hierarchically structured as ontologies.
Onco-i2b2 gathers data from the FSM pathology unit (PU) database and from the hospital biobank and merges them with the clinical information from the hospital information system. Our main effort was to provide a robust integrated research environment, giving a particular emphasis to the integration process and facing different challenges, consecutively listed: biospecimen samples privacy and anonymization; synchronization of the biobank database with the i2b2 data warehouse through a series of Extract, Transform, Load (ETL) operations; development and integration of a Natural Language Processing (NLP) module, to retrieve coded information, such as SNOMED terms and malignant tumors (TNM) classifications, and clinical tests results from unstructured medical records. Furthermore, we have developed an internal SNOMED ontology rested on the NCBO BioPortal web services. Results: Onco-i2b2 manages data of more than 6,500 patients with breast cancer diagnosis collected between 2001 and 2011 (over 390 of them have at least one biological sample in the cancer biobank), more than 47,000 visits and 96,000 observations over 960 medical concepts. Conclusions: Onco-i2b2 is a concrete example of how integrated Information and Communication Technology architecture can be implemented to support translational research. The next steps of our project will involve the extension of its capabilities by implementing new plug-in devoted to bioinformatics data analysis as well as a temporal query module. --- paper_title: Natural Language Processing Improves Identification of Colorectal Cancer Testing in the Electronic Medical Record paper_content: Background. Difficulty identifying patients in need of colorectal cancer (CRC) screening contributes to low screening rates. Objective. To use Electronic Health Record (EHR) data to identify patients with prior CRC testing. Design. A clinical natural language processing (NLP) system was modified to identify 4 CRC tests (colonoscopy, flexible sigmoidoscopy, fecal occult blood testing, and double contrast barium enema) within electronic clinical documentation. Text phrases in clinical notes referencing CRC tests were interpreted by the system to determine whether testing was planned or completed and to estimate the date of completed tests. Setting. Large academic medical center. Patients. 200 patients ≥50 years old who had completed ≥2 non-acute primary care visits within a 1-year period. Measures. Recall and precision of the NLP system, billing records, and human chart review were compared to a reference standard of human review of all available information sources. Results. For identification of all CRC t... --- paper_title: The ONCO-I2b2 Project: Integrating Biobank Information and Clinical Data to Support Translational Research in Oncology paper_content: The University of Pavia and the IRCCS Fondazione Salvatore Maugeri of Pavia (FSM) have recently started an IT initiative to support clinical research in oncology, called ONCO-i2b2. ONCO-i2b2, funded by the Lombardia region, grounds on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) NIH project. Using i2b2 and new software modules purposely designed, data coming from multiple sources are integrated and jointly queried. The core of the integration process stands in retrieving and merging data from the biobank management software and from the FSM hospital information system. The integration process is based on an ontology of the problem domain and on open-source software integration modules.
A Natural Language Processing module has been implemented, too. This module automatically extracts clinical information of oncology patients from unstructured medical records. The system currently manages more than two thousands patients and will be further implemented and improved in the next two years. --- paper_title: Development of a Natural Language Processing Engine to Generate Bladder Cancer Pathology Data for Health Services Research paper_content: OBJECTIVE ::: To take the first step toward assembling population-based cohorts of patients with bladder cancer with longitudinal pathology data, we developed and validated a natural language processing (NLP) engine that abstracts pathology data from full-text pathology reports. ::: ::: ::: METHODS ::: Using 600 bladder pathology reports randomly selected from the Department of Veterans Affairs, we developed and validated an NLP engine to abstract data on histology, invasion (presence vs absence and depth), grade, the presence of muscularis propria, and the presence of carcinoma in situ. Our gold standard was based on an independent review of reports by 2 urologists, followed by adjudication. We assessed the NLP performance by calculating the accuracy, the positive predictive value, and the sensitivity. We subsequently applied the NLP engine to pathology reports from 10,725 patients with bladder cancer. ::: ::: ::: RESULTS ::: When comparing the NLP output to the gold standard, NLP achieved the highest accuracy (0.98) for the presence vs the absence of carcinoma in situ. Accuracy for histology, invasion (presence vs absence), grade, and the presence of muscularis propria ranged from 0.83 to 0.96. The most challenging variable was depth of invasion (accuracy 0.68), with an acceptable positive predictive value for lamina propria (0.82) and for muscularis propria (0.87) invasion. The validated engine was capable of abstracting pathologic characteristics for 99% of the patients with bladder cancer. ::: ::: ::: CONCLUSION ::: NLP had high accuracy for 5 of 6 variables and abstracted data for the vast majority of the patients. This now allows for the assembly of population-based cohorts with longitudinal pathology data. --- paper_title: Facilitating cancer research using natural language processing of pathology reports. paper_content: Many ongoing clinical research projects, such as projects involving studies associated with cancer, involve manual capture of information in surgical pathology reports so that the information can be used to determine the eligibility of recruited patients for the study and to provide other information, such as cancer prognosis. Natural language processing (NLP) systems offer an alternative to automated coding, but pathology reports have certain features that are difficult for NLP systems. This paper describes how a preprocessor was integrated with an existing NLP system (MedLEE) in order to reduce modification to the NLP system and to improve performance. The work was done in conjunction with an ongoing clinical research project that assesses disparities and risks of developing breast cancer for minority women. An evaluation of the system was performed using manually coded data from the research project's database as a gold standard. The evaluation outcome showed that the extended NLP system had a sensitivity of 90.6% and a precision of 91.6%. Results indicated that this system performed satisfactorily for capturing information for the cancer research project. 
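Two of the studies cited earlier in this group rely on simple pattern matching for Gleason score and on rule-based D'Amico risk stratification from extracted PSA, Gleason, and clinical stage values. The sketch below illustrates both ideas in simplified form; the regular expression and the stratification thresholds are rough approximations for illustration, not the validated rules used in those papers.

```python
import re

# A simplified pattern for expressions such as "Gleason score 3+4=7".
GLEASON_RE = re.compile(
    r"gleason\s*(?:score)?\s*[:=]?\s*(\d)\s*\+\s*(\d)(?:\s*=\s*(\d{1,2}))?",
    re.IGNORECASE,
)

def extract_gleason(text: str):
    """Return (primary, secondary, total) Gleason pattern from free text, or None."""
    m = GLEASON_RE.search(text)
    if not m:
        return None
    primary, secondary = int(m.group(1)), int(m.group(2))
    total = int(m.group(3)) if m.group(3) else primary + secondary
    return primary, secondary, total

def damico_risk(psa: float, gleason_total: int, clinical_t: str) -> str:
    """Simplified D'Amico-style risk stratum from PSA, Gleason sum, and cT stage."""
    stage = clinical_t.lower()
    if psa > 20 or gleason_total >= 8 or stage.startswith(("t2c", "t3")):
        return "high"
    if 10 < psa <= 20 or gleason_total == 7 or stage.startswith("t2b"):
        return "intermediate"
    return "low"

if __name__ == "__main__":
    note = "Prostate needle biopsy: adenocarcinoma, Gleason score 3+4=7."
    gleason = extract_gleason(note)
    print(gleason)  # (3, 4, 7)
    print(damico_risk(psa=6.2, gleason_total=gleason[2], clinical_t="T1c"))  # intermediate
```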
--- paper_title: Extent of Risk-Aligned Surveillance for Cancer Recurrence Among Patients With Early-Stage Bladder Cancer paper_content: Importance Cancer care guidelines recommend aligning surveillance frequency with underlying cancer risk, ie, more frequent surveillance for patients at high vs low risk of cancer recurrence. Objective To assess the extent to which such risk-aligned surveillance is practiced within US Department of Veterans Affairs facilities by classifying surveillance patterns for low- vs high-risk patients with early-stage bladder cancer. Design, Setting, and Participants US national retrospective cohort study of a population-based sample of patients diagnosed with low-risk or high-risk early-stage bladder between January 1, 2005, and December 31, 2011, with follow-up through December 31, 2014. Analyses were performed March 2017 to April 2018. The study included all Veterans Affairs facilities (n = 85) where both low- and high-risk patients were treated. Exposures Low-risk vs high-risk cancer status, based on definitions from the European Association of Urology risk stratification guidelines and on data extracted from diagnostic pathology reports via validated natural language processing algorithms. Main Outcomes and Measures Adjusted cystoscopy frequency for low-risk and high-risk patients for each facility, estimated using multilevel modeling. Results The study included 1278 low-risk and 2115 high-risk patients (median [interquartile range] age, 77 [71-82] years; 99% [3368 of 3393] male). Across facilities, the adjusted frequency of surveillance cystoscopy ranged from 3.7 to 6.2 (mean, 4.8) procedures over 2 years per patient for low-risk patients and from 4.6 to 6.0 (mean, 5.4) procedures over 2 years per patient for high-risk patients. In 70 of 85 facilities, surveillance was performed at a comparable frequency for low- and high-risk patients, differing by less than 1 cystoscopy over 2 years. Surveillance frequency among high-risk patients statistically significantly exceeded surveillance among low-risk patients at only 4 facilities. Across all facilities, surveillance frequencies for low- vs high-risk patients were moderately strongly correlated (r = 0.52;P Conclusions and Relevance Patients with early-stage bladder cancer undergo cystoscopic surveillance at comparable frequencies regardless of risk. This finding highlights the need to understand barriers to risk-aligned surveillance with the goal of making it easier for clinicians to deliver it in routine practice. --- paper_title: Automated Cancer Registry Notifications: Validation of a Medical Text Analytics System for Identifying Patients with Cancer from a State-Wide Pathology Repository. paper_content: The paper assesses the utility of Medtex on automating Cancer Registry notifications from narrative histology and cytology reports from the Queensland state-wide pathology information system. A corpus of 45.3 million pathology HL7 messages (including 119,581 histology and cytology reports) from a Queensland pathology repository for the year of 2009 was analysed by Medtex for cancer notification. Reports analysed by Medtex were consolidated at a patient level and compared against patients with notifiable cancers from the Queensland Oncology Repository (QOR). A stratified random sample of 1,000 patients was manually reviewed by a cancer clinical coder to analyse agreements and discrepancies. 
Sensitivity of 96.5% (95% confidence interval: 94.5-97.8%), specificity of 96.5% (95.3-97.4%) and positive predictive value of 83.7% (79.6-86.8%) were achieved for identifying cancer notifiable patients. Medtex achieved high sensitivity and specificity across the breadth of cancers, report types, pathology laboratories and pathologists throughout the State of Queensland. The high sensitivity also resulted in the identification of cancer patients that were not found in the QOR. High sensitivity was at the expense of positive predictive value; however, these cases may be considered as lower priority to Cancer Registries as they can be quickly reviewed. Error analysis revealed that system errors tended to be tumour stream dependent. Medtex is proving to be a promising medical text analytic system. High value cancer information can be generated through intelligent data classification and extraction on large volumes of unstructured pathology reports. --- paper_title: Automatic Extraction of Breast Cancer Information from Clinical Reports paper_content: The majority of clinical data is only available in unstructured text documents. Thus, their automated usage in data-based clinical application scenarios, like quality assurance and clinical decision support by treatment suggestions, is hindered because it requires high manual annotation efforts. In this work, we introduce a system for the automated processing of clinical reports of mamma carcinoma patients that allows for the automatic extraction and seamless processing of relevant textual features. Its underlying information extraction pipeline employs a rule-based grammar approach that is integrated with semantic technologies to determine the relevant information from the patient record. The accuracy of the system, developed with nine thousand clinical documents, reaches accuracy levels of 90% for lymph node status and 69% for the structurally most complex feature, the hormone status. --- paper_title: Machine learning to parse breast pathology reports in Chinese paper_content: Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The Natural language processing algorithm was tested by using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. 
In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages. --- paper_title: Automated extraction and normalization of findings from cancer-related free-text radiology reports. paper_content: We describe the performance of a particular natural language processing system that uses knowledge vectors to extract findings from radiology reports. LifeCode® (A-Life Medical, Inc.) has been successfully coding reports for billing purposes for several years. In this study, we describe the use of LifeCode® to code all findings within a set of 500 cancer-related radiology reports against a test set in which all findings were manually tagged. The system was trained with 1400 reports prior to running the test set. Results: LifeCode® had a recall of 84.5% and precision of 95.7% in the coding of cancer-related radiology report findings. Conclusion: Despite the use of a modest sized training set and minimal training iterations, when applied to cancer-related reports the system achieved recall and precision measures comparable to other reputable natural language processors in this domain. --- paper_title: Creating a rule based system for text mining of Norwegian breast cancer pathology reports paper_content: National cancer registries collect cancer related information from multiple sources and make it available for research. Part of this information originates from pathology reports, and in this pre-study the possibility of a system for automatic extraction of information from Norwegian pathology reports is investigated. A set of 40 pathology reports describing breast cancer tissue samples has been used to develop a rule based system for information extraction. To validate the performance of this system its output has been compared to the data produced by experts doing manual encoding of the same pathology reports. On average, a precision of 80%, a recall of 98% and an F-score of 86% has been achieved, showing that such a system is indeed feasible. --- paper_title: Validation of natural language processing (NLP) for automated ascertainment of EGFR and ALK tests in SEER cases of non-small cell lung cancer (NSCLC). paper_content: Background: The Surveillance, Epidemiology, and End Results (SEER) registries lack information on the Epidermal Growth Factor Receptor (EGFR) mutation and Anaplastic Lymphoma Kinase (ALK) gene ... --- paper_title: Extracting timing and status descriptors for colonoscopy testing from electronic medical records paper_content: Colorectal cancer (CRC) screening rates are low despite confirmed benefits. The authors investigated the use of natural language processing (NLP) to identify previous colonoscopy screening in electronic records from a random sample of 200 patients at least 50 years old. The authors developed algorithms to recognize temporal expressions and ‘status indicators’, such as ‘patient refused’, or ‘test scheduled’. The new methods were added to the existing KnowledgeMap concept identifier system, and the resulting system was used to parse electronic medical records (EMR) to detect completed colonoscopies.
Using as the ‘gold standard’ expert physicians’ manual review of EMR notes, the system identified timing references with a recall of 0.91 and precision of 0.95, colonoscopy status indicators with a recall of 0.82 and precision of 0.95, and references to actually completed colonoscopies with recall of 0.93 and precision of 0.95. The system was superior to using colonoscopy billing codes alone. Health services researchers and clinicians may find NLP a useful adjunct to traditional methods to detect CRC screening status. Further investigations must validate extension of NLP approaches for other types of CRC screening applications. --- paper_title: Identification of Patients with Family History of Pancreatic Cancer - Investigation of an NLP System Portability paper_content: In this study we have developed a rule-based natural language processing (NLP) system to identify patients with family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6% and 82.6% respectively. Negation detection resulted in precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however customization of the algorithm on the new dataset improves its performance. --- paper_title: Using Natural Language Processing to Extract Abnormal Results From Cancer Screening Reports paper_content: OBJECTIVES: Numerous studies show that follow-up of abnormal cancer screening results, such as mammography and Papanicolaou (Pap) smears, is frequently not performed in a timely manner. A contributing factor is that abnormal results may go unrecognized because they are buried in free-text documents in electronic medical records (EMRs), and, as a result, patients are lost to follow-up. By identifying abnormal results from free-text reports in EMRs and generating alerts to clinicians, natural language processing (NLP) technology has the potential for improving patient care. The goal of the current study was to evaluate the performance of NLP software for extracting abnormal results from free-text mammography and Pap smear reports stored in an EMR. METHODS: A sample of 421 and 500 free-text mammography and Pap reports, respectively, were manually reviewed by a physician, and the results were categorized for each report. We tested the performance of NLP to extract results from the reports. The 2 assessments (criterion standard versus NLP) were compared to determine the precision, recall, and accuracy of NLP. RESULTS: When NLP was compared with manual review for mammography reports, the results were as follows: precision, 98% (96%-99%); recall, 100% (98%-100%); and accuracy, 98% (96%-99%). For Pap smear reports, the precision, recall, and accuracy of NLP were all 100%. CONCLUSIONS: Our study developed NLP models that accurately extract abnormal results from mammography and Pap smear reports.
Plans include using NLP technology to generate real-time alerts and reminders for providers to facilitate timely follow-up of abnormal results. --- paper_title: Identifying primary and recurrent cancers using a SAS-based natural language processing algorithm paper_content: Objective: Significant limitations exist in the timely and complete identification of primary and recurrent cancers for clinical and epidemiologic research. A SAS-based coding, extraction, and nomenclature tool (SCENT) was developed to address this problem. Materials and methods: SCENT employs hierarchical classification rules to identify and extract information from electronic pathology reports. Reports are analyzed and coded using a dictionary of clinical concepts and associated SNOMED codes. To assess the accuracy of SCENT, validation was conducted using manual review of pathology reports from a random sample of 400 breast and 400 prostate cancer patients diagnosed at Kaiser Permanente Southern California. Trained abstractors classified the malignancy status of each report. Results: Classifications of SCENT were highly concordant with those of abstractors, achieving κ of 0.96 and 0.95 in the breast and prostate cancer groups, respectively. SCENT identified 51 of 54 new primary and 60 of 61 recurrent cancer cases across both groups, with only three false positives in 792 true benign cases. Measures of sensitivity, specificity, positive predictive value, and negative predictive value exceeded 94% in both cancer groups. Discussion: Favorable validation results suggest that SCENT can be used to identify, extract, and code information from pathology report text. Consequently, SCENT has wide applicability in research and clinical care. Further assessment will be needed to validate performance with other clinical text sources, particularly those with greater linguistic variability. Conclusion: SCENT is proof of concept for SAS-based natural language processing applications that can be easily shared between institutions and used to support clinical and epidemiologic research. --- paper_title: Pattern-based information extraction from pathology reports for cancer registration paper_content: OBJECTIVE: To evaluate precision and recall rates for the automatic extraction of information from free-text pathology reports. To assess the impact that implementation of pattern-based methods would have on cancer registration completeness. METHOD: Over 300,000 electronic pathology reports were scanned for the extraction of Gleason score, Clark level and Breslow depth, by a number of Perl routines progressively enhanced by a trial-and-error method. An additional test set of 915 reports potentially containing Gleason score was used for evaluation. RESULTS: Values for recall and precision of over 98 and 99%, respectively, were easily reached. Potential increase in cancer staging completeness of up to 32% was proved. CONCLUSIONS: In cancer registration, simple pattern matching applied to free-text documents can be effectively used to improve completeness and accuracy of pathology information. --- paper_title: Validation of natural language processing to extract breast cancer pathology procedures and results paper_content: Background: Pathology reports typically require manual review to abstract research data.
We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%), and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad of a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of the gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance. --- paper_title: Tumor reference resolution and characteristic extraction in radiology reports for liver cancer stage prediction paper_content: Highlights: Reference resolution is essential for understanding radiology report free text. Largest size and tumor count prediction improves with reference resolution. Different applications have different tolerances for reference resolution errors. Background: Anaphoric references occur ubiquitously in clinical narrative text. However, the problem, still very much an open challenge, is typically less aggressively focused on in clinical text domain applications. Furthermore, existing research on reference resolution is often conducted disjointly from real-world motivating tasks. Objective: In this paper, we present our machine-learning system that automatically performs reference resolution and a rule-based system to extract tumor characteristics, with component-based and end-to-end evaluations. Specifically, our goal was to build an algorithm that takes in tumor templates and outputs tumor characteristic, e.g. tumor number and largest tumor sizes, necessary for identifying patient liver cancer stage phenotypes. Results: Our reference resolution system reached a modest performance of 0.66 F1 for the averaged MUC, B-cubed, and CEAF scores for coreference resolution and 0.43 F1 for particularization relations. However, even this modest performance was helpful to increase the automatic tumor characteristics annotation substantially over no reference resolution. Conclusion: Experiments revealed the benefit of reference resolution even for relatively simple tumor characteristics variables such as largest tumor size. However we found that different overall variables had different tolerances to reference resolution upstream errors, highlighting the need to characterize systems by end-to-end evaluations.
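The pattern-based registration study a few entries above reports that simple Perl regular expressions, refined by trial and error, recover Gleason score, Clark level and Breslow depth with recall and precision above 98%. The fragment below is a rough Python analogue written for this review; the two patterns are simplified stand-ins for the much more extensive routines the study describes and would need broadening before real use.

```python
import re
from typing import Optional

# Simplified analogues of the pattern-matching routines described above; the
# patterns here are illustrative and far less exhaustive than the originals.
GLEASON_RE = re.compile(
    r"gleason\s*(?:score|grade)?\s*[:=]?\s*(?:(\d)\s*\+\s*(\d)\s*=\s*)?(\d{1,2})",
    re.IGNORECASE,
)
BRESLOW_RE = re.compile(
    r"breslow(?:\s*(?:depth|thickness))?\s*[:=]?\s*(\d+(?:\.\d+)?)\s*mm",
    re.IGNORECASE,
)

def extract_gleason(text: str) -> Optional[int]:
    """Return the total Gleason score if a recognizable pattern is present."""
    m = GLEASON_RE.search(text)
    if not m:
        return None
    primary, secondary, total = m.groups()
    if primary and secondary:
        return int(primary) + int(secondary)
    return int(total)

def extract_breslow_mm(text: str) -> Optional[float]:
    """Return Breslow depth in millimetres, if stated."""
    m = BRESLOW_RE.search(text)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    report = "Prostate, needle biopsy: adenocarcinoma, Gleason score 3+4=7."
    print(extract_gleason(report))                            # 7
    print(extract_breslow_mm("Breslow thickness: 1.2 mm"))    # 1.2
```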
--- paper_title: Hierarchical attention networks for information extraction from cancer pathology reports paper_content: Objective: We explored how a deep learning (DL) approach based on hierarchical attention networks (HANs) can improve model performance for multiple information extraction tasks from unstructured cancer pathology reports compared to conventional methods that do not sufficiently capture syntactic and semantic contexts from free-text documents. Materials and Methods: Data for our analyses were obtained from 942 deidentified pathology reports collected by the National Cancer Institute Surveillance, Epidemiology, and End Results program. The HAN was implemented for 2 information extraction tasks: (1) primary site, matched to 12 International Classification of Diseases for Oncology topography codes (7 breast, 5 lung primary sites), and (2) histological grade classification, matched to G1-G4. Model performance metrics were compared to conventional machine learning (ML) approaches including naive Bayes, logistic regression, support vector machine, random forest, and extreme gradient boosting, and other DL models, including a recurrent neural network (RNN), a recurrent neural network with attention (RNN w/A), and a convolutional neural network. Results: Our results demonstrate that for both information tasks, HAN performed significantly better compared to the conventional ML and DL techniques. In particular, across the 2 tasks, the mean micro and macro F-scores for the HAN with pretraining were (0.852, 0.708), compared to naive Bayes (0.518, 0.213), logistic regression (0.682, 0.453), support vector machine (0.634, 0.434), random forest (0.698, 0.508), extreme gradient boosting (0.696, 0.522), RNN (0.505, 0.301), RNN w/A (0.637, 0.471), and convolutional neural network (0.714, 0.460). Conclusions: HAN-based DL models show promise in information abstraction tasks within unstructured clinical pathology reports. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. --- paper_title: Classifying tumor event attributes in radiology reports paper_content: Radiology reports contain vital diagnostic information that characterizes patient disease progression. However, information from reports is represented in free text, which is difficult to query against for secondary use.
Automatic extraction of important information, such as tumor events using natural language processing, offers possibilities in improved clinical decision support, cohort identification, and retrospective evidence-based research for cancer patients. The goal of this work was to classify tumor event attributes: negation, temporality, and malignancy, using biomedical ontology and linguistically enriched features. We report our results on an annotated corpus of 101 hepatocellular carcinoma patient radiology reports, and show that the improved classification improves overall template structuring. Classification performances for negation identification, past temporality classification, and malignancy classification were at 0.94, 0.62, and 0.77 F1, respectively. Incorporating the attributes into full templates led to an improvement of 0.72 F1 for tumor-related events over a baseline of 0.65 F1. Improvement of negation, malignancy, and temporality classifications led to significant improvements in template extraction for the majority of categories. We present our machine-learning approach to identifying these several tumor event attributes from radiology reports, as well as highlight challenges and areas for improvement. --- paper_title: Information Extraction for Tracking Liver Cancer Patients' Statuses: From Mixture of Clinical Narrative Report Types paper_content: Objective: To provide an efficient way for tracking patients' condition over long periods of time and to facilitate the collection of clinical data from different types of narrative reports, it is critical to develop an efficient method for smoothly analyzing the clinical data accumulated in narrative reports. Materials and Methods: To facilitate liver cancer clinical research, a method was developed for extracting clinical factors from various types of narrative clinical reports, including ultrasound reports, radiology reports, pathology reports, operation notes, admission notes, and discharge summaries. An information extraction (IE) module was developed for tracking disease progression in liver cancer patients over time, and a rule-based classifier was developed for answering whether patients met the clinical research eligibility criteria. The classifier provided the answers and direct/indirect evidence (evidence sentences) for the clinical questions. To evaluate the implemented IE module and ... --- paper_title: Pathologic findings in reduction mammoplasty procedures identified by natural language processing of breast pathology reports: A surrogate for the population incidence of cancer and high risk lesions. paper_content: Background: Breast reduction surgery removes a random sample of breast tissue in otherwise asymptomatic women and thus provides a method to evaluate the background incidence of breast patholo... --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients.
The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. --- paper_title: Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports paper_content: Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach.
We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports. --- paper_title: Pathologic findings in reduction mammoplasty specimens: a surrogate for the population prevalence of breast cancer and high-risk lesions paper_content: Mammoplasty removes random samples of breast tissue from asymptomatic women providing a unique method for evaluating background prevalence of breast pathology in normal population. Our goal was to identify the rate of atypical breast lesions and cancers in women of various ages in the largest mammoplasty cohort reported to date. We analyzed pathologic reports from patients undergoing bilateral mammoplasty, using natural language processing algorithm, verified by human review. Patients with a prior history of breast cancer or atypia were excluded. A total of 4775 patients were deemed eligible. Median age was 40 (range 13–86) and was higher in patients with any incidental finding compared to patients with normal reports (52 vs. 39 years, p = 0.0001). Pathological findings were detected in 7.06% (337) of procedures. Benign high-risk lesions were found in 299 patients (6.26%). Invasive carcinoma and ductal carcinoma in situ were detected in 15 (0.31%) and 23 (0.48%) patients, respectively. The rate of atypias and cancers increased with age. The overall rate of abnormal findings in asymptomatic patients undergoing mammoplasty was 7.06%, increasing with age. As these results are based on random sample of breast tissue, they likely underestimate the prevalence of abnormal findings in asymptomatic women. --- paper_title: Tumor information extraction in radiology reports for hepatocellular carcinoma patients paper_content: Hepatocellular carcinoma (HCC) is a deadly disease affecting the liver for which there are many available therapies. Targeting treatments towards specific patient groups necessitates defining patients by stage of disease. Criteria for such stagings include information on tumor number, size, and anatomic location, typically only found in narrative clinical text in the electronic medical record (EMR). Natural language processing (NLP) offers an automatic and scale-able means to extract this information, which can further evidence-based research. In this paper, we created a corpus of 101 radiology reports annotated for tumor information. Afterwards we applied machine learning algorithms to extract tumor information. Our inter-annotator partial match agreement scored at 0.93 and 0.90 F1 for entities and relations, respectively. Based on the annotated corpus, our sequential labeling entity extraction achieved 0.87 F1 partial match, and our maximum entropy classification relation extraction achieved scores 0.89 and 0. 74 F1 with gold and system entities, respectively. 
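Several of the preceding entries compare deep models (CNNs, hierarchical attention networks) against a conventional term-frequency baseline for assigning ICD-O-3 topography codes, scored with micro- and macro-averaged F1. The sketch below shows what such a baseline typically looks like; the four-report corpus and its code labels are invented for illustration, and this is not the pipeline used in the cited experiments.

```python
# A minimal sketch of a "conventional term frequency vector" baseline:
# TF-IDF features plus a linear classifier assigning ICD-O-3 topography
# codes to pathology report text. Toy data, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

reports = [
    "infiltrating ductal carcinoma of the left breast, upper outer quadrant",
    "adenocarcinoma involving the upper lobe of the right lung",
    "lobular carcinoma in situ of the breast, nipple margin uninvolved",
    "squamous cell carcinoma, lower lobe of lung, pleura not involved",
]
codes = ["C50.4", "C34.1", "C50.0", "C34.3"]  # illustrative topography labels

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(reports, codes)

# Scored on the training texts themselves, purely to show the metric calls
# that produce the micro/macro F-scores quoted in the studies above.
predicted = model.predict(reports)
print("micro-F1:", f1_score(codes, predicted, average="micro"))
print("macro-F1:", f1_score(codes, predicted, average="macro"))
print(model.predict(["ductal carcinoma, right breast biopsy"])[0])
```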
--- paper_title: Automated Extraction of Grade, Stage, and Quality Information From Transurethral Resection of Bladder Tumor Pathology Reports Using Natural Language Processing paper_content: Purpose: Bladder cancer is initially diagnosed and staged with a transurethral resection of bladder tumor (TURBT). Patient survival is dependent on appropriate sampling of layers of the bladder, but pathology reports are dictated as free text, making large-scale data extraction for quality improvement challenging. We sought to automate extraction of stage, grade, and quality information from TURBT pathology reports using natural language processing (NLP). Methods: Patients undergoing TURBT were retrospectively identified using the Northwestern Enterprise Data Warehouse. An NLP algorithm was then created to extract information from free-text pathology reports and was iteratively improved using a training set of manually reviewed TURBTs. NLP accuracy was then validated using another set of manually reviewed TURBTs, and reliability was calculated using Cohen’s κ. Results: Of 3,042 TURBTs identified from 2006 to 2016, 39% were classified as benign, 35% as Ta, 11% as T1, 4% as T2, and 10% as isolated carcinoma in situ... --- paper_title: Discerning Tumor Status from Unstructured MRI Reports—Completeness of Information in Existing Reports and Utility of Automated Natural Language Processing paper_content: Information in electronic medical records is often in an unstructured free-text format. This format presents challenges for expedient data retrieval and may fail to convey important findings.
Natural language processing (NLP) is an emerging technique for rapid and efficient clinical data retrieval. While proven in disease detection, the utility of NLP in discerning disease progression from free-text reports is untested. We aimed to (1) assess whether unstructured radiology reports contained sufficient information for tumor status classification; (2) develop an NLP-based data extraction tool to determine tumor status from unstructured reports; and (3) compare NLP and human tumor status classification outcomes. Consecutive follow-up brain tumor magnetic resonance imaging reports (2000–2007) from a tertiary center were manually annotated using consensus guidelines on tumor status. Reports were randomized to NLP training (70%) or testing (30%) groups. The NLP tool utilized a support vector machines model with statistical and rule-based outcomes. Most reports had sufficient information for tumor status classification, although 0.8% did not describe status despite reference to prior examinations. Tumor size was unreported in 68.7% of documents, while 50.3% lacked data on change magnitude when there was detectable progression or regression. Using retrospective human classification as the gold standard, NLP achieved 80.6% sensitivity and 91.6% specificity for tumor status determination (mean positive predictive value, 82.4%; negative predictive value, 92.0%). In conclusion, most reports contained sufficient information for tumor status determination, though variable features were used to describe status. NLP demonstrated good accuracy for tumor status classification and may have novel application for automated disease status classification from electronic databases. --- paper_title: Symbolic rule-based classification of lung cancer stages from free-text pathology reports paper_content: Objective: To classify automatically lung tumor–node–metastases (TNM) cancer stages from free-text pathology reports using symbolic rule-based classification. Design: By exploiting report substructure and the symbolic manipulation of systematized nomenclature of medicine–clinical terms (SNOMED CT) concepts in reports, statements in free text can be evaluated for relevance against factors relating to the staging guidelines. Post-coordinated SNOMED CT expressions based on templates were defined and populated by concepts in reports, and tested for subsumption by staging factors. The subsumption results were used to build logic according to the staging guidelines to calculate the TNM stage. Measurements: The accuracy measure and confusion matrices were used to evaluate the TNM stages classified by the symbolic rule-based system. The system was evaluated against a database of multidisciplinary team staging decisions and a machine learning-based text classification system using support vector machines. Results: Overall accuracy on a corpus of pathology reports for 718 lung cancer patients against a database of pathological TNM staging decisions were 72%, 78%, and 94% for T, N, and M staging, respectively. The system's performance was also comparable to support vector machine classification approaches. Conclusion: A system to classify lung TNM stages from free-text pathology reports was developed, and it was verified that the symbolic rule-based approach using SNOMED CT can be used for the extraction of key lung cancer characteristics from free-text reports.
Future work will investigate the applicability of using the proposed methodology for extracting other cancer characteristics and types. --- paper_title: An ICT infrastructure to integrate clinical and molecular data in oncology research paper_content: Background: The ONCO-i2b2 platform is a bioinformatics tool designed to integrate clinical and research data and support translational research in oncology. It is implemented by the University of Pavia and the IRCCS Fondazione Maugeri hospital (FSM), and grounded on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) research center. I2b2 has delivered an open source suite based on a data warehouse, which is efficiently interrogated to find sets of interesting patients through a query tool interface. Methods: Onco-i2b2 integrates data coming from multiple sources and allows the users to jointly query them. I2b2 data are then stored in a data warehouse, where facts are hierarchically structured as ontologies. Onco-i2b2 gathers data from the FSM pathology unit (PU) database and from the hospital biobank and merges them with the clinical information from the hospital information system. Our main effort was to provide a robust integrated research environment, giving a particular emphasis to the integration process and facing different challenges, consecutively listed: biospecimen samples privacy and anonymization; synchronization of the biobank database with the i2b2 data warehouse through a series of Extract, Transform, Load (ETL) operations; development and integration of a Natural Language Processing (NLP) module, to retrieve coded information, such as SNOMED terms and malignant tumors (TNM) classifications, and clinical tests results from unstructured medical records. Furthermore, we have developed an internal SNOMED ontology rested on the NCBO BioPortal web services. Results: Onco-i2b2 manages data of more than 6,500 patients with breast cancer diagnosis collected between 2001 and 2011 (over 390 of them have at least one biological sample in the cancer biobank), more than 47,000 visits and 96,000 observations over 960 medical concepts. Conclusions: Onco-i2b2 is a concrete example of how integrated Information and Communication Technology architecture can be implemented to support translational research. The next steps of our project will involve the extension of its capabilities by implementing new plug-in devoted to bioinformatics data analysis as well as a temporal query module.
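Both the symbolic rule-based staging work and the ONCO-i2b2 NLP module above revolve around pulling TNM classifications out of free text and then applying guideline logic to them. The sketch below, written for this review, shows the general shape of such a step: a regular expression that captures a TNM triple and a small rule table that maps it to a stage label. The regex and the table entries are simplified placeholders, not the SNOMED CT subsumption logic or the clinical staging tables used in the cited systems.

```python
import re
from typing import Optional, Tuple

# Illustrative extraction of TNM classifications (e.g. "pT2 N1 M0") from free
# text. Both the pattern and the tiny stage-group table are placeholders.
TNM_RE = re.compile(
    r"\b[ypc]?T(?P<t>[0-4][a-c]?|is|x)\s*,?\s*N(?P<n>[0-3]|x)\s*,?\s*M(?P<m>[01]|x)\b",
    re.IGNORECASE,
)

def extract_tnm(text: str) -> Optional[Tuple[str, str, str]]:
    m = TNM_RE.search(text)
    if not m:
        return None
    return (m.group("t").lower(), m.group("n").lower(), m.group("m").lower())

# Placeholder rule table mapping a few (T, N, M) combinations to a stage label;
# a real system would encode the full guideline tables here.
STAGE_RULES = {
    ("1a", "0", "0"): "IA",
    ("2", "0", "0"): "IB",   # placeholder assignment, illustration only
    ("2", "1", "0"): "IIB",  # placeholder assignment, illustration only
}

def stage_from_tnm(tnm: Tuple[str, str, str]) -> str:
    if tnm[2] == "1":
        return "IV"  # distant metastasis is conventionally grouped as stage IV
    return STAGE_RULES.get(tnm, "unresolved (rule not encoded)")

if __name__ == "__main__":
    sentence = "Final pathologic staging: pT2 N1 M0."
    tnm = extract_tnm(sentence)
    print(tnm, "->", stage_from_tnm(tnm) if tnm else "no TNM found")
```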
--- paper_title: A natural language processing algorithm to measure quality prostate cancer care. paper_content: Background: Electronic health records (EHRs) are a widely adopted but underutilized source of data for systematic assessment of healthcare quality. Barriers for use of this data source include its vast complexity, lack of structure, and the lack of use of standardized vocabulary and terminology by clinicians. This project aims to develop generalizable algorithms to extract useful knowledge regarding prostate cancer quality metrics from EHRs. Methods: We used EHR ICD-9/10 codes to identify prostate cancer patients receiving care at our academic medical center. Patients were confirmed in the California Cancer Registry (CCR), which provided data on tumor characteristics, treatment data, treatment outcomes and survival. We focused on three potential pretreatment process quality measures, which included documentation within 6 months prior to initial treatment of prostate-specific antigen (PSA), digital rectal exam (DRE) performance, and Gleason score. Each quality metric was defined using target terms and c... --- paper_title: caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research. paper_content: The authors report on the development of the Cancer Tissue Information Extraction System (caTIES), an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multicenter collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs. --- paper_title: Automated Information Extraction on Treatment and Prognosis for Non–Small Cell Lung Cancer Radiotherapy Patients: Clinical Study paper_content: Background: In outcome studies of oncology patients undergoing radiation, researchers extract valuable information from medical records generated before, during, and after radiotherapy visits, such as survival data, toxicities, and complications. Clinical studies rely heavily on these data to correlate the treatment regimen with the prognosis to develop evidence-based radiation therapy paradigms. These data are available mainly in forms of narrative texts or table formats with heterogeneous vocabularies. Manual extraction of the related information from these data can be time consuming and labor intensive, which is not ideal for large studies. Objective: The objective of this study was to adapt the interactive information extraction platform Information and Data Extraction using Adaptive Learning (IDEAL-X) to extract treatment and prognosis data for patients with locally advanced or inoperable non–small cell lung cancer (NSCLC). Methods: We transformed patient treatment and prognosis documents into normalized structured forms using the IDEAL-X system for easy data navigation. The adaptive learning and user-customized controlled toxicity vocabularies were applied to extract categorized treatment and prognosis data, so as to generate structured output. Results: In total, we extracted data from 261 treatment and prognosis documents relating to 50 patients, with overall precision and recall more than 93% and 83%, respectively. For toxicity information extractions, which are important to study patient posttreatment side effects and quality of life, the precision and recall achieved 95.7% and 94.5% respectively. Conclusions: The IDEAL-X system is capable of extracting study data regarding NSCLC chemoradiation patients with significant accuracy and effectiveness, and therefore can be used in large-scale radiotherapy clinical data studies. --- paper_title: PS2-26: Coordinating Heterogeneous Data and Mixed Collection Methods to Support Population-Based Cancer Screening Research paper_content: Background/Aims: The central goal of Population-Based Research Optimizing Screening through Personalized Regimens (PROSPR), a recently-funded NCI initiative, is to develop multi-site, transdisciplinary research to improve the screening process for breast, colon, and cervical cancer. To support this goal, we aim to collect, document, and manage data for the entire colorectal cancer (CRC) screening process at Group Health (GH), an integrated health system and PROSPR Research Center. We describe the data sources, types, and collection methods being used to assemble the breadth of relevant information on patients, providers, tests, pathology, treatment, and outcomes this effort requires. --- paper_title: Toward Electronic Surveillance of Invasive Mold Diseases in Hematology-Oncology Patients: An Expert System Combining Natural Language Processing of Chest Computed Tomography Reports, Microbiology, and Antifungal Drug Data paper_content: Purpose: Prospective epidemiologic surveillance of invasive mold disease (IMD) in hematology patients is hampered by the absence of a reliable laboratory prompt. This study develops an expert system for electronic surveillance of IMD that combines probabilities using natural language processing (NLP) of computed tomography (CT) reports with microbiology and antifungal drug data to improve prediction of IMD. Methods: Microbiology indicators and antifungal drug–dispensing data were extracted from hospital information systems at three tertiary hospitals for 123 hematology-oncology patients. Of this group, 64 case patients had 26 probable/proven IMD according to international definitions, and 59 patients were uninfected controls. Derived probabilities from NLP combined with medical expertise identified patients at high likelihood of IMD, with remaining patients processed by a machine-learning classifier trained on all available features. Results: Compared with the baseline text classifier, the expert system that inco... --- paper_title: Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry paper_content: Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute’s Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text.
We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a “gold standard” based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings vary by data element and modality (e.g., suspicious calcification noted in 2.6 % of screening mammograms, 12.1 % of diagnostic mammograms, and 9.4 % of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), range from 0.8 to1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. --- paper_title: Validation of natural language processing to extract breast cancer pathology procedures and results paper_content: Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%), and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad of a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of the gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance. --- paper_title: Deep learning analytics for diagnostic support of breast cancer disease management paper_content: Breast cancer continues to be one of the leading causes of cancer death among women. Mammogram is the standard of care for screening and diagnosis of breast cancer. 
The American College of Radiology developed the Breast Imaging Reporting and Data System (BI-RADS) lexicon to standardize mammographic reporting to assess cancer risk and facilitate biopsy decision-making. However, because substantial inter-observer variability remains in the application of the BI-RADS lexicon, including inappropriate term usage and missing data, current biopsy decision-making accuracy using the unstructured free text or semi-structured reports varies greatly. Hence, incorporating novel and accurate technique into breast cancer decision-making data is critical. Here, we combined natural language processing and deep learning methods to develop an analytic model that targets well-characterized and defined specific breast suspicious patient subgroups rather than a broad heterogeneous group for diagnostic support of breast cancer management. --- paper_title: Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry paper_content: Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute’s Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a “gold standard” based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings vary by data element and modality (e.g., suspicious calcification noted in 2.6 % of screening mammograms, 12.1 % of diagnostic mammograms, and 9.4 % of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), range from 0.8 to1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. --- paper_title: Deep learning analytics for diagnostic support of breast cancer disease management paper_content: Breast cancer continues to be one of the leading causes of cancer death among women. Mammogram is the standard of care for screening and diagnosis of breast cancer. The American College of Radiology developed the Breast Imaging Reporting and Data System (BI-RADS) lexicon to standardize mammographic reporting to assess cancer risk and facilitate biopsy decision-making. 
However, because substantial inter-observer variability remains in the application of the BI-RADS lexicon, including inappropriate term usage and missing data, current biopsy decision-making accuracy using the unstructured free text or semi-structured reports varies greatly. Hence, incorporating novel and accurate technique into breast cancer decision-making data is critical. Here, we combined natural language processing and deep learning methods to develop an analytic model that targets well-characterized and defined specific breast suspicious patient subgroups rather than a broad heterogeneous group for diagnostic support of breast cancer management. --- paper_title: Automatic Extraction of Breast Cancer Information from Clinical Reports paper_content: The majority of clinical data is only available in unstructured text documents. Thus, their automated usage in data-based clinical application scenarios, like quality assurance and clinical decision support by treatment suggestions, is hindered because it requires high manual annotation efforts. In this work, we introduce a system for the automated processing of clinical reports of mamma carcinoma patients that allows for the automatic extraction and seamless processing of relevant textual features. Its underlying information extraction pipeline employs a rule-based grammar approach that is integrated with semantic technologies to determine the relevant information from the patient record. The accuracy of the system, developed with nine thousand clinical documents, reaches accuracy levels of 90% for lymph node status and 69% for the structurally most complex feature, the hormone status. --- paper_title: Using Natural Language Processing to Extract Abnormal Results From Cancer Screening Reports paper_content: OBJECTIVES: Numerous studies show that follow-up of abnormal cancer screening results, such as mammography and Papanicolaou (Pap) smears, is frequently not performed in a timely manner. A contributing factor is that abnormal results may go unrecognized because they are buried in free-text documents in electronic medical records (EMRs), and, as a result, patients are lost to follow-up. By identifying abnormal results from free-text reports in EMRs and generating alerts to clinicians, natural language processing (NLP) technology has the potential for improving patient care. The goal of the current study was to evaluate the performance of NLP software for extracting abnormal results from free-text mammography and Pap smear reports stored in an EMR. METHODS: A sample of 421 and 500 free-text mammography and Pap reports, respectively, were manually reviewed by a physician, and the results were categorized for each report. We tested the performance of NLP to extract results from the reports. The 2 assessments (criterion standard versus NLP) were compared to determine the precision, recall, and accuracy of NLP. RESULTS: When NLP was compared with manual review for mammography reports, the results were as follows: precision, 98% (96%-99%); recall, 100% (98%-100%); and accuracy, 98% (96%-99%). For Pap smear reports, the precision, recall, and accuracy of NLP were all 100%. CONCLUSIONS: Our study developed NLP models that accurately extract abnormal results from mammography and Pap smear reports. Plans include using NLP technology to generate real-time alerts and reminders for providers to facilitate timely follow-up of abnormal results.
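The validation studies collected in this reference list (the mammography/Pap smear study above, the imaging-findings registry tool, and others) all report the same handful of retrieval metrics, computed by comparing NLP output against a manually reviewed criterion standard. As a minimal illustrative sketch, with made-up counts rather than data from any of the cited papers, the metrics reduce to:

def extraction_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), accuracy and F-measure from a 2x2 comparison
    of NLP output against a manually reviewed gold standard."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f_measure": f_measure}

# Made-up counts for illustration only (not figures from any cited study):
# 90 abnormal reports correctly flagged, 2 false alarms, 1 missed, 310 true negatives.
print(extraction_metrics(tp=90, fp=2, fn=1, tn=310))

The parenthetical ranges quoted in the abstracts, e.g. 98% (96%-99%), are confidence intervals around these proportions.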
--- paper_title: Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry paper_content: Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute’s Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a “gold standard” based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings vary by data element and modality (e.g., suspicious calcification noted in 2.6 % of screening mammograms, 12.1 % of diagnostic mammograms, and 9.4 % of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), range from 0.8 to1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. --- paper_title: Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model paper_content: We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets. 
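Several systems above (the rule-based grammar pipeline for mamma carcinoma reports, MedTAS/P) populate a structured representation of a tumor from free text using dictionaries and rules. The fragment below is only a schematic sketch of that general approach; the vocabularies, the report text, and the slot names are invented for illustration and do not come from any of the cited pipelines:

import re

# Toy vocabularies standing in for the controlled terminologies the cited systems
# map to (e.g., histologies and anatomical sites); the terms here are illustrative only.
HISTOLOGIES = ["invasive ductal carcinoma", "ductal carcinoma in situ", "adenocarcinoma"]
SITES = ["left breast", "right breast", "colon", "lymph node"]

def extract_tumor_record(report_text):
    """Instantiate a minimal 'tumor' record from a pathology report string."""
    text = report_text.lower()
    record = {"histology": None, "anatomical_site": None, "grade": None}
    for term in HISTOLOGIES:
        if term in text:
            record["histology"] = term
            break
    for term in SITES:
        if term in text:
            record["anatomical_site"] = term
            break
    grade = re.search(r"grade\s+(i{1,3}|[1-3])\b", text)
    if grade:
        record["grade"] = grade.group(1).upper()
    return record

report = "Left breast, core biopsy: invasive ductal carcinoma, histologic grade 2."
print(extract_tumor_record(report))
# {'histology': 'invasive ductal carcinoma', 'anatomical_site': 'left breast', 'grade': '2'}

The cited systems additionally map matched terms to controlled terminologies and handle negation and report structure, which this sketch omits.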
--- paper_title: Information Extraction for Tracking Liver Cancer Patients' Statuses: From Mixture of Clinical Narrative Report Types paper_content: Objective: To provide an efficient way for tracking patients' condition over long periods of time and to facilitate the collection of clinical data from different types of narrative reports, it is critical to develop an efficient method for smoothly analyzing the clinical data accumulated in narrative reports. Materials and Methods: To facilitate liver cancer clinical research, a method was developed for extracting clinical factors from various types of narrative clinical reports, including ultrasound reports, radiology reports, pathology reports, operation notes, admission notes, and discharge summaries. An information extraction (IE) module was developed for tracking disease progression in liver cancer patients over time, and a rule-based classifier was developed for answering whether patients met the clinical research eligibility criteria. The classifier provided the answers and direct/indirect evidence (evidence sentences) for the clinical questions. To evaluate the implemented IE module and ... --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR. --- paper_title: Longitudinal analysis of pain in patients with metastatic prostate cancer using natural language processing of medical record text paper_content: Objectives: To test the feasibility of using text mining to depict meaningfully the experience of pain in patients with metastatic prostate cancer, to identify novel pain phenotypes, and to propose methods for longitudinal visualization of pain status. Materials and methods: Text from 4409 clinical encounters for 33 men enrolled in a 15-year longitudinal clinical/molecular autopsy study of metastatic prostate cancer (Project to ELIminate lethal CANcer) was subjected to natural language processing (NLP) using Unified Medical Language System-based terms. A four-tiered pain scale was developed, and logistic regression analysis identified factors that correlated with experience of severe pain during each month. Results: NLP identified 6387 pain and 13 827 drug mentions in the text.
Graphical displays revealed the pain ‘landscape’ described in the textual records and confirmed dramatically increasing levels of pain in the last years of life in all but two patients, all of whom died from metastatic cancer. Severe pain was associated with receipt of opioids (OR=6.6, p<0.0001) and palliative radiation (OR=3.4, p=0.0002). Surprisingly, no severe or controlled pain was detected in two of 33 subjects' clinical records. Additionally, the NLP algorithm proved generalizable in an evaluation using a separate data source (889 Informatics for Integrating Biology and the Bedside (i2b2) discharge summaries). Discussion: Patterns in the pain experience, undetectable without the use of NLP to mine the longitudinal clinical record, were consistent with clinical expectations, suggesting that meaningful NLP-based pain status monitoring is feasible. Findings in this initial cohort suggest that ‘outlier’ pain phenotypes useful for probing the molecular basis of cancer pain may exist. Limitations: The results are limited by a small cohort size and use of proprietary NLP software. Conclusions: We have established the feasibility of tracking longitudinal patterns of pain by text mining of free text clinical records. These methods may be useful for monitoring pain management and identifying novel cancer phenotypes. --- paper_title: Pattern-based information extraction from pathology reports for cancer registration paper_content: OBJECTIVE: To evaluate precision and recall rates for the automatic extraction of information from free-text pathology reports. To assess the impact that implementation of pattern-based methods would have on cancer registration completeness. METHOD: Over 300,000 electronic pathology reports were scanned for the extraction of Gleason score, Clark level and Breslow depth, by a number of Perl routines progressively enhanced by a trial-and-error method. An additional test set of 915 reports potentially containing Gleason score was used for evaluation. RESULTS: Values for recall and precision of over 98 and 99%, respectively, were easily reached. Potential increase in cancer staging completeness of up to 32% was proved. CONCLUSIONS: In cancer registration, simple pattern matching applied to free-text documents can be effectively used to improve completeness and accuracy of pathology information. --- paper_title: Deep learning analytics for diagnostic support of breast cancer disease management paper_content: Breast cancer continues to be one of the leading causes of cancer death among women. Mammogram is the standard of care for screening and diagnosis of breast cancer. The American College of Radiology developed the Breast Imaging Reporting and Data System (BI-RADS) lexicon to standardize mammographic reporting to assess cancer risk and facilitate biopsy decision-making. However, because substantial inter-observer variability remains in the application of the BI-RADS lexicon, including inappropriate term usage and missing data, current biopsy decision-making accuracy using the unstructured free text or semi-structured reports varies greatly. Hence, incorporating novel and accurate technique into breast cancer decision-making data is critical.
Here, we combined natural language processing and deep learning methods to develop an analytic model that targets well-characterized and defined specific breast suspicious patient subgroups rather than a broad heterogeneous group for diagnostic support of breast cancer management. --- paper_title: Assessing the natural language processing capabilities of IBM Watson for oncology using real Australian lung cancer cases. paper_content: e18229Background: Optimal treatment decisions require information from multiple sources and formats even in the context of electronic medical records. In addition, structured data such as that coll... --- paper_title: Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry paper_content: Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute’s Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a “gold standard” based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings vary by data element and modality (e.g., suspicious calcification noted in 2.6 % of screening mammograms, 12.1 % of diagnostic mammograms, and 9.4 % of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), range from 0.8 to1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. --- paper_title: DeepPhe: A Natural Language Processing System for Extracting Cancer Phenotypes from Clinical Records paper_content: Precise phenotype information is needed to understand the effects of genetic and epigenetic changes on tumor behavior and responsiveness. Extraction and representation of cancer phenotypes is currently mostly performed manually, making it difficult to correlate phenotypic data to genomic data. In addition, genomic data are being produced at an increasingly faster pace, exacerbating the problem. The DeepPhe software enables automated extraction of detailed phenotype information from electronic medical records of cancer patients. The system implements advanced Natural Language Processing and knowledge engineering methods within a flexible modular architecture, and was evaluated using a manually annotated dataset of the University of Pittsburgh Medical Center breast cancer patients. 
The resulting platform provides critical and missing computational methods for computational phenotyping. Working in tandem with advanced analysis of high-throughput sequencing, these approaches will further accelerate the transition to precision cancer treatment. Cancer Res; 77(21); e115–8. ©2017 AACR . ---
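Among the references above, the pattern-based cancer-registration study recovers staging items such as Gleason score, Clark level, and Breslow depth from over 300,000 pathology reports with simple Perl pattern matching. The snippet below is a hypothetical re-creation of that style of extraction in Python; the regular expression is an assumption for illustration, not one of the study's actual routines:

import re

# Illustrative pattern for prostate pathology staging text (assumed report formats).
GLEASON = re.compile(
    r"gleason\s*(?:score)?\s*[:=]?\s*(\d)\s*\+\s*(\d)(?:\s*=\s*(\d{1,2}))?",
    re.IGNORECASE,
)

def extract_gleason(report_text):
    """Return (primary, secondary, total) Gleason grades found in a report, or None."""
    m = GLEASON.search(report_text)
    if not m:
        return None
    primary, secondary = int(m.group(1)), int(m.group(2))
    total = int(m.group(3)) if m.group(3) else primary + secondary
    return primary, secondary, total

print(extract_gleason("Prostate, needle biopsy: adenocarcinoma, Gleason score 3 + 4 = 7."))
# -> (3, 4, 7)

Iterative trial-and-error refinement of such patterns against a labelled test set is what allowed the cited study to report recall and precision above 98% and 99%.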
Title: A frame semantic overview of NLP-based information extraction for cancer-related EHR notes Section 1: Introduction Description 1: This section introduces the increasing availability of unstructured clinical data in EHRs, the need for extracting detailed information for cancer research, and the focus of the paper on organizing the extracted information into frames using the linguistic theory of frame semantics. Section 2: Materials and Methods Description 2: This section details the study's goals, the search process for relevant papers, and the criteria for inclusion and exclusion of articles related to NLP methods for extracting cancer-related information from EHR notes. Section 3: Screening Process Description 3: This section outlines the process used to screen and select relevant articles for the review, including the stages of title/abstract screening and full-text review, along with the criteria applied to exclude irrelevant articles. Section 4: Frame Construction Description 4: This section describes how frames were constructed for each relevant article, the methods used to infer frame elements from NLP systems, and the iterative process for creating a common set of frames. Section 5: Frame Descriptions Description 5: This section explains the components of each constructed frame, including frame names, definitions, and elements, along with identifying the corresponding articles that extract each frame element. Section 6: Frame Relations Description 6: This section discusses the relationships between frames, such as 'parent-child', 'an element of', and 'associated with', and provides examples of these relations in the context of cancer information extraction. Section 7: Most Extracted Frame Elements Description 7: This section identifies the most commonly extracted frame elements across the reviewed papers and highlights the distribution of articles that focus on these elements. Section 8: Papers Extracting Multiple Frames Description 8: This section provides an overview of notable papers that extract varied types of cancer information corresponding to multiple frames and discusses common elements across different frames. Section 9: Significant Cancer Types Based on Elements Extracted Description 9: This section summarizes the extracted frame elements for the most commonly studied cancer types (e.g., breast, bladder, and prostate cancer) and notes the number of articles related to each cancer type. Section 10: Frames Related to Cancer Quality Measures Description 10: This section covers frames created for identifying quality measures information for cancer patient management and their potential applications. Section 11: Frames Related to Image Findings Description 11: This section reviews papers that extracted data elements from imaging findings, particularly related to breast cancer screening, and discusses the elements and findings extracted. Section 12: Discussion Description 12: This section synthesizes key findings from the review, highlights the redundancy and gaps in cancer information extraction efforts, and emphasizes the need for a general-purpose cancer NLP resource. Section 13: Impact of Frame Semantics Description 13: This section discusses the benefits and challenges of organizing cancer information using frame semantics and evaluates the utility of the proposed frames for NLP development and EHR text interpretation. 
Section 14: Limitations Description 14: This section acknowledges the limitations of the review, including possible omissions of relevant papers, inaccuracies in frame representation, and inconsistencies in frame assignment. Section 15: Conclusions Description 15: This section provides a summary of the review, describes its contributions to the cancer information extraction domain, and suggests how developers can use the findings to create a general-purpose cancer frame resource and NLP system.
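The outline above organizes the extracted cancer information into frames, each with a name, a definition, a set of frame elements, and relations to other frames ('parent-child', 'an element of', 'associated with'). A minimal sketch of how such a frame could be represented programmatically is shown below; the field names and the example 'Tumor' frame are illustrative assumptions, not the review's actual frame inventory:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Frame:
    """A cancer-information frame: a named concept with slots (frame elements)
    and relations to other frames."""
    name: str
    definition: str
    elements: Dict[str, str] = field(default_factory=dict)   # element name -> extracted value
    relations: List[str] = field(default_factory=list)       # e.g. "an element of: Cancer diagnosis"

tumor = Frame(
    name="Tumor",
    definition="A mass of abnormal tissue described in a clinical note.",
    elements={"size": "1.2 cm", "site": "left breast", "histology": "invasive ductal carcinoma"},
    relations=["an element of: Cancer diagnosis"],
)
print(tumor.name, tumor.elements["site"])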
Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review
16
--- paper_title: Fly visual course control: behaviour, algorithms and circuits paper_content: Understanding how the brain controls behaviour is undisputedly one of the grand goals of neuroscience research, and the pursuit of this goal has a long tradition in insect neuroscience. However, appropriate techniques were lacking for a long time. Recent advances in genetic and recording techniques now allow the participation of identified neurons in the execution of specific behaviours to be interrogated. By focusing on fly visual course control, I highlight what has been learned about the neuronal circuit modules that control visual guidance in Drosophila melanogaster through the use of these techniques. --- paper_title: Towards Computational Models of Insect Motion Detectors for Robot Vision paper_content: In this essay, we provide a brief survey of computational models of insect motion detectors, and bio-robotic solutions to build fast and reliable motion-sensing systems for robot vision. Vision is an important sensing modality for autonomous robots, since it can extract abundant useful features from visually cluttered and dynamic environments. Fast development of computer vision technology facilitates the modeling of dynamic vision systems for mobile robots. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. 
Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: Seeing Things in Motion: Models, Circuits, and Mechanisms paper_content: Motion vision provides essential cues for navigation and course control as well as for mate, prey, or predator detection. Consequently, neurons responding to visual motion in a direction-selective way are found in almost all species that see. However, directional information is not explicitly encoded at the level of a single photoreceptor. Rather, it has to be computed from the spatio-temporal excitation level of at least two photoreceptors. How this computation is done and how this computation is implemented in terms of neural circuitry and membrane biophysics have remained the focus of intense research over many decades. Here, we review recent progress made in this area with an emphasis on insects and the vertebrate retina. --- paper_title: Common circuit design in fly and mammalian motion vision paper_content: Motion-sensitive neurons have long been studied in both the mammalian retina and the insect optic lobe, yet striking similarities have become obvious only recently. 
Detailed studies at the circuit level revealed that, in both systems, (i) motion information is extracted from primary visual information in parallel ON and OFF pathways; (ii) in each pathway, the process of elementary motion detection involves the correlation of signals with different temporal dynamics; and (iii) primary motion information from both pathways converges at the next synapse, resulting in four groups of ON-OFF neurons, selective for the four cardinal directions. Given that the last common ancestor of insects and mammals lived about 550 million years ago, this general strategy seems to be a robust solution for how to compute the direction of visual motion with neural hardware. --- paper_title: Synaptic Connections of First-Stage Visual Neurons in the Locust Schistocerca gregaria Extend Evolution of Tetrad Synapses Back 200 Million Years paper_content: The small size of some insects, and the crystalline regularity of their eyes, have made them ideal for large-scale reconstructions of visual circuits. In phylogenetically recent muscomorph flies, like Drosophila, precisely coordinated output to different motion-processing pathways is delivered by photoreceptors (R cells), targeting four different postsynaptic cells at each synapse (tetrad). Tetrads were linked to the evolution of aerial agility. To reconstruct circuits for vision in the larger brain of a locust, a phylogenetically old, flying insect, we adapted serial block-face scanning electron microscopy (SBEM). Locust lamina monopolar cells, L1 and L2, were the main targets of the R cell pathway, L1 and L2 each fed a different circuit, only L1 providing feedback onto R cells. Unexpectedly, 40% of all locust R cell synapses onto both L1 and L2 were tetrads, revealing the emergence of tetrads in an arthropod group present 200 million years before muscomorph flies appeared, coinciding with the early evolution of flight. J. Comp. Neurol. 523:298–312, 2015. © 2014 Wiley Periodicals, Inc. --- paper_title: Escapes with and without preparation: the neuroethology of visual startle in locusts. paper_content: Locusts respond to the images of approaching (looming) objects with responses that include gliding while in flight and jumping while standing. For both of these responses there is good evidence that the DCMD neuron (descending contralateral movement detector), which carries spike trains from the brain to the thoracic ganglia, is involved. Sudden glides during flight, which cause a rapid loss of height, are last-chance manoeuvres without prior preparation. Jumps from standing require preparation over several tens of milliseconds because of the need to store muscle-derived energy in a catapult-like mechanism. Locusts' DCMD neurons respond selectively to looming stimuli, and make connections with some motor neurons and interneurons known to be involved in flying and jumping. For glides, a burst of high-frequency DCMD spikes is a key trigger. For jumping, a similar burst can influence timing, but neither the DCMD nor any other single interneuron has been shown to be essential for triggering any stage in preparation or take-off. Responses by the DCMD to looming stimuli can alter in different behavioural contexts: in a flying locust, arousal ensures a high level of both DCMD responsiveness and glide occurrence; and there are significant differences in DCMD activity between locusts in the gregarious and the solitarious phase. --- paper_title: Visual processing in the central bee brain. 
paper_content: Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur. --- paper_title: Fly visual system inspired artificial neural network for collision detection paper_content: This work investigates one bio-inspired collision detection system based on fly visual neural structures, in which collision alarm is triggered if an approaching object in a direct collision course appears in the field of view of a camera or a robot, together with the relevant time region of collision. One such artificial system consists of one artificial fly visual neural network model and one collision detection mechanism. The former one is a computational model to capture membrane potentials produced by neurons. The latter one takes the outputs of the former one as its inputs, and executes three detection schemes: (i) identifying when a spike takes place through the membrane potentials and one threshold scheme; (ii) deciding the motion direction of a moving object by the Reichardt detector model; and (iii) sending collision alarms and collision regions. Experimentally, relying upon a series of video image sequences with different scenes, numerical results illustrated that the artificial system with some striking characteristics is a potentially alternative tool for collision detection. --- paper_title: Fly visual course control: behaviour, algorithms and circuits paper_content: Understanding how the brain controls behaviour is undisputedly one of the grand goals of neuroscience research, and the pursuit of this goal has a long tradition in insect neuroscience. However, appropriate techniques were lacking for a long time. Recent advances in genetic and recording techniques now allow the participation of identified neurons in the execution of specific behaviours to be interrogated. By focusing on fly visual course control, I highlight what has been learned about the neuronal circuit modules that control visual guidance in Drosophila melanogaster through the use of these techniques. 
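The collision-detection network described above decides the direction of object motion with the Reichardt detector model, and correlation-type elementary motion detectors (EMDs) recur throughout the fly-vision papers in this list. Below is a minimal sketch of a Hassenstein-Reichardt correlator operating on two neighbouring photoreceptor signals; the first-order low-pass delay stage, the parameter values, and the drifting-grating test input are simplifying assumptions rather than a model of any identified neuron:

import math

def lowpass(signal, dt, tau):
    """First-order low-pass filter, used here as the delay stage of the correlator."""
    out, y = [], 0.0
    for x in signal:
        y += (dt / tau) * (x - y)
        out.append(y)
    return out

def reichardt_emd(left, right, dt=0.001, tau=0.05):
    """Opponent Hassenstein-Reichardt correlator on two photoreceptor signals.
    Output is delayed(left)*right - delayed(right)*left; a positive mean indicates
    motion from the 'left' towards the 'right' receptor."""
    dl, dr = lowpass(left, dt, tau), lowpass(right, dt, tau)
    return [a * b - c * d for a, b, c, d in zip(dl, right, dr, left)]

# A 2 Hz drifting grating sampled by two receptors; the right-hand receptor sees the
# pattern a quarter-cycle later, i.e. the image moves from left to right.
t = [i * 0.001 for i in range(2000)]
left = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]
right = [math.sin(2 * math.pi * 2.0 * ti - math.pi / 2) for ti in t]
response = reichardt_emd(left, right)
print(sum(response[500:]) / len(response[500:]))   # > 0: rightward (preferred) motion

In many of the models cited here, the outputs of such local correlators are assumed to be pooled spatially by wide-field cells such as the lobula plate tangential cells.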
--- paper_title: Spatial organization of visuomotor reflexes in Drosophila paper_content: In most animals, the visual system plays a central role in locomotor guidance. Here, we examined the functional organization of visuomotor reflexes in the fruit fly, Drosophila, using an electronic flight simulator. Flies exhibit powerful avoidance responses to visual expansion centered laterally. The amplitude of these expansion responses is three times larger than those generated by image rotation. Avoidance of a laterally positioned focus of expansion emerges from an inversion of the optomotor response when motion is restricted to the rear visual hemisphere. Furthermore, motion restricted to rear quarter-fields elicits turning responses that are independent of the direction of image motion about the animal's yaw axis. The spatial heterogeneity of visuomotor responses explains a seemingly peculiar behavior in which flies robustly fixate the contracting pole of a translating flow field. --- paper_title: Predator versus Prey: Locust Looming-Detector Neuron and Behavioural Responses to Stimuli Representing Attacking Bird Predators paper_content: Many arthropods possess escape-triggering neural mechanisms that help them evade predators. These mechanisms are important neuroethological models, but they are rarely investigated using predator-like stimuli because there is often insufficient information on real predator attacks. Locusts possess uniquely identifiable visual neurons (the descending contralateral movement detectors, DCMDs) that are well-studied looming motion detectors. The DCMDs trigger ‘glides’ in flying locusts, which are hypothesised to be appropriate last-ditch responses to the looms of avian predators. To date it has not been possible to study glides in response to stimuli simulating bird attacks because such attacks have not been characterised. We analyse video of wild black kites attacking flying locusts, and estimate kite attack speeds of 10.8±1.4 m/s. We estimate that the loom of a kite’s thorax towards a locust at these speeds should be characterised by a relatively low ratio of half size to speed (l/|v|) in the range 4–17 ms. Peak DCMD spike rate and gliding response occurrence are known to increase as l/|v| decreases for simple looming shapes. Using simulated looming discs, we investigate these trends and show that both DCMD and behavioural responses are strong to stimuli with kite-like l/|v| ratios. Adding wings to looming discs to produce a more realistic stimulus shape did not disrupt the overall relationships of DCMD and gliding occurrence to stimulus l/|v|. However, adding wings to looming discs did slightly reduce high frequency DCMD spike rates in the final stages of object approach, and slightly delay glide initiation. Looming discs with or without wings triggered glides closer to the time of collision as l/|v| declined, and relatively infrequently before collision at very low l/|v|. However, the performance of this system is in line with expectations for a last-ditch escape response. --- paper_title: Escape behaviors in insects paper_content: Escape behaviors are, by necessity, fast and robust, making them excellent systems with which to study the neural basis of behavior. This is especially true in insects, which have comparatively tractable nervous systems and members who are amenable to manipulation with genetic tools. 
Recent technical developments in high-speed video reveal that, despite their short duration, insect escape behaviors are more complex than previously appreciated. For example, before initiating an escape jump, a fly performs sophisticated posture and stimulus-dependent preparatory leg movements that enable it to jump away from a looming threat. This newfound flexibility raises the question of how the nervous system generates a behavior that is both rapid and flexible. Recordings from the cricket nervous system suggest that synchrony between the activity of specific interneuron pairs may provide a rapid cue for the cricket to detect the direction of an approaching predator and thus which direction it should run. Technical advances make possible wireless recording from neurons while locusts escape from a looming threat, enabling, for the first time, a direct correlation between the activity of multiple neurons and the time-course of an insect escape behavior. --- paper_title: Precise Subcellular Input Retinotopy and Its Computational Consequences in an Identified Visual Interneuron paper_content: The Lobula Giant Movement Detector (LGMD) is a higher-order visual interneuron of Orthopteran insects that responds preferentially to objects approaching on a collision course. It receives excitatory input from an entire visual hemifield that anatomical evidence suggests is retinotopic. We show that this excitatory projection activates calcium-permeable nicotinic acetylcholine receptors. In vivo calcium imaging reveals that the excitatory projection preserves retinotopy down to the level of a single ommatidium. Examining the impact of retinotopy on the LGMD's computational properties, we show that sublinear synaptic summation can explain orientation preference in this cell. Exploring retinotopy's impact on directional selectivity leads us to infer that the excitatory input to the LGMD is intrinsically directionally selective. Our results show that precise retinotopy has implications for the dendritic integration of visual information in a single neuron. --- paper_title: A visual motion detection circuit suggested by Drosophila connectomics paper_content: Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations. --- paper_title: A universal strategy for visually guided landing paper_content: Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown. 
The mechanisms by which animals achieve this remain unclear. When landing on horizontal surfaces, honey bees control their speed by holding constant the rate of front-to-back image motion (optic flow) generated by the surface as they reduce altitude. As inclination increases, however, this simple pattern of optic flow becomes increasingly complex. How do honey bees control speed when landing on surfaces that have different orientations? To answer this, we analyze the trajectories of honey bees landing on a vertical surface that produces various patterns of motion. We find that landing honey bees control their speed by holding the rate of expansion of the image constant. We then test and confirm this hypothesis rigorously by analyzing landings when the apparent rate of expansion generated by the surface is manipulated artificially. This strategy ensures that speed is reduced, gradually and automatically, as the surface is approached. We then develop a mathematical model of this strategy and show that it can effectively be used to guide smooth landings on surfaces of any orientation, including horizontal surfaces. This biological strategy for guiding landings does not require knowledge about either the distance to the surface or the speed at which it is approached. The simplicity and generality of this landing strategy suggests that it is likely to be exploited by other flying animals and makes it ideal for implementation in the guidance systems of flying robots. --- paper_title: Background visual motion affects responses of an insect motion‐sensitive neuron to objects deviating from a collision course paper_content: Stimulus complexity affects the response of looming sensitive neurons in a variety of animal taxa. The Lobula Giant Movement Detector/Descending Contralateral Movement Detector (LGMD/DCMD) pathway is well‐characterized in the locust visual system. It responds to simple objects approaching on a direct collision course (i.e., looming) as well as complex motion defined by changes in stimulus velocity, trajectory, and transitions, all of which are affected by the presence or absence of background visual motion. In this study, we focused on DCMD responses to objects transitioning away from a collision course, which emulates a successful locust avoidance behavior. We presented each of 20 locusts with a sequence of complex three‐dimensional visual stimuli in simple, scattered, and progressive flow field backgrounds while simultaneously recording DCMD activity extracellularly. DCMD responses to looming stimuli were generally characteristic irrespective of stimulus background. However, changing background complexity affected, peak firing rates, peak time, and caused changes in peak rise and fall phases. The DCMD response to complex object motion also varied with the azimuthal approach angle and the dynamics of object edge expansion. These data fit with an existing correlational model that relates expansion properties to firing rate modulation during trajectory changes. --- paper_title: Visual Motion: Cellular Implementation of a Hybrid Motion Detector paper_content: Visual motion detection in insects is mediated by three-input detectors that compare inputs of different spatiotemporal properties. A new modeling study shows that only a small subset of possible arrangements of the input elements provides high direction-selectivity. 
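The looming-detector studies above (LGMD/DCMD responses parameterized by the half-size-to-speed ratio l/|v|) and the expansion-rate-based landing strategy both rest on the same geometry of an object approaching the eye at constant speed. The sketch below computes only that stimulus geometry; it is not a model of the LGMD, and the example numbers are illustrative:

import math

def looming_kinematics(l, v, t):
    """Stimulus geometry for an object of half-size l (m) approaching the eye at
    constant speed v (m/s), with t (s) the time remaining before projected collision.
    The ratio l/|v| (often quoted in milliseconds) determines the shape of the
    angular expansion profile."""
    distance = v * t                                # remaining distance to the eye
    theta = 2.0 * math.atan(l / distance)           # full angular size on the retina (rad)
    theta_dot = 2.0 * l * v / (distance**2 + l**2)  # rate of angular expansion (rad/s)
    tau = theta / theta_dot                         # ~ time to collision for small angles
    return theta, theta_dot, tau

# Kite-like approach: v = 10.8 m/s as in the attack footage described above; the
# half-size l = 0.11 m is an illustrative value giving l/|v| of roughly 10 ms.
for t in (0.5, 0.2, 0.1, 0.05):
    theta, theta_dot, tau = looming_kinematics(l=0.11, v=10.8, t=t)
    print(f"t={t:5.2f} s  size={math.degrees(theta):6.2f} deg  "
          f"expansion={math.degrees(theta_dot):7.1f} deg/s  tau~{tau:6.3f} s")

Holding the relative rate of expansion theta_dot/theta constant during approach, as the landing study above reports for honey bees, makes the remaining distance decay exponentially and the approach speed fall in proportion to it, so speed reaches near zero at touchdown without any explicit measurement of distance or speed.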
--- paper_title: Defence behaviours of the praying mantis Tenodera aridifolia in response to looming objects paper_content: Defence responses to approaching objects were observed in the mantis Tenodera aridifolia. The mantis showed three kinds of behaviour, fixation, evasion and cryptic reaction. The cryptic reaction consisted of rapid retraction of the forelegs under the prothorax or rapid extending of the forelegs in the forward direction. Obstructing the mantis’ sight decreased its response rates, suggesting that the visual stimuli generated by an approaching object elicited the cryptic reaction. The response rate of the cryptic reactions was highest for objects that approached on a direct collision course. Deviation in a horizontal direction from the direct collision course resulted in a reduced response. The response rate of the cryptic reaction increased as the approaching velocity of the object increased, and the rate decreased as the object ceased its approach at a greater distance from the mantis. These results suggest that the function of the observed cryptic reactions is defence against impending collisions. The possible role of the looming-sensitive neuron in the cryptic reaction is also discussed. --- paper_title: Neural networks in the cockpit of the fly paper_content: Flies have been buzzing around on earth for over 300 million years. During this time they have radiated into more than 125,000 different species (Yeates and Wiegmann 1999), so that, by now, roughly every tenth described species is a fly. They thus represent one of the most successful animal groups on our planet. This evolutionary success might, at least in part, be a result of their acrobatic maneuverability, which enables them, for example, to chase mates at turning velocities of more than 3000° s–1 with delay times of less than 30 ms (Land and Collett 1974; Wagner 1986). It is this fantastic behavior, which has initiated much research during the last decades, both on its sensory control and the biophysical and aerodynamic principles of the flight output (Dickinson et al. 1999, 2000). Here, we review the current state of knowledge about the neural processing of visual motion, which represents one sensory component intimately involved in flight control. Other reviews on this topic have been published with a similar (Hausen 1981, 1984; Hausen and Egelhaaf 1989; Borst 1996) or different emphasis (Frye and Dickinson 2001; Borst and Dickinson 2002). Because of space limitations, we do not review the extensive work that has been done on fly motion-sensitive neurons to advance our understanding of neural coding (Bialek et al. 1991; Rieke et al. 1997; de Ruyter et al. 1997, 2000; Haag and Borst 1997, 1998; Borst and Haag 2001). Unless stated otherwise, all data presented in the following were obtained on the blowfly Calliphora vicina which we will often casually refer to as 'the fly'. --- paper_title: A neural network for pursuit tracking inspired by the fly visual system paper_content: Abstract This paper presents an artificial neural network that detects and tracks an object moving within its field of view. This novel network is inspired by processing functions observed in the fly visual system. The network detects changes in input light intensities, determines motion on both the local and the wide-field levels, and outputs displacement information necessary to control pursuit tracking. 
Software simulations demonstrate the current prototype successfully follows a moving target within specified radiance and motion constraints. The paper reviews these limiting constraints and suggests future network augmentations to remove them. Despite its current limitations, the existing prototype serves as a solid foundation for a future network that promises to provide machines with the improved abilities to do high-speed pursuit tracking, interception, and collision avoidance. --- paper_title: Fundamental mechanisms of visual motion detection: models, cells and functions paper_content: Taking a comparative approach, data from a range of visual species are discussed in the context of ideas about mechanisms of motion detection. The cellular basis of motion detection in the vertebrate retina, sub-cortical structures and visual cortex is reviewed alongside that of the insect optic lobes. Special care is taken to relate concepts from theoretical models to the neural circuitry in biological systems. Motion detection involves spatiotemporal pre-filters, temporal delay filters and non-linear interactions. A number of different types of non-linear mechanism such as facilitation, inhibition and division have been proposed to underlie direction selectivity. The resulting direction-selective mechanisms can be combined to produce speed-tuned motion detectors. Motion detection is a dynamic process with adaptation as a fundamental property. The behavior of adaptive mechanisms in motion detection is discussed, focusing on the informational basis of motion adaptation, its phenomenology in human vision, and its cellular basis. The question of whether motion adaptation serves a function or is simply the result of neural fatigue is critically addressed. --- paper_title: Object tracking in motion-blind flies paper_content: In response to the movement of its visual world, Drosophila is capable of optomotor response in head and body turning, as well as a visual fixation response. This study shows that blocking the visual pathway activity responsible for optokinetic response in flies does not affect the visual fixation response, suggesting two distinct pathways for processing each set of information. By doing so, the authors also devised a neural and behavioral hierarchy in fly visual system where fixation behavior and the neurons mediating fixation response are upstream of optokinetic response as performed by lobula plate neurons. --- paper_title: A look into the cockpit of the fly: visual orientation, algorithms, and identified neurons paper_content: The top-down approach to understanding brain function seeks to account for the behavior of an animal in terms of biophysical properties of nerve cells and synaptic interactions via a series of progressively reductive levels of explanation. Using the fly as a model system, this approach was pioneered by Werner Reichardt and his colleagues in the late 1950s. Quantitative input-output analyses led them to formal algorithms that related the input of the fly's eye to the orientation behavior of the animal. But it has been possible only recently to track down the implementation of part of these algorithms to the computations performed by individual neurons and small neuronal ensembles. Thus, the visually guided flight maneuvers of the fly have turned out to be one of the few cases in which it has been feasible to reach an understanding of the mechanisms underlying a complex behavioral performance at successively reductive levels of analysis. 
These recent findings illuminate some of the fundamental questions that are being debated in computational neuroscience (Marr and Poggio, 1977; Sejnowski et al., 1988; Churchland and Sejnowski, 1992): (1) Are some brain functions emergent properties present only at the systems level? (2) Does an understanding of brain function at the systems level help in understanding function at the cellular and subcellular level? (3) Can different levels of organization be understood independently of each other? In this review we concentrate on two basic computational tasks that have to be solved by the fly, as well as by many other moving animals: (1) stabilization of an intended course against disturbances and (2) intended deviations from a straight course in order to orient toward salient objects. Performing these tasks depends on the extraction of motion information from the changing distribution of light intensity received by the eyes. --- paper_title: Behavioral Models of the Praying Mantis as a Basis for Robotic Behavior paper_content: Formal models of animal sensorimotor behavior can provide effective methods for generating robotic intelligence. In this article we describe how schema-theoretic models of the praying mantis derived from behavioral and neuroscientific data can be implemented on a hexapod robot equipped with a real-time color vision system. This implementation incorporates a wide range of behaviors, including obstacle avoidance, prey acquisition, predator avoidance, mating, and chantlitaxia behaviors that can provide guidance to neuroscientists, ethologists, and roboticists alike. The goals of this study are threefold: to provide an understanding and means by which fielded robotic systems are not competing with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system and capable of displacing less efficient agents; and that they are ecologically sensitive so that agent–environment dynamics are well-modeled and as predictable as possible whenever new robotic technology is introduced. --- paper_title: A look into the cockpit of the developing locust: Looming detectors and predator avoidance paper_content: For many animals, the visual detection of looming stimuli is crucial at any stage of their lives. For example, human babies of only 6 days old display evasive responses to looming stimuli (Bower et al. [1971]: Percept Psychophys 9: 193-196). This means the neuronal pathways involved in looming detection should mature early in life. Locusts have been used extensively to examine the neural circuits and mechanisms involved in sensing looming stimuli and triggering visually evoked evasive actions, making them ideal subjects in which to investigate the development of looming sensitivity. Two lobula giant movement detectors (LGMD) neurons have been identified in the lobula region of the locust visual system: the LGMD1 neuron responds selectively to looming stimuli and provides information that contributes to evasive responses such as jumping and emergency glides. The LGMD2 responds to looming stimuli and shares many response properties with the LGMD1. Both neurons have only been described in the adult. In this study, we describe a practical method combining classical staining techniques and 3D neuronal reconstructions that can be used, even in small insects, to reveal detailed anatomy of individual neurons.
We have used it to analyze the anatomy of the fan-shaped dendritic tree of the LGMD1 and the LGMD2 neurons in all stages of the post-embryonic development of Locusta migratoria. We also analyze changes seen during the ontogeny of escape behaviors triggered by looming stimuli, specially the hiding response. --- paper_title: Higher-Order Figure Discrimination in Fly and Human Vision paper_content: Visually-guided animals rely on their ability to stabilize the panorama and simultaneously track salient objects, or figures, that are distinct from the background in order to avoid predators, pursue food resources and mates, and navigate spatially. Visual figures are distinguished by luminance signals that produce coherent motion cues as well as more enigmatic 'higher-order' statistical features. Figure discrimination is thus a complex form of motion vision requiring specialized neural processing. In this minireview, we will highlight recent advances in understanding the perceptual, behavioral, and neurophysiological basis of higher-order figure detection in flies, much of which is grounded in the historical perspective and mechanistic underpinnings of human psychophysics. --- paper_title: Feedback Network Controls Photoreceptor Output at the Layer of First Visual Synapses in Drosophila paper_content: At the layer of first visual synapses, information from photoreceptors is processed and transmitted towards the brain. In fly compound eye, output from photoreceptors (R1-R6) that share the same visual field is pooled and transmitted via histaminergic synapses to two classes of interneuron, large monopolar cells (LMCs) and amacrine cells (ACs). The interneurons also feed back to photoreceptor terminals via numerous ligand-gated synapses, yet the significance of these connections has remained a mystery. We investigated the role of feedback synapses by comparing intracellular responses of photoreceptors and LMCs in wild-type Drosophila and in synaptic mutants, to light and current pulses and to naturalistic light stimuli. The recordings were further subjected to rigorous statistical and information-theoretical analysis. We show that the feedback synapses form a negative feedback loop that controls the speed and amplitude of photoreceptor responses and hence the quality of the transmitted signals. These results highlight the benefits of feedback synapses for neural information processing, and suggest that similar coding strategies could be used in other nervous systems. --- paper_title: Processing properties of ON and OFF pathways for Drosophila motion detection paper_content: Four medulla neurons implement two critical processing steps to incoming signals in Drosophila motion detection. --- paper_title: The Emergence of Directional Selectivity in the Visual Motion Pathway of Drosophila paper_content: Summary The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. In Drosophila , the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. 
We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models. --- paper_title: Seeing what is coming: building collision-sensitive neurones paper_content: Abstract The image of a rapidly approaching object has to elicit a quick response. An animal needs to know that the object is approaching on a collision course and how imminent a collision is. The relevant information can be computed from the way that the image of the object grows on the retina of one eye. Firm data about the types of neurones that react to such looming stimuli and trigger avoidance reactions come from recent studies on the pigeon and the locust. The neurones responsible are tightly tuned to detect objects that are approaching on a direct collision course. In the pigeon these neurones signal the time remaining before collision whereas in the locust they have a crucial role in the simple strategy this animal uses to detect an object approaching on a collision course. --- paper_title: Principles of visual motion detection paper_content: Motion information is required for the solution of many complex tasks of the visual system such as depth perception by motion parallax and figure/ground discrimination by relative motion. However, motion information is not explicitly encoded at the level of the retinal input. Instead, it has to be computed from the time-dependent brightness patterns of the retinal image as sensed by the two-dimensional array of photoreceptors. Different models have been proposed which describe the neural computations underlying motion detection in various ways. To what extent do biological motion detectors approximate any of these models? As will be argued here, there is increasing evidence from the different disciplines studying biological motion vision, that, throughout the animal kingdom ranging from invertebrates to vertebrates including man, the mechanisms underlying motion detection can be attributed to only a few, essentially equivalent computational principles. Motion detection may, therefore, be one of the first examples in computational neurosciences where common principles can be found not only at the cellular level (e.g. dendritic integration, spike propagation, synaptic transmission) but also at the level of computations performed by small neural networks. --- paper_title: Impact and sources of neuronal variability in the fly’s motion vision pathway paper_content: Abstract Nervous systems encode information about dynamically changing sensory input by changes in neuronal activity. Neuronal activity changes, however, also arise from noise sources within and outside the nervous system or from changes of the animal’s behavioral state. The resulting variability of neuronal responses in representing sensory stimuli limits the reliability with which animals can respond to stimuli and may thus even affect the chances for survival in certain situations. Relevant sources of noise arising at different stages along the motion vision pathway have been investigated from the sensory input to the initiation of behavioral reactions. Here, we concentrate on the reliability of processing visual motion information in flies. Flies rely on visual motion information to guide their locomotion. 
They are among the best established model systems for the processing of visual motion information allowing us to bridge the gap between behavioral performance and underlying neuronal computations. It has been possible to directly assess the consequences of noise at major stages of the fly's visual motion processing system on the reliability of neuronal signals. Responses of motion sensitive neurons and their variability have been related to optomotor movements as indicators for the overall performance of visual motion computation. We address whether and how noise already inherent in the stimulus, e.g. photon noise for the visual system, influences later processing stages and to what extent variability at the output level of the sensory system limits behavioral performance. Recent advances in circuit analysis and the progress in monitoring neuronal activity in behaving animals should now be applied to understand how the animal meets the requirements of fast and reliable manoeuvres in naturalistic situations. --- paper_title: Minimum viewing angle for visually guided ground speed control in bumblebees. paper_content: SUMMARY To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23–30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered. --- paper_title: Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression paper_content: Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime.
In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein–Reichardt correlator and the Barlow–Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation. SIGNIFICANCE STATEMENT Feature selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premiere experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges. --- paper_title: Seeing Things in Motion: Models, Circuits, and Mechanisms paper_content: Motion vision provides essential cues for navigation and course control as well as for mate, prey, or predator detection. Consequently, neurons responding to visual motion in a direction-selective way are found in almost all species that see. However, directional information is not explicitly encoded at the level of a single photoreceptor. Rather, it has to be computed from the spatio-temporal excitation level of at least two photoreceptors. How this computation is done and how this computation is implemented in terms of neural circuitry and membrane biophysics have remained the focus of intense research over many decades. Here, we review recent progress made in this area with an emphasis on insects and the vertebrate retina. --- paper_title: Common circuit design in fly and mammalian motion vision paper_content: Motion-sensitive neurons have long been studied in both the mammalian retina and the insect optic lobe, yet striking similarities have become obvious only recently. 
Detailed studies at the circuit level revealed that, in both systems, (i) motion information is extracted from primary visual information in parallel ON and OFF pathways; (ii) in each pathway, the process of elementary motion detection involves the correlation of signals with different temporal dynamics; and (iii) primary motion information from both pathways converges at the next synapse, resulting in four groups of ON-OFF neurons, selective for the four cardinal directions. Given that the last common ancestor of insects and mammals lived about 550 million years ago, this general strategy seems to be a robust solution for how to compute the direction of visual motion with neural hardware. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: Vision for Mobile Robot Navigation: A Survey paper_content: Surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment. --- paper_title: What does robotics offer animal behaviour? paper_content: There is a growing body of robot-based research that makes a serious claim to be a new methodology for biology. Robots can be used as models of specific animal systems to test hypotheses regarding the control of behaviour. At levels from learning algorithms to specific dendritic circuits, implementing a proposed controller in a robotic device tests it against real environments in a way that is difficult to simulate. This often provides insight into the true nature of the problem. It also enforces complete specifications and combines bodies of data. Current work can sometimes be criticized for drawing unjustified conclusions given the limited evaluation and inevitable inaccuracies of robot models. Nevertheless, this approach has led to novel hypotheses for animal behaviour and seems likely to provide fruitful results in the future. Copyright 2000 The Association for the Study of Animal Behaviour. --- paper_title: Visuomotor control in flies and behavior-based agents paper_content: The development of artificial agents or robots using inspiration from living organisms is a promising approach to the study of biological systems, complementing traditional approaches in biology and the life sciences. The simulation of simple life forms and the implementation of behavioral models in robots are especially useful ways of testing models of biological information processing. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. 
Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Visual control of navigation in insects and its relevance for robotics paper_content: Flying insects display remarkable agility, despite their diminutive eyes and brains. This review describes our growing understanding of how these creatures use visual information to stabilize flight, avoid collisions with objects, regulate flight speed, detect and intercept other flying insects such as mates or prey, navigate to a distant food source, and orchestrate flawless landings. It also outlines the ways in which these insights are now being used to develop novel, biologically inspired strategies for the guidance of autonomous, airborne vehicles. --- paper_title: Robot navigation inspired by principles of insect vision paper_content: Abstract Recent studies of insect visual behaviour and navigation reveal a number of elegant strategies that can be profitably applied to the design of autonomous robots. The peering behaviour of grasshoppers, for example, has inspired the design of new rangefinding systems. The centring response of bees flying through a tunnel has led to simple methods for navigating through corridors. Experimental investigation of the bee's “odometer” has led to the implementation of schemes for visually driven odometry. These and other visually mediated insect behaviours are described along with a number of applications to robot navigation. 
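The correlation-type elementary motion detector (EMD) referred to throughout the entries above can be summarised in a few lines of code. The sketch below is an illustrative Hassenstein–Reichardt correlator, not an implementation taken from any of the cited papers; the filter time constant and the toy stimulus are assumptions chosen only to show the sign convention of the opponent output.

```python
import numpy as np

def reichardt_emd(left, right, dt=1e-3, tau=35e-3):
    """Minimal Hassenstein-Reichardt elementary motion detector (EMD).

    left, right: luminance traces from two neighbouring photoreceptors,
    sampled every dt seconds; tau is the time constant of the delay
    (low-pass) filter.  Returns the opponent output: positive values
    indicate motion from 'left' towards 'right', negative the reverse.
    """
    alpha = dt / (tau + dt)                     # first-order low-pass gain
    lp_l = np.zeros_like(left, dtype=float)
    lp_r = np.zeros_like(right, dtype=float)
    for k in range(1, len(left)):               # delayed copies of each input
        lp_l[k] = lp_l[k - 1] + alpha * (left[k] - lp_l[k - 1])
        lp_r[k] = lp_r[k - 1] + alpha * (right[k] - lp_r[k - 1])
    # correlate the delayed signal of one arm with the direct signal of the
    # other arm, then subtract the mirror-symmetric product
    return lp_l * right - lp_r * left

# toy usage: a bright edge reaches the left receptor 50 ms before the right one
t = np.arange(0.0, 0.5, 1e-3)
left = (t > 0.10).astype(float)
right = (t > 0.15).astype(float)
print(reichardt_emd(left, right).sum() > 0)     # True: net rightward motion
```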
--- paper_title: Honeybees as a Model for the Study of Visually Guided Flight, Navigation, and Biologically Inspired Robotics paper_content: Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles. --- paper_title: New VLSI smart sensor for collision avoidance inspired by insect vision paper_content: An analog VLSI implementation of a smart microsensor that mimics the early visual processing stage in insects is described with an emphasis on the overall concept and the front-end detection. The system employs the 'smart sensor' paradigm in that the detectors and processing circuitry are integrated on the one chip. The integrated circuit is composed of sixty channels of photodetectors and parallel processing elements. The photodetection circuitry includes p-well junction diodes on a 2 micrometers CMOS process and a logarithmic compression to increase the dynamic range of the system. The future possibility of gallium arsenide implementation is discussed. The processing elements behind each photodetector contain a low frequency differentiator where subthreshold design methods have been used. The completed IC is ideal for motion detection, particularly collision avoidance tasks, as it essentially detects distance, speed & bearing of an object. The Horridge Template Model for insect vision has been directly mapped into VLSI and therefore the IC truly exploits the beauty of nature in that the insect eye is so compact with parallel processing, enabling compact motion detection without the computational overhead of intensive imaging, full image extraction and interpretation. This world-first has exciting applications in the areas of automobile anti-collision, IVHS, autonomous robot guidance, aids for the blind, continuous process monitoring/web inspection and automated welding, for example. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow.
Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: From Wheels to Wings with Evolutionary Spiking Circuits paper_content: We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots. --- paper_title: Fly visual course control: behaviour, algorithms and circuits paper_content: Understanding how the brain controls behaviour is undisputedly one of the grand goals of neuroscience research, and the pursuit of this goal has a long tradition in insect neuroscience. However, appropriate techniques were lacking for a long time. Recent advances in genetic and recording techniques now allow the participation of identified neurons in the execution of specific behaviours to be interrogated. By focusing on fly visual course control, I highlight what has been learned about the neuronal circuit modules that control visual guidance in Drosophila melanogaster through the use of these techniques. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. 
Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. 
Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: Non-Linear Neuronal Responses as an Emergent Property of Afferent Networks: A Case Study of the Locust Lobula Giant Movement Detector paper_content: In principle it appears advantageous for single neurons to perform non-linear operations. Indeed it has been reported that some neurons show signatures of such operations in their electrophysiological response. A particular case in point is the Lobula Giant Movement Detector (LGMD) neuron of the locust, which is reported to locally perform a functional multiplication. Given the wide ramifications of this suggestion with respect to our understanding of neuronal computations, it is essential that this interpretation of the LGMD as a local multiplication unit is thoroughly tested. Here we evaluate an alternative model that tests the hypothesis that the non-linear responses of the LGMD neuron emerge from the interactions of many neurons in the opto-motor processing structure of the locust. We show, by exposing our model to standard LGMD stimulation protocols, that the properties of the LGMD that were seen as a hallmark of local non-linear operations can be explained as emerging from the dynamics of the pre-synaptic network. Moreover, we demonstrate that these properties strongly depend on the details of the synaptic projections from the medulla to the LGMD. From these observations we deduce a number of testable predictions. To assess the real-time properties of our model we applied it to a high-speed robot. These robot results show that our model of the locust opto-motor system is able to reliably stabilize the movement trajectory of the robot and can robustly support collision avoidance. In addition, these behavioural experiments suggest that the emergent non-linear responses of the LGMD neuron enhance the system's collision detection acuity. We show how all reported properties of this neuron are consistently reproduced by this alternative model, and how they emerge from the overall opto-motor processing structure of the locust. Hence, our results propose an alternative view on neuronal computation that emphasizes the network properties as opposed to the local transformations that can be performed by single neurons. --- paper_title: Near range path navigation using LGMD visual neural networks paper_content: In this paper, we proposed a method for near range path navigation for a mobile robot by using a pair of biologically inspired visual neural network - lobula giant movement detector (LGMD). In the proposed binocular style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. Stronger signal from the LGMD in one side pushes the robot away from this side step by step; therefore, the robot can navigate in a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios. 
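Several of the entries above describe the same optic-flow strategy: hold the translational flow (forward speed divided by obstacle distance) near a setpoint and balance the flow measured on the two sides. The sketch below is a hypothetical controller illustrating that idea; the gains, the setpoint, and the function name are assumptions rather than values taken from the cited work.

```python
def optic_flow_controller(flow_left, flow_right,
                          flow_setpoint=2.0, k_turn=0.5, k_speed=0.2):
    """Toy regulator built on lateral optic-flow magnitudes (rad/s).

    Translational flow scales with forward_speed / obstacle_distance, so
    neither speed nor range needs to be measured explicitly.  Returns a
    turn command (positive = steer away from the right-hand side) and a
    forward-speed adjustment.  Gains and setpoint are illustrative.
    """
    # centring response: turn away from the side with the larger flow,
    # i.e. away from the nearer wall
    turn = k_turn * (flow_right - flow_left)

    # speed regulation: hold the summed flow near a setpoint, which makes
    # the agent slow down automatically in cluttered surroundings
    dspeed = k_speed * (flow_setpoint - (flow_left + flow_right))
    return turn, dspeed

# example: the right wall is closer, so the right-side flow is larger;
# the controller steers left and reduces forward speed
print(optic_flow_controller(flow_left=0.8, flow_right=1.6))
```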
--- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. Such collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is only selective to detect looming dark objects against bright background in depth, represents swooping predators, a situation which is similar to ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are first, enhancing the collision selectivity in a bio-inspired way, via constructing a computing efficient visual sensor, and realizing the revealed specific characteristic sofLGMD2. Second, we applied the neural network to help rearrange path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot. --- paper_title: Loom-Sensitive Neurons Link Computation to Action in the Drosophila Visual System paper_content: Summary Background Many animals extract specific cues from rich visual scenes to guide appropriate behaviors. Such cues include visual motion signals produced both by self-movement and by moving objects in the environment. The complexity of these signals requires neural circuits to link particular patterns of motion to specific behavioral responses. Results Through electrophysiological recordings, we characterize genetically identified neurons in the optic lobe of Drosophila that are specifically tuned to detect motion signals produced by looming objects on a collision course with the fly. Using a genetic manipulation to specifically silence these neurons, we demonstrate that signals from these cells are important for flies to efficiently initiate the loom escape response. Moreover, through targeted expression of channelrhodopsin in these cells, in flies that are blind, we reveal that optogenetic stimulation of these neurons is typically sufficient to elicit escape, even in the absence of any visual stimulus. Conclusions In this compact nervous system, a small group of neurons that extract a specific visual cue from local motion inputs serve to trigger the ethologically appropriate behavioral response. --- paper_title: How accurate need sensory coding be for behaviour? Experiments using a mobile robot paper_content: Abstract This paper argues that for those neuronal systems which control behaviour, reliable responses are more appropriate than precise responses. We illustrate this argument using a mobile robot controlled by the responses of a neuronal model of the locust LGMD system, a visual system which responds to looming objects. Our experiments show that although the responses of the model LGMD vary widely as the robot approaches obstacles, they still trigger avoidance responses. --- paper_title: Predator versus Prey: Locust Looming-Detector Neuron and Behavioural Responses to Stimuli Representing Attacking Bird Predators paper_content: Many arthropods possess escape-triggering neural mechanisms that help them evade predators. 
These mechanisms are important neuroethological models, but they are rarely investigated using predator-like stimuli because there is often insufficient information on real predator attacks. Locusts possess uniquely identifiable visual neurons (the descending contralateral movement detectors, DCMDs) that are well-studied looming motion detectors. The DCMDs trigger ‘glides’ in flying locusts, which are hypothesised to be appropriate last-ditch responses to the looms of avian predators. To date it has not been possible to study glides in response to stimuli simulating bird attacks because such attacks have not been characterised. We analyse video of wild black kites attacking flying locusts, and estimate kite attack speeds of 10.8±1.4 m/s. We estimate that the loom of a kite’s thorax towards a locust at these speeds should be characterised by a relatively low ratio of half size to speed (l/|v|) in the range 4–17 ms. Peak DCMD spike rate and gliding response occurrence are known to increase as l/|v| decreases for simple looming shapes. Using simulated looming discs, we investigate these trends and show that both DCMD and behavioural responses are strong to stimuli with kite-like l/|v| ratios. Adding wings to looming discs to produce a more realistic stimulus shape did not disrupt the overall relationships of DCMD and gliding occurrence to stimulus l/|v|. However, adding wings to looming discs did slightly reduce high frequency DCMD spike rates in the final stages of object approach, and slightly delay glide initiation. Looming discs with or without wings triggered glides closer to the time of collision as l/|v| declined, and relatively infrequently before collision at very low l/|v|. However, the performance of this system is in line with expectations for a last-ditch escape response. --- paper_title: Computation of object approach by a system of visual motion-sensitive neurons in the crab Neohelice paper_content: Similar to most visual animals, crabs perform proper avoidance responses to objects directly approaching them. The monostratified lobula giant neurons of type 1 (MLG1) of crabs constitute an ensemble of 14–16 bilateral pairs of motion-detecting neurons projecting from the lobula (third optic neuropile) to the midbrain, with receptive fields that are distributed over the extensive visual field of the animal's eye. Considering the crab Neohelice (previously Chasmagnathus) granulata, here we describe the response of these neurons to looming stimuli that simulate objects approaching the animal on a collision course. We found that the peak firing time of MLG1 acts as an angular threshold detector signaling, with a delay of δ = 35 ms, the time at which an object reaches a fixed angular threshold of 49°. Using in vivo intracellular recordings, we detected the existence of excitatory and inhibitory synaptic currents that shape the neural response. Other functional features identified in the MLG1 neurons were phas... --- paper_title: An obstacle avoidance method for two wheeled mobile robot paper_content: In this paper, we proposed an obstacle avoidance method of mobile robot by using lobula giant movement detector (LGMD) method. The LGMD is an identified neuron in the locust brain that responds most strongly to the image of an approaching object such as a predator. As an assistance to the avoidance method, we add a distance measurement algorithm to LGMD method. 
The measurement algorithm is based on information from a camera to detect obstacles and calculate the route for a two-wheeled mobile robot. Simulation results confirmed the effectiveness of the proposed method. --- paper_title: The anatomy and output connection of a locust visual interneurone; the lobular giant movement detector (LGMD) neurone paper_content: 1. The anatomy of a giant movement detector neurone in the locust lobula (the LGMD) is described on the basis of both intracellular injection of cobalt (Fig. 2) and the reconstruction of osmium-ethyl gallate and silver impregnated serial sections (Fig. 3). 2. It is shown that the LGMD has an anatomically complex junction with a previously described interneurone, the Descending Contralateral Movement Detector (Fig. 4), and that spikes in the LGMD precede 1∶1 with fixed latency spikes in the DCMD (Figs. 1,5). 3. Three separate dendritic subfields are seen in the lobula complex (Figs. 2, 3); these are tentatively ascribed to the three different classes of input to the cell. 4. A large part of the LGMD's terminal arborisation appears to serve only a single functional junction, that with the DCMD (Fig. 4). --- paper_title: Seeing what is coming: building collision-sensitive neurones paper_content: Abstract The image of a rapidly approaching object has to elicit a quick response. An animal needs to know that the object is approaching on a collision course and how imminent a collision is. The relevant information can be computed from the way that the image of the object grows on the retina of one eye. Firm data about the types of neurones that react to such looming stimuli and trigger avoidance reactions come from recent studies on the pigeon and the locust. The neurones responsible are tightly tuned to detect objects that are approaching on a direct collision course. In the pigeon these neurones signal the time remaining before collision whereas in the locust they have a crucial role in the simple strategy this animal uses to detect an object approaching on a collision course. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: Immunocytochemical evidence that collision sensing neurons in the locust visual system contain acetylcholine paper_content: The lobula giant movement detector (LGMD1 and -2) neurons in the locust visual system are parts of motion-sensitive pathways that detect objects approaching on a collision course. The dendritic processes of the LGMD1 and -2 in the lobula are localised to discrete regions, allowing the dendrites of each neuron to be distinguished uniquely. As was described previously for the LGMD1, the afferent processes onto the LGMD2 synapse directly with each other, and these synapses are immediately adjacent to their outputs onto the LGMD2. Here we present immunocytochemical evidence, using antibodies against choline-protein conjugates and a polyclonal antiserum against choline acetyltransferase (ChAT; Chemicon Ab 143), that the LGMD1 and -2 and the retinotopic units presynaptic to them contain acetylcholine (ACh). It is proposed that these retinotopic units excite the LGMD1 or -2 but inhibit each other.
It is well established that ACh has both excitatory and inhibitory effects and may provide the substrate for a critical race in the LGMD1 or -2, between excitation caused by edges moving out over successive photoreceptors, and inhibition spreading laterally resulting in the selective response to objects approaching on a collision course. In the optic lobe, ACh was also found to be localised in discrete layers of the medulla and in the outer chiasm between the lamina and medulla. In the brain, the antennal lobes contained neurons that reacted positively for ACh. Silver- or haematoxylin and eosin-stained sections through the optic lobe confirmed the identities of the positively immunostained neurons. --- paper_title: Synaptic Connections of First-Stage Visual Neurons in the Locust Schistocerca gregaria Extend Evolution of Tetrad Synapses Back 200 Million Years paper_content: The small size of some insects, and the crystalline regularity of their eyes, have made them ideal for large-scale reconstructions of visual circuits. In phylogenetically recent muscomorph flies, like Drosophila, precisely coordinated output to different motion-processing pathways is delivered by photoreceptors (R cells), targeting four different postsynaptic cells at each synapse (tetrad). Tetrads were linked to the evolution of aerial agility. To reconstruct circuits for vision in the larger brain of a locust, a phylogenetically old, flying insect, we adapted serial block-face scanning electron microscopy (SBEM). Locust lamina monopolar cells, L1 and L2, were the main targets of the R cell pathway, L1 and L2 each fed a different circuit, only L1 providing feedback onto R cells. Unexpectedly, 40% of all locust R cell synapses onto both L1 and L2 were tetrads, revealing the emergence of tetrads in an arthropod group present 200 million years before muscomorph flies appeared, coinciding with the early evolution of flight. J. Comp. Neurol. 523:298–312, 2015. © 2014 Wiley Periodicals, Inc. --- paper_title: Spike frequency adaptation mediates looming stimulus selectivity in a collision-detecting neuron paper_content: Studying the mechanisms by which spike frequency adaptation shapes visual stimulus selectivity in the lobula giant movement detector interneuron of the locust visual system, the authors find that spike frequency adaptation selectively decreases this neuron's responses to nonpreferred stimuli. --- paper_title: Activity of descending contralateral movement detector neurons and collision avoidance behaviour in response to head-on visual stimuli in locusts paper_content: Abstract. We recorded the activity of the right and left descending contralateral movement detectors responding to 10-cm (small) or 20-cm (large) computer-generated spheres approaching along different trajectories in the locust's frontal field of view. In separate experiments we examined the steering responses of tethered flying locusts to identical stimuli. The descending contralateral movement detectors were more sensitive to variations in target trajectory in the horizontal plane than in the vertical plane. Descending contralateral movement detector activity was related to target trajectory and to target size and was most sensitive to small objects converging on a direct collision course from above and to one side. Small objects failed to induce collision avoidance manoeuvres whereas large objects produced reliable collision avoidance responses. 
Large targets approaching along a converging trajectory produced steering responses that were either away from or toward the side of approach of the object, whereas targets approaching along trajectories that were offset from the locust's mid-longitudinal body axis primarily evoked responses away from the target. We detected no differences in the discharge properties of the descending contralateral movement detector pair that could account for the different collision avoidance behaviours evoked by varying the target size and trajectories. We suggest that descending contralateral movement detector properties are better suited to predator evasion than collision avoidance. --- paper_title: Responses to object approach by a wide field visual neurone, the LGMD2 of the locust: Characterization and image cues paper_content: The LGMD2 belongs to a group of giant movement-detecting neurones which have fan-shaped arbors in the lobula of the locust optic lobe and respond to movements of objects. One of these neurones, the LGMD1, has been shown to respond directionally to movements of objects in depth, generating vigorous, maintained spike discharges during object approach. Here we compare the responses of the LGMD2 neurone with those of the LGMD1 to simulated movements of objects in depth and examine different image cues which could allow the LGMD2 to distinguish approaching from receding objects. In the absence of stimulation, the LGMD2 has a resting discharge of 10–40 spikes s−1 compared with <1 spike s−1 for the LGMD1. The most powerful excitatory stimulus for the LGMD2 is a dark object approaching the eye. Responses to approaching objects are suppressed by wide field movements of the background. Unlike the LGMD1, the LGMD2 is not excited by the approach of light objects; it specifically responds to movement of edges in the light to dark direction. Both neurones rely on the same monocular image cues to distinguish approaching from receding objects: an increase in the velocity with which edges of images travel over the eye; and an increase in the extent of edges in the image during approach. --- paper_title: Looming detection by identified visual interneurons during larval development of the locust Locusta migratoria paper_content: Insect larvae clearly react to visual stimuli, but the ability of any visual neuron in a newly hatched insect to respond selectively to particular stimuli has not been directly tested. We characterised a pair of neurons in locust larvae that have been extensively studied in adults, where they are known to respond selectively to objects approaching on a collision course: the lobula giant motion detector (LGMD) and its postsynaptic partner, the descending contralateral motion detector (DCMD). Our physiological recordings of DCMD axon spikes reveal that at the time of hatching, the neurons already respond selectively to objects approaching the locust and they discriminate between stimulus approach speeds with differences in spike frequency. For a particular approaching stimulus, both the number and peak frequency of spikes increase with instar. In contrast, the number of spikes in responses to receding stimuli decreases with instar, so performance in discriminating approaching from receding stimuli improves as the locust goes through successive moults. In all instars, visual movement over one part of the visual field suppresses a response to movement over another part. 
Electron microscopy demonstrates that the anatomical substrate for the selective response to approaching stimuli is present in all larval instars: small neuronal processes carrying information from the eye make synapses both onto LGMD dendrites and with each other, providing pathways for lateral inhibition that shape selectivity for approaching objects. --- paper_title: INTRACELLULAR CHARACTERIZATION OF NEURONS IN THE LOCUST BRAIN SIGNALING IMPENDING COLLISION paper_content: 1. In response to a rapidly approaching object, intracellular recordings show that excitation in the locust lobula giant movement detecting (LGMD) neuron builds up exponentially, particularly during the final stages of object approach. After the cessation of object motion, inhibitory potentials in the LGMD then help to terminate this excitation. Excitation in the LGMD follows object recession with a short, constant latency but is cut back rapidly by hyperpolarizing potentials. The timing of these hyperpolarizing potentials in the LGMD is variable, and their latency following object recession is shortest with the highest velocities of motion simulated. The hyperpolarizing potentials last from 50-300 ms and are often followed by re-excitation. The observed hyperpolarizations of the LGMD can occur without any preceding excitation and are accompanied by a measurable conductance increase. The hyperpolarizations are likely to be inhibitory postsynaptic potentials (PSPs). The behavior of the intracellularly recorded inhibitory PSPs (IPSPs) closely parallels that of the feed forward inhibitory loop in the neural network described by Rind and Bramwell. 2. The preference of the LGMD for approaching versus receding objects remains over a wide range of starting and finishing distances. The response to object approach, measured both as membrane potential and spike rate, remains single peaked with starting distances of between 200 and 2,100 mm, and approach speeds of 0.5-2 m/s. These results confirm the behavior predicted by the neural network described by Rind and Bramwell but contradicts the findings of Rind and Simmons, forcing a re-evaluation of the suitability of some of the mechanical visual stimuli used in that study. 3. For depolarization of the LGMD neuron to be maintained or increased throughout the motion of image edges, the edges must move with increasing velocity over the eye. Membrane potential declines before the end of edge motion with constant velocities of edge motion. 4. A second identified neuron, the LGMD2 also is shown to respond directionally to approaching objects. In both the LGMD and LGMD2 neurons, postsynaptic inhibition shapes the directional response to object motion. --- paper_title: Orthopteran DCMD neuron: a reevaluation of responses to moving objects. I. Selective responses to approaching objects paper_content: 1. The "descending contralateral movement detector" (DCMD) neuron in the locust has been challenged with a variety of moving stimuli, including scenes from a film (Star Wars), moving disks, and images generated by computer. The neuron responds well to any rapid movement. For a dark object moving along a straight path at a uniform velocity, the DCMD gives the strongest response when the object travels directly toward the eye, and the weakest when the object travels away from the eye. Instead of expressing selectivity for movements of small rather than large objects, the DCMD responds preferentially to approaching objects. 2. 
The neuron shows a clear selectivity for approach over recession for a variety of sizes and velocities of movement both of real objects and in simulated movements. When a disk that subtends > or = 5 degrees at the eye approaches the eye, there are two peaks in spike rate: one immediately after the start of movement; and a second that builds up during the approach. When a disk recedes f... --- paper_title: Neural network based on the input organization of an identified neuron signaling impending collision paper_content: 1. We describe a four-layered neural network (Fig. 1), based on the input organization of a collision signaling neuron in the visual system of the locust, the lobula giant movement detector (LGMD).... --- paper_title: Background visual motion affects responses of an insect motion‐sensitive neuron to objects deviating from a collision course paper_content: Stimulus complexity affects the response of looming sensitive neurons in a variety of animal taxa. The Lobula Giant Movement Detector/Descending Contralateral Movement Detector (LGMD/DCMD) pathway is well‐characterized in the locust visual system. It responds to simple objects approaching on a direct collision course (i.e., looming) as well as complex motion defined by changes in stimulus velocity, trajectory, and transitions, all of which are affected by the presence or absence of background visual motion. In this study, we focused on DCMD responses to objects transitioning away from a collision course, which emulates a successful locust avoidance behavior. We presented each of 20 locusts with a sequence of complex three‐dimensional visual stimuli in simple, scattered, and progressive flow field backgrounds while simultaneously recording DCMD activity extracellularly. DCMD responses to looming stimuli were generally characteristic irrespective of stimulus background. However, changing background complexity affected peak firing rates and peak time, and caused changes in peak rise and fall phases. The DCMD response to complex object motion also varied with the azimuthal approach angle and the dynamics of object edge expansion. These data fit with an existing correlational model that relates expansion properties to firing rate modulation during trajectory changes. --- paper_title: The locust DCMD, a movement-detecting neurone tightly tuned to collision trajectories paper_content: A Silicon Graphics computer was used to challenge the locust descending contralateral movement detector (DCMD) neurone with images of approaching objects. The DCMD gave its strongest response, measured as either total spike number or spike frequency, to objects approaching on a direct collision course. Deviation in either a horizontal or vertical direction from a direct collision course resulted in a reduced response. The decline in the DCMD response with increasing deviation from a collision course was used as a measure of the tightness of DCMD tuning for collision trajectories. Tuning was defined as the half-width of the response when it had fallen to half its maximum level. The response tuning, measured as averaged mean spike number versus deviation away from a collision course, had a half-width at half-maximum response of 2.4 d–3.0 d for a deviation in the horizontal direction and 3.0 d for a deviation in the vertical direction. Mean peak spike frequency showed an even sharper tuning, with a half-width at half-maximum response of 1.8 d for deviations away from a collision course in the horizontal plane.
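The four-layered LGMD network mentioned in the entry above (excitation from moving edges racing against laterally spreading inhibition, gated by feed-forward inhibition) can be illustrated with a simplified frame-based model. The sketch below only loosely follows that layered organisation; the kernel weights, thresholds, and decay constants are assumptions, and the edge wrap-around from np.roll is accepted for brevity.

```python
import numpy as np

def lgmd_step(frame, prev_frame, prev_inhib, w_inhib=0.4, decay=0.6,
              spike_threshold=0.02, ffi_threshold=0.2):
    """One step of a simplified LGMD-style looming detector.

    frame, prev_frame: 2-D grayscale images in [0, 1];
    prev_inhib: inhibition layer carried over from the previous step.
    Returns (spike, excitation_map, inhibition_map).
    """
    # P layer: absolute luminance change between successive frames
    p = np.abs(frame - prev_frame)

    # I layer: inhibition spreads to the 8 neighbours of each excited
    # pixel and decays over time, so it lags the excitation slightly
    neigh = sum(np.roll(np.roll(p, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)) / 8.0
    inhib = decay * prev_inhib + neigh

    # S layer: excitation survives only where it outruns the delayed,
    # laterally spread inhibition (the "critical race")
    s = np.maximum(p - w_inhib * inhib, 0.0)

    # membrane potential: normalised sum over the S layer, gated by a
    # feed-forward inhibition term that vetoes whole-field motion
    potential = s.mean()
    ffi = p.mean()
    spike = bool(potential > spike_threshold and ffi < ffi_threshold)
    return spike, s, inhib
```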
--- paper_title: Multiplication and stimulus invariance in a looming-sensitive neuron paper_content: Multiplicative operations and invariance of neuronal responses are thought to play important roles in the processing of neural information in many sensory systems. Yet the biophysical mechanisms that underlie both multiplication and invariance of neuronal responses in vivo, either at the single cell or at the network level, remain to a large extent unknown. Recent work on an identified neuron in the locust visual system (the LGMD neuron) that responds well to objects looming on a collision course towards the animal suggests that this cell represents a good model to investigate the biophysical basis of multiplication and invariance at the single neuron level. Experimental and theoretical results are consistent with multiplication being implemented by subtraction of two logarithmic terms followed by exponentiation via active membrane conductances, according to a × 1/b = exp(log(a) − log(b)). Invariance appears to be in part due to non-linear integration of synaptic inputs within the dendritic tree of this neuron. --- paper_title: The anatomy and output connection of a locust visual interneurone; the lobular giant movement detector (LGMD) neurone paper_content: 1. The anatomy of a giant movement detector neurone in the locust lobula (the LGMD) is described on the basis of both intracellular injection of cobalt (Fig. 2) and the reconstruction of osmium-ethyl gallate and silver impregnated serial sections (Fig. 3). 2. It is shown that the LGMD has an anatomically complex junction with a previously described interneurone, the Descending Contralateral Movement Detector (Fig. 4), and that spikes in the LGMD precede 1:1 with fixed latency spikes in the DCMD (Figs. 1, 5). 3. Three separate dendritic subfields are seen in the lobula complex (Figs. 2, 3); these are tentatively ascribed to the three different classes of input to the cell. 4. A large part of the LGMD's terminal arborisation appears to serve only a single functional junction, that with the DCMD (Fig. 4). --- paper_title: Seeing what is coming: building collision-sensitive neurones paper_content: The image of a rapidly approaching object has to elicit a quick response. An animal needs to know that the object is approaching on a collision course and how imminent a collision is. The relevant information can be computed from the way that the image of the object grows on the retina of one eye. Firm data about the types of neurones that react to such looming stimuli and trigger avoidance reactions come from recent studies on the pigeon and the locust. The neurones responsible are tightly tuned to detect objects that are approaching on a direct collision course. In the pigeon these neurones signal the time remaining before collision whereas in the locust they have a crucial role in the simple strategy this animal uses to detect an object approaching on a collision course. --- paper_title: Neuronal correlates of the visually elicited escape response of the crab Chasmagnathus upon seasonal variations, stimuli changes and perceptual alterations paper_content: When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape.
In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision for escaping from VDS. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: Non-Linear Neuronal Responses as an Emergent Property of Afferent Networks: A Case Study of the Locust Lobula Giant Movement Detector paper_content: In principle it appears advantageous for single neurons to perform non-linear operations. Indeed it has been reported that some neurons show signatures of such operations in their electrophysiological response. A particular case in point is the Lobula Giant Movement Detector (LGMD) neuron of the locust, which is reported to locally perform a functional multiplication. Given the wide ramifications of this suggestion with respect to our understanding of neuronal computations, it is essential that this interpretation of the LGMD as a local multiplication unit is thoroughly tested. Here we evaluate an alternative model that tests the hypothesis that the non-linear responses of the LGMD neuron emerge from the interactions of many neurons in the opto-motor processing structure of the locust. We show, by exposing our model to standard LGMD stimulation protocols, that the properties of the LGMD that were seen as a hallmark of local non-linear operations can be explained as emerging from the dynamics of the pre-synaptic network. Moreover, we demonstrate that these properties strongly depend on the details of the synaptic projections from the medulla to the LGMD. From these observations we deduce a number of testable predictions. To assess the real-time properties of our model we applied it to a high-speed robot. These robot results show that our model of the locust opto-motor system is able to reliably stabilize the movement trajectory of the robot and can robustly support collision avoidance. In addition, these behavioural experiments suggest that the emergent non-linear responses of the LGMD neuron enhance the system's collision detection acuity. We show how all reported properties of this neuron are consistently reproduced by this alternative model, and how they emerge from the overall opto-motor processing structure of the locust. 
Hence, our results propose an alternative view on neuronal computation that emphasizes the network properties as opposed to the local transformations that can be performed by single neurons. --- paper_title: Modelling LGMD2 visual neuron system paper_content: Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 has been successfully used in robot navigation to avoid impending collisions. LGMD2 also responds to looming stimuli in depth and shares most of the same properties with LGMD1; however, LGMD2 has its own specific collision-selective responses when dealing with different visual stimuli. Therefore, in this paper, we propose a novel way to model LGMD2 in order to emulate its predicted biological functions and to address some shortcomings of previous LGMD1 computational models. The mechanisms of ON and OFF cells, as well as bio-inspired nonlinear functions, are introduced in our model to achieve LGMD2's collision selectivity. Our model has been tested on a miniature mobile robot in real time. The results suggest that the model performs well in both software and hardware for collision recognition. --- paper_title: Synaptic Connections of First-Stage Visual Neurons in the Locust Schistocerca gregaria Extend Evolution of Tetrad Synapses Back 200 Million Years paper_content: The small size of some insects, and the crystalline regularity of their eyes, have made them ideal for large-scale reconstructions of visual circuits. In phylogenetically recent muscomorph flies, like Drosophila, precisely coordinated output to different motion-processing pathways is delivered by photoreceptors (R cells), targeting four different postsynaptic cells at each synapse (tetrad). Tetrads were linked to the evolution of aerial agility. To reconstruct circuits for vision in the larger brain of a locust, a phylogenetically old, flying insect, we adapted serial block-face scanning electron microscopy (SBEM). Locust lamina monopolar cells, L1 and L2, were the main targets of the R cell pathway; L1 and L2 each fed a different circuit, with only L1 providing feedback onto R cells. Unexpectedly, 40% of all locust R cell synapses onto both L1 and L2 were tetrads, revealing the emergence of tetrads in an arthropod group present 200 million years before muscomorph flies appeared, coinciding with the early evolution of flight. --- paper_title: A Collision Detection System for a Mobile Robot Inspired by the Locust Visual System paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the image of an approaching object such as a predator. A computational neural network model based on the structure of the LGMD and its afferent inputs is also able to detect approaching objects. In order for the LGMD network to be used as a robust collision detector for robotic applications, we proposed a new mechanism to enhance the features of colliding objects before the excitations are gathered by the LGMD cell. The new model favours grouped excitation but tends to ignore isolated excitation, using selective passing coefficients. Experiments with a Khepera robot showed that the proposed collision detector worked in real time in an arena surrounded by blocks.
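The mobile-robot entry directly above credits its robustness to a feature-enhancement step in which clustered ("grouped") excitation from expanding edges is passed on while isolated excitation from background detail is suppressed. The sketch below shows one simple way such a passing-coefficient scheme can be written; the function name, neighbourhood size, normalisation and decay threshold are placeholder choices, not the coefficients used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def group_excitation(s_layer, neighbour_window=3, decay=0.1):
    """Illustrative sketch of a grouped-excitation (feature enhancement) filter.

    s_layer: 2-D array of rectified excitation (e.g. the S layer of an
    LGMD-style network). Excitation that forms part of a spatial cluster,
    such as an expanding edge, is passed on; isolated excitation from
    scattered background detail is attenuated and then thresholded away.
    The window size and decay fraction are assumptions for illustration.
    """
    # Local mean excitation in a small neighbourhood around each cell.
    local_mean = uniform_filter(s_layer, size=neighbour_window, mode="nearest")
    # Passing coefficient in [0, 1]: near 1 inside well-supported clusters,
    # small where a cell's neighbourhood carries little excitation.
    passing = local_mean / (local_mean.max() + 1e-9)
    grouped = s_layer * passing
    # Discard residual excitation well below the strongest grouped response.
    grouped[grouped < decay * grouped.max()] = 0.0
    return grouped
```

In an LGMD-style pipeline this filter would sit between the summation layer and the final membrane-potential stage, so that the value fed to the output cell is dominated by coherent expanding edges rather than scattered background texture.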
--- paper_title: Reactive direction control for a mobile robot: a locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated paper_content: Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, and this triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using 'randomised winner-take-all' or 'steering wheel' algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a similar distribution of escape directions as real locusts. We also found that by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by allowing it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research. --- paper_title: Modeling disinhibition within a layered structure of the LGMD neuron paper_content: Due to their relatively simple nervous system, insects are an excellent way through which we can investigate how visual information is acquired and processed in order to trigger specific behaviours, such as flight stabilization, flying speed adaptation and collision avoidance responses, among others. From the behaviours previously mentioned, we are particularly interested in visually evoked collision avoidance responses. These behaviours are, by necessity, fast and robust, making them excellent systems to study the neural basis of behaviour. On the other hand, artificial collision avoidance is a complex task, in which the algorithms used need to be fast to process the captured data and then make real-time decisions. Consequently, neurorobotic models may provide a foundation for the development of more effective and autonomous devices. In this paper, we will focus our attention on the Lobula Giant Movement Detector (LGMD), a visual neuron located in the third layer of the locust optic lobe that responds selectively to approaching objects and is responsible for triggering collision avoidance maneuvers in locusts. This selectivity of the LGMD neuron to approaching objects seems to result from the dynamics of the network pre-synaptic to this neuron. Typically, this modelling is done with a conventional Difference of Gaussians (DoG) filter.
In this paper, we propose the integration of a different model, an Inversed Difference of Gaussians (IDoG) filter, which preserves the different level of brightness in the captured image, enhancing the contrast at the edges. This change is expected to increase the performance of the LGMD model. Finally, a comparative analysis of both modelations, as well as its effect in the final response of the LGMD neuron, will be performed. --- paper_title: Time-dependent activation of feedforward inhibition in a looming-sensitive neuron paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust visual system that responds preferentially to objects approaching on a collision course with the animal. For such looming stimuli, the LGMD firing rate gradually increases, peaks, and decays toward the end of approach. The LGMD receives both excitatory and feed-forward inhibitory inputs on distinct branches of its dendritic tree, but little is known about the contribution of feed-forward inhibition to its response properties. We used picrotoxin, a chloride channel blocker, to selectively block feed-forward inhibition to the LGMD. We then computed differences in firing rate and membrane potential between control and picrotoxin conditions to study the activation of feed-forward inhibition. For looming stimuli, a significant activation of inhibition was observed early, as objects exceeded on average ∼23° in angular extent at the retina. Inhibition then increased in parallel with excitation over the remainder of approach trials. Experiments in which the final angular size of the approaching objects was systematically varied revealed that the relative activation of excitation and inhibition remains well balanced over most of the course of looming trials. Feed-forward inhibition actively contributed to the termination of the response to approaching objects and was particularly effective for large or slowly moving objects. Suddenly appearing and receding objects activated excitation and feed-forward inhibition nearly simultaneously, in contrast to looming stimuli. Under these conditions, the activation of excitation and feed-forward inhibition was weaker than for approaching objects, suggesting that both are preferentially tuned to approaching objects. These results support a phenomenological model of multiplication within the LGMD and provide new constraints for biophysical models of its responses to looming and receding stimuli. --- paper_title: Performance of an insect-inspired target tracker in natural conditions paper_content: Robust and efficient target-tracking algorithms embedded on moving platforms, are a requirement for many computer vision and robotic applications. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. As inspiration, we look to biological lightweight solutions-lightweight and low-powered flying insects. For example, dragonflies pursue prey and mates within cluttered, natural environments, deftly selecting their target amidst swarms. In our laboratory, we study the physiology and morphology of dragonfly 'small target motion detector' neurons likely to underlie this pursuit behaviour. Here we describe our insect-inspired tracking model derived from these data and compare its efficacy and efficiency with state-of-the-art engineering models. For model inputs, we use both publicly available video sequences, as well as our own task-specific dataset (small targets embedded within natural scenes). 
In the context of the tracking problem, we describe differences in object statistics within the video sequences. For the general dataset, our model often locks on to small components of larger objects, tracking these moving features. When input imagery includes small moving targets, for which our highly nonlinear filtering is matched, the robustness outperforms state-of-the-art trackers. In all scenarios, our insect-inspired tracker runs at least twice the speed of the comparison algorithms. (Less) --- paper_title: Processing of figure and background motion in the visual system of the fly paper_content: The visual system of the fly is able to extract different types of global retinal motion patterns as may be induced on the eyes during different flight maneuvers and to use this information to control visual orientation. The mechanisms underlying these tasks were analyzed by a combination of quantitative behavioral experiments on tethered flying flies (Musca domestica) and model simulations using different conditions of oscillatory large-field motion and relative motion of different segments of the stimulus pattern. Only torque responses about the vertical axis of the animal were determined. The stimulus patterns consisted of random dot textures ("Julesz patterns") which could be moved either horizontally or vertically. Horizontal rotatory large-field motion leads to compensatory optomotor turning responses, which under natural conditions would tend to stabilize the retinal image. The response amplitude depends on the oscillation frequency: It is much larger at low oscillation frequencies than at high ones. When an object and its background move relative to each other, the object may, in principle, be discriminated and then induce turning responses of the fly towards the object. However, whether the object is distinguished by the fly depends not only on the phase relationship between object and background motion but also on the oscillation frequency. At all phase relations tested, the object is detected only at high oscillation frequencies. For the patterns used here, the turning responses are only affected by motion along the horizontal axis of the eye. No influences caused by vertical motion could be detected. The experimental data can be explained best by assuming two parallel control systems with different temporal and spatial integration properties: TheLF-system which is most sensitive to coherent rotatory large-field motion and mediates compensatory optomotor responses mainly at low oscillation frequencies. In contrast, theSF-system is tuned to small-field and relative motion and thus specialized to discriminate a moving object from its background; it mediates turning responses towards objects mainly at high oscillation frequencies. The principal organization of the neural networks underlying these control systems could be derived from the characteristic features of the responses to the different stimulus conditions. The input to the model circuits responsible for the characteristic sensitivity of the SF-system to small-field and relative motion is provided by retinotopic arrays of local movement detectors. The movement detectors are integrated by a large-field element, the output cell of the network. The synapses between the detectors and the output cells have nonlinear transmission characteristics. 
Another type of large-field elements ("pool cells") which respond to motion in front of both eyes and have characteristic direction selectivities are assumed to interact with the local movement detector channels by inhibitory synapses of the shunting type, before the movement detectors are integrated by the output cells. The properties of the LF-system can be accounted for by similar model circuits which, however, differ with respect to the transmission characteristic of the synapses between the movement detectors and the output cell; moreover, their pool cells are only monocular. This type of network, however, is not necessary to account for the functional properties of the LF-system. Instead, intrinsic properties of single neurons may be sufficient. Computer simulations of the postulated mechanisms of the SF-and LF-system reveal that these can account for the specific features of the behavioral responses under quite different conditions of coherent large-field motion and relative motion of different pattern segments. --- paper_title: Insect Detection of Small Targets Moving in Visual Clutter paper_content: Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. --- paper_title: A Model for the Detection of Moving Targets in Visual Clutter Inspired by Insect Physiology paper_content: We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMD), that respond robustly to moving features, even when the velocity of the target is matched to the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. 
This full-wave rectifying, transient cell (RTC) reveals independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From this physiological data we have created a numerical model for target discrimination. This model includes nonlinear filtering based on the fly optics, the photoreceptors, the 1(st) order interneurons (Large Monopolar Cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues. --- paper_title: LGMD-based bio-inspired algorithm for detecting risk of collision of a road vehicle paper_content: LGMD (Lobula Giant Movement Detector) is part of a visual system of a locust, used to detect and evade approaching predators. Similar algorithm can be used in man-made systems like autonomous robots or safety systems in vehicles for detecting objects approaching on collision course. In this article usage of LGMD-based algorithms in road vehicles is investigated and a new solution is proposed. Video stream recorded from a moving vehicle inherently contains a lot of movement of the background due to vibrations and due to turning of the vehicle. This cause high output of LGMD and could trigger false alarms. The new approach, proposed in this article, enhances LGMD information and adds estimated expansion of objects in X and Y direction. Combination of all three sources of information gives a very good estimate of risk of collision. The new algorithm was developed using a reference set of test videos, where only one parameter was changed. Algorithm was tested with videos recorded from a moving vehicle in normal traffic and on the test ground, including real collisions in carton target at different speeds. Time between triggering of the alert and actual collision was measured. --- paper_title: A modified neural network model for Lobula Giant Movement Detector with additional depth movement feature paper_content: The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron that is located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of the approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and can trigger avoidance reactions whenever a rapidly approaching object is detected. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper proposes a modified LGMD model that provides additional movement depth direction information. The proposed model retains the simplicity of the previous neural network model, adding only a few new cells. It has been tested on both simulated and recorded video data sets. The experimental results shows that the modified model can very efficiently provide stable information on the depth direction of movement. --- paper_title: Neural circuit tuning fly visual interneurons to motion of small objects. I. 
Dissection of the circuit by pharmacological and photoinactivation techniques paper_content: 1. Visual interneurons tuned to the motion of small objects are found in many animal species and are assumed to be the neuronal basis of figure-ground discrimination by relative motion. A well-examined example is the FD1-cell in the third visual neuropil of blowflies. This cell type responds best to motion of small objects. Motion of extended patterns elicits only small responses. As a neuronal mechanism that leads to such a response characteristic, it was proposed that the FD1-cell is inhibited by the two presumably GABAergic and, thus, inhibitory CH-cells, the VCH- and the DCH-cell. The CH-cells respond best to exactly that type of motion by which the activity of the FD1-cell is reduced. The hypothesis that the CH-cells inhibit the FD1-cell and, thus, mediate its selectivity to small moving objects was tested by ablating the CH-cells either pharmacologically or by photoinactivation. 2. After application of the gamma-aminobutyric acid (GABA) antagonist picrotoxinin, the FD1-cell responds more strongly to large-field than to small-field motion, i.e., it has lost its small-field selectivity. This suggests that the tuning of the FD1-cell to small moving objects relies on a GABAergic mechanism and, thus, most likely on the CH-cells. 3. The role of each CH-cell for small-field tuning was determined by inactivating them individually. They were injected with a fluorescent dye and then ablated by laser illumination. Only photoinactivation of the VCH-cell eliminated the specific selectivity of the FD1-cell for small-field motion. Ablation of the DCH-cell did not significantly change the response characteristic of the FD1-cell. This reveals the important role of the VCH-cells in mediating the characteristic sensitivity of the FD1-cell to motion of small objects. 4. The FD1-cell is most sensitive to motion of small objects in the ventral part of the ipsilateral visual field, whereas motion in the dorsal part influences the cell only weakly. This specific feature fits well to the sensitivity of the VCH-cell to ipsilateral motion that is most pronounced in the ventral part of the visual field. The spatial sensitivity distribution of the FD1-cell matches also the characteristics of figure-ground discrimination and fixation behavior. --- paper_title: Near range path navigation using LGMD visual neural networks paper_content: In this paper, we proposed a method for near range path navigation for a mobile robot using a pair of biologically inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate naturally in a visual environment with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
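The near range navigation entry above turns a pair of LGMD outputs into wheel commands: whichever side reports the stronger excitation pushes the robot toward the other side. A minimal differential-drive version of that rule is sketched below; the base speed, gain and emergency threshold are illustrative values, not the settings used on the robot in the paper.

```python
def steer_from_bilateral_lgmd(left_output, right_output,
                              base_speed=0.2, gain=0.5, emergency=0.9):
    """Illustrative steering rule for a bilateral pair of LGMD-style detectors.

    left_output / right_output: normalised excitation in [0, 1] of the
    detectors watching the left and right halves of the view. Returns
    (v_left, v_right) wheel speeds for a differential-drive robot.
    All constants are placeholder assumptions.
    """
    if max(left_output, right_output) > emergency:
        # Imminent collision on one side: rotate in place away from it.
        turn = 1.0 if left_output > right_output else -1.0
        return (turn * base_speed, -turn * base_speed)
    # Stronger excitation on one side speeds up that side's wheel and slows
    # the other, so the robot veers away from the side reporting the obstacle.
    diff = gain * (left_output - right_output)
    return (base_speed + diff, base_speed - diff)
```

The same left/right comparison could equally feed the randomised winner-take-all or steering-wheel integration schemes described in the reactive direction control entry earlier in this list.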
--- paper_title: The centrifugal horizontal cells in the lobula plate of the blowfly, Phaenicia sericata paper_content: Intracellular recordings combined with iontophoretic injection of Procion Yellow M4RAN were used to study the anatomy and physiology of the centrifugal horizontal cells (CH-cells) in the lobula plate of the blowfly, Phaenicia sericata. ::: ::: Anatomy: The CH-cells comprise a set of two homolateral, giant visual interneurones (DCH, VCH) at the rostral surface of each lobula plate. Their extensive arborizations in the lobula plate possess bulbous swellings (boutons terminaux). The arborization of one cell (DCH) covers the dorsal, and the arborization of the other cell (VCH) the ventral half of the lobula plate. Their axons run jointly with those of the horizontal cells through the chiasma internum and the optic peduncle. Their protocerebral arborization possesses spines; they form a dense network together with the axonal arborization of the horizontal cells, a second type of giant homolateral cell most sensitive to horizontal motion. The protocerebral arborization of the CH-cells gives rise to a cell body fibre which traverses the protocerebrum dorsally to the oesophageal canal. The cell body lies on the contralateral side laterally and slightly dorsally to the oesophageal canal in the frontal cell body layer. ::: ::: Physiology: The CH-cells respond with graded potentials to rotatory movements of their surround. Cells in the right lobula plate respond with excitation (excitatory postsynaptic potentials, membrane depolarization) to clockwise motion (contralateral regressive, ipsilateral progressive), and with inhibition (inhibitory postsynaptic potentials, membrane hyperpolarization) to counterclockwise motion in either or both receptive fields; CH-cells respond to motion presented to the ipsilateral and/or contralateral eye. Cells of the left lobula plate respond correspondingly to the reverse directions of motion. Vertical pattern motion and stationary patterns are ineffective. ::: ::: The heterolateral H1-neurone elicits excitatory postsynaptic potentials in the DCH-cell; these postsynaptic potentials are tightly correlated 1:1 to the preceding H1-action potential. The delay between the peak of the action potential and the beginning of the DCH-postsynaptic potential is 1.15 msec, agreeing very well with the value reported previously for the blowfly, Calliphora (Hausen, 1976a). The synaptic input and output connections of the CH-cells are discussed. --- paper_title: A bio-inspired visual collision detection mechanism for cars: Combining insect inspired neurons to create a robust system paper_content: The lobula giant movement detector (LGMD) of locusts is a visual interneuron that responds with an increasing spike frequency to an object approaching on a direct collision course. Recent studies involving the use of LGMD models to detect car collisions showed that it could detect collisions, but the neuron produced collision alerts to non-colliding, translating, stimuli in many cases. This study presents a modified model to address these problems. It shows how the neurons pre-synaptic to the LGMD show a remarkable ability to filter images, and only colliding and translating stimuli produce excitation in the neuron. It then integrates the LGMD network with models based on the elementary movement detector (EMD) neurons from the fly visual system, which are used to analyse directional excitation patterns in the biologically filtered images. 
Combining the information from the LGMD neuron and four directionally sensitive neurons produces a robust collision detection system for a wide range of automotive test situations. --- paper_title: Properties of neuronal facilitation that improve target tracking in natural pursuit simulations paper_content: Although flying insects have limited visual acuity (approx. 1°) and relatively small brains, many species pursue tiny targets against cluttered backgrounds with high success. Our previous computational model, inspired by electrophysiological recordings from insect 'small target motion detector' (STMD) neurons, did not account for several key properties described from the biological system. These include the recent observations of response 'facilitation' (a slow build-up of response to targets that move on long, continuous trajectories) and 'selective attention', a competitive mechanism that selects one target from alternatives. Here, we present an elaborated STMD-inspired model, implemented in a closed loop target-tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. We test this system against heavily cluttered natural scenes. Inclusion of facilitation not only substantially improves success for even short-duration pursuits, but it also enhances the ability to 'attend' to one target in the presence of distracters. Our model predicts optimal facilitation parameters that are static in space and dynamic in time, changing with respect to the amount of background clutter and the intended purpose of the pursuit. Our results provide insights into insect neurophysiology and show the potential of this algorithm for implementation in artificial visual systems and robotic applications. --- paper_title: Responses to object approach by a wide field visual neurone, the LGMD2 of the locust: Characterization and image cues paper_content: The LGMD2 belongs to a group of giant movement-detecting neurones which have fan-shaped arbors in the lobula of the locust optic lobe and respond to movements of objects. One of these neurones, the LGMD1, has been shown to respond directionally to movements of objects in depth, generating vigorous, maintained spike discharges during object approach. Here we compare the responses of the LGMD2 neurone with those of the LGMD1 to simulated movements of objects in depth and examine different image cues which could allow the LGMD2 to distinguish approaching from receding objects. In the absence of stimulation, the LGMD2 has a resting discharge of 10–40 spikes s−1 compared with <1 spike s−1 for the LGMD1. The most powerful excitatory stimulus for the LGMD2 is a dark object approaching the eye. Responses to approaching objects are suppressed by wide field movements of the background. Unlike the LGMD1, the LGMD2 is not excited by the approach of light objects; it specifically responds to movement of edges in the light to dark direction. Both neurones rely on the same monocular image cues to distinguish approaching from receding objects: an increase in the velocity with which edges of images travel over the eye; and an increase in the extent of edges in the image during approach. --- paper_title: Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot paper_content: The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. 
In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been modeled and successfully applied in robotic vision systems for perceiving potential collisions in an efficient and reliable manner. In this research, we implement binocular neuronal models, for the first time combining the functionalities of LGMD1 and LGMD2 neurons in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions of this research: (1) The arena tests involving multiple robots verified the effectiveness and robustness of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, which complements corresponding biological research. (3) The utilized micro robot may also benefit research on other embedded vision systems as well as swarm robotics. --- paper_title: Computation of Object Approach by a Wide-Field, Motion-Sensitive Neuron paper_content: The lobula giant motion detector (LGMD) in the locust visual system is a wide-field, motion-sensitive neuron that responds vigorously to objects approaching the animal on a collision course. We investigated the computation performed by LGMD when it responds to approaching objects by recording the activity of its postsynaptic target, the descending contralateral motion detector (DCMD). In each animal, peak DCMD activity occurred a fixed delay δ (15 ≤ δ ≤ 35 msec) after the approaching object had reached a specific angular threshold θthres on the retina (15° ≤ θthres ≤ 40°). θthres was independent of the size or velocity of the approaching object. This angular threshold computation was quite accurate: the error of LGMD and DCMD in estimating θthres (3.1–11.9°) corresponds to the angular separation between two and six ommatidia at each edge of the expanding object on the locust retina. It was also resistant to large amplitude changes in background luminosity, contrast, and body temperature. Using several experimentally derived assumptions, the firing rate of LGMD and DCMD could be shown to depend on the product ψ(t − δ) · e^(−αθ(t − δ)), where θ(t) is the angular size subtended by the object during approach, ψ(t) is the angular edge velocity of the object, and the constant α is related to the angular threshold size [α = 1/tan(θthres/2)]. Because LGMD appears to receive distinct input projections, respectively motion- and size-sensitive, this result suggests that a multiplication operation is implemented by LGMD. Thus, LGMD might be an ideal model to investigate the biophysical implementation of a multiplication operation by single neurons. --- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in juvenile locusts. This collision-sensitive neuron matures early, in young or even newly hatched locusts, and is selective only for looming dark objects against a bright background in depth, representing swooping predators, a situation similar to that faced by ground robots or vehicles.
However, little has been done on modeling LGMD2, let alone on its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are, first, enhancing the collision selectivity in a bio-inspired way by constructing a computationally efficient visual sensor that realizes the revealed specific characteristics of LGMD2. Second, we applied the neural network to support path navigation of an autonomous miniature ground robot in an arena. We also examined its neural properties through systematic experiments, challenging it with image streams from the micro-robot's visual sensor. --- paper_title: A modified model for the Lobula Giant Movement Detector and its FPGA implementation paper_content: Bio-inspired vision sensors are particularly appropriate candidates for navigation of vehicles or mobile robots due to their computational simplicity, allowing compact hardware implementations with low power dissipation. The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for LGMD that provides additional depth direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector. --- paper_title: Multiplicative computation in a visual neuron sensitive to looming paper_content: Multiplicative operations are important in sensory processing, but their biophysical implementation remains largely unknown. We investigated an identified neuron (the lobula giant movement detector, LGMD, of locusts) whose output firing rate in response to looming visual stimuli has been described by two models, one of which involves a multiplication. In this model, the LGMD multiplies postsynaptically two inputs (one excitatory, one inhibitory) that converge onto its dendritic tree; in the other model, inhibition is presynaptic to the LGMD. By using selective activation and inactivation of pre- and postsynaptic inhibition, we show that postsynaptic inhibition has a predominant role, suggesting that multiplication is implemented within the neuron itself. Our pharmacological experiments and measurements of firing rate versus membrane potential also reveal that sodium channels act both to advance the response of the LGMD in time and to map membrane potential to firing rate in a nearly exponential manner. These results are consistent with an implementation of multiplication based on dendritic subtraction of two converging inputs encoded logarithmically, followed by exponentiation through active membrane conductances. --- paper_title: Neural circuit tuning fly visual neurons to motion of small objects. II. Input organization of inhibitory circuit elements revealed by electrophysiological and optical recording techniques paper_content: 1.
The FD1-cell in the visual system of the fly is an identified visual interneuron that is specifically tuned to motion of small objects. In the companion paper it was shown that this response pro... --- paper_title: Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the image of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection against complex backgrounds. The isolated excitation caused by background detail is filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real-time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds. --- paper_title: Invariance of Angular Threshold Computation in a Wide-Field Looming-Sensitive Neuron paper_content: The lobula giant motion detector (LGMD) is a wide-field bilateral visual interneuron in North American locusts that acts as an angular threshold detector during the approach of a solid square along a trajectory perpendicular to the long axis of the animal (Gabbiani et al., 1999a). We investigated the dependence of this angular threshold computation on several stimulus parameters that alter the spatial and temporal activation patterns of inputs onto the dendritic tree of the LGMD, across three locust species. The same angular threshold computation was implemented by LGMD in all three species. The angular threshold computation was invariant to changes in target shape (from solid squares to solid discs) and to changes in target texture (checkerboard and concentric patterns). Finally, the angular threshold computation did not depend on object approach angle, over at least 135 degrees in the horizontal plane. A two-dimensional model of the responses of the LGMD based on linear summation of motion-related excitatory and size-dependent inhibitory inputs successfully reproduced the experimental results for squares and discs approaching perpendicular to the long axis of the animal. Linear summation, however, was unable to account for invariance to object texture or approach angle. These results indicate that LGMD is a reliable neuron with which to study the biophysical mechanisms underlying the generation of complex but invariant visual responses by dendritic integration. They also suggest that invariance arises in part from non-linear integration of excitatory inputs within the dendritic tree of the LGMD. --- paper_title: A Bio-inspired Collision Detector for Small Quadcopter paper_content: The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired humans to imitate them in flying machines. In this paper, we studied a novel bio-inspired collision detector and its application on a quadcopter.
The detector is inspired by the lobula giant movement detector (LGMD) neurons of locusts and is implemented on an STM32F407 microcontroller unit (MCU). Compared to other collision-detecting methods applied on quadcopters, we focused on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase computing efficiency during an obstacle detection task, even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The experimental results demonstrated that the LGMD collision detector is feasible as a vision module for the quadcopter's collision avoidance task. --- paper_title: How accurate need sensory coding be for behaviour? Experiments using a mobile robot paper_content: This paper argues that for those neuronal systems which control behaviour, reliable responses are more appropriate than precise responses. We illustrate this argument using a mobile robot controlled by the responses of a neuronal model of the locust LGMD system, a visual system which responds to looming objects. Our experiments show that although the responses of the model LGMD vary widely as the robot approaches obstacles, they still trigger avoidance responses. --- paper_title: Local and large-range inhibition in feature detection. paper_content: Lateral inhibition is perhaps the most ubiquitous of neuronal mechanisms, having been demonstrated in early stages of processing in many different sensory pathways of both mammals and invertebrates. Recent work challenges the long-standing view that assumes that similar mechanisms operate to tune neuronal responses to higher order properties. Scant evidence for lateral inhibition exists beyond the level of the most peripheral stages of visual processing, leading to suggestions that many features of the tuning of higher order visual neurons can be accounted for by the receptive field and other intrinsic coding properties of visual neurons. Using insect target neurons as a model, we present unequivocal evidence that feature tuning is shaped not by intrinsic properties but by potent spatial lateral inhibition operating well beyond the first stages of visual processing. In addition, we present evidence for a second form of higher-order spatial inhibition: a long-range interocular transfer of information that we argue serves a role in establishing interocular rivalry and thus potentially a neural substrate for directing attention to single targets in the presence of distracters. In so doing, we demonstrate not just one, but two levels of spatial inhibition acting beyond the level of peripheral processing. --- paper_title: Collision avoidance using a model of the locust LGMD neuron paper_content: The lobula giant movement detector (LGMD) system in the locust responds selectively to objects approaching the animal on a collision course. In earlier work we have presented a neural network model based on the LGMD system which shared this preference for approaching objects. We have extended this model in order to evaluate its responses in a real-world environment using a miniature mobile robot. This extended model shows reliable obstacle detection over an eight-fold range of speeds, and raises interesting questions about basic properties of the biological system.
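Several of the robot and UAV entries here ultimately threshold a looming response of the kind formalised in the Computation of Object Approach entry above, where the firing rate follows ψ(t − δ) · e^(−αθ(t − δ)) and peaks a fixed delay after the object reaches the angular threshold 2·tan⁻¹(1/α). The short sketch below reproduces that relationship numerically for a simulated approach at constant speed; the half-size/speed ratio, α and δ used here are arbitrary illustrative values, not fitted parameters from the papers.

```python
import numpy as np

def eta_response(l_over_v=0.04, alpha=6.0, delta=0.025, dt=0.001):
    """Worked sketch of the angular-threshold (eta-function) relationship.

    An object of half-size l approaches at constant speed v; time t runs up
    to collision at t = 0 (t < 0 beforehand). l_over_v is the half-size to
    speed ratio in seconds; alpha and delta (seconds) are illustrative.
    Returns the time axis, the model response, the predicted threshold angle
    in degrees, and the time of the response peak.
    """
    t = np.arange(-1.0, -dt, dt)                    # time before collision (s)
    theta = 2.0 * np.arctan(l_over_v / np.abs(t))   # full angular size (rad)
    psi = np.gradient(theta, dt)                    # angular expansion rate (rad/s)

    # Response at time t depends on the stimulus delta seconds earlier.
    shift = int(round(delta / dt))
    eta = np.zeros_like(t)
    eta[shift:] = psi[:-shift] * np.exp(-alpha * theta[:-shift])

    # The peak is expected ~delta after the object subtends 2*atan(1/alpha).
    theta_thres = 2.0 * np.arctan(1.0 / alpha)
    t_peak = t[np.argmax(eta)]
    return t, eta, np.degrees(theta_thres), t_peak
```

For the values shown, the predicted threshold angle is about 19 degrees, and the computed response should peak close to δ seconds after the simulated object crosses that angle; a robot controller can exploit the same property by treating the response peak (or a threshold crossing shortly before it) as the trigger for an avoidance turn.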
--- paper_title: A collision avoidance model based on the Lobula giant movement detector (LGMD) neuron of the locust paper_content: In insects, we can find very complex and compact neural structures that are task specific. These neural structures allow them to perform complex tasks such as visual navigation, including obstacle avoidance, landing, self-stabilization, etc. Obstacle avoidance is fundamental for successful navigation, and it can be combined with more systems to make up more complex behaviors. In this paper, we present a model for collision avoidance based on the Lobula giant movement detector (LGMD) cell of the locust. This is a wide-field visual neuron that responds to looming stimuli and that can trigger avoidance reactions whenever a rapidly approaching object is detected. Here, we present result based on both an offline study of the model and its application to a flying robot. --- paper_title: On the neuronal basis of figure-ground discrimination by relative motion in the visual system of the fly. III: Possible input circuitries and behavioural significance of the FD-cells paper_content: It has been concluded in the preceding papers (Egelhaaf, 1985a, b) that two functional classes of output elements of the visual ganglia might be involved in figure-ground discrimination by relative motion in the fly: The Horizontal Cells which respond best to the motion of large textured patterns and the FD-cells which are most sensitive to small moving objects. In this paper it is studied by computer simulations (1) in what way the input circuitry of the FD-cells might be organized and (2) the role the FD-cells play in figure-ground discrimination. The characteristic functional properties of the FD-cells can be explained by various alternative model networks. In all models the main input to the FD-cells is formed by two retinotopic arrays of small-field elementary movement detectors, responding to either front-to-back or back-to-front motion. According to their preferred direction of motion the FD-cells are excited by one of these movement detector classes and inhibited by the other. The synaptic transmission between the movement detectors and the FD-cells is assumed to be non-linear. It is a common property of all these model circuits that the inhibition of the FD-cells induced by large-field motion is mediated by pool cells which cover altogether the entire horizontal extent of the visual field of both eyes. These pool cells affect the response of the FD-cells either by pre- or postsynaptic shunting inhibition. Depending on the FD-cell under consideration, the pool cells are directionally selective for motion or sensitive to motion in either horizontal direction. The role the FD-cells and the Horizontal Cells are likely to play in figure-ground discrimination can be demonstrated by computer simulations of a composite neuronal model consisting of the model circuits for these cell types. According to their divergent spatial integration properties they perform different tasks in figure-ground discrimination: Whereas the Horizontal Cells mainly mediate information on wide-field motion, the FD-cells are selectively tuned to efficient detection of relatively small targets. Both cell classes together appear to be sufficient to account for figure-ground discrimination as it has been shown by analysis at the behavioural level. --- paper_title: Neural network based on the input organization of an identified neuron signaling impending collision paper_content: 1. 
We describe a four-layered neural network (Fig. 1), based on the input organization of a collision signaling neuron in the visual system of the locust, the lobula giant movement detector (LGMD).... --- paper_title: An obstacle avoidance method for two wheeled mobile robot paper_content: In this paper, we proposed an obstacle avoidance method for a mobile robot based on the lobula giant movement detector (LGMD). The LGMD is an identified neuron in the locust brain that responds most strongly to the image of an approaching object such as a predator. To assist the avoidance method, we add a distance measurement algorithm to the LGMD method. The measurement algorithm uses information from the camera to detect obstacles and calculate the route for the two-wheeled mobile robot. Simulation results confirmed the effectiveness of the proposed method. --- paper_title: Dynamic Range Enhance of Visual Sensor Circuits and Application for Multi-object Motion Detection paper_content: This paper presents a dynamic range enhanced visual sensor circuit and its application to multi-object detection based on the LGMD (Lobula Giant Movement Detector) method. The proposed visual sensor incorporates a DR (dynamic range) extension circuit with a general pipeline AD converter, so only small changes are needed to achieve the DR extension. Together with the modified LGMD model, our system would effectively enhance the reliability of motion detection compared with traditional motion detecting systems. --- paper_title: Localized direction selective responses in the dendrites of visual interneurons of the fly paper_content: Background: The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results: Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions: Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns.
--- paper_title: Obstacle avoidance with LGMD neuron: Towards a neuromorphic UAV implementation paper_content: We present a neuromorphic adaptation of a spiking neural network model of the locust Lobula Giant Movement Detector (LGMD), which detects objects increasing in size in the field of vision (looming) and can be used to facilitate obstacle avoidance in robotic applications. Our model is constrained by the parameters of a mixed signal analog-digital neuromorphic device, developed by our group, and is driven by the output of a neuromorphic vision sensor. We demonstrate the performance of the model and how it may be used for obstacle avoidance on an unmanned aerial vehicle (UAV). --- paper_title: Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation paper_content: Abstract Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector — the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner. --- paper_title: Dendro-Dendritic Interactions between Motion-Sensitive Large-Field Neurons in the Fly paper_content: For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.
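The 'Shaping the collision selectivity' entry above combines two mechanisms that translate naturally into code: splitting luminance change into parallel ON (brightening) and OFF (darkening) channels, and spike frequency adaptation (SFA), which attenuates sustained or slowly changing drive while letting a rapidly accelerating looming signal through. The sketch below illustrates both ideas on a per-frame scalar output under assumed time constants and channel weights; it is not the cited LGMD1 network itself.

```python
import numpy as np

def on_off_split(prev_frame, curr_frame):
    """Split the frame difference into parallel ON and OFF channels."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    on_channel = np.maximum(diff, 0.0)    # brightening edges
    off_channel = np.maximum(-diff, 0.0)  # darkening edges (dark looming objects)
    return on_channel, off_channel

class SpikeFrequencyAdaptation:
    """Leaky adaptation state subtracted from the drive (assumed constants)."""

    def __init__(self, gain=0.6, tau=5.0):
        self.gain = gain   # how strongly past activity suppresses the output
        self.tau = tau     # adaptation time constant, in frames
        self.state = 0.0

    def __call__(self, drive):
        adapted = max(drive - self.gain * self.state, 0.0)
        # First-order low-pass of the cell's own output builds the adaptation state.
        self.state += (adapted - self.state) / self.tau
        return adapted

def looming_response(frames, on_weight=0.5, off_weight=1.0):
    """Scalar per-frame response from weighted ON/OFF channels followed by SFA."""
    sfa = SpikeFrequencyAdaptation()
    responses = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        on_ch, off_ch = on_off_split(prev, curr)
        drive = on_weight * on_ch.mean() + off_weight * off_ch.mean()
        responses.append(sfa(drive))
    return responses

if __name__ == "__main__":
    # Looming dark square: the drive accelerates, so it largely escapes adaptation.
    frames = []
    for half in range(2, 20, 2):
        f = np.full((64, 64), 200.0)
        f[32 - half:32 + half, 32 - half:32 + half] = 30.0
        frames.append(f)
    print([round(r, 2) for r in looming_response(frames)])
```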
--- paper_title: Neural based obstacle avoidance with CPG controlled hexapod walking robot paper_content: In this work, we propose a collision avoidance system for a hexapod crawling robot based on the detection of intercepting objects using the Lobula giant movement detector (LGMD) connected directly to the locomotion control unit based on the Central pattern generator (CPG). We have designed and experimentally verified the proposed approach that maps the output of the LGMD directly on the locomotion control parameters of the CPG. The results of the experimental verification of the system with a real mobile hexapod crawling robot support the feasibility of the proposed approach in collision avoidance scenarios. --- paper_title: Multiplication and stimulus invariance in a looming-sensitive neuron paper_content: Multiplicative operations and invariance of neuronal responses are thought to play important roles in the processing of neural information in many sensory systems. Yet the biophysical mechanisms that underlie both multiplication and invariance of neuronal responses in vivo, either at the single cell or at the network level, remain to a large extent unknown. Recent work on an identified neuron in the locust visual system (the LGMD neuron) that responds well to objects looming on a collision course towards the animal suggests that this cell represents a good model to investigate the biophysical basis of multiplication and invariance at the single neuron level. Experimental and theoretical results are consistent with multiplication being implemented by subtraction of two logarithmic terms followed by exponentiation via active membrane conductances, according to a × 1/b = exp(log(a) − log(b)). Invariance appears to be in part due to non-linear integration of synaptic inputs within the dendritic tree of this neuron. --- paper_title: Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits paper_content: Many species show avoidance reactions in response to looming object approaches. In locusts, the corresponding escape behavior correlates with the activity of the lobula giant movement detector (LGMD) neuron. During an object approach, its firing rate was reported to gradually increase until a peak is reached, and then it declines quickly. The η-function predicts that the LGMD activity is a product of an exponential function of angular size, exp(–Θ), and angular velocity Θ̇, and that peak activity is reached before time-to-contact (ttc). The η-function has become the prevailing LGMD model because it reproduces many experimental observations, and even experimental evidence for the multiplicative operation was reported. Several inconsistencies remain unresolved, though. Here we address these issues with a new model (ψ-model), which explicitly connects Θ and Θ̇ to biophysical quantities. The ψ-model avoids biophysical problems associated with implementing exp(·), implements the multiplicative operation of η via divisive inhibition, and explains why activity peaks could occur after ttc. It consistently predicts response features of the LGMD, and provides excellent fits to published experimental data, with goodness of fit measures comparable to corresponding fits with the η-function. --- paper_title: Simplified bionic solutions: a simple bio-inspired vehicle collision detection system paper_content: Modern cars are equipped with both active and passive sensor systems that can detect potential collisions.
In contrast, locusts avoid collisions solely by responding to certain visual cues that are associated with object looming. In neurophysiological experiments, I investigated the possibility that the 'collision-detector neurons' of locusts respond to impending collisions in films recorded with dashboard cameras of fast driving cars. In a complementary modelling approach, I developed a simple algorithm to reproduce the neuronal response that was recorded during object approach. Instead of applying elaborate algorithms that factored in object recognition and optic flow discrimination, I tested the hypothesis that motion detection restricted to a 'danger zone', in which frontal collisions on the motorways are most likely, is sufficient to estimate the risk of a collision. Furthermore, I investigated whether local motion vectors, obtained from the differential excitation of simulated direction-selective networks, could be used to predict evasive steering maneuvers and prevent undesired responses to motion artifacts. The results of the study demonstrate that the risk of impending collisions in real traffic scenes is mirrored in the excitation of the collision-detecting neuron (DCMD) of locusts. The modelling approach was able to reproduce this neuronal response even when the vehicle was driving at high speeds and image resolution was low (about 200 × 100 pixels). Furthermore, evasive maneuvers that involved changing the steering direction and steering force could be planned by comparing the differences in the overall excitation levels of the simulated right and left direction-selective networks. Additionally, it was possible to suppress undesired responses of the algorithm to translatory movements, camera shake and ground shadows by evaluating local motion vectors. These estimated collision risk values and evasive steering vectors could be used as input for a driving assistant, converting the first into braking force and the latter into steering responses to avoid collisions. Since many processing steps were computed on the level of pixels and involved elements of direction-selective networks, this algorithm can be implemented in hardware so that parallel computations enhance the processing speed significantly. --- paper_title: A neural model of the locust visual system for detection of object approaches with real-world scenes paper_content: In the central nervous systems of animals like pigeons and locusts, neurons were identified which signal objects approaching the animal on a direct collision course. Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, is promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. At the present there is no published model available for robust detection of approaching objects under real-world conditions. Here we present a computational architecture for signalling impending collisions, based on known anatomical data of the locust \emph{lobula giant movement detector} (LGMD) neuron. Our model shows robust performance even in adverse situations, such as with approaching low-contrast objects, or with highly textured and moving backgrounds. We furthermore discuss which components need to be added to our model to convert it into a full-fledged real-world-environment collision detector. 
KEYWORDS: Locust, LGMD, collision detection, lateral inhibition, diffusion, ON-OFF-pathways, neuronal dynamics, computer vision, image processing --- paper_title: An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments paper_content: Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from 'small target motion detector' neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system. --- paper_title: Computational model of the LGMD neuron for automatic collision detection paper_content: In many animal species it is essential to recognize approach predators from complex, dynamic visual scenes and timely initiate escape behavior. Such sophisticated behaviours are often achieved with low neuronal complexity, such as in locusts, suggesting that emulating these biological models in artificial systems would enable the generation of similar complex behaviours with low computational overhead. On the other hand, artificial collision detection is a complex task that requires both real time data acquisition and important features extraction from a captured image. In order to accomplish this task, the algorithms used need to be fast to process the captured data and then perform real time decisions. Taking into account the previous considerations, neurorobotic models may provide a foundation for the development of more effective and autonomous devices/robots, based on an improved understanding of the biological basis of adaptive behavior. In this paper, we make a comparative analysis between the new computational model of a locust looming-detecting pathway and the model previously proposed by us. 
The obtained results proved the improvement provided by the pixel remapping in the model performance. --- paper_title: Biologically Inspired Feature Detection Using Cascaded Correlations of off and on Channels paper_content: Abstract Flying insects are valuable animal models for elucidating computational processes underlying visual motion detection. For example, optical flow analysis by wide-field motion processing neurons in the insect visual system has been investigated from both behavioral and physiological perspectives [1]. This has resulted in useful computational models with diverse applications [2,3]. In addition, some insects must also extract the movement of their prey or conspecifics from their environment. Such insects have the ability to detect and interact with small moving targets, even amidst a swarm of others [4,5]. We use electrophysiological techniques to record from small target motion detector (STMD) neurons in the insect brain that are likely to subserve these behaviors. Inspired by such recordings, we previously proposed an ‘elementary’ small target motion detector (ESTMD) model that accounts for the spatial and temporal tuning of such neurons and even their ability to discriminate targets against cluttered surrounds [6-8]. However, other properties such as direction selectivity [9] and response facilitation for objects moving on extended trajectories [10] are not accounted for by this model. We therefore propose here two model variants that cascade an ESTMD model with a traditional motion detection model algorithm, the Hassenstein Reichardt ‘elementary motion detector’ (EMD) [11]. We show that these elaborations maintain the principal attributes of ESTMDs (i.e. spatiotemporal tuning and background clutter rejection) while also capturing the direction selectivity observed in some STMD neurons. By encapsulating the properties of biological STMD neurons we aim to develop computational models that can simulate the remarkable capabilities of insects in target discrimination and pursuit for applications in robotics and artificial vision systems. --- paper_title: Neural specializations for small target detection in insects paper_content: Despite being equipped with low-resolution eyes and tiny brains, many insects show exquisite abilities to detect and pursue targets even in highly textured surrounds. Target tracking behavior is subserved by neurons that are sharply tuned to the motion of small high-contrast targets. These neurons respond robustly to target motion, even against self-generated optic flow. A recent model, supported by neurophysiology, generates target selectivity by being sharply tuned to the unique spatiotemporal profile associated with target motion. Target neurons are likely connected in a complex network where some provide more direct output to behavior, whereas others serve an inter-regulatory role. These interactions may regulate attention and aid in the robust detection of targets in clutter observed in behavior. --- paper_title: Object detection in the fly during simulated translatory flight paper_content: Abstract Translatory movement of an animal in its environment induces optic flow that contains information about the three-dimensional layout of the surroundings: as a rule, images of objects that are closer to the animal move faster across the retina than those of more distant objects. Such relative motion cues are used by flies to detect objects in front of a structured background. 
We confronted flying flies, tethered to a torque meter, with front-to-back motion of patterns displayed on two CRT screens, thereby simulating translatory motion of the background as experienced by an animal during straight flight. The torque meter measured the instantaneous turning responses of the fly around its vertical body axis. During short time intervals, object motion was superimposed on background pattern motion. The average turning response towards such an object depends on both object and background velocity in a characteristic way: (1) in order to elicit significant responses object motion has to be faster than background motion; (2) background motion within a certain range of velocities improves object detection. These properties can be interpreted as adaptations to situations as they occur in natural free flight. We confirmed that the measured responses were mediated mainly by a control system specialized for the detection of objects rather than by the compensatory optomotor system responsible for course stabilization. --- paper_title: Neuronal encoding of object and distance information: a model simulation study on naturalistic optic flow processing paper_content: We developed a model of the input circuitry of the FD1 cell, an identified motion-sensitive interneuron in the blowfly’s visual system. The model circuit successfully reproduces the FD1 cell’s most conspicuous property: Its larger responses to objects than to spatially extended patterns. The model circuit also mimics the time-dependent responses of FD1 to dynamically complex naturalistic stimuli, shaped by the blowfly’s saccadic flight and gaze strategy: The FD1 responses are enhanced when, as a consequence of self-motion, a nearby object crosses the receptive field during intersaccadic intervals. Moreover, the model predicts that these object-induced responses are superimposed by pronounced pattern-dependent fluctuations during movements on virtual test flights in a three-dimensional environment with systematic modifications of the environmental patterns. Hence, the FD1 cell is predicted to detect not unambiguously objects defined by the spatial layout of the environment, but to be also sensitive to objects distinguished by textural features. These ambiguous detection abilities suggest an encoding of information about objects - irrespective of the features by which the objects are defined - by a population of cells, with the FD1 cell presumably playing a prominent role in such an ensemble. --- paper_title: Development of a bio-inspired vision system for mobile micro-robots paper_content: In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from vision of locusts in detecting the fast approaching objects. Research suggested that locusts use wide- field visual neuron called the lobula giant movement detector to respond to imminent collisions. We employed the locusts' vision mechanism to motion control of a mobile robot. The selected image processing method is implemented on a developed extension module using a low-cost and fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. The observed results from several performed experiments demonstrated that the developed extension module and the inspired vision system are feasible to employ as a vision module for obstacle avoidance and motion control. 
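The 'Biologically Inspired Feature Detection Using Cascaded Correlations of off and on Channels' entry above cascades a small-target detector with the classic Hassenstein–Reichardt elementary motion detector (EMD), in which each local unit correlates a low-pass-delayed photoreceptor signal with the undelayed signal of its neighbour and subtracts the mirror-image correlation to obtain a signed, direction-selective output. The fragment below sketches only that correlator stage on a 1-D photoreceptor array; the low-pass time constant and the synthetic stimulus are illustrative assumptions.

```python
import numpy as np

def reichardt_emd(signal, tau=3.0):
    """Hassenstein-Reichardt correlator along a 1-D array of photoreceptors.

    signal: 2-D array of shape (time, space) with luminance samples.
    Returns a (time, space-1) array; positive values indicate motion towards
    increasing spatial index, negative values the opposite direction.
    """
    time_steps, n = signal.shape
    alpha = 1.0 / tau                      # first-order low-pass coefficient
    delayed = np.zeros(n)                  # low-pass (delayed) photoreceptor copies
    output = np.zeros((time_steps, n - 1))
    for t in range(time_steps):
        x = signal[t]
        # Correlate delayed left input with undelayed right input, and vice versa.
        rightward = delayed[:-1] * x[1:]
        leftward = delayed[1:] * x[:-1]
        output[t] = rightward - leftward
        # Update the delay lines after using them (simple leaky integrator).
        delayed += alpha * (x - delayed)
    return output

if __name__ == "__main__":
    # A bright bar drifting to the right should give a net positive response.
    t_steps, n = 40, 32
    stim = np.zeros((t_steps, n))
    for t in range(t_steps):
        stim[t, (t // 2) % n] = 1.0
    resp = reichardt_emd(stim)
    print("mean signed response:", resp.mean())  # expected > 0 for rightward motion
```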
--- paper_title: Distributed Dendritic Processing Facilitates Object Detection: A Computational Analysis on the Visual System of the Fly paper_content: BACKGROUND ::: Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of perceptual phenomena currently explained by lateral inhibition networks. --- paper_title: Detection of object motion by a fly neuron during simulated flight paper_content: Object detection on the basis of relative motion was investigated in the fly at the neuronal level. A representative of the figure detection cells (FD-cells), the FD1b-cell, was characterized with respect to its responses to optic flow which simulated the presence of an object during translatory flight. The figure detection cells reside in the fly's third visual neuropil and are believed to play a central role in mediating object-directed turning behaviour. The dynamical response properties as well as the mean response amplitudes of the FD1b-cell depend on the temporal frequency of object motion and on the presence or absence of background motion. The responses of the FD1b-cell to object motion during simulated translatory flight were compared to behavioural responses of the fly as obtained with identical stimuli in a previous study. The behavioural responses could only partly be explained on the basis of the FD1b-cell's responses. 
Further processing between the third visual neuropil and the final motor output has to be assumed which involves (1) facilitation of the object-induced responses during translatory background motion at moderate temporal frequencies, and (2) inhibition of the object-induced turning responses during translatory background motion at high temporal frequencies. --- paper_title: A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment paper_content: The lobula giant movement detector (LGMD) neuron of locusts has been shown to preferentially respond to objects approaching the eye of a locust on a direct collision course. Computer simulations of the neuron have been developed and have demonstrated the ability of mobile robots, interfaced with a simulated LGMD model, to avoid collisions. In this study, a model of the LGMD neuron is presented and the functional parameters of the model identified. Models with different parameters were presented with a range of automotive video sequences, including collisions with cars. The parameters were optimised to respond correctly to the video sequences using a range of genetic algorithms (GAs). The model evolved most rapidly using GAs with high clone rates into a form suitable for detecting collisions with cars and not producing false collision alerts to most non-collision scenes. --- paper_title: Spatial response properties of contralateral inhibited lobula plate tangential cells in the fly visual system. paper_content: This study describes the spatial response properties of a particular group of motion-sensitive and directionally selective neurons located in the lobula plate of the fly visual system. Their preferred motion direction is front-to-back (depolarization), and their null direction is back-to-front (hyperpolarization). They receive inhibitory input from the contralateral eye during pattern motion from back to front (regressive). In this study, we call these neurons regressive contralateral inhibited lobula plate tangential cells (rCI-LPTCs). Three physiologic groups of rCI-neurons (rCI-I, rCI-IIa, and rCI-IIb) can be distinguished on the basis of their ipsilateral pattern size dependence and their inhibitory contralateral input. rCI-I neurons depolarize during the motion of small ipsilateral patterns from front to back, but they become hyperpolarized by large ipsilateral patterns moving in the same direction. rCI-IIa and rCI-IIb neurons receive bidirectional inhibitory input from the contralateral eye. rCI-IIa neurons respond best to small ipsilateral pattern sizes, but unlike rCI-I neurons, their net response to large patterns is positive. rCI-IIb neurons respond best to large ipsilateral patterns. The anatomical and physiologic variability of the rCI-neurons suggests that more than three types of rCI-neurons exist. The suggested physiologic groups might be preliminary. We recorded one neuron that could mediate the bidirectional inhibitory input that rCI-IIa and rCI-IIb neurons receive from the contralateral eye. In the case of the rCI-IIa neurons at least one further contralateral inhibitory element has to be assumed. The tuning of rCI-IIa neurons to small ipsilateral pattern sizes is likely to be based on an on-center/off-surround organization of their synaptic input. J. Comp. Neurol. 406:51–71, 1999. © 1999 Wiley-Liss, Inc. 
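The 'Optimisation of a model of a locust neuron to a novel environment' entry above tunes the free parameters of an LGMD model with genetic algorithms against automotive video sequences, and notes that high clone rates evolved fastest. The loop below is a generic, minimal sketch of that kind of search; the parameter names, bounds, and the evaluate_on_clips fitness stub are hypothetical placeholders standing in for a real LGMD simulation scored on labelled clips, not the procedure of the cited paper.

```python
import random

# Hypothetical LGMD parameters to be evolved (names and bounds are placeholders).
PARAM_BOUNDS = {
    "spike_threshold": (0.1, 0.9),
    "inhibition_weight": (0.2, 2.0),
    "persistence": (0.0, 1.0),
}

def evaluate_on_clips(params):
    """Placeholder fitness: a real version would reward detected collisions and
    penalise false alarms over labelled video clips; here we score closeness to
    an arbitrary optimum so the example runs end to end."""
    target = {"spike_threshold": 0.5, "inhibition_weight": 1.0, "persistence": 0.3}
    return -sum((params[k] - target[k]) ** 2 for k in PARAM_BOUNDS)

def random_individual():
    return {k: random.uniform(*b) for k, b in PARAM_BOUNDS.items()}

def mutate(ind, rate=0.3, scale=0.1):
    child = dict(ind)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, scale * (hi - lo))))
    return child

def genetic_search(pop_size=30, generations=40, clone_rate=0.5):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate_on_clips, reverse=True)
        # Keep an elite and refill with mutated clones (a high clone rate,
        # echoing the observation in the entry above).
        elite = population[: max(2, int(clone_rate * pop_size))]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=evaluate_on_clips)

if __name__ == "__main__":
    best = genetic_search()
    print("best parameters:", {k: round(v, 3) for k, v in best.items()})
```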
--- paper_title: Bio-Inspired Embedded Vision System for Autonomous Micro-Robots: The LGMD Case paper_content: In this paper, we present a new bio-inspired vision system embedded for micro-robots. The vision system takes inspiration from locusts in detecting fast approaching objects. Neurophysiological research suggested that locusts use a wide-field visual neuron called lobula giant movement detector (LGMD) to respond to imminent collisions. In this paper, we present the implementation of the selected neuron model by a low-cost ARM processor as part of a composite vision module. As the first embedded LGMD vision module fits to a micro-robot, the developed system performs all image acquisition and processing independently. The vision module is placed on top of a micro-robot to initiate obstacle avoidance behavior autonomously. Both simulation and real-world experiments were carried out to test the reliability and robustness of the vision system. The results of the experiments with different scenarios demonstrated the potential of the bio-inspired vision system as a low-cost embedded module for autonomous robots. --- paper_title: Motion detectors in the locust visual system: From biology to robot sensors. paper_content: Motion detectors in the locust optic lobe and brain fall into two categories: neurones that respond selectively to approaching vs. receding objects and neurones that respond selectively to a particular pattern of image motion over a substantial part of the eye, generated by the locust's own movements through its environment. Neurones from the two categories can be differentiated on the basis of their response to motion at a constant velocity at a fixed distance from the locust: neurones of the first category respond equally well to motion in any direction whereas neurones in the second category respond selectively to one preferred direction of motion. Several of the motion detectors of the first category, responding to approaching objects, share the same input organisation, suggesting that it is important in generating a tuning for approaching objects. Anatomical, physiological, and modelling studies have revealed how the selectivity of the response is generated. The selectivity arises as a result of a critical race between excitation, generated when image edges move out over the eye and delayed inhibition, generated by the same edge movements. For excitation to build up, the velocity and extent of edge motion over the eye must increase rapidly. The ultrastructure of the afferent inputs onto the dendrites of collision sensitive neurones reveals a possible substrate for the interaction between excitation and inhibition. This interpretation is supported by both physiological and immunocytochemical evidence. The input organisation of these neurones has been incorporated into the control structure of a small mobile robot, which successfully avoids collisions with looming objects. The ecological role of motion detectors of the second category that respond to image motion over a substantial part of the visual field, is discussed as is the input organisation that generates this selective response. The broad tuning of these neurones, particularly at low velocities (<0.02 degree/s), suggests they may have a role in navigation during migratory flights at altitude. By contrast, their optimum tuning to high-image velocities suggests these motion detectors are adapted for use in a fast flying insect, which does not spend significant time hovering. 
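Both the embedded-vision entry and the 'Motion detectors in the locust visual system' entry above end at the same engineering step: the LGMD output is mapped onto motor commands so that a sustained rise in the collision signal triggers an avoidance manoeuvre. A minimal control loop of that kind is sketched below; the read_lgmd_potential stub, the threshold, and the confirmation window are assumptions for illustration rather than the control law of the cited systems.

```python
from collections import deque
import random

def read_lgmd_potential():
    """Stub standing in for the embedded LGMD module's per-frame output in [0, 1]."""
    return random.random()

def avoidance_command(potential_history, threshold=0.8, confirm_frames=3):
    """Turn only when the collision signal stays above threshold for several
    consecutive frames, filtering out single-frame flicker and camera noise."""
    recent = list(potential_history)[-confirm_frames:]
    if len(recent) == confirm_frames and all(p > threshold for p in recent):
        return ("turn", 90.0)   # degrees of evasive rotation (illustrative)
    return ("forward", 0.0)

def control_loop(n_frames=50):
    history = deque(maxlen=10)
    for frame in range(n_frames):
        history.append(read_lgmd_potential())
        # Threshold lowered here only so the noisy stub occasionally triggers.
        action, amount = avoidance_command(history, threshold=0.6)
        if action == "turn":
            print(f"frame {frame}: collision signal sustained -> turn {amount} deg")
            history.clear()      # restart accumulation after the manoeuvre

if __name__ == "__main__":
    control_loop()
```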
--- paper_title: Binocular Integration of Visual Information: A Model Study on Naturalistic Optic Flow Processing paper_content: The computation of visual information from both visual hemispheres is often of functional relevance when solving orientation and navigation tasks. The vCH-cell is a motion-sensitive wide-field neuron in the visual system of the blowfly Calliphora, a model system in the field of optic flow processing. The vCH-cell receives input from various other identified wide-field cells, the receptive fields of which are located in both the ipsilateral and the contralateral visual field. The relevance of this connectivity to the processing of naturalistic image sequences, with their peculiar dynamical characteristics, is still unresolved. To disentangle the contributions of the different input components to the cell’s overall response, we used electrophysiologically determined responses of the vCH-cell and its various input elements to tune a model of the vCH-circuit. Their impact on the vCH-cell response could be distinguished by stimulating not only extended parts of the visual field of the fly, but also selected regions in the ipsi- and contralateral visual field with behaviorally generated optic flow. We show that a computational model of the vCH-circuit is able to account for the neuronal activities of the counterparts in the blowfly’s visual system. Furthermore, we offer an insight into the dendritic integration of binocular visual input. --- paper_title: Non-Linear Neuronal Responses as an Emergent Property of Afferent Networks: A Case Study of the Locust Lobula Giant Movement Detector paper_content: In principle it appears advantageous for single neurons to perform non-linear operations. Indeed it has been reported that some neurons show signatures of such operations in their electrophysiological response. A particular case in point is the Lobula Giant Movement Detector (LGMD) neuron of the locust, which is reported to locally perform a functional multiplication. Given the wide ramifications of this suggestion with respect to our understanding of neuronal computations, it is essential that this interpretation of the LGMD as a local multiplication unit is thoroughly tested. Here we evaluate an alternative model that tests the hypothesis that the non-linear responses of the LGMD neuron emerge from the interactions of many neurons in the opto-motor processing structure of the locust. We show, by exposing our model to standard LGMD stimulation protocols, that the properties of the LGMD that were seen as a hallmark of local non-linear operations can be explained as emerging from the dynamics of the pre-synaptic network. Moreover, we demonstrate that these properties strongly depend on the details of the synaptic projections from the medulla to the LGMD. From these observations we deduce a number of testable predictions. To assess the real-time properties of our model we applied it to a high-speed robot. These robot results show that our model of the locust opto-motor system is able to reliably stabilize the movement trajectory of the robot and can robustly support collision avoidance. In addition, these behavioural experiments suggest that the emergent non-linear responses of the LGMD neuron enhance the system's collision detection acuity. We show how all reported properties of this neuron are consistently reproduced by this alternative model, and how they emerge from the overall opto-motor processing structure of the locust. 
Hence, our results propose an alternative view on neuronal computation that emphasizes the network properties as opposed to the local transformations that can be performed by single neurons. --- paper_title: Immunocytochemical evidence that collision sensing neurons in the locust visual system contain acetylcholine paper_content: The lobula giant movement detector (LGMD1 and -2) neurons in the locust visual system are parts of motion-sensitive pathways that detect objects approaching on a collision course. The dendritic processes of the LGMD1 and -2 in the lobula are localised to discrete regions, allowing the dendrites of each neuron to be distinguished uniquely. As was described previously for the LGMD1, the afferent processes onto the LGMD2 synapse directly with each other, and these synapses are immediately adjacent to their outputs onto the LGMD2. Here we present immunocytochemical evidence, using antibodies against choline-protein conjugates and a polyclonal antiserum against choline acetyltransferase (ChAT; Chemicon Ab 143), that the LGMD1 and -2 and the retinotopic units presynaptic to them contain acetylcholine (ACh). It is proposed that these retinotopic units excite the LGMD1 or -2 but inhibit each other. It is well established that ACh has both excitatory and inhibitory effects and may provide the substrate for a critical race in the LGMD1 or -2, between excitation caused by edges moving out over successive photoreceptors, and inhibition spreading laterally resulting in the selective response to objects approaching on a collision course. In the optic lobe, ACh was also found to be localised in discrete layers of the medulla and in the outer chiasm between the lamina and medulla. In the brain, the antennal lobes contained neurons that reacted positively for ACh. Silver- or haematoxylin and eosin-stained sections through the optic lobe confirmed the identities of the positively immunostained neurons. --- paper_title: IDENTIFICATION OF DIRECTIONALLY SELECTIVE MOTION-DETECTING NEURONES IN THE LOCUST LOBULA AND THEIR SYNAPTIC CONNECTIONS WITH AN IDENTIFIED DESCENDING NEURONE paper_content: The anatomy and physiology of two directionally selective motion-detecting neurones in the locust are described. Both neurones had dendrites in the lobula, and projected to the ipsilateral protocerebrum. Their cell bodies were located on the posterio-dorsal junction of the optic lobe with the protocerebrum. The neurones were sensitive to horizontal motion of a visual stimulus. One neurone, LDSMD(F), had a preferred direction forwards over the ipsilateral eye, and a null direction backwards. The other neurone, LDSMD(B), had a preferred direction backwards over the ipsilateral eye. 1. Motion in the preferred direction caused EPSPs and spikes in the LDSMD neurones; motion in the null direction resulted in IPSPs. 2. Both excitatory and inhibitory inputs were derived from the ipsilateral eye. 3. The DSMD neurones responded to velocities of movement up to and beyond 270° s−1. 4. The response of both LDSMD neurones showed no evidence of adaptation during maintained apparent or real movement. 5. There was a delay of 60–80 ms between a single step of apparent movement, in either the preferred or the null direction, and the start of the response. 6. There was a monosynaptic, excitatory connection between the LDSMD(B) neurone and the protocerebral, descending DSMD neurone (PDDSMD) identified in the preceding paper (Rind, 1990).
At resting membrane potential, a single presynaptic spike did not give rise to a spike in the postsynaptic neurone. --- paper_title: Responses to object approach by a wide field visual neurone, the LGMD2 of the locust: Characterization and image cues paper_content: The LGMD2 belongs to a group of giant movement-detecting neurones which have fan-shaped arbors in the lobula of the locust optic lobe and respond to movements of objects. One of these neurones, the LGMD1, has been shown to respond directionally to movements of objects in depth, generating vigorous, maintained spike discharges during object approach. Here we compare the responses of the LGMD2 neurone with those of the LGMD1 to simulated movements of objects in depth and examine different image cues which could allow the LGMD2 to distinguish approaching from receding objects. In the absence of stimulation, the LGMD2 has a resting discharge of 10–40 spikes s−1 compared with <1 spike s−1 for the LGMD1. The most powerful excitatory stimulus for the LGMD2 is a dark object approaching the eye. Responses to approaching objects are suppressed by wide field movements of the background. Unlike the LGMD1, the LGMD2 is not excited by the approach of light objects; it specifically responds to movement of edges in the light to dark direction. Both neurones rely on the same monocular image cues to distinguish approaching from receding objects: an increase in the velocity with which edges of images travel over the eye; and an increase in the extent of edges in the image during approach. --- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. Such a collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is selective only for detecting looming dark objects against a bright background in depth, representing swooping predators, a situation which is similar to that of ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are, first, enhancing the collision selectivity in a bio-inspired way, via constructing a computationally efficient visual sensor and realizing the revealed specific characteristics of LGMD2. Second, we applied the neural network to help rearrange the path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot. --- paper_title: INTRACELLULAR CHARACTERIZATION OF NEURONS IN THE LOCUST BRAIN SIGNALING IMPENDING COLLISION paper_content: 1. In response to a rapidly approaching object, intracellular recordings show that excitation in the locust lobula giant movement detecting (LGMD) neuron builds up exponentially, particularly during the final stages of object approach. After the cessation of object motion, inhibitory potentials in the LGMD then help to terminate this excitation. Excitation in the LGMD follows object recession with a short, constant latency but is cut back rapidly by hyperpolarizing potentials.
The timing of these hyperpolarizing potentials in the LGMD is variable, and their latency following object recession is shortest with the highest velocities of motion simulated. The hyperpolarizing potentials last from 50-300 ms and are often followed by re-excitation. The observed hyperpolarizations of the LGMD can occur without any preceding excitation and are accompanied by a measurable conductance increase. The hyperpolarizations are likely to be inhibitory postsynaptic potentials (PSPs). The behavior of the intracellularly recorded inhibitory PSPs (IPSPs) closely parallels that of the feed forward inhibitory loop in the neural network described by Rind and Bramwell. 2. The preference of the LGMD for approaching versus receding objects remains over a wide range of starting and finishing distances. The response to object approach, measured both as membrane potential and spike rate, remains single peaked with starting distances of between 200 and 2,100 mm, and approach speeds of 0.5-2 m/s. These results confirm the behavior predicted by the neural network described by Rind and Bramwell but contradicts the findings of Rind and Simmons, forcing a re-evaluation of the suitability of some of the mechanical visual stimuli used in that study. 3. For depolarization of the LGMD neuron to be maintained or increased throughout the motion of image edges, the edges must move with increasing velocity over the eye. Membrane potential declines before the end of edge motion with constant velocities of edge motion. 4. A second identified neuron, the LGMD2 also is shown to respond directionally to approaching objects. In both the LGMD and LGMD2 neurons, postsynaptic inhibition shapes the directional response to object motion. --- paper_title: Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation paper_content: Abstract Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector — the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner. 
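One concrete distinction running through the LGMD2 entries above is that LGMD2 responds only to light-to-dark (OFF) edges, so a dark object looming against a bright background excites it while brightening stimuli do not. The fragment below illustrates that selectivity by feeding only the OFF channel of the frame difference into a summed response; the normalisation is an illustrative assumption.

```python
import numpy as np

def lgmd2_like_response(prev_frame, curr_frame):
    """Scalar response driven only by darkening (light-to-dark) edges."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    off_channel = np.maximum(-diff, 0.0)  # luminance decreases only
    return off_channel.mean() / 255.0     # crude normalisation to [0, 1]

if __name__ == "__main__":
    bright = np.full((64, 64), 220.0)
    dark_square = bright.copy()
    dark_square[20:44, 20:44] = 40.0      # dark object appearing (excites)
    light_square = np.full((64, 64), 40.0)
    brighter = light_square.copy()
    brighter[20:44, 20:44] = 220.0        # brightening object (ignored)

    print("dark looming step :", round(lgmd2_like_response(bright, dark_square), 3))
    print("light looming step:", round(lgmd2_like_response(light_square, brighter), 3))
```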
--- paper_title: The locust DCMD, a movement-detecting neurone tightly tuned to collision trajectories paper_content: A Silicon Graphics computer was used to challenge the locust descending contralateral movement detector (DCMD) neurone with images of approaching objects. The DCMD gave its strongest response, measured as either total spike number or spike frequency, to objects approaching on a direct collision course. Deviation in either a horizontal or vertical direction from a direct collision course resulted in a reduced response. The decline in the DCMD response with increasing deviation from a collision course was used as a measure of the tightness of DCMD tuning for collision trajectories. Tuning was defined as the half-width of the response when it had fallen to half its maximum level. The response tuning, measured as averaged mean spike number versus deviation away from a collision course, had a half-width at half-maximum response of 2.4°–3.0° for a deviation in the horizontal direction and 3.0° for a deviation in the vertical direction. Mean peak spike frequency showed an even sharper tuning, with a half-width at half-maximum response of 1.8° for deviations away from a collision course in the horizontal plane. --- paper_title: A look into the cockpit of the developing locust: Looming detectors and predator avoidance paper_content: For many animals, the visual detection of looming stimuli is crucial at any stage of their lives. For example, human babies of only 6 days old display evasive responses to looming stimuli (Bower et al. [1971]: Percept Psychophys 9: 193-196). This means the neuronal pathways involved in looming detection should mature early in life. Locusts have been used extensively to examine the neural circuits and mechanisms involved in sensing looming stimuli and triggering visually evoked evasive actions, making them ideal subjects in which to investigate the development of looming sensitivity. Two lobula giant movement detector (LGMD) neurons have been identified in the lobula region of the locust visual system: the LGMD1 neuron responds selectively to looming stimuli and provides information that contributes to evasive responses such as jumping and emergency glides. The LGMD2 responds to looming stimuli and shares many response properties with the LGMD1. Both neurons have only been described in the adult. In this study, we describe a practical method combining classical staining techniques and 3D neuronal reconstructions that can be used, even in small insects, to reveal detailed anatomy of individual neurons. We have used it to analyze the anatomy of the fan-shaped dendritic tree of the LGMD1 and the LGMD2 neurons in all stages of the post-embryonic development of Locusta migratoria. We also analyze changes seen during the ontogeny of escape behaviors triggered by looming stimuli, especially the hiding response. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: Non-Linear Neuronal Responses as an Emergent Property of Afferent Networks: A Case Study of the Locust Lobula Giant Movement Detector paper_content: In principle it appears advantageous for single neurons to perform non-linear operations.
Indeed it has been reported that some neurons show signatures of such operations in their electrophysiological response. A particular case in point is the Lobula Giant Movement Detector (LGMD) neuron of the locust, which is reported to locally perform a functional multiplication. Given the wide ramifications of this suggestion with respect to our understanding of neuronal computations, it is essential that this interpretation of the LGMD as a local multiplication unit is thoroughly tested. Here we evaluate an alternative model that tests the hypothesis that the non-linear responses of the LGMD neuron emerge from the interactions of many neurons in the opto-motor processing structure of the locust. We show, by exposing our model to standard LGMD stimulation protocols, that the properties of the LGMD that were seen as a hallmark of local non-linear operations can be explained as emerging from the dynamics of the pre-synaptic network. Moreover, we demonstrate that these properties strongly depend on the details of the synaptic projections from the medulla to the LGMD. From these observations we deduce a number of testable predictions. To assess the real-time properties of our model we applied it to a high-speed robot. These robot results show that our model of the locust opto-motor system is able to reliably stabilize the movement trajectory of the robot and can robustly support collision avoidance. In addition, these behavioural experiments suggest that the emergent non-linear responses of the LGMD neuron enhance the system's collision detection acuity. We show how all reported properties of this neuron are consistently reproduced by this alternative model, and how they emerge from the overall opto-motor processing structure of the locust. Hence, our results propose an alternative view on neuronal computation that emphasizes the network properties as opposed to the local transformations that can be performed by single neurons. --- paper_title: Modelling LGMD2 visual neuron system paper_content: Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 has been successfully used in robot navigation to avoid impending collisions. LGMD2 also responds to looming stimuli in depth, and shares most of the same properties with LGMD1; however, LGMD2 has its own specific collision-selective responses when dealing with different visual stimuli. Therefore, in this paper, we propose a novel way to model LGMD2 in order to emulate its predicted bio-functions and, moreover, to address some defects of previous LGMD1 computational models. The mechanism of ON and OFF cells, as well as bio-inspired nonlinear functions, are introduced in our model to achieve LGMD2's collision selectivity. Our model has been tested on a miniature mobile robot in real time. The results suggest that this model performs well in both software and hardware for collision recognition. --- paper_title: Spike-Frequency Adaptation and Intrinsic Properties of an Identified, Looming-Sensitive Neuron paper_content: We investigated in vivo the characteristics of spike-frequency adaptation and the intrinsic membrane properties of an identified, looming-sensitive interneuron of the locust optic lobe, the lobula giant movement detector (LGMD). The LGMD had an input resistance of 4–5 MΩ, a membrane time constant of about 8 ms, and exhibited inward rectification and rebound spiking after hyperpolarizing current pulses.
Responses to depolarizing current pulses revealed the neuron's intrinsic bursting properties and pronounced spike-frequency adaptation. The characteristics of adaptation, including its time course, the attenuation of the firing rate, the mutual dependency of these two variables, and their dependency on injected current, closely followed the predictions of a model first proposed to describe the adaptation of cat visual cortex pyramidal neurons in vivo. Our results thus validate the model in an entirely different context and suggest that it might be applicable to a wide variety of neurons across species. Spik... --- paper_title: Spike frequency adaptation mediates looming stimulus selectivity in a collision-detecting neuron paper_content: Studying the mechanisms by which spike frequency adaptation shapes visual stimulus selectivity in the lobula giant movement detector interneuron of the locust visual system, the authors find that spike frequency adaptation selectively decreases this neuron's responses to nonpreferred stimuli. --- paper_title: A Collision Detection System for a Mobile Robot Inspired by the Locust Visual System paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the image of an approaching object such as a predator. A computational neural network model based on the structure of the LGMD and its afferent inputs is also able to detect approaching objects. In order for the LGMD network to be used as a robust collision detector for robotic applications, we proposed a new mechanism to enhance the feature of colliding objects before the excitations are gathered by LGMD cell. The new model favours grouped excitation but tends to ignore isolated excitation with selective passing coefficients. Experiments with a Khepera robot showed the proposed collision detector worked in real time in an arena surrounded with blocks. --- paper_title: Role of spike-frequency adaptation in shaping neuronal response to dynamic stimuli paper_content: Spike-frequency adaptation is the reduction of a neuron’s firing rate to a stimulus of constant intensity. In the locust, the Lobula Giant Movement Detector (LGMD) is a visual interneuron that exhibits rapid adaptation to both current injection and visual stimuli. Here, a reduced compartmental model of the LGMD is employed to explore adaptation’s role in selectivity for stimuli whose intensity changes with time. We show that supralinearly increasing current injection stimuli are best at driving a high spike count in the response, while linearly increasing current injection stimuli (i.e., ramps) are best at attaining large firing rate changes in an adapting neuron. This result is extended with in vivo experiments showing that the LGMD’s response to translating stimuli having a supralinear velocity profile is larger than the response to constant or linearly increasing velocity translation. Furthermore, we show that the LGMD’s preference for approaching versus receding stimuli can partly be accounted for by adaptation. Finally, we show that the LGMD’s adaptation mechanism appears well tuned to minimize sensitivity for the level of basal input. --- paper_title: Reactive direction control for a mobile robot: a locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated paper_content: Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. 
These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, and this triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using `randomised winner-take-all' or `steering wheel' algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a similar distribution of escape directions as real locusts. We also found that by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by allowing it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research. --- paper_title: LGMD-based bio-inspired algorithm for detecting risk of collision of a road vehicle paper_content: The LGMD (Lobula Giant Movement Detector) is part of the visual system of a locust, used to detect and evade approaching predators. A similar algorithm can be used in man-made systems, like autonomous robots or safety systems in vehicles, for detecting objects approaching on a collision course. In this article, the usage of LGMD-based algorithms in road vehicles is investigated and a new solution is proposed. A video stream recorded from a moving vehicle inherently contains a lot of background movement due to vibrations and turning of the vehicle. This causes high LGMD output and could trigger false alarms. The new approach, proposed in this article, enhances the LGMD information and adds the estimated expansion of objects in the X and Y directions. The combination of all three sources of information gives a very good estimate of the risk of collision. The new algorithm was developed using a reference set of test videos, where only one parameter was changed. The algorithm was tested with videos recorded from a moving vehicle in normal traffic and on a test ground, including real collisions with a carton target at different speeds. The time between the triggering of the alert and the actual collision was measured. --- paper_title: A modified neural network model for Lobula Giant Movement Detector with additional depth movement feature paper_content: The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron that is located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of the approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and can trigger avoidance reactions whenever a rapidly approaching object is detected. It has been successfully applied in visual collision avoidance systems for vehicles and robots.
This paper proposes a modified LGMD model that provides additional movement depth direction information. The proposed model retains the simplicity of the previous neural network model, adding only a few new cells. It has been tested on both simulated and recorded video data sets. The experimental results show that the modified model can very efficiently provide stable information on the depth direction of movement. --- paper_title: Near range path navigation using LGMD visual neural networks paper_content: In this paper, we propose a method for near range path navigation for a mobile robot by using a pair of biologically inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate in a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios. --- paper_title: A bio-inspired visual collision detection mechanism for cars: Combining insect inspired neurons to create a robust system paper_content: The lobula giant movement detector (LGMD) of locusts is a visual interneuron that responds with an increasing spike frequency to an object approaching on a direct collision course. Recent studies involving the use of LGMD models to detect car collisions showed that it could detect collisions, but the neuron produced collision alerts to non-colliding, translating stimuli in many cases. This study presents a modified model to address these problems. It shows how the neurons pre-synaptic to the LGMD show a remarkable ability to filter images, so that only colliding and translating stimuli produce excitation in the neuron. It then integrates the LGMD network with models based on the elementary movement detector (EMD) neurons from the fly visual system, which are used to analyse directional excitation patterns in the biologically filtered images. Combining the information from the LGMD neuron and four directionally sensitive neurons produces a robust collision detection system for a wide range of automotive test situations. --- paper_title: Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot paper_content: The developments of robotics inform research across a broad range of disciplines. In this paper, we study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e., LGMD1 and LGMD2, have been identified as looming-sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been modeled and successfully applied in robotic vision systems for perceiving potential collisions in an efficient and reliable manner. In this research, we conduct binocular neuronal models, for the first time combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot.
The results of systematic on-line experiments demonstrated three contributions of this research: (1) The arena tests involving multiple robots verified the effectiveness and robustness of a reactive motion control strategy by integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity of the LGMD1 and LGMD2 neuron models, which accords with the corresponding biological research. (3) The micro robot used here may also benefit research on other embedded vision systems as well as swarm robotics. --- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision-selective visual neural network inspired by LGMD2 neurons in juvenile locusts. Such a collision-sensitive neuron matures early in first-aged or even hatching locusts, and is selective only for looming dark objects against a bright background in depth, representing swooping predators, a situation similar to that faced by ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are, first, enhancing the collision selectivity in a bio-inspired way by constructing a computationally efficient visual sensor that realizes the revealed specific characteristics of LGMD2; and second, applying the neural network to guide the path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot. --- paper_title: A modified model for the Lobula Giant Movement Detector and its FPGA implementation paper_content: Bio-inspired vision sensors are particularly appropriate candidates for navigation of vehicles or mobile robots due to their computational simplicity, allowing compact hardware implementations with low power dissipation. The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector. --- paper_title: Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms.
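Several of the entries above contrast LGMD1 with LGMD2, the latter being selective for dark objects looming against a brighter background. One simple way to express that selectivity, used in this family of models, is to half-wave rectify the luminance change into ON (brightening) and OFF (darkening) channels and let the LGMD2-like unit pool only the OFF channel. The sketch below illustrates that split; it is a heavily reduced reading of the selectivity described above, not the published LGMD2 network.

```python
import numpy as np

def on_off_split(prev_frame, frame):
    """Split the frame-to-frame luminance change into ON (brightening)
    and OFF (darkening) half-wave rectified channels."""
    diff = frame.astype(float) - prev_frame.astype(float)
    on_channel = np.maximum(diff, 0.0)
    off_channel = np.maximum(-diff, 0.0)
    return on_channel, off_channel

def lgmd2_like_excitation(prev_frame, frame):
    """An LGMD2-flavoured cue: respond only to darkening (OFF) edges,
    i.e. dark objects expanding against a brighter background."""
    _, off = on_off_split(prev_frame, frame)
    return off.mean()

# A dark disc appearing on a bright background drives the OFF channel only.
bright = np.full((64, 64), 200.0)
with_dark_disc = bright.copy()
with_dark_disc[24:40, 24:40] = 30.0
print(lgmd2_like_excitation(bright, with_dark_disc))   # > 0
print(lgmd2_like_excitation(with_dark_disc, bright))   # 0.0: brightening alone does not drive it
```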
In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection against complex backgrounds. The isolated excitation caused by background detail is filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real-time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds. --- paper_title: Neural network based on the input organization of an identified neuron signaling impending collision paper_content: We describe a four-layered neural network based on the input organization of a collision-signaling neuron in the visual system of the locust, the lobula giant movement detector (LGMD). ... --- paper_title: Visual motion pattern extraction and fusion for collision detection in complex dynamic scenes paper_content: Detecting colliding objects in complex dynamic scenes is a difficult task for conventional computer vision techniques. However, visual processing mechanisms in animals such as insects may provide very simple and effective solutions for detecting colliding objects in complex dynamic scenes. In this paper, we propose a robust collision detecting system, which consists of a lobula giant movement detector (LGMD) based neural network and a translating sensitive neural network (TSNN), to recognise objects on a direct collision course in complex dynamic scenes. The LGMD based neural network is specialized for recognizing looming objects that are on a direct collision course. The TSNN, which fuses the extracted visual motion cues from several whole-field direction selective neural networks, is only sensitive to translating movements in the dynamic scenes. The looming cue and the translating cue revealed by the two specialized visual motion detectors are fused in the present system via a decision making mechanism. In the system, the LGMD plays a key role in detecting imminent collision; the decision from the TSNN becomes useful only when a collision alarm has been issued by the LGMD network. Using driving scenarios as an example, we showed that the bio-inspired system can reliably detect imminent colliding objects in complex driving scenes. --- paper_title: Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation paper_content: Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e., the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity for looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations.
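The four-layered LGMD network and the feature-enhancement (grouping) mechanism described in the entries above can be summarised as a small per-frame pipeline: a photoreceptor layer computes luminance change, a delayed and laterally spread inhibition layer opposes it, a summation layer keeps the rectified difference, and a grouping layer passes clustered excitation while suppressing isolated excitation before everything converges on a single LGMD cell. The sketch below follows that outline; the constants, the uniform-filter grouping rule and the sigmoid threshold are illustrative assumptions rather than the published formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(prev_frame, frame, prev_inhib, w_i=0.4, c_group=0.2, thr=0.88):
    """One frame of a four-layer LGMD-style network: photoreceptor (P),
    inhibition (I), summation (S) and grouping (G) layers feeding a single
    LGMD cell. Constants and the grouping rule are illustrative."""
    # P layer: absolute luminance change between consecutive frames.
    p = np.abs(frame.astype(float) - prev_frame.astype(float))

    # I layer: excitation spread laterally to neighbours, one frame delayed.
    inhib = uniform_filter(prev_inhib, size=3)

    # S layer: excitation minus weighted inhibition, negative values cut.
    s = np.maximum(p - w_i * inhib, 0.0)

    # G layer: clustered excitation is kept, isolated excitation suppressed
    # by comparing each cell's neighbourhood average against a threshold.
    local = uniform_filter(s, size=3)
    g = np.where(local > c_group * s.max() + 1e-9, s, 0.0)

    # LGMD cell: mean grouped excitation squashed through a sigmoid.
    k = g.sum() / g.size
    membrane = 1.0 / (1.0 + np.exp(-k / 255.0 * 20.0))
    return membrane > thr, membrane, p   # collision flag, potential, next inhibition seed

# Usage: feed consecutive grey-level frames and carry the returned P layer
# forward as the delayed source of inhibition for the next call.
```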
This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector — the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner. --- paper_title: Multiplication and stimulus invariance in a looming-sensitive neuron paper_content: Multiplicative operations and invariance of neuronal responses are thought to play important roles in the processing of neural information in many sensory systems. Yet the biophysical mechanisms that underlie both multiplication and invariance of neuronal responses in vivo, either at the single cell or at the network level, remain to a large extent unknown. Recent work on an identified neuron in the locust visual system (the LGMD neuron) that responds well to objects looming on a collision course towards the animal suggests that this cell represents a good model to investigate the biophysical basis of multiplication and invariance at the single neuron level. Experimental and theoretical results are consistent with multiplication being implemented by subtraction of two logarithmic terms followed by exponentiation via active membrane conductances, according to a x 1/b = exp(log(a) - log(b)). Invariance appears to be in part due to non-linear integration of synaptic inputs within the dendritic tree of this neuron. --- paper_title: LGMD and DSNs neural networks integration for collision predication paper_content: An ability to predict collisions is essential for current vehicles and autonomous robots. In this paper, an integrated collision predication system is proposed based on neural subsystems inspired from Lobula giant movement detector (LGMD) and directional selective neurons (DSNs) which focus on different part of the visual field separately. The two type of neurons found in the visual pathways of insects respond most strongly to moving objects with preferred motion patterns, i.e., the LGMD prefers looming stimuli and DSNs prefer specific lateral movements. We fuse the extracted information by each type of neurons to make final decision. By dividing the whole field of view into four regions for each subsystem to process, the proposed approaches can detect hazardous situations that had been difficult for single subsystem only. Our experiments show that the integrated system works in most of the hazardous scenarios. --- paper_title: A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment paper_content: The lobula giant movement detector (LGMD) neuron of locusts has been shown to preferentially respond to objects approaching the eye of a locust on a direct collision course. 
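The multiplicative operation quoted in the looming-sensitive-neuron entry above, implemented as subtraction of two logarithmic terms followed by exponentiation, is usually written in this literature as the product of the angular edge velocity and a negative exponential of the angular size of the approaching object. The block below restates that common formulation; the constants C, alpha and delta are model parameters, and the angular-size expression assumes an object of half-size l approaching at constant speed v, neither of which is specified in the abstract itself.

```latex
% Angular size of an object of half-size l approaching at constant speed v,
% with collision at t = 0 (t < 0 before collision):
\theta(t) = 2\arctan\!\left(\frac{l}{v\,|t|}\right)

% Multiplicative looming response, with delay \delta, gain C and
% exponential constant \alpha:
\eta(t) = C\,\dot{\theta}(t-\delta)\,e^{-\alpha\,\theta(t-\delta)}
        = C\,\exp\!\bigl[\log\dot{\theta}(t-\delta) - \alpha\,\theta(t-\delta)\bigr]

% i.e. the identity a \times 1/b = \exp(\log a - \log b) with
% a = \dot{\theta}(t-\delta) and b = e^{\alpha\,\theta(t-\delta)}.
```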
Computer simulations of the neuron have been developed and have demonstrated the ability of mobile robots, interfaced with a simulated LGMD model, to avoid collisions. In this study, a model of the LGMD neuron is presented and the functional parameters of the model identified. Models with different parameters were presented with a range of automotive video sequences, including collisions with cars. The parameters were optimised to respond correctly to the video sequences using a range of genetic algorithms (GAs). The model evolved most rapidly using GAs with high clone rates into a form suitable for detecting collisions with cars and not producing false collision alerts to most non-collision scenes. --- paper_title: Bio-Inspired Embedded Vision System for Autonomous Micro-Robots: The LGMD Case paper_content: In this paper, we present a new bio-inspired vision system embedded for micro-robots. The vision system takes inspiration from locusts in detecting fast approaching objects. Neurophysiological research suggested that locusts use a wide-field visual neuron called lobula giant movement detector (LGMD) to respond to imminent collisions. In this paper, we present the implementation of the selected neuron model by a low-cost ARM processor as part of a composite vision module. As the first embedded LGMD vision module fits to a micro-robot, the developed system performs all image acquisition and processing independently. The vision module is placed on top of a micro-robot to initiate obstacle avoidance behavior autonomously. Both simulation and real-world experiments were carried out to test the reliability and robustness of the vision system. The results of the experiments with different scenarios demonstrated the potential of the bio-inspired vision system as a low-cost embedded module for autonomous robots. --- paper_title: Characterization of lobula giant neurons responsive to visual stimuli that elicit escape behaviors in the crab Chasmagnathus paper_content: In the grapsid crab Chasmagnathus , a visual danger stimulus elicits a strong escape response that diminishes rapidly on stimulus repetition. This behavioral modification can persist for several days as a result of the formation of an associative memory. We have previously shown that a generic group of large motion-sensitive neurons from the lobula of the crab respond to visual stimuli and accurately reflect the escape performance. Additional evidence indicates that these neurons play a key role in visual memory and in the decision to initiate an escape. Although early studies recognized that the group of lobula giant (LG) neurons consisted of different classes of motion-sensitive cells, a distinction between these classes has been lacking. Here, we recorded in vivo the responses of individual LG neurons to a wide range of visual stimuli presented in different segments of the animal's visual field. Physiological characterizations were followed by intracellular dye injections, which permitted comparison of the functional and morphological features of each cell. All LG neurons consisted of large tangential arborizations in the lobula with axons projecting toward the midbrain. Functionally, these cells proved to be more sensitive to single objects than to flow field motion. Despite these commonalities, clear differences in morphology and physiology allowed us to identify four distinct classes of LG neurons. 
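The parameter optimisation described in the car-collision entry above, where LGMD model parameters are evolved against labelled video sequences and high clone rates speed up convergence, can be outlined with a very small genetic algorithm. The sketch below is generic: the evaluate callback, the number of parameters and the GA settings are all placeholders, not details taken from the cited work.

```python
import random

def evolve_parameters(evaluate, n_params=4, pop_size=20, generations=50,
                      clone_rate=0.5, mutation_sigma=0.1):
    """Minimal genetic algorithm in the spirit of the parameter tuning
    described above. `evaluate(params)` is assumed to run the collision
    model over labelled video sequences and return a score such as
    (correct alerts - false alerts); it is a placeholder, not an API
    from the cited work."""
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        elite = scored[: max(1, int(clone_rate * pop_size))]
        pop = list(elite)                      # clones of the best individuals
        while len(pop) < pop_size:             # mutated copies fill the rest
            parent = random.choice(elite)
            pop.append([max(0.0, min(1.0, g + random.gauss(0, mutation_sigma)))
                        for g in parent])
    return max(pop, key=evaluate)
```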
These results will permit analysis of the role of each neuronal type for visually guided behaviors and will allow us to address specific questions on the neuronal plasticity of LGs that underlie the well-recognized memory model of the crab. --- paper_title: Speed tuning in elementary motion detectors of the correlation type paper_content: A prominent model of visual motion detection is the so-called correlation or Reichardt detector. Whereas this model can account for many properties of motion vision, from humans to insects (review, Borst and Egelhaaf 1989), it has been commonly assumed that this scheme of motion detection is not well suited to the measurement of image velocity. This is because the commonly used version of the model, which incorporates two unidirectional motion detectors with opposite preferred directions, produces a response which varies not only with the velocity of the image, but also with its spatial structure and contrast. On the other hand, information on image velocity can be crucial in various contexts, and a number of recent behavioural experiments suggest that insects do extract velocity for navigational purposes (review, Srinivasan et al. 1996). Here we show that other versions of the correlation model, which consists of a single unidirectional motion detector or incorporates two oppositely directed detectors with unequal sensitivities, produce responses which vary with image speed and display tuning curves that are substantially independent of the spatial structure of the image. This surprising feature suggests simple strategies of reducing ambiguities in the estimation of speed by using components of neural hardware that are already known to exist in the visual system. --- paper_title: Loom-Sensitive Neurons Link Computation to Action in the Drosophila Visual System paper_content: Summary Background Many animals extract specific cues from rich visual scenes to guide appropriate behaviors. Such cues include visual motion signals produced both by self-movement and by moving objects in the environment. The complexity of these signals requires neural circuits to link particular patterns of motion to specific behavioral responses. Results Through electrophysiological recordings, we characterize genetically identified neurons in the optic lobe of Drosophila that are specifically tuned to detect motion signals produced by looming objects on a collision course with the fly. Using a genetic manipulation to specifically silence these neurons, we demonstrate that signals from these cells are important for flies to efficiently initiate the loom escape response. Moreover, through targeted expression of channelrhodopsin in these cells, in flies that are blind, we reveal that optogenetic stimulation of these neurons is typically sufficient to elicit escape, even in the absence of any visual stimulus. Conclusions In this compact nervous system, a small group of neurons that extract a specific visual cue from local motion inputs serve to trigger the ethologically appropriate behavioral response. --- paper_title: Computation of object approach by a system of visual motion-sensitive neurons in the crab Neohelice paper_content: Similar to most visual animals, crabs perform proper avoidance responses to objects directly approaching them. 
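The correlation-type elementary motion detector discussed in the speed-tuning entry above multiplies a delayed (low-pass filtered) copy of one photoreceptor signal with the undelayed signal of its neighbour and subtracts the mirror-symmetric half-detector; reducing the gain of the mirror half gives the 'unequal sensitivities' variant the abstract mentions. A minimal one-dimensional sketch, with an assumed first-order low-pass delay and an illustrative time constant, is given below.

```python
import numpy as np

def reichardt_correlator(left, right, dt=1e-3, tau=40e-3, mirror_gain=1.0):
    """Correlation-type elementary motion detector on two neighbouring
    photoreceptor signals. Each half-detector multiplies a low-pass
    (delayed) copy of one input with the undelayed other input; the two
    mirror-symmetric halves are subtracted. mirror_gain < 1 gives the
    'unequal sensitivities' variant. Time constant is an assumption."""
    alpha = dt / (tau + dt)                 # first-order low-pass coefficient
    lp_left = np.zeros_like(left)
    lp_right = np.zeros_like(right)
    for t in range(1, len(left)):           # causal low-pass filtering
        lp_left[t] = lp_left[t-1] + alpha * (left[t] - lp_left[t-1])
        lp_right[t] = lp_right[t-1] + alpha * (right[t] - lp_right[t-1])
    return lp_left * right - mirror_gain * lp_right * left

# A luminance edge passing left -> right gives a net positive output,
# the reversed sweep a net negative one.
t = np.arange(0, 0.5, 1e-3)
left = (t > 0.10).astype(float)
right = (t > 0.15).astype(float)            # same edge arrives 50 ms later
print(reichardt_correlator(left, right).sum() > 0)   # True for this direction
```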
The monostratified lobula giant neurons of type 1 (MLG1) of crabs constitute an ensemble of 14–16 bilateral pairs of motion-detecting neurons projecting from the lobula (third optic neuropile) to the midbrain, with receptive fields that are distributed over the extensive visual field of the animal's eye. Considering the crab Neohelice (previously Chasmagnathus) granulata, here we describe the response of these neurons to looming stimuli that simulate objects approaching the animal on a collision course. We found that the peak firing time of MLG1 acts as an angular threshold detector signaling, with a delay of δ = 35 ms, the time at which an object reaches a fixed angular threshold of 49°. Using in vivo intracellular recordings, we detected the existence of excitatory and inhibitory synaptic currents that shape the neural response. Other functional features identified in the MLG1 neurons were phas... --- paper_title: Escape Behavior: Linking Neural Computation to Action paper_content: A new study uses a combination of physiological and optogenetic techniques to identify visual neurons in fruit flies that detect approaching objects, and whose activation is integral in escaping an oncoming threat. --- paper_title: Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster. paper_content: Flies rely heavily on visual feedback for several aspects of flight control. As a fly approaches an object, the image projected across its retina expands, providing the fly with visual feedback that can be used either to trigger a collision-avoidance maneuver or a landing response. To determine how a fly makes the decision to land on or avoid a looming object, we measured the behaviors generated in response to an expanding image during tethered flight in a visual closed-loop flight arena. During these experiments, each fly varied its wing-stroke kinematics to actively control the azimuth position of a 15°×15° square within its visual field. Periodically, the square symmetrically expanded in both the horizontal and vertical directions. We measured changes in the fly's wing-stroke amplitude and frequency in response to the expanding square while optically tracking the position of its legs to monitor stereotyped landing responses. Although this stimulus could elicit both the landing responses and collision-avoidance reactions, separate pathways appear to mediate the two behaviors. For example, if the square is in the lateral portion of the fly's field of view at the onset of expansion, the fly increases stroke amplitude in one wing while decreasing amplitude in the other, indicative of a collision-avoidance maneuver. In contrast, frontal expansion elicits an increase in wing-beat frequency and leg extension, indicative of a landing response. To further characterize the sensitivity of these responses to expansion rate, we tested a range of expansion velocities from 100 to 10000° s^(-1). Differences in the latency of both the collision-avoidance reactions and the landing responses with expansion rate supported the hypothesis that the two behaviors are mediated by separate pathways. To examine the effects of visual feedback on the magnitude and time course of the two behaviors, we presented the stimulus under open-loop conditions, such that the fly's response did not alter the position of the expanding square. 
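The angular-threshold behaviour reported for the MLG1 ensemble above (peak firing a fixed delay of 35 ms after the object reaches a 49 degree angular size) follows directly from the geometry of a symmetrically expanding object. The helper below computes the predicted peak time for a given size-to-speed ratio l/|v|; the 49 degree and 35 ms figures come from the abstract, while the l/|v| values in the example are assumptions.

```python
import numpy as np

def threshold_crossing_time(l_over_v, theta_thr_deg=49.0, delta_ms=35.0):
    """Time (relative to collision at t = 0, negative before collision) at
    which a symmetrically expanding object reaches the angular threshold,
    plus the fixed neuronal delay reported above. Since
    theta(t) = 2*atan(l / (v*|t|)), the threshold is reached when
    |t| = (l/v) / tan(theta_thr / 2)."""
    theta_thr = np.deg2rad(theta_thr_deg)
    t_threshold = -(l_over_v) / np.tan(theta_thr / 2.0)   # seconds, < 0
    return t_threshold + delta_ms * 1e-3                  # predicted peak time

for l_over_v in (0.01, 0.04, 0.12):      # assumed size-to-speed ratios, in seconds
    print(l_over_v, round(threshold_crossing_time(l_over_v), 3))
```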
From our results we suggest a model that takes into account the spatial sensitivities and temporal latencies of the collision-avoidance and landing responses, and is sufficient to schematically represent how the fly uses integration of motion information in deciding whether to turn or land when confronted with an expanding object. --- paper_title: Organization of columnar inputs in the third optic ganglion of a highly visual crab paper_content: Motion information provides essential cues for a wide variety of animal behaviors such as mate, prey, or predator detection. In decapod crustaceans and pterygote insects, visual codification of object motion is associated with visual processing in the third optic neuropile, the lobula. In this neuropile, tangential neurons collect motion information from small field columnar neurons and relay it to the midbrain where behavioral responses would be finally shaped. In highly ordered structures, detailed knowledge of the neuroanatomy can give insight into their function. In spite of the relevance of the lobula in processing motion information, studies on the neuroarchitecture of this neuropile are scant. Here, by applying dextran-conjugated dyes in the second optic neuropile (the medulla) of the crab Neohelice, we mass stained the columnar neurons that convey visual information into the lobula. We found that the arborizations of these afferent columnar neurons lie at four main lobula depths. A detailed examination of serial optical sections of the lobula revealed that these input strata are composed of different number of substrata and that the strata are thicker in the centre of the neuropile. Finally, by staining the different lobula layers composed of tangential processes we combined the present characterization of lobula input strata with the previous characterization of the neuroarchitecture of the crab's lobula based on reduced-silver preparations. We found that the third lobula input stratum overlaps with the dendrites of lobula giant tangential neurons. This suggests that columnar neurons projecting from the medulla can directly provide visual input to the crab's lobula giant neurons. --- paper_title: Flies Evade Looming Targets by Executing Rapid Visually Directed Banked Turns paper_content: Avoiding predators is an essential behavior in which animals must quickly transform sensory cues into evasive actions. Sensory reflexes are particularly fast in flying insects such as flies, but the means by which they evade aerial predators is not known. Using high-speed videography and automated tracking of flies in combination with aerodynamic measurements on flapping robots, we show that flying flies react to looming stimuli with directed banked turns. The maneuver consists of a rapid body rotation followed immediately by an active counter-rotation and is enacted by remarkably subtle changes in wing motion. These evasive maneuvers of flies are substantially faster than steering maneuvers measured previously and indicate the existence of sensory-motor circuitry that can reorient the fly’s flight path within a few wingbeats. --- paper_title: Escape behavior and neuronal responses to looming stimuli in the crab Chasmagnathus granulatus (Decapoda: Grapsidae) paper_content: Behavioral responses to looming stimuli have been studied in many vertebrate and invertebrate species, but neurons sensitive to looming have been investigated in very few animals. 
In this paper we introduce a new experimental model using the crab Chasmagnathus granulatus, which allows investigation of the processes of looming detection and escape decision at both the behavioral and neuronal levels. By analyzing the escape response of the crab in a walking simulator device we show that: (i) a robust and reliable escape response can be elicited by computer-generated looming stimuli in all tested animals; (ii) parameters such as distance, speed, timing and directionality of the escape run, are easy to record and quantify precisely in the walking device; (iii) although the magnitude of escape varies between animals and stimulus presentations, the timing of the response is remarkably consistent and does not habituate at 3 min stimulus intervals. We then study the response of neurons from the brain of the crab by means of intracellular recordings in the intact animal and show that: (iv) two subclasses of previously identified movement detector neurons from the lobula (third optic neuropil) exhibit robust and reliable responses to the same looming stimuli that trigger the behavioral response; (v) the neurons respond to the object approach by increasing their rate of firing in a way that closely matches the dynamics of the image expansion. Finally, we compare the neuronal with the behavioral response showing that: (vi) differences in the neuronal responses to looming, receding or laterally moving stimuli closely reflect the behavioral differences to such stimuli; (vii) during looming, the crab starts to run soon after the looming-sensitive neurons begin to increase their firing rate. The increase in the running speed during stimulus approach faithfully follows the increment in the firing rate, until the moment of maximum stimulus expansion. Thereafter, the neurons abruptly stop firing and the animal immediately decelerates its run. The results are discussed in connection with studies of responses to looming stimuli in the locust. --- paper_title: Elementary motion detectors paper_content: A quick guide to the elementary motion detector— a model of how a simple neural circuit can detect visual motion, developed from work on insect vision but which seems also to be relevant to vertebrate visual systems. --- paper_title: Neuronal correlates of the visually elicited escape response of the crab Chasmagnathus upon seasonal variations, stimuli changes and perceptual alterations paper_content: When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape. In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. 
These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision to escape from a VDS. --- paper_title: IDENTIFICATION OF DIRECTIONALLY SELECTIVE MOTION-DETECTING NEURONES IN THE LOCUST LOBULA AND THEIR SYNAPTIC CONNECTIONS WITH AN IDENTIFIED DESCENDING NEURONE paper_content: The anatomy and physiology of two directionally selective motion-detecting neurones in the locust are described. Both neurones had dendrites in the lobula, and projected to the ipsilateral protocerebrum. Their cell bodies were located on the posterio-dorsal junction of the optic lobe with the protocerebrum. The neurones were sensitive to horizontal motion of a visual stimulus. One neurone, LDSMD(F), had a preferred direction forwards over the ipsilateral eye, and a null direction backwards. The other neurone, LDSMD(B), had a preferred direction backwards over the ipsilateral eye. (1) Motion in the preferred direction caused EPSPs and spikes in the LDSMD neurones; motion in the null direction resulted in IPSPs. (2) Both excitatory and inhibitory inputs were derived from the ipsilateral eye. (3) The DSMD neurones responded to velocities of movement up to and beyond 270° s−1. (4) The response of both LDSMD neurones showed no evidence of adaptation during maintained apparent or real movement. (5) There was a delay of 60–80 ms between a single step of apparent movement, in either the preferred or the null direction, and the start of the response. (6) There was a monosynaptic, excitatory connection between the LDSMD(B) neurone and the protocerebral, descending DSMD neurone (PDDSMD) identified in the preceding paper (Rind, 1990). At resting membrane potential, a single presynaptic spike did not give rise to a spike in the postsynaptic neurone. --- paper_title: Neural networks in the cockpit of the fly paper_content: Flies have been buzzing around on earth for over 300 million years. During this time they have radiated into more than 125,000 different species (Yeates and Wiegmann 1999), so that, by now, roughly every tenth described species is a fly. They thus represent one of the most successful animal groups on our planet. This evolutionary success might, at least in part, be a result of their acrobatic maneuverability, which enables them, for example, to chase mates at turning velocities of more than 3000° s–1 with delay times of less than 30 ms (Land and Collett 1974; Wagner 1986). It is this fantastic behavior which has initiated much research during the last decades, both on its sensory control and the biophysical and aerodynamic principles of the flight output (Dickinson et al. 1999, 2000). Here, we review the current state of knowledge about the neural processing of visual motion, which represents one sensory component intimately involved in flight control. Other reviews on this topic have been published with a similar (Hausen 1981, 1984; Hausen and Egelhaaf 1989; Borst 1996) or different emphasis (Frye and Dickinson 2001; Borst and Dickinson 2002). Because of space limitations, we do not review the extensive work that has been done on fly motion-sensitive neurons to advance our understanding of neural coding (Bialek et al. 1991; Rieke et al. 1997; de Ruyter et al. 1997, 2000; Haag and Borst 1997, 1998; Borst and Haag 2001). Unless stated otherwise, all data presented in the following were obtained on the blowfly Calliphora vicina, which we will often casually refer to as 'the fly'.
--- paper_title: Directionally Selective Motion Detection by Insect Neurons paper_content: Animals have several good reasons for detecting motion with their eyes. First, the motion of other animals — potential preys, mates, intruders or predators — provides essential information on which to base vital moves such as escape or chase. Secondly, information about self-motion is crucial, especially in the context of navigation, course stabilization, obstacle avoidance, and collision-free goal reaching. In fact, the wealth of information provided by passive, non-contact self-motion evaluation in visual systems has been likened to a kind of “visual kinaesthesis” (Gibson 1958). Even the 3D structure of the environment can be picked up by a moving observer (revs. Collett and Harkness 1982; Buchner 1984; Nakayama 1985; Hildreth and Koch 1987). Von Helmholtz (1867) was the first to clearly state the importance of this “motion parallax” in locomotion, and Exner (1891) proposed that arthropods make use of motion parallax as well as stereopsis to estimate distances (see also Horridge 1986). --- paper_title: Common circuit design in fly and mammalian motion vision paper_content: Motion-sensitive neurons have long been studied in both the mammalian retina and the insect optic lobe, yet striking similarities have become obvious only recently. Detailed studies at the circuit level revealed that, in both systems, (i) motion information is extracted from primary visual information in parallel ON and OFF pathways; (ii) in each pathway, the process of elementary motion detection involves the correlation of signals with different temporal dynamics; and (iii) primary motion information from both pathways converges at the next synapse, resulting in four groups of ON-OFF neurons, selective for the four cardinal directions. Given that the last common ancestor of insects and mammals lived about 550 million years ago, this general strategy seems to be a robust solution for how to compute the direction of visual motion with neural hardware. --- paper_title: IDENTIFICATION OF DIRECTIONALLY SELECTIVE MOTION-DETECTING NEURONES IN THE LOCUST LOBULA AND THEIR SYNAPTIC CONNECTIONS WITH AN IDENTIFIED DESCENDING NEURONE paper_content: The anatomy and physiology of two directionally selective motion-detecting neurones in the locust are described. Both neurones had dendrites in the lobula, and projected to the ipsilateral protocerebrum. Their cell bodies were located on the posterio-dorsal junction of the optic lobe with the protocerebrum. The neurones were sensitive to horizontal motion of a visual stimulus. One neurone, LDSMD(F), had a preferred direction forwards over the ipsilateral eye, and a null direction backwards. The other neurone, LDSMD(B), had a preferred direction backwards over the ipsilateral eye ::: ::: 1. 1. Motion in the preferred direction caused EPSPs and spikes in the LDSMD neurones. Motion in the null direction resulted in IPSPs ::: ::: 2. 2. Both excitatory and inhibitory inputs were derived from the ipsilateral eye ::: ::: 3. 3. The DSMD neurones responded to velocities of movement up to and beyond 270°s−1 ::: ::: 4. 4. The response of both LDSMD neurones showed no evidence of adaptation during maintained apparent or real movement ::: ::: 5. 5. There was a delay of 60–80 ms between a single step of apparent movement, either the preferred or the null direction, and the start of the response ::: ::: 6. 6. 
There was a monosynaptic, excitatory connection between the LDSMD(B) neurone and the protocerebral, descending DSMD neurone (PDDSMD) identified in the preceding paper (Rind, 1990). At resting membrane potential, a single presynaptic spike did not give rise to a spike in the postsynaptic neurone --- paper_title: Postsynaptic organisations of directional selective visual neural networks for collision detection paper_content: In this paper, we studied the postsynaptic organisations of directional selective visual neurons for collision detection. Directional selective neurons can extract different directional visual motion cues fast and reliably by allowing inhibition spreads to further layers in specific directions with one or several time steps delay. Whether these directional selective neurons can be easily organised for other specific visual tasks is not known. Taking collision detection as the primary visual task, we investigated the postsynaptic organisations of these directional selective neurons through evolutionary processes. The evolved postsynaptic organisations demonstrated robust properties in detecting imminent collisions in complex visual environments with many of which achieved 94% success rate after evolution suggesting active roles in collision detection directional selective neurons and its postsynaptic organisations can play. --- paper_title: A Synthetic Vision System Using Directionally Selective Motion Detectors to Recognize Collision paper_content: Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes. --- paper_title: Redundant Neural Vision Systems—Competing for Collision Recognition Roles paper_content: Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. 
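The direction-selective networks described in the two entries above build whole-field responses by letting delayed inhibition spread in a specific direction, and then combine the four cardinal responses with weights tuned by an evolutionary algorithm. The sketch below is a schematic stand-in for that arrangement: a one-pixel shift of the previous frame difference plays the role of the delayed, directionally spread inhibition, and the combination weights are simple placeholders, not the evolved values from the cited studies.

```python
import numpy as np

CARDINALS = {"right": (0, 1), "left": (0, -1), "down": (1, 0), "up": (-1, 0)}

def dsn_responses(diff_prev, diff_curr, w_i=1.0):
    """Four whole-field direction-selective responses: each unit subtracts
    the previous frame's excitation shifted one pixel along its null
    (opposite) direction, so motion in the preferred direction escapes the
    inhibition while null-direction motion is cancelled. diff_prev and
    diff_curr are absolute frame differences at t-1 and t."""
    out = {}
    for name, (dy, dx) in CARDINALS.items():
        inhibition = np.roll(diff_prev, shift=(-dy, -dx), axis=(0, 1))
        out[name] = float(np.maximum(diff_curr - w_i * inhibition, 0.0).sum())
    return out

def collision_cue(responses, weights):
    """Weighted sum of the four DSN outputs; in the cited work the weights
    are tuned by an evolutionary algorithm (placeholders here)."""
    return sum(weights[k] * responses[k] for k in CARDINALS)

# Example: a vertical bar stepping one pixel to the right between frames.
f0 = np.zeros((16, 16)); f0[:, 5] = 1.0
f1 = np.zeros((16, 16)); f1[:, 6] = 1.0
f2 = np.zeros((16, 16)); f2[:, 7] = 1.0
r = dsn_responses(np.abs(f1 - f0), np.abs(f2 - f1))
# r["right"] is the largest of the four responses for this motion.
```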
In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems - the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition. --- paper_title: Sampling of the Visual Environment by the Compound Eye of the Fly: Fundamentals and Applications paper_content: This paper reviews some experiments which have been done to elucidate how a fly looks at its surroundings. Stimulation for such a study arose out of the conviction that the neural processing of information in the compound eye might better be unravelled with a precise stimulation of single receptor cells and that such a fine stimulation requires a better understanding of the optics. --- paper_title: Complementary mechanisms create direction selectivity in the fly paper_content: The brain extracts information from signals delivered from the eyes and other sensory organs in order to direct behavior. Understanding how the interactions and wiring of a multitude of individual nerve cells process and transmit this critical information to the brain is a fundamental goal in the field of neuroscience. One question many neuroscientists have tried to understand is how nerve cells in an animal’s brain detect direction when an animal sees movement of some kind – so-called motion vision. The raw signal from the light receptors in the eye does not discriminate whether the light moves in one direction or the other. So, the nerve cells in the brain must somehow compute the direction of movement based on the information relayed by the eye. For more than half a century, major debates have revolved around two rival models that could explain how motion vision works. Both models could in principle lead to neurons that prefer images moving in one direction over images moving in the opposite direction – so-called direction selectivity. In both models, the information about the changing light levels hitting two light-sensitive cells at two points on the eye are compared across time. In one model, signals from images moving in a cell’s preferred direction become amplified. In the other model, signals moving in the unfavored direction become canceled out. However, neither model perfectly explains motion vision. Now, Haag, Arenz et al. show that both models are partially correct and that the two mechanisms work together to detect motion across the field of vision more accurately. In the experiments, both models were tested in tiny fruit flies by measuring the activity of the first nerve cells that respond to the direction of visual motion. 
While each mechanism alone only produces a fairly weak and error-prone signal of direction, together the two mechanisms produce a stronger and more precise directional signal. Further research is now needed to determine which individual neurons amplify or cancel the signals to achieve such a high degree of direction selectivity. --- paper_title: Brain Connectivity: Revealing the Fly Visual Motion Circuit paper_content: Summary A new semi-automated method for high-throughput identification of visual neurons and their synaptic partners has been combined with optical recording of activity and behavioral analysis to give the first complete description of an elementary circuit for detecting visual motion. --- paper_title: Object-Detecting Neurons in Drosophila paper_content: Summary Many animals rely on vision to detect objects such as conspecifics, predators, and prey. Hypercomplex cells found in feline cortex and small target motion detectors found in dragonfly and hoverfly optic lobes demonstrate robust tuning for small objects, with weak or no response to larger objects or movement of the visual panorama [1–3]. However, the relationship among anatomical, molecular, and functional properties of object detection circuitry is not understood. Here we characterize a specialized object detector in Drosophila , the lobula columnar neuron LC11 [4]. By imaging calcium dynamics with two-photon excitation microscopy, we show that LC11 responds to the omni-directional movement of a small object darker than the background, with little or no responses to static flicker, vertically elongated bars, or panoramic gratings. LC11 dendrites innervate multiple layers of the lobula, and each dendrite spans enough columns to sample 75° of visual space, yet the area that evokes calcium responses is only 20° wide and shows robust responses to a 2.2° object spanning less than half of one facet of the compound eye. The dendrites of neighboring LC11s encode object motion retinotopically, but the axon terminals fuse into a glomerular structure in the central brain where retinotopy is lost. Blocking inhibitory ionic currents abolishes small object sensitivity and facilitates responses to elongated bars and gratings. Our results reveal high-acuity object motion detection in the Drosophila optic lobe. --- paper_title: A visual motion detection circuit suggested by Drosophila connectomics paper_content: Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations. 
--- paper_title: Visual Motion: Cellular Implementation of a Hybrid Motion Detector paper_content: Visual motion detection in insects is mediated by three-input detectors that compare inputs of different spatiotemporal properties. A new modeling study shows that only a small subset of possible arrangements of the input elements provides high direction-selectivity. --- paper_title: Neural networks in the cockpit of the fly paper_content: Flies have been buzzing around on earth for over 300 million years. During this time they have radiated into more than 125,000 different species (Yeates and Wiegmann 1999), so that, by now, roughly every tenth described species is a fly. They thus represent one of the most successful animal groups on our planet. This evolutionary success might, at least in part, be a result of their acrobatic maneuverability, which enables them, for example, to chase mates at turning velocities of more than 3000° s–1 with delay times of less than 30 ms (Land and Collett 1974; Wagner 1986). It is this fantastic behavior, which has initiated much research during the last decades, both on its sensory control and the biophysical and aerodynamic principles of the flight output (Dickinson et al. 1999, 2000). Here, we review the current state of knowledge about the neural processing of visual motion, which represents one sensory component intimately involved in flight control. Other reviews on this topic have been published with a similar (Hausen 1981, 1984; Hausen and Egelhaaf 1989; Borst 1996) or different emphasis (Frye and Dickinson 2001; Borst and Dickinson 2002). Because of space limitations, we do not review the extensive work that has been done on fly motion-sensitive neurons to advance our understanding of neural coding (Bialek et al. 1991; Rieke et al. 1997; de Ruyter et al. 1997, 2000; Haag and Borst 1997, 1998; Borst and Haag 2001). Unless stated otherwise, all data presented in the following were obtained on the blowfly Calliphora vicina which we will often casually refer to as 'the fly'. --- paper_title: A look into the cockpit of the fly: visual orientation, algorithms, and identified neurons paper_content: The top-down approach to understanding brain function seeks to account for the behavior of an animal in terms of biophysical properties of nerve cells and synaptic interactions via a series of progressively reductive levels of explanation. Using the fly as a model system, this approach was pioneered by Werner Reichardt and his colleagues in the late 1950s. Quantitative input-output analyses led them to formal algorithms that related the input of the fly's eye to the orientation behavior of the animal. But it has been possible only recently to track down the implementation of part of these algorithms to the computations performed by individual neurons and small neuronal ensembles. Thus, the visually guided flight maneuvers of the fly have turned out to be one of the few cases in which it has been feasible to reach an understanding of the mechanisms underlying a complex behavioral performance at successively reductive levels of analysis. These recent findings illuminate some of the fundamental questions that are being debated in computational neuroscience (Marr and Poggio, 1977; Sejnowski et al., 1988; Churchland and Sejnowski, 1992): (1) Are some brain functions emergent properties present only at the systems level? (2) Does an understanding of brain function at the systems level help in understanding function at the cellular and subcellular level? 
(3) Can different levels of organization be understood independently of each other? In this review we concentrate on two basic computational tasks that have to be solved by the fly, as well as by many other moving animals: (1) stabilization of an intended course against disturbances and (2) intended deviations from a straight course in order to orient toward salient objecs. Performing these tasks depends on the extraction of motion information from the changing distribution of light intensity received by the eyes. --- paper_title: Directionally Selective Motion Detection by Insect Neurons paper_content: Animals have several good reasons for detecting motion with their eyes. First, the motion of other animals — potential preys, mates, intruders or predators — provides essential information on which to base vital moves such as escape or chase. Secondly, information about self-motion is crucial, especially in the context of navigation, course stabilization, obstacle avoidance, and collision-free goal reaching. In fact, the wealth of information provided by passive, non-contact self-motion evaluation in visual systems has been likened to a kind of “visual kinaesthesis” (Gibson 1958). Even the 3D structure of the environment can be picked up by a moving observer (revs. Collett and Harkness 1982; Buchner 1984; Nakayama 1985; Hildreth and Koch 1987). Von Helmholtz (1867) was the first to clearly state the importance of this “motion parallax” in locomotion, and Exner (1891) proposed that arthropods make use of motion parallax as well as stereopsis to estimate distances (see also Horridge 1986). --- paper_title: Feedback Network Controls Photoreceptor Output at the Layer of First Visual Synapses in Drosophila paper_content: At the layer of first visual synapses, information from photoreceptors is processed and transmitted towards the brain. In fly compound eye, output from photoreceptors (R1-R6) that share the same visual field is pooled and transmitted via histaminergic synapses to two classes of interneuron, large monopolar cells (LMCs) and amacrine cells (ACs). The interneurons also feed back to photoreceptor terminals via numerous ligand-gated synapses, yet the significance of these connections has remained a mystery. We investigated the role of feedback synapses by comparing intracellular responses of photoreceptors and LMCs in wild-type Drosophila and in synaptic mutants, to light and current pulses and to naturalistic light stimuli. The recordings were further subjected to rigorous statistical and information-theoretical analysis. We show that the feedback synapses form a negative feedback loop that controls the speed and amplitude of photoreceptor responses and hence the quality of the transmitted signals. These results highlight the benefits of feedback synapses for neural information processing, and suggest that similar coding strategies could be used in other nervous systems. --- paper_title: The Temporal Tuning of the Drosophila Motion Detectors Is Determined by the Dynamics of Their Input Elements paper_content: Summary Detecting the direction of motion contained in the visual scene is crucial for many behaviors. However, because single photoreceptors only signal local luminance changes, motion detection requires a comparison of signals from neighboring photoreceptors across time in downstream neuronal circuits. For signals to coincide on readout neurons that thus become motion and direction selective, different input lines need to be delayed with respect to each other. 
Classical models of motion detection rely on non-linear interactions between two inputs after different temporal filtering. However, recent studies have suggested the requirement for at least three, not only two, input signals. Here, we comprehensively characterize the spatiotemporal response properties of all columnar input elements to the elementary motion detectors in the fruit fly, T4 and T5 cells, via two-photon calcium imaging. Between these input neurons, we find large differences in temporal dynamics. Based on this, computer simulations show that only a small subset of possible arrangements of these input elements maps onto a recently proposed algorithmic three-input model in a way that generates a highly direction-selective motion detector, suggesting plausible network architectures. Moreover, modulating the motion detection system by octopamine-receptor activation, we find the temporal tuning of T4 and T5 cells to be shifted toward higher frequencies, and this shift can be fully explained by the concomitant speeding of the input elements. --- paper_title: The Emergence of Directional Selectivity in the Visual Motion Pathway of Drosophila paper_content: Summary The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. In Drosophila , the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models. --- paper_title: Principles of visual motion detection paper_content: Motion information is required for the solution of many complex tasks of the visual system such as depth perception by motion parallax and figure/ground discrimination by relative motion. However, motion information is not explicitly encoded at the level of the retinal input. Instead, it has to be computed from the time-dependent brightness patterns of the retinal image as sensed by the two-dimensional array of photoreceptors. Different models have been proposed which describe the neural computations underlying motion detection in various ways. To what extent do biological motion detectors approximate any of these models? As will be argued here, there is increasing evidence from the different disciplines studying biological motion vision, that, throughout the animal kingdom ranging from invertebrates to vertebrates including man, the mechanisms underlying motion detection can be attributed to only a few, essentially equivalent computational principles. Motion detection may, therefore, be one of the first examples in computational neurosciences where common principles can be found not only at the cellular level (e.g. dendritic integration, spike propagation, synaptic transmission) but also at the level of computations performed by small neural networks. 
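The three-input motion detector discussed in the entries above and below, in which direction selectivity arises from both preferred-direction enhancement and null-direction suppression, can be caricatured with one multiplicative (Hassenstein-Reichardt-like) arm and one divisive (Barlow-Levick-like) arm acting on a shared centre input. The sketch below is that caricature under assumed filter constants; it is a generic hybrid model, not the measured T4/T5 circuit.

```python
import numpy as np

def lowpass(x, tau=50e-3, dt=1e-3):
    """First-order low-pass filter used as the 'delayed' arm."""
    y = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for t in range(1, len(x)):
        y[t] = y[t-1] + a * (x[t] - y[t-1])
    return y

def three_input_detector(prev_pd, centre, next_nd, eps=0.1):
    """Schematic three-input unit: a delayed signal from the preferred-
    direction (PD) side multiplies (enhances) the centre input, while a
    delayed signal from the null-direction (ND) side divides (suppresses)
    it. Filter constants and eps are illustrative assumptions."""
    return lowpass(prev_pd) * centre / (eps + lowpass(next_nd))

# Example: a pulse sweeping across three neighbouring inputs.
n = 400
a = np.zeros(n); a[100:120] = 1.0
b = np.zeros(n); b[140:160] = 1.0
c = np.zeros(n); c[180:200] = 1.0
pd_response = three_input_detector(a, b, c).sum()   # sweep a -> b -> c
nd_response = three_input_detector(c, b, a).sum()   # same pulse, reversed
print(pd_response > nd_response)                    # True
```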
--- paper_title: Impact and sources of neuronal variability in the fly’s motion vision pathway paper_content: Abstract Nervous systems encode information about dynamically changing sensory input by changes in neuronal activity. Neuronal activity changes, however, also arise from noise sources within and outside the nervous system or from changes of the animal’s behavioral state. The resulting variability of neuronal responses in representing sensory stimuli limits the reliability with which animals can respond to stimuli and may thus even affect the chances for survival in certain situations. Relevant sources of noise arising at different stages along the motion vision pathway have been investigated from the sensory input to the initiation of behavioral reactions. Here, we concentrate on the reliability of processing visual motion information in flies. Flies rely on visual motion information to guide their locomotion. They are among the best established model systems for the processing of visual motion information allowing us to bridge the gap between behavioral performance and underlying neuronal computations. It has been possible to directly assess the consequences of noise at major stages of the fly’s visual motion processing system on the reliability of neuronal signals. Responses of motion sensitive neurons and their variability have been related to optomotor movements as indicators for the overall performance of visual motion computation. We address whether and how noise already inherent in the stimulus, e.g. photon noise for the visual system, influences later processing stages and to what extent variability at the output level of the sensory system limits behavioral performance. Recent advances in circuit analysis and the progress in monitoring neuronal activity in behaving animals should now be applied to understand how the animal meets the requirements of fast and reliable manoeuvres in naturalistic situations. --- paper_title: Redundant Neural Vision Systems—Competing for Collision Recognition Roles paper_content: Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems - the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition. 
--- paper_title: Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression paper_content: Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila , direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein–Reichardt correlator and the Barlow–Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation. SIGNIFICANCE STATEMENT Feature selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premiere experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges. --- paper_title: Chromatic Organization and Sexual Dimorphism of the Fly Retinal Mosaic paper_content: Whether in vertebrates or in invertebrates, mapping the spectral organization of a retinal mosaic with single cell resolution often remains an insuperable task, for which microspectrophotometry and intracellular recordings appear as cumbersome tools. The need for methods capable of revealing at a glimpse the mosaic pattern of the individual spectral types across a large receptor array has led to many ingenious techniques, some of which are listed in Table 1. 
--- paper_title: Complementary mechanisms create direction selectivity in the fly paper_content: The brain extracts information from signals delivered from the eyes and other sensory organs in order to direct behavior. Understanding how the interactions and wiring of a multitude of individual nerve cells process and transmit this critical information to the brain is a fundamental goal in the field of neuroscience. One question many neuroscientists have tried to understand is how nerve cells in an animal’s brain detect direction when an animal sees movement of some kind – so-called motion vision. The raw signal from the light receptors in the eye does not discriminate whether the light moves in one direction or the other. So, the nerve cells in the brain must somehow compute the direction of movement based on the information relayed by the eye. For more than half a century, major debates have revolved around two rival models that could explain how motion vision works. Both models could in principle lead to neurons that prefer images moving in one direction over images moving in the opposite direction – so-called direction selectivity. In both models, the information about the changing light levels hitting two light-sensitive cells at two points on the eye are compared across time. In one model, signals from images moving in a cell’s preferred direction become amplified. In the other model, signals moving in the unfavored direction become canceled out. However, neither model perfectly explains motion vision. Now, Haag, Arenz et al. show that both models are partially correct and that the two mechanisms work together to detect motion across the field of vision more accurately. In the experiments, both models were tested in tiny fruit flies by measuring the activity of the first nerve cells that respond to the direction of visual motion. While each mechanism alone only produces a fairly weak and error-prone signal of direction, together the two mechanisms produce a stronger and more precise directional signal. Further research is now needed to determine which individual neurons amplify or cancel the signals to achieve such a high degree of direction selectivity. --- paper_title: Evaluation of optical motion information by movement detectors paper_content: The paper is dealing in its first part with a system-theoretical approach for the decomposition of multi-input systems into the sum of simpler systems. By this approach the algorithm for the computations underlying the extraction of motion information from the optical environment by biological movement detectors is analysed. In the second part it concentrates on a specific model for motion computation known to be realized by the visual system of insects and of man. These motion detectors provide the visual system with information on both, velocity and structural properties of a moving pattern. The last part of the paper deals with the functional properties of two-dimensional arrays of movement detectors. They are analyzed and their relations to meaningful physiological responses are discussed. --- paper_title: Adaptation and the temporal delay filter of fly motion detectors paper_content: Recent accounts attribute motion adaptation to a shortening of the delay filter in elementary motion detectors (EMDs). Using computer modelling and recordings from HS neurons in the drone-fly Eristalis tenax, we present evidence that challenges this theory. 
(i) Previous evidence for a change in the delay filter comes from ‘image step’ (or ‘velocity impulse’) experiments. We note a large discrepancy between the temporal frequency tuning predicted from these experiments and the observed tuning of motion sensitive cells. (ii) The results of image step experiments are highly sensitive to the experimental method used. (iii) An apparent motion stimulus reveals a much shorter EMD delay than suggested by previous ‘image step’ experiments. This short delay agrees with the observed temporal frequency sensitivity of the unadapted cell. (iv) A key prediction of a shortening delay filter is that the temporal frequency optimum of the cell should show a large shift to higher temporal frequencies after motion adaptation. We show little change in the temporal or spatial frequency (and hence velocity) optima following adaptation. --- paper_title: Elementary motion detectors paper_content: A quick guide to the elementary motion detector, a model of how a simple neural circuit can detect visual motion, developed from work on insect vision but which seems also to be relevant to vertebrate visual systems. --- paper_title: Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot paper_content: For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. --- paper_title: Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot paper_content: Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to various visual patterns encountered thanks to its M2APix auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in the closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range.
The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor. --- paper_title: A bee in the corridor: centering and wall-following paper_content: In an attempt to better understand the mechanism underlying lateral collision avoidance in flying insects, we trained honeybees (Apis mellifera) to fly through a large (95-cm wide) flight tunnel. We found that, depending on the entrance and feeder positions, honeybees would either center along the corridor midline or fly along one wall. Bees kept following one wall even when a major (150-cm long) part of the opposite wall was removed. These findings cannot be accounted for by the “optic flow balance” hypothesis that has been put forward to explain the typical bees’ “centering response” observed in narrower corridors. Both centering and wall-following behaviors are well accounted for, however, by a control scheme called the lateral optic flow regulator, i.e., a feedback system that strives to maintain the unilateral optic flow constant. The power of this control scheme is that it would allow the bee to guide itself visually in a corridor without having to measure its speed or distance from the walls. --- paper_title: Visual Guidance Of A Mobile Robot Equipped With A Network Of Self-Motion Sensors paper_content: This paper reports on the principles of an on-board electro-optical system for the guidance of an autonomous mobile robot. Some of the signal processing adopted here was directly inspired by natural visual systems, in particular by the compound eye of the fly. The visual system has compound optics with a panoramic field but relatively low spatial resolution. It makes use of elementary motion detectors (E.M.D's) to estimate the distance to objects from the optic flow. Each E.M.D. constitutes one mesh of an analog network. It measures the relative angular velocity of any contrast point that passes across its receptive field as a result of the robot's own motion and evaluates the radial distance to this contrast point from the motion parallax. For this purpose, the mobile makes translation steps at constant speed during each visual acquisition. An obstacle avoidance algorithm is implemented on a parallel, analog network. This network integrates the numerous data provided by the E.M.D's and controls the drive motor and steering motor of the robot platform in real time. Other navigation modules may be added without altering the basic hardware architecture of the system. For example, a target detector has been associated with the system. No stringent hypothesis needs to be made as to the shape of objects in the environment. Both the visual processing principles and the obstacle avoidance strategy are described. 
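The corridor-following entries above (the honeybee "centering and wall-following" study in particular) describe a lateral optic flow regulator: a feedback loop that holds the optic flow generated by one wall at a constant set-point, with no need to measure speed or distance explicitly. The simulation below is a minimal sketch of that idea only; the proportional gain, set-point, corridor width and update rate are invented placeholder values, and the cited systems additionally regulate forward speed with a second optic flow loop.

```python
import numpy as np

def simulate_of_regulator(v=1.0, corridor_width=0.95, y0=0.1,
                          of_setpoint=3.0, k_p=0.3, dt=0.01, steps=3000):
    """Minimal simulation of a unilateral (lateral) optic flow regulator.
    The agent moves forward at constant speed v (m/s) and adjusts its lateral
    speed so that the larger of the two lateral optic flows, v divided by the
    distance to the nearer wall (rad/s), is held at of_setpoint. All values
    here are illustrative placeholders."""
    y = y0                                    # distance to the left wall (m)
    trajectory = []
    for _ in range(steps):
        of_left = v / y
        of_right = v / (corridor_width - y)
        if of_left >= of_right:               # the left wall generates the dominant optic flow
            # positive lateral speed increases y, i.e. moves away from the left wall
            lateral_speed = -k_p * (of_setpoint - of_left)
        else:                                 # the right wall generates the dominant optic flow
            # negative lateral speed decreases y, i.e. moves away from the right wall
            lateral_speed = k_p * (of_setpoint - of_right)
        y = float(np.clip(y + lateral_speed * dt, 0.01, corridor_width - 0.01))
        trajectory.append(y)
    return np.array(trajectory)

print("final clearance from the followed wall:",
      round(float(simulate_of_regulator()[-1]), 3), "m")
```

Because the lateral optic flow from a wall equals the ratio of groundspeed to wall distance, holding it at the set-point makes the equilibrium clearance simply v / of_setpoint, which is why the printed final distance settles near 0.33 m with the placeholder values above.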
--- paper_title: Speed tuning in elementary motion detectors of the correlation type paper_content: A prominent model of visual motion detection is the so-called correlation or Reichardt detector. Whereas this model can account for many properties of motion vision, from humans to insects (review, Borst and Egelhaaf 1989), it has been commonly assumed that this scheme of motion detection is not well suited to the measurement of image velocity. This is because the commonly used version of the model, which incorporates two unidirectional motion detectors with opposite preferred directions, produces a response which varies not only with the velocity of the image, but also with its spatial structure and contrast. On the other hand, information on image velocity can be crucial in various contexts, and a number of recent behavioural experiments suggest that insects do extract velocity for navigational purposes (review, Srinivasan et al. 1996). Here we show that other versions of the correlation model, which consists of a single unidirectional motion detector or incorporates two oppositely directed detectors with unequal sensitivities, produce responses which vary with image speed and display tuning curves that are substantially independent of the spatial structure of the image. This surprising feature suggests simple strategies of reducing ambiguities in the estimation of speed by using components of neural hardware that are already known to exist in the visual system. --- paper_title: Miniature curved artificial compound eyes. paper_content: In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories. --- paper_title: Toward an insect-inspired event-based autopilot combining both visual and control events paper_content: This paper presents the autopilot and the behavior of a “simulated bee” traveling along two different tunnels using both visual and control events. 
The computational gain of an event-based PID controller compared to its time-based version is often called into question because the event detector is itself computationally expensive. By combining visual and control events, the newly suggested event-based autopilot requires very low computational resources. In particular, the event detector which computes the control error and tests its magnitude is activated only when a new contrast is detected by the optic motion detectors that assess the optic flow, i.e. only when the magnitude of the optic flow error could have changed. This new event-based control strategy faithfully used the visual information already available in the optic flow sensor to reduce the computational cost even further. The "simulated bee" was equipped with: (i) a minimalistic compound eye comprising 10 or 8 local motion sensors (depending on the tunnel configuration) measuring the optic flow magnitude, (ii) two optic flow regulators updating the control signals whenever specific optic flow criteria changed, and (iii) three event-based controllers taking into account both error signals and visual events, each one in charge of its own translational dynamics. The "simulated bee" managed to travel safely along the tunnels without requiring any speed or distance measurements, using very low computational resources, by (i) concomitantly adjusting the side thrust, vertical lift and forward thrust only when both a visual contrast and a change of optic flow control error were detected, and (ii) avoiding collisions with the surface of the tunnels and decreasing or increasing its speed, depending on the clutter rate perceived by motion sensors. --- paper_title: Fly visual system inspired artificial neural network for collision detection paper_content: This work investigates one bio-inspired collision detection system based on fly visual neural structures, in which a collision alarm is triggered if an approaching object on a direct collision course appears in the field of view of a camera or a robot, together with the relevant time region of collision. One such artificial system consists of one artificial fly visual neural network model and one collision detection mechanism. The former is a computational model to capture membrane potentials produced by neurons. The latter takes the outputs of the former as its inputs, and executes three detection schemes: (i) identifying when a spike takes place from the membrane potentials using a threshold scheme; (ii) deciding the motion direction of a moving object by the Reichardt detector model; and (iii) sending collision alarms and collision regions. Experimentally, relying upon a series of video image sequences with different scenes, numerical results illustrated that the artificial system, with some striking characteristics, is a potential alternative tool for collision detection. --- paper_title: Navigation in an autonomous flying robot by using a biologically inspired visual odometer paper_content: Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course stabilization behavior and the visually mediated odometer by using a biological model of motion detection for the purpose of long-range goal-directed navigation in a 3D environment. The performance tests of the proposed navigation method are conducted by using a blimp-type flying robot platform in uncontrolled indoor environments.
In this study, we propose a biologically inspired model of the bee's visual odometer based on Elementary Motion Detectors (EMDs), and present results from goal-directed navigation experiments with an autonomous flying robot platform that we developed specifically for this purpose. The robot is equipped with a panoramic vision system, which is used to provide input to the EMDs of the left and right visual fields. The outputs of the EMDs are in later stage spatially integrated by wide field motion detectors, and their accumulated response is directly used for the odometer. In a set of initial experiments, the robot moves through a corridor on a fixed route, and the outputs of EMDs, the odometer, are recorded. The results show that the proposed model can be used to provide an estimate of the distance traveled, but the performance depends on the route the robot follows, something which is biologically plausible since natural insects tend to adopt a fixed route during foraging. Given these results, we assumed that the optomotor response plays an important role in the context of goal-directed navigation, and we conducted experiments with an autonomous freely flying robot. The experiments demonstrate that this computationally cheap mechanism can be successfully employed in natural indoor environments.© (2000) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only. --- paper_title: Insect Inspired Autopilots paper_content: This paper deals with the control problems involved in insects' and robots' visually guided piloting. Explicit control schemes are presented which may explain how insects navigate by relying on optic flow cues, without requiring any distance or speed measurements. The concept of the optic flow regulator, a feedback control system based on OF sensors, is presented. We tested our control schemes in simulation, and implemented them on-board two types of miniature aerial robots, a helicopter and a hovercraft. Their electronic OF sensors were inspired by the results of our microelectrode studies on motion sensitive neurons in the housefly's compound eye. The control schemes described do without any conventional avionic sensors like rangefinders or speedometers, and therefore show great potential for safe autonomous control of aerial, underwater and space vehicles in unchartered environments. --- paper_title: Obstacle avoidance and speed control in a mobile vehicle equipped with a compound eye paper_content: Shows that the use of onboard visual motion sensors has advantages in visually guided navigation. An agent equipped with a given array of motion detectors detects obstacles all the more distant when it is running faster. Its "radius of vision" becomes simply proportional to speed, with the interesting consequence that the agent can easily adapt its speed to the environmental context. The simulation study presented shows that the newly designed visuomotor system allows the robot to reach a target while avoiding obstacles in cluttered environments, adapt its own speed to the situation, follow walls, retreat from dead-ends and bypass impenetrable clusters of obstacles. These performances result all from one and the same visuomotor algorithm and do not require any "arbitration" between different "behaviors" as is the case with the subsumption architecture. Nor do they require any kind of high level reasoning, world model and long term memory. 
The described system could be part of a hierarchical system and carry out the basic and reputedly difficult task of fast piloting in uncertain environments, for example, between intermediary waypoints that would be specified by a high level mission planner. --- paper_title: Optic Flow Regulation in Unsteady Environments: A Tethered MAV Achieves Terrain Following and Targeted Landing Over a Moving Platform paper_content: The present study deals with the risky and daunting tasks of flying and landing in non-stationary environments. Using a two Degree-Of-Freedom (DOF) tethered micro-air vehicle (MAV), we show the benefits of an autopilot dealing with a variable - the optic flow - which depends directly on two relative variables, the groundspeed and the groundheight. The micro-helicopter was shown to follow the ups and downs of a rotating platform that was also oscillated vertically. At no time did the MAV know in terms of ground height whether it was approaching the moving ground or whether the ground itself was rising dangerously toward it. Nor did it know whether its current groundspeed was caused only by its forward thrust or whether it was partly due to the ground moving backwards or forwards. Furthermore, the MAV was shown to land safely on a platform set into motion along two directions, vertical and horizontal. This paper extends to non-stationary environments a former approach that introduced the principle of “optic flow regulation” for altitude control. Whereas in the former approach no requirement was set on the robot's landing target, the target's elevation angle was used here in a second feedback loop that gradually altered the robot's pitch and therefore its airspeed, leading to smooth landing in the vicinity of the target. Whether dealing with terrain following or landing, the MAV appropriately followed the unpredictable changes in the environment although it had no explicit knowledge of groundheight and groundspeed. The MAV did not make use of any rangefinders or velocimeters and was simply equipped with a 2-gram vision-based autopilot. --- paper_title: Motion Detection Circuits for a Time-To-Travel Algorithm paper_content: The paper describes a new motion detection circuit that extracts motion information based on a time-to-travel algorithm. The front-end photoreceptor adapts over 7 decades of background intensity and motion information can be extracted down to a contrast value of 2.5%. Results from the circuits, which were fabricated in a 2-metal 2-poly 1.5 µm CMOS process, show that the motion information can be extracted over 2 decades of speed. --- paper_title: A Bio-Inspired Flying Robot Sheds Light on Insect Piloting Abilities paper_content: When insects are flying forward, the image of the ground sweeps backward across their ventral viewfield and forms an "optic flow," which depends on both the groundspeed and the groundheight. To explain how these animals manage to avoid the ground by using this visual motion cue, we suggest that insect navigation hinges on a visual-feedback loop we have called the optic-flow regulator, which controls the vertical lift. To test this idea, we built a micro-helicopter equipped with an optic-flow regulator and a bio-inspired optic-flow sensor. This fly-by-sight micro-robot can perform exacting tasks such as take-off, level flight, and landing.
Our control scheme accounts for many hitherto unexplained findings published during the last 70 years on insects' visually guided performances; for example, it accounts for the fact that honeybees descend in a headwind, land with a constant slope, and drown when travelling over mirror-smooth water. Our control scheme explains how insects manage to fly safely without any of the instruments used onboard aircraft to measure the groundheight, groundspeed, and descent speed. An optic-flow regulator is quite simple in terms of its neural implementation and just as appropriate for insects as it would be for aircraft. --- paper_title: Obstacle avoidance in a terrestrial mobile robot provided with a scanning retina paper_content: This study describes the role of a novel visual sensor, called the "scanning local motion detector" (SLMD), for the visual navigation of a terrestrial mobile agent. Following a brief theoretical description of the principle of retinal scanning coupled with motion parallax, we show that a single retina of 24 pixels - when oriented along the mobile agent's forward line of motion - allows the latter to avoid the visually detected obstacles in a reflex manner by correcting its trajectory in real time. The results of the preliminary computer simulations lend support to a zig-zag algorithm which allows the agent to avoid the oncoming obstacles while immediately "keeping an eye" on its lateral regions. In parallel with the simulation, we designed and built a hardware prototype of a scanning retina with coarse optical resolution (average angular sampling of 3°) mounted on top of a circular mobile platform. The preliminary results of the robot's successful performances confirm the validity of the zig-zag algorithm. --- paper_title: Nondirectional motion may underlie insect behavioral dependence on image speed paper_content: Behavioral experiments suggest that insects make use of the apparent image speed on their compound eyes to navigate through obstacles, control flight speed, land smoothly, and measure the distance they have flown. However, the vast majority of electrophysiological recordings from motion-sensitive insect neurons show responses which are tuned in spatial and temporal frequency and are thus unable to unambiguously represent image speed. We suggest that this contradiction may be resolved at an early stage of visual motion processing using nondirectional motion sensors that respond proportionally to image speed until their peak response. We describe and characterize a computational model of these sensors and propose a model by which a spatial collation of such sensors could be used to generate speed-dependent behavior. --- paper_title: A fully-autonomous hovercraft inspired by bees: Wall following and speed control in straight and tapered corridors paper_content: The small autonomous vehicles of the future will have to navigate close to obstacles in highly unpredictable environments. Risky tasks of this kind may require novel sensors and control methods that differ from conventional approaches. Recent ethological findings have shown that complex navigation tasks such as obstacle avoidance and speed control are performed by flying insects on the basis of optic flow (OF) cues, although insects' compound eyes have a very poor spatial resolution. The present paper deals with the implementation of an optic flow-based autopilot on a fully autonomous hovercraft.
Tests were performed on this small (878-gram) innovative robotic platform in straight and tapered corridors lined with natural panoramas. A bilateral OF regulator controls the robot's forward speed (up to 0.8m/s), while a unilateral OF regulator controls the robot's clearance from the two walls. A micro-gyrometer and a tiny magnetic compass ensure that the hovercraft travels forward in the corridor without yawing. The lateral OFs are measured by two minimalist eyes mounted sideways opposite to each other. For the first time, the hovercraft was found to be capable of adjusting both its forward speed and its clearance from the walls, in both straight and tapered corridors, without requiring any distance or speed measurements, that is, without any need for on-board rangefinders or tachometers. --- paper_title: A robotic aircraft that follows terrain using a neuromorphic eye paper_content: Future Unmanned Air Vehicles (UAV) and Micro Air Vehicles (MAV) will fly in urban areas and very close to obstacles. We have built a miniature (35 cm, 0.840 kg) electrically-powered aircraft which uses a motion-sensing visual system to follow terrain and avoid obstacles. Signals from the 20-photoreceptor onboard eye are processed by 19 custom Elementary Motion Detection (EMD) circuits which are derived from those of the fly. Visual, inertial, and rotor RPM signals from the aircraft are acquired by a flight computer which runs the real-time Linux operating system. Vision-guided trajectories and landings were simulated and automatic terrain-following flights at 2 m/s were demonstrated with the aircraft tethered to a whirling-arm. This UAV project is at the intersection of neurobiology, robotics, and aerospace. It provides technologies for MAV operations. --- paper_title: Contrast saturation in a neuronally-based model of elementary motion detection paper_content: The Hassenstein-Reichardt (HR) correlation model is commonly used to model elementary motion detection in the fly. Recently, a neuronally-based computational model was proposed which, unlike the HR model, is based on identified neurons. The response of both models increases as the square of contrast, although the response of insect neurons saturates at high contrasts. We introduce a saturating nonlinearity into the neuronally-based model in order to produce contrast saturation and discuss the neuronal implications of these elements. Furthermore, we show that features of the contrast sensitivity of movement-detecting neurons are predicted by the modified model. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. 
Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Bio-inspired optical flow circuits for the visual guidance of micro air vehicles paper_content: In the framework of our research on biologically inspired microrobotics, we have developed a visually based autopilot for micro air vehicles (MAV), which we have called OCTAVE (optical altitude control system for autonomous vehicles). Here, we show the feasibility of a joint altitude and speed control system based on a low complexity optronic velocity sensor that estimates the optic flow in the downward direction. This velocity sensor draws on electrophysiological findings of on the fly elementary motion detectors (EMDs) obtained at our laboratory. We built an elementary, 100-gram tethered helicopter system that carries out terrain following above a randomly textured ground. The overall processing system is light enough to be mounted on-board MAVs with an avionic payload of only some grams. --- paper_title: Movement-induced motion signal distributions in outdoor scenes paper_content: The movement of an observer generates a characteristic field of velocity vectors on the retina (Gibson 1950). Because such optic flow-fields are useful for navigation, many theoretical, psychophysical and physiological studies have addressed the question how egomotion parameters such as direction of heading can be estimated from optic flow. Little is known, however, about the structure of optic flow under natural conditions. To address this issue, we recorded sequences of panoramic images along accurately defined paths in a variety of outdoor locations and used these sequences as input to a two-dimensional array of correlation-based motion detectors (2DMD). 
We find that (a) motion signal distributions are sparse and noisy with respect to local motion directions; (b) motion signal distributions contain patches (motion streaks) which are systematically oriented along the principal flow-field directions; (c) motion signal distributions show a distinct, dorso-ventral topography, reflecting the distance anisotropy of terrestrial environments; (d) the spatiotemporal tuning of the local motion detector we used has little influence on the structure of motion signal distributions, at least for the range of conditions we tested; and (e) environmental motion is locally noisy throughout the visual field, with little spatial or temporal correlation; it can therefore be removed by temporal averaging and is largely over-ridden by image motion caused by observer movement. Our results suggest that spatial or temporal integration is important to retrieve reliable information on the local direction and size of motion vectors, because the structure of optic flow is clearly detectable in the temporal average of motion signal distributions. Egomotion parameters can be reliably retrieved from such averaged distributions under a range of environmental conditions. These observations raise a number of questions about the role of specific environmental and computational constraints in the processing of natural optic flow. --- paper_title: Fast global motion estimation algorithm based on elementary motion detectors paper_content: This paper presents a fast global motion estimation algorithm based on so called elementary motion detectors or EMD. EMD, modeling insect visual signal processing systems, have low computational complexity aspects and thus can be key components to realize such a fast global motion estimation algorithm. The developed algorithm is evaluated by being applied to various types of image sequences and is found to provide accurate estimation results. --- paper_title: Optic-Flow-Based Collision Avoidance paper_content: Flying in and around caves, tunnels, and buildings demands more than one sensing modality. This article presented an optic-flow- based approach inspired by flying insects for avoiding lateral collisions. However, there were a few real-world scenarios in which optic flow sensing failed. This occurred when obstacles on approach were directly in front of the aircraft. Here, a simple sonar or infrared sensor can be used to trigger a quick transition into the hovering mode to avoid the otherwise fatal collision. Toward this end, we have demonstrated a fixed-wing prototype capable of manually transitioning from conventional cruise flight into the hovering mode. The prototype was then equipped with an IMU and a flight control system to automate the hovering process. The next step in this research is to automate the transition from cruise to hover flight. --- paper_title: Pulse-Based Analog VLSI Velocity Sensors paper_content: We present two algorithms for estimating the velocity of a visual stimulus and their implementations with analog circuits using CMOS VLSI technology. Both are instances of so-called token methods, where velocity is computed by identifying particular features in the image at different locations; in our algorithms, these features are abrupt temporal changes in image irradiance. Our circuits integrate photoreceptors and associated electronics for computing motion onto a single chip and unambiguously extract bidirectional velocity for stimuli of high and intermediate contrasts over considerable irradiance and velocity ranges. 
At low contrasts, the output signal for a given velocity tends to decrease gracefully with contrast, while direction-selectivity is maintained. The individual motion-sensing cells are compact and highly suitable for use in dense 1-D or 2-D imaging arrays. --- paper_title: Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths paper_content: The retinal image flow a blowfly experiences in its daily life on the wing is determined by both the structure of the environment and the animal’s own movements. To understand the design of visual processing mechanisms, there is thus a need to analyse the performance of neurons under natural operating conditions. To this end, we recorded flight paths of flies outdoors and reconstructed what they had seen, by moving a panoramic camera along exactly the same paths. The reconstructed image sequences were later replayed on a fast, panoramic flight simulator to identified, motion sensitive neurons of the so-called horizontal system (HS) in the lobula plate of the blowfly, which are assumed to extract self-motion parameters from optic flow. We show that under real life conditions HS-cells not only encode information about self-rotation, but are also sensitive to translational optic flow and, thus, indirectly signal information about the depth structure of the environment. These properties do not require an elaboration of the known model of these neurons, because the natural optic flow sequences generate—at least qualitatively—the same depth-related response properties when used as input to a computational HS-cell model and to real neurons. --- paper_title: A universal strategy for visually guided landing paper_content: Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown. The mechanisms by which animals achieve this remain unclear. When landing on horizontal surfaces, honey bees control their speed by holding constant the rate of front-to-back image motion (optic flow) generated by the surface as they reduce altitude. As inclination increases, however, this simple pattern of optic flow becomes increasingly complex. How do honey bees control speed when landing on surfaces that have different orientations? To answer this, we analyze the trajectories of honey bees landing on a vertical surface that produces various patterns of motion. We find that landing honey bees control their speed by holding the rate of expansion of the image constant. We then test and confirm this hypothesis rigorously by analyzing landings when the apparent rate of expansion generated by the surface is manipulated artificially. This strategy ensures that speed is reduced, gradually and automatically, as the surface is approached. We then develop a mathematical model of this strategy and show that it can effectively be used to guide smooth landings on surfaces of any orientation, including horizontal surfaces. This biological strategy for guiding landings does not require knowledge about either the distance to the surface or the speed at which it is approached. The simplicity and generality of this landing strategy suggests that it is likely to be exploited by other flying animals and makes it ideal for implementation in the guidance systems of flying robots. 
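The landing entry just above reports that honeybees hold the rate of image expansion constant while approaching a surface, which automatically brings speed to near zero at touchdown. The few lines below only illustrate the arithmetic of that strategy: if the commanded approach speed is kept equal to a fixed expansion rate r times the remaining distance, the distance decays exponentially and so does the speed. The initial distance, set-point and integration step are arbitrary, and no sensor model or attitude control is included.

```python
import numpy as np

def constant_expansion_landing(d0=10.0, r_set=0.5, dt=0.01, t_end=20.0):
    """Close the distance to a surface while holding the relative rate of
    expansion (approach speed divided by distance) at r_set. The closed-form
    behaviour is d(t) = d0 * exp(-r_set * t), so speed decays with distance."""
    d, t = d0, 0.0
    while t < t_end:
        v = r_set * d        # speed command that keeps v / d constant
        d -= v * dt          # Euler step towards the surface
        t += dt
    return d, v

d_final, v_final = constant_expansion_landing()
print(f"distance after 20 s: {d_final:.4f} m, approach speed: {v_final:.4f} m/s")
# Small mismatch with the Euler integration is expected:
print(f"closed-form prediction: {10.0 * np.exp(-0.5 * 20.0):.4f} m")
```

This is also what makes the strategy attractive for robot guidance: the controller never needs distance or speed individually, only their ratio, which is directly available as the visual expansion rate.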
--- paper_title: A neural network for pursuit tracking inspired by the fly visual system paper_content: Abstract This paper presents an artificial neural network that detects and tracks an object moving within its field of view. This novel network is inspired by processing functions observed in the fly visual system. The network detects changes in input light intensities, determines motion on both the local and the wide-field levels, and outputs displacement information necessary to control pursuit tracking. Software simulations demonstrate the current prototype successfully follows a moving target within specified radiance and motion constraints. The paper reviews these limiting constraints and suggests future network augmentations to remove them. Despite its current limitations, the existing prototype serves as a solid foundation for a future network that promises to provide machines with the improved abilities to do high-speed pursuit tracking, interception, and collision avoidance. --- paper_title: Object tracking in motion-blind flies paper_content: In response to the movement of its visual world, Drosophila is capable of optomotor response in head and body turning, as well as a visual fixation response. This study shows that blocking the visual pathway activity responsible for optokinetic response in flies does not affect the visual fixation response, suggesting two distinct pathways for processing each set of information. By doing so, the authors also devised a neural and behavioral hierarchy in fly visual system where fixation behavior and the neurons mediating fixation response are upstream of optokinetic response as performed by lobula plate neurons. --- paper_title: Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields paper_content: The 6-pixel low-speed Visual Motion Sensor (VMS) inspired by insects' visual systems presented here performs local 1-D angular speed measurements ranging from 1.5°/s to 25°/s and weighs only 2.8 g. The entire optic flow processing system, including the spatial and temporal filtering stages, has been updated with respect to the original design. This new lightweight sensor was tested under free-flying outdoor conditions over various fields onboard a 80 kg unmanned helicopter called ReSSAC. The visual disturbances encountered included helicopter vibrations, uncontrolled illuminance, trees, roads, and houses. The optic flow measurements obtained were finely analyzed online and also offline, using the sensors of various kinds mounted onboard ReSSAC. The results show that the optic flow measured despite the complex disturbances encountered closely matched the approximate ground-truth optic flow. --- paper_title: Biologically inspired visual odometer for navigation of a flying robot paper_content: Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course stabilization behavior and visually mediated odometer by using a biological model of motion detector for the purpose of long-range goal-directed navigation in 3D environment. The performance tests of the proposed navigation method are conducted by using a blimp-type flying robot platform in uncontrolled indoor environments. 
The result shows that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted in order to enhance the navigation performance of autonomous aerial vehicles. --- paper_title: Estimation of self-motion by optic flow processing in single visual interneurons paper_content: Humans, animals and some mobile robots use visual motion cues for object detection and navigation in structured surroundings [1-4]. Motion is commonly sensed by large arrays of small field movement detectors, each preferring motion in a particular direction [5,6]. Self-motion generates distinct 'optic flow fields' in the eyes that depend on the type and direction of the momentary locomotion (rotation, translation) [7]. To investigate how the optic flow is processed at the neuronal level, we recorded intracellularly from identified interneurons in the third visual neuropile of the blowfly [8]. The distribution of local motion tuning over their huge receptive fields was mapped in detail. The global structure of the resulting 'motion response fields' is remarkably similar to optic flow fields. Thus, the organization of the receptive fields of the so-called VS neurons [9,10] strongly suggests that each of these neurons specifically extracts the rotatory component of the optic flow around a particular horizontal axis. Other neurons are probably adapted to extract translatory flow components. This study shows how complex visual discrimination can be achieved by task-oriented preprocessing in single neurons. --- paper_title: Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. paper_content: Two bio-inspired guidance principles involving no reference frame are presented here and were implemented in a rotorcraft, which was equipped with panoramic optic flow (OF) sensors but (as in flying insects) no accelerometer. To test these two guidance principles, we built a tethered tandem rotorcraft called BeeRotor (80 grams), which was tested flying along a high-roofed tunnel. The aerial robot adjusts its pitch and hence its speed, hugs the ground and lands safely without any need for an inertial reference frame.
The rotorcraft's altitude and forward speed are adjusted via two OF regulators piloting the lift and the pitch angle on the basis of the common-mode and differential rotor speeds, respectively. The robot equipped with two wide-field OF sensors was tested in order to assess the performances of the following two systems of guidance involving no inertial reference frame: (i) a system with a fixed eye orientation based on the curved artificial compound eye (CurvACE) sensor, and (ii) an active system of reorientation based on a quasi-panoramic eye which constantly realigns its gaze, keeping it parallel to the nearest surface followed. Safe automatic terrain following and landing were obtained with CurvACE under dim light to daylight conditions and the active eye-reorientation system over rugged, changing terrain, without any need for an inertial reference frame. --- paper_title: A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila paper_content: We propose a new bio-plausible model based on the visual systems of Drosophila for estimating angular velocity of image motion in insects’ eyes. The model implements both preferred direction motion enhancement and non-preferred direction motion suppression which is discovered in Drosophila’s visual neural circuits recently to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments which enables insects to estimate the flight speed in cluttered environments. This also coincides with the behaviour experiments of honeybee flying through tunnels with stripes of different spatial frequencies. --- paper_title: Outdoor performance of a motion-sensitive neuron in the blowfly paper_content: We studied an identified motion-sensitive neuron of the blowfly under outdoor conditions. The neuron was stimulated by oscillating the fly in a rural environment. We analysed whether the motion-induced neuronal activity is affected by brightness changes ranging between bright sunlight and dusk. In addition, the relationship between spike rate and ambient temperature was determined. The main results are: (1) The mean spike rate elicited by visual motion is largely independent of brightness changes over several orders of magnitude as they occur as a consequence of positional changes of the sun. Even during dusk the neuron responds strongly and directionally selective to motion. (2) The neuronal spike rate is not significantly affected by short-term brightness changes caused by clouds temporarily occluding the sun. (3) In contrast, the neuronal activity is much affected by changes in ambient temperature. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. 
Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster. paper_content: Flies rely heavily on visual feedback for several aspects of flight control. As a fly approaches an object, the image projected across its retina expands, providing the fly with visual feedback that can be used either to trigger a collision-avoidance maneuver or a landing response. To determine how a fly makes the decision to land on or avoid a looming object, we measured the behaviors generated in response to an expanding image during tethered flight in a visual closed-loop flight arena. During these experiments, each fly varied its wing-stroke kinematics to actively control the azimuth position of a 15°×15° square within its visual field. Periodically, the square symmetrically expanded in both the horizontal and vertical directions. We measured changes in the fly's wing-stroke amplitude and frequency in response to the expanding square while optically tracking the position of its legs to monitor stereotyped landing responses. Although this stimulus could elicit both the landing responses and collision-avoidance reactions, separate pathways appear to mediate the two behaviors. For example, if the square is in the lateral portion of the fly's field of view at the onset of expansion, the fly increases stroke amplitude in one wing while decreasing amplitude in the other, indicative of a collision-avoidance maneuver. In contrast, frontal expansion elicits an increase in wing-beat frequency and leg extension, indicative of a landing response. To further characterize the sensitivity of these responses to expansion rate, we tested a range of expansion velocities from 100 to 10000° s^(-1). Differences in the latency of both the collision-avoidance reactions and the landing responses with expansion rate supported the hypothesis that the two behaviors are mediated by separate pathways. To examine the effects of visual feedback on the magnitude and time course of the two behaviors, we presented the stimulus under open-loop conditions, such that the fly's response did not alter the position of the expanding square. 
From our results we suggest a model that takes into account the spatial sensitivities and temporal latencies of the collision-avoidance and landing responses, and is sufficient to schematically represent how the fly uses integration of motion information in deciding whether to turn or land when confronted with an expanding object. --- paper_title: Optic flow regulation: the key to aircraft automatic guidance paper_content: Abstract We have developed a visually based autopilot which is able to make an air vehicle automatically take off, cruise and land, while reacting appropriately to wind disturbances (head wind and tail wind). This autopilot consists of a visual control system that adjusts the thrust so as to keep the downward optic flow (OF) at a constant value. This autopilot is therefore based on an optic flow regulation loop. It makes use of a sensor, which is known as an elementary motion detector (EMD). The functional structure of this EMD was inspired by that of the housefly, which was previously investigated at our Laboratory by performing electrophysiological recordings while applying optical microstimuli to single photoreceptor cells of the insect's compound eye. We built a proof-of-concept, tethered rotorcraft that circles indoors over an environment composed of contrasting features randomly arranged on the floor. The autopilot, which we have called OCTAVE (Optic flow based Control sysTem for Aerial VEhicles), enables this miniature (100 g) rotorcraft to carry out complex tasks such as ground avoidance and terrain following, to control risky maneuvers such as automatic take off and automatic landing, and to respond appropriately to wind disturbances. A single visuomotor control loop suffices to perform all these reputedly demanding tasks. As the electronic processing system required is extremely light-weight (only a few grams), it can be mounted on-board micro-air vehicles (MAVs) as well as larger unmanned air vehicles (UAVs) or even submarines and autonomous underwater vehicles (AUVs). But the OCTAVE autopilot could also provide guidance and/or warning signals to prevent the pilots of manned aircraft from colliding with shallow terrain, for example. --- paper_title: A Model of Temporal Adaptation in Fly Motion Vision paper_content: A computational model is proposed to account for the adaptive properties of the fly motion system. The response properties of motion-sensitive neurons in the fly are modelled using an underdamped adaptive scheme to adjust the time constants of delay filters in an array of Reichardt detectors. It is shown that the increase in both temporal resolution and sensitivity to velocity change observed following adaptation to constant motion can be understood as a consequence of local adaptation of the filter time constants on the basis of the outputs of elementary motion detectors. --- paper_title: Minimum viewing angle for visually guided ground speed control in bumblebees. paper_content: SUMMARY To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. 
The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23–30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered. --- paper_title: Man-made velocity estimators based on insect vision paper_content: The study of insect vision is of significant interest to engineers for inspiring the design of future motion-sensitive smart sensor devices, for collision avoidance applications. Although insects are relatively simple organisms compared to vertebrates, they are blessed with a very efficient visual system, which enables them to navigate with great ease and accuracy. Biologically inspired motion detection models are bound to replace the conventional machine vision technology because of their simplicity and significant advantages in a number of applications. The dominant model for insect motion detection, first proposed by Hassenstein and Reichardt in 1956, has gained widespread acceptance in the invertebrate vision community. The template model is another known model proposed later by Horridge in 1990, which permits simple tracking techniques and lends itself easily to both hardware and software. In this paper, we compare these two different motion detecting strategies. It was found from the data obtained from the intracellular recordings of the steady-state responses of wide-field neurons in the hoverfly Volucella that the shape of the curves obtained agrees with the theoretical predictions made by Dror. In order to compare this with the template model, we carried out an experiment to obtain the velocity response curves of the template model to the same image statistics. The results lead us to believe that the fly motion detector emulates a modified Reichardt correlator. In the second part of the paper, modifications are made to the Reichardt detector that improve its performance in velocity detection by reducing its dependence on contrast and image structure. Our recent neurobiological experiments suggest that adaptive mechanisms decrease the EMD (elementary motion detector) dependence on pattern contrast and improve reliability. So appropriate modelling of an adaptive feedback mechanism is carried out to normalize contrast of input signals in order to improve the reliability and robustness of velocity estimation. --- paper_title: Encoding of Naturalistic Optic Flow by a Population of Blowfly Motion-Sensitive Neurons paper_content: In sensory systems, information is encoded by the activity of populations of neurons.
To analyze the coding properties of neuronal populations sensory stimuli have usually been used that were much simpler than those encountered in real life. It has been possible only recently to stimulate visual interneurons of the blowfly with naturalistic visual stimuli reconstructed from eye movements measured during free flight. Therefore we now investigate with naturalistic optic flow the coding properties of a small neuronal population of identified visual interneurons in the blowfly, the so-called VS and HS neurons. These neurons are motion sensitive and directionally selective and are assumed to extract information about the animal's self-motion from optic flow. We could show that neuronal responses of VS and HS neurons are mainly shaped by the characteristic dynamical properties of the fly's saccadic flight and gaze strategy. Individual neurons encode information about both the rotational and the translational components of the animal's self-motion. Thus the information carried by individual neurons is ambiguous. The ambiguities can be reduced by considering neuronal population activity. The joint responses of different subpopulations of VS and HS neurons can provide unambiguous information about the three rotational and the three translational components of the animal's self-motion and also, indirectly, about the three-dimensional layout of the environment. --- paper_title: A neural model of how the brain computes heading from optic flow in realistic scenes paper_content: Abstract Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either explain data, or to demonstrate navigational competence in real-world environments without regard to behavioral and neural substrates. The current article develops a model that does both. The ViSTARS neural model describes interactions among neurons in the primate magnocellular pathway, including V1, MT + , and MST d . Model outputs are quantitatively similar to human heading data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes, and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1°/s do not affect heading estimates, but faster simulated rotation rates do, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. --- paper_title: Extraction of optical velocity by use of multi-input Reichardt detectors paper_content: We study the possibility of a metrical readout of velocity from an ensemble of Reichardt correlators. We show that with a suitable choice of spatial and temporal prefiltering of the correlator input it is possible to devise reliable (nonaliasing) velocity-tuned Reichardt detectors. However, because of the well-known covariance of spatial and velocity tuning of velocity detectors in biological motion vision, an ensemble consisting of these detectors has problems in extracting a pattern-invariant velocity. We find that pattern invariance of the motion estimate can be closely approximated with Reichardt correlators that sample the luminance pattern at more than the minimum number of two locations. 
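The entries just above (the insect-vision velocity estimators and the multi-input Reichardt detectors) and several entries below all build on the correlation-type elementary motion detector. As a reading aid, the following short Python sketch shows the basic Hassenstein-Reichardt correlator; the first-order low-pass filter used as the delay arm, the parameter values, and the drifting-grating test stimulus are illustrative assumptions rather than code from any of the cited papers.

```python
# Minimal sketch of a Hassenstein-Reichardt correlator (an illustration, not
# code from any cited paper): two neighbouring luminance inputs, a first-order
# low-pass filter standing in for the delay arm, and an opponent subtraction
# of the two mirror-symmetric half-detectors. Parameter values are assumptions.
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter used as the correlator's delay line."""
    out = np.zeros_like(signal)
    alpha = dt / (tau + dt)
    for k in range(1, len(signal)):
        out[k] = out[k - 1] + alpha * (signal[k] - out[k - 1])
    return out

def reichardt_emd(s_left, s_right, tau=0.05, dt=0.001):
    """Opponent correlator: R(t) = LP[s_left](t)*s_right(t) - LP[s_right](t)*s_left(t)."""
    return lowpass(s_left, tau, dt) * s_right - lowpass(s_right, tau, dt) * s_left

# Toy stimulus: a 3 Hz sinusoidal grating drifting past two photoreceptors
# whose spatial separation appears as a fixed phase offset between the signals.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
omega, phase_offset = 2 * np.pi * 3.0, 0.5
base, delayed = np.sin(omega * t), np.sin(omega * t - phase_offset)

for label, (s_left, s_right) in (("preferred direction", (base, delayed)),
                                 ("null direction", (delayed, base))):
    # Mean output is positive for the preferred direction, negative for the null one.
    print(label, np.mean(reichardt_emd(s_left, s_right, dt=dt)))
```

Swapping the two inputs reverses the sign of the mean output, which is the direction selectivity that the later T4/T5 entries examine at the circuit level.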
--- paper_title: Honeybees change their height to restore their optic flow paper_content: To further elucidate the mechanisms underlying insects' height and speed control, we trained outdoor honeybees to fly along a high-roofed tunnel, part of which was equipped with a moving floor. Honeybees followed the stationary part of the floor at a given height. On encountering the moving part of the floor, which moved in the same direction as their flight, honeybees descended and flew at a lower height, thus gradually restoring their ventral optic flow (OF) to a similar value to that they had perceived when flying over the stationary part of the floor. This was therefore achieved not by increasing their airspeed, but by lowering their height of flight. These results can be accounted for by a control system called an optic flow regulator, as proposed in previous studies. This visuo-motor control scheme explains how honeybees can navigate safely along tunnels on the sole basis of OF measurements, without any need to measure either their speed or the clearance from the surrounding walls. --- paper_title: A biomimetic vision-based hovercraft accounts for bees’ complex behaviour in various corridors paper_content: Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and those of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s−1 with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s−1). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performances in stiffly tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as in the presence of wind. --- paper_title: Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot paper_content: For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds.
Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M²APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M²APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. --- paper_title: Direct Observation of ON and OFF Pathways in the Drosophila Visual System paper_content: Summary Visual motion perception is critical to many animal behaviors, and flies have emerged as a powerful model system for exploring this fundamental neural computation. Although numerous studies have suggested that fly motion vision is governed by a simple neural circuit [1–3], the implementation of this circuit has remained mysterious for decades. Connectomics and neurogenetics have produced a surge in recent progress, and several studies have shown selectivity for light increments (ON) or decrements (OFF) in key elements associated with this circuit [4–7]. However, related studies have reached disparate conclusions about where this selectivity emerges and whether it plays a major role in motion vision [8–13]. To address these questions, we examined activity in the neuropil thought to be responsible for visual motion detection, the medulla, of Drosophila melanogaster in response to a range of visual stimuli using two-photon calcium imaging. We confirmed that the input neurons of the medulla, the LMCs, are not responsible for light-on and light-off selectivity. We then examined the pan-neural response of medulla neurons and found prominent selectivity for light-on and light-off in layers of the medulla associated with two anatomically derived pathways (L1/L2 associated) [14, 15]. We next examined the activity of prominent interneurons within each pathway (Mi1 and Tm1) and found that these neurons have corresponding selectivity for light-on or light-off. These results provide direct evidence that motion is computed in parallel light-on and light-off pathways, demonstrate that this selectivity emerges in neurons immediately downstream of the LMCs, and specify where crucial elements of motion computation occur. --- paper_title: A Genetic Push to Understand Motion Detection paper_content: Two articles in this issue of Neuron (Eichner et al. and Clark et al.) attack the problem of explaining how neuronal hardware in Drosophila implements the Reichardt motion detector, one of the most famous computational models in neuroscience, which has proven intractable up to now. --- paper_title: Complementary mechanisms create direction selectivity in the fly paper_content: The brain extracts information from signals delivered from the eyes and other sensory organs in order to direct behavior.
Understanding how the interactions and wiring of a multitude of individual nerve cells process and transmit this critical information to the brain is a fundamental goal in the field of neuroscience. One question many neuroscientists have tried to understand is how nerve cells in an animal’s brain detect direction when an animal sees movement of some kind – so-called motion vision. The raw signal from the light receptors in the eye does not discriminate whether the light moves in one direction or the other. So, the nerve cells in the brain must somehow compute the direction of movement based on the information relayed by the eye. For more than half a century, major debates have revolved around two rival models that could explain how motion vision works. Both models could in principle lead to neurons that prefer images moving in one direction over images moving in the opposite direction – so-called direction selectivity. In both models, the information about the changing light levels hitting two light-sensitive cells at two points on the eye are compared across time. In one model, signals from images moving in a cell’s preferred direction become amplified. In the other model, signals moving in the unfavored direction become canceled out. However, neither model perfectly explains motion vision. Now, Haag, Arenz et al. show that both models are partially correct and that the two mechanisms work together to detect motion across the field of vision more accurately. In the experiments, both models were tested in tiny fruit flies by measuring the activity of the first nerve cells that respond to the direction of visual motion. While each mechanism alone only produces a fairly weak and error-prone signal of direction, together the two mechanisms produce a stronger and more precise directional signal. Further research is now needed to determine which individual neurons amplify or cancel the signals to achieve such a high degree of direction selectivity. --- paper_title: Miniature curved artificial compound eyes. paper_content: In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. 
This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories. --- paper_title: Fly visual course control: behaviour, algorithms and circuits paper_content: Understanding how the brain controls behaviour is undisputedly one of the grand goals of neuroscience research, and the pursuit of this goal has a long tradition in insect neuroscience. However, appropriate techniques were lacking for a long time. Recent advances in genetic and recording techniques now allow the participation of identified neurons in the execution of specific behaviours to be interrogated. By focusing on fly visual course control, I highlight what has been learned about the neuronal circuit modules that control visual guidance in Drosophila melanogaster through the use of these techniques. --- paper_title: Brain Connectivity: Revealing the Fly Visual Motion Circuit paper_content: Summary A new semi-automated method for high-throughput identification of visual neurons and their synaptic partners has been combined with optical recording of activity and behavioral analysis to give the first complete description of an elementary circuit for detecting visual motion. --- paper_title: Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network paper_content: How do animals like insects perceive meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? In this paper, with respect to latest biological research progress made in underlying motion detection circuitry in the fly's preliminary visual system, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely the motion and the position pathways, for mimicking motion tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior. The motivated algorithms can be used to guide a system that extracts location information of moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) The proposed computational structure fulfills the characteristics of a putative signal tuning map of the fly's physiology. (2) It also satisfies a biological implication that visual fixation behaviors could be simply tuned via the position pathway; nevertheless, the motion-detecting pathway improves the tracking precision. (3) Contrary to segmentation and registration based computer vision techniques, its computational simplicity benefits the building of neuromorphic visual sensor for robots. --- paper_title: A common evolutionary origin for the ON- and OFF-edge motion detection pathways of the Drosophila visual system paper_content: Synaptic circuits for identified behaviors in the Drosophila brain have typically been considered from either a developmental or functional perspective without reference to how the circuits might have been inherited from ancestral forms. 
For example, two candidate pathways for ON- and OFF-edge motion detection in the visual system act via circuits that use respectively either T4 or T5, two cell types of the fourth neuropil, or lobula plate, that exhibit narrow-field direction-selective responses and provide input to wide-field tangential neurons. T4 or T5 both have four subtypes that terminate one each in the four strata of the lobula plate. Representatives are reported in a wide range of Diptera, and both cell types exhibit various similarities in: 1) the morphology of their dendritic arbors; 2) their four morphological and functional subtypes; 3) their cholinergic profile in Drosophila; 4) their input from the pathways of L3 cells in the first neuropil, or lamina, and by one of a pair of lamina cells, L1 (to the T4 pathway) and L2 (to the T5 pathway); and 5) their innervation by a single, wide-field contralateral tangential neuron from the central brain. Progenitors of both also express the gene atonal early in their proliferation from the inner anlage of the developing optic lobe, being alone among many other cell type progeny to do so. Yet T4 receives input in the second neuropil, or medulla, and T5 in the third neuropil or lobula. Here we suggest that these two cell types were originally one, that their ancestral cell population duplicated and split to innervate separate medulla and lobula neuropils, and that a fiber crossing – the internal chiasma – arose between the two neuropils. The split most plausibly occurred, we suggest, with the formation of the lobula as a new neuropil that formed when it separated from its ancestral neuropil to leave the medulla, suggesting additionally that medulla input neurons to T4 and T5 may also have had a common origin. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. 
These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Candidate Neural Substrates for Off-Edge Motion Detection in Drosophila paper_content: BACKGROUND: In the fly's visual motion pathways, two cell types, T4 and T5, are the first known relay neurons to signal small-field direction-selective motion responses [1]. These cells then feed into large tangential cells that signal wide-field motion. Recent studies have identified two types of columnar neurons in the second neuropil, or medulla, that relay input to T4 from L1, the ON-channel neuron in the first neuropil, or lamina, thus providing a candidate substrate for the elementary motion detector (EMD) [2]. Interneurons relaying the OFF channel from L1's partner, L2, to T5 are so far not known, however. RESULTS: Here we report that multiple types of transmedulla (Tm) neurons provide unexpectedly complex inputs to T5 at their terminals in the third neuropil, or lobula. From the L2 pathway, single-column input comes from Tm1 and Tm2 and multiple-column input from Tm4 cells. Additional input to T5 comes from Tm9, the medulla target of a third lamina interneuron, L3, providing a candidate substrate for L3's combinatorial action with L2 [3]. Most numerous, Tm2 and Tm9's input synapses are spatially segregated on T5's dendritic arbor, providing candidate anatomical substrates for the two arms of a T5 EMD circuit; Tm1 and Tm2 provide a second. Transcript profiling indicates that T5 expresses both nicotinic and muscarinic cholinoceptors, qualifying T5 to receive cholinergic inputs from Tm9 and Tm2, which both express choline acetyltransferase (ChAT). CONCLUSIONS: We hypothesize that T5 computes small-field motion signals by integrating multiple cholinergic Tm inputs using nicotinic and muscarinic cholinoceptors. --- paper_title: Columnar cells necessary for motion responses of wide-field visual interneurons in Drosophila paper_content: Wide-field motion-sensitive neurons in the lobula plate (lobula plate tangential cells, LPTCs) of the fly have been studied for decades. However, it has never been conclusively shown which cells constitute their major presynaptic elements. LPTCs are supposed to be rendered directionally selective by integrating excitatory as well as inhibitory input from many local motion detectors. Based on their stratification in the different layers of the lobula plate, the columnar cells T4 and T5 are likely candidates to provide some of this input. To study their role in motion detection, we performed whole-cell recordings from LPTCs in Drosophila with T4 and T5 cells blocked using two different genetically encoded tools. In these flies, motion responses were abolished, while flicker responses largely remained. We thus demonstrate that T4 and T5 cells indeed represent those columnar cells that provide directionally selective motion information to LPTCs.
Contrary to previous assumptions, flicker responses seem to be largely mediated by a third, independent pathway. This work thus represents a further step towards elucidating the complete motion detection circuitry of the fly. --- paper_title: A visual motion detection circuit suggested by Drosophila connectomics paper_content: Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations. --- paper_title: Optic-Flow-Based Collision Avoidance paper_content: Flying in and around caves, tunnels, and buildings demands more than one sensing modality. This article presented an optic-flow- based approach inspired by flying insects for avoiding lateral collisions. However, there were a few real-world scenarios in which optic flow sensing failed. This occurred when obstacles on approach were directly in front of the aircraft. Here, a simple sonar or infrared sensor can be used to trigger a quick transition into the hovering mode to avoid the otherwise fatal collision. Toward this end, we have demonstrated a fixed-wing prototype capable of manually transitioning from conventional cruise flight into the hovering mode. The prototype was then equipped with an IMU and a flight control system to automate the hovering process. The next step in this research is to automate the transition from cruise to hover flight. --- paper_title: Visual Motion: Cellular Implementation of a Hybrid Motion Detector paper_content: Visual motion detection in insects is mediated by three-input detectors that compare inputs of different spatiotemporal properties. A new modeling study shows that only a small subset of possible arrangements of the input elements provides high direction-selectivity.
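A recurring point in the surrounding entries, from the ON/OFF pathway studies to the three-input "hybrid" detector commentary above, is that brightness increments and decrements are handled by parallel channels (T4-like and T5-like) before correlation. The sketch below illustrates that organisation schematically in Python, using the same simple first-order filters as the correlator sketch earlier in this list; the half-wave rectification stage, the filter time constants and the moving-edge test stimulus are assumptions for illustration, not the circuit model of any single cited paper.

```python
# Schematic ON/OFF split in front of correlation-type motion detectors, as an
# illustration of the parallel-pathway idea discussed in these entries. The
# filter choices (first-order high-pass and low-pass) and time constants are
# assumptions, not parameters taken from the cited models.
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter (also reused to build the high-pass stage)."""
    out = np.zeros_like(signal)
    alpha = dt / (tau + dt)
    for k in range(1, len(signal)):
        out[k] = out[k - 1] + alpha * (signal[k] - out[k - 1])
    return out

def correlate(a, b, tau, dt):
    """Opponent Hassenstein-Reichardt-type correlation of two inputs."""
    return lowpass(a, tau, dt) * b - lowpass(b, tau, dt) * a

def on_off_emd(s_left, s_right, tau_hp=0.25, tau_lp=0.05, dt=0.001):
    """Split each input into ON/OFF half-waves, correlate within each channel, sum."""
    hp_left = s_left - lowpass(s_left, tau_hp, dt)    # keep brightness changes only
    hp_right = s_right - lowpass(s_right, tau_hp, dt)
    on = correlate(np.maximum(hp_left, 0.0), np.maximum(hp_right, 0.0), tau_lp, dt)     # T4-like
    off = correlate(np.maximum(-hp_left, 0.0), np.maximum(-hp_right, 0.0), tau_lp, dt)  # T5-like
    return on + off

# A brightening edge and a darkening edge sweeping in the same direction both
# yield a positive mean output, because each polarity is handled by its own channel.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
for edge in (+1.0, -1.0):  # +1: brightening edge, -1: darkening edge
    s_left = 0.5 + 0.5 * edge * np.tanh(10.0 * (t - 0.8))
    s_right = 0.5 + 0.5 * edge * np.tanh(10.0 * (t - 1.0))  # edge reaches right input later
    print("ON edge" if edge > 0 else "OFF edge", np.mean(on_off_emd(s_left, s_right, dt=dt)))
```

Because each polarity is correlated within its own channel, bright and dark edges moving in the same direction produce the same-signed output, mirroring the ON-ON/OFF-OFF selectivity reported in the blocking experiments cited here.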
--- paper_title: Directionally Selective Motion Detection by Insect Neurons paper_content: Animals have several good reasons for detecting motion with their eyes. First, the motion of other animals — potential preys, mates, intruders or predators — provides essential information on which to base vital moves such as escape or chase. Secondly, information about self-motion is crucial, especially in the context of navigation, course stabilization, obstacle avoidance, and collision-free goal reaching. In fact, the wealth of information provided by passive, non-contact self-motion evaluation in visual systems has been likened to a kind of “visual kinaesthesis” (Gibson 1958). Even the 3D structure of the environment can be picked up by a moving observer (revs. Collett and Harkness 1982; Buchner 1984; Nakayama 1985; Hildreth and Koch 1987). Von Helmholtz (1867) was the first to clearly state the importance of this “motion parallax” in locomotion, and Exner (1891) proposed that arthropods make use of motion parallax as well as stereopsis to estimate distances (see also Horridge 1986). --- paper_title: A Class of Visual Neurons with Wide-Field Properties Is Required for Local Motion Detection paper_content: Summary Visual motion cues are used by many animals to guide navigation across a wide range of environments. Long-standing theoretical models have made predictions about the computations that compare light signals across space and time to detect motion. Using connectomic and physiological approaches, candidate circuits that can implement various algorithmic steps have been proposed in the Drosophila visual system. These pathways connect photoreceptors, via interneurons in the lamina and the medulla, to direction-selective cells in the lobula and lobula plate. However, the functional architecture of these circuits remains incompletely understood. Here, we use a forward genetic approach to identify the medulla neuron Tm9 as critical for motion-evoked behavioral responses. Using in vivo calcium imaging combined with genetic silencing, we place Tm9 within motion-detecting circuitry. Tm9 receives functional inputs from the lamina neurons L3 and, unexpectedly, L1 and passes information onto the direction-selective T5 neuron. Whereas the morphology of Tm9 suggested that this cell would inform circuits about local points in space, we found that the Tm9 spatial receptive field is large. Thus, this circuit informs elementary motion detectors about a wide region of the visual scene. In addition, Tm9 exhibits sustained responses that provide a tonic signal about incoming light patterns. Silencing Tm9 dramatically reduces the response amplitude of T5 neurons under a broad range of different motion conditions. Thus, our data demonstrate that sustained and wide-field signals are essential for elementary motion processing. --- paper_title: The Temporal Tuning of the Drosophila Motion Detectors Is Determined by the Dynamics of Their Input Elements paper_content: Summary Detecting the direction of motion contained in the visual scene is crucial for many behaviors. However, because single photoreceptors only signal local luminance changes, motion detection requires a comparison of signals from neighboring photoreceptors across time in downstream neuronal circuits. For signals to coincide on readout neurons that thus become motion and direction selective, different input lines need to be delayed with respect to each other. 
Classical models of motion detection rely on non-linear interactions between two inputs after different temporal filtering. However, recent studies have suggested the requirement for at least three, not only two, input signals. Here, we comprehensively characterize the spatiotemporal response properties of all columnar input elements to the elementary motion detectors in the fruit fly, T4 and T5 cells, via two-photon calcium imaging. Between these input neurons, we find large differences in temporal dynamics. Based on this, computer simulations show that only a small subset of possible arrangements of these input elements maps onto a recently proposed algorithmic three-input model in a way that generates a highly direction-selective motion detector, suggesting plausible network architectures. Moreover, modulating the motion detection system by octopamine-receptor activation, we find the temporal tuning of T4 and T5 cells to be shifted toward higher frequencies, and this shift can be fully explained by the concomitant speeding of the input elements. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: Processing properties of ON and OFF pathways for Drosophila motion detection paper_content: Four medulla neurons implement two critical processing steps to incoming signals in Drosophila motion detection. --- paper_title: The Emergence of Directional Selectivity in the Visual Motion Pathway of Drosophila paper_content: Summary The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. 
In Drosophila , the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models. --- paper_title: Dissection of the Peripheral Motion Channel in the Visual System of Drosophila melanogaster paper_content: In the eye, visual information is segregated into modalities such as color and motion, these being transferred to the central brain through separate channels. Here, we genetically dissect the achromatic motion channel in the fly Drosophila melanogaster at the level of the first relay station in the brain, the lamina, where it is split into four parallel pathways (L1-L3, amc/T1). The functional relevance of this divergence is little understood. We now show that the two most prominent pathways, L1 and L2, together are necessary and largely sufficient for motion-dependent behavior. At high pattern contrast, the two pathways are redundant. At intermediate contrast, they mediate motion stimuli of opposite polarity, L2 front-to-back, L1 back-to-front motion. At low contrast, L1 and L2 depend upon each other for motion processing. Of the two minor pathways, amc/T1 specifically enhances the L1 pathway at intermediate contrast. L3 appears not to contribute to motion but to orientation behavior. --- paper_title: ON and OFF pathways in Drosophila motion vision paper_content: Ramon y Cajal, the founding father of neuroscience, observed similarities between the vertebrate retina and the insect eye, but that was based purely on anatomy. Using state-of-the-art genetics and electrophysiology in the fruitfly, these authors distinguish motion-sensitive neurons responding to abrupt increases in light from those specific to light decrements, thus bringing the similarity with vertebrate circuitry to the functional level. --- paper_title: Figure Tracking by Flies Is Supported by Parallel Visual Streams paper_content: Summary Visual figures may be distinguished based on elementary motion or higher-order non-Fourier features, and flies track both [1]. The canonical elementary motion detector, a compact computation for Fourier motion direction and amplitude, can also encode higher-order signals provided elaborate preprocessing [2–4]. However, the way in which a fly tracks a moving figure containing both elementary and higher-order signals has not been investigated. 
Using a novel white noise approach, we demonstrate that (1) the composite response to an object containing both elementary motion (EM) and uncorrelated higher-order figure motion (FM) reflects the linear superposition of each component; (2) the EM-driven component is velocity-dependent, whereas the FM component is driven by retinal position; (3) retinotopic variation in EM and FM responses are different from one another; (4) the FM subsystem superimposes saccadic turns upon smooth pursuit; and (5) the two systems in combination are necessary and sufficient to predict the full range of figure tracking behaviors, including those that generate no EM cues at all [1]. This analysis requires an extension of the model that fly motion vision is based on simple elementary motion detectors [5] and provides a novel method to characterize the subsystems responsible for the pursuit of visual figures. --- paper_title: The First Steps in Drosophila Motion Detection paper_content: The visual system, with its ability to perceive motion, is crucial for most animals to walk or fly steadily. Theoretical models of motion detection exist, but the underlying cellular mechanisms are still poorly understood. In this issue of Neuron, Rister and colleagues dissect the function of neuronal subtypes in the optic lobe of Drosophila to reveal their role in motion detection. --- paper_title: Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression paper_content: Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila , direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein–Reichardt correlator and the Barlow–Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation. SIGNIFICANCE STATEMENT Feature selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. 
While insect elementary motion detectors have served as premiere experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges. --- paper_title: Seeing Things in Motion: Models, Circuits, and Mechanisms paper_content: Motion vision provides essential cues for navigation and course control as well as for mate, prey, or predator detection. Consequently, neurons responding to visual motion in a direction-selective way are found in almost all species that see. However, directional information is not explicitly encoded at the level of a single photoreceptor. Rather, it has to be computed from the spatio-temporal excitation level of at least two photoreceptors. How this computation is done and how this computation is implemented in terms of neural circuitry and membrane biophysics have remained the focus of intense research over many decades. Here, we review recent progress made in this area with an emphasis on insects and the vertebrate retina. --- paper_title: A directional tuning map of Drosophila elementary motion detectors paper_content: This study uses calcium imaging to show that T4 and T5 neurons are divided in specific subpopulations responding to motion in four cardinal directions, and are specific to ON versus OFF edges, respectively; when either T4 or T5 neurons were genetically blocked, tethered flies walking on air-suspended beads failed to respond to the corresponding visual stimuli. --- paper_title: Common circuit design in fly and mammalian motion vision paper_content: Motion-sensitive neurons have long been studied in both the mammalian retina and the insect optic lobe, yet striking similarities have become obvious only recently. Detailed studies at the circuit level revealed that, in both systems, (i) motion information is extracted from primary visual information in parallel ON and OFF pathways; (ii) in each pathway, the process of elementary motion detection involves the correlation of signals with different temporal dynamics; and (iii) primary motion information from both pathways converges at the next synapse, resulting in four groups of ON-OFF neurons, selective for the four cardinal directions. Given that the last common ancestor of insects and mammals lived about 550 million years ago, this general strategy seems to be a robust solution for how to compute the direction of visual motion with neural hardware. --- paper_title: A biomimetic vision-based hovercraft accounts for bees’ complex behaviour in various corridors paper_content: Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and those of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the ?lateral optic flow regulation autopilot?, which we previously studied computer simulations. 
This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s−1 with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s−1). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performances in stiffly tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as in the presence of wind. --- paper_title: Internal Structure of the Fly Elementary Motion Detector paper_content: Recent experiments have shown that motion detection in Drosophila starts with splitting the visual input into two parallel channels encoding brightness increments (ON) or decrements (OFF). This suggests the existence of either two (ON-ON, OFF-OFF) or four (for all pairwise interactions) separate motion detectors. To decide between these possibilities, we stimulated flies using sequences of ON and OFF brightness pulses while recording from motion-sensitive tangential cells. We found direction-selective responses to sequences of same sign (ON-ON, OFF-OFF), but not of opposite sign (ON-OFF, OFF-ON), refuting the existence of four separate detectors. Based on further measurements, we propose a model that reproduces a variety of additional experimental data sets, including ones that were previously interpreted as support for four separate detectors. Our experiments and the derived model mark an important step in guiding further dissection of the fly motion detection circuit. --- paper_title: Functional Specialization of Parallel Motion Detection Circuits in the Fly paper_content: In the fly Drosophila melanogaster, photoreceptor input to motion vision is split into two parallel pathways as represented by first-order interneurons L1 and L2 (Rister et al., 2007; Joesch et al., 2010). However, how these pathways are functionally specialized remains controversial. One study (Eichner et al., 2011) proposed that the L1-pathway evaluates only sequences of brightness increments (ON-ON), while the L2-pathway processes exclusively brightness decrements (OFF-OFF). Another study (Clark et al., 2011) proposed that each of the two pathways evaluates both ON-ON and OFF-OFF sequences. To decide between these alternatives, we recorded from motion-sensitive neurons in flies in which the output from either L1 or L2 was genetically blocked. We found that blocking L1 abolishes ON-ON responses but leaves OFF-OFF responses intact. The opposite was true when the output from L2 was blocked.
We conclude that the L1 and L2 pathways are functionally specialized to detect ON-ON and OFF-OFF sequences, respectively. --- paper_title: Defining the Computational Structure of the Motion Detector in Drosophila paper_content: Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. --- paper_title: Performance of a Visual Fixation Model in an Autonomous Micro Robot Inspired by Drosophila Physiology paper_content: In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers but also critical in providing effective solutions to future robotics. This paper presents a novel modelling of dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on the embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similarly to insects; the image processing frequency can maintain 25–45 Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation. --- paper_title: Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network paper_content: How do animals like insects perceive meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? In this paper, with respect to latest biological research progress made in underlying motion detection circuitry in the fly's preliminary visual system, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely the motion and the position pathways, for mimicking motion tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior.
The motivated algorithms can be used to guide a system that extracts location information of moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) The proposed computational structure fulfills the characteristics of a putative signal tuning map of the fly's physiology. (2) It also satisfies a biological implication that visual fixation behaviors could be simply tuned via the position pathway; nevertheless, the motion-detecting pathway improves the tracking precision. (3) Contrary to segmentation and registration based computer vision techniques, its computational simplicity benefits the building of neuromorphic visual sensor for robots. --- paper_title: An improved LPTC neural model for background motion direction estimation paper_content: A class of specialized neurons, called lobula plate tangential cells (LPTCs) has been shown to respond strongly to wide-field motion. The classic model, elementary motion detector (EMD) and its improved model, two-quadrant detector (TQD) have been proposed to simulate LPTCs. Although EMD and TQD can percept background motion, their outputs are so cluttered that it is difficult to discriminate actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly-found transmedullary neuron Tm9 whose physiological properties do not map onto EMD and TQD. This proposed max operation mechanism is able to improve the detection performance of TQD in cluttered background by filtering out irrelevant motion signals. We will demonstrate the functionality of this proposed mechanism in wide-field motion perception. --- paper_title: Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background paper_content: The nature endows animals robust vision systems for extracting and recognizing different motion cues, detecting predators, chasing preys/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with preference to certain orientation visual stimulus, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements on four cardinal directions. It is based on an architecture of ON and OFF visual pathways underlies a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same polarity ON/OFF cells. Each pair-wised combination is filtered with dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. 
The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSNs model can extract useful directional motion cues from cluttered background robustly and timely, which hits at potential of quick implementation in vision-based micro mobile robots; moreover, it also represents better speed response compared to a state-of-the-art elementary motion detector. --- paper_title: Retinotopic Organization of Small-Field-Target-Detecting Neurons in the Insect Visual System paper_content: BACKGROUND ::: Despite having tiny brains and relatively low-resolution compound eyes, many fly species frequently engage in precisely controlled aerobatic pursuits of conspecifics. Recent investigations into high-order processing in the fly visual system have revealed a class of neurons, coined small-target-motion detectors (STMDs), capable of responding robustly to target motion against the motion of background clutter. Despite limited spatial acuity in the insect eye, these neurons display exquisite sensitivity to small targets. ::: ::: ::: RESULTS ::: We recorded intracellularly from morphologically identified columnar neurons in the lobula complex of the hoverfly Eristalis tenax. We show that these columnar neurons with exquisitely small receptive fields, like their large-field counterparts recently described from both male and female flies, have an extreme selectivity for the motion of small targets. In doing so, we provide the first physiological characterization of small-field neurons in female flies. These retinotopically organized columnar neurons include both direction-selective and nondirection-selective classes covering a large area of visual space. ::: ::: ::: CONCLUSIONS ::: The retinotopic arrangement of lobula columnar neurons sensitive to the motion of small targets makes a strong case for these neurons as important precursors in the local processing of target motion. Furthermore, the continued response of STMDs with such small receptive fields to the motion of small targets in the presence of moving background clutter places further constraints on the potential mechanisms underlying their small-target tuning. --- paper_title: Neural mechanisms underlying target detection in a dragonfly centrifugal neuron paper_content: Visual identification of targets is an important task for many animals searching for prey or conspecifics. Dragonflies utilize specialized optics in the dorsal acute zone, accompanied by higher-order visual neurons in the lobula complex, and descending neural pathways tuned to the motion of small targets. While recent studies describe the physiology of insect small target motion detector (STMD) neurons, little is known about the mechanisms that underlie their exquisite sensitivity to target motion. Lobula plate tangential cells (LPTCs), a group of neurons in dipteran flies selective for wide-field motion, have been shown to take input from local motion detectors consistent with the classic correlation model developed by Hassenstein and Reichardt in the 1950s. We have tested the hypothesis that similar mechanisms underlie the response of dragonfly STMDs. We show that an anatomically characterized centrifugal STMD neuron (CSTMD1) gives responses that depend strongly on target contrast, a clear prediction of the correlation model. 
Target stimuli are more complex in spatiotemporal terms than the sinusoidal grating patterns used to study LPTCs, so we used a correlation-based computer model to predict response tuning to velocity and width of moving targets. We show that increasing target width in the direction of travel causes a shift in response tuning to higher velocities, consistent with our model. Finally, we show how the morphology of CSTMD1 allows for impressive spatial interactions when more than one target is present in the visual field. --- paper_title: Insect Detection of Small Targets Moving in Visual Clutter paper_content: Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. --- paper_title: Object- and self-movement detectors in the ventral nerve cord of the dragonfly paper_content: (1) Descending, movement-sensitive visual interneurons in the ventral nerve cord of the dragonfly, Anax junius, fall into two categories, based upon their responses to a variety of stimulus patterns. One group (object-movement detectors) is sensitive only to movement of small patterns; the other (self-movement detectors) responds maximally to movement of very large patterns or to rotation of the animal in the lighted laboratory. (2) Object-movement-detector responses to repeated identical stimuli habituate very rapidly. The habituation is region specific; pattern movement elsewhere in the receptive field elicits a renewed response (Fig. 3). The habituation is very long lasting and is not subject to dishabituation by mechanical or visual stimulation. Self-movement detectors, in contrast, show little or no habituation (Fig. 3). (3) Increasing the extent of the stimulus pattern in the direction of motion decreases responses of object-movement detectors slightly and greatly increases self-movement-detector responses (Fig. 4). (4) Increasing the length of the advancing edges perpendicular to the line of motion dramatically reduces object-movement-detector responses (Fig. 5). Such increases enhance self-movement-detector responses only slightly, unless they result in the pattern occupying especially sensitive regions of the receptive field (Fig. 7). (5)
Over a wide velocity range, self-movement-detector responses are not dependent on pattern wavelength (Fig. 8). (6) These results indicate that the parameter upon which the object/world discrimination is based is different for the two groups of interneurons. The critical parameter for the self-movement detectors is the extent of the pattern in the direction of motion, whereas for the object-movement detectors, the critical parameter is the extent of the pattern perpendicular to the direction of motion. --- paper_title: Local and large-range inhibition in feature detection. paper_content: Lateral inhibition is perhaps the most ubiquitous of neuronal mechanisms, having been demonstrated in early stages of processing in many different sensory pathways of both mammals and invertebrates. Recent work challenges the long-standing view that assumes that similar mechanisms operate to tune neuronal responses to higher order properties. Scant evidence for lateral inhibition exists beyond the level of the most peripheral stages of visual processing, leading to suggestions that many features of the tuning of higher order visual neurons can be accounted for by the receptive field and other intrinsic coding properties of visual neurons. Using insect target neurons as a model, we present unequivocal evidence that feature tuning is shaped not by intrinsic properties but by potent spatial lateral inhibition operating well beyond the first stages of visual processing. In addition, we present evidence for a second form of higher-order spatial inhibition--a long-range interocular transfer of information that we argue serves a role in establishing interocular rivalry and thus potentially a neural substrate for directing attention to single targets in the presence of distracters. In so doing, we demonstrate not just one, but two levels of spatial inhibition acting beyond the level of peripheral processing. --- paper_title: Visual control of flight behaviour in the hoverfly Syritta pipiens L. paper_content: (1) The visually guided flight behaviour of groups of male and female Syritta pipiens was filmed at 50 f.p.s. and analysed frame by frame. Sometimes the flies cruise around ignoring each other. At other times males but not females track other flies closely, during which the body axis points accurately towards the leading fly. (2) The eyes of males but not females have a forward directed region of enlarged facets where the resolution is 2 to 3 times greater than elsewhere. The inter-ommatidial angle in this "fovea" is 0.6°. (3) Targets outside the fovea are fixated by accurately directed, intermittent, open-loop body saccades. Fixation of moving targets within the fovea is maintained by "continuous" tracking in which the angular position of the target on the retina (Θe) is continuously translated into the angular velocity of the tracking fly (\(\dot{\Phi}_p\)) with a latency of roughly 20 ms (\(\dot{\Phi}_p = k\Theta_e\), where k ≈ 30 s−1). (4) The tracking fly maintains a roughly constant distance (in the range 5–15 cm) from the target. If the distance between the two flies is more than some set value the fly moves forwards, if it is less the fly moves backwards. The forward or backward velocity (\(\dot{F}_p\)) increases with the difference (D − D0) between the actual and desired distance (\(\dot{F}_p = k'(D - D_0)\), where k′ = 10 to 20 s−1).
It is argued that the fly computes distance by measuring the vertical subtense of the target image on the retina. (5) Angular tracking is sometimes, at the tracking fly's choice, supplemented by changes in sideways velocity. The fly predicts a suitable sideways velocity probably on the basis of a running average of Θe, but not its instantaneous value. Alternatively, when the target is almost stationary, angular tracking may be replaced by sideways tracking. In this case the sideways velocity (\(\dot{S}\)) is related to Θe about 30 ms earlier (\(\dot{S}_p = k''\Theta_e\), where k″ = 2.5 cm · s−1 · deg−1), and the angular tracking system is inoperative. (6) When the leading fly settles the tracking fly often moves rapidly sideways in an arc centred on the leading fly. During these voluntary sideways movements the male continues to point his head at the target. He does this not by correcting Θe, which is usually zero, but by predicting the angular velocity needed to maintain fixation. This prediction requires knowledge of both the distance between the flies and the tracking fly's sideways velocity. It is shown that the fly tends to over-estimate distance by about 20%. (7) When two males meet head on during tracking the pursuit may be cut short as a result of vigorous sideways oscillations of both flies. These side-to-side movements are synchronised so that the males move in opposite directions, and the oscillations usually grow in size until the males separate. The angular tracking system is active during "wobbling" and it is shown that to synchronise the two flies the sideways tracking system must also be operative. The combined action of both systems in the two flies leads to instability and so provides a simple way of automatically separating two males. (8) Tracking is probably sexual in function and often culminates in a rapid dart towards the leading fly, after the latter has settled. During these "rapes" the male accelerates continuously at about 500 cm · s−2, turning just before it lands so that it is in the copulatory position. The male rapes flies of either sex indicating that successful copulation involves more trial and error than recognition. (9) During cruising flight the angular velocity of the fly is zero except for brief saccadic turns. There is often a sideways component to flight which means that the body axis is not necessarily in the direction of flight. Changes in flight direction are made either by means of saccades or by adjusting the ratio of sideways to forward velocity (\(\dot{S}/\dot{F}\)). Changes in body axis are frequently made without any change in the direction of flight. On these occasions, when the fly makes an angular saccade, it simultaneously adjusts \(\dot{S}/\dot{F}\) by an appropriate amount. (10) Flies change course when they approach flowers using the same variety of mechanisms: a series of saccades, adjustments to \(\dot{S}/\dot{F}\), or by a mixture of the two. (11) The optomotor response, which tends to prevent rotation except during saccades, is active both during cruising and tracking flight. --- paper_title: Identified target-selective visual interneurons descending from the dragonfly brain paper_content: (1)
Eight large interneurons descending in the dragonfly (Aeshna umbrosa, Anax junius) ventral nerve cord from the brain to the thoracic ganglia were identified anatomically with intracellular dye injection (Fig. 3). All eight were strictly visual and responded only to movements of small patterns, such as black squares, 'targets', moving on a white background. (2) The target interneurons all projected from the protocerebrum at least as far as the metathoracic ganglion. Within the protocerebrum they arborized in the posterodorsal neuropil region, near the base of the circumesophageal connectives (Fig. 3). (3) The receptive fields of six of the cells were large, including most of the forward hemisphere of vision. For five of these, spiking responses were often restricted to a much smaller region within the receptive field, with stimulation of other areas yielding only subthreshold responses (Figs. 4 and 5, Table 1). (4) The pattern of selectivity for target size varied, with some neurons responding only to small targets, some showing consistent responses over a wide range of target sizes, and one preferring larger targets (Fig. 6, Table 1). (5) Five of the interneurons were directionally selective. Movement in the antipreferred direction elicited hyperpolarizing responses in two of them. Movements of large patterns, such as a checkerboard pattern covering the forward hemisphere, elicited opposite directional responses, i.e., hyperpolarizations in the preferred target direction and subthreshold depolarizations in the antipreferred direction (Fig. 7). A large pattern moving in any direction inhibited the response to target movement (Fig. 8). (6) These neurons mediate, in part, the visual control of flight orientation. I propose that they convey turning signals to the wing motor in response to objects moving relative to the animal. --- paper_title: Neural specializations for small target detection in insects paper_content: Despite being equipped with low-resolution eyes and tiny brains, many insects show exquisite abilities to detect and pursue targets even in highly textured surrounds. Target tracking behavior is subserved by neurons that are sharply tuned to the motion of small high-contrast targets. These neurons respond robustly to target motion, even against self-generated optic flow. A recent model, supported by neurophysiology, generates target selectivity by being sharply tuned to the unique spatiotemporal profile associated with target motion. Target neurons are likely connected in a complex network where some provide more direct output to behavior, whereas others serve an inter-regulatory role. These interactions may regulate attention and aid in the robust detection of targets in clutter observed in behavior. --- paper_title: Small object detection neurons in female hoverflies paper_content: While predators such as dragonflies are dependent on visual detection of moving prey, social interactions make conspecific detection equally important for many non-predatory insects. Specialized 'acute zones' associated with target detection have evolved in several insect groups and are a prominent male-specific feature in many dipteran flies. The physiology of target selective neurons associated with these specialized eye regions has previously been described only from male flies.
We show here that female hoverflies (Eristalis tenax) have several classes of neurons within the third optic ganglion (lobula) capable of detecting moving objects smaller than 1 degrees . These neurons have frontal receptive fields covering a large part of the ipsilateral world and are tuned to a broad range of target speeds and sizes. This could make them suitable for detecting targets under a range of natural conditions such as required during predator avoidance or conspecific interactions. --- paper_title: Modelling LGMD2 visual neuron system paper_content: Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 had been successfully used in robot navigation to avoid impending collision. LGMD2 also responds to looming stimuli in depth, and shares most the same properties with LGMD1; however, LGMD2 has its specific collision selective responds when dealing with different visual stimulus. Therefore, in this paper, we propose a novel way to model LGMD2, in order to emulate its predicted bio-functions, moreover, to solve some defects of previous LGMD1 computational models. The mechanism of ON and OFF cells, as well as bio-inspired nonlinear functions, are introduced in our model, to achieve LGMD2's collision selectivity. Our model has been tested by a miniature mobile robot in real time. The results suggested this model has an ideal performance in both software and hardware for collision recognition. --- paper_title: Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot paper_content: The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been modeled and successfully applied in robotic vision system for perceiving potential collisions in an efficient and reliable manner. In this research, we conduct binocular neuronal models, for the first time combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions of this research: (1) The arena tests involving multiple robots verified the effectiveness and robustness of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, which fulfill corresponding biological research. (3) The utilized micro robot may also benefit researches on other embedded vision systems as well as swarm robotics. --- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. 
Such collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is only selective to detect looming dark objects against bright background in depth, represents swooping predators, a situation which is similar to ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are first, enhancing the collision selectivity in a bio-inspired way, via constructing a computing efficient visual sensor, and realizing the revealed specific characteristic sofLGMD2. Second, we applied the neural network to help rearrange path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot. --- paper_title: Review of state-of-the-art artificial compound eye imaging systems. paper_content: The natural compound eye has received much attention in recent years due to its remarkable properties, such as its large field of view (FOV), compact structure, and high sensitivity to moving objects. Many studies have been devoted to mimicking the imaging system of the natural compound eye. The paper gives a review of state-of-the-art artificial compound eye imaging systems. Firstly, we introduce the imaging principle of three types of natural compound eye. Then, we divide current artificial compound eye imaging systems into four categories according to the difference of structural composition. Readers can easily grasp methods to build an artificial compound eye imaging system from the perspective of structural composition. Moreover, we compare the imaging performance of state-of-the-art artificial compound eye imaging systems, which provides a reference for readers to design system parameters of an artificial compound eye imaging system. Next, we present the applications of the artificial compound eye imaging system including imaging with a large FOV, imaging with high resolution, object distance detection, medical imaging, egomotion estimation, and navigation. Finally, an outlook of the artificial compound eye imaging system is highlighted. --- paper_title: Computation of object approach by a system of visual motion-sensitive neurons in the crab Neohelice paper_content: Similar to most visual animals, crabs perform proper avoidance responses to objects directly approaching them. The monostratified lobula giant neurons of type 1 (MLG1) of crabs constitute an ensemble of 14–16 bilateral pairs of motion-detecting neurons projecting from the lobula (third optic neuropile) to the midbrain, with receptive fields that are distributed over the extensive visual field of the animal's eye. Considering the crab Neohelice (previously Chasmagnathus) granulata, here we describe the response of these neurons to looming stimuli that simulate objects approaching the animal on a collision course. We found that the peak firing time of MLG1 acts as an angular threshold detector signaling, with a delay of δ = 35 ms, the time at which an object reaches a fixed angular threshold of 49°. Using in vivo intracellular recordings, we detected the existence of excitatory and inhibitory synaptic currents that shape the neural response. Other functional features identified in the MLG1 neurons were phas... 
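The 'angular threshold' reading of the crab MLG1 result above lends itself to a short worked example. This is only an illustrative sketch, not code from the cited study: for an object of half-size l approaching at constant speed v, the subtended angle t seconds before collision is θ(t) = 2·arctan(l/(v·t)), so a fixed threshold θ* is crossed at t* = l/(v·tan(θ*/2)) before contact. The 49° threshold and 35 ms delay come from the abstract; the object size and speed below are invented for the example.

```python
import numpy as np

def angle_deg(half_size_m, speed_mps, t_to_collision_s):
    """Visual angle (degrees) subtended by an object of half-size l
    approaching at constant speed v, t seconds before collision."""
    return 2.0 * np.degrees(np.arctan(half_size_m / (speed_mps * t_to_collision_s)))

def threshold_crossing_time(half_size_m, speed_mps, theta_star_deg=49.0):
    """Time before collision at which theta(t*) = theta*, i.e.
    t* = l / (v * tan(theta*/2))."""
    return half_size_m / (speed_mps * np.tan(np.radians(theta_star_deg / 2.0)))

if __name__ == "__main__":
    l, v = 0.05, 0.5                          # hypothetical 10 cm object at 0.5 m/s
    t_star = threshold_crossing_time(l, v)    # crossing of the 49 deg threshold
    t_peak = t_star - 0.035                   # abstract: peak follows with 35 ms delay
    print(f"49 deg threshold crossed {t_star*1000:.0f} ms before collision")
    print(f"predicted peak firing {t_peak*1000:.0f} ms before collision")
    print(f"angle at t*: {angle_deg(l, v, t_star):.1f} deg")   # sanity check, ~49 deg
```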
--- paper_title: Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation paper_content: Abstract Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector — the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: A Genetic Push to Understand Motion Detection paper_content: Two articles in this issue of Neuron (Eichner et al. and Clark et al.) attack the problem of explaining how neuronal hardware in Drosophila implements the Reichardt motion detector, one of the most famous computational models in neuroscience, which has proven intractable up to now. --- paper_title: Internal Structure of the Fly Elementary Motion Detector paper_content: Recent experiments have shown that motion detection in Drosophila starts with splitting the visual input into two parallel channels encoding brightness increments (ON) or decrements (OFF). This suggests the existence of either two (ON-ON, OFF-OFF) or four (for all pairwise interactions) separate motion detectors. To decide between these possibilities, we stimulated flies using sequences of ON and OFF brightness pulses while recording from motion-sensitive tangential cells. We found direction-selective responses to sequences of same sign (ON-ON, OFF-OFF), but not of opposite sign (ON-OFF, OFF-ON), refuting the existence of four separate detectors. Based on further measurements, we propose a model that reproduces a variety of additional experimental data sets, including ones that were previously interpreted as support for four separate detectors. Our experiments and the derived model mark an important step in guiding further dissection of the fly motion detection circuit. 
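The pairwise ON-ON / OFF-OFF correlator scheme discussed in the Drosophila motion-detector entries above can be sketched in a few lines. The snippet below is only a minimal illustration of that idea, a Hassenstein-Reichardt-style delay-and-correlate stage applied separately to half-wave rectified ON and OFF channels, and is not an implementation of any one cited model; the grating stimulus, time constant and sampling are arbitrary choices.

```python
import numpy as np

def lowpass(x, tau, dt):
    """First-order low-pass filter used as the correlator's delay line (axis 0 = time)."""
    a = dt / (tau + dt)
    y = np.zeros_like(x)
    for t in range(1, x.shape[0]):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def two_quadrant_emd(frames, tau=0.05, dt=0.01):
    """frames: (time, space) luminance array. Returns the summed output of the
    ON-ON and OFF-OFF correlators: positive for rightward, negative for leftward motion."""
    contrast = np.diff(frames, axis=0)           # temporal brightness change
    on = np.maximum(contrast, 0.0)               # brightness increments
    off = np.maximum(-contrast, 0.0)             # brightness decrements
    out = np.zeros_like(contrast[:, :-1])
    for chan in (on, off):                       # same-sign pairings only, no ON-OFF cross terms
        delayed = lowpass(chan, tau, dt)
        # mirror-symmetric correlator between neighbouring sampling points
        out += delayed[:, :-1] * chan[:, 1:] - chan[:, :-1] * delayed[:, 1:]
    return out

if __name__ == "__main__":
    t = np.arange(0, 1, 0.01)[:, None]
    x = np.arange(40)[None, :]
    moving_grating = np.sin(2 * np.pi * (0.1 * x - 2.0 * t))   # drifts rightward
    print("mean EMD output (positive for rightward motion):",
          two_quadrant_emd(moving_grating).mean())
```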
--- paper_title: Bio-inspired small target motion detector with a new lateral inhibition mechanism paper_content: In nature, it is an important task for animals to detect small targets which move within cluttered background. In recent years, biologists have found that a class of neurons in the lobula complex, called STMDs (small target motion detectors) which have extreme selectivity for small targets moving within visual clutter. At the same time, some researchers assert that lateral inhibition plays an important role in discriminating the motion of the target from the motion of the background, even account for many features of the tuning of higher order visual neurons. Inspired by the finding that complete lateral inhibition can only be seen when the motion of the central region is identical to the motion of the peripheral region, we propose a new lateral inhibition mechanism combined with motion velocity and direction to improve the performance of ESTMD model (elementary small target motion detector). In this paper, we will elaborate on the biological plausibility and functionality of this new lateral inhibition mechanism in small target motion detection. --- paper_title: Postsynaptic organisations of directional selective visual neural networks for collision detection paper_content: In this paper, we studied the postsynaptic organisations of directional selective visual neurons for collision detection. Directional selective neurons can extract different directional visual motion cues fast and reliably by allowing inhibition spreads to further layers in specific directions with one or several time steps delay. Whether these directional selective neurons can be easily organised for other specific visual tasks is not known. Taking collision detection as the primary visual task, we investigated the postsynaptic organisations of these directional selective neurons through evolutionary processes. The evolved postsynaptic organisations demonstrated robust properties in detecting imminent collisions in complex visual environments with many of which achieved 94% success rate after evolution suggesting active roles in collision detection directional selective neurons and its postsynaptic organisations can play. --- paper_title: A Model for the Detection of Moving Targets in Visual Clutter Inspired by Insect Physiology paper_content: We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMD), that respond robustly to moving features, even when the velocity of the target is matched to the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. This full-wave rectifying, transient cell (RTC) reveals independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From this physiological data we have created a numerical model for target discrimination. 
This model includes nonlinear filtering based on the fly optics, the photoreceptors, the 1st-order interneurons (Large Monopolar Cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues. --- paper_title: Functional Specialization of Parallel Motion Detection Circuits in the Fly paper_content: In the fly Drosophila melanogaster, photoreceptor input to motion vision is split into two parallel pathways as represented by first-order interneurons L1 and L2 (Rister et al., 2007; Joesch et al., 2010). However, how these pathways are functionally specialized remains controversial. One study (Eichner et al., 2011) proposed that the L1-pathway evaluates only sequences of brightness increments (ON-ON), while the L2-pathway processes exclusively brightness decrements (OFF-OFF). Another study (Clark et al., 2011) proposed that each of the two pathways evaluates both ON-ON and OFF-OFF sequences. To decide between these alternatives, we recorded from motion-sensitive neurons in flies in which the output from either L1 or L2 was genetically blocked. We found that blocking L1 abolishes ON-ON responses but leaves OFF-OFF responses intact. The opposite was true when the output from L2 was blocked. We conclude that the L1 and L2 pathways are functionally specialized to detect ON-ON and OFF-OFF sequences, respectively. --- paper_title: Navigation in an autonomous flying robot by using a biologically inspired visual odometer paper_content: While mobile robots and walking insects can use proprioceptive information (specialized receptors in the insects' leg, or wheel encoders in robots) to estimate distance traveled, flying agents have to rely mainly on visual cues. Experiments with bees provide evidence that flying insects might be using optical flow induced by egomotion to estimate distance traveled. Recently some details of this odometer have been unraveled. In this study, we propose a biologically inspired model of the bee's visual odometer based on Elementary Motion Detectors (EMDs), and present results from goal-directed navigation experiments with an autonomous flying robot platform that we developed specifically for this purpose. The robot is equipped with a panoramic vision system, which is used to provide input to the EMDs of the left and right visual fields. The outputs of the EMDs are in later stage spatially integrated by wide field motion detectors, and their accumulated response is directly used for the odometer. In a set of initial experiments, the robot moves through a corridor on a fixed route, and the outputs of EMDs, the odometer, are recorded. The results show that the proposed model can be used to provide an estimate of the distance traveled, but the performance depends on the route the robot follows, something which is biologically plausible since natural insects tend to adopt a fixed route during foraging.
Given these results, we assumed that the optomotor response plays an important role in the context of goal-directed navigation, and we conducted experiments with an autonomous freely flying robot. The experiments demonstrate that this computationally cheap mechanism can be successfully employed in natural indoor environments. --- paper_title: Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot paper_content: The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been modeled and successfully applied in robotic vision system for perceiving potential collisions in an efficient and reliable manner. In this research, we conduct binocular neuronal models, for the first time combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions of this research: (1) The arena tests involving multiple robots verified the effectiveness and robustness of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, which fulfill corresponding biological research. (3) The utilized micro robot may also benefit researches on other embedded vision systems as well as swarm robotics. --- paper_title: Bio-inspired collision detector with enhanced selectivity for ground robotic vision system paper_content: There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. Such collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is only selective to detect looming dark objects against bright background in depth, represents swooping predators, a situation which is similar to ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are first, enhancing the collision selectivity in a bio-inspired way, via constructing a computing efficient visual sensor, and realizing the revealed specific characteristics of LGMD2. Second, we applied the neural network to help rearrange path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot. --- paper_title: Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network paper_content: How do animals like insects perceive meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently?
In this paper, with respect to latest biological research progress made in underlying motion detection circuitry in the fly's preliminary visual system, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely the motion and the position pathways, for mimicking motion tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior. The motivated algorithms can be used to guide a system that extracts location information of moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) The proposed computational structure fulfills the characteristics of a putative signal tuning map of the fly's physiology. (2) It also satisfies a biological implication that visual fixation behaviors could be simply tuned via the position pathway; nevertheless, the motion-detecting pathway improves the tracking precision. (3) Contrary to segmentation and registration based computer vision techniques, its computational simplicity benefits the building of neuromorphic visual sensor for robots. --- paper_title: Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement paper_content: The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds --- paper_title: A Synthetic Vision System Using Directionally Selective Motion Detectors to Recognize Collision paper_content: Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. 
After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes. --- paper_title: Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again paper_content: Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind were discovered in vertebrates' and invertebrates' visual systems more than 50 years ago, the principles and neural mechanisms involved have not yet been completely elucidated. Here, first, I propose to outline some of the findings we made during the last few decades by performing electrophysiological recordings on identified neurons in the housefly's eye while applying optical stimulation to identified photoreceptors. Whereas these findings shed light on the inner processing structure of an elementary motion detector (EMD), recent studies in which the latest genetic and neuroanatomical methods were applied to the fruitfly's visual system have identified some of the neurons in the visual chain which are possibly involved in the neural circuitry underlying a given EMD. Then, I will describe some of the proof-of-concept robots that we have developed on the basis of our biological findings. The 100-g robot OCTAVE, for example, is able to avoid the ground, react to wind, and land autonomously on a flat terrain without ever having to measure any state variables such as distances or speeds. The 100-g robots OSCAR 1 and OSCAR 2 inspired by the microscanner we discovered in the housefly's eye are able to stabilize their body using mainly visual means and track a moving edge with hyperacuity. These robots react to the optic flow, which is sensed by miniature optic flow sensors inspired by the housefly's EMDs. Constructing a “biorobot” gives us a unique opportunity of checking the soundness and robustness of a principle that is initially thought to be understood by bringing it face to face with the real physical world. Bio-inspired robotics not only help neurobiologists and neuroethologists to identify and investigate worthwhile problems in animals' sensory-motor systems, but they also provide engineers with ideas for developing novel devices and machines with promising future applications, in the field of smart autonomous vehicles and microvehicles, for example. --- paper_title: Movement-induced motion signal distributions in outdoor scenes paper_content: The movement of an observer generates a characteristic field of velocity vectors on the retina (Gibson 1950). Because such optic flow-fields are useful for navigation, many theoretical, psychophysical and physiological studies have addressed the question how egomotion parameters such as direction of heading can be estimated from optic flow. Little is known, however, about the structure of optic flow under natural conditions. 
To address this issue, we recorded sequences of panoramic images along accurately defined paths in a variety of outdoor locations and used these sequences as input to a two-dimensional array of correlation-based motion detectors (2DMD). We find that (a) motion signal distributions are sparse and noisy with respect to local motion directions; (b) motion signal distributions contain patches (motion streaks) which are systematically oriented along the principal flow-field directions; (c) motion signal distributions show a distinct, dorso-ventral topography, reflecting the distance anisotropy of terrestrial environments; (d) the spatiotemporal tuning of the local motion detector we used has little influence on the structure of motion signal distributions, at least for the range of conditions we tested; and (e) environmental motion is locally noisy throughout the visual field, with little spatial or temporal correlation; it can therefore be removed by temporal averaging and is largely over-ridden by image motion caused by observer movement. Our results suggest that spatial or temporal integration is important to retrieve reliable information on the local direction and size of motion vectors, because the structure of optic flow is clearly detectable in the temporal average of motion signal distributions. Egomotion parameters can be reliably retrieved from such averaged distributions under a range of environmental conditions. These observations raise a number of questions about the role of specific environmental and computational constraints in the processing of natural optic flow. --- paper_title: Neural network based on the input organization of an identified neuron signaling impending collision paper_content: 1. We describe a four-layered neural network (Fig. 1), based on the input organization of a collision signaling neuron in the visual system of the locust, the lobula giant movement detector (LGMD).... --- paper_title: Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation paper_content: Abstract Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector — the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. 
The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrate its potential for building neuromorphic sensors for collision detection in a fast and reliable manner. --- paper_title: An improved LPTC neural model for background motion direction estimation paper_content: A class of specialized neurons, called lobula plate tangential cells (LPTCs), has been shown to respond strongly to wide-field motion. The classic model, the elementary motion detector (EMD), and its improved model, the two-quadrant detector (TQD), have been proposed to simulate LPTCs. Although the EMD and TQD can perceive background motion, their outputs are so cluttered that it is difficult to discriminate the actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly-found transmedullary neuron Tm9 whose physiological properties do not map onto the EMD and TQD. This proposed max operation mechanism is able to improve the detection performance of the TQD in cluttered backgrounds by filtering out irrelevant motion signals. We will demonstrate the functionality of this proposed mechanism in wide-field motion perception. --- paper_title: Correlation between OFF and ON Channels Underlies Dark Target Selectivity in an Insect Visual System paper_content: In both vertebrates and invertebrates, evidence supports separation of luminance increments and decrements (ON and OFF channels) in early stages of visual processing (Hartline, 1938; Joesch et al., 2010); however, less is known about how these parallel pathways are recombined to encode form and motion. In Drosophila, genetic knockdown of inputs to putative ON and OFF pathways and direct recording from downstream neurons in the wide-field motion pathway reveal that local elementary motion detectors exist in pairs that separately correlate contrast polarity channels, ON with ON and OFF with OFF (Joesch et al., 2013). However, behavioral responses to reverse-phi motion of discrete features reveal additional correlations of the opposite signs (Clark et al., 2011). We here present intracellular recordings from feature detecting neurons in the dragonfly that provide direct physiological evidence for the correlation of OFF and ON pathways. These neurons show clear polarity selectivity for feature contrast, responding strongly to targets that are darker than the background and only weakly to dark contrasting edges. These dark target responses are much stronger than the linear combination of responses to ON and OFF edges. We compare these data with output from elementary motion detector-based models (Eichner et al., 2011; Clark et al., 2011), with and without stages of strong center-surround antagonism. Our data support an alternative elementary small target motion detector model, which derives dark target selectivity from the correlation of a delayed OFF with an un-delayed ON signal at each individual visual processing unit (Wiederman et al., 2008, 2009). --- paper_title: A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila paper_content: We propose a new bio-plausible model based on the visual systems of Drosophila for estimating the angular velocity of image motion in insects' eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, which were recently discovered in Drosophila's visual neural circuits, to give stronger directional selectivity.
In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments which enables insects to estimate the flight speed in cluttered environments. This also coincides with the behaviour experiments of honeybee flying through tunnels with stripes of different spatial frequencies. --- paper_title: Optic flow-based collision-free strategies: From insects to robots paper_content: Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback-loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. --- paper_title: Optic flow regulation: the key to aircraft automatic guidance paper_content: Abstract We have developed a visually based autopilot which is able to make an air vehicle automatically take off, cruise and land, while reacting appropriately to wind disturbances (head wind and tail wind). This autopilot consists of a visual control system that adjusts the thrust so as to keep the downward optic flow (OF) at a constant value. This autopilot is therefore based on an optic flow regulation loop. It makes use of a sensor, which is known as an elementary motion detector (EMD). The functional structure of this EMD was inspired by that of the housefly, which was previously investigated at our Laboratory by performing electrophysiological recordings while applying optical microstimuli to single photoreceptor cells of the insect's compound eye. We built a proof-of-concept, tethered rotorcraft that circles indoors over an environment composed of contrasting features randomly arranged on the floor. 
The autopilot, which we have called OCTAVE (Optic flow based Control sysTem for Aerial VEhicles), enables this miniature (100 g) rotorcraft to carry out complex tasks such as ground avoidance and terrain following, to control risky maneuvers such as automatic take off and automatic landing, and to respond appropriately to wind disturbances. A single visuomotor control loop suffices to perform all these reputedly demanding tasks. As the electronic processing system required is extremely light-weight (only a few grams), it can be mounted on-board micro-air vehicles (MAVs) as well as larger unmanned air vehicles (UAVs) or even submarines and autonomous underwater vehicles (AUVs). But the OCTAVE autopilot could also provide guidance and/or warning signals to prevent the pilots of manned aircraft from colliding with shallow terrain, for example. --- paper_title: Principles of visual motion detection paper_content: Motion information is required for the solution of many complex tasks of the visual system such as depth perception by motion parallax and figure/ground discrimination by relative motion. However, motion information is not explicitly encoded at the level of the retinal input. Instead, it has to be computed from the time-dependent brightness patterns of the retinal image as sensed by the two-dimensional array of photoreceptors. Different models have been proposed which describe the neural computations underlying motion detection in various ways. To what extent do biological motion detectors approximate any of these models? As will be argued here, there is increasing evidence from the different disciplines studying biological motion vision, that, throughout the animal kingdom ranging from invertebrates to vertebrates including man, the mechanisms underlying motion detection can be attributed to only a few, essentially equivalent computational principles. Motion detection may, therefore, be one of the first examples in computational neurosciences where common principles can be found not only at the cellular level (e.g. dendritic integration, spike propagation, synaptic transmission) but also at the level of computations performed by small neural networks. --- paper_title: Defining the Computational Structure of the Motion Detector in Drosophila paper_content: Summary Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. 
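To make the correlation scheme discussed in the two preceding entries concrete, the short Python sketch below implements a discrete-time Hassenstein-Reichardt correlator. It is an illustrative reconstruction rather than code from any of the cited papers: the delay stage is assumed to be a first-order low-pass filter, the photoreceptor array is one-dimensional, and the time constant and drifting grating used in the toy example are arbitrary choices.

import numpy as np

def lowpass(signal, tau, dt=1.0):
    """First-order low-pass filter, used here as the correlator's delay stage."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal, dtype=float)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def hrc_response(luminance, tau=10.0):
    """luminance: array of shape (time, photoreceptor).
    Returns the opponent output summed over neighbouring receptor pairs;
    its sign encodes the direction of image motion."""
    delayed = np.apply_along_axis(lowpass, 0, luminance, tau)
    # Two mirror-symmetric half-detectors per pair of neighbouring receptors:
    # delayed-left x undelayed-right (prefers rightward motion) minus
    # delayed-right x undelayed-left (prefers leftward motion).
    rightward = delayed[:, :-1] * luminance[:, 1:]
    leftward = delayed[:, 1:] * luminance[:, :-1]
    return (rightward - leftward).sum(axis=1)

# Toy usage: a sinusoidal grating drifting rightward across 16 photoreceptors.
t = np.arange(400)[:, None]
x = np.arange(16)[None, :]
grating = 1.0 + np.sin(2.0 * np.pi * (0.1 * x - 0.02 * t))
print(hrc_response(grating).mean())   # positive for rightward drift, negative if reversed

Reversing the drift flips the sign of the averaged output, which is the basic directional selectivity that the Drosophila studies above probe; the two-quadrant and ON/OFF variants cited in this list refine this same core operation.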
--- paper_title: A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment paper_content: The lobula giant movement detector (LGMD) neuron of locusts has been shown to preferentially respond to objects approaching the eye of a locust on a direct collision course. Computer simulations of the neuron have been developed and have demonstrated the ability of mobile robots, interfaced with a simulated LGMD model, to avoid collisions. In this study, a model of the LGMD neuron is presented and the functional parameters of the model identified. Models with different parameters were presented with a range of automotive video sequences, including collisions with cars. The parameters were optimised to respond correctly to the video sequences using a range of genetic algorithms (GAs). The model evolved most rapidly using GAs with high clone rates into a form suitable for detecting collisions with cars and not producing false collision alerts to most non-collision scenes. --- paper_title: A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds paper_content: Small target motion detection is critical for insects to search for and track mates or prey which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity for small target motion. Understanding and analyzing visual pathway of STMD neurons are beneficial to design artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, if there exists a feedback loop in the STMD visual pathway or if a feedback loop could significantly improve the detection performance of STMD neurons, is unclear. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. In order to form a feedback loop, model output is temporally delayed and relayed to previous neural layer as feedback signal. Extensive experiments showed that the significant improvement of the proposed feedback neural network over the existing STMD-based models for small target motion detection. --- paper_title: Redundant Neural Vision Systems—Competing for Collision Recognition Roles paper_content: Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems - the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. 
We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition. --- paper_title: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects paper_content: Two identified looming detectors in the locust: ubiquitous lateral connections among their inputs contribute to selective responses to looming objects --- paper_title: Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background paper_content: The nature endows animals robust vision systems for extracting and recognizing different motion cues, detecting predators, chasing preys/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with preference to certain orientation visual stimulus, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements on four cardinal directions. It is based on an architecture of ON and OFF visual pathways underlies a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same polarity ON/OFF cells. Each pair-wised combination is filtered with dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSNs model can extract useful directional motion cues from cluttered background robustly and timely, which hits at potential of quick implementation in vision-based micro mobile robots; moreover, it also represents better speed response compared to a state-of-the-art elementary motion detector. --- paper_title: Bio-plausible visual neural network for spatio-temporally spiral motion perception paper_content: Abstract Neurophysiological studies validate that the primate cerebral cortex includes spiral neurons whose visual properties respond preferentially to spiral motion patterns. However, the biological mechanism of which a vision system perceives spiral motion is unclear, while few computational models are reported to discuss the problem of spiral motion perception. In order to fill this gap, this work develops a spiral motion perception neural network in terms of the recent achievements in neurophysiology and simulates the visual response characteristics of spiral neurons. One such network, inspired by two stages of biological visual information processing includes two subnetworks- presynaptic and postsynaptic neural networks. 
The former comprises multiple lateral inhibition neural sub-networks for the capture of visual motion information, whereas the latter extracts different rotational and radial motion cues and synthesizes them to detect the process of the spiral motion of an object. Experimentally, the proposed neural network is sufficiently examined by different types of spiral motion patterns in non-interference or interference environments. Numerically comparative experiments show that it can effectively detect spiral motion patterns of the object and also does not respond to any non-spiral motion, which is consistent with the metaphor of spiral neurons’ performance perception. --- paper_title: Performance of a Visual Fixation Model in an Autonomous Micro Robot Inspired by Drosophila Physiology paper_content: In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers but also critical in providing effective solutions to future robotics. This paper presents a novel modelling of dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on the embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similarly to insects; the image processing frequency can maintain 25 ~ 45Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation. --- paper_title: Postsynaptic organisations of directional selective visual neural networks for collision detection paper_content: In this paper, we studied the postsynaptic organisations of directional selective visual neurons for collision detection. Directional selective neurons can extract different directional visual motion cues fast and reliably by allowing inhibition spreads to further layers in specific directions with one or several time steps delay. Whether these directional selective neurons can be easily organised for other specific visual tasks is not known. Taking collision detection as the primary visual task, we investigated the postsynaptic organisations of these directional selective neurons through evolutionary processes. The evolved postsynaptic organisations demonstrated robust properties in detecting imminent collisions in complex visual environments with many of which achieved 94% success rate after evolution suggesting active roles in collision detection directional selective neurons and its postsynaptic organisations can play. --- paper_title: Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot paper_content: The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. 
Both neurons have been modeled and successfully applied in robotic vision system for perceiving potential collisions in an efficient and reliable manner. In this research, we conduct binocular neuronal models, for the first time combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions of this research: (1) The arena tests involving multiple robots verified the effectiveness and robustness of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, which fulfill corresponding biological research. (3) The utilized micro robot may also benefit researches on other embedded vision systems as well as swarm robotics. --- paper_title: Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network paper_content: How do animals like insects perceive meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? In this paper, with respect to latest biological research progress made in underlying motion detection circuitry in the fly's preliminary visual system, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely the motion and the position pathways, for mimicking motion tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior. The motivated algorithms can be used to guide a system that extracts location information of moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) The proposed computational structure fulfills the characteristics of a putative signal tuning map of the fly's physiology. (2) It also satisfies a biological implication that visual fixation behaviors could be simply tuned via the position pathway; nevertheless, the motion-detecting pathway improves the tracking precision. (3) Contrary to segmentation and registration based computer vision techniques, its computational simplicity benefits the building of neuromorphic visual sensor for robots. --- paper_title: A Synthetic Vision System Using Directionally Selective Motion Detectors to Recognize Collision paper_content: Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. 
The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes. --- paper_title: A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing paper_content: All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts—presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing. --- paper_title: Visual motion pattern extraction and fusion for collision detection in complex dynamic scenes paper_content: Detecting colliding objects in complex dynamic scenes is a difficult task for conventional computer vision techniques. However, visual processing mechanisms in animals such as insects may provide very simple and effective solutions for detecting colliding objects in complex dynamic scenes. In this paper, we propose a robust collision detecting system, which consists of a lobula giant movement detector (LGMD) based neural network and a translating sensitive neural network (TSNN), to recognise objects on a direct collision course in complex dynamic scenes. The LGMD based neural network is specialized for recognizing looming objects that are on a direct collision course. The TSNN, which fuses the extracted visual motion cues from several whole field direction selective neural networks, is only sensitive to translating movements in the dynamic scenes. The looming cue and translating cue revealed by the two specialized visual motion detectors are fused in the present system via a decision making mechanism. 
In the system, the LGMD plays a key role in detecting imminent collision; the decision from TSNN becomes useful only when a collision alarm has been issued by the LGMD network. Using driving scenarios as an example, we showed that the bio-inspired system can reliably detect imminent colliding objects in complex driving scenes. --- paper_title: LGMD and DSNs neural networks integration for collision predication paper_content: An ability to predict collisions is essential for current vehicles and autonomous robots. In this paper, an integrated collision predication system is proposed based on neural subsystems inspired from Lobula giant movement detector (LGMD) and directional selective neurons (DSNs) which focus on different parts of the visual field separately. The two types of neurons found in the visual pathways of insects respond most strongly to moving objects with preferred motion patterns, i.e., the LGMD prefers looming stimuli and DSNs prefer specific lateral movements. We fuse the extracted information from each type of neuron to make the final decision. By dividing the whole field of view into four regions for each subsystem to process, the proposed approaches can detect hazardous situations that had been difficult for a single subsystem alone. Our experiments show that the integrated system works in most of the hazardous scenarios. --- paper_title: Redundant Neural Vision Systems—Competing for Collision Recognition Roles paper_content: Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems - the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition. --- paper_title: Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot paper_content: For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M²APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision).
Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M²APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. --- paper_title: Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot paper_content: Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to various visual patterns encountered thanks to its M2APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in the closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor. --- paper_title: A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS paper_content: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output.
The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance. --- paper_title: Biologically Inspired CMOS Image Sensor for Fast Motion and Polarization Detection paper_content: A complementary-metal-oxide semiconductor (CMOS) image sensor replicating the perception of vision in insects is presented for machine vision applications. The sensor is equipped with in-pixel analog and digital memories that allow in-pixel binarization in real time. The binary output of the pixel tries to replicate the flickering effect of an insect's eye to detect the smallest possible motion based on the change in state of each pixel. The pixel level optical flow generation reduces the need for digital hardware and simplifies the process of motion detection. A built-in counter counts the changes in states for each row to estimate the direction of the motion. The designed image sensor can also sense polarization information in real time using a metallic wire grid micropolarizer. An extinction ratio of 7.7 is achieved. The 1-D binary optical flow is shown to vary with the polarization angle of the incoming light ray. The image sensor consists of an array of 128 × 128 pixels, occupies an area of 5 × 4 mm2 and it is designed and fabricated in a 180-nm CMOS process. --- paper_title: The VODKA Sensor: A Bio-Inspired Hyperacute Optical Position Sensing Device paper_content: We have designed and built a simple optical sensor called Vibrating Optical Device for the Kontrol of Autonomous robots (VODKA), that was inspired by the “tremor” eye movements observed in many vertebrate and invertebrate animals. In the initial version presented here, the sensor relies on the repetitive micro-translation of a pair of photoreceptors set behind a small lens, and on the processing designed to locate a target from the two photoreceptor signals. The VODKA sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator driven at a frequency of 40 Hz, was found to be able to locate a contrasting edge with an outstandingly high resolution 900-fold greater than its static resolution (which is constrained by the interreceptor angle), regardless of the scanning law imposed on the retina. Hyperacuity is thus obtained at a very low cost, thus opening new vistas for the accurate visuo-motor control of robotic platforms. As an example, the sensor was mounted onto a miniature aerial robot that became able to track a moving target accurately by exploiting the robot's uncontrolled random vibrations as the source of its ocular microscanning movement. The simplicity, small size, low mass and low power consumption of this optical sensor make it highly suitable for many applications in the fields of metrology, astronomy, robotics, automotive, and aerospace engineering. 
The basic operating principle may also shed new light on the whys and wherefores of the tremor eye movements occurring in both animals and humans. --- paper_title: Miniature curved artificial compound eyes. paper_content: In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories. --- paper_title: Bio-inspired optic flow sensors based on FPGA: Application to Micro-Air-Vehicles paper_content: Tomorrow's Micro-Air-Vehicles (MAVs) could be used as scouts in many civil and military missions without any risk to human life. MAVs have to be equipped with sensors of several kinds for stabilization and guidance purposes. Many recent findings have shown, for example, that complex tasks such as 3-D navigation can be performed by insects using optic flow (OF) sensors although insects' eyes have a rather poor spatial resolution. At our Laboratory, we have been performing electrophysiological, micro-optical, neuroanatomical and behavioral studies for several decades on the housefly's visual system, with a view to understanding the neural principles underlying OF detection and establishing how OF sensors might contribute to performing basic navigational tasks. Based on these studies, we developed a functional model for an Elementary Motion Detector (EMD), which we first transcribed into electronic terms in 1986 and subsequently used onboard several terrestrial and aerial robots. Here we present a Field Programmable Gate Array (FPGA) implementation of an EMD array, which was designed for estimating the OF in various parts of the visual field of a MAV. FPGA technology is particularly suitable for applications of this kind, where a single Integrated Circuit (IC) can receive inputs from several photoreceptors of similar (or different) shapes and sizes located in various parts of the visual field. In addition, the remarkable characteristics of present-day FPGA applications (their high clock frequency, large number of system gates, embedded RAM blocks and Intellectual Property (IP) functions, small size, light weight, low cost, etc.) 
make for the flexible design of a multi-EMD visual system and its installation onboard MAVs with extremely low permissible avionic payloads. --- paper_title: Energy-Efficient Design and Control of a Vibro-Driven Robot paper_content: Vibro-driven robotic (VDR) systems use stick-slip motions for locomotion. Due to the underactuated nature of the system, efficient design and control are still open problems. We present a new energy preserving design based on a spring-augmented pendulum. We indirectly control the friction-induced stick-slip motions by exploiting the passive dynamics in order to achieve an improvement in overall travelling distance and energy efficiency. Both collocated and non-collocated constraint conditions are elaborately analysed and considered to obtain a desired trajectory generation profile. For tracking control, we develop a partial feedback controller for the driving pendulum which counteracts the dynamic contributions from the platform. Comparative simulation studies show the effectiveness and intriguing performance of the proposed approach, while its feasibility is experimentally verified through a physical robot. Our robot is to the best of our knowledge the first nonlinear-motion prototype in literature towards the VDR systems. --- paper_title: Motion Detection Circuits for a Time-To-Travel Algorithm paper_content: The paper describes a new motion detection circuit that extracts motion information based on a time-to-travel algorithm. The front-end photoreceptor adapts over 7 decades of background intensity and motion information can be extracted down to a contrast value of 2.5%. Results from the circuits which were fabricated in a 2-metal 2-poly 1.5mum CMOS process, show that the motion information can be extracted over 2 decades of speed. --- paper_title: Bioinspired event-driven collision avoidance algorithm based on optic flow paper_content: Any mobile agent, whether biological or robotic, needs to avoid collisions with obstacles. Insects, such as bees and flies, use optic flow to estimate the relative nearness to obstacles. Optic flow induced by ego-motion is composed of a translational and a rotational component. The segregation of both components is computationally and thus energetically expensive. Flies and bees actively separate the rotational and translational optic flow components via behaviour, i.e. by employing a saccadic strategy of flight and gaze control. Although robotic systems are able to mimic this gaze-strategy, the calculation of optic-flow fields from standard camera images remains time and energy consuming. To overcome this problem, we use a dynamic vision sensor (DVS), which provides event-based information about changes in contrast over time at each pixel location. To extract optic flow from this information, a plane-fitting algorithm estimating the relative velocity in a small spatio-temporal cuboid is used. The depth-structure is derived from the translational optic flow by using local properties of the retina. A collision avoidance direction is then computed from the event-based depth-structure of the environment. The system has successfully been tested on a robotic platform in open loop. --- paper_title: A modified model for the Lobula Giant Movement Detector and its FPGA implementation paper_content: Bio-inspired vision sensors are particularly appropriate candidates for navigation of vehicles or mobile robots due to their computational simplicity, allowing compact hardware implementations with low power dissipation. 
The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for LGMD that provides additional depth direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate the effectiveness as a fast motion detector. --- paper_title: Bio-inspired optical flow circuits for the visual guidance of micro air vehicles paper_content: In the framework of our research on biologically inspired microrobotics, we have developed a visually based autopilot for micro air vehicles (MAV), which we have called OCTAVE (optical altitude control system for autonomous vehicles). Here, we show the feasibility of a joint altitude and speed control system based on a low complexity optronic velocity sensor that estimates the optic flow in the downward direction. This velocity sensor draws on electrophysiological findings of on the fly elementary motion detectors (EMDs) obtained at our laboratory. We built an elementary, 100-gram tethered helicopter system that carries out terrain following above a randomly textured ground. The overall processing system is light enough to be mounted on-board MAVs with an avionic payload of only some grams. --- paper_title: Visually guided micro-aerial vehicle: automatic take off, terrain following, landing and wind reaction paper_content: We have developed a visually based autopilot which is able to make a micro air vehicle (MAV) automatically take off, cruise and land, while reacting adequately to wind disturbances. We built a proof-of-concept, tethered rotorcraft that can travel indoors over an environment composed of contrasting features randomly arranged on the floor. Here we show the feasibility of a visuomotor control loop that acts upon the thrust so as to maintain the optic flow (OF) estimated in the downward direction to a reference value. The sensor involved in this OF regulator is an elementary motion detector (EMD). The functional structure of the EMD was inspired by that of the housefly, which was previously investigated at our laboratory by performing electrophysiological recordings while applying optical microstimuli to single photoreceptor cells of the compound eye. The vision based autopilot, which we have called OCTAVE (optic flow control system for aerospace vehicles) solves complex problems such as terrain following, controls risky maneuvers such as take off and landing and responds appropriately to wind disturbances. All these reputedly demanding tasks are performed with one and the same visuomotor control loop. The non-emissive sensor and simple processing system are particularly suitable for use with MAV, since the tolerated avionic payload of these micro-aircraft is only a few grams. OCTAVE autopilot could also contribute to relieve a remote operator from the lowly and difficult task of continuously piloting and guiding an UAV. 
It could also provide guiding assistance to pilots of manned aircraft. --- paper_title: Pulse-Based Analog VLSI Velocity Sensors paper_content: We present two algorithms for estimating the velocity of a visual stimulus and their implementations with analog circuits using CMOS VLSI technology. Both are instances of so-called token methods, where velocity is computed by identifying particular features in the image at different locations; in our algorithms, these features are abrupt temporal changes in image irradiance. Our circuits integrate photoreceptors and associated electronics for computing motion onto a single chip and unambiguously extract bidirectional velocity for stimuli of high and intermediate contrasts over considerable irradiance and velocity ranges. At low contrasts, the output signal for a given velocity tends to decrease gracefully with contrast, while direction-selectivity is maintained. The individual motion-sensing cells are compact and highly suitable for use in dense 1-D or 2-D imaging arrays. --- paper_title: Characteristics of Three Miniature Bio-inspired Optic Flow Sensors in Natural Environments paper_content: Considerable attention has been paid during the last decade to vision-based navigation systems based on optic flow (OF) cues. OF-based systems have been implemented on an increasingly large number of sighted autonomous robotic platforms. Nowadays, the OF is measured using conventional cameras, custom-made sensors and even optical mouse chips. However, very few studies have dealt so far with the reliability of these OF sensors in terms of their precision, range and sensitivity to illuminance variations. Three miniature custom-made OF sensors developed at our laboratory, which were composed of photosensors connected to an OF processing unit were tested and compared in this study, focusing on their responses and characteristics in real indoor and outdoor environments in a large range of illuminance. It was concluded that by combining a custom-made aVLSI retina equipped with Adaptive Pixels for Insect-based Sensor (APIS) with a bio-inspired visual processing system, it is possible to obtain highly effective miniature sensors for measuring the OF under real environmental conditions. --- paper_title: Obstacle avoidance with LGMD neuron: Towards a neuromorphic UAV implementation paper_content: We present a neuromorphic adaptation of a spiking neural network model of the locust Lobula Giant Movement Detector (LGMD), which detects objects increasing in size in the field of vision (looming) and can be used to facilitate obstacle avoidance in robotic applications. Our model is constrained by the parameters of a mixed signal analog-digital neuromorphic device, developed by our group, and is driven by the output of a neuromorphic vision sensor. We demonstrate the performance of the model and how it may be used for obstacle avoidance on an unmanned areal vehicle (UAV). --- paper_title: Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields paper_content: The 6-pixel low-speed Visual Motion Sensor (VMS) inspired by insects' visual systems presented here performs local 1-D angular speed measurements ranging from 1.5°/s to 25°/s and weighs only 2.8 g. The entire optic flow processing system, including the spatial and temporal filtering stages, has been updated with respect to the original design. This new lightweight sensor was tested under free-flying outdoor conditions over various fields onboard a 80 kg unmanned helicopter called ReSSAC. 
The visual disturbances encountered included helicopter vibrations, uncontrolled illuminance, trees, roads, and houses. The optic flow measurements obtained were finely analyzed online and also offline, using the sensors of various kinds mounted onboard ReSSAC. The results show that the optic flow measured despite the complex disturbances encountered closely matched the approximate ground-truth optic flow. --- paper_title: New VLSI smart sensor for collision avoidance inspired by insect vision paper_content: An analog VLSI implementation of a smart microsensor that mimics the early visual processing stage in insects is described with an emphasis on the overall concept and the front- end detection. The system employs the `smart sensor' paradigm in that the detectors and processing circuitry are integrated on the one chip. The integrated circuit is composed of sixty channels of photodetectors and parallel processing elements. The photodetection circuitry includes p-well junction diodes on a 2 micrometers CMOS process and a logarithmic compression to increase the dynamic range of the system. The future possibility of gallium arsenide implementation is discussed. The processing elements behind each photodetector contain a low frequency differentiator where subthreshold design methods have been used. The completed IC is ideal for motion detection, particularly collision avoidance tasks, as it essentially detects distance, speed & bearing of an object. The Horridge Template Model for insect vision has been directly mapped into VLSI and therefore the IC truly exploits the beauty of nature in that the insect eye is so compact with parallel processing, enabling compact motion detection without the computational overhead of intensive imaging, full image extraction and interpretation. This world-first has exciting applications in the areas of automobile anti- collision, IVHS, autonomous robot guidance, aids for the blind, continuous process monitoring/web inspection and automated welding, for example.© (1995) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only. --- paper_title: Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. paper_content: Two bio-inspired guidance principles involving no reference frame are presented here and were implemented in a rotorcraft, which was equipped with panoramic optic flow (OF) sensors but (as in flying insects) no accelerometer. To test these two guidance principles, we built a tethered tandem rotorcraft called BeeRotor (80 grams), which was tested flying along a high-roofed tunnel. The aerial robot adjusts its pitch and hence its speed, hugs the ground and lands safely without any need for an inertial reference frame. The rotorcraft's altitude and forward speed are adjusted via two OF regulators piloting the lift and the pitch angle on the basis of the common-mode and differential rotor speeds, respectively. The robot equipped with two wide-field OF sensors was tested in order to assess the performances of the following two systems of guidance involving no inertial reference frame: (i) a system with a fixed eye orientation based on the curved artificial compound eye (CurvACE) sensor, and (ii) an active system of reorientation based on a quasi-panoramic eye which constantly realigns its gaze, keeping it parallel to the nearest surface followed. 
Safe automatic terrain following and landing were obtained with CurvACE under dim light to daylight conditions and the active eye-reorientation system over rugged, changing terrain, without any need for an inertial reference frame. --- paper_title: A biologically inspired analog IC for visual collision detection paper_content: We have designed and tested a single-chip analog VLSI sensor that detects imminent collisions by measuring radially expanding optic flow. The design of the chip is based on a model proposed to explain leg-extension behavior in flies during landing approaches. We evaluated a detailed version of this model in simulation using a library of 50 test movies taken through a fisheye lens. The algorithm was evaluated on its ability to distinguish movies ending in collisions from movies in which no collision occurred. This biologically inspired algorithm is capable of 94% correct performance in this task using an ultra-low-resolution (132-pixel) image as input. A new elementary motion detector (EMD) circuit was developed to measure optic flow on a CMOS focal-plane sensor. This EMD circuit models the bandpass nature of large monopolar cells (LMCs) immediately postsynaptic to photoreceptors in the fly visual system as well as a saturating multiplication operation proposed for Reichart-type motion detectors. A 16/spl times/16 array of two-dimensional motion detectors was fabricated in a standard 0.5-/spl mu/m CMOS process. The chip consumes 140 /spl mu/W of power from a 5 V supply. With the addition of wide-angle optics, the sensor is able to detect collisions 100-400 ms before impact in complex, real-world scenes. --- paper_title: An FPGA implementation of insect-inspired motion detector for high-speed vision systems paper_content: In this paper, an array of biologically inspired elementary motion detectors (EMDs) is implemented on an FPGA (field programmable gate array) platform. The well-known Reichardt-type EMD, modeling the insect's visual signal processing system, is very sensitive to motion direction and has low computational cost. A modified structure of EMD is used to detect local optical flow. Six templates of receptive fields, according to the fly's vision system, are designed for simple ego-motion estimation. The results of several typical experiments demonstrate local detection of optical flow and simple motion estimation under specific backgrounds. The performance of the real-time implementation is sufficient to deal with a video frame rate of 350 fps at 256 times 256 pixels resolution. The execution of the motion detection algorithm and the resulting time delay is only 0.25 mus. This hardware is suited for obstacle detection, motion estimation and UAV/MAV attitude control. --- paper_title: Optimized adaptive tracking control for an underactuated vibro-driven capsule system paper_content: This paper studies the issue of adaptive trajectory tracking control for an underactuated vibro-driven capsule system and presents a novel motion generation framework. In this framework, feasible motion trajectory is derived through investigating dynamic constraints and kernel control indexes that underlie the underactuated dynamics. Due to the underactuated nature of the capsule system, the global motion dynamics cannot be directly controlled. The main objective of optimization is to indirectly control the friction-induced stick–slip motions to reshape the passive dynamics and, by doing so, to obtain optimal system performance in terms of average speed and energy efficacy. 
Two tracking control schemes are designed: one using a closed-loop feedback linearization approach and one using an adaptive variable structure control method with an auxiliary control variable. The reference model is accurately matched over a finite-time horizon. The key point is to define an exogenous state variable whose dynamics are employed as a control input. The tracking performance and system stability are investigated through rigorous theoretical analysis. Extensive simulation studies are conducted to demonstrate the effectiveness and feasibility of the developed trajectory model and the optimized adaptive control system. ---
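Several of the references above (the analog VLSI collision-detection IC and the FPGA motion detector) are built around Reichardt-type elementary motion detectors (EMDs), i.e. delay-and-correlate pairs of neighbouring photoreceptor signals. The following Python sketch is a minimal software illustration of that scheme, not a reconstruction of any of the cited circuits; the function name emd_responses, the first-order low-pass stand-in for the delay line, and the parameter tau are our own illustrative choices.

import numpy as np

def emd_responses(frames, tau=0.7):
    """Array of delay-and-correlate (Reichardt-type) elementary motion detectors.

    frames: array of shape (time, pixels) holding photoreceptor samples.
    tau:    coefficient of the first-order low-pass filter that stands in for
            the delay line of each detector arm (illustrative value only).
    Returns shape (time, pixels - 1): positive = rightward, negative = leftward.
    """
    frames = np.asarray(frames, dtype=float)
    delayed = np.zeros_like(frames)
    out = np.zeros((frames.shape[0], frames.shape[1] - 1))
    for t in range(1, frames.shape[0]):
        delayed[t] = tau * delayed[t - 1] + (1.0 - tau) * frames[t]
        # Correlate each pixel's delayed signal with its right neighbour's direct
        # signal, and subtract the mirror-symmetric arm for direction selectivity.
        out[t] = delayed[t, :-1] * frames[t, 1:] - delayed[t, 1:] * frames[t, :-1]
    return out

# A bright bar drifting rightward across 16 photoreceptors gives a net positive output.
t_len, n_pix = 60, 16
stim = np.zeros((t_len, n_pix))
for t in range(t_len):
    stim[t, (t // 3) % n_pix] = 1.0
print(emd_responses(stim).sum() > 0)   # True

Summing the detector outputs over the array gives the kind of crude wide-field direction estimate that the optic-flow regulators described in the flight-control references above would operate on.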
Title: Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review Section 1: Introduction Description 1: Introduce the significance of motion perception in insects and its relevance to computational modeling and applications in intelligent machines. Section 2: Related Survey of Research on Biological Visual Systems Description 2: Review past research on the cellular and subcellular mechanisms of insect visual systems, highlighting key discoveries and studies. Section 3: Related Survey of Bio-inspired Models and Applications Description 3: Provide an overview of bio-inspired computational models derived from insect visual systems and their applications in robotics, especially focusing on vision-based navigation and control. Section 4: Taxonomy of This Review Description 4: Outline the review's structure, categorizing computational models by their direction and size selectivity, and summarizing the significance of motion patterns in biological research and artificial systems. Section 5: Neuron Models of Looming Perception Description 5: Discuss computational models and applications of looming-sensitive neurons inspired by locust visual systems, detailing methods to shape looming selectivity and applications in robotics. Section 6: Biological Research Background Description 6: Provide a detailed background of the biological research on looming-sensitive neurons, focusing on LGMD1 and LGMD2 in locusts. Section 7: Computational Models and Applications Description 7: Review various computational models of looming-sensitive neurons and their applications in mobile robots, UAVs, and ground vehicles, highlighting key methodologies and implementations. Section 8: Neural Systems for Translation Perception Description 8: Review computational models and applications of translation-sensitive neural networks inspired by insect visual neurons and pathways, focusing on direction-selective neurons (DSN) and fly elementary motion detectors (EMDs). Section 9: Computational Models of Locust DSNs Description 9: Discuss the modeling of directionally selective neurons in locusts, detailing the computational structures and their applications in real-world environments. Section 10: Fly Motion Detectors Description 10: Review classic and modern theories on fly motion detectors, including Reichardt detectors and EMDs, and their applications to optic flow-based robotic navigation. Section 11: EMD Models and OF-Based Applications to Robotics Description 11: Discuss specific implementations of EMD models and their use in optic flow-based control strategies for various robotic applications. Section 12: Modeling of Fly ON and OFF Pathways and LPTCs Description 12: Present cutting-edge biological findings on fly visual systems, focusing on ON and OFF pathways and Lobula Plate Tangential Cells (LPTCs) and their role in motion detection. Section 13: Small-Target Motion Perception Models Description 13: Review computational models of small-target motion-sensitive neurons, focusing on the small-target motion detector (STMD) and figure detection neuron (FDN) in insect visual systems. Section 14: Discussion Description 14: Summarize similarities in computational modeling of insect motion detectors, discuss the generation of direction and size selectivity in models, and point out existing and potential hardware implementations. 
Section 15: Integration of Multiple Neural Systems Description 15: Discuss the benefits of integrating multiple neural systems to handle complex visual tasks in dynamic environments and enhance motion perception in artificial systems. Section 16: Hardware Realization of Insect Motion Perception Models Description 16: Explore the hardware implementation of computational models, focusing on single-chip and high-performance solutions, and discuss future trends and applications. Section 17: Conclusion Description 17: Summarize the review, emphasizing the potential of insect-inspired motion perception models for future neuromorphic sensors and intelligent machines.
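Sections 5–7 of the outline above deal with looming-sensitive (LGMD-type) neuron models. As a concrete, hedged illustration, the sketch below evaluates a widely used angular-size-based looming signal, eta(t) = dtheta/dt · exp(−alpha·theta), on a delayed copy of the angular size theta(t) of an approaching object; the parameter values, the discretization, and the helper name eta_looming are our own and are not taken from any specific model in the outline.

import math

def eta_looming(theta, dt=0.01, alpha=3.0, delay_steps=5):
    """Looming signal eta(t) = theta'(t - d) * exp(-alpha * theta(t - d)),
    a common phenomenological abstraction of LGMD/DCMD-like responses.
    All parameter values here are illustrative, not fitted to any cited model."""
    eta = []
    for t in range(len(theta)):
        k = max(t - delay_steps, 1)
        dtheta = (theta[k] - theta[k - 1]) / dt
        eta.append(dtheta * math.exp(-alpha * theta[k]))
    return eta

# Object of half-size r approaching at speed v, collision at time T:
# full angular size theta(t) = 2 * atan(r / (v * (T - t))).
r, v, T, dt = 0.1, 2.0, 2.0, 0.01
theta = [2 * math.atan2(r, v * (T - i * dt)) for i in range(int(T / dt))]
eta = eta_looming(theta, dt)
print(round(max(eta), 3), "peak at t =", eta.index(max(eta)) * dt, "s")

For an object approaching at constant speed, this signal peaks when the object reaches a fixed angular size, a property often cited for this class of models and the one that the collision-avoidance applications in the outline exploit.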
The complexity of small universal Turing machines: a survey
10
--- paper_title: The uniform halting problem for generalized one-state turing machines* paper_content: It is shown that the uniform halting problem for one-state Turing machines is solvable. It remains solvable for various generalizations like one-state Turing machines with two-dimensional tape and jumping reading head. Other generalizations, for example, one-state Turing machines with two tapes, have an unsolvable uniform halting problem. The history of the problem is summarized. --- paper_title: Solvability of the halting problem for certain classes of Turing machines paper_content: One method of proving that some Turing machine is not universal is to prove that the halting problem is solvable for it. Therefore, to obtain a lower bound on the complexity of a universal machine, it is convenient to have a criterion of solvability of the halting problem. In the present paper, we establish some of these criteria; they are formulated in terms of properties of machine graphs and computations. --- paper_title: MINSKY'S SMALL UNIVERSAL TURING MACHINE paper_content: Marvin L. Minsky constructed a 4-symbol 7-state universal Turing machine in 1962. It was first announced in a postscript to [2] and is also described in [3, Sec. 14.8]. This paper contains everything that is needed for an understanding of his machine, including a complete description of its operation. Minsky's machine remains one of the minimal known universal Turing machines. That is, there is no known such machine which decreases one parameter without increasing the other. However, Rogozhin [6], [7] has constructed seven universal machines with the following parameters: His 4-symbol 7-state machine is somewhat different from Minsky's, but all of his machines use a construction similar to that used by Minsky. The following corrections should be noted: First machine, for q600Lq1 read q600Lq7; second machine, for q411Rq4 read q411Rq10; last machine, for q2b2bLq2 read . A generalized Turing machine with 4 symbols and 7 states, closely related to Minsky's, was constructed and used in [5]. --- paper_title: A Universal Turing Machine with 3 States and 9 Symbols paper_content: With an UTM(3,9) we present a new small universal Turing machine with 3 states and 9 symbols, improving a former result of an UTM(3,10). --- paper_title: The Nature of Computation paper_content: Computational complexity is one of the most beautiful fields of modern mathematics, and it is increasingly relevant to other sciences ranging from physics to biology. But this beauty is often buried underneath layers of unnecessary formalism, and exciting recent results like interactive proofs, cryptography, and quantum computing are usually considered too "advanced" to show to the typical student. The aim of this book is to bridge both gaps by explaining the deep ideas of theoretical computer science in a clear and enjoyable fashion, making them accessible to non computer scientists and to computer scientists who finally want to understand what their formalisms are actually telling. This book gives a lucid and playful explanation of the field, starting with P and NP-completeness. The authors explain why the P vs. NP problem is so fundamental, and why it is so hard to resolve. They then lead the reader through the complexity of mazes and games; optimization in theory and practice; randomized algorithms, interactive proofs, and pseudorandomness; Markov chains and phase transitions; and the outer reaches of quantum computing. 
At every turn, they use a minimum of formalism, providing explanations that are both deep and accessible. The book is intended for graduates and undergraduates, scientists from other areas who have long wanted to understand this subject, and experts who want to fall in love with this field all over again. --- paper_title: Decidability and Undecidability of the Halting Problem on Turing Machines, a Survey paper_content: The paper surveys the main results obtained for Turing machines about the frontier between a decidable halting problem and universality. The notion of decidability criterion is introduced. Techniques for decidability proofs and for contracting universal objects are sketchily explained. A new approach for finding very small universal Turing machines is considered in the last part of the paper. --- paper_title: Universality of Tag Systems with P = 2 paper_content: By a simple direct construction it is shown that computations done by Turing machines can be duplicated by a very simple symbol manipulation process. The process is described by a simple form of Post canonical system with some very strong restrictions. This system is monogenic : each formula (string of symbols) of the system can be affected by one and only one production (rule of inference) to yield a unique result. Accordingly, if we begin with a single axiom (initial string) the system generates a simply ordered sequence of formulas, and this operation of a monogenic system brings to mind the idea of a machine. The Post canonical system is further restricted to the “Tag” variety, described briefly below. It was shown in [1] that Tag systems are equivalent to Turing machines. The proof in [1] is very complicated and uses lemmas concerned with a variety of two-tape nonwriting Turing machines. The proof here avoids these otherwise interesting machines and strengthens the main result; obtaining the theorem with a best possible deletion number P = 2. Also, the representation of the Turing machine in the present system has a lower degree of exponentiation, which may be of significance in applications. These systems seem to be of value in establishing unsolvability of combinatorial problems. --- paper_title: Surprising Areas in the Quest for Small Universal Devices paper_content: In this paper, we study a few points indicated in the talk which we presented at MFCSIT meeting in Cork. Our study concerns three main areas: Turing machines, cellular automata and hyperbolic cellular automata. The common thread is the quest for small universal devices. It leads from properties belonging to the classical domain up to results on super-Turing computations. --- paper_title: Three Small Universal Turing Machines paper_content: We are interested by "small" Universal Turing Machines (in short: UTMs), in the framework of 2, 3 or 4 tape-symbols. In particular: - 2 tape-symbols. Apart from the old 24-states machine constructed by Rogozhin in 1982, we know two recent examples requiring 22 states, one due to Rogozhin and one to the author. - 3 tape-symbols. The best example we know, due to Rogozhin, requires 10 states. It uses a strategy quite hard to follow, in particular because even-length productions require a different treatment with respect to odd-length ones. - 4 tape-symbols. The best known machines require 7 states. Among them, the Rogozhin's one require only 26 commands; the Robinson's one, though requiring 27 commands, fournishes an easier way to recover the output when the TM halts. 
In particular, Robinson asked for a 7 × 4 UTM with only 26 commands and an easy treatment of the output. ::: ::: Here we will firstly construct a 7 × 4 UTM with an easy recover of the output which requires only 25 commands; then we will simulate such a machine by a (simple) 10 × 3 UT M and by a 19 × 2 UTM. --- paper_title: Size and structure of universal Turing machines using tag systems paper_content: A data transmission system for transmitting information over a plurality of channels or for multiplexing different information on a single channel is described. The system includes a transmitting station where input data signals are linearly transformed into a complementary pulse sequence and transmitted to a receiving station wherein the transmitted pulses are inversely transformed to recover the original signals. The recovered signals are larger in amplitude than the original input signals by a factor dependent on the number of information channels in the system. Any noise introduced during transmission is not made larger in amplitude with the result that the signal to noise ratio of the received signals is improved. The input signals are supplied to a plurality of encoding devices at the transmitter. The encoding devices include tapped delay line devices having multipliers at the taps that multiply the tapped signals by a plus or minus factor in accordance with the code. The multiplied signals are then combined to produce a pulse sequence which is transmitted to the receiving station. At the receiving station the pulse sequence is applied to a plurality of decoding devices. The decoding devices include tapped delay line devices having multipliers at the taps which multiply the tapped signals by a plus or minus factor according to a code which is complementary to the code used at the transmitting station. The multiplied signals are then combined to recover the original input signal which is increased in amplitude by a given factor. The system may be embodied in acoustic surface wave structures wherein the encoding and decoding devices are interdigital transducers. --- paper_title: On the time complexity of 2-tag systems and small universal Turing machines paper_content: We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves a forty year old result in the area of small universal Turing machines. --- paper_title: Recursive Unsolvability of Post's Problem of "Tag" and other Topics in Theory of Turing Machines paper_content: The equivalence of the notions of effective computability as based (1) on formal systems (e.g., those of Post), and (2) on computing machines (e.g., those of Turing) has been shown in a number of ways. The main results of this paper show that the same notions of computability can be realized within (1) the highly restricted monogenic formal systems called by Post the "Tag" systems, and (2) within a peculiarly restricted variant of Turing machine which has two tapes, but can neither write on nor erase these tapes. From these, or rather from the arithmetization device used in their construction, we obtain also an interesting basis for recursive function theory involving programs of only the simplest arithmetic operations. We show first how Turing machines can be regarded as programmed computers. 
Then by defining a hierarchy of programs which perform certain arithmetic transformations, we obtain the representation in terms of the restricted two-tape machines. These machines, in turn, can be represented in terms of Post normal canonical systems in such a way that each instruction for the machine corresponds to a set of productions in a system which has the monogenic property (for each string in the Post system just one production can operate). This settles the questions raised --- paper_title: A Universal Turing Machine with 3 States and 9 Symbols paper_content: With an UTM(3,9) we present a new small universal Turing machine with 3 states and 9 symbols, improving a former result of an UTM(3,10). --- paper_title: Small weakly universal Turing machines paper_content: We give small universal Turing machines with state-symbol pairs of (6, 2), (3, 3) and (2, 4). These machines are weakly universal, which means that they have an infinitely repeated word to the left of their input and another to the right. They simulate Rule 110 and are currently the smallest known weakly universal Turing machines. --- paper_title: Four small universal Turing machines paper_content: We present universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (15, 2). These machines simulate our new variant of tag system, the bi-tag system and are the smallest known single-tape universal Turing machines with 5, 4, 3 and 2-symbols, respectively. Our 5-symbolmachine uses the same number of instructions (22) as the smallest known universal Turing machine by Rogozhin. Also, all of the universalmachines we present here simulate Turing machines in polynomial time. --- paper_title: Non-Erasing Turing Machines: A New Frontier Between a Decidable Halting Problem and Universality paper_content: We define a new criterion which allows to separate cases when all non erasing Turing machines on {0, 1} have a decidable halting problem from cases where a universal non erasing machine can be constructed. It is the case of the number of left instructions in the machine program. In this paper we give the main ideas of the proof for both parts of the frontier result. We prove that there is a universal non-erasing Turing machine whose program has precisely 3 left instructions and that the halting problem is decidable for any non-erasing Turing machine on alphabet {0, 1}, the program of which contains at most 2 left instructions. For this latter result, we have a uniform decision algorithm. --- paper_title: Small fast universal Turing machines paper_content: We present a small time-efficient universal Turing machine with 5 states and 6 symbols. This Turing machine simulates our new variant of tag system. It is the smallest known universal Turing machine that simulates Turing machine computations in polynomial time. --- paper_title: Splicing systems for universal turing machines paper_content: In this paper, we look at extended splicing systems (i.e., H systems) in order to find how small such a system can be in order to generate a recursively enumerable language. It turns out that starting from a Turing machine M with alphabet A and finite set of states Q which generates a given recursively enumerable language L, we need around 2×|I|+2 rules in order to define an extended H system H which generates L, where I is the set of instructions of Turing machine M. Next, coding the states of Q and the non-terminal symbols of L, we obtain an extended H system H 1 which generates L using |A|+2 symbols. 
At last, by encoding the alphabet, we obtain a splicing system U which generates a universal recursively enumerable set using only two letters. --- paper_title: On the rule complexity of universal tissue p systems paper_content: In the last time several attempts to decrease different complexity parameters (number of membranes, size of rules, number of objects etc.) of universal P systems were done. In this article we consider another parameter which was not investigated yet: the number of rules. We show that 8 rules suffice to recognise any recursively enumerable language if splicing tissue P systems are considered. --- paper_title: Universality of Tag Systems with P = 2 paper_content: By a simple direct construction it is shown that computations done by Turing machines can be duplicated by a very simple symbol manipulation process. The process is described by a simple form of Post canonical system with some very strong restrictions. This system is monogenic : each formula (string of symbols) of the system can be affected by one and only one production (rule of inference) to yield a unique result. Accordingly, if we begin with a single axiom (initial string) the system generates a simply ordered sequence of formulas, and this operation of a monogenic system brings to mind the idea of a machine. The Post canonical system is further restricted to the “Tag” variety, described briefly below. It was shown in [1] that Tag systems are equivalent to Turing machines. The proof in [1] is very complicated and uses lemmas concerned with a variety of two-tape nonwriting Turing machines. The proof here avoids these otherwise interesting machines and strengthens the main result; obtaining the theorem with a best possible deletion number P = 2. Also, the representation of the Turing machine in the present system has a lower degree of exponentiation, which may be of significance in applications. These systems seem to be of value in establishing unsolvability of combinatorial problems. --- paper_title: Some small, multitape universal Turing machines paper_content: Abstract The standard way of assigning complexity to a one-tape Turing machine, by the state-symbol product, is clearly inadequate for machines with more than one tape. Letting an (m,n,p)-machine be a Turing machine with m states, n symbols (including any end markers), and p tapes, the number m· n p gives the maximum number of operating rules for an (m,n,p)-machine and serves as a fairly reasonable complexity criterion, reducing to the state-symbol product when p = 1. A (2,3,2)-machine has been designed to simulate an arbitrary tag system with deletion number 2, and therefore is universal [1]. Also a (1,2,4)-machine, having a fixed loop for one of its four tapes, has been designed ; this machine is universal by its ability to simulate an arbitrary “B machine” (6). Under the criterion stated above, these machines have complexities of 18 and 16, respectively, both less than any reported state-symbol product. Moreover, any (m,n,p)-machine may be simulated, step-by-step, by a (1,2,p′)-machine, where p′ = p⌜log 2 (n)⌝ + ⌜log 2 (m)⌝ . If the logarithms are integral, this transformation is realized with no increase in complexity. However, since the full power of the additional tapes is not utilized in this transformation, it appears that this complexity criterion does not provide a severe enough penalty for the introduction of additional tapes to a Turing machine. 
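Several entries above and below (Cocke and Minsky's "Universality of Tag Systems with P = 2", the 2-tag time-complexity result, and the small machines built by simulating tag systems) revolve around tag systems with deletion number 2. The sketch below is a generic 2-tag simulator, not any of the cited constructions; the production table used in the example is the well-known Collatz-like 2-tag system (a → bc, b → a, c → aaa), included purely as an illustration.

from collections import deque

def run_2tag(productions, word, max_steps=100000):
    """Generic 2-tag system simulator: at every step read the first symbol,
    delete the first two symbols, and append that symbol's production word.
    Stops when fewer than two symbols remain or when max_steps is reached;
    returns the list of words produced along the way."""
    w, history = deque(word), []
    for _ in range(max_steps):
        if len(w) < 2:
            break
        head = w.popleft()
        w.popleft()
        w.extend(productions[head])
        history.append("".join(w))
    return history

# Illustrative production table: the well-known Collatz-like 2-tag system.
rules = {"a": "bc", "b": "a", "c": "aaa"}
lengths = [len(w) for w in run_2tag(rules, "a" * 7) if set(w) == {"a"}]
print(lengths)  # [11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]

Started on a^n, the all-'a' words this example passes through have lengths n/2 (n even) or (3n+1)/2 (n odd), which is how the 3x+1 problem mentioned in the decidability-frontier survey below can be phrased in tag-system terms.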
--- paper_title: Three Small Universal Turing Machines paper_content: We are interested by "small" Universal Turing Machines (in short: UTMs), in the framework of 2, 3 or 4 tape-symbols. In particular: - 2 tape-symbols. Apart from the old 24-states machine constructed by Rogozhin in 1982, we know two recent examples requiring 22 states, one due to Rogozhin and one to the author. - 3 tape-symbols. The best example we know, due to Rogozhin, requires 10 states. It uses a strategy quite hard to follow, in particular because even-length productions require a different treatment with respect to odd-length ones. - 4 tape-symbols. The best known machines require 7 states. Among them, the Rogozhin's one require only 26 commands; the Robinson's one, though requiring 27 commands, fournishes an easier way to recover the output when the TM halts. In particular, Robinson asked for a 7 × 4 UTM with only 26 commands and an easy treatment of the output. ::: ::: Here we will firstly construct a 7 × 4 UTM with an easy recover of the output which requires only 25 commands; then we will simulate such a machine by a (simple) 10 × 3 UT M and by a 19 × 2 UTM. --- paper_title: Size and structure of universal Turing machines using tag systems paper_content: A data transmission system for transmitting information over a plurality of channels or for multiplexing different information on a single channel is described. The system includes a transmitting station where input data signals are linearly transformed into a complementary pulse sequence and transmitted to a receiving station wherein the transmitted pulses are inversely transformed to recover the original signals. The recovered signals are larger in amplitude than the original input signals by a factor dependent on the number of information channels in the system. Any noise introduced during transmission is not made larger in amplitude with the result that the signal to noise ratio of the received signals is improved. The input signals are supplied to a plurality of encoding devices at the transmitter. The encoding devices include tapped delay line devices having multipliers at the taps that multiply the tapped signals by a plus or minus factor in accordance with the code. The multiplied signals are then combined to produce a pulse sequence which is transmitted to the receiving station. At the receiving station the pulse sequence is applied to a plurality of decoding devices. The decoding devices include tapped delay line devices having multipliers at the taps which multiply the tapped signals by a plus or minus factor according to a code which is complementary to the code used at the transmitting station. The multiplied signals are then combined to recover the original input signal which is increased in amplitude by a given factor. The system may be embodied in acoustic surface wave structures wherein the encoding and decoding devices are interdigital transducers. --- paper_title: A DNA and restriction enzyme implementation of Turing Machines. paper_content: Bacteria employ restriction enzymes to cut or restrict DNA ::: at or near specific words in a unique way. Many restriction ::: enzymes cut the two strands of double-stranded DNA at ::: different positions leaving overhangs of single-stranded DNA. Two pieces of DNA may be rejoined or ligated if their ::: terminal overhangs are complementary. Using these operations ::: fragments of DNA, or oligonucleotides, may be inserted and ::: deleted from a circular piece of plasmid DNA. 
We propose ::: an encoding for the transition table of a Turing machine in ::: DNA oligonucleotides and a corresponding series of restrictions and ligations of those oligonucleotides that, when performed on circular DNA encoding an instantaneous description of a Turing machine, simulate the operation of the Turing machine encoded in those oligonucleotides. DNA based Turing machines have been proposed by Charles Bennett but they invoke imaginary enzymes to perform the state-symbol transitions. Our approach differs in that every operation can be performed using commercially available restriction enzymes and ligases. --- paper_title: Universal Computation in Simple One-Dimensional Cellular Automata paper_content: The existence of computation-universal one-dimensional cellular automata with seven states per cell for a transition function depending on the cell itself and its nearest neighbors (r= 1), and four states per cell for r= 2 (when next-nearest neighbors also are included), is shown. It is also demonstrated that a Turing machine with m tape symbols and n internal states can be simulated by a cellular automaton of range r= 1 with m+ n+ 2 states per cell. --- paper_title: Small universal Turing machines paper_content: Numerous results for simple computationally universal systems are presented, with a particular focus on small universal Turing machines. These results are towards finding the simplest universal systems. We add a new aspect to this area by examining trade-offs between the simplicity of universal systems and their time/space computational complexity. Improving on the earliest results we give the smallest known universal Turing machines that simulate Turing machines in O(t2) time. They are also the smallest known machines where direct simulation of Turing machines is the technique used to establish their universality. This result gives a new algorithm for small universal Turing machines. We show that the problem of predicting t steps of the 1D cellular automaton Rule 110 is P-complete. As a corollary we find that the small weakly universal Turing machines of Cook and others run in polynomial time, an exponential improvement on their previously known simulation time overhead. These results are achieved by improving the cyclic tag system simulation time of Turing machines from exponential to polynomial. A new form of tag system which we call a bi-tag system is introduced. We prove that bi-tag systems are universal by showing they efficiently simulate Turing machines. We also show that 2-tag systems efficiently simulate Turing machines in polynomial time. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves on a forty-year old result. We present new small polynomial time universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (15, 2). These machines simulate bi-tag systems and are the smallest known universal Turing machines with 5, 4, 3 and 2-symbols, respectively. The 5-symbol machine uses the same number of instructions (22) as the current smallest known universal Turing machine (Rogozhin’s 6-symbol machine). We give the smallest known weakly universal Turing machines. These machines have state-symbol pairs of (6, 2), (3, 3) and (2, 4). 
The 3-state and 2-state machines are very close to the minimum possible size for weakly universal machines with 3 and 2 states, respectively. --- paper_title: Book review: A new kind of science paper_content: This is a critical review of the book 'A New Kind of Science' by Stephen Wolfram. We do not attempt a chapter-by-chapter evaluation, but instead focus on two areas: computational complexity and fundamental physics. In complexity, we address some of the questions Wolfram raises using standard techniques in theoretical computer science. In physics, we examine Wolfram's proposal for a deterministic model underlying quantum mechanics, with 'long-range threads' to connect entangled particles. We show that this proposal cannot be made compatible with both special relativity and Bell inequality violation. --- paper_title: Statistical Mechanics of Cellular Automata paper_content: Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of ''elementary'' cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to p definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states, or generate self-similar patterns with fractal dimensions approx. =1.59 or approx. =1.69. With ''random'' initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed. --- paper_title: A Concrete View of Rule 110 Computation paper_content: Rule 110 is a cellular automaton that performs repeated simultaneous updates of an infinite row of binary values. The values are updated in the following way: 0s are changed to 1s at all positions where the value to the right is a 1, while 1s are changed to 0s at all positions where the values to the left and right are both 1. Though trivial to define, the behavior exhibited by Rule 110 is surprisingly intricate, and in (Cook, 2004) we showed that it is capable of emulating the activity of a Turing machine by encoding the Turing machine and its tape into a repeating left pattern, a central pattern, and a repeating right pattern, which Rule 110 then acts on. In this paper we provide an explicit compiler for converting a Turing machine into a Rule 110 initial state, and we present a general approach for proving that such constructions will work as intended. The simulation was originally assumed to require exponential time, but surprising results of Neary and Woods (2006) have shown that in fact, only polynomial time is required. We use the methods of Neary and Woods to exhibit a direct simulation of a Turing machine by a tag system in polynomial time. --- paper_title: Universality in Elementary Cellular Automata. 
paper_content: The purpose of this paper is to prove that one of the simplest one-dimensional cellular automata is computationally universal, implying that many questions concerning its behavior, such as whether a particular sequence of bits will occur, or whether the behavior will become periodic, are formally undecidable. The cellular automaton we will prove this for is known as "Rule 110" according to Wolfram's numbering scheme [2]. Being a one-dimensional cellular automaton, it consists of an infinitely long row of cells {C_i : i ∈ ℤ}. Each cell is in one of the two states {0, 1}, and at each discrete time step every cell synchronously updates itself according to the value of itself and its nearest neighbors: ∀i, C_i′ = F(C_{i−1}, C_i, C_{i+1}), where F is the Rule 110 local function: F(1,1,1)=0, F(1,1,0)=1, F(1,0,1)=1, F(1,0,0)=0, F(0,1,1)=1, F(0,1,0)=1, F(0,0,1)=1, F(0,0,0)=0. --- paper_title: A Particular Universal Cellular Automaton paper_content: Signals are a classical tool of cellular automata constructions that proved to be useful for language recognition or firing-squad synchronisation. Particles and collisions formalize this idea one step further, describing regular nets of colliding signals. In the present paper, we investigate the use of particles and collisions for constructions involving an infinite number of interacting particles. We obtain a high-level construction for a new smallest intrinsically universal cellular automaton with 4 states. --- paper_title: Small weakly universal Turing machines paper_content: We give small universal Turing machines with state-symbol pairs of (6, 2), (3, 3) and (2, 4). These machines are weakly universal, which means that they have an infinitely repeated word to the left of their input and another to the right. They simulate Rule 110 and are currently the smallest known weakly universal Turing machines. --- paper_title: Predicting non-linear cellular automata quickly by decomposing them into linear ones paper_content: We show that a wide variety of nonlinear cellular automata (CAs) can be decomposed into a quasidirect product of linear ones. These CAs can be predicted by parallel circuits of depth O(log² t) using gates with binary inputs, or O(log t) depth if "sum mod p" gates with an unbounded number of inputs are allowed. Thus these CAs can be predicted by (idealized) parallel computers much faster than by explicit simulation, even though they are nonlinear. This class includes any CA whose rule, when written as an algebra, is a solvable group. We also show that CAs based on nilpotent groups can be predicted in depth O(log t) or O(1) by circuits with binary or "sum mod p" gates, respectively. We use these techniques to give an efficient algorithm for a CA rule which, like elementary CA rule 18, has diffusing defects that annihilate in pairs. This can be used to predict the motion of defects in rule 18 in O(log² t) parallel time. --- paper_title: The Quest for Small Universal Cellular Automata paper_content: We formalize the idea of intrinsically universal cellular automata, which is strictly stronger than classical computational universality. Thanks to this uniform notion, we construct a new one-dimensional universal automaton with von Neumann neighborhood and only 6 states, thus improving the best known lower bound both for computational and intrinsic universality. --- paper_title: P-completeness of Cellular Automaton Rule 110 paper_content: We show that the problem of predicting t steps of the 1D cellular automaton Rule 110 is P-complete.
The result is found by showing that Rule 110 simulates deterministic Turing machines in polynomial time. As a corollary we find that the small universal Turing machines of Mathew Cook run in polynomial time, this is an exponential improvement on their previously known simulation time overhead. --- paper_title: Towards a Precise Characterization of the Complexity of Universal and Nonuniversal Turing Machines paper_content: A computation universal Turing machine, U, with 2 states, 4 letters, 1 head and 1 two-dimensional tape is constructed by a translation of a universal register-machine language into networks over some simple abstract automata and, finally, of such networks into U. As there exists no universal Turing machine with 2 states, 2 letters, 1 head and 1 two-dimensional tape only the 2-state, 3-letter case for such machines remains an open problem. An immediate consequence of the construction of U is the existence of a universal 2-state, 2-letter, 2-head, 1 two-dimensional tape Turing machine, giving a first sharp boundary of the necessary complexity of universal Turing machines. --- paper_title: A simplified universal Turing machine paper_content: In 1936 Turing (1) defined a class of logical machines (which he called a - machines, but which are now generally called Turing machines) which he used as an aid in proving certain results in mathematical logic, and which should prove of interest in connection with the theory of control and switching systems. Given any logical operation or arithmetical computation for which complete instructions for carrying out can be supplied, it is possible to design a Turing machine which can perform this operation. --- paper_title: Some small, multitape universal Turing machines paper_content: Abstract The standard way of assigning complexity to a one-tape Turing machine, by the state-symbol product, is clearly inadequate for machines with more than one tape. Letting an (m,n,p)-machine be a Turing machine with m states, n symbols (including any end markers), and p tapes, the number m· n p gives the maximum number of operating rules for an (m,n,p)-machine and serves as a fairly reasonable complexity criterion, reducing to the state-symbol product when p = 1. A (2,3,2)-machine has been designed to simulate an arbitrary tag system with deletion number 2, and therefore is universal [1]. Also a (1,2,4)-machine, having a fixed loop for one of its four tapes, has been designed ; this machine is universal by its ability to simulate an arbitrary “B machine” (6). Under the criterion stated above, these machines have complexities of 18 and 16, respectively, both less than any reported state-symbol product. Moreover, any (m,n,p)-machine may be simulated, step-by-step, by a (1,2,p′)-machine, where p′ = p⌜log 2 (n)⌝ + ⌜log 2 (m)⌝ . If the logarithms are integral, this transformation is realized with no increase in complexity. However, since the full power of the additional tapes is not utilized in this transformation, it appears that this complexity criterion does not provide a severe enough penalty for the introduction of additional tapes to a Turing machine. --- paper_title: On quasi-unilateral universal Turing machines paper_content: Four universal Turing machines are given with strong limitations on the number of left instructions in their program. One of the machines is morever non-erasing, i.e. it does not erase any 1 written on the tape. 
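The size measure running through the abstracts above is the state-symbol product of a single-tape machine. Purely as an illustration of the objects being counted, the sketch below is a generic single-tape Turing machine simulator together with the product for a toy program; the 2-state, 2-symbol unary-increment program shown is our own example, not one of the machines from the cited papers.

def run_tm(program, tape, state="q0", blank="0", halt="H", max_steps=10000):
    """Run a single-tape Turing machine.  program maps (state, symbol) to
    (write, move, next_state) with move in {"L", "R"}.  This is a generic
    simulator, not a reconstruction of any machine from the cited papers."""
    cells = dict(enumerate(tape))
    pos, steps = 0, 0
    while state != halt and steps < max_steps:
        sym = cells.get(pos, blank)
        write, move, state = program[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Our own toy 2-state, 2-symbol program: append one '1' to a unary number.
program = {
    ("q0", "1"): ("1", "R", "q0"),  # scan right over the existing 1s
    ("q0", "0"): ("1", "L", "q1"),  # write an extra 1 on the first blank
    ("q1", "1"): ("1", "L", "q1"),  # return to the left end
    ("q1", "0"): ("0", "R", "H"),   # halt (the halt state is conventionally not counted)
}
states = {s for s, _ in program}
symbols = {a for _, a in program}
print(run_tm(program, "111"), "state-symbol product:", len(states) * len(symbols))

Under the convention used in this literature the halt state is not counted, so the toy program has a state-symbol product of 4, well below the region where the smallest known universal machines live.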
--- paper_title: A universal reversible turing machine paper_content: A reversible Turing machines is a computing model with a "backward deterministic" property, which is closely related to physical reversibility. In this paper, we study the problem of finding a small universal reversible Turing machine (URTM). As a result, we obtained a 17-state 5-symbol URTM in the quintuple form that can simulate any cyclic tag system. --- paper_title: Recursive Unsolvability of Post's Problem of "Tag" and other Topics in Theory of Turing Machines paper_content: The equivalence of the notions of effective computability as based (1) on formal systems (e.g., those of Post), and (2) on computing machines (e.g., those of Turing) has been shown in a number of ways. The main results of this paper show that the same notions of computability can be realized within (1) the highly restricted monogenic formal systems called by Post the "Tag" systems, and (2) within a peculiarly restricted variant of Turing machine which has two tapes, but can neither write on nor erase these tapes. From these, or rather from the arithmetization device used in their construction, we obtain also an interesting basis for recursive function theory involving programs of only the simplest arithmetic operations. We show first how Turing machines can be regarded as programmed computers. Then by defining a hierarchy of programs which perform certain arithmetic transformations, we obtain the representation in terms of the restricted two-tape machines. These machines, in turn, can be represented in terms of Post normal canonical systems in such a way that each instruction for the machine corresponds to a set of productions in a system which has the monogenic property (for each string in the Post system just one production can operate). This settles the questions raised --- paper_title: Solvability of the halting problem for certain classes of Turing machines paper_content: One method of proving that some Turing machine is not universal is to prove that the halting problem is solvable for it. Therefore, to obtain a lower bound on the complexity of a universal machine, it is convenient to have a criterion of solvability of the halting problem. In the present paper, we establish some of these criteria; they are formulated in terms of properties of machine graphs and computations. --- paper_title: The laterality problem for non-erasing turing machines on {0,1} is completely solved paper_content: In a previous work, [2], we defined a criterion which allowed to separate ceses when all non-erasing Turing machines on {0,1} have a decidable halting problem from cases where a universal non-erasing machine can be constructed. Applying a theorem which entails the just indicated frontier and analogous techniques based upon a qualitative study of the motions of the head of a Turing machine on its tape, another frontier result is here proved, based upon a new criterion, namely the number of left instructions. In this paper, a complete proof of the decidability part of the results is supplied. The case of a single left instruction with a finite alphabet in a generalized non-erasing context is also delt with. Thus, the laterality problem, raised in the early seventies, see [9], solved on {0,1} alphabet without restriction, is now completely solved in the non-erasing case. 
--- paper_title: The Solvability of the Halting Problem for 2-State Post Machines paper_content: A Post machine is a Turing machine which cannot both write and move on the same machine step. It is shown that the halting problem for the class of 2-state Post machines is solvable. Thus, there can be no universal 2-state Post machine. This is in contrast with the result of Shannon that there exist universal 2-state Turing machines when the machines are capable of both writing and moving on the same step. --- paper_title: Recursive Unsolvability of a problem of Thue paper_content: Alonzo Church suggested to the writer that a certain problem of Thue [6]' might be proved unsolvable by the methods of [5]. We proceed to prove the problem recursively unsolvable, that is, unsolvable in the sense of Church [1], but by a method meeting the special needs of the problem. Thue's (general) problem is the following. Given a finite set of symbols al, a2, ... , a, , we consider arbitrary strings (Zeichenreihen) on those symbols, that is, rows of symbols each of which is in the given set. Null strings are included. We further have given a finite set of pairs of corresponding strings on the ai's, (Al , B1), (A2 , B2), , I (An , B,). A string R is said to be a substring of a string S if S can be written in the form URV, that is, S consists of the letters, in order of occurrence, of some string U, followed by the letters of R, followed by the letters of some string V. Strings P and Q are then said to be similar if Q can be obtained from P by replacing a substring Ai or Bi of P by its correspondent Bi, Ai. Clearly, if P and Q are similar, Q and P are similar. Finally, P and Q are said to be equivalent if there is a finite set R1 , R2, * * *, R, of strings on a,, * * *, a, such that in the sequence of strings P, R1, R2, ... , RX Q each string except the last is similar to the following string. It is readily seen that this relation between strings on a,, * * *, a, ,is indeed an equivalence relation. Thue's problem is then the problem of determining for arbitrarily given strings A, B on al, * * *, a;, whether, or no, A and B are equivalent. This problem, at least for the writer, is more readily placed if it is restated in terms of a special form of the canonical systems of [3]. In that notation, strings C and D are similar if D can be obtained from C by applying to C one of the following operations: --- paper_title: Non-Erasing Turing Machines: A New Frontier Between a Decidable Halting Problem and Universality paper_content: We define a new criterion which allows to separate cases when all non erasing Turing machines on {0, 1} have a decidable halting problem from cases where a universal non erasing machine can be constructed. It is the case of the number of left instructions in the machine program. In this paper we give the main ideas of the proof for both parts of the frontier result. We prove that there is a universal non-erasing Turing machine whose program has precisely 3 left instructions and that the halting problem is decidable for any non-erasing Turing machine on alphabet {0, 1}, the program of which contains at most 2 left instructions. For this latter result, we have a uniform decision algorithm. --- paper_title: Logical reversibility of computation paper_content: The usual general-purpose computing automaton (e.g.. a Turing machine) is logically irreversible- its transition function lacks a single-valued inverse. 
Here it is shown that such machines may be made logically reversible at every step, while retaining their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step. In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure. The second stage consists of printing out the desired output. The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition. The final machine configuration thus contains the desired output and a reconstructed copy of the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine. The biosynthesis of messenger RNA is discussed as a physical example of reversible computation. --- paper_title: A simplified universal Turing machine paper_content: In 1936 Turing (1) defined a class of logical machines (which he called a-machines, but which are now generally called Turing machines) which he used as an aid in proving certain results in mathematical logic, and which should prove of interest in connection with the theory of control and switching systems. Given any logical operation or arithmetical computation for which complete instructions for carrying out can be supplied, it is possible to design a Turing machine which can perform this operation. --- paper_title: Counter machines and counter languages paper_content: The languages recognizable by time- and space-restricted multiple-counter machines are compared to the languages recognizable by similarly restricted multiple-tape Turing machines. Special emphasis is placed on languages definable by machines which operate in "real time". Time and space requirements for counter machines and Turing machines are analyzed. A number of questions which remain open for time-restricted Turing machines are settled for their counter machine counterparts. --- paper_title: Towards a Precise Characterization of the Complexity of Universal and Nonuniversal Turing Machines paper_content: A computation universal Turing machine, U, with 2 states, 4 letters, 1 head and 1 two-dimensional tape is constructed by a translation of a universal register-machine language into networks over some simple abstract automata and, finally, of such networks into U. As there exists no universal Turing machine with 2 states, 2 letters, 1 head and 1 two-dimensional tape, only the 2-state, 3-letter case for such machines remains an open problem. An immediate consequence of the construction of U is the existence of a universal 2-state, 2-letter, 2-head, 1 two-dimensional tape Turing machine, giving a first sharp boundary of the necessary complexity of universal Turing machines. --- paper_title: Complexity of Langton's ant paper_content: The virtual ant introduced by Langton [Physica D 22 (1986) 120] has an interesting behavior, which has been studied in several contexts.
Here we give a construction to calculate any boolean circuit with the trajectory of a single ant. This proves the P-hardness of the system and implies, through the simulation of one-dimensional cellular automata and Turing machines, the universality of the ant and the undecidability of some problems associated to it. --- paper_title: Studying artificial life with cellular automata paper_content: Abstract Biochemistry studies the way in which life emerges from the interaction of inanimate molecules. In this paper we look into the possibility that life could emerge from the interaction of inanimate artificial molecules. Cellular automata provide us with the logical universes within which we can embed artificial molecules in the form of propagating, virtual automata. We suggest that since virtual automata have the computational capacity to fill many of the functional roles played by the primary biomolecules, there is a strong possibility that the ‘molecular logic’ of life can be embedded within cellular automata and that, therefore, artificial life is a distinct possibility within these highly parallel computer structures. --- paper_title: Three Small Universal Turing Machines paper_content: We are interested by "small" Universal Turing Machines (in short: UTMs), in the framework of 2, 3 or 4 tape-symbols. In particular: - 2 tape-symbols. Apart from the old 24-states machine constructed by Rogozhin in 1982, we know two recent examples requiring 22 states, one due to Rogozhin and one to the author. - 3 tape-symbols. The best example we know, due to Rogozhin, requires 10 states. It uses a strategy quite hard to follow, in particular because even-length productions require a different treatment with respect to odd-length ones. - 4 tape-symbols. The best known machines require 7 states. Among them, the Rogozhin's one require only 26 commands; the Robinson's one, though requiring 27 commands, fournishes an easier way to recover the output when the TM halts. In particular, Robinson asked for a 7 × 4 UTM with only 26 commands and an easy treatment of the output. ::: ::: Here we will firstly construct a 7 × 4 UTM with an easy recover of the output which requires only 25 commands; then we will simulate such a machine by a (simple) 10 × 3 UT M and by a 19 × 2 UTM. --- paper_title: Computation: Finite and Infinite Machines paper_content: From the Preface (See Front Matter for full Preface) ::: ::: Man has within a single generation found himself sharing the world with a strange new species: the computers and computer-like machines. Neither history, nor philosophy, nor common sense will tell us how these machines will affect us, for they do not do "work" as did machines of the Industrial Revolution. Instead of dealing with materials or energy, we are told that they handle "control" and "information" and even "intellectual processes." There are very few individuals today who doubt that the computer and its relatives are developing rapidly in capability and complexity, and that these machines are destined to play important (though not as yet fully understood) roles in society's future. Though only some of us deal directly with computers, all of us are falling under the shadow of their ever-growing sphere of influence, and thus we all need to understand their capabilities and their limitations. 
::: ::: It would indeed be reassuring to have a book that categorically and systematically described what all these machines can do and what they cannot do, giving sound theoretical or practical grounds for each judgment. However, although some books have purported to do this, it cannot be done for the following reasons: a) Computer-like devices are utterly unlike anything which science has ever considered---we still lack the tools necessary to fully analyze, synthesize, or even think about them; and b) The methods discovered so far are effective in certain areas, but are developing much too rapidly to allow a useful interpretation and interpolation of results. The abstract theory---as described in this book---tells us in no uncertain terms that the machines' potential range is enormous, and that its theoretical limitations are of the subtlest and most elusive sort. There is no reason to suppose machines have any limitations not shared by man. --- paper_title: The Busy Beaver Competition: a historical survey paper_content: Tibor Rado defined the Busy Beaver Competition in 1962. He used Turing machines to give explicit definitions for some functions that are not computable and grow faster than any computable function. He put forward the problem of computing the values of these functions on numbers 1, 2, 3, ... More and more powerful computers have made possible the computation of lower bounds for these values. In 1988, Brady extended the definitions to functions on two variables. We give a historical survey of these works. The successive record holders in the Busy Beaver Competition are displayed, with their discoverers, the date they were found, and, for some of them, an analysis of their behavior. --- paper_title: On non-computable functions paper_content: The construction of non-computable functions used in this paper is based on the principle that a finite, non-empty set of non-negative integers has a largest element. Also, this principle is used only for sets which are exceptionally well-defined by current standards. No enumeration of computable functions is used, and in this sense the diagonal process is not employed. Thus, it appears that an apparently self-evident principle, of constant use in every area of mathematics, yields non-constructive entities. --- paper_title: Recursive Unsolvability of Post's Problem of "Tag" and other Topics in Theory of Turing Machines paper_content: The equivalence of the notions of effective computability as based (1) on formal systems (e.g., those of Post), and (2) on computing machines (e.g., those of Turing) has been shown in a number of ways. The main results of this paper show that the same notions of computability can be realized within (1) the highly restricted monogenic formal systems called by Post the "Tag" systems, and (2) within a peculiarly restricted variant of Turing machine which has two tapes, but can neither write on nor erase these tapes. From these, or rather from the arithmetization device used in their construction, we obtain also an interesting basis for recursive function theory involving programs of only the simplest arithmetic operations. We show first how Turing machines can be regarded as programmed computers. Then by defining a hierarchy of programs which perform certain arithmetic transformations, we obtain the representation in terms of the restricted two-tape machines. 
These machines, in turn, can be represented in terms of Post normal canonical systems in such a way that each instruction for the machine corresponds to a set of productions in a system which has the monogenic property (for each string in the Post system just one production can operate). This settles the questions raised --- paper_title: The Solvability of the Halting Problem for 2-State Post Machines paper_content: A Post machine is a Turing machine which cannot both write and move on the same machine step. It is shown that the halting problem for the class of 2-state Post machines is solvable. Thus, there can be no universal 2-state Post machine. This is in contrast with the result of Shannon that there exist universal 2-state Turing machines when the machines are capable of both writing and moving on the same step. --- paper_title: The Busy Beaver Competition: a historical survey paper_content: Tibor Rado defined the Busy Beaver Competition in 1962. He used Turing machines to give explicit definitions for some functions that are not computable and grow faster than any computable function. He put forward the problem of computing the values of these functions on numbers 1, 2, 3, ... More and more powerful computers have made possible the computation of lower bounds for these values. In 1988, Brady extended the definitions to functions on two variables. We give a historical survey of these works. The successive record holders in the Busy Beaver Competition are displayed, with their discoverers, the date they were found, and, for some of them, an analysis of their behavior. --- paper_title: Limits to Parallel Computation: P-Completeness Theory paper_content: This volume provides an ideal introduction to key topics in parallel computing. With its cogent overview of the essentials of the subject as well as lists of P -complete- and open problems, extensive remarks corresponding to each problem, a thorough index, and extensive references, the book will prove invaluable to programmers stuck on problems that are particularly difficult to parallelize. In providing an up-to-date survey of parallel computing research from 1994, Topics in Parallel Computing will prove invaluable to researchers and professionals with an interest in the super computers of the future. --- paper_title: Busy beaver competition and Collatz-like problems paper_content: SummaryThe Busy Beaver Competition is held by Turing machines. The better ones halt taking much time or leaving many marks, when starting from a blank tape. In order to understand the behavior of some Turing machines that were once record holders in the five-state Busy Beaver Competition, we analyze their halting problem on all inputs. We prove that the halting problem for these machines amounts to a well-known problem of number theory, that of the behavior of the repeated iteration of Collatz-like functions, that is functions defined by cases according to congruence classes. --- paper_title: Some small, multitape universal Turing machines paper_content: Abstract The standard way of assigning complexity to a one-tape Turing machine, by the state-symbol product, is clearly inadequate for machines with more than one tape. Letting an (m,n,p)-machine be a Turing machine with m states, n symbols (including any end markers), and p tapes, the number m· n p gives the maximum number of operating rules for an (m,n,p)-machine and serves as a fairly reasonable complexity criterion, reducing to the state-symbol product when p = 1. 
A (2,3,2)-machine has been designed to simulate an arbitrary tag system with deletion number 2, and therefore is universal [1]. Also a (1,2,4)-machine, having a fixed loop for one of its four tapes, has been designed; this machine is universal by its ability to simulate an arbitrary “B machine” (6). Under the criterion stated above, these machines have complexities of 18 and 16, respectively, both less than any reported state-symbol product. Moreover, any (m,n,p)-machine may be simulated, step-by-step, by a (1,2,p′)-machine, where p′ = p⌈log₂(n)⌉ + ⌈log₂(m)⌉. If the logarithms are integral, this transformation is realized with no increase in complexity. However, since the full power of the additional tapes is not utilized in this transformation, it appears that this complexity criterion does not provide a severe enough penalty for the introduction of additional tapes to a Turing machine. --- paper_title: Frontier between decidability and undecidability: a survey paper_content: Abstract After recalling the definition of decidability and universality, we first give a survey of results on the as exact as possible border between a decidable problem and the corresponding undecidability question in various models of discrete computation: diophantine equations, word problem, Post systems, molecular computations, register machines, neural networks, cellular automata, tiling the plane and Turing machines with planar tape. We then go on with results more specific to classical Turing machines, with a survey of results and a sketchy account of the techniques used. We conclude with an illustration based on simulating the 3x+1 problem. --- paper_title: Study of limits of solvability in tag systems paper_content: In this paper we will give an outline of the proof of the solvability of the halting and reachability problem for 2-symbolic tag systems with a deletion number v = 2. This result will be situated in a more general context of research on limits of solvability in tag systems. ---
Title: The Complexity of Small Universal Turing Machines: A Survey Section 1: Introduction Description 1: Introduce the topic of small universal Turing machines, including historical context and significant contributions. Section 2: Time and Size Efficiency of Universal Machines Description 2: Discuss the time and program size efficiency of various universal machines, focusing on polynomial time simulators and 2-tag systems. Section 3: Non-standard Universal Turing Machines: Time Efficiency and Program Size Description 3: Explore the program size and time complexity results for generalized and restricted models of Turing machines. Section 4: Weak Universality and Rule 110 Description 4: Cover the concept of weak and semi-weak universality, including the use of repetitive patterns and specific examples like Rule 110. Section 5: Other Non-standard Universal Turing Machines Description 5: Present additional generalizations and examples of non-standard universal machines, including those with multiple tapes and multidimensional tapes. Section 6: Restricted Universal Turing Machines Description 6: Explain the results for restricted models of Turing machines and their universality, including non-erasing and non-writing Turing machines. Section 7: Universal Turing Machines with Multidimensional Tapes: Time Efficiency and Program Size Description 7: Detail the time efficiency and program size results for Turing machines with multidimensional tapes. Section 8: Termination of a Computation Description 8: Discuss issues related to the termination of computations in various models of universal Turing machines. Section 9: Busy Beavers Description 9: Explore the busy beaver problem and its relationship to small universal Turing machines. Section 10: Further Work Description 10: Highlight potential areas for future research in the field of small universal Turing machines.
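As a concrete companion to the machine-format conventions used throughout the references above (state-symbol tables, step counts, marks left on a blank tape), the following minimal Python sketch runs a one-tape machine from a transition table and reports Busy Beaver style statistics. It is purely illustrative: the table shown is the well-known 2-state, 2-symbol champion, and the step limit is an arbitrary safeguard rather than anything taken from the cited papers.

```python
from collections import defaultdict

def run_turing_machine(table, start="A", halt="H", max_steps=10_000):
    """Run a (state, symbol) -> (write, move, next_state) table on a blank tape.

    Returns (steps_taken, ones_left_on_tape), the two quantities tracked in
    Busy Beaver comparisons, or None if the step budget is exhausted.
    """
    tape = defaultdict(int)          # blank two-way infinite tape of 0s
    head, state, steps = 0, start, 0
    while state != halt:
        if steps >= max_steps:
            return None              # give up; halting is undecidable in general
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The classic 2-state, 2-symbol Busy Beaver champion: halts after 6 steps with 4 ones.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_turing_machine(bb2))       # (6, 4)
```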
A Systematic Literature Review to Determine the Web Accessibility Issues in Saudi Arabian University and Government Websites for Disable People
7
--- paper_title: Web accessibility: a government's effort to promote e-accessibility in Thailand paper_content: "Web accessibility" was first officially introduced and studied in the Ministry of Information Communication Technology in 2003. To support the idea of universal services (one stop for all), the government has been planning to develop an e-Government system that will provide all major services and official information on the internet (web based system). As providing accessibility for persons with disabilities (pwds) has become a major concern for the modern information society, accessibility is now an important feature for all government websites, especially for websites that provide services to pwds. The preliminary survey of the government websites in 2003 showed a remarkably low standard of web accessibility. Only 3 out of 267 government websites passed the test of the World Wide Web Consortium (W3C) guidelines on web accessibility. To support the promotion of web accessibility in Thailand, the Assistive Technology Center (ASTEC) plays an important role in providing the necessary information and tools for web accessibility development to the government. This paper presents our work from the past to the current state of the project, including: the development of a national guideline on web accessibility that is agreeable among web developers, the development of a web accessibility evaluation tool, and the preparation of policy planning to promote web accessibility in Thailand. --- paper_title: Global e-government Web Accessibility: An Empirical Examination of EU, Asian and African Sites paper_content: Accessibility of government Web sites is an important factor for the inclusion of disabled persons, enabling them to fully utilize a variety of government services and information. In this paper, we examine the levels of disability accessibility for a variety of e-government sites in the European Union (EU), Asia and Africa. The study was conducted in 2008, and the results showed that the vast majority of sites in both developed and underdeveloped countries did not meet either legal requirements or industry guidelines in providing fully accessible government sites. Sites located in countries with stronger disabilities laws did score better in the compliance levels. Through comparison of the results, it is concluded that for governments to meet the needs of their disabled constituents, they need to implement a multiphase approach to site development, including stronger legal mandates and establishing localized best practice guidelines. ::: ::: Keywords: Accessibility, Disability, e-government, WCAG, W3C --- paper_title: E-learning in Saudi Arabia: Past, present and future paper_content: The emergence of information and instructional technologies and their influence on teaching and learning has brought about significant changes in the academic environment in the Kingdom of Saudi Arabia (KSA). The new learning trend has made it mandatory to equip teachers in educational institutions with the necessary skills to cope with the new challenges. The urgent need for e-learning in KSA has resulted from the massive population growth vis-à-vis the scarcity of teachers in both quantity and quality, including the need to reduce the financial burden. Since 2002, when e-learning started in KSA, it has gained recognition and interest among academic institutions, academics and students, albeit at a relatively slow pace. This paper takes into account the growth of e-learning in KSA.
It analyzes the potential need and the overall impacts of e-learning on various stakeholders. The paper also discusses the current e-learning developments as well as future prospects. --- paper_title: Web Accessibility for Disabled: A Case Study of Government Websites in Pakistan paper_content: In this era of information technology, governments around the world are opting for electronic government, and official websites are now used by a diverse population for the purpose of information retrieval. A number of disabled persons are part of this society, but they are ignored when web projects are planned and developed. If this software development practice continues, disabled persons will not be able to take advantage of the electronic government era. This study evaluates the websites of the central government in Pakistan, including all ministries and divisions, using accessibility evaluation tools based on the World Wide Web Consortium's (W3C) web accessibility standards. Functional Accessibility Evaluator and Total Validator are the tools used for the evaluation process. The results show that most of the websites are not developed according to the accessibility standards for disabled persons. In the light of these results, recommendations are made to improve the accessibility of these websites for disabled persons. --- paper_title: Evaluating Accessibility of Malaysian Ministries Websites using WCAG 2.0 and Section 508 Guideline paper_content: Although e-government practice in Malaysia shows considerable progress, accessibility of the government websites has been cited as the next key concern that deserves further attention. It is therefore essential to ensure greater compliance of the government websites with established web accessibility standards and guidelines. This is in line with an initiative to promote a better government delivery system. In response, this paper reports the accessibility status of 25 Malaysian ministries' websites as outlined in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act 1973 (Section 508). Using AChecker and WAVE as automated accessibility evaluation tools, the results suggest relatively low compliance with the standards amongst the ministries' websites examined. Further improvements are recommended, particularly on the contrast view requirement as well as the use of input and image-related elements. The report can provide meaningful guidance for webmasters to locate and address the errors accordingly. Fully complying with the stipulated guidelines, therefore, ensures an equal experience among citizens in accessing government-related information and services. ---
Title: A Systematic Literature Review to Determine the Web Accessibility Issues in Saudi Arabian University and Government Websites for Disable People Section 1: INTRODUCTION Description 1: Introduce the importance of web accessibility, provide statistics on disabilities, discuss the treatises and legislation, and explain the focus of the paper on Saudi Arabia's government and university websites. Section 2: WEB ACCESSIBILITY AND WCAG 2.0 GUIDELINES Description 2: Briefly explain web accessibility and describe the principles outlined in the Web Content Accessibility Guidelines (WCAG) 2.0. Section 3: LEGISLATION ON WEB ACCESSIBILITY Description 3: Discuss existing laws and regulations concerning web accessibility in various countries, including Saudi Arabia. Section 4: E-SERVICES PROVIDED BY THE SAUDI GOVERNMENT AND UNIVERSITIES Description 4: Discuss different e-services provided by Saudi government and universities, the increase in internet usage, and the importance of making these services accessible. Section 5: EXISTING RESEARCH STUDIES ON WEB ACCESSIBILITY Description 5: Review existing research studies conducted globally and within Saudi Arabia on web accessibility issues and summarize their findings. Section 6: RESEARCH METHODOLOGY Description 6: Describe the systematic literature review (SLR) methodology, including the formation of research questions and identification of relevant publications. Section 7: DISCUSSION ON WEB ACCESSIBILITY ISSUES Description 7: Discuss the various web accessibility issues identified from the literature review and the factors affecting web accessibility. Section 8: CONCLUSION AND FUTURE WORK Description 8: Summarize the findings of the paper, discuss the importance of addressing web accessibility issues in Saudi Arabia, and propose areas for future research.
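The evaluation studies listed above rely on automated checkers such as AChecker, WAVE, Functional Accessibility Evaluator and Total Validator. The sketch below does not reimplement any of those tools; it only illustrates, with the Python standard library, the flavour of one elementary WCAG 2.0 check they perform (flagging img elements with no alt attribute, guideline 1.1). The sample HTML fragment is invented for the example.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute (WCAG 2.0 guideline 1.1)."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            line, _ = self.getpos()
            self.problems.append(f"line {line}: <img> without alt text")

# A made-up page fragment used only for illustration.
sample_html = """
<html><body>
  <img src="logo.png" alt="University logo">
  <img src="banner.png">
</body></html>
"""

checker = MissingAltChecker()
checker.feed(sample_html)
for problem in checker.problems:
    print(problem)                   # reports only the second image
```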
DOI:10.1068/b31073 A review of rural land-use planning models
11
--- paper_title: ALES: a framework for land evaluation using a microcomputer paper_content: ALES, the Automated Land Evaluation System, is a microcomputer program that allows land evaluators to build their own knowledge-based systems with which they can compute the physical and economic suitability of land map units, in accordance with the FAO's Framework for Land Evaluation. The economic suitability of a land mapping unit for a land utilization type is determined from the predicted annual gross margin per unit area. Increasing limitations result in increased costs of production, decreased yields, or both. Evaluators build decision trees to express inferences from land characteristics to land qualities, from land qualities to predicted yields, and from land qualities to overall physical suitability. A representative model is described. --- paper_title: Comparison of Boolean and Fuzzy Classification Methods in Land Suitability Analysis by Using Geographical Information Systems paper_content: In this paper the information content of Boolean and fuzzy-set-based approaches to the problem of analyzing land suitability for agriculture within a geographical information system (GIS) is assessed. First, the two approaches to this problem are stated and formalized in the context of land-suitability evaluation. A database comprising 642 unique areas, 7 land qualities, 13 land characteristics, and 2 crop types is defined and described. Land-use suitability ratings for two crops, wetland rice and soybean, are generated by using Boolean and fuzzy methods. Results produced by the two methods are compared in terms of their usefulness for agricultural land-use planning. The ARC/INFO vector-based GIS software package is utilized. The study area is the Cimanuk watershed in northwest Java, Indonesia. --- paper_title: A land evaluation project in Greece using GIS and based on Boolean and fuzzy set methodologies paper_content: Abstract In Mediterranean regions there is little experience in using GIS as an aid to land evaluation. This paper reports a project in Greece which investigated the usefulness of such technology. Particular emphasis was also given to comparing the results of land evaluation using Boolean and fuzzy set methodologies. The need was to produce the results as quickly and efficiently as possible to aid agricultural planning. By using a GIS, a series of single factor and land evaluation maps was produced for a range of crops; a land suitability map for receipt of sewage was also derived. A comparison of results from using Boolean and fuzzy set methodologies highlighted the advantages of the latter, although critical decisions are required on the choice of membership functions and weights, which have a major effect on the results. --- paper_title: LAND SUITABILITY CLASSIFICATION BASED ON FUZZY SET THEORY paper_content: Abstract In this study, the fuzzy set theory is applied to the field of land evaluation. The result of the land suitability classification for a defined land utilization type applied to a land unit is no longer a single land suitability class, as in the traditional set theory, but an expression of the degree of belonging of the land unit to each of the discerned suitability classes. This principle is applied to a land assessment for grain maize in Hahen County, China. The classification results obtained with the fuzzy set method show a closer relationship with observed yields than previously proposed suitability classification methods.
Key-words: Fuzzy sets, land suitability classification, maize. 1. INTRODUCTION Land evaluation is concerned with the assessment of land performance for specified land utilization purposes. Such evaluation is essential in the process of land use planning, because it may guide decisions on land utilization in such a way that the resources of the environment are optimally used and that a sustained land management is achieved. Land suitability classification is an approach in land evaluation that concerns the appraisal and grouping of specific areas of land in terms of their suitability for defined uses (FAO, 1976). The FAO proposed a general classi- --- paper_title: Fertility capability soil classification: a tool to help assess soil quality in the tropics paper_content: The soil quality paradigm was originally developed in the temperate region with the overarching objective of approaching air quality and water quality standards. Although holistic and systems-oriented, soil quality focused principally on issues arising from large nutrient and energy inputs to agricultural lands. Soil quality in the tropics, however, focuses on three overarching concerns: food insecurity, rural poverty and ecosystem degradation. Soil science in the tropics relies heavily on quantitative attributes of soils that can be measured. The emotional, value-laden and "measure everything" approach proposed by some proponents of the soil quality paradigm has no place in the tropics. Soil quality in the tropics must be considered a component of an integrated natural resource management framework (INRM). Based on quantitative topsoil attributes and soil taxonomy, the fertility capability soil classification (FCC) system is probably a good starting point to approach soil quality for the tropics and is widely used. FCC does not deal with soil attributes that can change in less than 1 year, but those that are either dynamic at time scales of years or decades with management, as well as inherent ones that do not change in less than a century. FCC attributes can be positive or negative depending on the land use as well as the temporal and spatial scales in question. Version 4 is introduced in this paper. The main changes are to include the former h condition modifier (acid, but not Al-toxic) with "no major chemical limitations" because field experience has shown little difference between the two, and to introduce a new condition modifier m that denotes organic carbon saturation deficit. Additional modifiers are needed for nutrient depletion, compaction, surface sealing and other soil biological attributes, but there is insufficient evidence to propose robust, quantitative threshold values at this time. The authors call on those actively involved in linking these attributes with plant growth and ecosystem functions to provide additional suggestions that would enhance FCC. The use of diffuse reflectance spectroscopy (DRS) shows great potential on a wide range of tropical soils. The evolution of soil science from a qualitative art into a --- paper_title: Evaluating the consistency of results for the agricultural land evaluation and site assessment (LESA) system paper_content: (...). The LESA system uses a measure of soil productivity to evaluate land quality and a series of questions or site factors to evaluate suitability of a site for urban development.
The tests were part of a class exercise for a land use course in agronomy to determine, first, if student responses to LESA were the same as the instructor's and if the students were consistent among themselves and, second, to determine if any of the site factors used in LESA were especially difficult to analyze. Five sites in Hamilton County, Indiana, were selected for analysis. (...) --- paper_title: An Integrated Expert Geographical Information System for Soil Suitability and Soil Evaluation paper_content: An integrated Expert Geographical Information System (EXGIS) is presented and applied for the evaluation of the suitability of soil and climatic conditions of an area of southern Greece for five crops. EXGIS is an integration of an Expert System shell, designed for the manipulation of knowledge concerning soil suitability for agricultural uses, with the commercial GIS package PC ARC/INFO. The work was carried out for the purposes of a research program concerning the development of a Geographical Information System (GIS) for the management and evaluation of natural resources of the Pinios River basin, located in West Peloponnese, in South-Western Greece. Both the FAO system for soil evaluation and the local experience and knowledge of soil and climatic conditions were combined for the formulation of the rules of the knowledge base of EXGIS. The shell of the Expert System communicates with the commercial GIS PC ARC/INFO, under a common operating environment. --- paper_title: A Theoretical Framework for Land Evaluation paper_content: Abstract Land evaluation is the process of predicting the use potential of land on the basis of its attributes. A variety of analytical models can be used in these predictions, ranging from qualitative to quantitative, functional to mechanistic, and specific to general. This paper classifies land evaluation models by how they take time and space into account, and whether they use land qualities as an intermediate between land characteristics and land suitability. Temporally, models can be of a static resource base and static land suitability, a dynamic resource base but static land suitability, or both a dynamic resource base and dynamic land suitability. Spatially, land evaluation models can be of a single area with no interaction between areas, with static inter-area effects, or dynamic inter-area effects. In the most complex case, land suitabilities for several land uses are interdependent. --- paper_title: Fuzzy classification methods for determining land suitability from soil profile observations and topography paper_content: SUMMARY ::: Because conventional Boolean retrieval of soil survey data and logical models for assessing land suitability treat both spatial units and attribute value ranges as exactly specifiable quantities, they ignore the continuous nature of soil and landscape variation and uncertainties in measurement which can result in the misclassification of sites that just fail to match strictly defined requirements. This paper uses fuzzy classification to determine land suitability from (i) multivariate point observations of soil attributes, (ii) topographically controlled site drainage conditions, and (iii) minimum contiguous areas, and compares the results obtained with conventional Boolean methods. The methods are illustrated using data from the Alberta Agricultural Department experimental farm at Lacombe in Alberta, Canada.
Data on site elevation and soil chemical and physical properties measured at 154 soil profiles were interpolated by ordinary block kriging to 15 m × 15 m cells on a 50 × 50 grid. The soil property data for each cell were classified by Boolean and fuzzy methods. The digital elevation model created by interpolating the elevation data was used to determine the surface drainage network and map it in terms of the numbers of cells draining through each cell on the grid. This map was reclassified to yield Boolean and fuzzy maps of surface wetness which were then intersected with the soil profile classes. The resulting classification maps were examined for contiguity to locate areas where a block of minimum size (45m × 45m) could be located successfully. In this study Boolean methods reject larger numbers of cells than fuzzy classification, and select cells that are insufficiently contiguous to meet the aims of the land classification. Fuzzy methods produce contiguous areas and reject less information at all stages of the analyses than Boolean methods. They are much better than Boolean methods for classification of continuous variation, such as the results of the drainage analysis. --- paper_title: The DSSAT cropping system model paper_content: Abstract The decision support system for agrotechnology transfer (DSSAT) has been in use for the last 15 years by researchers worldwide. This package incorporates models of 16 different crops with software that facilitates the evaluation and application of the crop models for different purposes. Over the last few years, it has become increasingly difficult to maintain the DSSAT crop models, partly due to fact that there were different sets of computer code for different crops with little attention to software design at the level of crop models themselves. Thus, the DSSAT crop models have been re-designed and programmed to facilitate more efficient incorporation of new scientific advances, applications, documentation and maintenance. The basis for the new DSSAT cropping system model (CSM) design is a modular structure in which components separate along scientific discipline lines and are structured to allow easy replacement or addition of modules. It has one Soil module, a Crop Template module which can simulate different crops by defining species input files, an interface to add individual crop models if they have the same design and interface, a Weather module, and a module for dealing with competition for light and water among the soil, plants, and atmosphere. It is also designed for incorporation into various application packages, ranging from those that help researchers adapt and test the CSM to those that operate the DSSAT–CSM to simulate production over time and space for different purposes. In this paper, we describe this new DSSAT–CSM design as well as approaches used to model the primary scientific components (soil, crop, weather, and management). In addition, the paper describes data requirements and methods used for model evaluation. We provide an overview of the hundreds of published studies in which the DSSAT crop models have been used for various applications. The benefits of the new, re-designed DSSAT–CSM will provide considerable opportunities to its developers and others in the scientific community for greater cooperation in interdisciplinary research and in the application of knowledge to solve problems at field, farm, and higher levels. 
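The Boolean-versus-fuzzy land evaluation theme running through the abstracts above (sharp FAO-style suitability classes versus graded membership) can be illustrated with a few lines of code. The sketch below is schematic only: the choice of a topsoil pH and a soil depth criterion, the breakpoints of the membership functions and the use of the minimum operator to combine them are assumptions made for the example, not values from the cited studies.

```python
def trapezoidal(x, a, b, c, d):
    """Fuzzy membership in [0, 1]: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def ramp_up(x, a, b):
    """An 'at least' criterion: 0 below a, 1 above b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def boolean_suitable(ph, depth_cm):
    # Sharp thresholds: a site just outside a limit is rejected outright.
    return 5.5 <= ph <= 7.5 and depth_cm >= 50

def fuzzy_suitability(ph, depth_cm):
    # Joint suitability as the minimum (fuzzy AND) of the per-criterion grades.
    return min(trapezoidal(ph, 5.0, 5.5, 7.5, 8.0), ramp_up(depth_cm, 30, 50))

# A site that narrowly fails the Boolean test still receives a graded score.
print(boolean_suitable(5.4, 60))              # False
print(round(fuzzy_suitability(5.4, 60), 2))   # 0.8
```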
--- paper_title: Integrating geographical information systems and multiple criteria decision-making methods paper_content: Abstract Many spatial decision-making problems, such as site selection or land use allocation require the decision-maker to consider the impacts of choice-alternatives along multiple dimensions in order to choose the best alternative. The decision-making process, involving policy priorities, trade-offs, and uncertainties, can be aided by Multiple Criteria Decision making (MCDM) methods. This paper presents a framework for integrating geographical information systems (GIS) and MCDM methods. In this framework the MCDM methods are classified and matched with choice heuristics used by the decision-makers in the presence of competing alternatives and multiple evaluation criteria. Two strategies for integrating GIS with MCDM are proposed. The first strategy suggests linking GIS and MCDM techniques using a file exchange mechanism. The second strategy suggests integrating GIS and MCDM functions using a common database. The paper presents the implementation of the first strategy using PC-ARC/INFO, a file exchange ... --- paper_title: Land suitability assessment in the Namoi Valley of Australia, using a continuous model paper_content: In an agricultural context, land evaluation is assessment for a specified kind of land utilisation. The final result of agricultural evaluation is a map, which partitions the landscapes into suitable and unsuitable areas for a particular land-use of interest. However, this approach may not represent the continuity of land. Land suitability could be better expressed by a fuzzy approach. In this paper a fuzzy methodology is used to evaluate land suitability in the Edgeroi district for various crops including barley, dryland cotton, oats, pasture, soybean, sorghum, sunflower, and wheat. This is achieved using a membership function to derive a land-suitability membership score ranging from non-suitable (i.e. 0) to suitable (i.e. 1). We express this as continuous land suitability maps using punctual kriging. An expression for overall land suitability (i.e. its versatility) and its capacity with respect to suitability to particular rotations is introduced to highlight the most productive units of soil. --- paper_title: GIS and Multicriteria Decision Analysis paper_content: PRELIMINARIES. Geographical Data, Information, and Decision Making. Introduction to GIS. Introduction to Multicriteria Decision Analysis. SPATIAL MULTICRITERIA DECISION ANALYSIS. Evaluation Criteria. Decision Alternatives and Constraints. Criterion Weighing. Decision Rules. Sensitivity Analysis. MULTICRITERIA-SPATIAL DECISION SUPPORT SYSTEMS. Spatial Decision Support Systems. MC-SDSS: Case Studies. Glossary. Selected Bibliography. Indexes. --- paper_title: Fuzzy mathematical methods for soil survey and land evaluation paper_content: SUMMARY ::: The rigid-data model consisting of discrete, sharply bounded internally uniform entities that is used in hierarchical and relational databases of soil profiles, choropleth soil maps and land evaluation classifications ignores important aspects of reality caused by internal inhomogeneity, short-range spatial variation, measurement error, complexity and imprecision. Considerable loss of information can occur when data that have been classified according to this model are retrieved or combined using the methods of simple Boolean algebra available in most soil and geographical information systems. 
Fuzzy set theory, which is a generalization of Boolean algebra to situations where data are modelled by entities whose attributes have zones of gradual transition, rather than sharp boundaries, offers a useful alternative to existing methodology. The basic principles of fuzzy sets, operations on fuzzy sets and the derivation of membership functions according to the Semantic Import Model are explained and illustrated with data from case studies in Venezuela and Kenya. --- paper_title: ILUDSS: a knowledge-based spatial decision support system for strategic land-use planning paper_content: This paper discusses the design and implementation of the Islay Land Use Decision Support System (ILUDSS), a knowledge-based spatial decision support system for strategic land-use planning in a rural area. ILUDSS is designed to support planners in assessing land-use potential for different types of land-use for the development of the island of Islay, off the west coast of Scotland. The system adopts knowledge-based techniques in its design, and incorporates analytical, rule-based and spatial modelling capabilities. The main functions of ILUDSS include query, formulation of land-use models for assessing land-use potential, and evaluation of the land-use models through automated integration of the database, the rule bases, and different types of models. ILUDSS can evaluate land-use potential according to planners' preferences and assessments relating to the various criteria and related evaluation factors, such as physical suitability, proximity to desirable and undesirable land features, and the required minimum area of each land parcel. --- paper_title: Integrating multi-criteria evaluation with geographical information systems paper_content: Abstract Geographical information systems (GIS) provide the decision-maker with a powerful set of tools for the manipulation and analysis of spatial information. The functionality of GIS is, however, limited to certain deterministic analyses in key application areas such as spatial search. The integration of multi-criteria evaluation (MCE) techniques with GIS is forwarded as providing the user with the means to evaluate various alternatives on the basis of multiple and conflicting criteria and objectives. An example application based on the search for suitable sites for the disposal of radioactive waste in the UK using the Arc/Info GIS is included. The potential use of a combined GIS-MCE approach in the development of spatial decision support systems is considered. --- paper_title: A parameterized region-growing programme for site allocation on raster suitability maps paper_content: In assigning suitability scores to individual cells suitability maps do not solve the question of optimally locating regions of a particular size, shape and orientation. This paper describes a parameterized region-growing (PRG) programme for locating sites with particular spatial characteristics on raster suitability maps. PRG is an heuristic which trades off underlying cell suitability and region suitability to locate near optimal regions. The size, boundary configuration, elongation and orientation of an ideal shape are specified by a set of parameters which control a shape growing programme. Two simulations show how parameterized region-growing can locate wildlife reserves with different spatial characteristics. 
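A recurring operation in the GIS and multi-criteria evaluation references above is the weighted combination of standardized criterion layers, gated by Boolean constraints. The toy raster below shows only that arithmetic; the criterion scores, the weights (which in practice might come from pairwise comparisons or expert judgement) and the constraint mask are all invented for the illustration.

```python
# Toy 2 x 3 raster layers, criterion scores already standardized to [0, 1].
slope_suitability = [[0.9, 0.7, 0.2],
                     [0.8, 0.4, 0.1]]
road_proximity    = [[0.3, 0.6, 0.9],
                     [0.5, 0.8, 1.0]]
constraint_mask   = [[1, 1, 0],      # 0 = excluded (e.g., a protected area)
                     [1, 1, 1]]

weights = {"slope": 0.6, "roads": 0.4}   # hypothetical criterion weights

def weighted_overlay(row, col):
    """Weighted linear combination of criteria, gated by the Boolean constraint."""
    score = (weights["slope"] * slope_suitability[row][col]
             + weights["roads"] * road_proximity[row][col])
    return score * constraint_mask[row][col]

suitability = [[round(weighted_overlay(r, c), 2) for c in range(3)] for r in range(2)]
print(suitability)   # [[0.66, 0.66, 0.0], [0.68, 0.56, 0.46]]
```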
--- paper_title: Fuzziness in Geographical Information Systems: contributions from the analytic hierarchy process† paper_content: Abstract Recent developments in geographical information systems have drawn upon concepts of fuzzy set theory and multi-criteria methodology. In this paper we argue that there is a method, Saaty’s Analytic Hierarchy Process (AHP), that is compatible with both these research directions. The contributions of the AHP are highlighted in the light of recent developments in GIS, with particular attention to the concept of fuzzy set theory. An example of a GIS application is provided to show how the AHP can deal operationally with fuzziness, factor diversity and complexity in problems of land evaluation involving the location of a public facility. --- paper_title: Rural landscape and economic results of the farm; a multi-objective approach paper_content: The protection and reorganisation of rural landscape constitutes one of the main objectives in European Union agro-environmental policy. Since 1985, numerous measures have been approved with the aim of financing landscape improvement and farm reorganisation. These measures were more clearly defined in 1992, when the timing and content of landscape measures were set in the wider context of agro-environmental policy. European Union intervention was motivated by the growing imbalance between the public demand for higher landscape quality and the land transformation carried out by agriculture. On one hand, the societal and cultural changes which have occurred in the last few decades have produced a rise in demand for green (rural) areas for recreation. On the other hand, the spread of labour-saving technology in agriculture has led to an increasing simplification of the landscape and to a considerable reduction in rural landscape quality. This phenomenon can be attributed to the singular economic characteristics of landscape. In many respects, landscape represents a positive externality of agro-forest activity, which assumes the role of a public good. This therefore necessitates corrective action by the public policy maker who will have to use various means to increase the remuneration of those products which generate positive externalities. Public intervention, however, runs the risk of being transformed into welfare aid. In order to guarantee that it is used to remunerate a real service undertaken by the farmer for the benefit of the whole community, the contribution must be commensurate with the benefits produced. The objectives of this study are, first, to analyse which elements of rural landscape contribute to the aesthetic value of landscape and, second, to estimate the trade-offs between economic results of the farm and landscape quality. An application in the Venice Lagoon Basin Region allowed for the identification and quantification of compromise solutions between the landscape objective and the farm profits. These results can be used to assess the economic consequences of improving landscape quality and the suitability for this purpose of European Union subsidies. --- paper_title: Multicriteria Evaluation of Land-Reallotment Plans: A Case Study paper_content: In this paper an ex post evaluation of a land-reallotment plan carried out in the Netherlands during the 1960s and 1970s is presented. Given the wide range of effects taken into consideration, a multicriteria approach is adopted. 
Because of the 'soft' nature of many elements involved, special attention is paid to multicriteria methods dealing with qualitative priorities and plan impacts. Outcomes for methods using qualitative and quantitative data are compared. --- paper_title: Multicriteria evaluation methods in renewable resource management: integrated water management under drought conditions paper_content: The limits inherent in conventional decision theory methodologies and the necessity to analyse conflicts between policy objectives have led to many calls for more appropriate analytical tools for strategic evaluation. As such multicriteria evaluation does not itself provide a unique criterion for choice; rather it helps to frame the problem of arriving at a political compromise. This paper, as a first step, deals with the role of multicriteria evaluation methods in the framework of renewable natural resource management. As a second step, the possibility is studied of using multicriteria evaluation methods to tackle problems of integrated water management under drought conditions. We take water management under drought conditions into account because this is an important issue in many southern countries. The term “integrated water management” is used because it is evident that the problems underlying water management can be dealt with only if all the conflicting activities and uses that affect the resource are taken into account. A real-world example in the area of the city of Palermo (western Sicily) is also considered. From this case study, which is a part of a larger project that has been commissioned by the Sicily region, a number of useful lessons can be learned for comparing alternative strategies for the management of a water system under drought conditions. --- paper_title: Mixed-Data Multicriteria Evaluation for Regional Planning: A Systematic Approach to the Decisionmaking Process paper_content: In this paper multicriteria evaluation is dealt with as a tool for the public decisionmaking process. In the first part of the paper the systematic approach to multicriteria evaluation is presented, and the consecutive steps of the evaluation process are outlined. This is followed in the second part by the application of mixed-data multicriteria evaluation to the empirical problem of resource allocation in Poland. Mixed-data multicriteria evaluation is presented as a decision-support tool and as a means of facilitating a bargaining process between the involved parties. --- paper_title: A multiple criteria decision-making approach to GIS-based land suitability evaluation paper_content: Abstract Land suitability evaluation in a raster GIS environment is conceptualized as a multiple-criteria decision-making (MCDM) problem. A combination of MCDM techniques selected for implementing the methodology included value and priority assessment techniques for scaling the interval and ordinal data respectively, and compromise programming (CP) to aggregate the unidimensional evaluations. The contribution of the proposed methodology to handle problems of scaling and dependence that often affect expert-based suitability analyses is discussed. A case-study of habitat evaluation for the endangered Mount Graham red squirrel is presented. The multiple-criteria models resulting from the CP analysis of an expert's perception of the habitat preference structure of the red squirrel are compared with data of actual habitat use. 
The predictive power of the models is good and sensitivity analysis based on the distance-metric parameter p of CP reveals some interesting differences between alternative strategies for... --- paper_title: Integration of GIS-Based Suitability Analysis and Multicriteria Evaluation in a Spatial Decision Support System for Route Selection paper_content: Land suitability mapping techniques and geographic information systems (GIS) have been used in the last decade to assist planners in route selection problems. These techniques, though robust in translating physical constraints into feasible alternatives for route location, are weak in incorporating the decisionmaker's preferences, and, hence, are of limited use for decision support. The decisionmaking process that follows a route location study can be supported by multicriteria evaluation techniques that incorporate decisionmakers' preferences. This paper presents an approach to integrating a GIS-based land suitability analysis and multicriteria evaluation in a spatial decision support system for route selection. The design and implementation of the system are presented together with an example of system application to a study of the route selection for a water transmission supply line. --- paper_title: A GIS-based approach to multiple criteria group decision-making paper_content: Abstract The multiple criteria group decision-making problem involves a set of feasible alternatives that are evaluated on the basis of multiple, conflicting and noncommensurate criteria by a group of individuals. This paper is concerned with developing a GIS-based approach to group decision-making under multiple criteria. The approach integrates, within a raster GIS environment, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and Borda's choice rule. TOPSIS orders the feasible alternatives according to their closeness to the ideal solution. It is used to derive the individual preference orderings. Borda's method combines the individual preferences into a group preference or consensus/compromise ranking. The approach is implemented within the IDRISI GIS and illustrated on a hypothetical decision situation. --- paper_title: Multi-criteria and multi-objective decision making for land allocation using GIS paper_content: Geographic Information Systems (GIS) are designed for the acquisition, management, analysis and display of georeferenced data. As such they have clear implications for informing the spatial decision making process. Subsequently, recent developments in GIS software and in the conceptual basis for decision making have led to dramatic improvements in the capabilities of GIS for resource allocation. These developments are reviewed through an examination of procedures for Multi-Criteria and Multi-Objective land allocation in GIS. Special emphasis is given to the problems of incorporating subjective expertise in the context of participatory decision making; the expression of uncertainty in establishing the relationship between evidence and the decision to be made; procedures for the aggregation of evidence in the presence of varying degrees of tradeoff between criteria; and procedures for conflict resolution and conflict avoidance in cases of multiple objective decision problems. --- paper_title: RASTER PROCEDURES FOR MULTI-CRITERIA/MULTI-OBJECTIVE DECISIONS paper_content: Decisions about the allocation of land typically involve the evaluation of multiple criteria according to several, often conflicting, objective. 
With the advent of GIS, we now have the opportunity for a more explicitly reasoned environmental decision making process. However, GIS has been slow to develop decision support tools, more typically relying on procedures outside the GIS software. In this paper the issues of multi-criteria/multi-objective decision making are discussed, along with an exploration of a new set of decision support tools appropriate for the large data-handling needs of raster GIS. A case study is used to illustrate these tools as developed for the IDRISI geographic analysis software system. --- paper_title: An Application of Linear Programming and Geographic Information Systems: Cropland Allocation in Antigua paper_content: This paper is focused on the application of linear programming (LP) in combination with a geographic information system (GIS) in planning agricultural land-use strategies. One of the essential inputs for planning any agricultural land-use strategy is a knowledge of the natural resources. This is even more critical in small countries such as those in the Eastern Caribbean, where land-area limitations dictate a greater need for careful assessment and management of these resources. The first step of the proposed methodology is to obtain an assessment of the natural resources available to agriculture. The GIS is used to delineate land-use conflicts and provide reliable information on the natural-resource database. This is followed by combining the data on natural resources with other quantifiable information on available labour, market forecasts, technology, and cost information in order to estimate the economic potential of the agricultural sector. LP is used in this step. Finally, the GIS is applied again to map the crop and land-allocation patterns generated by the LP model. The results are concrete suggestions for resource allocation, farm-size mix, policy application, and implementation projects. --- paper_title: A scenario exploration of strategic land use options for the Loess Plateau in northern China paper_content: Soil-loss, food insecurity, population pressure and low income of the rural population are interrelated problems in the Loess Plateau of northern China, and result in a spiral of unsustainability. This paper examines Ansai County as a case study to explore strategic land use options that may meet well-defined goals of regional development, using a systems approach that integrated the fragmented and empirical information on the biophysical, agronomic and socio-economic conditions. We used production ecological principles, simulation modeling and multiple goal linear programming as integrative tools. Four scenarios were explored, representing major directions of agricultural development in the region and views of national and local stakeholders, farmers and environmentalists. The results indicate that soil conservation, food self-sufficiency and income for the rural population can be substantially improved by efficient resource use and appropriate inputs. In the long-term, terracing and use of crop rotations with alfalfa may be the best options for soil conservation. The large rural population and the lack of off-farm employment opportunities could be the most important factors affecting rural development in Ansai. This study contributes to the understanding of regional problems and agricultural development potentials, and shows agro-technical possibilities for alleviating the unsustainability problems in this fragile and poorly endowed region.
To promote actual development towards the identified options, on-farm innovation and appropriate policy measures are needed. The explored land use options enable a much more targeted innovation and development of policies. (C) 2003 Elsevier Ltd. All rights reserved. --- paper_title: Goal programming in a planning problem paper_content: The objective of this paper is to apply one of the techniques of multiobjective programming (goal programming) in a brazilian forest problem, in a case study accomplished in the Santa Candida Farm, Parana, Brazil. The areas of this farm can be managed for timber (pine and native species), harvesting of erva-mate leaves, pasture, and tourism. There is also a concern of the farm managers with increasing the diversity of flora and fauna, increasing environmental protection conditions and maintaining employees in the farm. Goal programming was used to develop a project of land allocation, in which all the goals would be reached as closest as possible of the ideal, in a way to attend all the operational restrictions considered. In goal programming, the concept of optimum solution of LP problems is substituted by a satisfactory solution (nondominated). Several solutions can be obtained, and the best solution will depend on the priority associated to each goal. --- paper_title: Using Linear Integer Programming for Multi-Site Land-Use Allocation paper_content: Research in the area of spatial decision support (SDS) and resource allocation has recently generated increased attention for integrating optimization techniques with GIS. In this paper we address the use of spatial optimization techniques for solving multi-site land-use allocation (MLUA) problems, where MLUA refers to the optimal allocation of multiple sites of different land uses to an area. We solve an MLUA problem using four different integer programs (IP), of which three are linear integer programs. The IPs are formulated for a raster-based GIS environment and are designed to minimize development costs and to maximize compactness of the allocated land use. The preference for either minimizing costs or maximizing compactness has been made operational by including a weighting factor. The IPs are evaluated on their speed and their efficacy for handling large databases. All four IPs yielded the optimal solution within a reasonable amount of time, for an area of 8 x 8 cells. The fastest model was successfully applied to a case study involving an area of 30 x 30 cells. The case study demonstrates the practical use of linear IPs for spatial decision support issues. --- paper_title: SIRO-PLAN and LUPLAN: An Australian Approach to Land-Use Planning. 2. The LUPLAN Land-Use Planning Package paper_content: In this paper are described various approaches to implementing the plans evaluation steps of the SIRO-PLAN land-use planning method, including linear and goal programming and the LUPLAN simplification of the linear programming approach. --- paper_title: A farm multicriteria analysis model for the economic and environmental evaluation of agricultural land use paper_content: The paper presents a model that produces and evaluates alternative farming systems. The economic and environmental viewpoints are considered within different hypotheses of agricultural policy and farmer’s decision-making process. 
The model is composed of three parts: a) a multiobjective programming model for the simulation at farm level of policies and farmer’s decision-making; b) a model for the environmental impact assessment of the results of the previous simulations; c) a multi-attribute evaluation of the simulations from both the economic and environmental point of view. The model has been applied in the area of the Venice Lagoon Basin (VLB), located in northern Italy. The economic-environmental evaluation demonstrates that there is clear synergy between the tendency of the farmer to minimise the risk of the cropping systems and the necessity to reduce the environmental impact of farming techniques. This is particularly useful for promoting low-impact agricultural practices in the situations dominated by traditional production methods whose environmental implications appear very variable. --- paper_title: Integration of linear programming and GIS for land-use modelling paper_content: Abstract Geographical Information Systems (GIS) are becoming basic tools for a wide variety of earth science and land-use applications. This article presents linear programming (LP) as a promising tool for spatial modelling within a GIS. Although LP is not properly a spatial technique, it may be used to optimize spatial distributions or to guide the integration of variables. An example of the use of LP in land-use planning is described, with minimizing rural unemployment as the main goal. Technical, financial and ecological constraints are established to show the influence of several limitations on achieving the optimal solution. LP makes it possible to achieve optimal land-use, where the objective is maximized and the constraints respected. LP can also be used to simulate different planning scenarios, by modifying both the objective function coefficients and the constraints. The integration of LP and GIS is presented in two phases: (i) acquisition of attribute data for the LP model, and (ii) modelling an... --- paper_title: Urban Simulation Using Principal Components Analysis and Cellular Automata for Land-Use Planning paper_content: Cellular automata, principal components analysis (PCA) and geographic information system techniques can be integrated to simulate alternative urban growth patterns based on a large set of environmental constraints that could be considered in land-use planning. The simulation of actual cities usually involves multicriteria evaluation in tackling the problems of complex spatial factors. Spatial factors often exhibit a high degree of correlation which is considered to be an undesirable property for multicriteria evaluation. It is difficult to determine the weights when many spatial variables are involved. This study uses PCA to remove data redundancy among a large set of spatial variables and determine the ideal point for land development. The simulation is based on transition rules that are related to the neighborhood function and similarity between cells and the ideal point. PCA helps to deal with a large data set of spatial variables for the implementation of the cellular automata model. The model can deal with the complicated resources conditions and environmental restrictions that are often encountered in land-use planning. The model can be further developed as an extended function of GIS to be a useful tool for urban planning and environmental management. 
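The linear, integer and goal programming references above cast land allocation as constrained optimization of areas under resource limits. The following sketch states a deliberately small version of such a problem with scipy.optimize.linprog (assuming SciPy is available); the two crops, their gross margins, labour coefficients and resource limits are hypothetical numbers chosen for the example, not data from the cited studies.

```python
from scipy.optimize import linprog

# Decision variables: hectares allocated to crop A and crop B (hypothetical).
gross_margin = [400, 300]          # profit per hectare
c = [-g for g in gross_margin]     # linprog minimizes, so negate to maximize

A_ub = [[1, 1],                    # total land:   xA + xB <= 100 ha
        [3, 1]]                    # labour days: 3xA + xB <= 180
b_ub = [100, 180]
bounds = [(0, None), (0, None)]    # areas cannot be negative

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x)                    # optimal hectares, roughly [40, 60] here
print(-result.fun)                 # maximized total gross margin, 34000 here
```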
--- paper_title: Using simulated annealing for resource allocation paper_content: Many resource allocation issues, such as land use- or irrigation planning, require input from extensive spatial databases and involve complex decisionmaking problems. Spatial decision support systems (SDSS) are designed to make these issues more transparent and to support the design and evaluation of resource allocation alternatives. Recent developments in this field focus on the design of allocation plans that utilise mathematical optimisation techniques. These techniques, often referred to as multi-criteria decision-making (MCDM) techniques, run into numerical problems when faced with the high dimensionality encountered in spatial applications. In this paper we demonstrate how simulated annealing, a heuristic algorithm, can be used to solve high-dimensional non-linear optimisation problems for multi-site land use allocation (MLUA) problems. The optimisation model both minimises development costs and maximises spatial compactness of the land use. Compactness is achieved by adding a non-linear neighbourhood objective to the objective function. The method is successfully applied to a case study in Galicia, Spain, using an SDSS for supporting the restoration of a former mining area with new land use. --- paper_title: Multi-actor-based land use modelling: spatial planning using agents paper_content: This paper describes a spatial planning model combining a multi-agent simulation (MAS) approach with cellular automata (CA). The model includes individual actor behaviour according to a bottom-up modelling concept. Spatial planning intentions and related decision making of planning actors is defined by agents. CA is used to infer the knowledge needed by the agents to make decisions about the future of a spatial organisation in a certain area. The innovative item of this approach offers a framework for modelling complex land use planning process by extending CA approach with MAS. The modelling approach is demonstrated by the implementation of a pilot model using JAVA and the SWARM agent modelling toolkit. The pilot model itself is applied to a study area near the city of Nijmegen, The Netherlands. --- paper_title: A design and application of a multi-agent system for simulation of multi-actor spatial planning. paper_content: Multi-agent Systems (MAS) offer a conceptual approach to include multi-actor decision making into models of land use change. The main goal is to explore the use of MAS to simulate spatial scenarios based on modelling multi-actor decision-making within a spatial planning process. We demonstrate MAS that consists of agents representing organizations and interest groups involved in an urban allocation problem during a land use planning process. The multi-actor based decision-making is modelled by generating beliefs and preferences of actors about the location of and relation between spatial objects. This allows each agent to confront these beliefs and preferences with it's own desires and with that of other agents. The MAS loosely resembles belief, desire and intentions architecture. Based on a case study for a hypothetical land use planning situation in a study area in the Netherlands we discuss the potential and limitations of the MAS to build models that enable spatial planners to include the 'actor factor' in their analysis and design of spatial scenarios. 
In addition, our experiments revealed the need for further research on the representation of spatial objects and reasoning, learning and communication about allocation problems using MAS. --- paper_title: SimLand: a prototype to simulate land conversion through the integrated GIS and CA with AHP-derived transition rules paper_content: This paper presents a prototype of a simulation model based on cellular automata (CA), and multicriteria evaluation (MCE) and integrated with GIS. Specifically, a method, analytical hierarchy process (AHP), of MCE is used here to derive behaviour-oriented rules of transition in CA. A 'tight' integration strategy is adopted, which means that the modules of MCE and CA are written in the C programming language and built within ARC/INFO GIS. Designed to run on a Unix workstation, SimLand fully utilizes the graphical user interface (GUI), which allows the model to be driven by menus and automates the simulation of land conversion in the urban-rural fringe. The combination of three elements, GIS, CA, and MCE, has several advantages: visualization of decision-making, easier access to spatial information, and the more realistic definition of transition rules in CA. --- paper_title: Multi-agent systems for the simulation of land-use and land-cover change: A review paper_content: This paper presents an overview of multi-agent system models of land-use/cover change (MAS/LUCC models). This special class of LUCC models combines a cellular landscape model with agent-based representations of decisionmaking, integrating the two components through specification of interdependencies and feedbacks between agents and their environment. The authors review alternative LUCC modeling techniques and discuss the ways in which MAS/LUCC models may overcome some important limitations of existing techniques. We briefly review ongoing MAS/LUCC modeling efforts in four research areas. We discuss the potential strengths of MAS/LUCC models and suggest that these strengths guide researchers in assessing the appropriate choice of model for their particular research question. We find that MAS/LUCC models are particularly well suited for representing complex spatial interactions under heterogeneous conditions and for modeling decentralized, autonomous decision making. We discuss a range of possible roles for MAS/LUCC models, from abstract models designed to derive stylized hypotheses to empirically detailed simulation models appropriate for scenario and policy analysis. We also discuss the challenge of validation and verification for MAS/LUCC models. Finally, we outline important challenges and open research questions in this new field. We conclude that, while significant challenges exist, these models offer a promising new tool for researchers whose goal is to create fine-scale models of LUCC phenomena that focus on human-environment interactions. --- paper_title: Multi-scale analysis of a household level agent-based model of landcover change. paper_content: Scale issues have significant implications for the analysis of social and biophysical processes in complex systems. These same scale implications are likewise considerations for the design and application of models of landcover change. Scale issues have wide-ranging effects from the representativeness of data used to validate models to aggregation errors introduced in the model structure. This paper presents an analysis of how scale issues affect an agent-based model (ABM) of landcover change developed for a research area in the Midwest, USA.
The research presented here explores how scale factors affect the design and application of agent-based landcover change models. The ABM is composed of a series of heterogeneous agents who make landuse decisions on a portfolio of cells in a raster-based programming environment. The model is calibrated using measures of fit derived from both spatial composition and spatial pattern metrics from multi-temporal landcover data interpreted from historical aerial photography. A model calibration process is used to find a best-fit set of parameter weights assigned to agents' preferences for different landuses (agriculture, pasture, timber production, and non-harvested forest). Previous research using this model has shown how a heterogeneous set of agents with differing preferences for a portfolio of landuses produces the best fit to landcover changes observed in the study area. The scale dependence of the model is explored by varying the resolution of the input data used to calibrate the model (observed landcover), ancillary datasets that affect land suitability (topography), and the resolution of the model landscape on which agents make decisions. To explore the impact of these scale relationships the model is run with input datasets constructed at the following spatial resolutions: 60, 90, 120, 150, 240, 300 and 480 m. The results show that the distribution of landuse-preference weights differs as a function of scale. In addition, with the gradient descent model fitting method used in this analysis the model was not able to converge to an acceptable fit at the 300 and 480 m spatial resolutions. This is a product of the ratio of the input cell resolution to the average parcel size in the landscape. This paper uses these findings to identify scale considerations in the design, development, validation and application of ABMs of landcover change. --- paper_title: On the use of genetic algorithms to solve location problems paper_content: This paper seeks to evaluate the performance of genetic algorithms (GA) as an alternative procedure for generating optimal or near-optimal solutions for location problems. The specific problems considered are the uncapacitated and capacitated fixed charge problems, the maximum covering problem, and competitive location models. We compare the performance of the GA-based heuristics developed against well-known heuristics from the literature, using a test base of publicly available data sets. --- paper_title: Integrating multi-criteria evaluation with geographical information systems paper_content: Abstract Geographical information systems (GIS) provide the decision-maker with a powerful set of tools for the manipulation and analysis of spatial information. The functionality of GIS is, however, limited to certain deterministic analyses in key application areas such as spatial search. The integration of multi-criteria evaluation (MCE) techniques with GIS is forwarded as providing the user with the means to evaluate various alternatives on the basis of multiple and conflicting criteria and objectives. An example application based on the search for suitable sites for the disposal of radioactive waste in the UK using the Arc/Info GIS is included. The potential use of a combined GIS-MCE approach in the development of spatial decision support systems is considered. 
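The GIS-MCE integration described in the last abstract above is most often realized as a weighted linear combination of normalized criterion layers with Boolean constraint masks. The following sketch illustrates only that generic recipe; the criteria, weights and mask are hypothetical stand-ins for real raster layers, and numpy is assumed to be available.

```python
# Minimal weighted-linear-combination (WLC) suitability sketch, a common way the
# GIS-MCE integration described above is realized. Criteria and weights are
# hypothetical; real applications would read raster layers from a GIS.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

# Criterion layers (already oriented so that larger = more suitable).
dist_to_roads = rng.random(shape)      # e.g. inverted, rescaled distance raster
slope_score   = rng.random(shape)      # e.g. rescaled inverse slope
geology_score = rng.random(shape)

criteria = np.stack([dist_to_roads, slope_score, geology_score])
weights  = np.array([0.5, 0.3, 0.2])   # should sum to 1 for a WLC

# Boolean constraint mask (e.g. exclude protected areas); random for the demo.
allowed = rng.random(shape) > 0.1

suitability = np.tensordot(weights, criteria, axes=1)  # weighted sum per cell
suitability[~allowed] = 0.0                            # constraints veto the score

print("best candidate cell:", np.unravel_index(suitability.argmax(), shape))
```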
--- paper_title: Using Linear Integer Programming for Multi-Site Land-Use Allocation paper_content: Research in the area of spatial decision support (SDS) and resource allocation has recently generated increased attention for integrating optimization techniques with GIS. In this paper we address the use of spatial optimization techniques for solving multi-site land-use allocation (MLUA) problems, where MLUA refers to the optimal allocation of multiple sites of different land uses to an area. We solve an MLUA problem using four different integer programs (IP), of which three are linear integer programs. The IPs are formulated for a raster-based GIS environment and are designed to minimize development costs and to maximize compactness of the allocated land use. The preference for either minimizing costs or maximizing compactness has been made operational by including a weighting factor. The IPs are evaluated on their speed and their efficacy for handling large databases. All four IPs yielded the optimal solution within a reasonable amount of time, for an area of 8 x 8 cells. The fastest model was successfully applied to a case study involving an area of 30 x 30 cells. The case study demonstrates the practical use of linear IPs for spatial decision support issues. --- paper_title: Using GIS and outranking multicriteria analysis for land-use suitability assessment paper_content: Land-use planners often make complex decisions within a short period of time when they must take into account sustainable development and economic competitiveness. A set of land-use suitability maps would be very useful in this respect. Ideally, these maps should incorporate complex criteria integrating several stakeholders' points of view. To illustrate the feasibility of this approach, a land suitability map for housing was realised for a small region of Switzerland. Geographical Information System technology was used to assess the criteria requested to define the suitability of land for housing. An example dealing with the evaluation of noise levels illustrates the initial steps of this procedure. Because the required criteria are heterogeneous and measured on various scales, an outranking multicriteria analysis method called ELECTRE-TRI was used. However, using it to assess the suitability of any point in a territory was impractical due to computational limitations. Therefore, a mathematical function ... --- paper_title: Multi-agent systems for the simulation of land-use and land-cover change: A review paper_content: This paper presents an overview of multi-agent system models of land-use/cover change (MAS/LUCC models). This special class of LUCC models combines a cellular landscape model with agent-based representations of decisionmaking, integrating the two components through specification of interdependencies and feedbacks between agents and their environment. The authors review alternative LUCC modeling techniques and discuss the ways in which MAS/LUCC models may overcome some important limitations of existing techniques. We briefly review ongoing MAS/LUCC modeling efforts in four research areas. We discuss the potential strengths of MAS/LUCC models and suggest that these strengths guide researchers in assessing the appropriate choice of model for their particular research question. We find that MAS/LUCC models are particularly well suited for representing complex spatial interactions under heterogeneous conditions and for modeling decentralized, autonomous decision making. 
We discuss a range of possible roles for MAS/LUCC models, from abstract models designed to derive stylized hypotheses to empirically detailed simulation models appropriate for scenario and policy analysis. We also discuss the challenge of validation and verification for MAS/LUCC models. Finally, we outline important challenges and open research questions in this new field. We conclude that, while significant challenges exist, these models offer a promising new tool for researchers whose goal is to create fine-scale models of LUCC phenomena that focus on human-environment interactions. --- paper_title: Designing compact and contiguous reserve networks with a hybrid heuristic algorithm paper_content: Conflicting opinions from environmental advocates and economic interests on the best strategy for management of public lands often leaves land managers in a difficult position. Since ecosystem sustainability is in the long-term interest of each group, the establishment of nature reserves could simultaneously address both views. To promote sustainability, fragmentation of existing natural habitats should be avoided, since it is commonly recognized as being disruptive to the species adapted to these habitats. Therefore, when designing an efficient nature reserve, the compactness and contiguity of the land reserved is an essential consideration. A new formulation of the reserve selection problem is presented that explicitly addresses these issues; specifically, the model minimizes a weighted combination of compactness and contiguity measures subject to constraints on the minimal representation of each habitat class. Motivated by the ongoing reserve efforts in the large and diverse Klamath-Siskiyou region of southwestern Oregon and northwestern California, common heuristic search techniques are implemented and results compared on various simulated test problems. From these findings a new heuristic is developed that reduces solution time and increases solution quality. When applied to the Klamath-Siskiyou region, results are promising. --- paper_title: Multicriterion planning of protected-area buffer zones: an application to Mexico’s Izta-Popo national park paper_content: This paper presents a methodology for the design of buffer zones around protected areas. Although the procedure is designed to be used by the protected-area managers and hence respond primarily to their objectives and concerns, it recognizes the influence of the wider socioeconomic and political environments in realizing those objectives. The methodology thus helps resource managers to (1) formulate their objectives, (2) anticipate how those objectives may conflict with ongoing or potential future changes in the wider environment within which resource management must be conducted, and (3) obtain as efficiently as possible the information necessary to (4) design and evaluate management options in view of such changes and conflicts. Multiobjective optimization is used both to formulate the managers’ goals pertaining to land-use change, and to anticipate potential land-use changes resulting under different socioeconomic scenarios. Goal programming is then employed to fashion minimum-conflict plans, which are subsequently used as inputs into a GIS procedure that helps managers design spatial land-use configurations attractive from landscape-ecological perspectives. 
The prescribed land-use changes common to these plans and/or configurations may be considered a “commitment set”, and the methodology thus employed an exemplification of a robustness philosophy. ---
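As a rough illustration of the simulated-annealing style of multi-site land-use allocation cited earlier in this reference list (development cost plus a compactness penalty), the toy sketch below anneals land-use labels on a small grid. All costs, the penalty weight and the cooling schedule are hypothetical and chosen only for demonstration.

```python
# Toy simulated-annealing sketch in the spirit of the multi-site land-use
# allocation work cited above: minimize development cost plus a penalty for
# non-compact (fragmented) allocations. Grid size, costs and the cooling
# schedule are all hypothetical.
import math
import random

random.seed(1)
N, LAND_USES = 12, 3
dev_cost = [[[random.random() for _ in range(LAND_USES)] for _ in range(N)] for _ in range(N)]
grid = [[random.randrange(LAND_USES) for _ in range(N)] for _ in range(N)]
COMPACTNESS_WEIGHT = 0.5

def energy(g):
    cost = sum(dev_cost[i][j][g[i][j]] for i in range(N) for j in range(N))
    # Count unlike 4-neighbour pairs: fewer boundaries = more compact patches.
    boundary = sum(g[i][j] != g[i][j + 1] for i in range(N) for j in range(N - 1))
    boundary += sum(g[i][j] != g[i + 1][j] for i in range(N - 1) for j in range(N))
    return cost + COMPACTNESS_WEIGHT * boundary

current = energy(grid)
T = 2.0
for step in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    old = grid[i][j]
    grid[i][j] = random.randrange(LAND_USES)
    candidate = energy(grid)
    if candidate > current and random.random() >= math.exp((current - candidate) / T):
        grid[i][j] = old              # reject the uphill move
    else:
        current = candidate           # accept (better, or a lucky uphill move)
    T *= 0.9997                       # geometric cooling
print("final objective:", round(current, 2))
```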
Title: A Review of Rural Land-Use Planning Models
Section 1: Definition of a Land-Use Planning Model
Description 1: Provide an overview of the concepts and phases involved in defining a land-use planning model, referencing key frameworks such as the FAO (1976).
Section 2: Land Evaluation
Description 2: Discuss the process of land evaluation, including historical evolution, various models and methods used, and their respective strengths and weaknesses.
Section 3: Land-Use Allocation
Description 3: Explore the concept of land-use allocation, different decision support systems (DSSs), and their integration with GIS.
Section 4: Expert Systems
Description 4: Examine the role of expert systems in land-use planning, their application areas, and examples of developed systems.
Section 5: Mathematical Models
Description 5: Detail the types of mathematical models used in land-use allocation, including multicriteria evaluation techniques, mathematical programming applications, and spatial simulation models.
Section 6: Multicriteria Evaluation
Description 6: Discuss multicriteria evaluation techniques, their applications in land planning, and the integration with GIS for generating suitability maps.
Section 7: Mathematical Programming
Description 7: Outline the principles of mathematical programming in land-use planning, including examples of optimization models and their objectives.
Section 8: Spatial Simulation Models
Description 8: Describe spatial simulation models, including algorithms like genetic algorithms and simulated annealing, and their applications in land-use optimization.
Section 9: Analysis of Land-Use Allocation Models
Description 9: Analyze the significant characteristics of various land-use allocation models, comparing their aims, required information, GIS integration, and group decision-making capabilities.
Section 10: Discussion
Description 10: Summarize the strengths and weaknesses of the different methods used in land evaluation and land-use allocation, providing a critical evaluation of each approach.
Section 11: Conclusions
Description 11: Conclude with an assessment of the most suitable land-evaluation and land-use allocation methods, considering the specific objectives and expected outcomes of rural land-use planning.
Incorporating prior knowledge in medical image segmentation: a survey
25
--- paper_title: A Fuzzy Locally Adaptive Bayesian Segmentation Approach for Volume Determination in PET paper_content: Accurate volume estimation in positron emission tomography (PET) is crucial for different oncology applications. The objective of our study was to develop a new fuzzy locally adaptive Bayesian (FLAB) segmentation for automatic lesion volume delineation. FLAB was compared with a threshold approach as well as the previously proposed fuzzy hidden Markov chains (FHMC) and the fuzzy C-Means (FCM) algorithms. The performance of the algorithms was assessed on acquired datasets of the IEC phantom, covering a range of spherical lesion sizes (10-37 mm), contrast ratios (4:1 and 8:1), noise levels (1, 2, and 5 min acquisitions), and voxel sizes (8 and 64 mm3). In addition, the performance of the FLAB model was assessed on realistic nonuniform and nonspherical volumes simulated from patient lesions. Results show that FLAB performs better than the other methodologies, particularly for smaller objects. The volume error was 5%-15% for the different sphere sizes (down to 13 mm), contrast and image qualities considered, with a high reproducibility (variation < 4%). By comparison, the thresholding results were greatly dependent on image contrast and noise, whereas FCM results were less dependent on noise but consistently failed to segment lesions < 2 cm. In addition, FLAB performed consistently better for lesions < 2 cm in comparison to the FHMC algorithm. Finally the FLAB model provided errors less than 10% for nonspherical lesions with inhomogeneous activity distributions. Future developments will concentrate on an extension of FLAB in order to allow the segmentation of separate activity distribution regions within the same functional volume as well as a robustness study with respect to different scanners and reconstruction algorithms. --- paper_title: Segmentation of medical images using adaptive region growing paper_content: Interaction increases flexibility of segmentation but it leads to undesirable behavior of an algorithm if knowledge being requested is inappropriate. In region growing, this is the case for defining the homogeneity criterion as its specification depends also on image formation properties that are not known to the user. We developed a region growing algorithm that learns its homogeneity criterion automatically from characteristics of the region to be segmented. The method is based on a model that describes homogeneity and simple shape properties of the region. Parameters of the homogeneity criterion are estimated from sample locations in the region. These locations are selected sequentially in a random walk starting at the seed point, and the homogeneity criterion is updated continuously. The method was tested for segmentation on test images and of structures in CT images. We found the method to work reliable if the model assumption on homogeneity and region characteristics are true. Furthermore, the model is simple but robust, thus allowing for a certain degree of deviation from model constraints and still delivering the expected segmentation result. This approach was extended to a fully automatic and complete segmentation method by using the pixels with the smallest gradient length in the not yet segmented image region as a seed point. --- paper_title: An Efficient Optimization Framework for Multi-Region Segmentation Based on Lagrangian Duality paper_content: We introduce a multi-region model for simultaneous segmentation of medical images. 
In contrast to many other models, geometric constraints such as inclusion and exclusion between the regions are enforced, which makes it possible to correctly segment different regions even if the intensity distributions are identical. We efficiently optimize the model using a combination of graph cuts and Lagrangian duality which is faster and more memory efficient than current state of the art. As the method is based on global optimization techniques, the resulting segmentations are independent of initialization. We apply our framework to the segmentation of the left and right ventricles, myocardium and the left ventricular papillary muscles in magnetic resonance imaging and to lung segmentation in full-body X-ray computed tomography. We evaluate our approach on a publicly available benchmark with competitive results. --- paper_title: Seeded region growing paper_content: We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, discuss briefly its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures. --- paper_title: Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations paper_content: A fast and flexible algorithm for computing watersheds in digital gray-scale images is introduced. A review of watersheds and related notions is first presented, and the major methods to determine watersheds are discussed. The algorithm is based on an immersion process analogy, in which the flooding of the water in the picture is efficiently simulated using a queue of pixels. It is described in detail and provided in a pseudo-C language. The accuracy of this algorithm is proven to be superior to that of the existing implementations, and it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images (and even to graphs) are straightforward. The algorithm is reported to be faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for magnetic resonance (MR) imagery and for digital elevation models. An example of a 3-D watershed is also provided. --- paper_title: A Bayes-Based Region-Growing Algorithm for Medical Image Segmentation paper_content: This paper discusses a new Bayesian-analysis-based region-growing algorithm for medical image segmentation that can robustly and effectively segment medical images. Specifically, the approach studies homogeneity criterion parameters in a local neighborhood. Using multi-slice Gaussian and anisotropic filters as a preprocessing step helps reduce an image's noise. The algorithm framework is tested on CT and MRI image segmentation, and experimental results show that the approach is reliable and efficient. --- paper_title: Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images paper_content: We present a novel method for the joint segmentation of anatomical and functional images.
Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we also propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust compared to the other PET-CT segmentation methods recently published in the literature, and also it is general in the sense of simultaneously segmenting multiple scans in real-time with high accuracy needed in routine clinical use. --- paper_title: Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration. paper_content: OBJECTIVES: To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. METHODS: Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. RESULTS: Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. CONCLUSIONS: Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use. --- paper_title: Watershed segmentation using prior shape and appearance knowledge paper_content: Watershed transformation is a common technique for image segmentation. However, its use for automatic medical image segmentation has been limited particularly due to oversegmentation and sensitivity to noise. Employing prior shape knowledge has demonstrated robust improvements to medical image segmentation algorithms.
We propose a novel method for enhancing watershed segmentation by utilizing prior shape and appearance knowledge. Our method iteratively aligns a shape histogram with the result of an improved k-means clustering algorithm of the watershed segments. Quantitative validation of magnetic resonance imaging segmentation results supports the robust nature of our method. --- paper_title: Measuring tortuosity of the intracerebral vasculature from MRA images paper_content: The clinical recognition of abnormal vascular tortuosity, or excessive bending, twisting, and winding, is important to the diagnosis of many diseases. Automated detection and quantitation of abnormal vascular tortuosity from three-dimensional (3-D) medical image data would, therefore, be of value. However, previous research has centered primarily upon two-dimensional (2-D) analysis of the special subset of vessels whose paths are normally close to straight. This report provides the first 3-D tortuosity analysis of clusters of vessels within the normally tortuous intracerebral circulation. We define three different clinical patterns of abnormal tortuosity. We extend into 3-D two tortuosity metrics previously reported as useful in analyzing 2-D images and describe a new metric that incorporates counts of minima of total curvature. We extract vessels from MRA data, map corresponding anatomical regions between sets of normal patients and patients with known pathology, and evaluate the three tortuosity metrics for ability to detect each type of abnormality within the region of interest. We conclude that the new tortuosity metric appears to be the most effective in detecting several types of abnormalities. However, one of the other metrics, based on a sum of curvature magnitudes, may be more effective in recognizing tightly coiled, "corkscrew" vessels associated with malignant tumors. --- paper_title: Multifeature Prostate Cancer Diagnosis and Gleason Grading of Histological Images paper_content: We present a study of image features for cancer diagnosis and Gleason grading of the histological images of prostate. In diagnosis, the tissue image is classified into the tumor and nontumor classes. In Gleason grading, which characterizes tumor aggressiveness, the image is classified as containing a low- or high-grade tumor. The image sets used in this paper consisted of 367 and 268 color images for the diagnosis and Gleason grading problems, respectively, and were captured from representative areas of hematoxylin and eosin-stained tissue retrieved from tissue microarray cores or whole sections. The primary contribution of this paper is to aggregate color, texture, and morphometric cues at the global and histological object levels for classification. Features representing different visual cues were combined in a supervised learning framework. We compared the performance of Gaussian, -nearest neighbor, and support vector machine classifiers together with the sequential forward feature selection algorithm. On diagnosis, using a five-fold cross-validation estimate, an accuracy of 96.7% was obtained. On Gleason grading, the achieved accuracy of classification into low- and high-grade classes was 81.0%. --- paper_title: Detection and Measurement of Fetal Anatomies from Ultrasound Images using a Constrained Probabilistic Boosting Tree paper_content: We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. 
This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and usually are not enough to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Notice that our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs under half second on a standard dual-core PC computer. --- paper_title: An effective visualisation and registration system for image-guided robotic partial nephrectomy paper_content: Robotic partial nephrectomy is presently the fastest-growing robotic surgical procedure, and in comparison to traditional techniques it offers reduced tissue trauma and likelihood of post-operative infection, while hastening recovery time and improving cosmesis. It is also an ideal candidate for image guidance technology since soft tissue deformation, while still present, is localised and less problematic compared to other surgical procedures. This work describes the implementation and ongoing development of an effective image guidance system that aims to address some of the remaining challenges in this area. Specific innovations include the introduction of an intuitive, partially automated registration interface, and the use of a hardware platform that makes sophisticated augmented reality overlays practical in real time. Results and examples of image augmentation are presented from both retrospective and live cases. Quantitative analysis of registration error verifies that the proposed registration technique is appropriate for the chosen image guidance targets. --- paper_title: Improved watershed transform for medical image segmentation using prior information paper_content: The watershed transform has interesting properties that make it useful for many different image segmentation applications: it is simple and intuitive, can be parallelized, and always produces a complete division of the image. However, when applied to medical image analysis, it has important drawbacks (oversegmentation, sensitivity to noise, poor detection of thin or low signal to noise ratio structures). We present an improvement to the watershed transform that enables the introduction of prior information in its calculation. 
We propose to introduce this information via the use of a previous probability calculation. Furthermore, we introduce a method to combine the watershed transform and atlas registration, through the use of markers. We have applied our new algorithm to two challenging applications: knee cartilage and gray matter/white matter segmentation in MR images. Numerical validation of the results is provided, demonstrating the strength of the algorithm for medical image segmentation. --- paper_title: Segmentation of volumetric MRA images by using Capillary Active Contour paper_content: Precise segmentation of three-dimensional (3D) magnetic resonance angiography (MRA) images can be a very useful computer aided diagnosis (CAD) tool for clinical routines. Level sets based evolution schemes, which have been shown to be effective and easy to implement for many segmentation applications, are being applied to MRA data sets. In this paper, we present a segmentation scheme for accurately extracting vasculature from MRA images. Our proposed algorithm models capillary action and derives a capillary active contour for segmentation of thin vessels. The algorithm is implemented using the level set method and has been applied successfully on real 3D MRA images. Compared with other state-of-the-art MRA segmentation algorithms, experiments show that our method facilitates more accurate segmentation of thin blood vessels. --- paper_title: Ultrasound image segmentation: a survey paper_content: This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and the degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem. --- paper_title: Automated medical image segmentation techniques paper_content: Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and Magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images. --- paper_title: Current methods in medical image segmentation. paper_content: Image segmentation plays a crucial role in many medical-imaging applications, by automating or facilitating the delineation of anatomical structures and other regions of interest. We present a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Terminology and important issues in image segmentation are first presented. Current segmentation approaches are then reviewed with an emphasis on the advantages and disadvantages of these methods for medical imaging applications. We conclude with a discussion on the future of image segmentation methods in biomedical research.
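Several of the abstracts above revolve around seeded region growing and marker-controlled watershed segmentation with prior information. The sketch below shows only the generic marker-controlled watershed recipe on a synthetic image using scikit-image; it is not the specific prior-knowledge algorithms of the cited papers, and the thresholds used to build the markers are arbitrary assumptions.

```python
# Minimal marker-controlled watershed sketch illustrating the family of methods
# summarized above (seeded region growing / watershed with markers). This is a
# generic scikit-image recipe, not the specific algorithms from the cited papers.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

# Synthetic "image": two bright blobs on a dark background plus noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
image = np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)
image += np.exp(-((yy - 90) ** 2 + (xx - 90) ** 2) / 300.0)
image += 0.05 * rng.standard_normal(image.shape)

# Markers play the role of the prior knowledge/seeds: here from crude thresholds;
# in the cited work they come from atlases, shape priors or user interaction.
markers = np.zeros(image.shape, dtype=int)
markers[image < 0.1] = 1          # background seed
markers[image > 0.7] = 2          # object seed

elevation = sobel(image)          # flood the gradient magnitude
labels = watershed(elevation, markers)
print("object pixels:", int((labels == 2).sum()))
print("connected object regions:", ndi.label(labels == 2)[1])
```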
--- paper_title: A review of segmentation methods in short axis cardiac MR images paper_content: For the last 15 years, Magnetic Resonance Imaging (MRI) has become a reference examination for cardiac morphology, function and perfusion in humans. Yet, due to the characteristics of cardiac MR images and to the great variability of the images among patients, the problem of heart cavities segmentation in MRI is still open. This paper is a review of fully and semi-automated methods performing segmentation in short axis images using a cardiac cine MRI sequence. Medical background and specific segmentation difficulties associated to these images are presented. For this particularly complex segmentation task, prior knowledge is required. We thus propose an original categorization for cardiac segmentation methods, with a special emphasis on what level of external information is required (weak or strong) and how it is used to constrain segmentation. After reviewing method principles and analyzing segmentation results, we conclude with a discussion and future trends in this field regarding methodological and medical issues. --- paper_title: Deformable Models in Medical Image Analysis : A Survey paper_content: This article surveys deformable models, a promising and vigorously researched computerassisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics, and approximation theory. They have proven to be effective in segmenting, matching, and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) knowledge about the location, size, and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching, and motion tracking. --- paper_title: A Survey of Graph Theoretical Approaches to Image Segmentation paper_content: Image segmentation is a fundamental problem in computer vision. Despite many years of research, general purpose image segmentation is still a very challenging task because segmentation is inherently ill-posed. Among different segmentation schemes, graph theoretical ones have several good features in practical applications. It explicitly organizes the image elements into mathematically sound structures, and makes the formulation of the problem more flexible and the computation more efficient. In this paper, we conduct a systematic survey of graph theoretical methods for image segmentation, where the problem is modeled in terms of partitioning a graph into several sub-graphs such that each of them represents a meaningful object of interest in the image. These methods are categorized into five classes under a uniform notation: the minimal spanning tree based methods, graph cut based methods with cost functions, graph cut based methods on Markov random field models, the shortest path based methods and the other methods that do not belong to any of these classes. 
We present motivations and detailed technical descriptions for each category of methods. The quantitative evaluation is carried by using five indices - Probabilistic Rand (PR) index, Normalized Probabilistic Rand (NPR) index, Variation of Information (VI), Global Consistency Error (GCE) and Boundary Displacement Error (BDE) - on some representative automatic and interactive segmentation methods. --- paper_title: A review of 3D vessel lumen segmentation techniques: Models, features and extraction schemes paper_content: Vascular diseases are among the most important public health problems in developed countries. Given the size and complexity of modern angiographic acquisitions, segmentation is a key step toward the accurate visualization, diagnosis and quantification of vascular pathologies. Despite the tremendous amount of past and on-going dedicated research, vascular segmentation remains a challenging task. In this paper, we review state-of-the-art literature on vascular segmentation, with a particular focus on 3D contrast-enhanced imaging modalities (MRA and CTA). We structure our analysis along three axes: models, features and extraction schemes. We first detail model-based assumptions on the vessel appearance and geometry which can embedded in a segmentation approach. We then review the image features that can be extracted to evaluate these models. Finally, we discuss how existing extraction schemes combine model and feature information to perform the segmentation task. Each component (model, feature and extraction scheme) plays a crucial role toward the efficient, robust and accurate segmentation of vessels of interest. Along each axis of study, we discuss the theoretical and practical properties of recent approaches and highlight the most advanced and promising ones.
--- paper_title: Diagonal preconditioning for first order primal-dual algorithms in convex optimization paper_content: In this paper we study preconditioning techniques for the first-order primal-dual algorithm proposed in [5]. In particular, we propose simple and easy to compute diagonal preconditioners for which convergence of the algorithm is guaranteed without the need to compute any step size parameters. As a by-product, we show that for a certain instance of the preconditioning, the proposed algorithm is equivalent to the old and widely unknown alternating step method for monotropic programming [7]. We show numerical results on general linear programming problems and a few standard computer vision problems. In all examples, the preconditioned algorithm significantly outperforms the algorithm of [5]. --- paper_title: A Distributed Mincut/Maxflow Algorithm Combining Path Augmentation and Push-Relabel paper_content: We propose a novel distributed algorithm for the minimum cut problem. Motivated by applications like volumetric segmentation in computer vision, we aim at solving large sparse problems. When the problem does not fully fit in the memory, we need to either process it by parts, looking at one part at a time, or distribute across several computers. Many mincut/maxflow algorithms are designed for the shared memory architecture and do not scale to this setting. We consider algorithms that work on disjoint regions of the problem and exchange messages between the regions. We show that the region push-relabel algorithm of Delong and Boykov (A scalable graph-cut algorithm for N-D grids, in CVPR, 2008) uses Θ(n^2) rounds of message exchange, where n is the number of vertices. Our new algorithm performs path augmentations inside the regions and push-relabel style updates between the regions. It uses asymptotically less message exchanges, $O(\mathcal{B}^{2})$, where $\mathcal{B}$ is the set of boundary vertices. The sequential and parallel versions of our algorithm are competitive with the state-of-the-art in the shared memory model. By achieving a lower amount of message exchanges (even asymptotically lower in our synthetic experiments), they suit better for solving large problems using a disk storage or a distributed system. --- paper_title: Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey paper_content: In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision field about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems.
While most of the literature concerns pairwise MRFs, in recent years we have also witnessed significant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic. --- paper_title: The maximum flow problem is log space complete for P paper_content: The space complexity of the maximum flow problem is investigated. It is shown that the problem is log space complete for deterministic polynomial time. Thus the maximum flow problem probably has no algorithm which needs only O(logk n) storage space for any constant k. Another consequence is that there is probably no fast parallel algorithm for the maximum flow problem. --- paper_title: Parallel and distributed graph cuts by dual decomposition paper_content: Graph cuts methods are at the core of many state-of-the-art algorithms in computer vision due to their efficiency in computing globally optimal solutions. In this paper, we solve the maximum flow/minimum cut problem in parallel by splitting the graph into multiple parts and hence, further increase the computational efficacy of graph cuts. Optimality of the solution is guaranteed by dual decomposition, or more specifically, the solutions to the subproblems are constrained to be equal on the overlap with dual variables. We demonstrate that our approach both allows (i) faster processing on multi-core computers and (ii) the capability to handle larger problems by splitting the graph across multiple computers on a distributed network. Even though our approach does not give a theoretical guarantee of speedup, an extensive empirical evaluation on several applications with many different data sets consistently shows good performance. An open source implementation of the dual decomposition method is also made publicly available. --- paper_title: Globally minimal surfaces by continuous maximal flows paper_content: In this paper, we address the computation of globally minimal curves and surfaces for image segmentation and stereo reconstruction. We present a solution, simulating a continuous maximal flow by a novel system of partial differential equations. Existing methods are either grid-biased (graph-based methods) or suboptimal (active contours and surfaces). The solution simulates the flow of an ideal fluid with isotropic velocity constraints. Velocity constraints are defined by a metric derived from image data. An auxiliary potential function is introduced to create a system of partial differential equations. It is proven that the algorithm produces a globally maximal continuous flow at convergence, and that the globally minimal surface may be obtained trivially from the auxiliary potential. The bias of minimal surface methods toward small objects is also addressed. An efficient implementation is given for the flow simulation. The globally minimal surface algorithm is applied to segmentation in 2D and 3D as well as to stereo matching. Results in 2D agree with an existing minimal contour algorithm for planar images. Results in 3D segmentation and stereo matching demonstrate that the new algorithm is robust and free from grid bias. --- paper_title: Determining the optimal weights in multiple objective function optimization paper_content: An important problem in computer vision is the determination of weights for multiple objective function optimization. 
This problem arises naturally in many reconstruction problems, where one wishes to reconstruct a function belonging to a constrained class of signals based upon noisy observed data. A common approach is to combine the objective functions into a single total cost function. The problem then is to determine appropriate weights for the objective functions. In this paper we propose techniques for automatically determining the weights, and discuss their properties. The Min-Max Principle, which avoids the problems of extremely low or high weights, is introduced. Expressions are derived relating the optimal weights, objective function values, and total cost. --- paper_title: Adaptive regularization for image segmentation using local image curvature cues paper_content: Image segmentation techniques typically require proper weighting of competing data fidelity and regularization terms. Conventionally, the associated parameters are set through tedious trial and error procedures and kept constant over the image. However, spatially varying structural characteristics, such as object curvature, combined with varying noise and imaging artifacts, significantly complicate the selection process of segmentation parameters. In this work, we propose a novel approach for automating the parameter selection by employing a robust structural cue to prevent excessive regularization of trusted (i.e. low noise) high curvature image regions. Our approach autonomously adapts local regularization weights by combining local measures of image curvature and edge evidence that are gated by a signal reliability measure. We demonstrate the utility and favorable performance of our approach within two major segmentation frameworks, graph cuts and active contours, and present quantitative and qualitative results on a variety of natural and medical images. --- paper_title: A Survey and Comparison of Discrete and Continuous Multi-label Optimization Approaches for the Potts Model paper_content: We present a survey and a comparison of a variety of algorithms that have been proposed over the years to minimize multi-label optimization problems based on the Potts model. Discrete approaches based on Markov Random Fields as well as continuous optimization approaches based on partial differential equations can be applied to the task. In contrast to the case of binary labeling, the multi-label problem is known to be NP hard and thus one can only expect near-optimal solutions. In this paper, we carry out a theoretical comparison and an experimental analysis of existing approaches with respect to accuracy, optimality and runtime, aimed at bringing out the advantages and short-comings of the respective algorithms. Systematic quantitative comparison is done on the Graz interactive image segmentation benchmark. This paper thereby generalizes a previous experimental comparison (Klodt et al. 2008) from the binary to the multi-label case. --- paper_title: On the Partial Difference Equations of Mathematical Physics paper_content: Problems involving the classical linear partial differential equations of mathematical physics can be reduced to algebraic ones of a very much simpler structure by replacing the differentials by difference quotients on some (say rectilinear) mesh. This paper will undertake an elementary discussion of these algebraic problems, in particular of the behavior of the solution as the mesh width tends to zero.
For present purposes we limit ourselves mainly to simple but typical cases, and treat them in such a way that the applicability of the method to more general difference equations and to those with arbitrarily many independent variables is made clear. --- paper_title: Computing geodesics and minimal surfaces via graph cuts paper_content: Geodesic active contours and graph cuts are two standard image segmentation techniques. We introduce a new segmentation method combining some of their benefits. Our main intuition is that any cut on a graph embedded in some continuous space can be interpreted as a contour (in 2D) or a surface (in 3D). We show how to build a grid graph and set its edge weights so that the cost of cuts is arbitrarily close to the length (area) of the corresponding contours (surfaces) for any anisotropic Riemannian metric. There are two interesting consequences of this technical result. First, graph cut algorithms can be used to find globally minimum geodesic contours (minimal surfaces in 3D) under arbitrary Riemannian metric for a given set of boundary conditions. Second, we show how to minimize metrication artifacts in existing graph-cut based methods in vision. Theoretically speaking, our work provides an interesting link between several branches of mathematics -differential geometry, integral geometry, and combinatorial optimization. The main technical problem is solved using Cauchy-Crofton formula from integral geometry. --- paper_title: Parallel graph-cuts by adaptive bottom-up merging paper_content: Graph-cuts optimization is prevalent in vision and graphics problems. It is thus of great practical importance to parallelize the graph-cuts optimization using today's ubiquitous multi-core machines. However, the current best serial algorithm by Boykov and Kolmogorov (called the BK algorithm) still has the superior empirical performance. It is non-trivial to parallelize as expensive synchronization overhead easily offsets the advantage of parallelism. In this paper, we propose a novel adaptive bottom-up approach to parallelize the BK algorithm. We first uniformly partition the graph into a number of regularly-shaped disjoint subgraphs and process them in parallel, then we incrementally merge the subgraphs in an adaptive way to obtain the global optimum. The new algorithm has three benefits: 1) it is more cache-friendly within smaller subgraphs; 2) it keeps balanced workloads among computing cores; 3) it causes little overhead and is adaptable to the number of available cores. Extensive experiments in common applications such as 2D/3D image segmentations and 3D surface fitting demonstrate the effectiveness of our approach. --- paper_title: Is a single energy functional sufficient? adaptive energy functionals and automatic initialization paper_content: Energy functional minimization is an increasingly popular technique for image segmentation. However, it is far too commonly applied with hand-tuned parameters and initializations that have only been validated for a few images. Fixing these parameters over a set of images assumes the same parameters are ideal for each image. We highlight the effects of varying the parameters and initialization on segmentation accuracy and propose a framework for attaining improved results using image adaptive parameters and initializations. 
We provide an analytical definition of optimal weights for functional terms through an examination of segmentation in the context of image manifolds, where nearby images on the manifold require similar parameters and similar initializations. Our results validate that fixed parameters are insufficient in addressing the variability in real clinical data, that similar images require similar parameters, and demonstrate how these parameters correlate with the image manifold. We present significantly improved segmentations for synthetic images and a set of 470 clinical examples. --- paper_title: Fast approximate energy minimization via graph cuts paper_content: In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum. We experimentally demonstrate the effectiveness of our approach on image restoration, stereo and motion. --- paper_title: A convex formulation of continuous multi-label problems paper_content: We propose a spatially continuous formulation of Ishikawa's discrete multi-label problem. We show that the resulting non-convex variational problem can be reformulated as a convex variational problem via embedding in a higher dimensional space. This variational problem can be interpreted as a minimal surface problem in an anisotropic Riemannian space. In several stereo experiments we show that the proposed continuous formulation is superior to its discrete counterpart in terms of computing time, memory efficiency and metrication errors. --- paper_title: Algorithms for Finding Global Minimizers of Image Segmentation and Denoising Models paper_content: We show how certain nonconvex optimization problems that arise in image processing and computer vision can be restated as convex minimization problems. This allows, in particular, the finding of global minimizers via standard convex minimization schemes.
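To make the preceding entries on move-making (α-expansion, α-β-swap) and convex relaxation concrete, the pairwise labeling energy they all aim to minimize is commonly written as

E(f) = \sum_{p \in \mathcal{P}} D_p(f_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V(f_p, f_q), \qquad V(f_p, f_q) = [f_p \neq f_q] \quad \text{(Potts model)},

where D_p is the data-fidelity term at pixel p, \mathcal{N} is the neighborhood system, \lambda is the regularization weight, and [\cdot] is the indicator function. The notation here is generic and not taken from any single cited abstract; the move-making algorithms approximately minimize this discrete energy, while the convex formulations relax it to a continuous problem that can be solved globally.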
--- paper_title: Quasiconvex Optimization and Location Theory paper_content: 1 Introduction.- 2 Elements of Convexity.- 2.1 Generalities.- 2.2 Convex sets.- 2.2.1 Hulls.- 2.2.2 Topological properties of convex sets.- 2.2.3 Separation of convex sets.- 2.3 Convex functions.- 2.3.1 Continuity of convex functions.- 2.3.2 Lower level sets and the subdifferential.- 2.3.3 Sublinear functions and directional derivatives.- 2.3.4 Support functions and gauges.- 2.3.5 Calculus rules with subdifferentials.- 2.4 Quasiconvex functions.- 2.5 Other directional derivatives.- 3 Convex Programming.- 3.1 Introduction.- 3.2 The ellipsoid method.- 3.2.1 The one dimensional case.- 3.2.2 The multidimensional case.- 3.2.3 Improving the numerical stability.- 3.2.4 Convergence proofs.- 3.2.5 Complexity.- 3.3 Stopping criteria.- 3.3.1 Satisfaction of the stopping rules.- 3.4 Computational experience.- 4 Convexity in Location.- 4.1 Introduction.- 4.2 Measuring convex distances.- 4.3 A general model.- 4.4 A convex location model.- 4.5 Characterizing optimality.- 4.6 Checking optimality in the planar case.- 4.6.1 Solving (D).- 4.6.2 Solving (D' ).- 4.6.3 Computational results.- 4.7 Computational results.- 5 Quasiconvex Programming.- 5.1 Introduction.- 5.2 A separation oracle for quasiconvex functions.- 5.2.1 Descent directions and geometry of lower level sets.- 5.2.2 Computing an element of the normal cone.- 5.3 Easy cases.- 5.3.1 Regular functions.- 5.3.2 Another class of easy functions.- 5.4 When we meet a "bad" point.- 5.5 Convergence proof.- 5.5.1 The unconstrained quasiconvex program.- 5.5.2 The constrained quasiconvex program.- 5.6 An ellipsoid algorithm for quasiconvex programming.- 5.6.1 Ellipsoids and boxes.- 5.6.2 Constructing a localization box.- 5.6.3 New cuts.- 5.6.4 Box cuts.- 5.6.5 Parallel cuts.- 5.6.6 Modified algorithm.- 5.7 Improving the stopping criteria.- 6 Quasiconvexity in Location.- 6.1 Introduction.- 6.2 A quasiconvex location model.- 6.3 Computational results.- 7 Conclusions. --- paper_title: Exact optimization for markov random fields with convex priors paper_content: We introduce a method to solve exactly a first order Markov random field optimization problem in more generality than was previously possible. The MRF has a prior term that is convex in terms of a linearly ordered label set. The method maps the problem into a minimum-cut problem for a directed graph, for which a globally optimal solution can be found in polynomial time. The convexity of the prior function in the energy is shown to be necessary and sufficient for the applicability of the method. --- paper_title: Multiregion competition: A level set extension of region competition to multiple region image partitioning paper_content: The purpose of this study is to investigate a new representation of a partition of an image domain into a fixed but arbitrary number of regions by explicit correspondence between the regions of segmentation and the regions defined by simple closed planar curves and their intersections, and the use of this representation in the context of region competition to provide a level set multiregion competition algorithm. This formulation leads to a system of coupled curve evolution equations which is easily amenable to a level set implementation and the computed solution is one that minimizes the stated functional. An unambiguous segmentation is guaranteed because at all times during curve evolution the evolving regions form a partition of the image domain.
We present the multiregion competition algorithm for intensity-based image segmentation and we subsequently extend it to motion/disparity. Finally, we consider an extension of the algorithm to account for images with aberrations such as occlusions. The formulation, the ensuing algorithm, and its implementation have been validated in several experiments on gray level, color, and motion segmentation. --- paper_title: An Efficient Optimization Framework for Multi-Region Segmentation Based on Lagrangian Duality paper_content: We introduce a multi-region model for simultaneous segmentation of medical images. In contrast to many other models, geometric constraints such as inclusion and exclusion between the regions are enforced, which makes it possible to correctly segment different regions even if the intensity distributions are identical. We efficiently optimize the model using a combination of graph cuts and Lagrangian duality which is faster and more memory efficient than current state of the art. As the method is based on global optimization techniques, the resulting segmentations are independent of initialization. We apply our framework to the segmentation of the left and right ventricles, myocardium and the left ventricular papillary muscles in magnetic resonance imaging and to lung segmentation in full-body X-ray computed tomography. We evaluate our approach on a publicly available benchmark with competitive results. --- paper_title: STACS: new active contour scheme for cardiac MR image segmentation paper_content: The paper presents a novel stochastic active contour scheme (STACS) for automatic image segmentation designed to overcome some of the unique challenges in cardiac MR images such as problems with low contrast, papillary muscles, and turbulent blood flow. STACS minimizes an energy functional that combines stochastic region-based and edge-based information with shape priors of the heart and local properties of the contour. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. STACS includes an annealing schedule that balances dynamically the weight of the different terms in the energy functional. Three particularly attractive features of STACS are: 1) ability to segment images with low texture contrast by modeling stochastically the image textures; 2) robustness to initial contour and noise because of the utilization of both edge and region-based information; 3) ability to segment the heart from the chest wall and the undesired papillary muscles due to inclusion of heart shape priors. Application of STACS to a set of 48 real cardiac MR images shows that it can successfully segment the heart from its surroundings such as the chest wall and the heart structures (the left and right ventricles and the epicardium.) We compare STACS' automatically generated contours with manually-traced contours, or the "gold standard," using both area and edge similarity measures. This assessment demonstrates very good and consistent segmentation performance of STACS. --- paper_title: Multi-Class Segmentation with Relative Location Prior paper_content: Multi-class image segmentation has made significant advances in recent years through the combination of local and global features. One important type of global feature is that of inter-class spatial relationships. For example, identifying "tree" pixels indicates that pixels above and to the sides are more likely to be "sky" whereas pixels below are more likely to be "grass." 
Incorporating such global information across the entire image and between all classes is a computational challenge as it is image-dependent, and hence, cannot be precomputed. In this work we propose a method for capturing global information from inter-class spatial relationships and encoding it as a local feature. We employ a two-stage classification process to label all image pixels. First, we generate predictions which are used to compute a local relative location feature from learned relative location maps. In the second stage, we combine this with appearance-based features to provide a final segmentation. We compare our results to recently published results on several multi-class image segmentation databases and show that the incorporation of relative location information allows us to significantly outperform the current state-of-the-art. --- paper_title: Learning CRFs using Graph Cuts paper_content: Many computer vision problems are naturally formulated as random fields, specifically MRFs or CRFs. The introduction of graph cuts has enabled efficient and optimal inference in associative random fields, greatly advancing applications such as segmentation, stereo reconstruction and many others. However, while fast inference is now widespread, parameter learning in random fields has remained an intractable problem. This paper shows how to apply fast inference algorithms, in particular graph cuts, to learn parameters of random fields with similar efficiency. We find optimal parameter values under standard regularized objective functions that ensure good generalization. Our algorithm enables learning of many parameters in reasonable time, and we explore further speedup techniques. We also discuss extensions to non-associative and multi-class problems. We evaluate the method on image segmentation and geometry recognition. --- paper_title: Medial-Based Deformable Models in Nonconvex Shape-Spaces for Medical Image Segmentation paper_content: We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
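The relative-location idea in the multi-class segmentation entry above can be sketched, purely schematically, as a second-stage feature obtained by aggregating offset votes from first-stage class probabilities; the array names and the learned maps relloc_maps below are hypothetical placeholders, not code from the cited work:

import numpy as np
from scipy.signal import fftconvolve

def relative_location_feature(stage1_probs, relloc_maps):
    # stage1_probs: (H, W, C) per-pixel class probabilities from a first-stage classifier.
    # relloc_maps:  (C, C, H, W) hypothetical learned maps; relloc_maps[c, d] encodes how
    #               likely class d is at a given spatial offset from a pixel voting as class c.
    H, W, C = stage1_probs.shape
    feature = np.zeros((H, W, C))
    for c in range(C):          # class casting the votes
        for d in range(C):      # class receiving the votes
            feature[..., d] += fftconvolve(stage1_probs[..., c], relloc_maps[c, d], mode="same")
    # Normalize so the feature can be combined with appearance-based cues in the second stage.
    return feature / feature.sum(axis=2, keepdims=True).clip(min=1e-12)

In the cited approach the aggregated feature is fed, together with appearance features, to a second-stage classifier; the convolution above is simply a compact way of summing votes over all spatial offsets.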
--- paper_title: Prior Shape Level Set Segmentation on Multistep Generated Probability Maps of MR Datasets for Fully Automatic Kidney Parenchyma Volumetry paper_content: Fully automatic 3-D segmentation techniques for clinical applications or epidemiological studies have proven to be a very challenging task in the domain of medical image analysis. 3-D organ segmentation on magnetic resonance (MR) datasets requires a well-designed segmentation strategy due to imaging artifacts, partial volume effects, and similar tissue properties of adjacent tissues. We developed a 3-D segmentation framework for fully automatic kidney parenchyma volumetry that uses Bayesian concepts for probability map generation. The probability map quality is improved in a multistep refinement approach. An extended prior shape level set segmentation method is then applied on the refined probability maps. The segmentation quality is improved by incorporating an exterior cortex edge alignment technique using cortex probability maps. In contrast to previous approaches, we combine several relevant kidney parenchyma features in a sequence of segmentation techniques for successful parenchyma delineation on native MR datasets. Furthermore, the proposed method is able to recognize and exclude parenchymal cysts from the parenchymal volume. We analyzed four different quality measures showing better results for right parenchymal tissue than for left parenchymal tissue due to an incorporated liver part removal in the segmentation framework. The results show that the outer cortex edge alignment approach successfully improves the quality measures. --- paper_title: Graph cut with ordering constraints on labels and its applications paper_content: In the last decade, graph-cut optimization has been popular for a variety of pixel labeling problems. Typically graph-cut methods are used to incorporate a smoothness prior on a labeling. Recently several methods incorporated ordering constraints on labels for the application of object segmentation. An example of an ordering constraint is prohibiting a pixel with a "car wheel" label to be above a pixel with a "car roof" label. We observe that the commonly used graph-cut based alpha-expansion is more likely to get stuck in a local minimum when ordering constraints are used. For certain models with ordering constraints, we develop new graph-cut moves which we call order-preserving moves. Order-preserving moves act on all labels, unlike alpha-expansion. Although the global minimum is still not guaranteed, optimization with order-preserving moves performs significantly better than alpha-expansion. We evaluate order-preserving moves for the geometric class scene labeling (introduced by Hoiem et al.) where the goal is to assign each pixel a label such as "sky", "ground", etc., so ordering constraints arise naturally. In addition, we use order-preserving moves for certain simple shape priors in graph-cut segmentation, which is a novel contribution in itself. --- paper_title: Tiered scene labeling with dynamic programming paper_content: Dynamic programming (DP) has been a useful tool for a variety of computer vision problems. However its application is usually limited to problems with a one dimensional or low treewidth structure, whereas most domains in vision are at least 2D. In this paper we show how to apply DP for pixel labeling of 2D scenes with simple “tiered” structure.
While there are many variations possible, for the applications we consider the following tiered structure is appropriate. An image is first divided by horizontal curves into the top, middle, and bottom regions, and the middle region is further subdivided vertically into subregions. Under these constraints a globally optimal labeling can be found using an efficient dynamic programming algorithm. We apply this algorithm to two very different tasks. The first is the problem of geometric class labeling where the goal is to assign each pixel a label such as “sky”, “ground”, and “surface above ground”. The second task involves incorporating simple shape priors for segmentation of an image into the “foreground” and “background” regions. --- paper_title: An Adaptive Subdivision Scheme for Quadratic Programming in Multi-Label Image Segmentation. paper_content: Convex quadratic optimization is one of the most widely used concepts in image segmentation. It facilitates a wide range of information sources, such as edge, intensity, texture and shape. The problem is especially challenging for the multi-label case, even being NP-hard in its most general setting. Therefore, fast "solutions", such as the α-expansion of [1], are limited to local optimality. Addressing this problem, several approaches relax the labeling integrality condition, resulting in quadratic programs (QPs) like in [2] and in [4], which can be solved in polynomial time. Although this is efficient in a theoretical sense, large-scale QPs that arise from typical multi-label tasks can rarely be used for image segmentation directly due to either time or space constraints, or both. We address this issue by an adaptive domain subdivision scheme, reducing the problem to a short sequence of spatially smoothed medium-scale QPs, which subsequently better approximate the large-scale program. Our scheme is globally optimal in terms of the approximated problem. Putting our main focus on the subdivision, we restrict ourselves to minimization of the popular but rather simple piecewise constant Mumford-Shah functional. Therefore, we seek a labeling that trades off the length of the labeling border and the approximation of image intensity u by known reference intensities u_i for each label i. For discrete domains the associated energy can be written as --- paper_title: A variational model for object segmentation using boundary information and shape prior driven by the Mumford-Shah functional paper_content: In this paper, we propose a variational model to segment an object belonging to a given scale space using the active contour method, a geometric shape prior and the Mumford-Shah functional. We define an energy functional composed of three complementary terms. The first one detects object boundaries from image gradients. The second term constrains the active contour to get a shape compatible with a statistical shape model of the shape of interest. The third part globally drives the shape prior and the active contour towards a homogeneous intensity region. The segmentation of the object of interest is given by the minimum of our energy functional. This minimum is computed with the calculus of variations and the gradient descent method that provide a system of evolution equations solved with the well-known level set method. We also prove the existence of this minimum in the space of functions with bounded variation. Applications of the proposed model are presented on synthetic and medical images.
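Both entries above build on the piecewise-constant Mumford-Shah model. For reference, the standard textbook form of that energy (given here as general background, not as the specific expression elided from the abstract above), for a partition of the image domain into regions \Omega_i with per-label constants u_i, is

E(\{\Omega_i\}, \{u_i\}) = \sum_i \int_{\Omega_i} \big(u(x) - u_i\big)^2 \, dx + \lambda\, |\Gamma|,

where u is the observed image, \Gamma is the set of boundaries between the regions, |\Gamma| is its total length, and \lambda weights data fidelity against the length of the labeling border.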
--- paper_title: Minimizing sparse higher order energy functions of discrete variables paper_content: Higher order energy functions have the ability to encode high level structural dependencies between pixels, which have been shown to be extremely powerful for image labeling problems. Their use, however, is severely hampered in practice by the intractable complexity of representing and minimizing such functions. We observed that higher order functions encountered in computer vision are very often “sparse”, i.e. many labelings of a higher order clique are equally unlikely and hence have the same high cost. In this paper, we address the problem of minimizing such sparse higher order energy functions. Our method works by transforming the problem into an equivalent quadratic function minimization problem. The resulting quadratic function can be minimized using popular message passing or graph cut based algorithms for MAP inference. Although this is primarily a theoretical paper, it also shows how higher order functions can be used to obtain impressive results for the binary texture restoration problem. --- paper_title: On Parameter Learning in Crf-Based Approaches to Object Class Image Segmentation paper_content: Recent progress in per-pixel object class labeling of natural images can be attributed to the use of multiple types of image features and sound statistical learning approaches. Within the latter, Conditional Random Fields (CRF) are prominently used for their ability to represent interactions between random variables. Despite their popularity in computer vision, parameter learning for CRFs has remained difficult, popular approaches being cross-validation and piecewise training. In this work, we propose a simple yet expressive tree-structured CRF based on a recent hierarchical image segmentation method. Our model combines and weights multiple image features within a hierarchical representation and allows simple and efficient globally-optimal learning of ≅ 10^5 parameters. The tractability of our model allows us to pose and answer some of the open questions regarding parameter learning applying to CRF-based approaches. The key findings for learning CRF models are, from the obvious to the surprising, i) multiple image features always help, ii) the limiting dimension with respect to current models is the amount of training data, iii) piecewise training is competitive, iv) current methods for max-margin training fail for models with many parameters. --- paper_title: Efficient piecewise learning for conditional random fields paper_content: Conditional Random Field models have proved effective for several low-level computer vision problems. Inference in these models involves solving a combinatorial optimization problem, with methods such as graph cuts and belief propagation. Although several methods have been proposed to learn the model parameters from training data, they suffer from various drawbacks. Learning these parameters involves computing the partition function, which is intractable. To overcome this, state-of-the-art structured learning methods frame the problem as one of large margin estimation. Iterative solutions have been proposed to solve the resulting convex optimization problem. Each iteration involves solving an inference problem over all the labels, which limits the efficiency of these structured methods. In this paper we present an efficient large margin piece-wise learning method which is widely applicable.
We show how the resulting optimization problem can be reduced to an equivalent convex problem with a small number of constraints, and solve it using an efficient scheme. Our method is both memory and computationally efficient. We show results on publicly available standard datasets. --- paper_title: Area prior constrained level set evolution for medical image segmentation paper_content: The level set framework has proven well suited to medical image segmentation [1-6] thanks to its ability to balance the contribution of image data and prior knowledge in a principled, flexible and transparent way. It consists of evolving a curve toward the target object boundaries. The curve evolution equation is sought following the optimization of a cost functional containing two types of terms: data terms, which measure the fidelity of segmentation to image intensities, and prior terms, which translate learned prior knowledge. Without priors many algorithms are likely to fail due to high noise, low contrast and data incompleteness. Different priors have been investigated such as shape [1] and appearance priors [7]. In this study, we propose a simple type of prior: the area prior. This prior embeds knowledge of an approximate object area and has two positive effects. First, it significantly speeds up the evolution when the curve is far from the target object boundaries. Second, it slows down the evolution when the curve is close to the target. Consequently, it reinforces curve stability at the desired boundaries when dealing with low contrast intensity edges. The algorithm is validated with several experiments using Magnetic Resonance (MR) images and Computed Tomography (CT) images. A comparison with another level set method illustrates the positive effects of the area prior. --- paper_title: A Statistical Overlap Prior for Variational Image Segmentation paper_content: This study investigates variational image segmentation with an original data term, referred to as statistical overlap prior, which measures the conformity of overlap between the nonparametric distributions of image data within the segmentation regions to a learned statistical description. This leads to image segmentation and distribution tracking algorithms that relax the assumption of minimal overlap and, as such, are more widely applicable than existing algorithms. We propose to minimize active curve functionals containing the proposed overlap prior, compute the corresponding Euler-Lagrange curve evolution equations, and give an interpretation of how the overlap prior controls such evolution. We model the overlap, measured via the Bhattacharyya coefficient, with a Gaussian prior whose parameters are estimated from a set of relevant training images. Quantitative and comparative performance evaluations of the proposed algorithms over several experiments demonstrate the positive effects of the overlap prior in regard to segmentation accuracy and convergence speed. --- paper_title: Global Minimization for Continuous Multiphase Partitioning Problems Using a Dual Approach paper_content: This paper is devoted to the optimization problem of continuous multi-partitioning, or multi-labeling, which is based on a convex relaxation of the continuous Potts model. In contrast to previous efforts, which are tackling the optimal labeling problem in a direct manner, we first propose a novel dual model and then build up a corresponding duality-based approach.
By analyzing the dual formulation, sufficient conditions are derived which show that the relaxation is often exact, i.e. there exist optimal solutions that are also globally optimal for the original nonconvex Potts model. In order to deal with the nonsmooth dual problem, we develop a smoothing method based on the log-sum exponential function and indicate that such a smoothing approach leads to a novel smoothed primal-dual model and suggests labelings with maximum entropy. Such a smoothing method for the dual model also yields a new thresholding scheme to obtain approximate solutions. An expectation-maximization-like algorithm is proposed based on the smoothed formulation which is shown to be superior in efficiency compared to earlier approaches from continuous optimization. Numerical experiments also show that our method outperforms several competitive approaches in various aspects, such as lower energies and better visual quality. --- paper_title: Active Volume Models for Medical Image Segmentation paper_content: In this paper, we propose a novel predictive model, active volume model (AVM), for object boundary extraction. It is a dynamic “object” model whose manifestation includes a deformable curve or surface representing a shape, a volumetric interior carrying appearance statistics, and an embedded classifier that separates object from background based on current feature information. The model focuses on an accurate representation of the foreground object's attributes, and does not explicitly represent the background. As we will show, however, the model is capable of reasoning about the background statistics and thus can detect when the change is sufficient to invoke a boundary decision. When applied to object segmentation, the model alternates between two basic operations: 1) deforming according to current region of interest (ROI), which is a binary mask representing the object region predicted by the current model, and 2) predicting ROI according to current appearance statistics of the model. To further improve robustness and accuracy when segmenting multiple objects or an object with multiple parts, we also propose multiple-surface active volume model (MSAVM), which consists of several single-surface AVM models subject to high-level geometric spatial constraints. An AVM's deformation is derived from a linear system based on the finite element method (FEM). To keep the model's surface triangulation optimized, surface remeshing is derived from another linear system based on Laplacian mesh optimization (LMO). Thus efficient optimization and fast convergence of the model are achieved by solving two linear systems. Segmentation, validation and comparison results are presented from experiments on a variety of 2-D and 3-D medical images.
--- paper_title: Convex multi-region probabilistic segmentation with shape prior in the isometric log-ratio transformation space paper_content: Image segmentation is often performed via the minimization of an energy function over a domain of possible segmentations. The effectiveness and applicability of such methods depends greatly on the properties of the energy function and its domain, and on what information can be encoded by it. Here we propose an energy function that achieves several important goals. Specifically, our energy function is convex and incorporates shape prior information while simultaneously generating a probabilistic segmentation for multiple regions. Our energy function represents multi-region probabilistic segmentations as elements of a vector space using the isometric log-ratio (ILR) transformation. To our knowledge, these four goals (convex, with shape priors, multi-region, and probabilistic) do not exist together in any other method, and this is the first time ILR is used in an image segmentation method. We provide examples demonstrating the usefulness of these features. --- paper_title: Adaptive segmentation of MRI data paper_content: Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images.
Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter. --- paper_title: Using the logarithm of odds to define a vector space on probabilistic atlases paper_content: The logarithm of the odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for nonconvex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. --- paper_title: Uncertainty and Patient Heterogeneity in Medical Decision Models paper_content: Parameter uncertainty, patient heterogeneity, and stochastic uncertainty of outcomes are increasingly important concepts in medical decision models. The purpose of this study is to demonstrate the various methods to analyze uncertainty and patient heterogeneity in a decision model. The authors distinguish various purposes of medical decision modeling, serving various stakeholders. Differences and analogies between the analyses are pointed out, as well as practical issues. The analyses are demonstrated with an example comparing imaging tests for patients with chest pain. 
For complicated analyses step-by-step algorithms are provided. The focus is on Monte Carlo simulation and value of information analysis. Increasing model complexity is a major challenge for probabilistic sensitivity analysis and value of information analysis. The authors discuss nested analyses that are required in patient-level models, and in nonlinear models for analyses of partial value of information analysis. --- paper_title: ProbExplorer: Uncertainty-guided exploration and editing of probabilistic medical image segmentation paper_content: In this paper, we develop an interactive analysis and visualization tool for probabilistic segmentation results in medical imaging. We provide a systematic approach to analyze, interact with, and highlight regions of segmentation uncertainty. We introduce a set of visual analysis widgets integrating different approaches to analyze multivariate probabilistic field data with direct volume rendering. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and correct the misclassification results using a novel uncertainty-based segmentation editing technique. We evaluate our system and demonstrate its usefulness in the context of static and time-varying medical imaging datasets. --- paper_title: Fuzzy-Snake Segmentation of Anatomical Structures Applied to CT Images paper_content: This paper presents a generic strategy to facilitate the segmentation of anatomical structures in medical images. The segmentation is performed using an adapted PDM by fuzzy c-means classification, which also uses the fuzzy decision to evolve PDM into the final contour. Furthermore, the fuzzy reasoning exploits a priori statistical information from several knowledge sources based on histogram analysis and the intensity values of the structures under consideration. The fuzzy reasoning is also applied and compared to a geometrical active contour model (or level set). The method has been developed to assist clinicians and radiologists in conformal RTP. Experimental results and their quantitative validation to assess the accuracy and efficiency are given for segmenting the bladder on CT images. To assess precision, results are also presented in CT images with added Gaussian noise. The fuzzy-snake is parameter-free and is able to properly segment the structures by using the same initial spline curve for a whole study image-patient set. --- paper_title: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm paper_content: The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation--no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model.
The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, we show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation. --- paper_title: Probabilistic multi-shape representation using an isometric log-ratio mapping paper_content: Several sources of uncertainties in shape boundaries in medical images have motivated the use of probabilistic labeling approaches. Although it is well-known that the sample space for the probabilistic representation of a pixel is the unit simplex, standard techniques of statistical shape analysis (e.g. principal component analysis) have been applied to probabilistic data as if they lie in the unconstrained real Euclidean space. Since these techniques are not constrained to the geometry of the simplex, the statistically feasible data produced end up representing invalid (out of the simplex) shapes. By making use of methods for dealing with what is known as compositional or closed data, we propose a new framework intrinsic to the unit simplex for statistical analysis of probabilistic multi-shape anatomy. In this framework, the isometric logratio (ILR) transformation is used to isometrically and bijectively map the simplex to the Euclidean real space, where data are analyzed in the same way as unconstrained data and then back-transformed to the simplex. We demonstrate favorable properties of ILR over existing mappings (e.g. LogOdds). Our results on synthetic and brain data exhibit a more accurate statistical analysis of probabilistic shapes. --- paper_title: Go digital, go fuzzy paper_content: Abstract In many application areas of imaging sciences, object information captured in multidimensional images needs to be extracted, visualized, manipulated, and analyzed. These four groups of operations have been (and are being) intensively investigated, developed, and applied in a variety of applications. In this paper, after giving a brief overview of the four groups of operations, we put forth two main arguments: (1) Computers are digital, and most image acquisition and communication efforts at present are toward digital approaches. In the same vein, there are considerable advantages to taking an inherently digital approach to the above four groups of operations rather than using concepts based on continuous approximations. (2) Considering the fact that images are inherently fuzzy, to handle uncertainties and heterogeneity of object properties realistically, approaches based on fuzzy sets should be taken to the above four groups of operations. We give two examples in support of these arguments. --- paper_title: Kinetic Modeling Based Probabilistic Segmentation for Molecular Images paper_content: We propose a semi-supervised, kinetic modeling based segmentation technique for molecular imaging applications. 
It is an iterative, self-learning algorithm based on uncertainty principles, designed to alleviate low signal-to-noise ratio (SNR) and partial volume effect (PVE) problems. Synthetic fluorodeoxyglucose (FDG) and simulated Raclopride dynamic positron emission tomography (dPET) brain images with excessive noise levels are used to validate our algorithm. We show, qualitatively and quantitatively, that our algorithm outperforms state-of-the-art techniques in identifying different functional regions and recovering the kinetic parameters. --- paper_title: The Isometric Log-Ratio Transform for Probabilistic Multi-Label Anatomical Shape Representation paper_content: Sources of uncertainty in the boundaries of structures in medical images have motivated the use of probabilistic labels in segmentation applications. An important component in many medical image segmentation tasks is the use of a shape model, often generated by applying statistical techniques to training data. Standard statistical techniques (e.g., principal component analysis) often assume data lies in an unconstrained vector space, but probabilistic labels are constrained to the unit simplex. If these statistical techniques are used directly on probabilistic labels, relative uncertainty information can be sacrificed. A standard method for facilitating analysis of probabilistic labels is to map them to a vector space using the LogOdds transform. However, the LogOdds transform is asymmetric in one of the labels, which skews results in some applications. The isometric log-ratio (ILR) transform is a symmetrized version of the LogOdds transform, and is so named as it is an isometry between the Aitchison geometry, the inherent geometry of the simplex, and standard Euclidean geometry. We explore how to interpret the Aitchison geometry when applied to probabilistic labels in medical image segmentation applications. We demonstrate the differences when applying the LogOdds transform or the ILR transform to probabilistic labels prior to statistical analysis. Specifically, we show that statistical analysis of ILR transformed data better captures the variability of anatomical shapes in cases where multiple different foreground regions share boundaries (as opposed to foreground-background boundaries). --- paper_title: The Generalized Log-Ratio Transformation: Learning Shape and Adjacency Priors for Simultaneous Thigh Muscle Segmentation paper_content: We present a novel probabilistic shape representation that implicitly includes prior anatomical volume and adjacency information, termed the generalized log-ratio (GLR) representation. We demonstrate the usefulness of this representation in the task of thigh muscle segmentation. Analysis of the shapes and sizes of thigh muscles can lead to a better understanding of the effects of chronic obstructive pulmonary disease (COPD), which often results in skeletal muscle weakness in lower limbs. However, segmenting these muscles from one another is difficult due to a lack of distinctive features and inter-muscular boundaries that are difficult to detect. We overcome these difficulties by building a shape model in the space of GLR representations. We remove pose variability from the model by employing a presegmentation-based alignment scheme. We also design a rotationally invariant random forest boundary detector that learns common appearances of the interface between muscles from training data. 
We combine the shape model and the boundary detector into a fully automatic globally optimal segmentation technique. Our segmentation technique produces a probabilistic segmentation that can be used to generate uncertainty information, which can be used to aid subsequent analysis. Our experiments on challenging 3D magnetic resonance imaging data sets show that the use of the GLR representation improves the segmentation accuracy, and yields an average Dice similarity coefficient of 0.808 ± 0.074, comparable to other state-of-the-art thigh segmentation techniques. --- paper_title: Random Walks for Image Segmentation paper_content: A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs. --- paper_title: Hands-free interactive image segmentation using eyegaze paper_content: This paper explores a novel approach to interactive user-guided image segmentation, using eyegaze information as an input. The method includes three steps: 1) eyegaze tracking for providing user input, such as setting object and background seed pixel selection; 2) an optimization method for image labeling that is constrained or affected by user input; and 3) linking the two previous steps via a graphical user interface for displaying the images and other controls to the user and for providing real-time visual feedback of eyegaze and seed locations, thus enabling the interactive segmentation procedure. We developed a new graphical user interface supported by an eyegaze tracking monitor to capture the user’s eyegaze movement and fixations (as opposed to traditional mouse moving and clicking). The user simply looks at different parts of the screen to select which image to segment, to perform foreground and background seed placement and to set optional segmentation parameters. There is an eyegaze-controlled “zoom” feature for difficult images containing objects with narrow parts, holes or weak boundaries. The image is then segmented using the random walker image segmentation method. We performed a pilot study with 7 subjects who segmented synthetic, natural and real medical images. Our results show that getting used to the new interface takes only about 5 minutes. Compared with traditional mouse-based control, the new eyegaze approach provided an 18.6% speed improvement for more than 90% of images with high object-background contrast. However, for low contrast and more difficult images it took longer to place seeds using the eyegaze-based “zoom” to relax the required eyegaze accuracy of seed placement. --- paper_title: Interactive Live-Wire Boundary Extraction paper_content: Live-wire segmentation is a new interactive tool for efficient, accurate and reproducible boundary extraction which requires minimal user input with a mouse.
Optimal boundaries are computed and selected at interactive rates as the user moves the mouse starting from a manually specified seed point. When the mouse position comes into the proximity of an object edge, a ‘live-wire’ boundary snaps to, and wraps around the object of interest. The input of a new seed point ‘freezes’ the selected boundary segment and the process is repeated until the boundary is complete. Two novel enhancements to the basic live-wire methodology include boundary cooling and on-the-fly training. Data-driven boundary cooling generates seed points automatically and further reduces user input. On-the-fly training adapts the dynamic boundary to edges of current interest. Using the live-wire technique, boundaries are extracted in one-fifth of the time required for manual tracing, but with 4.4 times greater accuracy and 4.8 times greater reproducibility. In particular, interobserver reproducibility using the live-wire tool is 3.8 times greater than intraobserver reproducibility using manual tracing. --- paper_title: Fast Geodesic Active Contours paper_content: We use an unconditionally stable numerical scheme to implement a fast version of the geodesic active contour model. The proposed scheme is useful for object segmentation in images, like tracking moving objects in a sequence of images. The method is based on the Weickert-Romeney-Viergever (additive operator splitting) AOS scheme. It is applied at small regions, motivated by the Adalsteinsson-Sethian level set narrow band approach, and uses Sethian's (1996) fast marching method for re-initialization. Experimental results demonstrate the power of the new method for tracking in color movies. --- paper_title: Exploration and Visualization of Segmentation Uncertainty using Shape and Appearance Prior Information paper_content: We develop an interactive analysis and visualization tool for probabilistic segmentation in medical imaging. The originality of our approach is that the data exploration is guided by shape and appearance knowledge learned from expert-segmented images of a training population. We introduce a set of multidimensional transfer function widgets to analyze the multivariate probabilistic field data. These widgets furnish the user with contextual information about conformance or deviation from the population statistics. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and to correct the misclassification results. We evaluate our system and demonstrate its usefulness in the context of static anatomical and time-varying functional imaging datasets. --- paper_title: Segmentation from a box paper_content: Drawing a box around an intended segmentation target has become both a popular user interface and a common output for learning-driven detection algorithms. Despite the ubiquity of using a box to define a segmentation target, it is unclear in the literature whether a box is sufficient to define a unique segmentation or whether segmentation from a box is ill-posed without higher-level (semantic) knowledge of the intended target. We examine this issue by conducting a study of 14 subjects who are asked to segment a boxed target in a set of 50 real images for which they have no semantic attachment. We find that the subjects do indeed perceive and trace almost the same segmentations as each other, despite the inhomogeneity of the image intensities, irregular shapes of the segmentation targets and weakness of the target boundaries. 
Since the subjects produce the same segmentation, we conclude that the problem is well-posed and then provide a new segmentation algorithm from a box which achieves results close to the perceived target. --- paper_title: "GrabCut": interactive foreground extraction using iterated graph cuts paper_content: The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for "border matting" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools. --- paper_title: 3D live-wire-based semi-automatic segmentation of medical images paper_content: Segmenting anatomical structures from medical images is usually one of the most important initial steps in many applications, including visualization, computer-aided diagnosis, and morphometric analysis. Manual 2D segmentation suffers from operator variability and is tedious and time-consuming. These disadvantages are accentuated in 3D applications, and the additional requirement of producing intuitive displays to integrate 3D information for the user makes manual segmentation even less approachable in 3D. Robust, automatic medical image segmentation in 2D to 3D remains an open problem caused particularly by sensitivity to low-level parameters of segmentation algorithms. Semi-automatic techniques present a possible balanced solution where automation focuses on low-level computing-intensive tasks that can be hidden from the user, while manual intervention captures high-level expert knowledge nontrivial to capture algorithmically. In this paper we present a 3D extension to the 2D semi-automatic live-wire technique. Live-wire based contours generated semi-automatically on a selected set of slices are used as seed points on new unseen slices in different orientations. The seed points are calculated from intersections of user-based live-wire techniques with new slices. Our algorithm includes a step for ordering the live-wire seed points in the new slices, which is essential for subsequent multi-stage optimal path calculation. We present results of automatically detecting contours in new slices in 3D volumes from a variety of medical images. --- paper_title: Soft scissors: an interactive tool for realtime high quality matting paper_content: We present Soft Scissors, an interactive tool for extracting alpha mattes of foreground objects in realtime. We recently proposed a novel offline matting algorithm capable of extracting high-quality mattes for complex foreground objects such as furry animals [Wang and Cohen 2007]. In this paper we both improve the quality of our offline algorithm and give it the ability to incrementally update the matte in an online interactive setting.
Our realtime system efficiently estimates foreground color thereby allowing both the matte and the final composite to be revealed instantly as the user roughly paints along the edge of the foreground object. In addition, our system can dynamically adjust the width and boundary conditions of the scissoring paint brush to approximately capture the boundary of the foreground object that lies ahead on the scissor's path. These advantages in both speed and accuracy create the first interactive tool for high quality image matting and compositing. --- paper_title: User-aided Boundary Delineation through the Propagation of Implicit Representations paper_content: In this paper we introduce user-defined segmentation constraints within the level set methods. Snake-driven methods are powerful and widely explored techniques for object extraction. Level set representations is a mathematical framework technique to implement such methods. This formulation is implicit, intrinsic and parameter/topology free. Introducing shape-driven knowledge within the level set method for segmentation is a recently explored topic. User interactive constraints are more simplistic forms of prior shape knowledge. To this end, we propose a simple formulation that converts user interaction to objective function terms that aim to guide the segmentation solution through the user edits. --- paper_title: Interactive level set segmentation for image-guided therapy paper_content: Image-guided therapy procedures require the patient to remain still throughout the image acquisition, data analysis and therapy. This imposes a tight time constraint on the over-all process. Automatic extraction of the pathological regions prior to the therapy can be faster than the customary manual segmentation performed by the physician. However, the image data alone is usually not sufficient for reliable and unambiguous computerized segmentation. Thus, the oversight of an experienced physician remains mandatory. We present a novel segmentation framework, that allows user feedback. A few mouse-clicks of the user, discrete in nature, are represented as a continuous energy term that is incorporated into a level-set functional. We demonstrate the proposed method on MR scans of uterine fibroids acquired prior to focused ultrasound ablation treatment. The experiments show that with a minimal user input, automatic segmentation results become practically identical to manual expert segmentation. --- paper_title: Image segmentation with a bounding box prior paper_content: User-provided object bounding box is a simple and popular interaction paradigm considered by many existing interactive image segmentation frameworks. However, these frameworks tend to exploit the provided bounding box merely to exclude its exterior from consideration and sometimes to initialize the energy minimization. In this paper, we discuss how the bounding box can be further used to impose a powerful topological prior, which prevents the solution from excessive shrinking and ensures that the user-provided box bounds the segmentation in a sufficiently tight way. The prior is expressed using hard constraints incorporated into the global energy minimization framework leading to an NP-hard integer program. We then investigate the possible optimization strategies including linear relaxation as well as a new graph cut algorithm called pinpointing. 
The latter can be used either as a rounding method for the fractional LP solution, which is provably better than thresholding-based rounding, or as a fast standalone heuristic. We evaluate the proposed algorithms on a publicly available dataset, and demonstrate the practical benefits of the new prior both qualitatively and quantitatively. --- paper_title: A probabilistic level set formulation for interactive organ segmentation paper_content: Level set methods have become increasingly popular as a framework for image segmentation. Yet when used as a generic segmentation tool, they suffer from an important drawback: Current formulations do not allow much user interaction. Upon initialization, boundaries propagate to the final segmentation without the user being able to guide or correct the segmentation. In the present work, we address this limitation by proposing a probabilistic framework for image segmentation which integrates input intensity information and user interaction on equal footings. The resulting algorithm determines the most likely segmentation given the input image and the user input. In order to allow a user interaction in real-time during the segmentation, the algorithm is implemented on a graphics card and in a narrow band formulation. --- paper_title: Geodesic active contours paper_content: A novel scheme for the detection of object boundaries is presented. The technique is based on active contours deforming according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach for object segmentation allows us to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved as shown by a number of examples. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. --- paper_title: Snakes: Active contour models paper_content: A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest. --- paper_title: Ultrasound image segmentation: a survey paper_content: This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information.
We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem --- paper_title: Deformable-model based textured object segmentation paper_content: In this paper, we present a deformable-model based solution for segmenting objects with complex texture patterns of all scales. The external image forces in traditional deformable models come primarily from edges or gradient information and it becomes problematic when the object surfaces have complex large-scale texture patterns that generate many local edges within a same region. We introduce a new textured object segmentation algorithm that has both the robustness of model-based approaches and the ability to deal with non-uniform textures of both small and large scales. The main contributions include an information-theoretical approach for computing the natural scale of a “texon” based on model-interior texture, a nonparametric texture statistics comparison technique and the determination of object belongingness through belief propagation. Another important property of the proposed algorithm is in that the texture statistics of an object of interest are learned online from evolving model interiors, requiring no other a priori information. We demonstrate the potential of this model-based framework for texture learning and segmentation using both natural and medical images with various textures of all scales and patterns. --- paper_title: STACS: new active contour scheme for cardiac MR image segmentation paper_content: The paper presents a novel stochastic active contour scheme (STACS) for automatic image segmentation designed to overcome some of the unique challenges in cardiac MR images such as problems with low contrast, papillary muscles, and turbulent blood flow. STACS minimizes an energy functional that combines stochastic region-based and edge-based information with shape priors of the heart and local properties of the contour. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. STACS includes an annealing schedule that balances dynamically the weight of the different terms in the energy functional. Three particularly attractive features of STACS are: 1) ability to segment images with low texture contrast by modeling stochastically the image textures; 2) robustness to initial contour and noise because of the utilization of both edge and region-based information; 3) ability to segment the heart from the chest wall and the undesired papillary muscles due to inclusion of heart shape priors. Application of STACS to a set of 48 real cardiac MR images shows that it can successfully segment the heart from its surroundings such as the chest wall and the heart structures (the left and right ventricles and the epicardium.) We compare STACS' automatically generated contours with manually-traced contours, or the "gold standard," using both area and edge similarity measures. This assessment demonstrates very good and consistent segmentation performance of STACS. --- paper_title: Image Segmentation Based on GrabCut Framework Integrating Multiscale Nonlinear Structure Tensor paper_content: In this paper, we propose an interactive color natural image segmentation method. The method integrates color feature with multiscale nonlinear structure tensor texture (MSNST) feature and then uses GrabCut method to obtain the segmentations. 
The MSNST feature is used to describe the texture feature of an image and integrated into GrabCut framework to overcome the problem of the scale difference of textured images. In addition, we extend the Gaussian Mixture Model (GMM) to MSNST feature and GMM based on MSNST is constructed to describe the energy function so that the texture feature can be suitably integrated into GrabCut framework and fused with the color feature to achieve the more superior image segmentation performance than the original GrabCut method. For easier implementation and more efficient computation, the symmetric KL divergence is chosen to produce the estimates of the tensor statistics instead of the Riemannian structure of the space of tensor. The Conjugate norm was employed using Locality Preserving Projections (LPP) technique as the distance measure in the color space for more discriminating power. An adaptive fusing strategy is presented to effectively adjust the mixing factor so that the color and MSNST texture features are efficiently integrated to achieve more robust segmentation performance. Last, an iteration convergence criterion is proposed to reduce the time of the iteration of GrabCut algorithm dramatically with satisfied segmentation accuracy. Experiments using synthesis texture images and real natural scene images demonstrate the superior performance of our proposed method. --- paper_title: Description of Interest Regions with Local Binary Patterns paper_content: This paper presents a novel method for interest region description. We adopted the idea that the appearance of an interest region can be well characterized by the distribution of its local features. The most well-known descriptor built on this idea is the SIFT descriptor that uses gradient as the local feature. Thus far, existing texture features are not widely utilized in the context of region description. In this paper, we introduce a new texture feature called center-symmetric local binary pattern (CS-LBP) that is a modified version of the well-known local binary pattern (LBP) feature. To combine the strengths of the SIFT and LBP, we use the CS-LBP as the local feature in the SIFT algorithm. The resulting descriptor is called the CS-LBP descriptor. In the matching and object category classification experiments, our descriptor performs favorably compared to the SIFT. Furthermore, the CS-LBP descriptor is computationally simpler than the SIFT. --- paper_title: Multidimensional orientation estimation with applications to texture analysis and optical flow paper_content: The problem of detection of orientation in finite dimensional Euclidean spaces is solved in the least squares sense. The theory is developed for the case when such orientation computations are necessary at all local neighborhoods of the n-dimensional Euclidean space. Detection of orientation is shown to correspond to fitting an axis or a plane to the Fourier transform of an n-dimensional structure. The solution of this problem is related to the solution of a well-known matrix eigenvalue problem. The computations can be performed in the spatial domain without actually doing a Fourier transformation. Along with the orientation estimate, a certainty measure, based on the error of the fit, is proposed. Two applications in image analysis are considered: texture segmentation and optical flow. The theory is verified by experiments which confirm accurate orientation estimates and reliable certainty measures in the presence of noise. 
The comparative results indicate that the theory produces algorithms computing robust texture features as well as optical flow. --- paper_title: Lesion Border Detection in Dermoscopy Images paper_content: BACKGROUND: Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders. METHODS: In this article, we present a systematic overview of the recent border detection methods in the literature paying particular attention to computational issues and evaluation aspects. CONCLUSION: Common problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge, therefore it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially in sets of images with a variety of diagnoses. --- paper_title: A Graph Cut Approach to Image Segmentation in Tensor Space paper_content: This paper proposes a novel method to apply the standard graph cut technique to segmenting multimodal tensor valued images. The Riemannian nature of the tensor space is explicitly taken into account by first mapping the data to a Euclidean space where non-parametric kernel density estimates of the regional distributions may be calculated from user initialized regions. These distributions are then used as regional priors in calculating graph edge weights. Hence this approach utilizes the true variation of the tensor data by respecting its Riemannian structure in calculating distances when forming probability distributions. Further, the non-parametric model generalizes to arbitrary tensor distributions, unlike the Gaussian assumption made in previous works. Casting the segmentation problem in a graph cut framework yields a segmentation robust with respect to initialization on the data tested. --- paper_title: An affine invariant tensor dissimilarity measure and its applications to tensor-valued image segmentation paper_content: Tensor fields, specifically matrix valued data sets, have recently attracted increased attention in the fields of image processing, computer vision, visualization and medical imaging. In this paper, we present a novel definition of tensor "distance" grounded in concepts from information theory and incorporate it in the segmentation of tensor-valued images. In some applications, a symmetric positive definite (SPD) tensor at each point of a tensor valued image can be interpreted as the covariance matrix of a local Gaussian distribution. Thus, a natural measure of dissimilarity between SPD tensors would be the KL divergence or its relative. We propose the square root of the J-divergence (symmetrized KL) between two Gaussian distributions corresponding to the tensors being compared that leads to a novel closed form expression. Unlike the traditional Frobenius norm-based tensor distance, our "distance" is affine invariant, a desirable property in many applications.
We then incorporate this new tensor "distance" in a region based active contour model for bimodal tensor field segmentation and show its application to the segmentation of diffusion tensor magnetic resonance images (DT-MRI) as well as for the texture segmentation problem in computer vision. Synthetic and real data experiments are shown to depict the performance of the proposed model. --- paper_title: Discriminative learned dictionaries for local image analysis paper_content: Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well. --- paper_title: Segmentation of 3D Probability Density Fields by Surface Evolution: Application to Diffusion MRI paper_content: We propose an original approach for the segmentation of three-dimensional fields of probability density functions. This presents a wide range of applications in medical images processing, in particular for diffusion magnetic resonance imaging where each voxel is assigned with a function describing the average motion of water molecules. Being able to automatically extract relevant anatomical structures of the white matter, such as the corpus callosum, would dramatically improve our current knowledge of the cerebral connectivity as well as allow for their statistical analysis. Our approach relies on the use of the symmetrized Kullback-Leibler distance and on the modelization of its distribution over the subsets of interest in the volume. The variational formulation of the problem yields a level-set evolution converging toward the optimal segmentation. --- paper_title: Variational Image Segmentation for Endoscopic Human Colonic Aberrant Crypt Foci paper_content: The aim of this paper is to introduce a variational image segmentation method for assessing the aberrant crypt foci (ACF) in the human colon captured in vivo by endoscopy. ACF are thought to be precursors for colorectal cancer, and therefore their early detection may play an important clinical role. We enhance the active contours without edges model of Chan and Vese to account for the ACF's particular structure. We employ level sets to represent the segmentation boundaries and discretize in space by finite elements and in (artificial) time by finite differences. The approach is able to identify the ACF, their boundaries, and some of the internal crypts' orifices. --- paper_title: A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model paper_content: We propose a new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations. The proposed method is also a generalization of an active contour model without edges based 2-phase segmentation, developed by the authors earlier in T. Chan and L. Vese (1999. In Scale-Space'99, M. 
Nielsen et al. (Eds.), LNCS, vol. 1682, pp. 141–151) and T. Chan and L. Vese (2001. IEEE-IP, 10(2):266–277). The multiphase level set formulation is new and of interest on its own: by construction, it automatically avoids the problems of vacuum and overlap; it needs only log n level set functions for n phases in the piecewise constant case; it can represent boundaries with complex topologies, including triple junctions; in the piecewise smooth case, only two level set functions formally suffice to represent any partition, based on The Four-Color Theorem. Finally, we validate the proposed models by numerical results for signal and image denoising and segmentation, implemented using the Osher and Sethian level set method. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Active appearance models paper_content: We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors. --- paper_title: Surf: Speeded up robust features paper_content: In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance. --- paper_title: Liver Vessels Segmentation Using a Hybrid Geometrical Moments/Graph Cuts Method paper_content: This paper describes a fast and fully automatic method for liver vessel segmentation on computerized tomography scan preoperative images. The basis of this method is the introduction of a 3-D geometrical moment-based detector of cylindrical shapes within the minimum-cut/maximum-flow energy minimization framework. This method represents an original way to introduce a data term as a constraint into the widely used Boykov's graph cuts algorithm, and hence, to automate the segmentation. The method is evaluated and compared with others on a synthetic dataset. Finally, the relevancy of our method regarding the planning of a necessarily accurate percutaneous high-intensity focused ultrasound surgical operation is demonstrated with some examples.
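The two-phase piecewise-constant model referenced in the multiphase level set entry above admits a compact illustration. The following Python/NumPy sketch is only an assumed, simplified rendering and not any cited author's implementation: the circular initialization, step size, and the Gaussian smoothing used as a crude stand-in for the length (curvature) regularization term are all illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_filter

def two_phase_piecewise_constant(img, n_iter=200, dt=0.5, eps=1.0, smooth_sigma=1.0):
    # Evolve a level set phi so the image is approximated by two constants:
    # c1 inside (phi > 0) and c2 outside (phi <= 0).
    img = img.astype(float)
    rows, cols = np.indices(img.shape)
    cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
    phi = min(img.shape) / 4.0 - np.sqrt((rows - cy) ** 2 + (cols - cx) ** 2)
    for _ in range(n_iter):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # Smoothed Dirac delta concentrates the update near the zero level set.
        delta = (eps / np.pi) / (eps ** 2 + phi ** 2)
        # Data term of the piecewise-constant energy (unit fitting weights).
        phi = phi + dt * delta * (-(img - c1) ** 2 + (img - c2) ** 2)
        # Illustrative stand-in for the length/curvature regularization.
        phi = gaussian_filter(phi, smooth_sigma)
    return phi > 0  # binary segmentation mask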
--- paper_title: "GrabCut": interactive foreground extraction using iterated graph cuts paper_content: The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for "border matting" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools. --- paper_title: Decision Forests with Spatio-Temporal Features for Graph-Based Tumor Segmentation in 4D Lung CT paper_content: We propose an automatic lung tumor segmentation in dynamic CT images that incorporates the novel use of tumor tissue deformations. In contrast to elastography imaging techniques for measuring tumor tissue properties, which require mechanical compression and thereby interrupt normal breathing, we completely avoid the use of any external physical forces. Instead, we calculate the tissue deformations during normal respiration using deformable registration. We investigate machine learning methods in order to discover the spatio-temporal dynamics that would help distinguish tumor from normal tissue deformation patterns and integrate this information into the segmentation process. Our method adapts an ensemble of decision trees combined with a 3D graph-based optimization that takes into account spatio-temporal consistency. The experimental results on patients with large tumors achieved an average F-measure accuracy of 0.79. --- paper_title: Abdominal organ segmentation using texture transforms and a Hopfield neural network paper_content: Abdominal organ segmentation is highly desirable but difficult, due to large differences between patients and to overlapping grey-scale values of the various tissue types. The first step in automating this process is to cluster together the pixels within each organ or tissue type. The authors propose to form images based on second-order statistical texture transforms (Haralick transforms) of a CT or MRI scan. The original scan plus the suite of texture transforms are then input into a Hopfield neural network (HNN). The network is constructed to solve an optimization problem, where the best solution is the minima of a Lyapunov energy function. On a sample abdominal CT scan, this process successfully clustered 79-100% of the pixels of seven abdominal organs. It is envisioned that this is the first step to automate segmentation. Active contouring (e.g., SNAKE's) or a back-propagation neural network can then be used to assign names to the clusters and fill in the incorrectly clustered pixels. --- paper_title: Histograms of oriented gradients for human detection paper_content: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. 
After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. --- paper_title: Classification of tumor histopathology via sparse feature learning paper_content: Our goal is to decompose whole slide images (WSI) of histology sections into distinct patches (e.g., viable tumor, necrosis) so that statistics of distinct histopathology can be linked with the outcome. Such an analysis requires a large cohort of histology sections that may originate from different laboratories, which may not use the same protocol in sample preparation. We have evaluated a method based on a variation of the restricted Boltzmann machine (RBM) that learns intrinsic features of the image signature in an unsupervised fashion. Computed code, from the learned representation, is then utilized to classify patches from a curated library of images. The system has been evaluated against a dataset of small image blocks of 1k-by-1k that have been extracted from glioblastoma multiforme (GBM) and clear cell kidney carcinoma (KIRC) from the cancer genome atlas (TCGA) archive. The learned model is then projected on each whole slide image (e.g., of size 20k-by-20k pixels or larger) for characterizing and visualizing tumor architecture. In the case of GBM, each WSI is decomposed into necrotic, transition into necrosis, and viable. In the case of the KIRC, each WSI is decomposed into tumor types, stroma, normal, and others. Evaluation of 1400 and 2500 samples of GBM and KIRC indicates a performance of 84% and 81%, respectively. --- paper_title: Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope paper_content: In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category. --- paper_title: Interactive Texture Segmentation using Random Forests and Total Variation paper_content: Hypothesis The segmentation quality depends on a strong description for F and B. 
In order to model hypotheses based on different high-level features, we need an efficient learning algorithm capable of handling arbitrary input data. Random Forests (RFs) are fast to compute while yielding state-of-the-art performance in machine learning and vision problems. Their parallel structure dedicates them to GPU implementations. Recently, an online version of RFs has been proposed [2], which renders retraining of the whole forest upon additional user input unnecessary. --- paper_title: DT-MRI segmentation using graph cuts paper_content: An important problem in medical image analysis is the segmentation of anatomical regions of interest. Once regions of interest are segmented, one can extract shape, appearance, and structural features that can be analyzed for disease diagnosis or treatment evaluation. Diffusion tensor magnetic resonance imaging (DT-MRI) is a relatively new medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. In this paper, we extend the interactive multidimensional graph cuts segmentation technique to operate on DT-MRI data by utilizing latest advances in tensor calculus and diffusion tensor dissimilarity metrics. The user interactively selects certain tensors as object ("obj") or background ("bkg") to provide hard constraints for the segmentation. Additional soft constraints incorporate information about both regional tissue diffusion as well as boundaries between tissues of different diffusion properties. Graph cuts are used to find globally optimal segmentation of the underlying 3D DT-MR image among all segmentations satisfying the constraints. We develop a graph structure from the underlying DT-MR image with the tensor voxels corresponding to the graph vertices and with graph edge weights computed using either Log-Euclidean or the J-divergence tensor dissimilarity metric. The topology of our segmentation is unrestricted and both obj and bkg segments may consist of several isolated parts. We test our method on synthetic DT data and apply it to real 2D and 3D MRI, providing segmentations of the corpus callosum in the brain and the ventricles of the heart. --- paper_title: A performance evaluation of local descriptors paper_content: In this paper we compare the performance of interest point descriptors. Many different descriptors have been proposed in the literature. However, it is unclear which descriptors are more appropriate and how their performance depends on the interest point detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the point detector. Our evaluation uses as criterion detection rate with respect to false positive rate and is carried out for different image transformations. We compare SIFT descriptors (Lowe, 1999), steerable filters (Freeman and Adelson, 1991), differential invariants (Koenderink and van Doorn, 1987), complex filters (Schaffalitzky and Zisserman, 2002), moment invariants (Van Gool et al., 1996) and cross-correlation for different types of interest points. In this evaluation, we observe that the ranking of the descriptors does not depend on the point detector and that SIFT descriptors perform best. Steerable filters come second; they can be considered a good choice given the low dimensionality.
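Several of the descriptors compared above (SIFT, HOG) are built on local histograms of gradient orientations. The sketch below is a toy, single-scale illustration of that idea in Python/NumPy; the cell size, bin count and per-cell normalization are assumptions and deliberately omit the block normalization and interpolation used by the published descriptors.

import numpy as np

def hog_cells(img, cell=8, bins=9):
    # One 'bins'-dimensional orientation histogram per non-overlapping
    # cell of 'cell' x 'cell' pixels, using unsigned gradient directions.
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    ny, nx = h // cell, w // cell
    hist = np.zeros((ny, nx, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ny):
        for j in range(nx):
            sl = (slice(i * cell, (i + 1) * cell), slice(j * cell, (j + 1) * cell))
            # Magnitude-weighted vote into the orientation bins of this cell.
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(),
                                     minlength=bins)[:bins]
    # Simple L2 normalization per cell (real HOG normalizes over blocks).
    norm = np.linalg.norm(hist, axis=2, keepdims=True) + 1e-6
    return hist / norm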
--- paper_title: Kinetic Modeling Based Probabilistic Segmentation for Molecular Images paper_content: We propose a semi-supervised, kinetic modeling based segmentation technique for molecular imaging applications. It is an iterative, self-learning algorithm based on uncertainty principles, designed to alleviate low signal-to-noise ratio (SNR) and partial volume effect (PVE) problems. Synthetic fluorodeoxyglucose (FDG) and simulated Raclopride dynamic positron emission tomography (dPET) brain images with excessive noise levels are used to validate our algorithm. We show, qualitatively and quantitatively, that our algorithm outperforms state-of-the-art techniques in identifying different functional regions and recovering the kinetic parameters. --- paper_title: Co-Sparse Textural Similarity for Interactive Segmentation paper_content: We propose an algorithm for segmenting natural images based on texture and color information, which leverages the co-sparse analysis model for image segmentation. As a key ingredient of this method, we introduce a novel textural similarity measure, which builds upon the co-sparse representation of image patches. We propose a statistical MAP inference approach to merge textural similarity with information about color and location. Combined with recently developed convex multilabel optimization methods this leads to an efficient algorithm for interactive segmentation, which is easily parallelized on graphics hardware. The provided approach outperforms state-of-the-art interactive segmentation methods on the Graz Benchmark. --- paper_title: Image Classification using Random Forests and Ferns paper_content: We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256. --- paper_title: Vessel scale-selection using MRF optimization paper_content: Many feature detection algorithms rely on the choice of scale. In this paper, we complement standard scale-selection algorithms with spatial regularization. To this end, we formulate scale-selection as a graph labeling problem and employ Markov random field multi-label optimization. We focus on detecting the scales of vascular structures in medical images. We compare the detected vessel scales using our method to those obtained using the selection approach of the well-known vesselness filter (Frangi et al 1998). 
We propose and discuss two different approaches for evaluating the goodness of scale-selection. Our results on 40 images from the Digital Retinal Images for Vessel Extraction (DRIVE) database show an average reduction in these error measurements by more than 15%. --- paper_title: Active unsupervised texture segmentation on a diffusion based feature space paper_content: We propose a novel and efficient approach for active unsupervised texture segmentation. First, we show how we can extract a small set of good features for texture segmentation based on the structure tensor and nonlinear diffusion. Then, we propose a variational framework that incorporates these features in a level set based unsupervised segmentation process that adaptively takes into account their estimated statistical information inside and outside the region to segment. The approach has been tested on various textured images, and its performance is favorably compared to recent studies. --- paper_title: A Statistical Overlap Prior for Variational Image Segmentation paper_content: This study investigates variational image segmentation with an original data term, referred to as statistical overlap prior, which measures the conformity of overlap between the nonparametric distributions of image data within the segmentation regions to a learned statistical description. This leads to image segmentation and distribution tracking algorithms that relax the assumption of minimal overlap and, as such, are more widely applicable than existing algorithms. We propose to minimize active curve functionals containing the proposed overlap prior, compute the corresponding Euler-Lagrange curve evolution equations, and give an interpretation of how the overlap prior controls such evolution. We model the overlap, measured via the Bhattacharyya coefficient, with a Gaussian prior whose parameters are estimated from a set of relevant training images. Quantitative and comparative performance evaluations of the proposed algorithms over several experiments demonstrate the positive effects of the overlap prior in regard to segmentation accuracy and convergence speed. --- paper_title: A variational method for vessels segmentation: algorithm and application to liver vessels visualization paper_content: We present a new variational-based method for automatic liver vessels segmentation from abdominal CTA images. The segmentation task is formulated as a functional minimization problem within a variational framework. We introduce a new functional that incorporates both geometrical vesselness measure and vessels surface properties. The functional describes the distance between the desired segmentation and the original image. To minimize the functional, we derive the Euler-Lagrange equation from it and solve it using the conjugate gradients algorithm. Our approach is automatic and improves upon other Hessian-based methods in the detection of bifurcations and complex vessels structures by incorporating a surface term into the functional. To assess our method, we conducted with an expert radiologist two comparative studies on 8 abdominal CTA clinical datasets. In the first study, the radiologist assessed the presence of 11 vascular bifurcations on each dataset, totaling of 73 bifurcations. The radiologist qualitatively compared the bifurcations segmentation of our method and that of a Hessian-based threshold method. 
Our method correctly segmented 88% of the bifurcations with a higher visibility score of 82%, as compared to only 55% in the Hessian-based method with a visibility score of 33%. In the second study, the radiologist assessed the individual vessels visibility on the 3D segmentation images and on the original CTA slices. Ten main liver vessels were examined in each dataset. The overall visibility score was 93%. These results indicate that our method is suitable for the automatic segmentation and visualization of the liver vessels. --- paper_title: Active Contours without Edges for Vector-Valued Images paper_content: In this paper, we propose an active contour algorithm for object detection in vector-valued images (such as RGB or multispectral). The model is an extension of the scalar Chan-Vese algorithm to the vector-valued case [1]. The model minimizes a Mumford-Shah functional over the length of the contour, plus the sum of the fitting error over each component of the vector-valued image. Like the Chan-Vese model, our vector-valued model can detect edges both with or without gradient. We show examples where our model detects vector-valued objects which are undetectable in any scalar representation. For instance, objects with different missing parts in different channels are completely detected (such as occlusion). Also, in color images, objects which are invisible in each channel or in intensity can be detected by our algorithm. Finally, the model is robust with respect to noise, requiring no a priori denoising step. --- paper_title: A fast local descriptor for dense matching paper_content: We introduce a novel local image descriptor designed for dense wide-baseline matching purposes. We feed our descriptors to a graph-cuts based dense depth map estimation algorithm and this yields better wide-baseline performance than the commonly used correlation windows for which the size is hard to tune. As a result, unlike competing techniques that require many high-resolution images to produce good reconstructions, our descriptor can compute them from pairs of low-quality images such as the ones captured by video streams. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance. Our approach was tested with ground truth laser scanned depth maps as well as on a wide variety of image pairs of different resolutions and we show that good reconstructions are achieved even with only two low quality images. --- paper_title: A tensor-based algorithm for high-order graph matching paper_content: This paper addresses the problem of establishing correspondences between two sets of visual features using higher-order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multi-dimensional power method, and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.
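The Hessian-based "vesselness" measure of Frangi et al. (1998), which several of the vessel segmentation entries above build on or compare against, can be sketched at a single scale as follows. This Python/NumPy sketch assumes a 2D grayscale image with bright vessels on a dark background; sigma, beta and c are illustrative parameter choices, not values taken from any cited paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5, c=15.0):
    img = img.astype(float)
    # Scale-normalized second-order Gaussian derivatives (Hessian entries).
    hrr = gaussian_filter(img, sigma, order=(2, 0)) * sigma ** 2
    hcc = gaussian_filter(img, sigma, order=(0, 2)) * sigma ** 2
    hrc = gaussian_filter(img, sigma, order=(1, 1)) * sigma ** 2
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at each pixel.
    tmp = np.sqrt((hrr - hcc) ** 2 + 4.0 * hrc ** 2)
    la = 0.5 * (hrr + hcc + tmp)
    lb = 0.5 * (hrr + hcc - tmp)
    # Order so that |l1| <= |l2|; l2 carries the cross-sectional curvature.
    swap = np.abs(la) > np.abs(lb)
    l1 = np.where(swap, lb, la)
    l2 = np.where(swap, la, lb)
    rb2 = l1 ** 2 / (l2 ** 2 + 1e-12)   # blob-vs-line ratio, squared
    s2 = l1 ** 2 + l2 ** 2              # second-order "structureness"
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                     # suppress dark-on-bright responses
    return v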
--- paper_title: Level set based segmentation with intensity and curvature priors paper_content: A method is presented for segmentation of anatomical structures that incorporates prior information about the intensity and curvature profile of the structure from a training set of images and boundaries. Specifically, we model the intensity distribution as a function of signed distance from the object boundary, instead of modeling only the intensity of the object as a whole. A curvature profile acts as a boundary regularization term specific to the shape being extracted, as opposed to simply penalizing high curvature. Using the prior model, the segmentation process estimates a maximum a posteriori higher dimensional surface whose zero level set converges on the boundary of the object to be segmented. Segmentation results are demonstrated on synthetic data and magnetic resonance imagery. --- paper_title: Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation paper_content: We consider a class of region-based energies for image segmentation and inpainting which combine region integrals with curvature regularity of the region boundary. To minimize such energies, we formulate an integer linear program which jointly estimates regions and their boundaries. Curvature regularity is imposed by respective costs on pairs of adjacent boundary segments. --- paper_title: Minimizing Sparse High-Order Energies by Submodular Vertex-cover paper_content: Inference in high-order graphical models has become important in recent years. Several approaches are based, for example, on generalized message-passing, or on transformation to a pairwise model with extra 'auxiliary' variables. We focus on a special case where a much more efficient transformation is possible. Instead of adding variables, we transform the original problem into a comparatively small instance of submodular vertex-cover. These vertex-cover instances can then be attacked by existing algorithms (e.g. belief propagation, QPBO), where they often run 4-15 times faster and find better solutions than when applied to the original problem. We evaluate our approach on synthetic data, then we show applications within a fast hierarchical clustering and model-fitting framework. --- paper_title: Fast global optimization of curvature paper_content: Two challenges in computer vision are to accommodate noisy data and missing data. Many problems in computer vision, such as segmentation, filtering, stereo, reconstruction, inpainting and optical flow seek solutions that match the data while satisfying an additional regularization, such as total variation or boundary length. A regularization which has received less attention is to minimize the curvature of the solution. One reason why this regularization has received less attention is due to the difficulty in finding an optimal solution to this image model, since many existing methods are complicated, slow and/or provide a suboptimal solution. Following the recent progress of Schoenemann et al. [28], we provide a simple formulation of curvature regularization which admits a fast optimization which gives globally optimal solutions in practice. We demonstrate the effectiveness of this method by applying this curvature regularization to image segmentation. --- paper_title: Curvature Regularization for Curves and Surfaces in a Global Optimization Framework paper_content: Length and area regularization are commonplace for inverse problems today. 
It has however turned out to be much more difficult to incorporate a curvature prior. In this paper we propose several improvements to a recently proposed framework based on global optimization. We identify and solve an issue with extraneous arcs in the original formulation by introducing region consistency constraints. The mesh geometry is analyzed both from a theoretical and experimental viewpoint and hexagonal meshes are shown to be superior. We demonstrate that adaptively generated meshes significantly improve the performance. Our final contribution is that we generalize the framework to handle mean curvature regularization for 3D surface completion and segmentation. --- paper_title: Snakes: Active contour models paper_content: A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest. --- paper_title: Shortest Paths with Curvature and Torsion paper_content: This paper describes a method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our globally optimal method uses line graphs and its runtime is polynomial in the size of the discretization, often in the order of seconds on a single computer. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have almost one hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of regularization based on curvature while torsion is still only tractable for small-scale problems. --- paper_title: Interactive image segmentation via minimization of quadratic energies on directed graphs paper_content: We propose a scheme to introduce directionality in the random walker algorithm for image segmentation. In particular, we extend the optimization framework of this algorithm to combinatorial graphs with directed edges. Our scheme is interactive and requires the user to label a few pixels that are representative of a foreground object and of the background. These labeled pixels are used to learn intensity models for the object and the background, which allow us to automatically set the weights of the directed edges. These weights are chosen so that they bias the direction of the object boundary gradients to flow from regions that agree well with the learned object intensity model to regions that do not agree well. We use these weights to define an energy function that associates asymmetric quadratic penalties with the edges in the graph. We show that this energy function is convex, hence it has a unique minimizer. We propose a provably convergent iterative algorithm for minimizing this energy function. We also describe the construction of an equivalent electrical network with diodes and resistors that solves the same segmentation problem as our framework.
Finally, our experiments on a database of 69 images show that the use of directional information does improve the segmenting power of the random Walker algorithm. --- paper_title: An Adaptive Two-Stage Edge Detection Scheme for Digital Color Images paper_content: An adaptive two-stage edge detection scheme for digital color images is proposed in this paper. In the first stage of this scheme, each three-dimensional color image is reduced to a one-dimensional gray-level image using the moment-preserving thresholding technique. Then, a new edge detection technique based on the block truncation coding scheme is introduced to detect the edge boundary in the second stage. The edge detection process makes use of the bit plane information of each BTC-encoded block to detect the edge boundary. The experimental results show that the performance of the detected edge image of the proposed scheme is as good as in Yang's scheme and in the Sobel operator. However, the computational cost consumed by the proposed scheme is less than that of Yang's scheme. In addition, the proposed scheme provides an adaptive edge quality decision mechanism. This mechanism can provide different edge images to meet various applications and the subjective evaluation. Moreover, this scheme locates the edge boundaries to the sub-pixel accuracy, which is an advantage to applications such as data hiding and image watermarking. --- paper_title: Detecting Structure in Diffusion Tensor MR Images paper_content: We derive herein first and second-order differential operators for detecting structure in diffusion tensor MRI (DTI). Unlike existing methods, we are able to generate full first and second-order differentials without dimensionality reduction and while respecting the underlying manifold of the data. Further, we extend corner and curvature feature detectors to DTI using our differential operators. Results using the feature detectors on diffusion tensor MR images show the ability to highlight structure within the image that existing methods cannot. --- paper_title: Geodesic Active Regions and Level Set Methods for Supervised Texture Segmentation paper_content: This paper presents a novel variational framework to deal with frame partition problems in Computer Vision. This framework exploits boundary and region-based segmentation modules under a curve-based optimization objective function. The task of supervised texture segmentation is considered to demonstrate the potentials of the proposed framework. The textured feature space is generated by filtering the given textured images using isotropic and anisotropic filters, and analyzing their responses as multi-component conditional probability density functions. The texture segmentation is obtained by unifying region and boundary-based information as an improved Geodesic Active Contour Model. The defined objective function is minimized using a gradient-descent method where a level set approach is used to implement the obtained PDE. According to this PDE, the curve propagation towards the final solution is guided by boundary and region-based segmentation forces, and is constrained by a regularity force. The level set implementation is performed using a fast front propagation algorithm where topological changes are naturally handled. The performance of our method is demonstrated on a variety of synthetic and real textured frames. 
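The random walker formulation that recurs in the entries above and below (Grady's multilabel method and its directed-graph extension) reduces to solving a sparse linear system. The sketch below is an assumed minimal two-label, undirected version on a 4-connected lattice written in Python/SciPy; the weighting parameter beta and the seed interface are illustrative, and the directed variant described above would replace the symmetric edge weights with asymmetric ones.

import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def random_walker_two_label(img, fg_seeds, bg_seeds, beta=90.0):
    # fg_seeds / bg_seeds: non-empty, disjoint lists of (row, col) coordinates.
    img = img.astype(float)
    img = (img - img.min()) / (img.ptp() + 1e-12)
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    flat = img.ravel()

    # Gaussian edge weights between 4-connected neighbours.
    a = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    b = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wgt = np.exp(-beta * (flat[a] - flat[b]) ** 2) + 1e-6
    W = coo_matrix((np.concatenate([wgt, wgt]),
                    (np.concatenate([a, b]), np.concatenate([b, a]))),
                   shape=(n, n)).tocsr()
    L = (diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()  # graph Laplacian

    seeds = np.array([idx[r, c] for r, c in fg_seeds] +
                     [idx[r, c] for r, c in bg_seeds])
    m = np.array([1.0] * len(fg_seeds) + [0.0] * len(bg_seeds))  # boundary values
    free = np.setdiff1d(np.arange(n), seeds)

    # Combinatorial Dirichlet problem for the unseeded nodes: L_U x = -B m.
    L_u = L[free][:, free].tocsc()
    B = L[free][:, seeds]
    x = spsolve(L_u, -B.dot(m))

    prob = np.empty(n)
    prob[seeds] = m
    prob[free] = x
    return prob.reshape(h, w)  # P(walker first reaches a foreground seed)

Thresholding the returned probability map at 0.5 gives the binary foreground/background labeling described in the cited random walker work.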
--- paper_title: Color lesion boundary detection using live wire paper_content: The boundaries of oral lesions in color images were detected using a live-wire method and compared to expert delineations. Multiple cost terms were analyzed for their inclusion in the final total cost function including color gradient magnitude, color gradient direction, Canny edge detection, and Laplacian zero crossing. The gradient magnitude and direction cost terms were implemented so that they acted directly on the three components of the color image, instead of using a single derived color band. The live-wire program was shown to be considerably more accurate and faster compared to manual segmentations by untrained users. --- paper_title: Quaternion color curvature paper_content: In this paper we propose a novel approach to measuring curvature in color or vector-valued images (up to 4-dimensions) based on quaternion singular value decomposition of a Hessian matrix. This approach generalizes the existing scalar-image curvature approach which makes use of the eigenvalues of the Hessian matrix [1]. In the case of vector-valued images, the Hessian is no longer a 2D matrix but rather a rank 3 tensor. We use quaternion curvature to derive vesselness measure for tubular structures in color or vector-valued images by extending Frangi’s [1] vesselness measure for scalar images. Experimental results show the effectiveness of quaternion color curvature in generating a vesselness map. --- paper_title: Normalized cuts and image segmentation paper_content: We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging. --- paper_title: A Computational Approach to Edge Detection paper_content: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. 
We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge. --- paper_title: Graph Cuts and Efficient N-D Image Segmentation paper_content: Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focuses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications. --- paper_title: Geodesic active contours paper_content: A novel scheme for the detection of object boundaries is presented. The technique is based on active contours deforming according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach for object segmentation allows us to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, as shown by a number of examples. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. --- paper_title: Random Walks for Image Segmentation paper_content: A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained.
Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs. --- paper_title: Exact optimization for Markov random fields with convex priors paper_content: We introduce a method to solve exactly a first-order Markov random field optimization problem in more generality than was previously possible. The MRF has a prior term that is convex in terms of a linearly ordered label set. The method maps the problem into a minimum-cut problem for a directed graph, for which a globally optimal solution can be found in polynomial time. The convexity of the prior function in the energy is shown to be necessary and sufficient for the applicability of the method. --- paper_title: Segmentation of Intra-Retinal Layers From Optical Coherence Tomography Images Using an Active Contour Approach paper_content: Optical coherence tomography (OCT) is a noninvasive, depth-resolved imaging modality that has become a prominent ophthalmic diagnostic technique. We present a semi-automated segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration. We adapt Chan-Vese's energy-minimizing active contours without edges for the OCT images, which suffer from low contrast and are highly corrupted by noise. A multiphase framework with a circular shape prior is adopted in order to model the boundaries of retinal layers and estimate the shape parameters using least squares. We use a contextual scheme to balance the weight of different terms in the energy functional. The results from various synthetic experiments and segmentation results on OCT images of rats are presented, demonstrating the strength of our method to detect the desired retinal layers with sufficient accuracy even in the presence of intensity inhomogeneity resulting from blood vessels. Our algorithm achieved an average Dice similarity coefficient of 0.84 over all segmented retinal layers, and of 0.94 for the combined nerve fiber layer, ganglion cell layer, and inner plexiform layer, which are the critical layers for glaucomatous degeneration. --- paper_title: Fast approximate energy minimization via graph cuts paper_content: In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum.
We experimentally demonstrate the effectiveness of our approach on image restoration, stereo, and motion. --- paper_title: Multiregion competition: A level set extension of region competition to multiple region image partitioning paper_content: The purpose of this study is to investigate a new representation of a partition of an image domain into a fixed but arbitrary number of regions by explicit correspondence between the regions of segmentation and the regions defined by simple closed planar curves and their intersections, and the use of this representation in the context of region competition to provide a level set multiregion competition algorithm. This formulation leads to a system of coupled curve evolution equations which is easily amenable to a level set implementation, and the computed solution is one that minimizes the stated functional. An unambiguous segmentation is guaranteed because at all times during curve evolution the evolving regions form a partition of the image domain. We present the multiregion competition algorithm for intensity-based image segmentation and we subsequently extend it to motion/disparity. Finally, we consider an extension of the algorithm to account for images with aberrations such as occlusions. The formulation, the ensuing algorithm, and its implementation have been validated in several experiments on gray level, color, and motion segmentation. --- paper_title: A convex formulation of continuous multi-label problems paper_content: We propose a spatially continuous formulation of Ishikawa's discrete multi-label problem. We show that the resulting non-convex variational problem can be reformulated as a convex variational problem via embedding in a higher dimensional space. This variational problem can be interpreted as a minimal surface problem in an anisotropic Riemannian space. In several stereo experiments we show that the proposed continuous formulation is superior to its discrete counterpart in terms of computing time, memory efficiency and metrication errors. --- paper_title: Convex Formulation and Exact Global Solutions for Multi-phase Piecewise Constant Mumford-Shah Image Segmentation paper_content: Most variational models for multi-phase image segmentation are non-convex and possess multiple local minima, which makes solving for a global solution an extremely difficult task. In this work, we provide a method for computing a global solution for the (non-convex) multi-phase piecewise constant Mumford-Shah (spatially continuous Potts) image segmentation problem. Our approach is based on using a specific representation of the problem due to Lie et al. [27]. We then rewrite this representation using the dual formulation for total variation so that a variational convexification technique due to Pock et al. [30] may be employed. Unlike some recent methods in this direction, our method can guarantee that a global solution is obtained. We believe our method to be the first in the literature that can make this claim. Once we have the convex optimization problem, we give an algorithm to compute a global solution. We demonstrate our algorithm on several multi-phase image segmentation examples, including a medical imaging application. --- paper_title: A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model paper_content: We propose a new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations.
The proposed method is also a generalization of the two-phase segmentation based on the active contour model without edges, developed by the authors earlier in T. Chan and L. Vese (1999. In Scale-Space'99, M. Nilsen et al. (Eds.), LNCS, vol. 1682, pp. 141–151) and T. Chan and L. Vese (2001. IEEE-IP, 10(2):266–277). The multiphase level set formulation is new and of interest on its own: by construction, it automatically avoids the problems of vacuum and overlap; it needs only log n level set functions for n phases in the piecewise constant case; it can represent boundaries with complex topologies, including triple junctions; in the piecewise smooth case, only two level set functions formally suffice to represent any partition, based on the Four-Color Theorem. Finally, we validate the proposed models by numerical results for signal and image denoising and segmentation, implemented using the Osher and Sethian level set method. --- paper_title: Star Shape Prior for Graph-Cut Image Segmentation paper_content: In recent years, segmentation with graph cuts is increasingly used for a variety of applications, such as photo/video editing, medical image processing, etc. One of the most common applications of graph cut segmentation is extracting an object of interest from its background. If there is any knowledge about the object shape (i.e., a shape prior), incorporating this knowledge helps to achieve a more robust segmentation. In this paper, we show how to implement a star shape prior into graph cut segmentation. This is a generic shape prior, i.e., it is not specific to any particular object, but rather applies to a wide class of objects, in particular to convex objects. Our major assumption is that the center of the star shape is known, for example, it can be provided by the user. The star shape prior has an additional important benefit: it allows an inclusion of a term in the objective function which encourages a longer object boundary. This helps to alleviate the bias of a graph cut towards shorter segmentation boundaries. In fact, we show that in many cases, with this new term we can achieve an accurate object segmentation with only a single pixel, the center of the object, provided by the user, which is rarely possible with standard graph cut interactive segmentation. --- paper_title: STACS: new active contour scheme for cardiac MR image segmentation paper_content: The paper presents a novel stochastic active contour scheme (STACS) for automatic image segmentation designed to overcome some of the unique challenges in cardiac MR images such as problems with low contrast, papillary muscles, and turbulent blood flow. STACS minimizes an energy functional that combines stochastic region-based and edge-based information with shape priors of the heart and local properties of the contour. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. STACS includes an annealing schedule that balances dynamically the weight of the different terms in the energy functional. Three particularly attractive features of STACS are: 1) ability to segment images with low texture contrast by modeling stochastically the image textures; 2) robustness to initial contour and noise because of the utilization of both edge and region-based information; 3) ability to segment the heart from the chest wall and the undesired papillary muscles due to inclusion of heart shape priors.
Application of STACS to a set of 48 real cardiac MR images shows that it can successfully segment the heart from its surroundings, such as the chest wall, and the heart structures (the left and right ventricles and the epicardium). We compare STACS' automatically generated contours with manually-traced contours, or the "gold standard," using both area and edge similarity measures. This assessment demonstrates very good and consistent segmentation performance of STACS. --- paper_title: Using Prior Shapes in Geometric Active Contours in a Variational Framework paper_content: In this paper, we report an active contour algorithm that is capable of using prior shapes. The energy functional of the contour is modified so that the energy depends on the image gradient as well as the prior shape. The model provides the segmentation and the transformation that maps the segmented contour to the prior shape. The active contour is able to find boundaries that are similar in shape to the prior, even when the entire boundary is not visible in the image (i.e., when the boundary has gaps). A level set formulation of the active contour is presented. The existence of the solution to the energy minimization is also established. We also report experimental results of the use of this contour on 2D synthetic images, ultrasound images and fMRI images. Classical active contours cannot be used in many of these images. --- paper_title: Graph-based optimal multi-surface segmentation with a star-shaped prior: Application to the segmentation of the optic disc and cup paper_content: A novel graph-based optimal segmentation method which can simultaneously segment multiple star-shaped surfaces is presented in this paper. Minimum and maximum surface distance constraints can be enforced between different surfaces. In addition, the segmented surfaces are ensured to be smooth by incorporating surface smoothness constraints which limit the variation between adjacent surface voxels. A consistent digital ray system is utilized to make sure the segmentation result is star-shaped and consistent, without interpolating the image as required by other methods. To the best of our knowledge, the concept of consistent digital rays is for the first time introduced into the field of medical imaging. The problem is formulated as an MRF optimization problem which can be efficiently and exactly solved by computing a single min s-t cut in an appropriately constructed graph. The method is applied to the segmentation of the optic disc and cup on 70 registered fundus and SD-OCT images from glaucoma patients. The result shows improved accuracy by applying the proposed method (versus using a classification-based approach). --- paper_title: Interactive graph cut based segmentation with shape priors paper_content: Interactive or semi-automatic segmentation is a useful alternative to pure automatic segmentation in many applications. While automatic segmentation can be very challenging, a small amount of user input can often resolve ambiguous decisions on the part of the algorithm. In this work, we devise a graph cut algorithm for interactive segmentation which incorporates shape priors. While traditional graph cut approaches to interactive segmentation are often quite successful, they may fail in cases where there are diffuse edges, or multiple similar objects in close proximity to one another. Incorporation of shape priors within this framework mitigates these problems. Positive results on both medical and natural images are demonstrated.
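As a point of reference for the graph-cut entries above, the basic s/t construction (without any shape prior) can be sketched in a few lines. The sketch below is a hypothetical illustration under stated assumptions: the image, the foreground/background means and the contrast weight are made up for the example, and a generic max-flow solver from networkx stands in for the specialized solvers used in the cited work.

```python
import numpy as np
import networkx as nx

def graph_cut_segment(img, mu_fg, mu_bg, lam=2.0):
    """Binary segmentation of a 2D image via an s/t min-cut (illustrative sketch only).

    Unary terms: squared distance to assumed foreground/background means.
    Pairwise terms: contrast-weighted 4-neighbour links (Potts-style).
    """
    h, w = img.shape
    G = nx.DiGraph()
    s, t = "s", "t"
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # t-links: the s->p edge is cut (and paid) if p ends up background,
            # the p->t edge is cut if p ends up foreground.
            G.add_edge(s, p, capacity=(img[y, x] - mu_bg) ** 2)
            G.add_edge(p, t, capacity=(img[y, x] - mu_fg) ** 2)
            # n-links: penalize label changes between similar neighbouring pixels.
            for q in ((y + 1, x), (y, x + 1)):
                if q[0] < h and q[1] < w:
                    wgt = lam * np.exp(-(img[p] - img[q]) ** 2)
                    G.add_edge(p, q, capacity=wgt)
                    G.add_edge(q, p, capacity=wgt)
    _, (src_side, _) = nx.minimum_cut(G, s, t)
    mask = np.zeros((h, w), dtype=bool)
    for node in src_side:
        if node != s:
            mask[node] = True  # nodes still reachable from s are labeled foreground
    return mask

if __name__ == "__main__":
    img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
    img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
    print(graph_cut_segment(img, mu_fg=1.0, mu_bg=0.0).sum(), "pixels labeled foreground")
```

The shape-prior methods cited above modify this construction (extra terms or restricted edge sets) rather than replacing it, which is why the plain version is a useful baseline to keep in mind.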
--- paper_title: Nonparametric shape priors for active contour-based image segmentation paper_content: When segmenting images of low quality or with missing data, statistical prior information about the shapes of the objects to be segmented can significantly aid the segmentation process. However, defining probability densities in the space of shapes is an open and challenging problem. In this paper, we propose a nonparametric shape prior model for image segmentation problems. In particular, given example training shapes, we estimate the underlying shape distribution by extending a Parzen density estimator to the space of shapes. Such density estimates are expressed in terms of distances between shapes, and we consider the L2 distance between signed distance functions for shape density estimation, in addition to a distance measure based on the template metric. In particular, we consider the case in which the space of shapes is interpreted as a manifold embedded in a Hilbert space. We then incorporate the learned shape prior distribution into a maximum a posteriori (MAP) estimation framework for segmentation. This results in an optimization problem, which we solve using active contours. We demonstrate the effectiveness of the resulting algorithm in segmenting images that involve low-quality data and occlusions. The proposed framework is especially powerful in handling "multimodal" shape densities. --- paper_title: Non-linear statistical models for the 3D reconstruction of human pose and motion from monocular image sequences paper_content: This paper presents a model-based approach to human body tracking in which the 2D silhouette of a moving human and the corresponding 3D skeletal structure are encapsulated within a non-linear point distribution model. This statistical model allows a direct mapping to be achieved between the external boundary of a human and the anatomical position. It is shown how this information, along with the position of landmark features such as the hands and head, can be used to reconstruct information about the pose and structure of the human body from a monocular view of a scene. --- paper_title: Left ventricle segmentation in MRI via convex relaxed distribution matching paper_content: A fundamental step in the diagnosis of cardiovascular diseases, automatic left ventricle (LV) segmentation in cardiac magnetic resonance images (MRIs) is still acknowledged to be a difficult problem. Most of the existing algorithms require either extensive training or intensive user inputs. This study investigates fast detection of the left ventricle (LV) endo- and epicardium surfaces in cardiac MRI via convex relaxation and distribution matching. The algorithm requires a single subject for training and a very simple user input, which amounts to a single point (mouse click) per target region (cavity or myocardium). It seeks cavity and myocardium regions within each 3D phase by optimizing two functionals, each containing two distribution-matching constraints: (1) a distance-based shape prior and (2) an intensity prior. Based on a global measure of similarity between distributions, the shape prior is intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive a fixed-point equation (FPE), thereby achieving scale-invariance with only a few fast computations.
The proposed algorithm relaxes the need for costly pose estimation (or registration) procedures and large training sets, and can tolerate shape deformations, unlike template (or atlas) based priors. Our formulation leads to a challenging problem, which is not directly amenable to convex-optimization techniques. For each functional, we split the problem into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Unlike related graph-cut approaches, the proposed convex-relaxation solution can be parallelized to reduce substantially the computational time for 3D domains (or higher), extends directly to high dimensions, and does not have the grid-bias problem. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm requires about 3.87 s for a typical cardiac MRI volume, a speed-up of about five times compared to a standard implementation. We report a performance evaluation over 400 volumes acquired from 20 subjects, which shows that the obtained 3D surfaces correlate with independent manual delineations. We further demonstrate experimentally that (1) the performance of the algorithm is not significantly affected by the choice of the training subject and (2) the shape description we use does not change significantly from one subject to another. These results support the fact that a single subject is sufficient for training the proposed algorithm. --- paper_title: A shape-based approach to the segmentation of medical imagery using level sets paper_content: We propose a shape-based approach to curve evolution for the segmentation of medical images containing known object types. In particular, motivated by the work of Leventon, Grimson, and Faugeras, we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, can deal with topological changes of the curve, is robust to noise and initial contour placements, and is computationally efficient. At the same time, it avoids the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications; two-dimensional segmentation of cardiac magnetic resonance imaging (MRI) and three-dimensional segmentation of prostate MRI. --- paper_title: Kernel Density Estimation and Intrinsic Alignment for Shape Priors in Level Set Segmentation paper_content: In this paper, we make two contributions to the field of level set based image segmentation. Firstly, we propose shape dissimilarity measures on the space of level set functions which are analytically invariant under the action of certain transformation groups. The invariance is obtained by an intrinsic registration of the evolving level set function. In contrast to existing approaches to invariance in the level set framework, this closed-form solution removes the need to iteratively optimize explicit pose parameters. The resulting shape gradient is more accurate in that it takes into account the effect of boundary variation on the object's pose. 
Secondly, based on these invariant shape dissimilarity measures, we propose a statistical shape prior which allows accurate encoding of multiple fairly distinct training shapes. This prior constitutes an extension of kernel density estimators to the level set domain. In contrast to the commonly employed Gaussian distribution, such nonparametric density estimators are suited to model arbitrary distributions. We demonstrate the advantages of this multi-modal shape prior applied to the segmentation and tracking of a partially occluded walking person in a video sequence, and on the segmentation of the left ventricle in cardiac ultrasound images. We give quantitative results on segmentation accuracy and on the dependency of segmentation results on the number of training shapes. --- paper_title: Medial profiles for modeling deformation and statistical analysis of shape and their use in medical image segmentation paper_content: We present a novel medial-based, multi-scale approach to shape representation and controlled deformation. We use medial-based profiles for shape representation, which follow the geometry of the structure and describe general, intuitive, and independent shape measures (length, orientation, and thickness). Controlled shape deformations (stretch, bend, and bulge) are obtained either as a result of applying deformation operators at certain locations and scales on the medial profiles, or by varying the weights of the main variation modes obtained from a new hierarchical (multi-scale) and regional (multi-location) principal component analysis of the medial profiles. We demonstrate the ability to produce controlled shape deformations on a medial-based representation of the corpus callosum. We show how this control of shape deformations facilitates the design of a layered framework for image segmentation and present results of segmenting the corpus callosum from 2D mid-sagittal magnetic resonance images of the human brain. Furthermore, we show how the medial-based representation facilitates hierarchical, deformation-specific statistical shape analysis of segmented corpora callosa. --- paper_title: Graph Cuts Segmentation with Statistical Shape Priors for Medical Images paper_content: Segmentation of medical images is an important step in many clinical and diagnostic imaging applications. Medical images present many challenges for automated segmentation, including poor contrast at tissue boundaries. Traditional segmentation methods based solely on information from the image do not work well in such cases. Statistical shape information for objects in medical images is easy to obtain. In this paper, we propose a graph cuts-based segmentation method for medical images that incorporates statistical shape priors to increase robustness. Our proposed method is able to deal with complex shapes and shape variations while taking advantage of the globally efficient optimization by graph cuts. We demonstrate the effectiveness of our method on kidney images without strong boundaries. --- paper_title: On the adequacy of principal factor analysis for the study of shape variability paper_content: The analysis of shape variability of anatomical structures is of key importance in a number of clinical disciplines, as abnormality in shape can be related to certain diseases.
Statistical shape analysis techniques commonly employed in the medical imaging community, such as Active Shape Models or Active Appearance Models, rely on Principal Component Analysis (PCA) to decompose shape variability into a reduced set of interpretable components. In this paper we propose Principal Factor Analysis (PFA) as an alternative to PCA and argue that PFA is a better suited technique for medical imaging applications. PFA provides a decomposition into modes of variation that are more easily interpretable, while still being a linear, efficient technique that performs dimensionality reduction (as opposed to Independent Component Analysis, ICA). Both PCA and PFA are described. Examples are provided for 2D landmark data of corpora callosa outlines, as well as vector-valued 3D deformation fields resulting from non-rigid registration of ventricles in MRI. The results show that PFA is a more descriptive tool for shape analysis, at a small cost in size (as in theory more components may be necessary to explain a given percentage of total variance in the data). In conclusion, we argue that it is important to study the potential of factor analysis techniques other than PCA for the application of shape analysis, and defend PFA as a good alternative. --- paper_title: 3D knowledge-based segmentation using pose-invariant higher-order graphs paper_content: Segmentation is a fundamental problem in medical image analysis. The use of prior knowledge is often considered to address the ill-posedness of the process. Such a process consists in bringing all training examples into the same reference pose, and then building statistics. During inference, pose parameters are usually estimated first, and then one seeks a compromise between data-attraction and model-fitness with the prior model. In this paper, we propose a novel higher-order Markov Random Field (MRF) model to encode pose-invariant priors and perform 3D segmentation of challenging data. The approach encodes data support in the singleton terms that are obtained using machine learning, and prior constraints in the higher-order terms. A dual-decomposition-based inference method is used to recover the optimal solution. Promising results on challenging data involving segmentation of tissue classes of the human skeletal muscle demonstrate the potential of the method. --- paper_title: Statistical shape influence in geodesic active contours paper_content: A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero level set of a higher dimensional surface, and evolves the surface such that the zero level set converges on the boundary of the object to be segmented. At each step of the surface evolution, we estimate the maximum a posteriori (MAP) position and shape of the object in the image, based on the prior shape information and the image information. We then evolve the surface globally, towards the MAP estimate, and locally, based on image gradients and curvature. Results are demonstrated on synthetic data and medical imagery in 2D and 3D.
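To make the eigen-shape idea in the entry above concrete, the following numpy sketch builds a linear shape space from training signed distance functions (SDFs). The function names, the thin-SVD implementation and the mode count are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def train_sdf_shape_prior(sdf_stack, n_modes=5):
    """PCA-style shape prior over signed distance functions.

    sdf_stack: (n_shapes, H, W) array of training SDFs (one per aligned shape).
    Returns the mean SDF, the leading variation modes and their standard deviations.
    """
    n, h, w = sdf_stack.shape
    X = sdf_stack.reshape(n, -1)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Thin SVD: rows of Vt are orthonormal variation modes in pixel space.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]
    stddev = S[:n_modes] / np.sqrt(max(n - 1, 1))
    return mean.reshape(h, w), modes.reshape(-1, h, w), stddev

def synthesize_shape(mean, modes, coeffs):
    """Reconstruct an SDF from shape coefficients; its zero level set is the contour."""
    return mean + np.tensordot(np.asarray(coeffs), modes, axes=1)
```

Keeping each coefficient within a few standard deviations of its mode is one simple way to constrain an evolving contour to statistically plausible shapes, in the spirit of the MAP shape estimate described above.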
--- paper_title: Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy paper_content: External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR data from six patients, each patient has six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and it significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. --- paper_title: A Mixture Model for Representing Shape Variation paper_content: The shape variation displayed by a class of objects can be represented as probability density function, allowing us to determine plausible and implausible examples of the class. Given a training set of example shapes we can align them into a common co-ordinate frame and use kernel-based density estimation techniques to represent this distribution. Such an estimate is complex and expensive, so we generate a simpler approximation using a mixture of gaussians. We show how to calculate the distribution, and how it can be used in image search to locate examples of the modelled object in new images. --- paper_title: The Isometric Log-Ratio Transform for Probabilistic Multi-Label Anatomical Shape Representation paper_content: Sources of uncertainty in the boundaries of structures in medical images have motivated the use of probabilistic labels in segmentation applications. An important component in many medical image segmentation tasks is the use of a shape model, often generated by applying statistical techniques to training data. Standard statistical techniques (e.g., principal component analysis) often assume data lies in an unconstrained vector space, but probabilistic labels are constrained to the unit simplex. If these statistical techniques are used directly on probabilistic labels, relative uncertainty information can be sacrificed. A standard method for facilitating analysis of probabilistic labels is to map them to a vector space using the LogOdds transform. However, the LogOdds transform is asymmetric in one of the labels, which skews results in some applications. The isometric log-ratio (ILR) transform is a symmetrized version of the LogOdds transform, and is so named as it is an isometry between the Aitchison geometry, the inherent geometry of the simplex, and standard Euclidean geometry. 
We explore how to interpret the Aitchison geometry when applied to probabilistic labels in medical image segmentation applications. We demonstrate the differences when applying the LogOdds transform or the ILR transform to probabilistic labels prior to statistical analysis. Specifically, we show that statistical analysis of ILR transformed data better captures the variability of anatomical shapes in cases where multiple different foreground regions share boundaries (as opposed to foreground-background boundaries). --- paper_title: Active shape model segmentation with optimal features paper_content: An active shape model segmentation scheme is presented that is steered by optimal local features, contrary to normalized first order derivative profiles, as in the original formulation [Cootes and Taylor, 1995, 1999, and 2001]. A nonlinear kNN-classifier is used, instead of the linear Mahalanobis distance, to find optimal displacements for landmarks. For each of the landmarks that describe the shape, at each resolution level taken into account during the segmentation optimization procedure, a distinct set of optimal features is determined. The selection of features is automatic, using the training images and sequential feature forward and backward selection. The new approach is tested on synthetic data and in four medical segmentation tasks: segmenting the right and left lung fields in a database of 230 chest radiographs, and segmenting the cerebellum and corpus callosum in a database of 90 slices from MRI brain images. In all cases, the new method produces significantly better results in terms of an overlap error measure (p<0.001 using a paired T-test) than the original active shape model scheme. --- paper_title: Deformable segmentation via sparse representation and dictionary learning paper_content: "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. 
Our method is applied to a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significantly reduce the computational complexity, but also improve the overall accuracy. --- paper_title: Nonlinear Component Analysis as a Kernel Eigenvalue Problem paper_content: A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to the input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. --- paper_title: Deformable Fourier models for surface finding in 3-D images paper_content: This paper describes a new global shape parametrization for smoothly deformable three-dimensional objects, such as those found in biomedical images, whose diversity and irregularity make them difficult to represent in terms of fixed features or parts. This representation is used for geometric surface matching to three-dimensional image data. The parametrization decomposes the surface into sinusoidal basis functions. Four types of surfaces are modeled: tori, open surfaces, closed surfaces, and tubes. This parametrization allows a wide variety of smooth surfaces to be described with a small number of parameters. Surface finding is formulated as an optimization problem. Results of the method applied to synthetic and medical three-dimensional images are presented. --- paper_title: Hierarchical active shape models, using the wavelet transform paper_content: Active shape models (ASMs) are often limited by the inability of relatively few eigenvectors to capture the full range of biological shape variability. This paper presents a method that overcomes this limitation, by using a hierarchical formulation of active shape models, using the wavelet transform. The statistical properties of the wavelet transform of a deformable contour are analyzed via principal component analysis, and used as priors in the contour's deformation. Some of these priors reflect relatively global shape characteristics of the object boundaries, whereas some of them capture local and high-frequency shape characteristics and, thus, serve as local smoothness constraints. This formulation achieves two objectives. First, it is robust when only a limited number of training samples is available. Second, by using local statistics as smoothness constraints, it eliminates the need for adopting ad hoc physical models, such as elasticity or other smoothness models, which do not necessarily reflect true biological variability. Examples on magnetic resonance images of the corpus callosum and hand contours demonstrate that good and fully automated segmentations can be achieved, even with as few as five training samples. --- paper_title: Auto-Context and Its Application to High-Level Vision Tasks and 3D Brain Image Segmentation paper_content: The notion of using context information for solving high-level vision and medical image segmentation problems has been increasingly realized in the field.
However, how to learn an effective and efficient context model, together with an image appearance model, remains mostly unknown. The current literature using Markov Random Fields (MRFs) and Conditional Random Fields (CRFs) often involves specific algorithm design in which the modeling and computing stages are studied in isolation. In this paper, we propose a learning algorithm, auto-context. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps created by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates until convergence. Auto-context integrates low-level and context information by fusing a large number of low-level appearance features with context and implicit shape information. The resulting discriminative algorithm is general and easy to implement. Under nearly the same parameter settings in training, we apply the algorithm to three challenging vision applications: foreground/background segregation, human body configuration estimation, and scene region labeling. Moreover, context also plays a very important role in medical/brain images where the anatomical structures are mostly constrained to relatively fixed positions. With only some slight changes resulting from using 3D instead of 2D features, the auto-context algorithm applied to brain MRI image segmentation is shown to outperform state-of-the-art algorithms specifically designed for this domain. Furthermore, the scope of the proposed algorithm goes beyond image analysis and it has the potential to be used for a wide variety of structured prediction problems. --- paper_title: Model-based curve evolution technique for image segmentation paper_content: We propose a model-based curve evolution technique for segmentation of images containing known object types. In particular, motivated by the work of Leventon et al. (2000), we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then calculated to minimize an objective function for segmentation. We found the resulting algorithm to be computationally efficient, able to handle multidimensional data, and robust to noise and initial contour placements, while at the same time avoiding the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications. --- paper_title: Statistically constrained snake deformations paper_content: The authors present a method for constraining the deformations of Snakes (active contour models) when segmenting a known class of objects. The method we propose is similar both to active shape models (ASM), but without the landmark identification and correspondence requirement, and to active contour models (ACM), but armed with a priori information about shape variation. Rather than representing the object boundary by spatial landmarks in a point-by-point fashion, we employ a frequency-based boundary representation. In this way, the principal component analysis (PCA), which is central to ASM, is applied to a set of frequency-domain shape descriptors, removing the need for the difficult determination of spatial landmarks.
Given a training set of representative images of the object of interest, we extract an average object shape along with a set of significant shape variation modes, explaining most of the shape variation in the training set. Armed with this a priori model of shape variation, we find the boundaries in unknown images by placing an initial ACM and allowing it to deform only according to the examined shape variations. The described methodology was applied to a set of 105 echocardiographic images for locating the left ventricular boundary. The results were particularly encouraging in clinically difficult cases where the ventricular boundary was partly occluded by noise. --- paper_title: Simulation of Ground-Truth Validation Data Via Physically- and Statistically-Based Warps paper_content: The problem of scarcity of ground-truth expert delineations of medical image data is a serious one that impedes the training and validation of medical image analysis techniques. We develop an algorithm for the automatic generation of large databases of annotated images from a single reference dataset. We provide a web-based interface through which the users can upload a reference data set (an image and its corresponding segmentation and landmark points), provide custom settings of parameters, and, following server-side computations, generate and download an arbitrary number of novel ground-truth data, including segmentations, displacement vector fields, intensity non-uniformity maps, and point correspondences. To produce realistic simulated data, we use variational (statistically-based) and vibrational (physically-based) spatial deformations, nonlinear radiometric warps mimicking imaging nonhomogeneity, and additive random noise with different underlying distributions. We outline the algorithmic details, present sample results, and provide the web address to readers for immediate evaluation and usage. --- paper_title: Globally Optimal Image Segmentation with an Elastic Shape Prior paper_content: So far, global optimization techniques have been developed independently for the tasks of shape matching and image segmentation. In this paper we show that both tasks can in fact be solved simultaneously using global optimization. By computing cycles of minimal ratio in a large graph spanned by the product of the input image and a shape template, we are able to compute globally optimal segmentations of the image which are similar to a familiar shape and located in places of strong gradient. The presented approach is translation-invariant and robust to local and global scaling and rotation of the given shape. We show how it can be extended to incorporate invariance to similarity transformations. The particular structure of the graph allows for run-time- and memory-efficient implementations. Highly parallel implementations on graphics cards make it possible to produce globally optimal solutions in only a few seconds. --- paper_title: Non-Rigid Motion Analysis in Medical Images: a Physically Based Approach paper_content: We present a physically-based deformable model which can be used to track and to analyze non-rigid motion of dynamic structures in time sequences of 2D or 3D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the classical natural lengths of the springs are set equal to zero and are replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces.
This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. Compared to the former work of Terzopoulos and his colleagues [12, 27, 26, 15] and Pentland and his colleagues [22, 21, 23, 10], our model can be viewed as a continuation and unification; it provides a reduced algorithmic complexity, and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images (ultrasound and magnetic resonance images). --- paper_title: Closed-form solutions for physically based shape modeling and recognition paper_content: The authors present a closed-form, physically based solution for recovering a three-dimensional (3-D) solid model from collections of 3-D surface measurements. Given a sufficient number of independent measurements, the solution is overconstrained and unique except for rotational symmetries. The proposed approach is based on the finite element method (FEM) and parametric solid modeling using implicit functions. This approach provides both the convenience of parametric modeling and the expressiveness of the physically based mesh formulation and, in addition, can provide great accuracy in physical simulation. A physically based object-recognition method that allows simple, closed-form comparisons of recovered 3-D solid models is presented. The performance of these methods is evaluated using both synthetic range data with various signal-to-noise ratios and using laser rangefinder data. --- paper_title: A Finite Element Method for Deformable Models paper_content: Deformable models of elastic structures have been proposed for use in image analysis. Previous work has used a variational approach, based on the Euler-Lagrange theory. In this paper an alternative mathematical treatment is introduced, based on a direct minimisation of the underlying energy integral using the Finite Element Method. The method is outlined and demonstrated, and its principal advantages for model-based image interpretation are explained. --- paper_title: Graph cut based image segmentation with connectivity priors paper_content: Graph cut is a popular technique for interactive image segmentation. However, it has certain shortcomings. In particular, graph cut has problems with segmenting thin, elongated objects due to the "shrinking bias". To overcome this problem, we propose to impose an additional connectivity prior, which is a very natural assumption about objects.
We formulate several versions of the connectivity constraint and show that the corresponding optimization problems are all NP-hard. For some of these versions, we propose two optimization algorithms: (i) a practical heuristic technique which we call DijkstraGC, and (ii) a slow method based on problem decomposition which provides a lower bound on the problem. We use the second technique to verify that for some practical examples DijkstraGC is able to find the global minimum. --- paper_title: Topology Cuts: A Novel Min-Cut/Max-Flow Algorithm for Topology Preserving Segmentation in N-D Images paper_content: Topology is an important prior in many image segmentation tasks. In this paper, we design and implement a novel graph-based min-cut/max-flow algorithm that incorporates topology priors as global constraints. We show that the optimization of the energy function we consider here is NP-hard. However, our algorithm is guaranteed to find an approximate solution that conforms to the initialization, which is a desirable property in many applications since the globally optimal solution does not consider any initialization information. The key innovation of our algorithm is the organization of the search for maximum flow in a way that allows consideration of topology constraints. In order to achieve this, we introduce a label attribute for each node to explicitly handle the topology constraints, and we use a distance map to keep track of those nodes that are closest to the boundary. We employ the bucket priority queue data structure that records nodes of equal distance, and we efficiently extract the node with minimal distance value. Our methodology of embedding distance functions in a graph-based algorithm is general and can also account for other geometric priors. Experimental results show that our algorithm can efficiently handle segmentation cases that are challenging for graph-cut algorithms. Furthermore, our algorithm is a natural choice for problems with rich topology priors such as object tracking. --- paper_title: Simple points, topological numbers and geodesic neighborhoods in cubic grids paper_content: We introduce the notion of geodesic neighborhood in order to define some topological numbers associated with a point in a three-dimensional cubic grid. For {6, 26} and {6, 18} connectivities, these numbers lead to a characterization of simple points which consists in only two local conditions. --- paper_title: A topology preserving level set method for geometric deformable models paper_content: Active contour and surface models, also known as deformable models, are powerful image segmentation techniques. Geometric deformable models implemented using level set methods have advantages over parametric models due to their intrinsic behavior, parameterization independence, and ease of implementation. However, a long-claimed advantage of geometric deformable models, the ability to automatically handle topology changes, turns out to be a liability in applications where the object to be segmented has a known topology that must be preserved. We present a new class of geometric deformable models designed using a novel topology-preserving level set method, which achieves topology preservation by applying the simple point concept from digital topology. These new models maintain the other advantages of standard geometric deformable models, including subpixel accuracy and production of nonintersecting curves or surfaces.
Moreover, since the topology-preserving constraint is enforced efficiently through local computations, the resulting algorithm incurs only nominal computational overhead over standard geometric deformable models. Several experiments on simulated and real data are provided to demonstrate the performance of this new deformable model algorithm. --- paper_title: A convex framework for image segmentation with moment constraints paper_content: Convex relaxation techniques have become a popular approach to image segmentation as they allow to compute solutions independent of initialization to a variety of image segmentation problems. In this paper, we will show that shape priors in terms of moment constraints can be imposed within the convex optimization framework, since they give rise to convex constraints. In particular, the lower-order moments correspond to the overall volume, the centroid, and the variance or covariance of the shape and can be easily imposed in interactive segmentation methods. Respective constraints can be imposed as hard constraints or soft constraints. Quantitative segmentation studies on a variety of images demonstrate that the user can easily impose such constraints with a few mouse clicks, giving rise to substantial improvements of the resulting segmentation, and reducing the average segmentation error from 12% to 0.35%. GPU-based computation times of around 1 second allow for interactive segmentation. --- paper_title: Affine-invariant geometric shape priors for region-based active contours paper_content: We present a new way of constraining the evolution of a region-based active contour with respect to a reference shape. Minimizing a shape prior, defined as a distance between shape descriptors based on the Legendre moments of the characteristic function, leads to a geometric flow that can be used with benefits in a two-class segmentation application. The shape model includes intrinsic invariance with regard to pose and affine deformations. --- paper_title: Area prior constrained level set evolution for medical image segmentation paper_content: The level set framework has proven well suited to medical image segmentation [1-6] thanks to its ability of balancing the contribution of image data and prior knowledge in a principled, flexible and transparent way. It consists of evolving a curve toward the target object boundaries. The curve evolution equation is sought following the optimization of a cost functional containing two types of terms: data terms, which measure the fidelity of segmentation to image intensities, and prior terms, which traduce learned prior knowledge. Without priors many algorithms are likely to fail due to high noise, low contrast and data incompleteness. Different priors have been investigated such as shape [1] and appearance priors [7]. In this study, we propose a simple type of priors: the area prior. This prior embeds knowledge of an approximate object area and has two positive effects. First, it speeds up significantly the evolution when the curve is far from the target object boundaries. Second, it slows down the evolution when the curve is close to the target. Consequently, it reinforces curve stability at the desired boundaries when dealing with low contrast intensity edges. The algorithm is validated with several experiments using Magnetic Resonance (MR) images and Computed Tomography (CT) images. A comparison with another level set method illustrates the positive effects of the area prior.
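The area prior described in the preceding abstract can be written, in one common generic form (not necessarily the exact functional of the cited paper), as a penalty on the deviation of the enclosed area from a reference value A_0. Here phi is the level-set function (taken negative inside the region), H the Heaviside step, Omega the image domain and lambda > 0 a weight; all of this notation is an assumption made for illustration:

E(\phi) = E_{\mathrm{data}}(\phi) + \lambda \left( \int_{\Omega} H(-\phi(x))\, dx - A_0 \right)^{2}

The corresponding gradient-descent evolution adds a speed term proportional to lambda (A(phi) - A_0) along the contour, which is large when the current area A(phi) is far from A_0 and vanishes as it approaches it, matching the speed-up and slow-down behaviour the abstract describes.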
--- paper_title: Left ventricle segmentation in MRI via convex relaxed distribution matching paper_content: A fundamental step in the diagnosis of cardiovascular diseases, automatic left ventricle (LV) segmentation in cardiac magnetic resonance images (MRIs) is still acknowledged to be a difficult problem. Most of the existing algorithms require either extensive training or intensive user inputs. This study investigates fast detection of the left ventricle (LV) endo- and epicardium surfaces in cardiac MRI via convex relaxation and distribution matching. The algorithm requires a single subject for training and a very simple user input, which amounts to a single point (mouse click) per target region (cavity or myocardium). It seeks cavity and myocardium regions within each 3D phase by optimizing two functionals, each containing two distribution-matching constraints: (1) a distance-based shape prior and (2) an intensity prior. Based on a global measure of similarity between distributions, the shape prior is intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive a fixed-point equation (FPE), thereby achieving scale-invariance with only few fast computations. The proposed algorithm relaxes the need for costly pose estimation (or registration) procedures and large training sets, and can tolerate shape deformations, unlike template (or atlas) based priors. Our formulation leads to a challenging problem, which is not directly amenable to convex-optimization techniques. For each functional, we split the problem into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Unlike related graph-cut approaches, the proposed convex-relaxation solution can be parallelized to reduce substantially the computational time for 3D domains (or higher), extends directly to high dimensions, and does not have the grid-bias problem. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm requires about 3.87 s for a typical cardiac MRI volume, a speed-up of about five times compared to a standard implementation. We report a performance evaluation over 400 volumes acquired from 20 subjects, which shows that the obtained 3D surfaces correlate with independent manual delineations. We further demonstrate experimentally that (1) the performance of the algorithm is not significantly affected by the choice of the training subject and (2) the shape description we use does not change significantly from one subject to another. These results support the fact that a single subject is sufficient for training the proposed algorithm. --- paper_title: Region Detection by Minimizing Intraclass Variance With Geometric Constraints, Global Optimality, and Efficient Approximation paper_content: Efficient segmentation of globally optimal surfaces in volumetric images is a central problem in many medical image analysis applications. Intraclass variance has been successfully utilized for object segmentation, for instance, in the Chan-Vese model, especially for images without prominent edges. In this paper, we study the optimization problem of detecting a region (volume) between two coupled smooth surfaces by minimizing the intraclass variance using an efficient polynomial-time algorithm. 
Our algorithm is based on the shape probing technique in computational geometry and computes a sequence of minimum-cost closed sets in a derived parametric graph. The method has been validated on computer-synthetic volumetric images and in X-ray CT-scanned datasets of plexiglas tubes of known sizes. Its applicability to clinical data sets was also demonstrated. In all cases, the approach yielded highly accurate results. We believe that the developed technique is of interest on its own. We expect that it can shed some light on solving other important optimization problems arising in medical imaging. Furthermore, we report an approximation algorithm which runs much faster than the exact algorithm while yielding highly comparable segmentation accuracy. --- paper_title: Volumetric layer segmentation using coupled surfaces propagation paper_content: The problem of segmenting a volumetric layer of finite thickness is encountered in several important areas within medical image analysis. Key examples include the extraction of the cortical gray matter of the brain and the left ventricle myocardium of the heart. The coupling between the two bounding surfaces of such a layer provides important information that helps to solve the segmentation problem. Here we propose a new approach of coupled surfaces propagation via level set methods, which takes into account coupling as an important constraint. By evolving two embedded surfaces simultaneously, each driven by its own image-derived information while maintaining the coupling, we capture a representation of the two bounding surfaces and achieve automatic segmentation on the layer. Characteristic gray level values, instead of image gradient information alone, are incorporated in deriving the useful image information to drive the surface propagation, which enables our approach to capture the homogeneity inside the layer. The level set implementation offers the advantage of easy initialization, computational efficiency and the ability to capture deep folds of the sulci. As a test example, we apply our approach to unedited 3D Magnetic Resonance (MR) brain images. Our algorithm automatically isolates the brain from non-brain structures and recovers the cortical gray matter. --- paper_title: Optimal Surface Segmentation in Volumetric Images-A Graph-Theoretic Approach paper_content: Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s-t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation.
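The graph-theoretic surface segmentation described just above ultimately reduces to computing a minimum s-t cut in a derived arc-weighted graph. As a rough, self-contained illustration of that core primitive only (the node and arc construction used in the cited work is far more elaborate), the following Python sketch computes a maximum flow and the corresponding minimum s-t cut with the Edmonds-Karp algorithm on a toy four-node graph; the graph, its capacities and the function names are invented for illustration.

from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max flow; returns (flow value, source side of the min s-t cut)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # breadth-first search in the residual graph, recording parents
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    total = 0
    while True:
        parent = bfs()
        if parent[t] == -1:          # no augmenting path left
            break
        v, bottleneck = t, float('inf')
        while v != s:                # find the bottleneck capacity on the path
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                # push flow along the path, update residuals
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # nodes still reachable from s in the residual graph form the source side of the cut
    reachable = {i for i, p in enumerate(bfs()) if p != -1}
    return total, reachable

# toy example: node 0 = source, node 3 = sink
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value, source_side = max_flow_min_cut(cap, 0, 3)
print(value, source_side)            # prints 5 {0}

By max-flow/min-cut duality, the returned flow value equals the capacity of the minimum cut separating the source side from the rest of the graph.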
--- paper_title: Fuzzy spatial relationships for image processing and interpretation: a review paper_content: In spatial reasoning, relationships between spatial entities play a major role. In image interpretation, computer vision and structural recognition, the management of imperfect information and of imprecision constitutes a key point. This calls for the framework of fuzzy sets, which exhibits nice features to represent spatial imprecision at different levels, imprecision in knowledge and knowledge representation, and which provides powerful tools for fusion, decision-making and reasoning. In this paper, we review the main fuzzy approaches for defining spatial relationships including topological (set relationships, adjacency) and metrical relations (distances, directional relative position). --- paper_title: Globally optimal segmentation of multi-region objects paper_content: Many objects contain spatially distinct regions, each with a unique colour/texture model. Mixture models ignore the spatial distribution of colours within an object, and thus cannot distinguish between coherent parts versus randomly distributed colours. We show how to encode geometric interactions between distinct region+boundary models, such as regions being interior/exterior to each other along with preferred distances between their boundaries. With a single graph cut, our method extracts only those multi-region objects that satisfy such a combined model. We show applications in medical segmentation and scene layout estimation. Unlike Li et al. [17] we do not need “domain unwrapping” nor do we have topological limits on shapes. --- paper_title: A fast convex optimization approach to segmenting 3d scar tissue from delayed-enhancement cardiac MR images paper_content: We propose a novel multi-region segmentation approach through a partially-ordered Potts (POP) model to segment myocardial scar tissue solely from 3D cardiac delayed-enhancement MR images (DE-MRI). The algorithm makes use of prior knowledge of anatomical spatial consistency and employs customized label ordering to constrain the segmentation without prior knowledge of geometric representation. The proposed method eliminates the need for regional constraint segmentations, thus reduces processing time and potential sources of error. We solve the proposed optimization problem by means of convex relaxation and introduce its duality: the hierarchical continuous max-flow (HMF) model, which amounts to an efficient numerical solver to the resulting convex optimization problem. Experiments are performed over ten DE-MRI data sets. The results are compared to a FWHM (full-width at half-maximum) method and the inter- and intra-operator variabilities assessed. --- paper_title: A Variational Approach for the Segmentation of the Left Ventricle in Cardiac Image Analysis paper_content: In this paper we propose a level set method to segment MR cardiac images. Our approach is based on a coupled propagation of two cardiac contours and integrates visual information with anatomical constraints. The visual information is expressed through a gradient vector flow-based boundary component and a region term that aims at best separating the cardiac contours/regions according to their global intensity properties. In order to deal with misleading visual support, an anatomical constraint is considered that couples the propagation of the cardiac contours according to their relative distance. 
The resulting motion equations are implemented using a level set approach and a fast and stable numerical approximation scheme, the Additive Operator Splitting. Encouraging experimental results are provided using real data. --- paper_title: Efficient Global Optimization Based 3D Carotid AB-LIB MRI Segmentation by Simultaneously Evolving Coupled Surfaces paper_content: Magnetic resonance (MR) imaging of carotid atherosclerosis biomarkers are increasingly being investigated for the risk assessment of vulnerable plaques. A fast and robust 3D segmentation of the carotid adventitia (AB) and lumen-intima (LIB) boundaries can greatly alleviate the measurement burden of generating quantitative imaging biomarkers in clinical research. In this paper, we propose a novel global optimization-based approach to segment the carotid AB and LIB from 3D T1-weighted black blood MR images, by simultaneously evolving two coupled surfaces with enforcement of anatomical consistency of the AB and LIB. We show that the evolution of two surfaces at each discrete time-frame can be optimized exactly and globally by means of convex relaxation. Our continuous max-flow based algorithm is implemented in GPUs to achieve high computational performance. The experiment results from 16 carotid MR images show that the algorithm obtained high agreement with manual segmentations and achieved high repeatability in segmentation. --- paper_title: Optimizing Binary MRFs via Extended Roof Duality paper_content: Many computer vision applications rely on the efficient optimization of challenging, so-called non-submodular, binary pairwise MRFs. A promising graph cut based approach for optimizing such MRFs known as "roof duality" was recently introduced into computer vision. We study two methods which extend this approach. First, we discuss an efficient implementation of the "probing" technique introduced recently by Bows et al. (2006). It simplifies the MRF while preserving the global optimum. Our code is 400-700 faster on some graphs than the implementation of the work of Bows et al. (2006). Second, we present a new technique which takes an arbitrary input labeling and tries to improve its energy. We give theoretical characterizations of local minima of this procedure. We applied both techniques to many applications, including image segmentation, new view synthesis, super-resolution, diagram recognition, parameter learning, texture restoration, and image deconvolution. For several applications we see that we are able to find the global minimum very efficiently, and considerably outperform the original roof duality approach. In comparison to existing techniques, such as graph cut, TRW, BP, ICM, and simulated annealing, we nearly always find a lower energy. --- paper_title: Cortex segmentation - a fast variational geometric approach paper_content: An automatic cortical gray matter segmentation from three-dimensional brain images (MR or CT) is a well known problem in medical image processing. We formulate it as a geometric variational problem for propagation of two coupled bounding surfaces. An efficient numerical scheme is used to implement the geodesic active surface model. Experimental results of cortex segmentation on real three-dimensional MR data are provided. 
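Several of the region-based formulations above drive each contour by comparing voxel intensities against global statistics of the regions the contour separates. The following Python/NumPy sketch shows only that piecewise-constant "region competition" data term, under heavy simplifying assumptions (no curvature regularization, no coupling between surfaces, a binary mask in place of a signed distance function); the synthetic image, the initial box and the iteration count are placeholders, not anything taken from the cited papers.

import numpy as np

def two_region_update(image, mask, n_iter=50):
    """Piecewise-constant two-region segmentation (data term only).
    mask: boolean array, True = inside. Returns the refined mask."""
    img = image.astype(float)
    for _ in range(n_iter):
        c_in = img[mask].mean() if mask.any() else 0.0        # mean intensity inside
        c_out = img[~mask].mean() if (~mask).any() else 0.0   # mean intensity outside
        # each pixel joins the region whose mean explains it better
        new_mask = (img - c_in) ** 2 < (img - c_out) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask

# synthetic example: bright disk on a dark, noisy background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
image = 0.2 + 0.1 * rng.standard_normal((64, 64))
image[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] += 0.6
init = (xx > 20) & (xx < 44) & (yy > 20) & (yy < 44)   # rough initial box
seg = two_region_update(image, init)
print(seg.sum(), "pixels segmented as foreground")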
--- paper_title: Multiphase geometric couplings for the segmentation of neural processes paper_content: The ability to constrain the geometry of deformable models for image segmentation can be useful when information about the expected shape or positioning of the objects in a scene is known a priori. An example of this occurs when segmenting neural cross sections in electron microscopy. Such images often contain multiple nested boundaries separating regions of homogeneous intensities. For these applications, multiphase level sets provide a partitioning framework that allows for the segmentation of multiple deformable objects by combining several level set functions. Although there has been much effort in the study of statistical shape priors that can be used to constrain the geometry of each partition, none of these methods allow for the direct modeling of geometric arrangements of partitions. In this paper, we show how to define elastic couplings between multiple level set functions to model ribbon-like partitions. We build such couplings using dynamic force fields that can depend on the image content and relative location and shape of the level set functions. To the best of our knowledge, this is the first work that shows a direct way of geometrically constraining multiphase level sets for image segmentation. We demonstrate the robustness of our method by comparing it with previous level set segmentation methods. --- paper_title: Fast approximate energy minimization via graph cuts paper_content: In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum. We experimentally demonstrate the effectiveness of our approach on image restoration, stereo and motion. --- paper_title: An Efficient Optimization Framework for Multi-Region Segmentation Based on Lagrangian Duality paper_content: We introduce a multi-region model for simultaneous segmentation of medical images. In contrast to many other models, geometric constraints such as inclusion and exclusion between the regions are enforced, which makes it possible to correctly segment different regions even if the intensity distributions are identical. We efficiently optimize the model using a combination of graph cuts and Lagrangian duality which is faster and more memory efficient than current state of the art. As the method is based on global optimization techniques, the resulting segmentations are independent of initialization.
We apply our framework to the segmentation of the left and right ventricles, myocardium and the left ventricular papillary muscles in magnetic resonance imaging and to lung segmentation in full-body X-ray computed tomography. We evaluate our approach on a publicly available benchmark with competitive results. --- paper_title: Region Detection by Minimizing Intraclass Variance With Geometric Constraints, Global Optimality, and Efficient Approximation paper_content: Efficient segmentation of globally optimal surfaces in volumetric images is a central problem in many medical image analysis applications. Intraclass variance has been successfully utilized for object segmentation, for instance, in the Chan-Vese model, especially for images without prominent edges. In this paper, we study the optimization problem of detecting a region (volume) between two coupled smooth surfaces by minimizing the intraclass variance using an efficient polynomial-time algorithm. Our algorithm is based on the shape probing technique in computational geometry and computes a sequence of minimum-cost closed sets in a derived parametric graph. The method has been validated on computer-synthetic volumetric images and in X-ray CT-scanned datasets of plexiglas tubes of known sizes. Its applicability to clinical data sets was also demonstrated. In all cases, the approach yielded highly accurate results. We believe that the developed technique is of interest on its own. We expect that it can shed some light on solving other important optimization problems arising in medical imaging. Furthermore, we report an approximation algorithm which runs much faster than the exact algorithm while yielding highly comparable segmentation accuracy. --- paper_title: Volumetric layer segmentation using coupled surfaces propagation paper_content: The problem of segmenting a volumetric layer of finite thickness is encountered in several important areas within medical image analysis. Key examples include the extraction of the cortical gray matter of the brain and the left ventricle myocardium of the heart. The coupling between the two bounding surfaces of such a layer provides important information that helps to solve the segmentation problem. Here we propose a new approach of coupled surfaces propagation via level set methods, which takes into account coupling as an important constraint. By evolving two embedded surfaces simultaneously, each driven by its own image-derived information while maintaining the coupling, we capture a representation of the two bounding surfaces and achieve automatic segmentation on the layer. Characteristic gray level values, instead of image gradient information alone, are incorporated in deriving the useful image information to drive the surface propagation, which enables our approach to capture the homogeneity inside the layer. The level set implementation offers the advantage of easy initialization, computational efficiency and the ability to capture deep folds of the sulci. As a test example, we apply our approach to unedited 3D Magnetic Resonance (MR) brain images. Our algorithm automatically isolates the brain from non-brain structures and recovers the cortical gray matter. --- paper_title: Optimal Surface Segmentation in Volumetric Images-A Graph-Theoretic Approach paper_content: Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. 
We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s{\hbox{-}} t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation. --- paper_title: Globally optimal segmentation of multi-region objects paper_content: Many objects contain spatially distinct regions, each with a unique colour/texture model. Mixture models ignore the spatial distribution of colours within an object, and thus cannot distinguish between coherent parts versus randomly distributed colours. We show how to encode geometric interactions between distinct region+boundary models, such as regions being interior/exterior to each other along with preferred distances between their boundaries. With a single graph cut, our method extracts only those multi-region objects that satisfy such a combined model. We show applications in medical segmentation and scene layout estimation. Unlike Li et al. [17] we do not need “domain unwrapping” nor do we have topological limits on shapes. --- paper_title: A Variational Approach for the Segmentation of the Left Ventricle in Cardiac Image Analysis paper_content: In this paper we propose a level set method to segment MR cardiac images. Our approach is based on a coupled propagation of two cardiac contours and integrates visual information with anatomical constraints. The visual information is expressed through a gradient vector flow-based boundary component and a region term that aims at best separating the cardiac contours/regions according to their global intensity properties. In order to deal with misleading visual support, an anatomical constraint is considered that couples the propagation of the cardiac contours according to their relative distance. The resulting motion equations are implemented using a level set approach and a fast and stable numerical approximation scheme, the Additive Operator Splitting. Encouraging experimental results are provided using real data. --- paper_title: Cortex segmentation - a fast variational geometric approach paper_content: An automatic cortical gray matter segmentation from three-dimensional brain images (MR or CT) is a well known problem in medical image processing. We formulate it as a geometric variational problem for propagation of two coupled bounding surfaces. An efficient numerical scheme is used to implement the geodesic active surface model. Experimental results of cortex segmentation on real three-dimensional MR data are provided. --- paper_title: Multiphase geometric couplings for the segmentation of neural processes paper_content: The ability to constrain the geometry of deformable models for image segmentation can be useful when information about the expected shape or positioning of the objects in a scene is known a priori. 
An example of this occurs when segmenting neural cross sections in electron microscopy. Such images often contain multiple nested boundaries separating regions of homogeneous intensities. For these applications, multiphase level sets provide a partitioning framework that allows for the segmentation of multiple deformable objects by combining several level set functions. Although there has been much effort in the study of statistical shape priors that can be used to constrain the geometry of each partition, none of these methods allow for the direct modeling of geometric arrangements of partitions. In this paper, we show how to define elastic couplings between multiple level set functions to model ribbon-like partitions. We build such couplings using dynamic force fields that can depend on the image content and relative location and shape of the level set functions. To the best of our knowledge, this is the first work that shows a direct way of geometrically constraining multiphase level sets for image segmentation. We demonstrate the robustness of our method by comparing it with previous level set segmentation methods. --- paper_title: Deformable Organisms and Error Learning for Brain Segmentation paper_content: Segmentation methods for medical images may not generalize well to different data sets or tasks, hampering their utility. We attempt to remedy these issues using deformable organisms to create an easily customizable segmentation plan. This plan is developed by borrowing ideas from artificial life to govern a set of deformable models that use control processes such as sensing, proactive planning, reactive behavior, and knowledge representation to segment an image. The image may have landmarks and features specific to that dataset; these may be easily incorporated into the plan. We validate this framework by creating a plan to locate the brain in 3D magnetic resonance images of the head (skull-stripping). This is important for surgical planning, understanding how diseases affect the brain, conducting longitudinal studies, registering brain data, and creating cortical surface models. Our plan dictates how deformable organisms find features in head images and cooperatively work to segment the brain. In addition, we use a method based on Adaboost to learn and correct errors in our segmentation. We tested our method on 630 T1-weighted images from healthy young adults, evaluating results using distance and overlap error metrics based on expert gold standard segmentations. We compare our segmentations with and without the error correction step; we also compare our results to three other widely used methods: BSE, BET, and the Hybrid Watershed algorithm. Our method had the least Hausdorff distance to expert segmentations on this dataset, but included slightly more non-brain voxels (false positives). Our framework captures diverse categories of information needed for skull-stripping, and produces competitive segmentations. --- paper_title: Morphological Proximity Priors: Spatial Relationships for Semantic Segmentation paper_content: The introduction of prior knowledge into image analysis algorithms is a central challenge in computer vision. In this paper, we introduce the concept of proximity priors into semantic segmentation methods in order to penalize the proximity of certain object classes. Proximity priors are a generalization of purely global and purely local co-occurrence priors which have been introduced recently. 
The key idea is to consider pixels as adjacent if they are within a specified neighborhood of arbitrary size and shape. Respective penalties for the adjacency of various label pairs (the labels ’sheep’ and ’lion’ for example) can be learned statistically from a set of segmented images. We propose a variational approach which integrates morphological operators and derive an exact convex relaxation which can be minimized globally. Extensive numerical validations on an established semantic segmentation benchmark demonstrate that the proposed proximity priors compare favorably to existing approaches. --- paper_title: Graph cut with ordering constraints on labels and its applications paper_content: In the last decade, graph-cut optimization has been popular for a variety of pixel labeling problems. Typically graph-cut methods are used to incorporate a smoothness prior on a labeling. Recently several methods incorporated ordering constraints on labels for the application of object segmentation. An example of an ordering constraint is prohibiting a pixel with a “car wheel” label to be above a pixel with a “car roof” label. We observe that the commonly used graph-cut based alpha-expansion is more likely to get stuck in a local minimum when ordering constraints are used. For certain models with ordering constraints, we develop new graph-cut moves which we call order-preserving moves. Order-preserving moves act on all labels, unlike alpha-expansion. Although the global minimum is still not guaranteed, optimization with order-preserving moves performs significantly better than alpha-expansion. We evaluate order-preserving moves for the geometric class scene labeling (introduced by Hoiem et al.) where the goal is to assign each pixel a label such as “sky”, “ground”, etc., so ordering constraints arise naturally. In addition, we use order-preserving moves for certain simple shape priors in graph-cut segmentation, which is a novel contribution in itself. --- paper_title: Tiered scene labeling with dynamic programming paper_content: Dynamic programming (DP) has been a useful tool for a variety of computer vision problems. However its application is usually limited to problems with a one dimensional or low treewidth structure, whereas most domains in vision are at least 2D. In this paper we show how to apply DP for pixel labeling of 2D scenes with simple “tiered” structure. While there are many variations possible, for the applications we consider the following tiered structure is appropriate. An image is first divided by horizontal curves into the top, middle, and bottom regions, and the middle region is further subdivided vertically into subregions. Under these constraints a globally optimal labeling can be found using an efficient dynamic programming algorithm. We apply this algorithm to two very different tasks. The first is the problem of geometric class labeling where the goal is to assign each pixel a label such as “sky”, “ground”, and “surface above ground”. The second task involves incorporating simple shape priors for segmentation of an image into the “foreground” and “background” regions. --- paper_title: Nonmetric Priors for Continuous Multilabel Optimization paper_content: We propose a novel convex prior for multilabel optimization which allows to impose arbitrary distances between labels. Only symmetry, d(i,j)≥0 and d(i,i)=0 are required.
In contrast to previous grid based approaches for the nonmetric case, the proposed prior is formulated in the continuous setting avoiding grid artifacts. In particular, the model is easy to implement, provides a convex relaxation for the Mumford-Shah functional and yields comparable or superior results on the MSRC segmentation database comparing to metric or grid based approaches. --- paper_title: Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multi-band Image Segmentation. paper_content: We present a novel statistical and variational approach to image segmentation based on a new algorithm, named region competition. This algorithm is derived by minimizing a generalized Bayes/minimum description length (MDL) criterion using the variational principle. The algorithm is guaranteed to converge to a local minimum and combines aspects of snakes/balloons and region growing. The classic snakes/balloons and region growing algorithms can be directly derived from our approach. We provide theoretical analysis of region competition including accuracy of boundary location, criteria for initial conditions, and the relationship to edge detection using filters. It is straightforward to generalize the algorithm to multiband segmentation and we demonstrate it on gray level images, color images and texture images. The novel color model allows us to eliminate intensity gradients and shadows, thereby obtaining segmentation based on the albedos of objects. It also helps detect highlight regions. --- paper_title: A Region Merging Prior for Variational Level Set Image Segmentation paper_content: In current level set image segmentation methods, the number of regions is assumed to be known beforehand. As a result, it remains constant during the optimization of the objective functional. How to allow it to vary is an important question which has been generally avoided. This study investigates a region merging prior related to regions area to allow the number of regions to vary automatically during curve evolution, thereby optimizing the objective functional implicitly with respect to the number of regions. We give a statistical interpretation to the coefficient of this prior to balance its effect systematically against the other functional terms. We demonstrate the validity and efficiency of the method by testing on real images of intensity, color, and motion. --- paper_title: Unsupervised non-parametric region segmentation using level sets paper_content: We present a novel non-parametric unsupervised segmentation algorithm based on region competition (Zhu and Yuille, 1996); but implemented within a level sets framework (Osher and Sethian, 1988). The key novelty of the algorithm is that it can solve N ≥ 2 class segmentation problems using just one embedded surface; this is achieved by controlling the merging and splitting behaviour of the level sets according to a minimum description length (MDL) (Leclerc (1989) and Rissanen (1985)) cost function. This is in contrast to N class region-based level set segmentation methods to date which operate by evolving multiple coupled embedded surfaces in parallel (Chan et al., 2002). Furthermore, it operates in an unsupervised manner; it is necessary neither to specify the value of N nor the class models a-priori.
We argue that the level sets methodology provides a more convenient framework for the implementation of the region competition algorithm, which is conventionally implemented using region membership arrays due to the lack of a intrinsic curve representation. Finally, we generalise the Gaussian region model used in standard region competition to the non-parametric case. The region boundary motion and merge equations become simple expressions containing cross-entropy and entropy terms. --- paper_title: Level Set Segmentation With Multiple Regions paper_content: The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours. --- paper_title: Geodesic active regions for motion estimation and tracking paper_content: This paper proposes a new front propagation method to deal accurately with the challenging problem of tracking non-rigid moving objects. This is obtained by employing a geodesic active region model where the designed objective function is composed of boundary and region-based terms and optimizes the curve position with respect to motion and intensity properties. The main novelty of our approach is that we deal with the motion estimation (linear models are assumed) and the tracking problem simultaneously. In other words, the optimization problem contains a coupled set of unknown variables; the curve position and the corresponding motion model. The designed objective function is minimized using a gradient descent method; the curve is propagated towards the object boundaries under the influence of boundary, intensity and motion-based forces using a PDE, while given the curve position an incremental analytical solution is obtained for the motion model. Besides, this PDE is implemented using a level set approach where topological changes are naturally handled. Very promising experimental results are provided using real video sequences. --- paper_title: A level set framework with a shape and motion prior for segmentation and region tracking in echocardiography paper_content: We describe a level set formulation using both shape and motion prior, for both segmentation and region tracking in high frame rate echocardiographic image sequences. The proposed approach uses the following steps: registration of the prior shape, level set segmentation constrained through the registered shape and region tracking. Registration of the prior shape is expressed as a rigid or an affine transform problem, where the transform minimizing a global region-based criterion is sought. This criterion is based on image statistics and on the available estimated axial motion data. The segmentation step is then formulated through front propagation, constrained with the registered shape prior. The same region-based criterion is used both for the registration and the segmentation step. Region tracking is based on the motion field estimated from the interframe level set evolution. The proposed approach is applied to high frame rate echocardiographic sequences acquired in vivo. 
In this particular application, the prior shape is provided by a medical expert and the rigid transform is used for registration. It is shown that this approach provides consistent results in terms of segmentation and stability through the cardiac cycle. In particular, a comparison indicates that the results provided by our approach are very close to the results obtained with manual tracking performed by an expert cardiologist on a Doppler Tissue Imaging (DTI) study. These preliminary results show the ability of the method to perform region tracking and its potential for dynamic parametric imaging of the heart. --- paper_title: Real-time tracking of the left ventricle in 3D echocardiography using a state estimation approach paper_content: In this paper we present a framework for real-time tracking of deformable contours in volumetric datasets. The framework supports composite deformation models, controlled by parameters for contour shape in addition to global pose. Tracking is performed in a sequential state estimation fashion, using an extended Kalman filter, with measurement processing in information space to effectively predict and update contour deformations in real-time. A deformable B-spline surface coupled with a global pose transform is used to model shape changes of the left ventricle of the heart. ::: ::: Successful tracking of global motion and local shape changes without user intervention is demonstrated on a dataset consisting of 21 3D echocardiography recordings. Real-time tracking using the proposed approach requires a modest CPU load of 13% on a modern computer. The segmented volumes compare to a semi-automatic segmentation tool with 95% limits of agreement in the interval 4.1 ± 24.6 ml (r = 0.92). --- paper_title: PWP3D: Real-Time Segmentation and Tracking of 3D Objects paper_content: We formulate a probabilistic framework for simultaneous region-based 2D segmentation and 2D to 3D pose tracking, using a known 3D model. Given such a model, we aim to maximise the discrimination between statistical foreground and background appearance models, via direct optimisation of the 3D pose parameters. The foreground region is delineated by the zero-level-set of a signed distance embedding function, and we define an energy over this region and its immediate background surroundings based on pixel-wise posterior membership probabilities (as opposed to likelihoods). We derive the differentials of this energy with respect to the pose parameters of the 3D object, meaning we can conduct a search for the correct pose using standard gradient-based non-linear minimisation techniques. We propose novel enhancements at the pixel level based on temporal consistency and improved online appearance model adaptation. Furthermore, straightforward extensions of our method lead to multi-camera and multi-object tracking as part of the same framework. The parallel nature of much of the processing in our algorithm means it is amenable to GPU acceleration, and we give details of our real-time implementation, which we use to generate experimental results on both real and artificial video sequences, with a number of 3D models. These experiments demonstrate the benefit of using pixel-wise posteriors rather than likelihoods, and showcase the qualities, such as robustness to occlusions and motion blur (and also some failure modes), of our tracker. 
--- paper_title: Animal: Validation and Applications of Nonlinear Registration-Based Segmentation paper_content: Magnetic resonance imaging (MRI) has become the modality of choice for neuro-anatomical imaging. Quantitative analysis requires the accurate and reproducible labeling of all voxels in any given structure within the brain. Since manual labeling is prohibitively time-consuming and error-prone we have designed an automated procedure called ANIMAL (Automatic Nonlinear Image Matching and Anatomical Labeling) to objectively segment gross anatomical structures from 3D MRIs of normal brains. The procedure is based on nonlinear registration with a previously labeled target brain, followed by numerical inverse transformation of the labels to the native MRI space. Besides segmentation, ANIMAL has been applied to non-rigid registration and to the analysis of morphometric variability. In this paper, the nonlinear registration approach is validated on five test volumes, produced with simulated deformations. Experiments show that the ANIMAL recovers 64% of the nonlinear residual variability remaining after linear regist... --- paper_title: Elastically deforming 3D atlas to match anatomical brain images paper_content: To evaluate our system for elastically deforming a three-dimensional atlas to match anatomical brain images, six deformed versions of an atlas were generated. The deformed atlases were created by elastically mapping an anatomical brain atlas onto different MR brain image volumes. The mapping matches the edges of the ventricles and the surface of the brain; the resultant deformations are propagated through the atlas volume, deforming the remainder of the structures in the process. The atlas was then elastically matched to its deformed versions. The accuracy of the resultant matches was evaluated by determining the correspondence of 32 cortical and subcortical structures --- paper_title: Simultaneous monocular 2d segmentation, 3d pose recovery and 3d reconstruction paper_content: We propose a novel framework for joint 2D segmentation and 3D pose and 3D shape recovery, for images coming from a single monocular source. In the past, integration of all three has proven difficult, largely because of the high degree of ambiguity in the 2D - 3D mapping. Our solution is to learn nonlinear and probabilistic low dimensional latent spaces, using the Gaussian Process Latent Variable Models dimensionality reduction technique. These act as class or activity constraints to a simultaneous and variational segmentation --- recovery --- reconstruction process. We define an image and level set based energy function, which we minimise with respect to 3D pose and shape, 2D segmentation resulting automatically as the projection of the recovered shape under the recovered pose. We represent 3D shapes as zero levels of 3D level set embedding functions, which we project down directly to probabilistic 2D occupancy maps, without the requirement of an intermediary explicit contour stage. Finally, we detail a fast, open-source, GPU-based implementation of our algorithm, which we use to produce results on both real and artificial video sequences. --- paper_title: Deformable Medical Image Registration: A Survey paper_content: Deformable image registration is a fundamental task in medical image processing. 
Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. --- paper_title: Medical image registration paper_content: Radiological images are increasingly being used in healthcare and medical research. There is, consequently, widespread interest in accurately relating information in the different images for diagnosis, treatment and basic science. This article reviews registration techniques used to solve this problem, and describes the wide variety of applications to which these techniques are applied. Applications of image registration include combining images of the same subject from different modalities, aligning temporal sequences of images to compensate for motion of the subject between scans, image guidance during interventions and aligning images from multiple subjects in cohort studies. Current registration algorithms can, in many cases, automatically register images that are related by a rigid body transformation (i.e. where tissue deformation can be ignored). There has also been substantial progress in non-rigid registration algorithms that can compensate for tissue deformation, or align images from different subjects. Nevertheless many registration problems remain unsolved, and this is likely to continue to be an active field of research in the future. --- paper_title: A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation paper_content: In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one's training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. 
In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. ---
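The atlas-based approaches among the references above (e.g., ANIMAL) segment a new scan by registering it to a labeled atlas and then carrying the atlas labels back through the recovered transformation. The sketch below illustrates only that final label-propagation step, under the assumption that some registration algorithm has already produced a dense deformation field mapping each target voxel to atlas coordinates; the array names, shapes and the toy one-voxel shift are invented and do not come from the cited papers.

import numpy as np
from scipy.ndimage import map_coordinates

def propagate_labels(atlas_labels, deformation):
    """atlas_labels: integer label volume of shape (X, Y, Z).
    deformation: array of shape (3, X', Y', Z') giving, for every target voxel,
    the (x, y, z) atlas coordinate it maps to (output of some registration step).
    Returns a label volume in the target space."""
    # order=0 -> nearest-neighbour sampling, so discrete labels are not blended
    return map_coordinates(atlas_labels, deformation, order=0, mode='nearest')

# toy example: a 'registration' that is just a one-voxel shift along x
atlas = np.zeros((10, 10, 10), dtype=np.int32)
atlas[3:7, 3:7, 3:7] = 1                        # a cubic 'structure' labeled 1
grid = np.indices((10, 10, 10)).astype(float)   # identity map, shape (3, 10, 10, 10)
grid[0] += 1.0                                  # pretend registration found a shift
target_labels = propagate_labels(atlas, grid)
print(int(target_labels.sum()))                 # number of voxels labeled 1 after propagation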
Title: Incorporating prior knowledge in medical image segmentation: a survey Section 1: Introduction Description 1: This section provides an overview of the process of image segmentation and its applications in medical image analysis, highlighting the need for incorporating prior knowledge. Section 2: Why yet another survey paper on MIS? Description 2: This section explores the related surveys and justifies the need for this survey, focusing on the limitations of existing surveys and the advancements covered in this survey. Section 3: Traditional image segmentation methods Description 3: This section reviews traditional segmentation methods such as thresholding, region-growing, and watershed, and discusses their limitations in handling medical images. Section 4: Optimization-based image segmentation Description 4: This section introduces optimization-based methods for image segmentation, explaining the fundamentals of energy minimization and probability-based approaches. Section 5: Domain of formulation: continuous vs. discrete Description 5: This section discusses the differences between continuous and discrete domain formulations in segmentation and compares their advantages and disadvantages. Section 6: Optimization: convex (submodular) vs. non-convex (nonsubmodular) Description 6: This section explains the distinctions between convex and non-convex optimization problems and their implications for image segmentation accuracy and feasibility. Section 7: Fidelity vs. Optimizibility Description 7: This section elaborates on the trade-off between fidelity and optimizability in energy-based segmentation problems, discussing ways to balance them. Section 8: Uncertainty and fuzzy / probabilistic vs. crisp labeling Description 8: This section addresses the role of uncertainty in segmentation, comparing crisp, probabilistic, and fuzzy labeling approaches and their benefits. Section 9: Sub-pixel accuracy Description 9: This section discusses achieving sub-pixel accuracy in segmentation, focusing on methods to improve the precision of object boundaries. Section 10: Prior knowledge for targeted image segmentation Description 10: This section categorizes and reviews various types of prior knowledge used to improve image segmentation, comparing their effectiveness and methodologies. Section 11: User interaction Description 11: This section explains how user input is incorporated into segmentation frameworks to assist in characterizing and delineating the targeted object. Section 12: Appearance prior Description 12: This section discusses appearance models, including intensity, color, and texture, and their role in improving segmentation results. Section 13: Regularization Description 13: This section elaborates on regularization terms as priors in image segmentation to ensure feasible and stable solutions. Section 14: Boundary information Description 14: This section focuses on using boundary and edge information to guide segmentation, including the role of boundary polarity. Section 15: Extending binary to multi-label segmentation Description 15: This section discusses techniques for multi-label segmentation, extending from binary segmentation methods to handle multiple objects. Section 16: Shape prior Description 16: This section covers the incorporation of shape information as a prior, using geometrical, statistical, and physical models. Section 17: Topological prior Description 17: This section addresses the use of topological constraints to maintain specific topologies in segmented objects. 
Section 18: Moment prior Description 18: This section explains the use of moment constraints, including area, centroid, and higher-order moments, as priors in segmentation. Section 19: Geometrical and region interactions prior Description 19: This section reviews the role of geometrical and regional interaction constraints in improving segmentation accuracy. Section 20: Spatial distance prior Description 20: This section discusses enforcing spatial distance constraints between regions to ensure plausible segmentation results. Section 21: Adjacency prior Description 21: This section explains the incorporation of adjacency relationships and ordering constraints between labels in segmentation. Section 22: Number of regions/labels Description 22: This section discusses methods to handle cases where the number of regions or labels is not predefined, including label cost priors. Section 23: Motion prior Description 23: This section covers the use of motion and tracking information as a prior in segmenting moving objects in videos. Section 24: Model/Atlas Description 24: This section discusses atlas-based segmentation methods and their ability to encode spatial relationships between multiple tissues or structures. Section 25: Summary, discussion, and conclusions Description 25: This section summarizes the survey, discusses the trade-offs in segmentation methods, and concludes with future research directions.
TIME SYNCHRONIZATION IN WIRELESS SENSOR NETWORKS: A SURVEY
10
--- paper_title: Time synchronization in ad hoc networks paper_content: Ubiquitous computing environments are typically based upon ad hoc networks of mobile computing devices. These devices may be equipped with sensor hardware to sense the physical environment and may be attached to real world artifacts to form so-called smart things. The data sensed by various smart things can then be combined to derive knowledge about the environment, which in turn enables the smart things to "react" intelligently to their environment. For this so-called sensor fusion, temporal relationships (X happened before Y) and real-time issues (X and Y happended within a certain time interval) play an important role. Thus physical time and clock synchronization are crucial in such environments. However, due to the characteristics of sparse ad hoc networks, classical clock synchronization algorithms are not applicable in this setting. We present a time synchronization scheme that is appropriate for sparse ad hoc networks --- paper_title: Fine-grained network time synchronization using reference broadcasts paper_content: Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. --- paper_title: Time synchronization in ad hoc networks paper_content: Ubiquitous computing environments are typically based upon ad hoc networks of mobile computing devices. These devices may be equipped with sensor hardware to sense the physical environment and may be attached to real world artifacts to form so-called smart things. The data sensed by various smart things can then be combined to derive knowledge about the environment, which in turn enables the smart things to "react" intelligently to their environment. For this so-called sensor fusion, temporal relationships (X happened before Y) and real-time issues (X and Y happended within a certain time interval) play an important role. Thus physical time and clock synchronization are crucial in such environments. 
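As an illustrative aside, the receiver-receiver estimation at the heart of the reference-broadcast scheme described above can be sketched in a few lines: two receivers timestamp the arrival of the same reference beacons with their local clocks and exchange those timestamps; the phase offset is the average difference, and a least-squares fit over many beacons also recovers relative skew. This is a minimal sketch under idealized assumptions, not the authors' implementation, and the beacon timestamps are invented example values.

```python
# Minimal sketch of RBS-style receiver-receiver synchronization.
# Two receivers record the local arrival time of the same broadcast beacons and
# exchange these observations; the sender and its send time never enter the estimate.

def rbs_offset_and_skew(times_a, times_b):
    """Estimate the offset (and relative skew) of receiver B's clock w.r.t. receiver A
    from paired arrival timestamps of the same reference beacons."""
    n = len(times_a)
    assert n == len(times_b) and n >= 2
    # Simple phase offset: average pairwise difference over all beacons.
    offset = sum(b - a for a, b in zip(times_a, times_b)) / n
    # Least-squares fit b ~= skew * a + intercept captures relative clock drift.
    mean_a = sum(times_a) / n
    mean_b = sum(times_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(times_a, times_b))
    var = sum((a - mean_a) ** 2 for a in times_a)
    skew = cov / var
    intercept = mean_b - skew * mean_a
    return offset, skew, intercept

# Hypothetical microsecond timestamps: B runs ~50 us ahead of A and drifts by 10 ppm.
beacons_a = [1_000_000, 2_000_000, 3_000_000, 4_000_000]
beacons_b = [t + 50 + round(t * 10e-6) for t in beacons_a]
print(rbs_offset_and_skew(beacons_a, beacons_b))
```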
However, due to the characteristics of sparse ad hoc networks, classical clock synchronization algorithms are not applicable in this setting. We present a time synchronization scheme that is appropriate for sparse ad hoc networks --- paper_title: Fine-grained network time synchronization using reference broadcasts paper_content: Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. --- paper_title: Timing-sync protocol for sensor networks paper_content: Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pair wise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20ms. 
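To make the pairwise step of such sender-receiver protocols concrete, the classic two-way message exchange computes offset and propagation delay from four timestamps (child send, parent receive, parent reply, child receive). The sketch below is a generic illustration of that calculation under the usual symmetric-delay assumption, not code from the TPSN paper, and the timestamp values are hypothetical.

```python
# Generic two-way (sender-receiver) exchange, as used in the pairwise step of
# tree-based protocols such as TPSN. T1: child sends request, T2: parent receives,
# T3: parent sends reply, T4: child receives. Assumes a symmetric link delay.

def two_way_sync(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # amount the child must add to match the parent
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation + processing delay
    return offset, delay

# Hypothetical timestamps in microseconds: the child lags the parent by 120 us
# and the one-way delay is 15 us.
t1 = 10_000
t2 = t1 + 15 + 120            # parent receive (parent clock)
t3 = t2 + 5                   # parent reply (parent clock)
t4 = t1 + (t3 - t2) + 2 * 15  # child receive (child clock)
offset, delay = two_way_sync(t1, t2, t3, t4)
print(f"offset={offset:.1f} us, delay={delay:.1f} us")
# The child would then add `offset` to its local clock to align with the parent.
```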
We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. --- paper_title: Lightweight time synchronization for sensor networks paper_content: This paper presents lightweight tree-based synchronization (LTS) methods for sensor networks. First, a single-hop, pair-wise synchronization scheme is analyzed. This scheme requires the exchange of only three messages and has Gaussian error properties. The single-hop approach is extended to a centralized multi-hop synchronization method. Multi-hop synchronization consists of pair-wise synchronizations performed along the edges of a spanning tree. Multi-hop synchronization requires only n-1 pair-wise synchronizations for a network of n nodes. In addition, we show that the communication complexity and accuracy of multi-hop synchronization is a function of the construction and depth of the spanning tree; several spanning-tree construction algorithms are described. Further, the required refresh rate of multi-hop synchronization is shown as a function of clock drift and the accuracy of single-hop synchronization. Finally, a distributed multi-hop synchronization is presented where nodes keep track of their own clock drift and their synchronization accuracy. In this scheme, nodes initialize their own resynchronization as needed. --- paper_title: The flooding time synchronization protocol paper_content: Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms. --- paper_title: Lightweight time synchronization for sensor networks paper_content: This paper presents lightweight tree-based synchronization (LTS) methods for sensor networks. First, a single-hop, pair-wise synchronization scheme is analyzed. This scheme requires the exchange of only three messages and has Gaussian error properties. The single-hop approach is extended to a centralized multi-hop synchronization method. Multi-hop synchronization consists of pair-wise synchronizations performed along the edges of a spanning tree. 
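The skew-compensation step mentioned for FTSP is essentially a linear fit: each node keeps a small table of (local time, reference time) pairs taken from time-stamped flood messages and regresses reference time on local time, so any local reading can be converted into an estimated global time. The sketch below illustrates only that idea; table sizing, message handling, and root election of the actual protocol are omitted, and the sample points are invented.

```python
# Sketch of FTSP-style skew compensation: regress reference (root) time on local
# time over a small table of points gathered from time-stamped flood messages.

class SkewEstimator:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.points = []                       # (local_time, reference_time) pairs

    def add_reference(self, local_time, reference_time):
        self.points.append((local_time, reference_time))
        if len(self.points) > self.capacity:
            self.points.pop(0)                 # keep only the most recent entries

    def local_to_global(self, local_time):
        if not self.points:
            return local_time                  # unsynchronized: no correction yet
        if len(self.points) == 1:
            l0, g0 = self.points[0]
            return local_time + (g0 - l0)      # offset only, no skew estimate yet
        n = len(self.points)
        mean_l = sum(l for l, _ in self.points) / n
        mean_g = sum(g for _, g in self.points) / n
        cov = sum((l - mean_l) * (g - mean_g) for l, g in self.points)
        var = sum((l - mean_l) ** 2 for l, _ in self.points)
        skew = cov / var
        return mean_g + skew * (local_time - mean_l)

est = SkewEstimator()
for k in range(1, 7):                                  # hypothetical reference points
    local = 1_000_000 * k
    est.add_reference(local, local * 1.00004 + 300)    # 40 ppm skew, 300 us offset
print(round(est.local_to_global(7_000_000)))
```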
Multi-hop synchronization requires only n-1 pair-wise synchronizations for a network of n nodes. In addition, we show that the communication complexity and accuracy of multi-hop synchronization is a function of the construction and depth of the spanning tree; several spanning-tree construction algorithms are described. Further, the required refresh rate of multi-hop synchronization is shown as a function of clock drift and the accuracy of single-hop synchronization. Finally, a distributed multi-hop synchronization is presented where nodes keep track of their own clock drift and their synchronization accuracy. In this scheme, nodes initialize their own resynchronization as needed. --- paper_title: Security and privacy in sensor networks paper_content: Sensor networks offer economically viable solutions for a variety of applications. For example, current implementations monitor factory instrumentation, pollution levels, freeway traffic, and the structural integrity of buildings. Other applications include climate sensing and control in office buildings and home environmental sensing systems for temperature, light, moisture, and motion. Sensor networks are key to the creation of smart spaces, which embed information technology in everyday home and work environments. The miniature wireless sensor nodes, or motes, developed from low-cost off-the-shelf components at the University of California, Berkeley, as part of its smart dust projects, establish a self-organizing sensor network when dispersed into an environment. The privacy and security issues posed by sensor networks represent a rich field of research problems. Improving network hardware and software may address many of the issues, but others will require new supporting technologies. --- paper_title: Establishing pairwise keys in distributed sensor networks paper_content: Pairwise key establishment is a fundamental security service in sensor networks; it enables sensor nodes to communicate securely with each other using cryptographic techniques. However, due to the resource constraints on sensors, it is infeasible to use traditional key management techniques such as public key cryptography and key distribution center (KDC). To facilitate the study of novel pairwise key predistribution techniques, this paper presents a general framework for establishing pairwise keys between sensors on the basis of a polynomial-based key predistribution protocol [2]. This paper then presents two efficient instantiations of the general framework: a random subset assignment key predistribution scheme and a grid-based key predistribution scheme. The analysis in this paper indicates that these two schemes have a number of nice properties, including high probability (or guarantee) to establish pairwise keys, tolerance of node captures, and low communication overhead. Finally, this paper presents a technique to reduce the computation at sensors required by these schemes. --- paper_title: Comparing Elliptic Curve Cryptography and RSA on 8-bit CPUs paper_content: Strong public-key cryptography is often considered to be too computationally expensive for small devices if not accelerated by cryptographic hardware. We revisited this statement and implemented elliptic curve point multiplication for 160-bit, 192-bit, and 224-bit NIST/SECG curves over GF(p) and RSA-1024 and RSA-2048 on two 8-bit microcontrollers. To accelerate multiple-precision multiplication, we propose a new algorithm to reduce the number of memory accesses. 
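To give a flavour of the polynomial-based predistribution framework mentioned above, the sketch below implements the textbook symmetric bivariate-polynomial idea on which it builds: a trusted setup picks a symmetric polynomial f(x, y) over a prime field, node u stores the share f(u, y), and nodes u and v independently derive the same key f(u, v) = f(v, u). The tiny field size, degree, and node ids are illustrative assumptions only; a real deployment would use a large field and a threshold chosen against node capture.

```python
# Toy symmetric bivariate-polynomial key predistribution.
# Setup: f(x, y) = sum_{i,j} a[i][j] x^i y^j (mod P) with a[i][j] == a[j][i].
# Node u stores the univariate share g_u(y) = f(u, y); nodes u and v both
# compute g_u(v) == g_v(u) as their pairwise key.
import random

P = 2_147_483_647   # small illustrative prime field (not a realistic key size)
T = 3               # polynomial degree: tolerates up to T colluding captured nodes

def setup(seed=1):
    rng = random.Random(seed)
    a = [[0] * (T + 1) for _ in range(T + 1)]
    for i in range(T + 1):
        for j in range(i, T + 1):
            a[i][j] = a[j][i] = rng.randrange(P)   # enforce symmetry
    return a

def share_for_node(a, u):
    """Coefficients of g_u(y) = f(u, y) mod P, loaded onto node u before deployment."""
    return [sum(a[i][j] * pow(u, i, P) for i in range(T + 1)) % P for j in range(T + 1)]

def pairwise_key(share, peer_id):
    """Evaluate the stored share at the peer's id to obtain the shared key."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(share)) % P

coeffs = setup()
share_17, share_42 = share_for_node(coeffs, 17), share_for_node(coeffs, 42)
assert pairwise_key(share_17, 42) == pairwise_key(share_42, 17)   # same key on both ends
print(hex(pairwise_key(share_17, 42)))
```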
--- paper_title: Random key predistribution schemes for sensor networks paper_content: Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution's strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation. --- paper_title: Gradient clock synchronization in wireless sensor networks paper_content: Accurately synchronized clocks are crucial for many applications in sensor networks. Existing time synchronization algorithms provide on average good synchronization between arbitrary nodes, however, as we show in this paper, close-by nodes in a network may be synchronized poorly. We propose the Gradient Time Synchronization Protocol (GTSP) which is designed to provide accurately synchronized clocks between neighbors. GTSP works in a completely decentralized fashion: Every node periodically broadcasts its time information. Synchronization messages received from direct neighbors are used to calibrate the logical clock. The algorithm requires neither a tree topology nor a reference node, which makes it robust against link and node failures. The protocol is implemented on the Mica2 platform using TinyOS. We present an evaluation of GTSP on a 20-node testbed setup and simulations on larger network topologies. --- paper_title: A public-key infrastructure for key distribution in TinyOS based on elliptic curve cryptography paper_content: We present the first known implementation of elliptic curve cryptography over F/sub 2p/ for sensor networks based on the 8-bit, 7.3828-MHz MICA2 mote. Through instrumentation of UC Berkeley's TinySec module, we argue that, although secret-key cryptography has been tractable in this domain for some time, there has remained a need for an efficient, secure mechanism for distribution of secret keys among nodes. Although public-key infrastructure has been thought impractical, we argue, through analysis of our own implementation for TinyOS of multiplication of points on elliptic curves, that public-key infrastructure is, in fact, viable for TinySec keys' distribution, even on the MICA2. We demonstrate that public keys can be generated within 34 seconds, and that shared secrets can be distributed among nodes in a sensor network within the same, using just over 1 kilobyte of SRAM and 34 kilobytes of ROM. --- paper_title: Attack-resilient time synchronization for wireless sensor networks paper_content: The existing time synchronization schemes in sensor networks were not designed with security in mind, thus leaving them vulnerable to security attacks. 
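The decentralized calibration described for GTSP boils down to each node nudging its logical clock toward the average of what it hears from its direct neighbours, with no tree and no reference node. The fragment below shows that averaging step on a toy topology; it is a schematic illustration (single offset update, perfect message delivery, no rate correction), not the protocol's actual update rule, and the topology and clock values are made up.

```python
# Schematic neighbour-averaging round in the spirit of decentralized/gradient
# time synchronization: every node moves its logical clock toward the mean of
# the clock values it receives from direct neighbours.

def averaging_round(clocks, neighbours, gain=0.5):
    updated = {}
    for node, value in clocks.items():
        heard = [clocks[n] for n in neighbours[node]]
        if heard:
            target = sum(heard) / len(heard)
            updated[node] = value + gain * (target - value)
        else:
            updated[node] = value
    return updated

# Hypothetical 4-node line topology with initially disagreeing clocks (in ms).
clocks = {"A": 100.0, "B": 108.0, "C": 94.0, "D": 103.0}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
for _ in range(10):
    clocks = averaging_round(clocks, neighbours)
print({k: round(v, 2) for k, v in clocks.items()})   # neighbouring values converge
```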
In this paper, we first identify various attacks that are effective to several representative time synchronization schemes, and then focus on a specific type of attack called delay attack, which cannot be addressed by cryptographic techniques. Next we propose two approaches to detect and accommodate the delay attack. Our first approach uses the generalized extreme studentized deviate (GESD) algorithm to detect multiple outliers introduced by the compromised nodes; our second approach uses a threshold derived using a time transformation technique to filter out the outliers. Finally we show the effectiveness of these two schemes through extensive simulations --- paper_title: The flooding time synchronization protocol paper_content: Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms. --- paper_title: Optimal clock synchronization in networks paper_content: Having access to an accurate time is a vital building block in all networks; in wireless sensor networks even more so, because wireless media access or data fusion may depend on it. Starting out with a novel analysis, we show that orthodox clock synchronization algorithms make fundamental mistakes. The state-of-the-art clock synchronization algorithm FTSP exhibits an error that grows exponentially with the size of the network, for instance. Since the involved parameters are small, the error only becomes visible in midsize networks of about 10--20 nodes. In contrast, we present PulseSync, a new clock synchronization algorithm that is asymptotically optimal. We evaluate PulseSync on a Mica2 testbed, and by simulation on larger networks. On a 20 node network, the prototype implementation of PulseSync outperforms FTSP by a factor of 5. Theory and simulation show that for larger networks, PulseSync offers an accuracy which is several orders of magnitude better than FTSP. To round off the presentation, we investigate several optimization issues, e.g. media access and local skew. --- paper_title: Wireless integrated network sensors: Low power systems on a chip paper_content: Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for transportation, manufacturing, health care, environmental monitoring, and safety and security. WINS combine sensing, signal processing, decision capability, and wireless networking capability in a compact, low power system. 
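As a simplified stand-in for the outlier-rejection idea behind the delay-attack defences described above, the sketch below filters a set of pairwise offset samples with a median/absolute-deviation threshold before averaging, so that a few maliciously delayed exchanges cannot drag the final estimate. The cited work uses a GESD test and a threshold derived by time transformation, which are more principled than this toy filter; the sample values are hypothetical.

```python
# Toy delay-attack mitigation: reject offset samples that deviate too far from
# the median before combining them (a simplified illustration, not the GESD test).
import statistics

def robust_offset(samples, k=3.0):
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples) or 1e-9
    kept = [s for s in samples if abs(s - med) <= k * mad]
    return sum(kept) / len(kept), kept

# Hypothetical offsets (us) from repeated two-way exchanges; two exchanges were
# delayed by an attacker sitting on the link.
samples = [118, 121, 119, 120, 122, 480, 117, 530]
estimate, kept = robust_offset(samples)
print(f"estimate={estimate:.1f} us using {len(kept)} of {len(samples)} samples")
```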
WINS systems combine microsensor technology with low power sensor interface, signal processing, and RF communication circuits. The need for low cost presents engineering challenges for implementation of these systems in conventional digital CMOS technology. This paper describes micropower data converter, digital signal processing systems, and weak inversion CMOS RF circuits. The digital signal processing system relies on a continuously operating spectrum analyzer. Finally, the weak inversion CMOS RF systems are designed to exploit the properties of high-Q inductors to enable low power operation. This paper reviews system architecture and low power circuits for WINS. --- paper_title: Fine-grained network time synchronization using reference broadcasts paper_content: Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. --- paper_title: Delay Measurement Time Synchronization for Wireless Sensor Networks paper_content: Amino acid fermentation is conducted by fermenting bacterial cells in a culture medium in a fermentor and separating fermentation solution withdrawn from the fermentor into a solution containing said bacterial cells and a solution not containing bacterial cells by a cell separator. The solution containing said bacterial cells being circulated from said cell separator to said fermenter by circulating means to perform amino acid fermentation continuously, and bubbles being removed from said fermentation solution by a bubble separator before said fermentation solution is fed to said circulating means and said cell separator. --- paper_title: Directed diffusion: a scalable and robust communication paradigm for sensor networks paper_content: Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. 
Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network. --- paper_title: Global clock synchronization in sensor networks paper_content: Global synchronization is important for many sensor network applications that require precise mapping of collected sensor data with the time of the events, for example, in tracking and surveillance. It also plays an important role in energy conservation in MAC layer protocols. This paper describes four methods to achieve global synchronization in a sensor network: a node-based approach, a hierarchical cluster-based method, a diffusion-based method, and a fault-tolerant diffusion-based method. The diffusion-based protocol is fully localized. We present two implementations of the diffusion-based protocol for synchronous and asynchronous systems and prove its convergence. Finally, we show that, by imposing some constraints on the sensor network, global clock synchronization can be achieved in the presence of malicious nodes that exhibit Byzantine failures. --- paper_title: Continuous clock synchronization in wireless real-time applications paper_content: Continuous clock synchronization avoids unpredictable instantaneous corrections of clock values. This is usually achieved by spreading the clock correction over the synchronization interval. In the context of wireless real time applications, a protocol achieving continuous clock synchronization must tolerate message losses and should have a low overhead in terms of the number of messages. The paper presents a clock synchronization protocol for continuous clock synchronization in wireless real time applications. It extends the IEEE 802.11 standard for wireless local area networks. It provides continuous clock synchronization, improves the precision by exploiting the tightness of the communication medium, and tolerates message losses. Continuous clock synchronization is achieved with an advanced algorithm adjusting the clock rates. We present the design of the protocol, its mathematical analysis, and measurements of a driver level implementation of the protocol on Windows NT. --- paper_title: Wireless integrated network sensors: Low power systems on a chip paper_content: Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for transportation, manufacturing, health care, environmental monitoring, and safety and security. WINS combine sensing, signal processing, decision capability, and wireless networking capability in a compact, low power system. WINS systems combine microsensor technology with low power sensor interface, signal processing, and RF communication circuits. The need for low cost presents engineering challenges for implementation of these systems in conventional digital CMOS technology. This paper describes micropower data converter, digital signal processing systems, and weak inversion CMOS RF circuits. The digital signal processing system relies on a continuously operating spectrum analyzer. Finally, the weak inversion CMOS RF systems are designed to exploit the properties of high-Q inductors to enable low power operation. This paper reviews system architecture and low power circuits for WINS. ---
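The "spread the correction over the interval" idea behind continuous clock synchronization can be pictured with a tiny logical-clock wrapper: instead of stepping the clock by the measured offset, the node adjusts its rate so the error is absorbed smoothly before the next resynchronization and reported time stays monotonic. This is only a schematic sketch of that behaviour with made-up numbers, not the protocol's rate-adjustment algorithm.

```python
# Sketch of continuous (amortized) clock correction: rather than jumping by the
# measured offset, the logical clock temporarily speeds up or slows down so the
# correction is spread over the synchronization interval.

class AmortizingClock:
    def __init__(self):
        self.base_hw = 0.0        # hardware time at the last update
        self.base_logical = 0.0   # logical time at that point
        self.rate = 1.0           # current logical-time rate

    def read(self, hw_time):
        return self.base_logical + self.rate * (hw_time - self.base_hw)

    def apply_correction(self, hw_time, offset, interval):
        """Absorb `offset` gradually over the next `interval` hardware seconds."""
        self.base_logical = self.read(hw_time)
        self.base_hw = hw_time
        self.rate = 1.0 + offset / interval

clock = AmortizingClock()
clock.apply_correction(hw_time=10.0, offset=0.030, interval=5.0)  # found to be 30 ms behind
for t in (10.0, 12.5, 15.0):
    print(f"hw={t:.1f}s logical={clock.read(t):.4f}s")  # drifts smoothly onto the corrected time
```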
Title: TIME SYNCHRONIZATION IN WIRELESS SENSOR NETWORKS: A SURVEY Section 1: INTRODUCTION Description 1: Write an introduction to the time synchronization problem in wireless sensor networks, its importance, and the aim of the survey. Section 2: TIME SYNCHRONIZATION PROBLEM Description 2: Describe the core issues related to time synchronization in sensor networks, including clock drift and initialization inconsistencies. Section 3: RELATED WORK ON TIME-SYNC SCHEMES Description 3: Discuss various time synchronization schemes like NTP, RBS, TPSN, and FTSP, highlighting their methodologies and differences. Section 4: UNCERTAINTY IN THE SYNC PACKET Description 4: Explain the uncertainty factors in synchronization packets and how different protocols address or minimize these uncertainties. Section 5: SECURE TIME SYNCHRONIZATION Description 5: Examine the need for secure time synchronization, the vulnerabilities of existing schemes, and potential security measures. Section 6: ENERGY CONSUMPTION OF SYNCHRONIZATION SCHEMES Description 6: Analyze the energy consumption implications of different synchronization schemes and compare their efficiencies. Section 7: POST-FACTO SYNCHRONIZATION Description 7: Introduce the concept of post-facto synchronization, its benefits, limitations, and specific applications. Section 8: GLOBAL CLOCK SYNCHRONIZATION Description 8: Discuss the importance and challenges of achieving global clock synchronization in sensor networks, including different global sync methods. Section 9: GROUP SYNCHRONIZATION Description 9: Describe the concept of group synchronization, its necessity in certain applications, and the techniques used to achieve it. Section 10: CONCLUSION Description 10: Summarize the key points of the survey, reflecting on the importance of precise and secure time synchronization, and its future research directions.
A Survey of Cache Bypassing Techniques
12
--- paper_title: A Survey of Techniques for Managing and Leveraging Caches in GPUs paper_content: Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as unique architecture of GPU, rise of CPU–GPU heterogeneous computing, etc., demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey several architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide the readers insights into cache management techniques for GPUs and motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow. --- paper_title: Design and performance evaluation of a cache assist to implement selective caching paper_content: Conventional cache architectures exploit locality, but do so rather blindly. By forcing all references through a single structure, the cache's effectiveness on many references is reduced. This paper presents a cache assist, namely the annex cache, which implements a selective caching scheme. Except for filling a main cache at cold start, all entries come to the cache via the annex cache. Items referenced only rarely will be excluded from the main cache, eliminating several conflict misses. The basic premise is that an item deserves to be in the main cache only if it can prove its right to exist in the main cache by demonstrating locality. The annex cache combines the features of Jouppi's (1990) victim caches and McFarling's (1992) cache exclusion schemes. Extensive simulation studies for annex and victim caches using a variety of SPEC programs are presented in the paper. Annex caches were observed to be significantly better than conventional caches, better than victim caches in certain cases, and comparable to victim caches in other cases. --- paper_title: Adaptive Cache Management for Energy-Efficient GPU Computing paper_content: With the SIMT execution model, GPUs can hide memory latency through massive multithreading for many applications that have regular memory access patterns. To support applications with irregular memory access patterns, cache hierarchies have been introduced to GPU architectures to capture temporal and spatial locality and mitigate the effect of irregular accesses. However, GPU caches exhibit poor efficiency due to the mismatch of the throughput-oriented execution model and its cache hierarchy design, which limits system performance and energy-efficiency. The massive amount of memory requests generated by GPUs causes cache contention and resource congestion. Existing CPU cache management policies that are designed for multicore systems can be suboptimal when directly applied to GPU caches. We propose a specialized cache management policy for GPGPUs. The cache hierarchy is protected from contention by the bypass policy based on reuse distance. Contention and resource congestion are detected at runtime. To avoid oversaturating on-chip resources, the bypass policy is coordinated with warp throttling to dynamically control the active number of warps.
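To make the reuse-distance criterion just described concrete, the toy model below bypasses a fill when the address's previously observed reuse distance (the number of distinct blocks touched between consecutive uses) exceeds what the set could hold anyway, while blocks with short reuse distances are inserted normally. It is a deliberately simplified single-set illustration of the criterion, not the policy evaluated in the paper, and the coordinated warp-throttling part is omitted.

```python
# Toy reuse-distance-driven bypass for one fully associative cache set.
# A block whose last observed reuse distance exceeds the set's capacity would be
# evicted before its next use anyway, so the fill is bypassed instead of polluting
# the set. (Single set, LRU, perfect history: an illustration, not a real design.)
from collections import OrderedDict

class BypassingSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.lines = OrderedDict()      # address -> None, kept in LRU order
        self.last_use_rank = {}         # address -> access index of previous use
        self.tick = 0
        self.history = []               # access stream used to measure reuse distance

    def _reuse_distance(self, addr):
        if addr not in self.last_use_rank:
            return None                                   # never seen before
        since = self.history[self.last_use_rank[addr] + 1:]
        return len(set(since))                            # distinct blocks in between

    def access(self, addr):
        dist = self._reuse_distance(addr)
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)
        else:
            bypass = dist is not None and dist > self.ways
            if not bypass:
                if len(self.lines) >= self.ways:
                    self.lines.popitem(last=False)        # evict the LRU line
                self.lines[addr] = None
        self.last_use_rank[addr] = self.tick
        self.history.append(addr)
        self.tick += 1
        return "hit" if hit else "miss"

s = BypassingSet(ways=2)
stream = ["A", "B", "A", "C", "D", "E", "B", "A", "B", "A"]   # A/B reused, C/D/E streaming
print([(addr, s.access(addr)) for addr in stream])
```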
We also propose a simple predictor to dynamically estimate the optimal number of active warps that can take full advantage of the cache space and on-chip resources. Experimental results show that cache efficiency is significantly improved and on-chip resources are better utilized for cache sensitive benchmarks. This results in a harmonic mean IPCimprovement of 74% and 17% (maximum 661% and 44% IPCimprovement), compared to the baseline GPU architecture and optimal static warp throttling, respectively. --- paper_title: Hardware identification of cache conflict misses paper_content: This paper describes the Miss Classification Table, a simple mechanism that enables the processor or memory controller to identify each cache miss as either a conflict miss or a capacity (non-conflict) miss. The miss classification table works by storing part of the tag of the most recently evicted line of a cache set. If the next miss to that cache set has a matching tag, it is identified as a conflict miss. This technique correctly identifies 87% of misses in the worst case. Several applications of this information are demonstrated, including improvements to victim caching, next-line prefetching, cache exclusion, and a pseudo-associative cache. This paper also presents the Adaptive Miss Buffer (AMB), which combines several of these techniques, targeting each miss with the most appropriate optimization, all within a single small miss buffer. The AMB's combination of techniques achieves 16% better performance than any single technique alone. --- paper_title: SBAC: a statistics based cache bypassing method for asymmetric-access caches paper_content: Asymmetric-access caches with emerging technologies, such as STT-RAM and RRAM, have become very competitive designs recently. Since the write operations consume more time and energy than read ones, data should bypass an asymmetric-access cache unless the locality can justify the data allocation. However, the asymmetric-access property is not well addressed in prior bypassing approaches, which are not energy efficient and induce non-trivial operation overhead. To overcome these problems, we propose a cache bypassing method, SBAC, based on data locality statistics of the whole cache rather than a single cache line's signature. We observe that the decision-making of SBAC is highly accurate and the optimization technique for SBAC works efficiently for multiple applications running concurrently. Experiments show that SBAC cuts down overall energy consumption by 22.3%, and reduces execution time by 8.3%. Compared to prior approaches, the design overhead of SBAC is trivial. --- paper_title: Real-Time GPU Computing: Cache or No Cache? paper_content: Recent Graphics Processing Units (GPUs) have employed cache memories to boost performance. However, cache memories are well known to be harmful to time predictability for CPUs. For high-performance real-time systems using GPUs, it remains unknown whether or not cache memories should be employed. In this paper, we quantitatively compare the performance for GPUs with and without caches, and find that GPUs without the cache actually lead to better average-case performance, with higher time predictability. However, we also study a profiling-based cache bypassing method, which can use the L1 data cache more efficiently to achieve better average-case performance than that without the cache. Therefore, it seems still beneficial to employ caches for real-time computing on GPUs. 
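The miss-classification mechanism described above is simple enough to model directly: each set remembers part of the tag of the line it most recently evicted, and a subsequent miss whose tag matches that remembered value is flagged as a conflict miss rather than a non-conflict (capacity or compulsory) miss. The sketch below is a behavioural model of that bookkeeping on top of a small set-associative LRU cache; the geometry, partial-tag width, and address trace are illustrative assumptions.

```python
# Behavioural model of a miss classification table (MCT): each set stores the
# partial tag of its most recently evicted line; a miss whose tag matches is
# counted as a conflict miss, otherwise as a non-conflict miss.
from collections import OrderedDict

class MCTCache:
    def __init__(self, num_sets=4, ways=2, block=64, partial_tag_bits=8):
        self.num_sets, self.ways, self.block = num_sets, ways, block
        self.mask = (1 << partial_tag_bits) - 1
        self.sets = [OrderedDict() for _ in range(num_sets)]   # tag -> None (LRU order)
        self.mct = [None] * num_sets                           # last evicted partial tag
        self.conflict_misses = self.other_misses = self.hits = 0

    def access(self, addr):
        block_addr = addr // self.block
        index, tag = block_addr % self.num_sets, block_addr // self.num_sets
        lines = self.sets[index]
        if tag in lines:
            lines.move_to_end(tag)
            self.hits += 1
            return "hit"
        kind = "conflict" if self.mct[index] == (tag & self.mask) else "non-conflict"
        if kind == "conflict":
            self.conflict_misses += 1
        else:
            self.other_misses += 1
        if len(lines) >= self.ways:
            victim, _ = lines.popitem(last=False)
            self.mct[index] = victim & self.mask               # remember the evicted partial tag
        lines[tag] = None
        return kind

cache = MCTCache()
# Hypothetical addresses: 0x0000, 0x2000 and 0x4000 all map to the same set and keep
# evicting each other, so the repeated misses are classified as conflicts.
trace = [0x0000, 0x2000, 0x4000, 0x0000, 0x2000, 0x4000, 0x0000]
print([hex(a) + ":" + cache.access(a) for a in trace])
print("conflict:", cache.conflict_misses, "other:", cache.other_misses, "hits:", cache.hits)
```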
--- paper_title: Improving cache performance by selective cache bypass paper_content: A technique is proposed to prevent the return of infrequently used items to cache after they are bumped from it. Simulations have shown that the return of these items, called cache pollution, typically degrade cache-based system performance (average reference time) by 10% to 30%. The technique proposed involves the use of hardware called a bypass-cache, which, under program control, will determine whether each reference should be through the cache or should bypass the cache and reference main memory directly. Several inexpensive heuristics for the compiler to determine how to make each reference are given. It is shown that much of the performance loss can be regained. --- paper_title: Haswell: A Family of IA 22 nm Processors paper_content: We describe the 4th Generation Intel® Core™ processor family (codenamed “Haswell”) implemented on Intel® 22 nm technology and intended to support form factors from desktops to fan-less Ultrabooks™. Performance enhancements include a 102 GB/sec L4 eDRAM cache, hardware support for transactional synchronization, and new FMA instructions that double FP operations per clock. Power improvements include Fully-Integrated Voltage Regulators (~50% battery life extension), new low-power states (95% standby power savings), optimized MCP I/O system (1.0-1.22 pJ/b), and improved DDR I/O circuits (40% active and 100x idle power savings). Other improvements include full-platform optimization via integrated display I/O interfaces. --- paper_title: A study of replacement algorithms for a virtual-storage computer paper_content: One of the basic limitations of a digital computer is the size of its available memory. In most cases, it is neither feasible nor economical for a user to insist that every problem program fit into memory. The number of words of information in a program often exceeds the number of cells (i.e., word locations) in memory. The only way to solve this problem is to assign more than one program word to a cell. Since a cell can hold only one word at a time, extra words assigned to the cell must be held in external storage. Conventionally, overlay techniques are employed to exchange memory words and external-storage words whenever needed; this, of course, places an additional planning and coding burden on the programmer. For several reasons, it would be advantageous to rid the programmer of this function by providing him with a “virtual” memory larger than his program. An approach that permits him to use a sufficiently large address range can accomplish this objective, assuming that means are provided for automatic execution of the memory-overlay functions. --- paper_title: Improving Cache Management Policies Using Dynamic Reuse Distances paper_content: Cache management policies such as replacement, bypass, or shared cache partitioning have been relying on data reuse behavior to predict the future. This paper proposes a new way to use dynamic reuse distances to further improve such policies. A new replacement policy is proposed which prevents replacing a cache line until a certain number of accesses to its cache set, called a Protecting Distance (PD). The policy protects a cache line long enough for it to be reused, but not beyond that to avoid cache pollution. This can be combined with a bypass mechanism that also relies on dynamic reuse analysis to bypass lines with less expected reuse. A miss fetch is bypassed if there are no unprotected lines.
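A bare-bones rendering of that protecting-distance rule: every line in a set is protected for the next PD accesses to that set, a miss may only replace an unprotected line, and if all lines are still protected the fill bypasses the cache. The PD value in this sketch is a fixed illustrative constant, whereas the cited policy computes it dynamically from reuse-distance histograms.

```python
# Minimal protecting-distance (PD) replacement/bypass for a single cache set.
# Each resident line is protected for the next PD accesses to the set; a miss can
# only evict an unprotected line, otherwise the requested block bypasses the cache.

class PDSet:
    def __init__(self, ways=4, pd=6):
        self.ways, self.pd = ways, pd
        self.lines = {}                       # tag -> remaining protection

    def access(self, tag):
        for t in self.lines:                  # every set access ages all resident lines
            self.lines[t] = max(0, self.lines[t] - 1)
        if tag in self.lines:
            self.lines[tag] = self.pd         # hit: re-protect the line
            return "hit"
        unprotected = [t for t, p in self.lines.items() if p == 0]
        if len(self.lines) < self.ways or unprotected:
            if len(self.lines) >= self.ways:
                del self.lines[unprotected[0]]
            self.lines[tag] = self.pd
            return "miss-insert"
        return "miss-bypass"                  # all lines still protected

s = PDSet(ways=2, pd=3)
for tag in ["A", "B", "C", "A", "B", "D", "E", "A"]:   # hypothetical tag stream
    print(tag, s.access(tag))
```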
A hit rate model based on dynamic reuse history is proposed and the PD that maximizes the hit rate is dynamically computed. The PD is recomputed periodically to track a program's memory access behavior and phases. Next, a new multi-core cache partitioning policy is proposed using the concept of protection. It manages lifetimes of lines from different cores (threads) in such a way that the overall hit rate is maximized. The average per-thread lifetime is reduced by decreasing the thread's PD. The single-core PD-based replacement policy with bypass achieves an average speedup of 4.2% over the DIP policy, while the average speedups over DIP are 1.5% for dynamic RRIP (DRRIP) and 1.6% for sampling dead-block prediction (SDP). The 16-core PD-based partitioning policy improves the average weighted IPC by 5.2%, throughput by 6.4% and fairness by 9.9% over thread-aware DRRIP (TA-DRRIP). The required hardware is evaluated and the overhead is shown to be manageable. --- paper_title: DASCA: Dead Write Prediction Assisted STT-RAM Cache Architecture paper_content: Spin-Transfer Torque RAM (STT-RAM) has been considered as a promising candidate for on-chip last-level caches, replacing SRAM for better energy efficiency, smaller die footprint, and scalability. However, it also introduces several new challenges into last-level cache design that need to be overcome for feasible deployment of STT-RAM caches. Among other things, mitigating the impact of slow and energy-hungry write operations is of the utmost importance. In this paper, we propose a new mechanism to reduce write activities of STT-RAM last-level caches. The key observation is that a significant amount of data written to last-level caches is not actually re-referenced again during the lifetime of the corresponding cache blocks. Such write operations, which we call dead writes, can bypass the cache without incurring extra misses by definition. Based on this, we propose Dead Write Prediction Assisted STT-RAM Cache Architecture (DASCA), which predicts and bypasses dead writes for write energy reduction. For this purpose, we first propose a novel classification of dead writes, which is composed of dead-on-arrival fills, dead-value fills, and closing writes, as a theoretical model for redundant write elimination. On top of that, we present a dead write predictor based on a state-of-the-art dead block predictor. Evaluations show that our architecture achieves an energy reduction of 68% (62%) in last-level caches and an additional energy reduction of 10% (16%) in main memory and even improves system performance by 6% (14%) on average compared to the STT-RAM baseline in a single-core (quad-core) system. --- paper_title: A survey of architectural techniques for improving cache power efficiency paper_content: Modern processors are using increasingly larger sized on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. 
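The dead-write argument above hinges on predicting, at write time, whether the incoming block will ever be read again from the cache. One common way to approximate this is a table of saturating counters indexed by a signature (for example, the program counter of the writing instruction) and trained when blocks are evicted; the fragment below sketches that style of predictor and the resulting bypass decision. It is a much-simplified stand-in for the paper's three-way dead-write classification, and the signatures and thresholds are invented.

```python
# Simplified dead-write predictor: saturating counters indexed by a write's
# signature (e.g. the PC of the writing instruction). If writes from a signature
# are usually never re-read before eviction, later writes from it bypass the
# (write-expensive) cache and go straight to the next level.

class DeadWritePredictor:
    def __init__(self, threshold=2, max_count=3):
        self.counters = {}                    # signature -> saturating counter
        self.threshold, self.max_count = threshold, max_count

    def should_bypass(self, signature):
        return self.counters.get(signature, 0) >= self.threshold

    def train_on_eviction(self, signature, was_reread):
        c = self.counters.get(signature, 0)
        if was_reread:
            self.counters[signature] = max(0, c - 1)                 # useful write observed
        else:
            self.counters[signature] = min(self.max_count, c + 1)    # dead write observed

pred = DeadWritePredictor()
# Hypothetical training: writes from PC 0x40ab are evicted unread twice in a row.
pred.train_on_eviction(0x40ab, was_reread=False)
pred.train_on_eviction(0x40ab, was_reread=False)
pred.train_on_eviction(0x40f0, was_reread=True)
print(pred.should_bypass(0x40ab), pred.should_bypass(0x40f0))   # True False
```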
For providing an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches. --- paper_title: Counter-Based Cache Replacement and Bypassing Algorithms paper_content: Recent studies have shown that, in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, motivating the design of alternative replacement algorithms to improve cache performance. In LRU replacement, a line, after its last use, remains in the cache for a long time until it becomes the LRU line. Such deadlines unnecessarily reduce the cache capacity available for other lines. In addition, in multilevel caches, temporal reuse patterns are often inverted, showing in the L1 cache but, due to the filtering effect of the L1 cache, not showing in the L2 cache. At the L2, these lines appear to be brought in the cache but are never reaccessed until they are replaced. These lines unnecessarily pollute the L2 cache. This paper proposes a new counter-based approach to deal with the above problems. For the former problem, we predict lines that have become dead and replace them early from the L2 cache. For the latter problem, we identify never-reaccessed lines, bypass the L2 cache, and place them directly in the L1 cache. Both techniques are achieved through a single counter-based mechanism. In our approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest such as certain cache accesses occurs. When the counter reaches a threshold, the line ";expires"; and becomes replaceable. Each line's threshold is unique and is dynamically learned. We propose and evaluate two new replacement algorithms: Access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 capacity-constrained SPEC2000 benchmarks by up to 48 percent and 15 percent on average (7 percent on average for the whole 21 Spec2000 benchmarks). Cache bypassing further reduces L2 cache pollution and improves the average speedups to 17 percent (8 percent for the whole 21 Spec2000 benchmarks). --- paper_title: Bypass and insertion algorithms for exclusive last-level caches paper_content: Inclusive last-level caches (LLCs) waste precious silicon estate due to cross-level replication of cache blocks. As the industry moves toward cache hierarchies with larger inner levels, this wasted cache space leads to bigger performance losses compared to exclusive LLCs. However, exclusive LLCs make the design of replacement policies more challenging. While in an inclusive LLC a block can gather a filtered access history, this is not possible in an exclusive design because the block is de-allocated from the LLC on a hit. As a result, the popular least-recently-used replacement policy and its approximations are rendered ineffective and proper choice of insertion ages of cache blocks becomes even more important in exclusive designs. On the other hand, it is not necessary to fill every block into an exclusive LLC. This is known as selective cache bypassing and is not possible to implement in an inclusive LLC because that would violate inclusion. This paper explores insertion and bypass algorithms for exclusive LLCs. 
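Focusing on the bypass half of the counter-based mechanism described above, the sketch below keeps one bit of history per block address: if a block was evicted without ever being re-accessed, its next fill skips this cache level (it would be placed directly in L1 in the scheme described). The real proposal learns per-line counter thresholds and uses a single counter mechanism for both early replacement and bypassing; this fragment only illustrates the never-reaccessed bypass decision, and a real design would also periodically give bypassed blocks another chance to re-learn.

```python
# Toy 'never-reaccessed' bypass: blocks that were evicted from the modelled L2
# without a single re-access are remembered, and their next fill bypasses L2.

class L2BypassFilter:
    def __init__(self):
        self.reaccessed = {}        # tag -> was it touched again while resident last time?

    def on_fill(self, tag):
        bypass = self.reaccessed.get(tag) is False   # known dead-on-arrival behaviour
        if not bypass:
            self.reaccessed[tag] = False             # start tracking this residency
        return bypass

    def on_hit(self, tag):
        self.reaccessed[tag] = True

    def on_evict(self, tag):
        pass                                         # history is kept for the next fill

f = L2BypassFilter()
print(f.on_fill("X"))    # False: no history yet, fill L2 normally
f.on_evict("X")          # evicted without any hit -> remembered as never reaccessed
print(f.on_fill("X"))    # True: bypass L2 this time
print(f.on_fill("Y"))    # False
f.on_hit("Y"); f.on_evict("Y")
print(f.on_fill("Y"))    # False: Y showed reuse, keep caching it
```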
Our detailed execution-driven simulation results show that a combination of our best insertion and bypass policies delivers an improvement of up to 61.2% and on average (geometric mean) 3.4% in terms of instructions retired per cycle (IPC) for 97 single-threaded dynamic instruction traces spanning selected SPEC 2006 and server applications, running on a 2 MB 16-way exclusive LLC compared to a baseline exclusive design in the presence of well-tuned multi-stream hardware prefetchers. The corresponding improvements in throughput for 35 4-way multi-programmed workloads running with an 8 MB 16-way shared exclusive LLC are 20.6% (maximum) and 2.5% (geometric mean). --- paper_title: EnCache: A DYNAMIC PROFILING-BASED RECONFIGURATION TECHNIQUE FOR IMPROVING CACHE ENERGY EFFICIENCY paper_content: With each CMOS technology generation, leakage energy consumption has been dramatically increasing and hence, managing leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called profiling cache, which dynamically predicts energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy efficient configuration. EnCache uses dynamic cache reconfiguration and hence, it does not require offline profiling or tuning the parameter for each application. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. The experiments performed with an ×86-64 simulator and workloads from SPEC2006 suite confirm that EnCache provides larger energy saving than a conventional energy saving scheme. For single core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively. --- paper_title: Energy Savings via Dead Sub-Block Prediction paper_content: Cache memories have traditionally been designed to exploit spatial locality by fetching entire cache lines from memory upon a miss. However, recent studies have shown that often the number of sub-blocks within a line that are actually used is low. Furthermore, those sub-blocks that are used are accessed only a few times before becoming dead (i.e., never accessed again). This results in considerable energy waste since 1) data not needed by the processor is brought into the cache, and 2) data is kept alive in the cache longer than necessary. We propose the Dead Sub-Block Predictor (DSBP) to predict which sub-blocks of a cache line will be actually used and how many times it will be used in order to bring into the cache only those sub-blocks that are necessary, and power them off after they are touched the predicted number of times. We also use DSBP to identify dead lines (i.e., all sub-blocks off) and augment the existing replacement policy by prioritizing dead lines for eviction. Our results show a 24% energy reduction for the whole cache hierarchy when averaged over the SPEC2000, SPEC2006 and NAS-NPB benchmarks. 
--- paper_title: A Survey of Software Techniques for Using Non-Volatile Memories for Storage and Main Memory Systems paper_content: Non-volatile memory (NVM) devices, such as Flash, phase change RAM, spin transfer torque RAM, and resistive RAM, offer several advantages and challenges when compared to conventional memory technologies, such as DRAM and magnetic hard disk drives (HDDs). In this paper, we present a survey of software techniques that have been proposed to exploit the advantages and mitigate the disadvantages of NVMs when used for designing memory systems, and, in particular, secondary storage (e.g., solid state drive) and main memory. We classify these software techniques along several dimensions to highlight their similarities and differences. Given that NVMs are growing in popularity, we believe that this survey will motivate further research in the field of software technology for NVMs. --- paper_title: OAP: an obstruction-aware cache management policy for STT-RAM last-level caches paper_content: Emerging memory technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor designs. Among various emerging memory technologies, Spin-Torque Transfer RAM (STT-RAM) has the benefits of fast read latency, low leakage power, and high density, and therefore has been investigated as a promising candidate for last-level cache (LLC)1. One of the major disadvantages for STT-RAM is the latency and energy overhead associated with the write operations. In particular, a long-latency write operation to STT-RAM cache may obstruct other cache accesses and result in severe performance degradation. Consequently, mitigation techniques to minimize the write overhead are required in order to successfully adopt this new technology for cache design. In this paper, we propose an obstruction-aware cache management policy called OAP. OAP monitors the cache to periodically detect LLC-obstruction processes, and manage the cache accesses from different processes. The experimental results on a 4-core architecture with an 8MB STT-RAM L3 cache shows that the performance can be improved by 14% on average and up to 42%, with a reduction of energy consumption by 64%2. --- paper_title: A survey of power management techniques for phase change memory paper_content: The demands of larger memory capacity in high-performance computing systems have motivated the researchers to explore alternatives of dynamic random access memory (DRAM). Since phase change memory (PCM) provides high-density, good scalability and non-volatile data storage, it has received significant amount of attention in recent years. A crucial bottleneck in wide-spread adoption of PCM, however, is that its write latency and energy are very high. Recently, several architecture and system-level techniques have been proposed to address this issue. In this paper, we survey several techniques for managing power consumption of PCM. We also classify these techniques based on their characteristics to highlight their similarities and differences. The aim of this paper is to provide insights to researchers into working of PCM power-management techniques and also motivate them to propose even better techniques for designing future 'green' PCM-based main memory systems. --- paper_title: A Survey Of Techniques for Architecting DRAM Caches paper_content: Recent trends of increasing core-count and memory/bandwidth-wall have led to major overhauls in chip architecture. 
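A schematic version of the periodic obstruction detection described above: per-core access and hit counters are sampled every epoch, and a core that issues many requests but gets almost no reuse out of the LLC is marked as an obstructing process whose fills bypass the LLC in the following epoch. The thresholds, epoch length, and counter choice here are arbitrary illustrative values rather than the ones used in the paper.

```python
# Epoch-based detection of LLC-obstructing processes (schematic). Cores whose LLC
# hit rate stays very low despite heavy traffic bypass the LLC in the next epoch,
# so they stop displacing useful data belonging to other cores.
import random

class ObstructionMonitor:
    def __init__(self, min_accesses=1000, max_hit_rate=0.05):
        self.min_accesses, self.max_hit_rate = min_accesses, max_hit_rate
        self.accesses = {}
        self.hits = {}
        self.bypassing = set()                 # cores bypassing the LLC this epoch

    def record(self, core, hit):
        self.accesses[core] = self.accesses.get(core, 0) + 1
        self.hits[core] = self.hits.get(core, 0) + (1 if hit else 0)

    def should_bypass(self, core):
        return core in self.bypassing

    def end_epoch(self):
        self.bypassing = {
            core for core, acc in self.accesses.items()
            if acc >= self.min_accesses and self.hits.get(core, 0) / acc <= self.max_hit_rate
        }
        self.accesses.clear()
        self.hits.clear()
        return self.bypassing

mon = ObstructionMonitor()
random.seed(0)
for _ in range(5000):
    mon.record(core=0, hit=random.random() < 0.40)   # core 0: decent reuse
    mon.record(core=1, hit=random.random() < 0.01)   # core 1: streaming, obstructs the LLC
print("bypassing next epoch:", mon.end_epoch())      # expected: {1}
```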
In face of increasing cache capacity demands, researchers have now explored DRAM, which was conventionally considered synonymous to main memory, for designing large last level caches. Efficient integration of DRAM caches in mainstream computing systems, however, also presents several challenges and several recent techniques have been proposed to address them. In this paper, we present a survey of techniques for architecting DRAM caches. Also, by classifying these techniques across several dimensions, we underscore their similarities and differences. We believe that this paper will be very helpful to researchers for gaining insights into the potential, tradeoffs and challenges of DRAM caches. --- paper_title: A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-Volatile On-Chip Caches paper_content: Recent trends of CMOS scaling and increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memorytechnologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help the readers gain insights into the emerging memory device technologies, and their potential use in designing future computing systems. --- paper_title: MRPB: Memory request prioritization for massively parallel processors paper_content: Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high performance for a broad range of programs. They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches, hoping to providing smoother reductions in memory access traffic and latency. Unfortunately, GPU caches often have mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs. We propose the memory request prioritization buffer (MRPB) to ease GPU programming and improve GPU performance. This hardware structure improves caching efficiency of massively parallel workloads by applying two prioritization methods—request reordering and cache bypassing—to memory requests before they access a cache. MRPB then releases requests into the cache in a more cache-friendly order. The result is drastically reduced cache contention and improved use of the limited per-thread cache capacity. For a simulated 16KB L1 cache, MRPB improves the average performance of the entire PolyBench and Rodinia suites by 2.65× and 1.27× respectively, outperforming a state-of-the-art GPU cache management technique. --- paper_title: Exploiting Core Working Sets to Filter the L1 Cache with Random Sampling paper_content: Locality is often characterized by working sets, defined by Denning as the set of distinct addresses referenced within a certain window of time. 
This definition ignores the fact that dramatic differences exist between the usage patterns of frequently used data and transient data. We therefore propose to extend Denning's definition with that of core working sets, which identify blocks that are used most frequently and for the longest time. The concept of a core motivates the design of dual-cache structures that provide special treatment for the core. In particular, we present a probabilistic locality predictor for L1 caches that leverages the skewed popularity of blocks to distinguish transient cache insertions from more persistent ones. We further present a dual L1 design that inserts only frequently used blocks into a low-latency, low-power, direct-mapped main cache, while serving others from a small fully associative filter. To reduce the prohibitive cost of such a filter, we present a content addressable memory design that eliminates most of the costly lookups using a small auxiliary lookup table. The proposed design enables a 16K direct-mapped L1 cache, augmented with a small 2K filter, to outperform a 32K 4-way cache, while at the same time consumes 70-80 percent less dynamic power and 40 percent less static power. --- paper_title: Adaptive GPU cache bypassing paper_content: Modern graphics processing units (GPUs) include hardware- controlled caches to reduce bandwidth requirements and energy consumption. However, current GPU cache hierarchies are inefficient for general purpose GPU (GPGPU) comput- ing. GPGPU workloads tend to include data structures that would not fit in any reasonably sized caches, leading to very low cache hit rates. This problem is exacerbated by the design of current GPUs, which share small caches be- tween many threads. Caching these streaming data struc- tures needlessly burns power while evicting data that may otherwise fit into the cache. We propose a GPU cache management technique to im- prove the efficiency of small GPU caches while further re- ducing their power consumption. It adaptively bypasses the GPU cache for blocks that are unlikely to be referenced again before being evicted. This technique saves energy by avoid- ing needless insertions and evictions while avoiding cache pollution, resulting in better performance. We show that, with a 16KB L1 data cache, dynamic bypassing achieves sim- ilar performance to a double-sized L1 cache while reducing energy consumption by 25% and power by 18%. The technique is especially interesting for programs that do not use programmer-managed scratchpad memories. We give a case study to demonstrate the inefficiency of current GPU caches compared to programmer-managed scratchpad memories and show the extent to which cache bypassing can make up for the potential performance loss where the effort to program scratchpad memories is impractical. --- paper_title: Adaptive and transparent cache bypassing for GPUs paper_content: In the last decade, GPUs have emerged to be widely adopted for general-purpose applications. To capture on-chip locality for these applications, modern GPUs have integrated multilevel cache hierarchy, in an attempt to reduce the amount and latency of the massive and sometimes irregular memory accesses. However, inferior performance is frequently attained due to serious congestion in the caches results from the huge amount of concurrent threads. In this paper, we propose a novel compile-time framework for adaptive and transparent cache bypassing on GPUs. 
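To make the dual-cache idea from the core-working-set entry above more concrete, the following behavioral sketch (my own simplification with invented class and parameter names, not the authors' design) lets incoming blocks land in a small fully associative filter first, and promotes a block into the main L1 array only after it shows reuse while sitting in the filter.

```python
from collections import OrderedDict

class FilteredL1:
    """Toy model: a small FIFO filter in front of a direct-mapped main cache.
    Blocks enter the main array only after demonstrating reuse in the filter."""
    def __init__(self, main_sets=256, filter_entries=32, block_bits=6):
        self.block_bits = block_bits
        self.main_sets = main_sets
        self.main = {}                       # set index -> resident block address
        self.filter = OrderedDict()          # block address (FIFO insertion order)
        self.filter_entries = filter_entries

    def access(self, addr):
        block = addr >> self.block_bits
        idx = block % self.main_sets
        if self.main.get(idx) == block:      # hit in the low-latency main array
            return "main-hit"
        if block in self.filter:             # reuse observed: promote to main cache
            del self.filter[block]
            self.main[idx] = block
            return "filter-hit"
        self.filter[block] = True            # first touch: insert into the filter only
        if len(self.filter) > self.filter_entries:
            self.filter.popitem(last=False)  # evict the oldest filter entry
        return "miss"
```

The same structure also approximates the annex-cache style of selective caching that appears later in this list, where a block must prove its locality before earning a place in the main cache.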
It uses a simple yet effective approach to control the bypass degree to match the size of applications' runtime footprints. We validate the design on seven GPU platforms that cover all existing GPU generations using 16 applications from widely used GPU benchmarks. Experiments show that our design can significantly mitigate the negative impact due to small cache sizes and improve the overall performance. We analyze the performance across different platforms and applications. We also propose some optimization guidelines on how to efficiently use the GPU caches. --- paper_title: Bypass and insertion algorithms for exclusive last-level caches paper_content: Inclusive last-level caches (LLCs) waste precious silicon estate due to cross-level replication of cache blocks. As the industry moves toward cache hierarchies with larger inner levels, this wasted cache space leads to bigger performance losses compared to exclusive LLCs. However, exclusive LLCs make the design of replacement policies more challenging. While in an inclusive LLC a block can gather a filtered access history, this is not possible in an exclusive design because the block is de-allocated from the LLC on a hit. As a result, the popular least-recently-used replacement policy and its approximations are rendered ineffective and proper choice of insertion ages of cache blocks becomes even more important in exclusive designs. On the other hand, it is not necessary to fill every block into an exclusive LLC. This is known as selective cache bypassing and is not possible to implement in an inclusive LLC because that would violate inclusion. This paper explores insertion and bypass algorithms for exclusive LLCs. Our detailed execution-driven simulation results show that a combination of our best insertion and bypass policies delivers an improvement of up to 61.2% and on average (geometric mean) 3.4% in terms of instructions retired per cycle (IPC) for 97 single-threaded dynamic instruction traces spanning selected SPEC 2006 and server applications, running on a 2 MB 16-way exclusive LLC compared to a baseline exclusive design in the presence of well-tuned multi-stream hardware prefetchers. The corresponding improvements in throughput for 35 4-way multi-programmed workloads running with an 8 MB 16-way shared exclusive LLC are 20.6% (maximum) and 2.5% (geometric mean). --- paper_title: Coordinated static and dynamic cache bypassing for GPUs paper_content: The massive parallel architecture enables graphics processing units (GPUs) to boost performance for a wide range of applications. Initially, GPUs only employ scratchpad memory as on-chip memory. Recently, to broaden the scope of applications that can be accelerated by GPUs, GPU vendors have used caches in conjunction with scratchpad memory as on-chip memory in the new generations of GPUs. Unfortunately, GPU caches face many performance challenges that arise due to excessive thread contention for cache resource. Cache bypassing, where memory requests can selectively bypass the cache, is one solution that can help to mitigate the cache resource contention problem. In this paper, we propose coordinated static and dynamic cache bypassing to improve application performance. At compile-time, we identify the global loads that indicate strong preferences for caching or bypassing through profiling. For the rest global loads, our dynamic cache bypassing has the flexibility to cache only a fraction of threads. In CUDA programming model, the threads are divided into work units called thread blocks. 
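The exclusive-LLC entry above leaves the fill-or-bypass choice on each L2 eviction to a predictor; the sketch below is an assumption-laden illustration rather than the paper's algorithm, using a small table of saturating counters indexed by a hash of the block address to learn whether evicted blocks tend to be re-requested.

```python
class ExclusiveLLCBypass:
    """Toy fill/bypass decision for an exclusive LLC. A table of 2-bit
    saturating counters, indexed by a hash of the block address, learns
    whether blocks evicted from L2 are later re-read out of the LLC."""
    def __init__(self, entries=4096):
        self.counters = [1] * entries        # start weakly biased toward filling

    def _index(self, block_addr):
        return (block_addr ^ (block_addr >> 12)) % len(self.counters)

    def on_l2_eviction(self, block_addr):
        """Return True if the evicted block should be filled into the LLC."""
        return self.counters[self._index(block_addr)] >= 1

    def on_llc_hit(self, block_addr):
        i = self._index(block_addr)          # the earlier fill proved useful
        self.counters[i] = min(3, self.counters[i] + 1)

    def on_llc_eviction_unused(self, block_addr):
        i = self._index(block_addr)          # filled but never reused: learn to bypass
        self.counters[i] = max(0, self.counters[i] - 1)
```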
Our dynamic bypassing technique modulates the ratio of thread blocks that cache or bypass at run-time. We choose to modulate at thread block level in order to avoid the memory divergence problems. Our approach combines compile-time analysis that determines the cache or bypass preferences for global loads with run-time management that adjusts the ratio of thread blocks that cache or bypass. Our coordinated static and dynamic cache bypassing technique achieves up to 2.28X (average I.32X) performance speedup for a variety of GPU applications. --- paper_title: BEAR: techniques for mitigating bandwidth bloat in gigascale DRAM caches paper_content: Die stacking memory technology can enable gigascale DRAM caches that can operate at 4x-8x higher bandwidth than commodity DRAM. Such caches can improve system performance by servicing data at a faster rate when the requested data is found in the cache, potentially increasing the memory bandwidth of the system by 4x-8x. Unfortunately, a DRAM cache uses the available memory bandwidth not only for data transfer on cache hits, but also for other secondary operations such as cache miss detection, fill on cache miss, and writeback lookup and content update on dirty evictions from the last-level on-chip cache. Ideally, we want the bandwidth consumed for such secondary operations to be negligible, and have almost all the bandwidth be available for transfer of useful data from the DRAM cache to the processor. We evaluate a 1GB DRAM cache, architected as Alloy Cache, and show that even the most bandwidth-efficient proposal for DRAM cache consumes 3.8x bandwidth compared to an idealized DRAM cache that does not consume any bandwidth for secondary operations. We also show that redesigning the DRAM cache to minimize the bandwidth consumed by secondary operations can potentially improve system performance by 22%. To that end, this paper proposes Bandwidth Efficient ARchitecture (BEAR) for DRAM caches. BEAR integrates three components, one each for reducing the bandwidth consumed by miss detection, miss fill, and writeback probes. BEAR reduces the bandwidth consumption of DRAM cache by 32%, which reduces cache hit latency by 24% and increases overall system performance by 10%. BEAR, with negligible overhead, outperforms an idealized SRAM Tag-Store design that incurs an unacceptable overhead of 64 megabytes, as well as Sector Cache designs that incur an SRAM storage overhead of 6 megabytes. --- paper_title: SCIP: Selective cache insertion and bypassing to improve the performance of last-level caches paper_content: The design of an effective last-level cache (LLC) is crucial to the overall processor performance and, consequently, continues to be the center of substantial research. Unfortunately, LLCs in modern high-performance processors are not used efficiently. One major problem suffered by LLCs is their low hit rates caused by the large fraction of cache blocks that do not get re-accessed after being brought into the LLC following a cache miss. These blocks do not contribute any cache hits and usually induce cache pollution and thrashing. Cache bypassing presents an effective solution to this problem. Cache blocks that are predicted not to be accessed while residing in the cache are not inserted into the LLC following a miss, instead they bypass the LLC and are only inserted in the higher cache levels. 
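The run-time knob of the coordinated static/dynamic scheme above, namely how many thread blocks bypass the cache, can be pictured with a tiny sketch; the function names, thresholds, and feedback rule below are all invented for illustration and are not the published mechanism.

```python
def thread_block_bypasses(tb_id, bypass_ratio, num_tbs_per_grid=1024):
    """Deterministically map a fraction `bypass_ratio` of thread blocks to the
    bypass path; the rest cache normally. Deciding at thread-block granularity
    keeps all threads of a warp on the same path."""
    threshold = int(bypass_ratio * num_tbs_per_grid)
    return (tb_id % num_tbs_per_grid) < threshold

def adjust_ratio(bypass_ratio, miss_rate, stall_fraction):
    """Crude illustrative feedback rule: bypass more when the cache is
    thrashing, less when it has headroom."""
    if miss_rate > 0.7 or stall_fraction > 0.5:
        return min(1.0, bypass_ratio + 0.1)
    if miss_rate < 0.3:
        return max(0.0, bypass_ratio - 0.1)
    return bypass_ratio
```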
This paper presents a simple, low-hardware overhead, yet effective, cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which to bypass it following a miss based on past access/bypass patterns. Our proposed algorithm is thoroughly evaluated using a detailed simulation environment where its effectiveness, performance-improvement capabilities, and robustness are demonstrated. Moreover, it is shown to outperform the state-of-the-art cache bypassing algorithm in both a uniprocessor and a multi-core processor settings. --- paper_title: Optimal bypass monitor for high performance last-level caches paper_content: In the last-level cache, large amounts of blocks have reuse distances greater than the available cache capacity. Cache performance and efficiency can be improved if some subset of these distant reuse blocks can reside in the cache longer. The bypass technique is an effective and attractive solution that prevents the insertion of harmful blocks. --- paper_title: Energy Savings via Dead Sub-Block Prediction paper_content: Cache memories have traditionally been designed to exploit spatial locality by fetching entire cache lines from memory upon a miss. However, recent studies have shown that often the number of sub-blocks within a line that are actually used is low. Furthermore, those sub-blocks that are used are accessed only a few times before becoming dead (i.e., never accessed again). This results in considerable energy waste since 1) data not needed by the processor is brought into the cache, and 2) data is kept alive in the cache longer than necessary. We propose the Dead Sub-Block Predictor (DSBP) to predict which sub-blocks of a cache line will be actually used and how many times it will be used in order to bring into the cache only those sub-blocks that are necessary, and power them off after they are touched the predicted number of times. We also use DSBP to identify dead lines (i.e., all sub-blocks off) and augment the existing replacement policy by prioritizing dead lines for eviction. Our results show a 24% energy reduction for the whole cache hierarchy when averaged over the SPEC2000, SPEC2006 and NAS-NPB benchmarks. --- paper_title: Adaptive and transparent cache bypassing for GPUs paper_content: In the last decade, GPUs have emerged to be widely adopted for general-purpose applications. To capture on-chip locality for these applications, modern GPUs have integrated multilevel cache hierarchy, in an attempt to reduce the amount and latency of the massive and sometimes irregular memory accesses. However, inferior performance is frequently attained due to serious congestion in the caches results from the huge amount of concurrent threads. In this paper, we propose a novel compile-time framework for adaptive and transparent cache bypassing on GPUs. It uses a simple yet effective approach to control the bypass degree to match the size of applications' runtime footprints. We validate the design on seven GPU platforms that cover all existing GPU generations using 16 applications from widely used GPU benchmarks. Experiments show that our design can significantly mitigate the negative impact due to small cache sizes and improve the overall performance. We analyze the performance across different platforms and applications. We also propose some optimization guidelines on how to efficiently use the GPU caches. 
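Several of the preceding entries (the selective insertion/bypass algorithm, the bypass monitor, and the dead-block work) boil down to remembering whether a block paid off during its last residency. The generic sketch below is not any single paper's mechanism: it keeps one "reused last time?" bit per hashed block address and bypasses blocks whose previous residency ended without a reuse.

```python
class ReuseHistoryBypass:
    """Bypass a missing block if, during its previous stay in the cache,
    it was inserted but never re-accessed before eviction."""
    def __init__(self, table_entries=8192):
        self.reused_last_time = [True] * table_entries   # optimistic start
        self.live = {}                                   # resident block -> reused yet?

    def _idx(self, block):
        return (block * 2654435761) % len(self.reused_last_time)  # simple hash

    def should_bypass(self, block):
        return not self.reused_last_time[self._idx(block)]

    def on_insert(self, block):
        self.live[block] = False

    def on_hit(self, block):
        if block in self.live:
            self.live[block] = True

    def on_evict(self, block):
        if block in self.live:
            self.reused_last_time[self._idx(block)] = self.live.pop(block)
```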
--- paper_title: Adaptive Cache Management for Energy-Efficient GPU Computing paper_content: With the SIMT execution model, GPUs can hide memory latency through massive multithreading for many applications that have regular memory access patterns. To support applications with irregular memory access patterns, cache hierarchies have been introduced to GPU architectures to capture temporal and spatial locality and mitigate the effect of irregular accesses. However, GPU caches exhibit poor efficiency due to the mismatch of the throughput-oriented execution model and its cache hierarchy design, which limits system performance and energy efficiency. The massive number of memory requests generated by GPUs causes cache contention and resource congestion. Existing CPU cache management policies that are designed for multicore systems can be suboptimal when directly applied to GPU caches. We propose a specialized cache management policy for GPGPUs. The cache hierarchy is protected from contention by the bypass policy based on reuse distance. Contention and resource congestion are detected at runtime. To avoid over-saturating on-chip resources, the bypass policy is coordinated with warp throttling to dynamically control the active number of warps. We also propose a simple predictor to dynamically estimate the optimal number of active warps that can take full advantage of the cache space and on-chip resources. Experimental results show that cache efficiency is significantly improved and on-chip resources are better utilized for cache-sensitive benchmarks. This results in a harmonic mean IPC improvement of 74% and 17% (maximum 661% and 44% IPC improvement), compared to the baseline GPU architecture and optimal static warp throttling, respectively. --- paper_title: Full system simulation framework for integrated CPU/GPU architecture paper_content: The integrated CPU/GPU architecture brings a performance advantage since the communication cost between the CPU and GPU is reduced, but it also imposes new challenges in processor architecture design, especially in the management of shared memory resources, e.g., the last-level cache and memory bandwidth. Therefore, a micro-architecture-level simulator is essential to facilitate research in this direction. In this paper, we develop the first cycle-level full-system simulation framework for CPU-GPU integration with detailed memory models. With the simulation framework, we analyze the communication cost between the CPU and GPU for GPU workloads, and perform memory system characterization while running both applications concurrently. --- paper_title: A Survey of CPU-GPU Heterogeneous Computing Techniques paper_content: As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these Processing Units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this article, we survey Heterogeneous Computing Techniques (HCTs) such as workload partitioning that enable utilizing both CPUs and GPUs to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler, and application levels.
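The coordination of reuse-distance-based bypassing with warp throttling described in the GPU cache-management entry above can be illustrated with a toy control loop; the counters, thresholds, and step sizes below are assumptions made for the sketch, not the published design.

```python
def bypass_decision(reuse_distance, cache_lines):
    """Bypass L1 insertion for accesses whose estimated reuse distance
    exceeds what the cache can retain (unknown distances also bypass)."""
    return reuse_distance is None or reuse_distance > cache_lines

def throttle_warps(active_warps, miss_queue_full_events, interval_accesses,
                   min_warps=4, max_warps=48):
    """Illustrative throttling rule: shrink the active-warp pool when
    miss-handling resources saturate, and grow it back otherwise."""
    congestion = miss_queue_full_events / max(1, interval_accesses)
    if congestion > 0.10:
        return max(min_warps, active_warps - 2)
    if congestion < 0.01:
        return min(max_warps, active_warps + 2)
    return active_warps
```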
Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating Heterogeneous Computing Systems (HCSs). We believe that this article will provide insights into the workings and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance. --- paper_title: Optimal bypass monitor for high performance last-level caches paper_content: In the last-level cache, large amounts of blocks have reuse distances greater than the available cache capacity. Cache performance and efficiency can be improved if some subset of these distant reuse blocks can reside in the cache longer. The bypass technique is an effective and attractive solution that prevents the insertion of harmful blocks. --- paper_title: Adaptive Cache Bypassing for Inclusive Last Level Caches paper_content: Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the last level cache (LLC) performance. However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing inherently breaks the inclusion property. This paper presents a solution to enabling cache bypassing for inclusive caches. We introduce a bypass buffer to an LLC. Bypassed cache lines skip the LLC while their tags are stored in this bypass buffer. When a tag is evicted from the bypass buffer, it invalidates the corresponding cache lines in upper level caches to ensure the inclusion property. Our key insight is that the lifetime of a bypassed line, assuming a well-designed bypassing algorithm, should be short in upper level caches and is most likely dead when its tag is evicted from the bypass buffer. Therefore, a small bypass buffer is sufficient to maintain the inclusion property and to reap most performance benefits of bypassing. Furthermore, the bypass buffer facilitates bypassing algorithms by providing the usage information of bypassed lines. We show that a top performing cache bypassing algorithm, which is originally designed for non-inclusive caches, performs comparably for inclusive caches equipped with our bypass buffer. The usage information collected from the bypass buffer also significantly reduces the cost of hardware implementation compared to the original design. --- paper_title: Bypassing method for STT-RAM based inclusive last-level cache paper_content: Non-volatile memories (NVMs), such as STT-RAM and PCM, have recently become very competitive designs for last-level caches (LLCs). To avoid cache pollution caused by unnecessary write operations, many cache-bypassing methods have been introduced. Among them, SBAC (a statistics-based cache bypassing method for asymmetric-access caches) is the most recent approach for NVMs and shows the lowest cache access latency. However, SBAC only works on non-inclusive caches, so it is not practical with state-of-the-art processors that employ inclusive LLCs. To overcome this limitation, we propose a novel cache scheme, called inclusive bypass tag cache (IBTC) for NVMs. The proposed IBTC with consideration for the characteristics of NVMs is integrated into LLC to maintain coherence of data in the inclusive LLC with a bypass method and the algorithm is introduced to handle the tag information for bypassed blocks with a minimal storage overhead. 
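The bypass buffer that the two inclusive-cache entries above rely on, keeping only the tags of bypassed lines so that inclusion can still be enforced, fits in a few lines; the class name, the FIFO policy, and the invalidation callback used here are illustrative assumptions rather than either paper's exact design.

```python
from collections import OrderedDict

class InclusionBypassBuffer:
    """Tags of LLC-bypassed lines live in a small FIFO buffer. When a tag
    falls out of the buffer, the corresponding line is back-invalidated in
    the upper-level caches so the inclusion property is never violated."""
    def __init__(self, entries=1024, invalidate_upper=lambda tag: None):
        self.entries = entries
        self.tags = OrderedDict()            # tag -> True, in insertion order
        self.invalidate_upper = invalidate_upper

    def record_bypass(self, tag):
        self.tags[tag] = True
        self.tags.move_to_end(tag)
        if len(self.tags) > self.entries:
            victim, _ = self.tags.popitem(last=False)
            self.invalidate_upper(victim)    # keep inclusion: purge upper-level copies

    def holds(self, tag):
        return tag in self.tags              # an upper-level copy may still exist
```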
Experiments show that IBTC cuts down overall energy consumption by 17.4%, and increases the cache hit rate by 5.1%. --- paper_title: Cache bursts: A new approach for eliminating dead blocks and increasing cache efficiency paper_content: Data caches in general-purpose microprocessors often contain mostly dead blocks and are thus used inefficiently. To improve cache efficiency, dead blocks should be identified and evicted early. Prior schemes predict the death of a block immediately after it is accessed; however, these schemes yield lower prediction accuracy and coverage. Instead, we find that predicting the death of a block when it just moves out of the MRU position gives the best tradeoff between timeliness and prediction accuracy/coverage. Furthermore, the individual reference history of a block in the L1 cache can be irregular because of data/control dependence. This paper proposes a new class of dead-block predictors that predict dead blocks based on bursts of accesses to a cache block. A cache burst begins when a block becomes MRU and ends when it becomes non-MRU. Cache bursts are more predictable than individual references because they hide the irregularity of individual references. When used at the L1 cache, the best burst-based predictor can identify 96% of the dead blocks with a 96% accuracy. With the improved dead-block predictors, we evaluate three ways to increase cache efficiency by eliminating dead blocks early: replacement optimization, bypassing, and prefetching. The most effective approach, prefetching into dead blocks, increases the average L1 efficiency from 8% to 17% and the L2 efficiency from 17% to 27%. This increased cache efficiency translates into higher overall performance: prefetching into dead blocks outperforms the same prefetch scheme without dead-block prediction by 12% at the L1 and by 13% at the L2. --- paper_title: Design and performance evaluation of a cache assist to implement selective caching paper_content: Conventional cache architectures exploit locality, but do so rather blindly. By forcing all references through a single structure, the cache's effectiveness on many references is reduced. This paper presents a cache assist namely the annex cache which implements a selective caching scheme. Except for filling a main cache at cold start, all entries come to the cache via the annex cache. Items referenced only rarely will be excluded from the main cache, eliminating several conflict misses. The basic premise is that an item deserves to be in the main cache only if it can prove its right to exist in the main cache by demonstrating locality. The annex cache combines the features of Jouppi's (1990) victim caches and McFarling's (1992) cache exclusion schemes. Extensive simulation studies for annex and victim caches using a variety of SPEC programs are presented in the paper. Annex caches were observed to be significantly better than conventional caches, better than victim caches in certain cases, and comparable to victim caches in other cases. --- paper_title: Timestamp-Based Selective Cache Allocation paper_content:
--- paper_title: Sampling Dead Block Prediction for Last-Level Caches paper_content: Last-level caches (LLCs) are large structures with significant power requirements. They can be quite inefficient. On average, a cache block in a 2MB LRU-managed LLC is dead 86% of the time, i.e., it will not be referenced again before it is evicted. This paper introduces sampling dead block prediction, a technique that samples program counters (PCs) to determine when a cache block is likely to be dead. Rather than learning from accesses and evictions from every set in the cache, a sampling predictor keeps track of a small number of sets using partial tags. Sampling allows the predictor to use far less state than previous predictors to make predictions with superior accuracy. Dead block prediction can be used to drive a dead block replacement and bypass optimization. A sampling predictor can reduce the number of LLC misses over LRU by 11.7% for memory-intensive single-thread benchmarks and 23% for multi-core workloads. The reduction in misses yields a geometric mean speedup of 5.9% for single-thread benchmarks and a geometric mean normalized weighted speedup of 12.5% for multi-core workloads.
Due to the reduced state and number of accesses, the sampling predictor consumes only 3.1% of the of the dynamic power and 1.2% of the leakage power of a baseline 2MB LLC, comparing favorably with more costly techniques. The sampling predictor can even be used to significantly improve a cache with a default random replacement policy. --- paper_title: Counter-Based Cache Replacement and Bypassing Algorithms paper_content: Recent studies have shown that, in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, motivating the design of alternative replacement algorithms to improve cache performance. In LRU replacement, a line, after its last use, remains in the cache for a long time until it becomes the LRU line. Such deadlines unnecessarily reduce the cache capacity available for other lines. In addition, in multilevel caches, temporal reuse patterns are often inverted, showing in the L1 cache but, due to the filtering effect of the L1 cache, not showing in the L2 cache. At the L2, these lines appear to be brought in the cache but are never reaccessed until they are replaced. These lines unnecessarily pollute the L2 cache. This paper proposes a new counter-based approach to deal with the above problems. For the former problem, we predict lines that have become dead and replace them early from the L2 cache. For the latter problem, we identify never-reaccessed lines, bypass the L2 cache, and place them directly in the L1 cache. Both techniques are achieved through a single counter-based mechanism. In our approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest such as certain cache accesses occurs. When the counter reaches a threshold, the line ";expires"; and becomes replaceable. Each line's threshold is unique and is dynamically learned. We propose and evaluate two new replacement algorithms: Access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 capacity-constrained SPEC2000 benchmarks by up to 48 percent and 15 percent on average (7 percent on average for the whole 21 Spec2000 benchmarks). Cache bypassing further reduces L2 cache pollution and improves the average speedups to 17 percent (8 percent for the whole 21 Spec2000 benchmarks). --- paper_title: Locality-Driven Dynamic GPU Cache Bypassing paper_content: This paper presents novel cache optimizations for massively parallel, throughput-oriented architectures like GPUs. L1 data caches (L1 D-caches) are critical resources for providing high-bandwidth and low-latency data accesses. However, the high number of simultaneous requests from single-instruction multiple-thread (SIMT) cores makes the limited capacity of L1 D-caches a performance and energy bottleneck, especially for memory-intensive applications. We observe that the memory access streams to L1 D-caches for many applications contain a significant amount of requests with low reuse, which greatly reduce the cache efficacy. Existing GPU cache management schemes are either based on conditional/reactive solutions or hit-rate based designs specifically developed for CPU last level caches, which can limit overall performance. To overcome these challenges, we propose an efficient locality monitoring mechanism to dynamically filter the access stream on cache insertion such that only the data with high reuse and short reuse distances are stored in the L1 D-cache. 
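A compact way to picture the sampling and dead-block machinery discussed in the entries above (schematic only, not the published predictor): a table of saturating counters indexed by a hash of the load PC, trained only on a handful of sampled sets, votes on whether a fill is likely dead on arrival and should therefore bypass.

```python
class PCDeadBlockPredictor:
    """Per-PC saturating counters trained on a few sampled sets only; a high
    counter value means 'blocks fetched by this PC usually die without reuse',
    so new fills from that PC are bypassed."""
    def __init__(self, entries=4096, threshold=2,
                 sampled_sets=frozenset(range(0, 2048, 64))):
        self.counters = [0] * entries
        self.threshold = threshold
        self.sampled_sets = sampled_sets

    def _idx(self, pc):
        return (pc >> 2) % len(self.counters)

    def predict_dead(self, pc):
        return self.counters[self._idx(pc)] >= self.threshold

    def train(self, set_index, pc, reused_before_eviction):
        if set_index not in self.sampled_sets:
            return                            # only sampled sets pay the training cost
        i = self._idx(pc)
        if reused_before_eviction:
            self.counters[i] = max(0, self.counters[i] - 1)
        else:
            self.counters[i] = min(3, self.counters[i] + 1)
```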
Specifically, we present a design that integrates locality filtering based on reuse characteristics of GPU workloads into the decoupled tag store of the existing L1 D-cache through simple and cost-effective hardware extensions. Results show that our proposed design can dramatically reduce cache contention and achieve up to 56.8% and an average of 30.3% performance improvement over the baseline architecture, for a range of highly-optimized cache-unfriendly applications with minor area overhead and better energy efficiency. Our design also significantly outperforms the state-of-the-art CPU and GPU bypassing schemes (especially for irregular applications), without generating extra L2 and DRAM level contention. --- paper_title: Bypass and insertion algorithms for exclusive last-level caches paper_content: Inclusive last-level caches (LLCs) waste precious silicon estate due to cross-level replication of cache blocks. As the industry moves toward cache hierarchies with larger inner levels, this wasted cache space leads to bigger performance losses compared to exclusive LLCs. However, exclusive LLCs make the design of replacement policies more challenging. While in an inclusive LLC a block can gather a filtered access history, this is not possible in an exclusive design because the block is de-allocated from the LLC on a hit. As a result, the popular least-recently-used replacement policy and its approximations are rendered ineffective and proper choice of insertion ages of cache blocks becomes even more important in exclusive designs. On the other hand, it is not necessary to fill every block into an exclusive LLC. This is known as selective cache bypassing and is not possible to implement in an inclusive LLC because that would violate inclusion. This paper explores insertion and bypass algorithms for exclusive LLCs. Our detailed execution-driven simulation results show that a combination of our best insertion and bypass policies delivers an improvement of up to 61.2% and on average (geometric mean) 3.4% in terms of instructions retired per cycle (IPC) for 97 single-threaded dynamic instruction traces spanning selected SPEC 2006 and server applications, running on a 2 MB 16-way exclusive LLC compared to a baseline exclusive design in the presence of well-tuned multi-stream hardware prefetchers. The corresponding improvements in throughput for 35 4-way multi-programmed workloads running with an 8 MB 16-way shared exclusive LLC are 20.6% (maximum) and 2.5% (geometric mean). --- paper_title: Introducing hierarchy-awareness in replacement and bypass algorithms for last-level caches paper_content: The replacement policies for the last-level caches (LLCs) are usually designed based on the access information available locally at the LLC. These policies are inherently sub-optimal due to lack of information about the activities in the inner-levels of the hierarchy. This paper introduces cache hierarchy-aware replacement (CHAR) algorithms for inclusive LLCs (or L3 caches) and applies the same algorithms to implement efficient bypass techniques for exclusive LLCs in a three-level hierarchy. In a hierarchy with an inclusive LLC, these algorithms mine the L2 cache eviction stream and decide if a block evicted from the L2 cache should be made a victim candidate in the LLC based on the access pattern of the evicted block. Ours is the first proposal that explores the possibility of using a subset of L2 cache eviction hints to improve the replacement algorithms of an inclusive LLC. 
The CHAR algorithm classifies the blocks residing in the L2 cache based on their reuse patterns and dynamically estimates the reuse probability of each class of blocks to generate selective replacement hints to the LLC. Compared to the static re-reference interval prediction (SRRIP) policy, our proposal offers an average reduction of 10.9% in LLC misses and an average improvement of 3.8% in instructions retired per cycle (IPC) for twelve single-threaded applications. The corresponding reduction in LLC misses for one hundred 4-way multi-programmed workloads is 6.8% leading to an average improvement of 3.9% in through-put. Finally, our proposal achieves an 11.1% reduction in LLC misses and a 4.2% reduction in parallel execution cycles for six 8-way threaded shared memory applications compared to the SRRIP policy. In a cache hierarchy with an exclusive LLC, our CHAR proposal offers an effective algorithm for selecting the subset of blocks (clean or dirty) evicted from the L2 cache that need not be written to the LLC and can be bypassed. Compared to the TC-AGE policy (analogue of SRRIP for exclusive LLC), our best exclusive LLC proposal improves average throughput by 3.2% while saving an average of 66.6% of data transactions from the L2 cache to the on-die interconnect for one hundred 4-way multi-programmed workloads. Compared to an inclusive LLC design with an identical hierarchy, this corresponds to an average throughput improvement of 8.2% with only 17% more data write transactions originating from the L2 cache. --- paper_title: Improving Cache Management Policies Using Dynamic Reuse Distances paper_content: Cache management policies such as replacement, bypass, or shared cache partitioning have been relying on data reuse behavior to predict the future. This paper proposes a new way to use dynamic reuse distances to further improve such policies. A new replacement policy is proposed which prevents replacing a cache line until a certain number of accesses to its cache set, called a Protecting Distance (PD). The policy protects a cache line long enough for it to be reused, but not beyond that to avoid cache pollution. This can be combined with a bypass mechanism that also relies on dynamic reuse analysis to bypass lines with less expected reuse. A miss fetch is bypassed if there are no unprotected lines. A hit rate model based on dynamic reuse history is proposed and the PD that maximizes the hit rate is dynamically computed. The PD is recomputed periodically to track a program's memory access behavior and phases. Next, a new multi-core cache partitioning policy is proposed using the concept of protection. It manages lifetimes of lines from different cores (threads) in such a way that the overall hit rate is maximized. The average per-thread lifetime is reduced by decreasing the thread's PD. The single-core PD-based replacement policy with bypass achieves an average speedup of 4.2% over the DIP policy, while the average speedups over DIP are 1.5% for dynamic RRIP (DRRIP) and 1.6% for sampling dead-block prediction (SDP). The 16-core PD-based partitioning policy improves the average weighted IPC by 5.2%, throughput by 6.4% and fairness by 9.9% over thread-aware DRRIP (TA-DRRIP). The required hardware is evaluated and the overhead is shown to be manageable. 
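The protecting-distance mechanism just described lends itself to a short single-set model; the sketch below is my own simplification with invented parameters, not the paper's hardware. Each resident line carries a countdown initialized to the PD, only lines whose countdown has expired may be victimized, and if every line is still protected the incoming fill bypasses.

```python
class PDProtectedSet:
    """One cache set managed with a Protecting Distance (PD).
    Each resident line is protected for `pd` accesses to the set."""
    def __init__(self, ways=16, pd=32):
        self.ways, self.pd = ways, pd
        self.lines = {}                       # tag -> remaining protection

    def access(self, tag):
        for t in self.lines:                  # every set access ages all lines
            self.lines[t] -= 1
        if tag in self.lines:                 # hit: re-protect the line
            self.lines[tag] = self.pd
            return "hit"
        if len(self.lines) < self.ways:       # free way available
            self.lines[tag] = self.pd
            return "miss-fill"
        unprotected = [t for t, c in self.lines.items() if c <= 0]
        if not unprotected:                   # all lines still protected: bypass
            return "miss-bypass"
        victim = min(unprotected, key=self.lines.get)
        del self.lines[victim]
        self.lines[tag] = self.pd
        return "miss-replace"
```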
--- paper_title: A model-driven approach to warp/thread-block level GPU cache bypassing paper_content: The high amount of memory requests from massive threads may easily cause cache contention and cache-miss-related resource congestion on GPUs. This paper proposes a simple yet effective performance model to estimate the impact of cache contention and resource congestion as a function of the number of warps/thread blocks (TBs) to bypass the cache. Then we design a hardware-based dynamic warp/thread-block level GPU cache bypassing scheme, which achieves 1.68x speedup on average on a set of memory-intensive benchmarks over the baseline. Compared to prior works, our scheme achieves 21.6% performance improvement over SWL-best [29] and 11.9% over CBWT-best [4] on average. --- paper_title: OAP: an obstruction-aware cache management policy for STT-RAM last-level caches paper_content: Emerging memory technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor designs. Among various emerging memory technologies, Spin-Torque Transfer RAM (STT-RAM) has the benefits of fast read latency, low leakage power, and high density, and therefore has been investigated as a promising candidate for last-level cache (LLC)1. One of the major disadvantages for STT-RAM is the latency and energy overhead associated with the write operations. In particular, a long-latency write operation to STT-RAM cache may obstruct other cache accesses and result in severe performance degradation. Consequently, mitigation techniques to minimize the write overhead are required in order to successfully adopt this new technology for cache design. In this paper, we propose an obstruction-aware cache management policy called OAP. OAP monitors the cache to periodically detect LLC-obstruction processes, and manage the cache accesses from different processes. The experimental results on a 4-core architecture with an 8MB STT-RAM L3 cache shows that the performance can be improved by 14% on average and up to 42%, with a reduction of energy consumption by 64%2. --- paper_title: Using cache mapping to improve memory performance handheld devices paper_content: Processors such as the Intel StrongARM SA-1110 and the Intel XScale provide flexible control over the cache management to achieve better cache utilization. Programs can specify the cache mapping policy for each virtual page, i.e. mapping it to the main cache, the mini-cache, or neither. For the latter case, the page is marked as non-cacheable. In this paper, we use memory profiling to guide such page-based cache mapping. We model the cache mapping problem and prove that finding the optimal cache mapping is NP-hard. We then present a heuristic to select the mapping. Execution time measurement shows that our heuristics can improve the performance from 1% to 21% for a set of test programs. As a byproduct of performance enhancement, we also save the energy by 4% to 28%. --- paper_title: Hardware identification of cache conflict misses paper_content: This paper describes the Miss Classification Table, a simple mechanism that enables the processor or memory controller to identify each cache miss as either a conflict miss or a capacity (non-conflict) miss. The miss classification table works by storing part of the tag of the most recently evicted line of a cache set. If the next miss to that cache set has a matching tag, it is identified as a conflict miss. This technique correctly identifies 87% of misses in the worst case. 
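The miss-classification idea above is simple enough to capture directly (an illustrative model with assumed field widths): each set remembers a partial tag of the block it most recently evicted, and a subsequent miss whose tag matches that remembered victim is flagged as a conflict miss rather than a capacity miss.

```python
class MissClassificationTable:
    """Per-set partial tag of the most recently evicted block. A miss that
    matches the stored partial tag is classified as a conflict miss;
    otherwise it is treated as a capacity (non-conflict) miss."""
    def __init__(self, num_sets=1024, partial_bits=8):
        self.mask = (1 << partial_bits) - 1
        self.last_victim = [None] * num_sets

    def on_eviction(self, set_index, victim_tag):
        self.last_victim[set_index] = victim_tag & self.mask

    def classify_miss(self, set_index, miss_tag):
        if self.last_victim[set_index] == (miss_tag & self.mask):
            return "conflict"
        return "capacity"
```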
Several applications of this information are demonstrated, including improvements to victim caching, next-line prefetching, cache exclusion, and a pseudo-associative cache. This paper also presents the Adaptive Miss Buffer (AMB), which combines several of these techniques, targeting each miss with the most appropriate optimization, all within a single small miss buffer. The AMB's combination of techniques achieves 16% better performance than any single technique alone. --- paper_title: WADE: Writeback-aware dynamic cache management for NVM-based main memory system paper_content: Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC). In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory. The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical. The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps the best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads. --- paper_title: Global Priority Table for Last-Level Caches paper_content: Last-level caches (LLC) grow large with significant power consumption. As LLC's capacity increases, it becomes quite inefficient. As recent studies show, a large percent of cache blocks are dead during the cache time. There is a growing need for LLC management to reduce the number of dead blocks in the LLC. However, there is a significant power requirement for the dead block's in-placement and replacement operations. In this paper, we introduce a global priority table predictor, a technique which is used for determining a cache block's priority when it attempts to insert into the LLC. It is similar to previous predictors, such as reuse distance and dead block predictors. The global priority table is indexed by the hash value of the block address and stores the priority value of the associated cache block. The priority value can be used to drive a dead block replacement and bypass optimization. Through the priority table, a large number of dead blocks could be bypassed.
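The hash-indexed priority table described above can be mocked up as follows; the update rule and the priority range are assumptions made for this sketch rather than the paper's exact policy. The table supplies a small priority for each incoming block, priority-zero fills are bypassed, and priorities are trained on whether blocks were reused before eviction.

```python
class GlobalPriorityTable:
    """Priorities (0..3) indexed by a hash of the block address.
    Priority-0 fills are bypassed; hits raise priority, dead evictions lower it."""
    def __init__(self, entries=16384):
        self.prio = [1] * entries

    def _idx(self, block_addr):
        return (block_addr ^ (block_addr >> 14)) % len(self.prio)

    def insertion_priority(self, block_addr):
        return self.prio[self._idx(block_addr)]

    def should_bypass(self, block_addr):
        return self.insertion_priority(block_addr) == 0

    def train(self, block_addr, reused_before_eviction):
        i = self._idx(block_addr)
        if reused_before_eviction:
            self.prio[i] = min(3, self.prio[i] + 1)
        else:
            self.prio[i] = max(0, self.prio[i] - 1)
```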
It achieves an average reduction of 13.2% in the number of LLC miss for twenty single-thread workloads from the SPEC2006 suite and 29.9% for ten multi-programmed workloads. It also yields a geometric mean speedup of 8.6% for single-thread workloads and a geometric mean normalized weighted speedup of 39.1% for multi-programmed workloads. --- paper_title: SBAC: a statistics based cache bypassing method for asymmetric-access caches paper_content: Asymmetric-access caches with emerging technologies, such as STT-RAM and RRAM, have become very competitive designs recently. Since the write operations consume more time and energy than read ones, data should bypass an asymmetric-access cache unless the locality can justify the data allocation. However, the asymmetric-access property is not well addressed in prior bypassing approaches, which are not energy efficient and induce non-trivial operation overhead. To overcome these problems, we propose a cache bypassing method, SBAC, based on data locality statistics of the whole cache rather than a single cache line's signature. We observe that the decision-making of SBAC is highly accurate and the optimization technique for SBAC works efficiently for multiple applications running concurrently. Experiments show that SBAC cuts down overall energy consumption by 22.3%, and reduces execution time by 8.3%. Compared to prior approaches, the design overhead of SBAC is trivial. --- paper_title: Real-Time GPU Computing: Cache or No Cache? paper_content: Recent Graphics Processing Units (GPUs) have employed cache memories to boost performance. However, cache memories are well known to be harmful to time predictability for CPUs. For high-performance real-time systems using GPUs, it remains unknown whether or not cache memories should be employed. In this paper, we quantitatively compare the performance for GPUs with and without caches, and find that GPUs without the cache actually lead to better average-case performance, with higher time predictability. However, we also study a profiling-based cache bypassing method, which can use the L1 data cache more efficiently to achieve better average-case performance than that without the cache. Therefore, it seems still beneficial to employ caches for real-time computing on GPUs. --- paper_title: Exploiting Core Working Sets to Filter the L1 Cache with Random Sampling paper_content: Locality is often characterized by working sets, defined by Denning as the set of distinct addresses referenced within a certain window of time. This definition ignores the fact that dramatic differences exist between the usage patterns of frequently used data and transient data. We therefore propose to extend Denning's definition with that of core working sets, which identify blocks that are used most frequently and for the longest time. The concept of a core motivates the design of dual-cache structures that provide special treatment for the core. In particular, we present a probabilistic locality predictor for L1 caches that leverages the skewed popularity of blocks to distinguish transient cache insertions from more persistent ones. We further present a dual L1 design that inserts only frequently used blocks into a low-latency, low-power, direct-mapped main cache, while serving others from a small fully associative filter. To reduce the prohibitive cost of such a filter, we present a content addressable memory design that eliminates most of the costly lookups using a small auxiliary lookup table. 
The proposed design enables a 16K direct-mapped L1 cache, augmented with a small 2K filter, to outperform a 32K 4-way cache, while at the same time consumes 70-80 percent less dynamic power and 40 percent less static power. --- paper_title: Coordinated static and dynamic cache bypassing for GPUs paper_content: The massive parallel architecture enables graphics processing units (GPUs) to boost performance for a wide range of applications. Initially, GPUs only employ scratchpad memory as on-chip memory. Recently, to broaden the scope of applications that can be accelerated by GPUs, GPU vendors have used caches in conjunction with scratchpad memory as on-chip memory in the new generations of GPUs. Unfortunately, GPU caches face many performance challenges that arise due to excessive thread contention for cache resource. Cache bypassing, where memory requests can selectively bypass the cache, is one solution that can help to mitigate the cache resource contention problem. In this paper, we propose coordinated static and dynamic cache bypassing to improve application performance. At compile-time, we identify the global loads that indicate strong preferences for caching or bypassing through profiling. For the rest global loads, our dynamic cache bypassing has the flexibility to cache only a fraction of threads. In CUDA programming model, the threads are divided into work units called thread blocks. Our dynamic bypassing technique modulates the ratio of thread blocks that cache or bypass at run-time. We choose to modulate at thread block level in order to avoid the memory divergence problems. Our approach combines compile-time analysis that determines the cache or bypass preferences for global loads with run-time management that adjusts the ratio of thread blocks that cache or bypass. Our coordinated static and dynamic cache bypassing technique achieves up to 2.28X (average I.32X) performance speedup for a variety of GPU applications. --- paper_title: BEAR: techniques for mitigating bandwidth bloat in gigascale DRAM caches paper_content: Die stacking memory technology can enable gigascale DRAM caches that can operate at 4x-8x higher bandwidth than commodity DRAM. Such caches can improve system performance by servicing data at a faster rate when the requested data is found in the cache, potentially increasing the memory bandwidth of the system by 4x-8x. Unfortunately, a DRAM cache uses the available memory bandwidth not only for data transfer on cache hits, but also for other secondary operations such as cache miss detection, fill on cache miss, and writeback lookup and content update on dirty evictions from the last-level on-chip cache. Ideally, we want the bandwidth consumed for such secondary operations to be negligible, and have almost all the bandwidth be available for transfer of useful data from the DRAM cache to the processor. We evaluate a 1GB DRAM cache, architected as Alloy Cache, and show that even the most bandwidth-efficient proposal for DRAM cache consumes 3.8x bandwidth compared to an idealized DRAM cache that does not consume any bandwidth for secondary operations. We also show that redesigning the DRAM cache to minimize the bandwidth consumed by secondary operations can potentially improve system performance by 22%. To that end, this paper proposes Bandwidth Efficient ARchitecture (BEAR) for DRAM caches. BEAR integrates three components, one each for reducing the bandwidth consumed by miss detection, miss fill, and writeback probes. 
BEAR reduces the bandwidth consumption of DRAM cache by 32%, which reduces cache hit latency by 24% and increases overall system performance by 10%. BEAR, with negligible overhead, outperforms an idealized SRAM Tag-Store design that incurs an unacceptable overhead of 64 megabytes, as well as Sector Cache designs that incur an SRAM storage overhead of 6 megabytes. --- paper_title: Adaptive placement and migration policy for an STT-RAM-based hybrid cache paper_content: Emerging Non-Volatile Memories (NVM) such as Spin-Torque Transfer RAM (STT-RAM) and Resistive RAM (RRAM) have been explored as potential alternatives for traditional SRAM-based Last-Level-Caches (LLCs) due to the benefits of higher density and lower leakage power. However, NVM technologies have long latency and high energy overhead associated with the write operations. Consequently, a hybrid STT-RAM and SRAM based LLC architecture has been proposed in the hope of exploiting high density and low leakage power of STT-RAM and low write overhead of SRAM. Such a hybrid cache design relies on an intelligent block placement policy that makes good use of the characteristics of both STT-RAM and SRAM technology. --- paper_title: A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization paper_content: While non-volatile memories (NVMs) provide high-density and low-leakage, they also have low write-endurance. This, along with the write-variation introduced by the cache management policies, can lead to very small cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with smaller latency and energy. This also reduces the number of writes to the NVM cache which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from SPEC2006 suite. We observe that ENLIVE provides higher improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime or two, four and eight core systems, respectively. In addition, it works well for a range of system and algorithm parameters and incurs only small overhead. --- paper_title: SCIP: Selective cache insertion and bypassing to improve the performance of last-level caches paper_content: The design of an effective last-level cache (LLC) is crucial to the overall processor performance and, consequently, continues to be the center of substantial research. Unfortunately, LLCs in modern high-performance processors are not used efficiently. One major problem suffered by LLCs is their low hit rates caused by the large fraction of cache blocks that do not get re-accessed after being brought into the LLC following a cache miss. These blocks do not contribute any cache hits and usually induce cache pollution and thrashing. Cache bypassing presents an effective solution to this problem. Cache blocks that are predicted not to be accessed while residing in the cache are not inserted into the LLC following a miss, instead they bypass the LLC and are only inserted in the higher cache levels. 
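To visualize the write-minimization idea behind the lifetime-improvement entry above (the HotStore), here is a behavioral sketch in which the thresholds, capacities, and method names are assumptions: per-block write counters identify write-hot blocks, which are then absorbed by a small SRAM buffer instead of repeatedly wearing the NVM cache array.

```python
from collections import OrderedDict

class HotStore:
    """Divert frequently written blocks to a small SRAM buffer so that
    repeated writes do not wear out the NVM cache array."""
    def __init__(self, capacity=64, hot_threshold=4):
        self.capacity = capacity
        self.hot_threshold = hot_threshold
        self.write_count = {}                # block -> writes observed so far
        self.store = OrderedDict()           # block -> data, in LRU order

    def write(self, block, data):
        if block in self.store:              # already hot: absorb the write in SRAM
            self.store[block] = data
            self.store.move_to_end(block)
            return "sram-write"
        self.write_count[block] = self.write_count.get(block, 0) + 1
        if self.write_count[block] >= self.hot_threshold:
            self.store[block] = data         # promote the write-hot block
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict coldest entry (it would return to NVM)
            return "promoted"
        return "nvm-write"
```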
This paper presents a simple, low-hardware overhead, yet effective, cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which to bypass it following a miss based on past access/bypass patterns. Our proposed algorithm is thoroughly evaluated using a detailed simulation environment where its effectiveness, performance-improvement capabilities, and robustness are demonstrated. Moreover, it is shown to outperform the state-of-the-art cache bypassing algorithm in both a uniprocessor and a multi-core processor settings. --- paper_title: Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance paper_content: In a GPU, all threads within a warp execute the same instruction in lockstep. For a memory instruction, this can lead to memory divergence: the memory requests for some threads are serviced early, while the remaining requests incur long latencies. This divergence stalls the warp, as it cannot execute the next instruction until all requests from the current instruction complete. In this work, we make three new observations. First, GPGPU warps exhibit heterogeneous memory divergence behavior at the shared cache: some warps have most of their requests hit in the cache (high cache utility), while other warps see most of their requests miss (low cache utility). Second, a warp retains the same divergence behavior for long periods of execution. Third, due to high memory level parallelism, requests going to the shared cache can incur queuing delays as large as hundreds of cycles, exacerbating the effects of memory divergence. We propose a set of techniques, collectively called Memory Divergence Correction (MeDiC), that reduce the negative performance impact of memory divergence and cache queuing. MeDiC uses warp divergence characterization to guide three components: (1) a cache bypassing mechanism that exploits the latency tolerance of low cache utility warps to both alleviate queuing delay and increase the hit rate for high cache utility warps, (2) a cache insertion policy that prevents data from high cache utility warps from being prematurely evicted, and (3) a memory controller that prioritizes the few requests received from high cache utility warps to minimize stall time. We compare MeDiC to four cache management techniques, and find that it delivers an average speedup of 21.8%, and 20.1% higher energy efficiency, over a state-of-the-art GPU cache management mechanism across 15 different GPGPU applications. --- paper_title: Improving cache performance by selective cache bypass paper_content: A technique is proposed to prevent the return of infrequently used items to cache after they are bumped from it. Simulations have shown that the return of these items, called cache pollution, typically degrade cache-based system performance (average reference time) by 10% to 30%. The technique proposed involves the use of hardware called a bypass-cache, which, under program control, will determine whether each reference should be through the cache or should bypass the cache and reference main memory directly. Several inexpensive heuristics for the compiler to determine how to make each reference are given. It is shown that much of the performance loss can be regained. --- paper_title: SLIP: reducing wire energy in the memory hierarchy paper_content: Wire energy has become the major contributor to energy in large lower level caches. While wire energy is related to wire latency its costs are exposed differently in the memory hierarchy.
We propose Sub-Level Insertion Policy (SLIP), a cache management policy which improves cache energy consumption by increasing the number of accesses from energy efficient locations while simultaneously decreasing intra-level data movement. In SLIP, each cache level is partitioned into several cache sublevels of differing sizes. Then, the recent reuse distance distribution of a line is used to choose an energy-optimized insertion and movement policy for the line. The policy choice is made by a hardware unit that predicts the number of accesses and inter-level movements. Using a full-system simulation including OS interactions and hardware overheads, we show that SLIP saves 35% energy at the L2 and 22% energy at the L3 level and performs 0.75% better than a regular cache hierarchy in a single core system. When configured to include a bypassing policy, SLIP reduces traffic to DRAM by 2.2%. This is achieved at the cost of storing 12b metadata per cache line (2.3% overhead), a 6b policy in the PTE, and 32b distribution metadata for each page in the DRAM (a overhead of 0.1%). Using SLIP in a multiprogrammed system saves 47% LLC energy, and reduces traffic to DRAM by 5.5%. --- paper_title: Enhancing LRU replacement via phantom associativity paper_content: In this paper, we propose a novel cache design, Phantom Associative Cache (PAC), that alleviates cache thrashing in L2 caches by keeping the in-cache data blocks for a longer time period. To realize PAC, we introduce the concept of phantom lines. A phantom line works like a real cache line in the LRU stack but does not hold any data or tag. When a phantom line is selected for replacement, cache bypassing is performed instead of replacement. By using appropriate number of phantom lines, PAC can always keep the data blocks that show stronger locality longer in the cache and bypass the cache for other blocks. We show that PAC can be implemented reasonably in practice. The experimental results show that on average PAC reduces cache misses by 17.95% for twelve CPU2006 benchmarks with Misses Per Kilo-Instruction (MPKI) larger than 1 and by 6.61% for all CPU2006 and PARSEC benchmarks. With the help of compiler hints, PAC can further reduce cache misses by 22% for benchmarks that have relatively high MPKI or miss rate. --- paper_title: Load Miss Prediction - Exploiting Power Performance Trade-offs paper_content: Modern CPUs operate at GHz frequencies, but the latencies of memory accesses are still relatively large, in the order of hundreds of cycles. Deeper cache hierarchies with larger cache sizes can mask these latencies for codes with good data locality and reuse, such as structured dense matrix computations. However, cache hierarchies do not necessarily benefit sparse scientific computing codes, which tend to have limited data locality and reuse. We therefore propose a new memory architecture with a load miss predictor (LMP), which includes a data bypass cache and a predictor table, to reduce access latencies by determining whether a load should bypass the main cache hierarchy and issue an early load to main memory. Our architecture uses the L2 (and lower caches) as a victim cache for data removed from our bypass cache. We use cycle-accurate simulations, with SimpleScalar and Wattch to show that our LMP improves the performance of sparse codes, our application domain of interest, on average by 14%, with a 13.6% increase in power. 
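A minimal model of the phantom-line idea from the PAC entry above is sketched below; the way counts are arbitrary, and the single-set model ignores tags, data, and the compiler hints mentioned in the abstract.

class PhantomSet:
    # One cache set whose LRU stack also holds tag-less "phantom" entries; when the
    # replacement victim is a phantom entry, the incoming block is bypassed instead
    # of displacing a real line.
    PHANTOM = None

    def __init__(self, real_ways=8, phantom_ways=4):
        self.capacity = real_ways + phantom_ways
        self.stack = [self.PHANTOM] * phantom_ways  # index 0 = MRU, end = LRU

    def access(self, block):
        if block in self.stack:
            self.stack.remove(block)                # hit: promote to MRU
            self.stack.insert(0, block)
            return "hit"
        if len(self.stack) < self.capacity:
            self.stack.insert(0, block)             # a real way is still free
            return "fill"
        victim = self.stack.pop()                   # LRU position
        if victim is self.PHANTOM:
            self.stack.insert(0, self.PHANTOM)      # phantom victim: bypass the fill
            return "bypass"
        self.stack.insert(0, block)                 # real victim: normal replacement
        return "fill"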
When the LMP is used with dynamic voltage and frequency scaling (DVFS), performance can be improved by 8.7% with system power savings of 7.3% and energy reduction of 17.3% at 1800 MHz relative to the base system at 2000 MHz. Alternatively our LMP can be used to improve the performance of SPEC benchmarks by an average of 2.9 % at the cost of 7.1 % increase in average power. --- paper_title: Reducing off-chip memory traffic by selective cache management scheme in GPGPUs paper_content: The performance of General Purpose Graphics Processing Units (GPGPUs) is frequently limited by the off-chip memory bandwidth. To mitigate this bandwidth wall problem, recent GPUs are equipped with on-chip L1 and L2 caches. However, there has been little work for better utilizing on-chip shared caches in GPGPUs. In this paper, we propose two cache management schemes: write-buffering and read-bypassing. The write buffering technique tries to utilize the shared cache for inter-block communication, and thereby reduces the DRAM accesses as much as the capacity of the cache. The read-bypassing scheme prevents the shared cache from being polluted by streamed data that are consumed only within a thread-block. The proposed schemes can be selectively applied to global memory instructions using newly defined cache operators. We evaluate the effects of the proposed schemes for a few GPGPU applications by simulations. We have shown that the off-chip memory accesses can be successfully reduced by the proposed techniques. We also analyze the effectiveness of these methods when the throughput gap between cores and off-chip memory becomes wider. --- paper_title: Optimal bypass monitor for high performance last-level caches paper_content: In the last-level cache, large amounts of blocks have reuse distances greater than the available cache capacity. Cache performance and efficiency can be improved if some subset of these distant reuse blocks can reside in the cache longer. The bypass technique is an effective and attractive solution that prevents the insertion of harmful blocks. --- paper_title: DaCache: Memory Divergence-Aware GPU Cache Management paper_content: The lock-step execution model of GPU requires a warp to have the data blocks for all its threads before execution. However, there is a lack of salient cache mechanisms that can recognize the need of managing GPU cache blocks at the warp level for increasing the number of warps ready for execution. In addition, warp scheduling is very important for GPU-specific cache management to reduce both intra- and inter-warp conflicts and maximize data locality. In this paper, we propose a Divergence-Aware Cache (DaCache) management that can orchestrate L1D cache management and warp scheduling together for GPGPUs. In DaCache, the insertion position of an incoming data block depends on the fetching warp's scheduling priority. Blocks of warps with lower priorities are inserted closer to the LRU position of the LRU-chain so that they have shorter lifetime in cache. This fine-grained insertion policy is extended to prioritize coherent loads over divergent loads so that coherent loads are less vulnerable to both inter- and intra-warp thrashing. DaCache also adopts a constrained replacement policy with L1D bypassing to sustain a good supply of Fully Cached Warps (FCW), along with a dynamic mechanism to adjust FCW during runtime. 
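The bypass-monitor idea above can be pictured as a small duel between each sampled incoming block and the victim it would have replaced; whichever is re-referenced first trains a global counter. The sample size, counter width, and method names here are illustrative, not the paper's mechanism.

class BypassMonitor:
    # For a sample of misses, remember (incoming, victim) pairs and observe which of
    # the two is re-referenced first; a saturating counter then steers future misses.
    def __init__(self, sample_entries=64, ctr_max=1023):
        self.pairs = {}                # incoming block -> would-be victim
        self.victims = {}              # victim block -> incoming block
        self.sample_entries = sample_entries
        self.ctr = ctr_max // 2        # above the midpoint: bypass wins
        self.ctr_max = ctr_max

    def observe_miss(self, incoming, victim):
        if len(self.pairs) >= self.sample_entries:
            return
        if incoming in self.pairs or victim in self.victims:
            return
        self.pairs[incoming] = victim
        self.victims[victim] = incoming

    def observe_access(self, block):
        if block in self.pairs:                      # incoming reused first: insertion helps
            victim = self.pairs.pop(block)
            self.victims.pop(victim, None)
            self.ctr = max(0, self.ctr - 1)
        elif block in self.victims:                  # victim reused first: bypass helps
            incoming = self.victims.pop(block)
            self.pairs.pop(incoming, None)
            self.ctr = min(self.ctr_max, self.ctr + 1)

    def should_bypass(self):
        return self.ctr > self.ctr_max // 2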
Our experiments demonstrate that DaCache achieves 40.4% performance improvement over the baseline GPU and outperforms two state-of-the-art thrashing-resistant techniques RRIP and DIP by 40% and 24.9%, respectively. --- paper_title: Compiler managed micro-cache bypassing for high performance EPIC processors paper_content: Advanced microprocessors have been increasing clock rates, well beyond the Gigahertz boundary. For such high performance microprocessors, a small and fast data micro cache (ucache) is important to overall performance, and proper management of it via load bypassing has a significant performance impact. In this paper, we propose and evaluate a hardware-software collaborative technique to manage ucache bypassing for EPIC processors. The hardware supports the ucache bypassing with a flag in the load instruction format, and the compiler employs static analysis and profiling to identify loads that should bypass the ucache. The collaborative method achieves a significant improvement in performance for the SpecInt2000 benchmarks. On average, about 40%, 30%, 24%, and 22% of load references are identified to bypass 256B, 1K, 4K, and 8K sized ucaches, respectively. This reduces the ucache miss rates by 39%, 32%, 28%, and 26%. The number of pipeline stalls from loads to their uses is reduced by 13%, 9%, 6%, and 5%. Meanwhile, the L1 and L2 cache misses remain largely unchanged. For the 256B ucache, bypassing improves overall performance on average by 5%. --- paper_title: Orchestrating Cache Management and Memory Scheduling for GPGPU Applications paper_content: Modern graphics processing units (GPUs) are delivering tremendous computing horsepower by running tens of thousands of threads concurrently. The massively parallel execution model has been effective to hide the long latency of off-chip memory accesses in graphics and other general computing applications exhibiting regular memory behaviors. With the fast-growing demand for general purpose computing on GPUs (GPGPU), GPU workloads are becoming highly diversified, and thus requiring a synergistic coordination of both computing and memory resources to unleash the computing power of GPUs. Accordingly, recent graphics processors begin to integrate an on-die level-2 (L2) cache. The huge number of threads on GPUs, however, poses significant challenges to L2 cache design. The experiments on a variety of GPGPU applications reveal that the L2 cache may or may not improve the overall performance depending on the characteristics of applications. In this paper, we propose efficient techniques to improve GPGPU performance by orchestrating both L2 cache and memory in a unified framework. The basic philosophy is to exploit the temporal locality among the massive number of concurrent memory requests and minimize the impact of memory divergence behaviors among simultaneously executed groups of threads. Our major contributions are twofold. First, a priority-based cache management is proposed to maximize the chance of frequently revisited data to be kept in the cache. Second, an effective memory scheduling is introduced to reorder memory requests in the memory controller according to the divergence behavior for reducing average waiting time of warps. Simulation results reveal that our techniques enhance the overall performance by 10% on average for memory intensive benchmarks, whereas the maximum gain can be up to 30%. 
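The compiler-managed micro-cache bypassing entry above relies on profiling to mark loads that should skip the ucache. A hedged sketch of that profile-guided selection step is shown below; the profile format, the hit-ratio threshold, and the PC values are illustrative assumptions.

def select_bypass_loads(profile, hit_threshold=0.2):
    # 'profile' maps a load PC to (ucache_hits, executions); loads whose profiled
    # hit ratio falls below the threshold are flagged to carry the bypass bit.
    flags = {}
    for pc, (hits, execs) in profile.items():
        hit_ratio = hits / execs if execs else 0.0
        flags[pc] = hit_ratio < hit_threshold   # True: emit the load with "bypass ucache" set
    return flags

profile = {0x400A10: (30, 1000), 0x400B20: (900, 1000)}
print(select_bypass_loads(profile))             # the cold load is flagged to bypass, the hot one is not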
--- paper_title: Adaptive Cache and Concurrency Allocation on GPGPUs paper_content: Memory bandwidth is critical to GPGPU performance. Exploiting locality in caches can better utilize memory bandwidth. However, memory requests issued by excessive threads cause cache thrashing and saturate memory bandwidth, degrading performance. In this paper, we propose adaptive cache and concurrency allocation (CCA) to prevent cache thrashing and improve the utilization of bandwidth and computational resources, hence improving performance. According to locality and reuse distance of access patterns in GPGPU program, warps on a stream multiprocessor are dynamically divided into three groups: cached, bypassed, and waiting. The data cache accommodates the footprint of cached warps. Bypassed warps cannot allocate cache lines in the data cache to prevent cache thrashing, but are able to take advantage of available memory bandwidth and computational resource. Waiting warps are de-scheduled. Experimental results show that adaptive CCA can significant improve benchmark performance, with 80 percent harmonic mean IPC improvement over the baseline. --- paper_title: A novel approach to cache block reuse predictions paper_content: We introduce a novel approach to predict whether a block should be allocated in the cache or not based on past reuse behavior during its lifetime in the cache. Our evaluation of the scheme shows that the prediction accuracy is between 66% and 94% across the applications and can potentially result in a cache miss rate reduction of between 1% and 32% with an average of 12%. We also find that with a modest hardware cost - a table of around 300 bytes - we can cut the miss rate with up to 14% compared to a cache with an always-allocate strategy --- paper_title: An Efficient Compiler Framework for Cache Bypassing on GPUs paper_content: Graphics processing units (GPUs) have become ubiquitous for general purpose applications due to their tremendous computing power. Initially, GPUs only employ scratchpad memory as on-chip memory. Though scratchpad memory benefits many applications, it is not ideal for those general purpose applications with irregular memory accesses. Hence, GPU vendors have introduced caches in conjunction with scratchpad memory in the recent generations of GPUs. The caches on GPUs are highly configurable. The programmer or compiler can explicitly control cache access or bypass for global load instructions. This highly configurable feature of GPU caches opens up the opportunities for optimizing the cache performance. In this paper, we propose an efficient compiler framework for cache bypassing on GPUs. Our objective is to efficiently utilize the configurable cache and improve the overall performance for general purpose GPU applications. In order to achieve this goal, we first characterize GPU cache utilization and develop performance metrics to estimate the cache reuses and memory traffic. Next, we present efficient algorithms that judiciously select global load instructions for cache access or bypass. Finally, we present techniques to explore the unified cache and shared memory design space. We integrate our techniques into an automatic compiler framework that leverages parallel thread execution instruction set architecture to enable cache bypassing for GPUs. Experiments evaluation on NVIDIA GTX680 using a variety of applications demonstrates that compared to cache-all and bypass-all solutions, our techniques improve the performance from 4.6% to 13.1% for 16 KB L1 cache. 
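The cached/bypassed/waiting grouping described in the CCA entry above can be approximated with a simple capacity-driven partition of warps; the footprint estimates, capacity units, and concurrency limit below are illustrative inputs, not the paper's runtime mechanism.

def allocate_warps(footprints, cache_capacity, max_active):
    # Admit warps as 'cached' while their combined footprint fits in the data cache,
    # let further warps run as 'bypassed' (no cache allocation) up to a concurrency
    # limit, and de-schedule the rest as 'waiting'.
    cached, bypassed, waiting = [], [], []
    used = 0
    for warp, fp in sorted(footprints.items(), key=lambda kv: kv[1]):  # smallest footprints first
        if used + fp <= cache_capacity:
            cached.append(warp)
            used += fp
        elif len(cached) + len(bypassed) < max_active:
            bypassed.append(warp)
        else:
            waiting.append(warp)
    return cached, bypassed, waiting

groups = allocate_warps({0: 8, 1: 12, 2: 40, 3: 64}, cache_capacity=32, max_active=3)
print(groups)   # ([0, 1], [2], [3]) with these illustrative numbers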
--- paper_title: Adaptive Cache Bypassing for Inclusive Last Level Caches paper_content: Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the last level cache (LLC) performance. However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing inherently breaks the inclusion property. This paper presents a solution to enabling cache bypassing for inclusive caches. We introduce a bypass buffer to an LLC. Bypassed cache lines skip the LLC while their tags are stored in this bypass buffer. When a tag is evicted from the bypass buffer, it invalidates the corresponding cache lines in upper level caches to ensure the inclusion property. Our key insight is that the lifetime of a bypassed line, assuming a well-designed bypassing algorithm, should be short in upper level caches and is most likely dead when its tag is evicted from the bypass buffer. Therefore, a small bypass buffer is sufficient to maintain the inclusion property and to reap most performance benefits of bypassing. Furthermore, the bypass buffer facilitates bypassing algorithms by providing the usage information of bypassed lines. We show that a top performing cache bypassing algorithm, which is originally designed for non-inclusive caches, performs comparably for inclusive caches equipped with our bypass buffer. The usage information collected from the bypass buffer also significantly reduces the cost of hardware implementation compared to the original design. --- paper_title: Adaptive GPU cache bypassing paper_content: Modern graphics processing units (GPUs) include hardware-controlled caches to reduce bandwidth requirements and energy consumption. However, current GPU cache hierarchies are inefficient for general purpose GPU (GPGPU) computing. GPGPU workloads tend to include data structures that would not fit in any reasonably sized caches, leading to very low cache hit rates. This problem is exacerbated by the design of current GPUs, which share small caches between many threads. Caching these streaming data structures needlessly burns power while evicting data that may otherwise fit into the cache. We propose a GPU cache management technique to improve the efficiency of small GPU caches while further reducing their power consumption. It adaptively bypasses the GPU cache for blocks that are unlikely to be referenced again before being evicted. This technique saves energy by avoiding needless insertions and evictions while avoiding cache pollution, resulting in better performance. We show that, with a 16KB L1 data cache, dynamic bypassing achieves similar performance to a double-sized L1 cache while reducing energy consumption by 25% and power by 18%. The technique is especially interesting for programs that do not use programmer-managed scratchpad memories. We give a case study to demonstrate the inefficiency of current GPU caches compared to programmer-managed scratchpad memories and show the extent to which cache bypassing can make up for the potential performance loss where the effort to program scratchpad memories is impractical. --- paper_title: Less reused filter: improving l2 cache performance via filtering less reused lines paper_content: The L2 cache is commonly managed using LRU policy.
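The bypass buffer described in the inclusive-LLC entry above tracks only the tags of bypassed lines and back-invalidates the upper levels when a tag falls out. The sketch below models that bookkeeping; the buffer size and the back-invalidation callback are illustrative assumptions.

from collections import OrderedDict

class BypassBuffer:
    # Bypassed lines skip the LLC data array, but their tags are tracked here; when a
    # tag ages out of the buffer, the matching upper-level lines are invalidated so
    # the hierarchy stays inclusive.
    def __init__(self, entries, back_invalidate):
        self.entries = entries
        self.tags = OrderedDict()               # tag -> reuse count while bypassed
        self.back_invalidate = back_invalidate  # callback into the upper-level caches

    def record_bypass(self, tag):
        if tag in self.tags:
            self.tags.move_to_end(tag)
            return
        if len(self.tags) >= self.entries:
            old, _ = self.tags.popitem(last=False)
            self.back_invalidate(old)           # preserve the inclusion property
        self.tags[tag] = 0

    def record_upper_level_hit(self, tag):
        # usage feedback for the bypassing policy: the bypassed line was reused
        if tag in self.tags:
            self.tags[tag] += 1
            self.tags.move_to_end(tag)

evicted = []
buf = BypassBuffer(entries=2, back_invalidate=evicted.append)
for t in ("A", "B", "C"):
    buf.record_bypass(t)
print(evicted)   # ['A'] -- its tag aged out, so the upper levels drop that line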
For workloads that have a working set larger than L2 cache, LRU behaves poorly, resulting in a great number of less reused lines that are never reused or reused for few times. In this case, the cache performance can be improved through retaining a portion of working set in cache for a period long enough. Previous schemes approach this by bypassing never reused lines. Nevertheless, severely constrained by the number of never reused lines, sometimes they deliver no benefit due to the lack of never reused lines. This paper proposes a new filtering mechanism that filters out the less reused lines rather than just never reused lines. The extended scope of bypassing provides more opportunities to fit the working set into cache. This paper also proposes a Less Reused Filter (LRF), a separate structure that precedes L2 cache, to implement the above mechanism. LRF employs a reuse frequency predictor to accurately identify the less reused lines from incoming lines. Meanwhile, based on our observation that most less reused lines have a short life span, LRF places the filtered lines into a small filter buffer to fully utilize them, avoiding extra misses. Our evaluation, for 24 SPEC 2000 benchmarks, shows that augmenting a 512KB LRU-managed L2 cache with a LRF having 32KB filter buffer reduces the average MPKI by 27.5%, narrowing the gap between LRU and OPT by 74.4%. --- paper_title: Energy Savings via Dead Sub-Block Prediction paper_content: Cache memories have traditionally been designed to exploit spatial locality by fetching entire cache lines from memory upon a miss. However, recent studies have shown that often the number of sub-blocks within a line that are actually used is low. Furthermore, those sub-blocks that are used are accessed only a few times before becoming dead (i.e., never accessed again). This results in considerable energy waste since 1) data not needed by the processor is brought into the cache, and 2) data is kept alive in the cache longer than necessary. We propose the Dead Sub-Block Predictor (DSBP) to predict which sub-blocks of a cache line will be actually used and how many times it will be used in order to bring into the cache only those sub-blocks that are necessary, and power them off after they are touched the predicted number of times. We also use DSBP to identify dead lines (i.e., all sub-blocks off) and augment the existing replacement policy by prioritizing dead lines for eviction. Our results show a 24% energy reduction for the whole cache hierarchy when averaged over the SPEC2000, SPEC2006 and NAS-NPB benchmarks. --- paper_title: Efficient utilization of GPGPU cache hierarchy paper_content: Recent GPUs are equipped with general-purpose L1 and L2 caches in an attempt to reduce memory bandwidth demand and improve the performance of some irregular GPGPU applications. However, due to the massive multithreading, GPGPU caches suffer from severe resource contention and low data-sharing which may degrade the performance instead. In this work, we propose three techniques to efficiently utilize and improve the performance of GPGPU caches. The first technique aims to dynamically detect and bypass memory accesses that show streaming behavior. In the second technique, we propose dynamic warp throttling via cores sampling (DWT-CS) to alleviate cache thrashing by throttling the number of active warps per core. 
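The dead sub-block prediction entry above fetches only the sub-blocks of a line that are expected to be used. A minimal sketch of that bookkeeping is shown below; indexing the history by line address, rather than the signature the paper uses, is a simplifying assumption, as are the structure sizes.

class DeadSubBlockPredictor:
    # On eviction, remember which sub-blocks of a line were touched and how often; on
    # the next fill of the same line, bring in only those sub-blocks and power each
    # one off after its predicted number of touches.
    def __init__(self, sub_blocks=8):
        self.sub_blocks = sub_blocks
        self.history = {}        # line address -> predicted touches per sub-block

    def predict_fill(self, line):
        # no history yet: fetch everything with an unknown touch budget
        return self.history.get(line, tuple(None for _ in range(self.sub_blocks)))

    def learn_on_eviction(self, line, touch_counts):
        # touch_counts[i] == 0 means sub-block i was dead for its whole lifetime
        self.history[line] = tuple(touch_counts)

p = DeadSubBlockPredictor()
p.learn_on_eviction(0x80, [3, 0, 0, 1, 0, 0, 0, 0])
print(p.predict_fill(0x80))   # only sub-blocks 0 and 3 need to be fetched next time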
DWT-CS monitors the MPKI at L1, when it exceeds a specific threshold, all GPU cores are sampled with different number of active warps to find the optimal number of warps that mitigates thrashing and achieves the highest performance. Our proposed third technique addresses the problem of GPU cache associativity since many GPGPU applications suffer from severe associativity stalls and conflict misses. Prior work proposed cache bypassing on associativity stalls. In this work, instead of bypassing, we employ a better cache indexing function, Pseudo Random Interleaving Cache (PRIC), that is based on polynomial modulus mapping, in order to fairly and evenly distribute memory accesses over cache sets. The proposed techniques improve the average performance of streaming and contention applications by 1.2X and 2.3X respectively. Compared to prior work, it achieves 1.7X and 1.5X performance improvement over Cache-Conscious Wavefront Scheduler and Memory Request Prioritization Buffer respectively. --- paper_title: DASCA: Dead Write Prediction Assisted STT-RAM Cache Architecture paper_content: Spin-Transfer Torque RAM (STT-RAM) has been considered as a promising candidate for on-chip last-level caches, replacing SRAM for better energy efficiency, smaller die footprint, and scalability. However, it also introduces several new challenges into last-level cache design that need to be overcome for feasible deployment of STT-RAM caches. Among other things, mitigating the impact of slow and energy-hungry write operations is of the utmost importance. In this paper, we propose a new mechanism to reduce write activities of STT-RAM last-level caches. The key observation is that a significant amount of data written to last-level caches is not actually re-referenced again during the lifetime of the corresponding cache blocks. Such write operations, which we call dead writes, can bypass the cache without incurring extra misses by definition. Based on this, we propose Dead Write Prediction Assisted STT-RAM Cache Architecture (DASCA), which predicts and bypasses dead writes for write energy reduction. For this purpose, we first propose a novel classification of dead writes, which is composed of dead-on-arrival fills, dead-value fills, and closing writes, as a theoretical model for redundant write elimination. On top of that, we present a dead write predictor based on a state-of-the-art dead block predictor. Evaluations show that our architecture achieves an energy reduction of 68% (62%) in last-level caches and an additional energy reduction of 10% (16%) in main memory and even improves system performance by 6% (14%) on average compared to the STT-RAM baseline in a single-core (quad-core) system. --- paper_title: Location-aware cache management for many-core processors with deep cache hierarchy paper_content: As cache hierarchies become deeper and the number of cores on a chip increases, managing caches becomes more important for performance and energy. However, current hardware cache management policies do not always adapt optimally to the applications behavior: e.g., caches may be polluted by data structures whose locality cannot be captured by the caches, and producer-consumer communication incurs multiple round trips of coherence messages per cache line transferred. We propose load and store instructions that carry hints regarding into which cache(s) the accessed data should be placed. 
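The better cache indexing function mentioned above aims to spread strided accesses evenly over the sets. The sketch below uses a simpler XOR-folding index as a stand-in for the polynomial-modulus mapping the abstract describes, so it illustrates the goal rather than the exact construction; the set count and block size are illustrative.

def xor_fold_index(addr, num_sets=64, block_bits=7):
    # Fold higher address bits into the set index with XOR so that strided access
    # patterns do not all land in the same set.
    set_bits = num_sets.bit_length() - 1
    line = addr >> block_bits                # drop the block offset
    index = 0
    while line:
        index ^= line & (num_sets - 1)       # fold successive set-width slices
        line >>= set_bits
    return index

stride = 64 * 128                            # addresses one naive-index period apart
addrs = [k * stride for k in range(8)]
print([(a >> 7) % 64 for a in addrs])        # naive low-bit index: every access maps to set 0
print([xor_fold_index(a) for a in addrs])    # folded index: sets 0..7, conflicts removed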
Our instructions allow software to convey locality information to the hardware, while incurring minimal hardware cost and not affecting correctness. Our instructions provide a 1.07x speedup and a 1.24x energy efficiency boost, on average, according to simulations on a 64-core system with private L1 and L2 caches. With a large shared L3 cache added, the benefits increase, providing 1.33x energy reduction on average. --- paper_title: Managing shared last-level cache in a heterogeneous multicore processor paper_content: Heterogeneous multicore processors that integrate CPU cores and data-parallel accelerators such as GPU cores onto the same die raise several new issues for sharing various on-chip resources. The shared last-level cache (LLC) is one of the most important shared resources due to its impact on performance. Accesses to the shared LLC in heterogeneous multicore processors can be dominated by the GPU due to the significantly higher number of threads supported. Under current cache management policies, the CPU applications' share of the LLC can be significantly reduced in the presence of competing GPU applications. For cache sensitive CPU applications, a reduced share of the LLC could lead to significant performance degradation. On the contrary, GPU applications can often tolerate increased memory access latency in the presence of LLC misses when there is sufficient thread-level parallelism. In this work, we propose Heterogeneous LLC Management (HeLM), a novel shared LLC management policy that takes advantage of the GPU's tolerance for memory access latency. HeLM is able to throttle GPU LLC accesses and yield LLC space to cache sensitive CPU applications. GPU LLC access throttling is achieved by allowing GPU threads that can tolerate longer memory access latencies to bypass the LLC. The latency tolerance of a GPU application is determined by the availability of thread-level parallelism, which can be measured at runtime as the average number of threads that are available for issuing. Our heterogeneous LLC management scheme outperforms LRU policy by 12.5% and TAP-RRIP by 5.6% for a processor with 4 CPU and 4 GPU cores. --- paper_title: Sampling Dead Block Prediction for Last-Level Caches paper_content: Last-level caches (LLCs) are large structures with significant power requirements. They can be quite inefficient. On average, a cache block in a 2MB LRU-managed LLC is dead 86% of the time, i.e., it will not be referenced again before it is evicted. This paper introduces sampling dead block prediction, a technique that samples program counters (PCs) to determine when a cache block is likely to be dead. Rather than learning from accesses and evictions from every set in the cache, a sampling predictor keeps track of a small number of sets using partial tags. Sampling allows the predictor to use far less state than previous predictors to make predictions with superior accuracy. Dead block prediction can be used to drive a dead block replacement and bypass optimization. A sampling predictor can reduce the number of LLC misses over LRU by 11.7% for memory-intensive single-thread benchmarks and 23% for multi-core workloads. The reduction in misses yields a geometric mean speedup of 5.9% for single-thread benchmarks and a geometric mean normalized weighted speedup of 12.5% for multi-core workloads. 
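The HeLM entry above throttles GPU accesses to the shared LLC when the GPU has enough thread-level parallelism to hide latency. A hedged sketch of that routing decision is below; the ready-warp metric and threshold are illustrative assumptions.

def route_request(is_gpu, avg_ready_warps, tlp_threshold=16):
    # When the GPU has enough ready warps to tolerate memory latency, send its
    # requests past the shared LLC so the capacity is left to cache-sensitive CPU blocks.
    if is_gpu and avg_ready_warps >= tlp_threshold:
        return "bypass_llc"        # latency-tolerant GPU phase
    return "access_llc"            # CPU requests, or a latency-bound GPU phase

print(route_request(is_gpu=True, avg_ready_warps=28))   # bypass_llc
print(route_request(is_gpu=True, avg_ready_warps=4))    # access_llc
print(route_request(is_gpu=False, avg_ready_warps=0))   # access_llc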
Due to the reduced state and number of accesses, the sampling predictor consumes only 3.1% of the dynamic power and 1.2% of the leakage power of a baseline 2MB LLC, comparing favorably with more costly techniques. The sampling predictor can even be used to significantly improve a cache with a default random replacement policy. --- paper_title: Counter-Based Cache Replacement and Bypassing Algorithms paper_content: Recent studies have shown that, in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, motivating the design of alternative replacement algorithms to improve cache performance. In LRU replacement, a line, after its last use, remains in the cache for a long time until it becomes the LRU line. Such dead lines unnecessarily reduce the cache capacity available for other lines. In addition, in multilevel caches, temporal reuse patterns are often inverted, showing in the L1 cache but, due to the filtering effect of the L1 cache, not showing in the L2 cache. At the L2, these lines appear to be brought in the cache but are never reaccessed until they are replaced. These lines unnecessarily pollute the L2 cache. This paper proposes a new counter-based approach to deal with the above problems. For the former problem, we predict lines that have become dead and replace them early from the L2 cache. For the latter problem, we identify never-reaccessed lines, bypass the L2 cache, and place them directly in the L1 cache. Both techniques are achieved through a single counter-based mechanism. In our approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest such as certain cache accesses occurs. When the counter reaches a threshold, the line "expires" and becomes replaceable. Each line's threshold is unique and is dynamically learned. We propose and evaluate two new replacement algorithms: Access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 capacity-constrained SPEC2000 benchmarks by up to 48 percent and 15 percent on average (7 percent on average for the whole 21 Spec2000 benchmarks). Cache bypassing further reduces L2 cache pollution and improves the average speedups to 17 percent (8 percent for the whole 21 Spec2000 benchmarks). --- paper_title: Bypass and insertion algorithms for exclusive last-level caches paper_content: Inclusive last-level caches (LLCs) waste precious silicon estate due to cross-level replication of cache blocks. As the industry moves toward cache hierarchies with larger inner levels, this wasted cache space leads to bigger performance losses compared to exclusive LLCs. However, exclusive LLCs make the design of replacement policies more challenging. While in an inclusive LLC a block can gather a filtered access history, this is not possible in an exclusive design because the block is de-allocated from the LLC on a hit. As a result, the popular least-recently-used replacement policy and its approximations are rendered ineffective and proper choice of insertion ages of cache blocks becomes even more important in exclusive designs. On the other hand, it is not necessary to fill every block into an exclusive LLC. This is known as selective cache bypassing and is not possible to implement in an inclusive LLC because that would violate inclusion. This paper explores insertion and bypass algorithms for exclusive LLCs.
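The counter-based expiration mechanism summarized above can be illustrated with a per-line event counter and a threshold; in the sketch below the threshold is a fixed illustrative constant, whereas the technique described above learns it dynamically per line.

class ExpiringLine:
    # A cached line augmented with an event counter; once the counter reaches its
    # threshold the line "expires" and becomes the preferred replacement victim.
    def __init__(self, threshold=16):
        self.threshold = threshold   # would be learned from the line's previous generation
        self.counter = 0

    def on_set_access(self):         # an event of interest, e.g. any access to the line's set
        self.counter += 1

    def on_hit(self):
        self.counter = 0             # the line proved live: restart its interval

    def expired(self):
        return self.counter >= self.threshold

def choose_victim(lines):
    # Prefer an expired line; otherwise fall back to the last (LRU-like) way.
    for way, line in enumerate(lines):
        if line.expired():
            return way
    return len(lines) - 1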
Our detailed execution-driven simulation results show that a combination of our best insertion and bypass policies delivers an improvement of up to 61.2% and on average (geometric mean) 3.4% in terms of instructions retired per cycle (IPC) for 97 single-threaded dynamic instruction traces spanning selected SPEC 2006 and server applications, running on a 2 MB 16-way exclusive LLC compared to a baseline exclusive design in the presence of well-tuned multi-stream hardware prefetchers. The corresponding improvements in throughput for 35 4-way multi-programmed workloads running with an 8 MB 16-way shared exclusive LLC are 20.6% (maximum) and 2.5% (geometric mean). --- paper_title: Introducing hierarchy-awareness in replacement and bypass algorithms for last-level caches paper_content: The replacement policies for the last-level caches (LLCs) are usually designed based on the access information available locally at the LLC. These policies are inherently sub-optimal due to lack of information about the activities in the inner-levels of the hierarchy. This paper introduces cache hierarchy-aware replacement (CHAR) algorithms for inclusive LLCs (or L3 caches) and applies the same algorithms to implement efficient bypass techniques for exclusive LLCs in a three-level hierarchy. In a hierarchy with an inclusive LLC, these algorithms mine the L2 cache eviction stream and decide if a block evicted from the L2 cache should be made a victim candidate in the LLC based on the access pattern of the evicted block. Ours is the first proposal that explores the possibility of using a subset of L2 cache eviction hints to improve the replacement algorithms of an inclusive LLC. The CHAR algorithm classifies the blocks residing in the L2 cache based on their reuse patterns and dynamically estimates the reuse probability of each class of blocks to generate selective replacement hints to the LLC. Compared to the static re-reference interval prediction (SRRIP) policy, our proposal offers an average reduction of 10.9% in LLC misses and an average improvement of 3.8% in instructions retired per cycle (IPC) for twelve single-threaded applications. The corresponding reduction in LLC misses for one hundred 4-way multi-programmed workloads is 6.8% leading to an average improvement of 3.9% in through-put. Finally, our proposal achieves an 11.1% reduction in LLC misses and a 4.2% reduction in parallel execution cycles for six 8-way threaded shared memory applications compared to the SRRIP policy. In a cache hierarchy with an exclusive LLC, our CHAR proposal offers an effective algorithm for selecting the subset of blocks (clean or dirty) evicted from the L2 cache that need not be written to the LLC and can be bypassed. Compared to the TC-AGE policy (analogue of SRRIP for exclusive LLC), our best exclusive LLC proposal improves average throughput by 3.2% while saving an average of 66.6% of data transactions from the L2 cache to the on-die interconnect for one hundred 4-way multi-programmed workloads. Compared to an inclusive LLC design with an identical hierarchy, this corresponds to an average throughput improvement of 8.2% with only 17% more data write transactions originating from the L2 cache. --- paper_title: Hardware identification of cache conflict misses paper_content: This paper describes the Miss Classification Table, a simple mechanism that enables the processor or memory controller to identify each cache miss as either a conflict miss or a capacity (non-conflict) miss. 
The miss classification table works by storing part of the tag of the most recently evicted line of a cache set. If the next miss to that cache set has a matching tag, it is identified as a conflict miss. This technique correctly identifies 87% of misses in the worst case. Several applications of this information are demonstrated, including improvements to victim caching, next-line prefetching, cache exclusion, and a pseudo-associative cache. This paper also presents the Adaptive Miss Buffer (AMB), which combines several of these techniques, targeting each miss with the most appropriate optimization, all within a single small miss buffer. The AMB's combination of techniques achieves 16% better performance than any single technique alone. --- paper_title: WADE: Writeback-aware dynamic cache management for NVM-based main memory system paper_content: Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC). In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory. The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical. The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads. --- paper_title: Exploiting Core Working Sets to Filter the L1 Cache with Random Sampling paper_content: Locality is often characterized by working sets, defined by Denning as the set of distinct addresses referenced within a certain window of time. This definition ignores the fact that dramatic differences exist between the usage patterns of frequently used data and transient data. We therefore propose to extend Denning's definition with that of core working sets, which identify blocks that are used most frequently and for the longest time. The concept of a core motivates the design of dual-cache structures that provide special treatment for the core.
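The miss classification table summarized above reduces to a small per-set memory of the last evicted partial tag. A minimal sketch is below; the number of partial-tag bits and the set count are illustrative choices.

class MissClassificationTable:
    # Each set remembers a partial tag of the line it most recently evicted; a miss
    # carrying the same partial tag is flagged as a conflict miss, otherwise it is
    # treated as a capacity (non-conflict) miss.
    def __init__(self, num_sets, partial_bits=8):
        self.last_evicted = [None] * num_sets
        self.mask = (1 << partial_bits) - 1

    def on_eviction(self, set_index, tag):
        self.last_evicted[set_index] = tag & self.mask

    def classify_miss(self, set_index, tag):
        if self.last_evicted[set_index] == (tag & self.mask):
            return "conflict"      # the line was here recently and got bumped
        return "capacity"

mct = MissClassificationTable(num_sets=64)
mct.on_eviction(3, tag=0xABCD)
print(mct.classify_miss(3, tag=0xABCD))   # 'conflict' -> could steer toward a victim cache
print(mct.classify_miss(3, tag=0x1234))   # 'capacity'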
In particular, we present a probabilistic locality predictor for L1 caches that leverages the skewed popularity of blocks to distinguish transient cache insertions from more persistent ones. We further present a dual L1 design that inserts only frequently used blocks into a low-latency, low-power, direct-mapped main cache, while serving others from a small fully associative filter. To reduce the prohibitive cost of such a filter, we present a content addressable memory design that eliminates most of the costly lookups using a small auxiliary lookup table. The proposed design enables a 16K direct-mapped L1 cache, augmented with a small 2K filter, to outperform a 32K 4-way cache, while at the same time consumes 70-80 percent less dynamic power and 40 percent less static power. --- paper_title: BEAR: techniques for mitigating bandwidth bloat in gigascale DRAM caches paper_content: Die stacking memory technology can enable gigascale DRAM caches that can operate at 4x-8x higher bandwidth than commodity DRAM. Such caches can improve system performance by servicing data at a faster rate when the requested data is found in the cache, potentially increasing the memory bandwidth of the system by 4x-8x. Unfortunately, a DRAM cache uses the available memory bandwidth not only for data transfer on cache hits, but also for other secondary operations such as cache miss detection, fill on cache miss, and writeback lookup and content update on dirty evictions from the last-level on-chip cache. Ideally, we want the bandwidth consumed for such secondary operations to be negligible, and have almost all the bandwidth be available for transfer of useful data from the DRAM cache to the processor. We evaluate a 1GB DRAM cache, architected as Alloy Cache, and show that even the most bandwidth-efficient proposal for DRAM cache consumes 3.8x bandwidth compared to an idealized DRAM cache that does not consume any bandwidth for secondary operations. We also show that redesigning the DRAM cache to minimize the bandwidth consumed by secondary operations can potentially improve system performance by 22%. To that end, this paper proposes Bandwidth Efficient ARchitecture (BEAR) for DRAM caches. BEAR integrates three components, one each for reducing the bandwidth consumed by miss detection, miss fill, and writeback probes. BEAR reduces the bandwidth consumption of DRAM cache by 32%, which reduces cache hit latency by 24% and increases overall system performance by 10%. BEAR, with negligible overhead, outperforms an idealized SRAM Tag-Store design that incurs an unacceptable overhead of 64 megabytes, as well as Sector Cache designs that incur an SRAM storage overhead of 6 megabytes. --- paper_title: Adaptive placement and migration policy for an STT-RAM-based hybrid cache paper_content: Emerging Non-Volatile Memories (NVM) such as Spin-Torque Transfer RAM (STT-RAM) and Resistive RAM (RRAM) have been explored as potential alternatives for traditional SRAM-based Last-Level-Caches (LLCs) due to the benefits of higher density and lower leakage power. However, NVM technologies have long latency and high energy overhead associated with the write operations. Consequently, a hybrid STT-RAM and SRAM based LLC architecture has been proposed in the hope of exploiting high density and low leakage power of STT-RAM and low write overhead of SRAM. Such a hybrid cache design relies on an intelligent block placement policy that makes good use of the characteristics of both STT-RAM and SRAM technology. 
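Returning to the probabilistic L1 filter summarized above, the sketch below shows one way a random-sampling dual structure could behave: blocks first land in a small filter and are promoted into the direct-mapped main array only with a small per-access probability, so persistently hot blocks win. The sizes and sampling probability are illustrative assumptions.

import random
from collections import OrderedDict

class SamplingFilterCache:
    # Transient blocks live in a small fully associative filter; only blocks that keep
    # being re-accessed tend to get sampled into the direct-mapped main array.
    def __init__(self, main_sets=512, filter_entries=32, promote_prob=1 / 16):
        self.main = {}                         # set index -> block (direct mapped)
        self.filter = OrderedDict()            # small fully associative filter, LRU order
        self.main_sets = main_sets
        self.filter_entries = filter_entries
        self.promote_prob = promote_prob

    def access(self, block):
        idx = block % self.main_sets
        if self.main.get(idx) == block:
            return "main_hit"
        if block in self.filter:
            self.filter.move_to_end(block)
            if random.random() < self.promote_prob:   # skewed popularity: hot blocks get promoted eventually
                self.main[idx] = block
            return "filter_hit"
        if len(self.filter) >= self.filter_entries:   # miss: fill only the filter
            self.filter.popitem(last=False)
        self.filter[block] = None
        return "miss"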
--- paper_title: Enhancing LRU replacement via phantom associativity paper_content: In this paper, we propose a novel cache design, Phantom Associative Cache (PAC), that alleviates cache thrashing in L2 caches by keeping the in-cache data blocks for a longer time period. To realize PAC, we introduce the concept of phantom lines. A phantom line works like a real cache line in the LRU stack but does not hold any data or tag. When a phantom line is selected for replacement, cache bypassing is performed instead of replacement. By using appropriate number of phantom lines, PAC can always keep the data blocks that show stronger locality longer in the cache and bypass the cache for other blocks. We show that PAC can be implemented reasonably in practice. The experimental results show that on average PAC reduces cache misses by 17.95% for twelve CPU2006 benchmarks with Misses Per Kilo-Instruction (MPKI) larger than 1 and by 6.61% for all CPU2006 and PARSEC benchmarks. With the help of compiler hints, PAC can further reduce cache misses by 22% for benchmarks that have relatively high MPKI or miss rate. --- paper_title: Load Miss Prediction - Exploiting Power Performance Trade-offs paper_content: Modern CPUs operate at GHz frequencies, but the latencies of memory accesses are still relatively large, in the order of hundreds of cycles. Deeper cache hierarchies with larger cache sizes can mask these latencies for codes with good data locality and reuse, such as structured dense matrix computations. However, cache hierarchies do not necessarily benefit sparse scientific computing codes, which tend to have limited data locality and reuse. We therefore propose a new memory architecture with a load miss predictor (LMP), which includes a data bypass cache and a predictor table, to reduce access latencies by determining whether a load should bypass the main cache hierarchy and issue an early load to main memory. Our architecture uses the L2 (and lower caches) as a victim cache for data removed from our bypass cache. We use cycle-accurate simulations, with SimpleScalar and Wattch to show that our LMP improves the performance of sparse codes, our application domain of interest, on average by 14%, with a 13.6% increase in power. When the LMP is used with dynamic voltage and frequency scaling (DVFS), performance can be improved by 8.7% with system power savings of 7.3% and energy reduction of 17.3% at 1800 MHz relative to the base system at 2000 MHz. Alternatively our LMP can be used to improve the performance of SPEC benchmarks by an average of 2.9 % at the cost of 7.1 % increase in average power. --- paper_title: Optimal bypass monitor for high performance last-level caches paper_content: In the last-level cache, large amounts of blocks have reuse distances greater than the available cache capacity. Cache performance and efficiency can be improved if some subset of these distant reuse blocks can reside in the cache longer. The bypass technique is an effective and attractive solution that prevents the insertion of harmful blocks. --- paper_title: Adaptive Cache Bypassing for Inclusive Last Level Caches paper_content: Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the last level cache (LLC) performance. 
However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing inherently breaks the inclusion property. This paper presents a solution to enabling cache bypassing for inclusive caches. We introduce a bypass buffer to an LLC. Bypassed cache lines skip the LLC while their tags are stored in this bypass buffer. When a tag is evicted from the bypass buffer, it invalidates the corresponding cache lines in upper level caches to ensure the inclusion property. Our key insight is that the lifetime of a bypassed line, assuming a well-designed bypassing algorithm, should be short in upper level caches and is most likely dead when its tag is evicted from the bypass buffer. Therefore, a small bypass buffer is sufficient to maintain the inclusion property and to reap most performance benefits of bypassing. Furthermore, the bypass buffer facilitates bypassing algorithms by providing the usage information of bypassed lines. We show that a top performing cache bypassing algorithm, which is originally designed for non-inclusive caches, performs comparably for inclusive caches equipped with our bypass buffer. The usage information collected from the bypass buffer also significantly reduces the cost of hardware implementation compared to the original design. --- paper_title: Adaptive GPU cache bypassing paper_content: Modern graphics processing units (GPUs) include hardware- controlled caches to reduce bandwidth requirements and energy consumption. However, current GPU cache hierarchies are inefficient for general purpose GPU (GPGPU) comput- ing. GPGPU workloads tend to include data structures that would not fit in any reasonably sized caches, leading to very low cache hit rates. This problem is exacerbated by the design of current GPUs, which share small caches be- tween many threads. Caching these streaming data struc- tures needlessly burns power while evicting data that may otherwise fit into the cache. We propose a GPU cache management technique to im- prove the efficiency of small GPU caches while further re- ducing their power consumption. It adaptively bypasses the GPU cache for blocks that are unlikely to be referenced again before being evicted. This technique saves energy by avoid- ing needless insertions and evictions while avoiding cache pollution, resulting in better performance. We show that, with a 16KB L1 data cache, dynamic bypassing achieves sim- ilar performance to a double-sized L1 cache while reducing energy consumption by 25% and power by 18%. The technique is especially interesting for programs that do not use programmer-managed scratchpad memories. We give a case study to demonstrate the inefficiency of current GPU caches compared to programmer-managed scratchpad memories and show the extent to which cache bypassing can make up for the potential performance loss where the effort to program scratchpad memories is impractical. --- paper_title: Energy Savings via Dead Sub-Block Prediction paper_content: Cache memories have traditionally been designed to exploit spatial locality by fetching entire cache lines from memory upon a miss. However, recent studies have shown that often the number of sub-blocks within a line that are actually used is low. Furthermore, those sub-blocks that are used are accessed only a few times before becoming dead (i.e., never accessed again). This results in considerable energy waste since 1) data not needed by the processor is brought into the cache, and 2) data is kept alive in the cache longer than necessary. 
We propose the Dead Sub-Block Predictor (DSBP) to predict which sub-blocks of a cache line will be actually used and how many times it will be used in order to bring into the cache only those sub-blocks that are necessary, and power them off after they are touched the predicted number of times. We also use DSBP to identify dead lines (i.e., all sub-blocks off) and augment the existing replacement policy by prioritizing dead lines for eviction. Our results show a 24% energy reduction for the whole cache hierarchy when averaged over the SPEC2000, SPEC2006 and NAS-NPB benchmarks. --- paper_title: DASCA: Dead Write Prediction Assisted STT-RAM Cache Architecture paper_content: Spin-Transfer Torque RAM (STT-RAM) has been considered as a promising candidate for on-chip last-level caches, replacing SRAM for better energy efficiency, smaller die footprint, and scalability. However, it also introduces several new challenges into last-level cache design that need to be overcome for feasible deployment of STT-RAM caches. Among other things, mitigating the impact of slow and energy-hungry write operations is of the utmost importance. In this paper, we propose a new mechanism to reduce write activities of STT-RAM last-level caches. The key observation is that a significant amount of data written to last-level caches is not actually re-referenced again during the lifetime of the corresponding cache blocks. Such write operations, which we call dead writes, can bypass the cache without incurring extra misses by definition. Based on this, we propose Dead Write Prediction Assisted STT-RAM Cache Architecture (DASCA), which predicts and bypasses dead writes for write energy reduction. For this purpose, we first propose a novel classification of dead writes, which is composed of dead-on-arrival fills, dead-value fills, and closing writes, as a theoretical model for redundant write elimination. On top of that, we present a dead write predictor based on a state-of-the-art dead block predictor. Evaluations show that our architecture achieves an energy reduction of 68% (62%) in last-level caches and an additional energy reduction of 10% (16%) in main memory and even improves system performance by 6% (14%) on average compared to the STT-RAM baseline in a single-core (quad-core) system. --- paper_title: A Survey of Techniques for Cache Locking paper_content: Cache memory, although important for boosting application performance, is also a source of execution time variability, and this makes its use difficult in systems requiring worst-case execution time (WCET) guarantees. Cache locking is a promising approach for simplifying WCET estimation and providing predictability, and hence, several commercial processors provide ability for locking cache. However, cache locking also has several disadvantages (e.g., extra misses for unlocked blocks, complex algorithms required for selection of locking contents) and hence, a careful management is required to realize the full potential of cache locking. In this article, we present a survey of techniques proposed for cache locking. We categorize the techniques into several groups to underscore their similarities and differences. We also discuss the opportunities and obstacles in using cache locking. We hope that this article will help researchers gain insight into cache locking schemes and will also stimulate further work in this area. 
--- paper_title: MASTER: A Multicore Cache Energy-Saving Technique Using Dynamic Cache Reconfiguration paper_content: With increasing number of on-chip cores and CMOS scaling, the size of last-level caches (LLCs) is on the rise and hence, managing their leakage energy consumption has become vital for continuing to scale performance. In multicore systems, the locality of memory access stream is significantly reduced because of multiplexing of access streams from different running programs and hence, leakage energy-saving techniques such as decay cache, which rely on memory access locality, do not save a large amount of energy. The techniques based on way level allocation provide very coarse granularity and the techniques based on offline profiling become infeasible to use for large number of cores. We present a multicore cache energy saving technique using dynamic cache reconfiguration (MASTER) that uses online profiling to predict energy consumption of running programs at multiple LLC sizes. Using these estimates, suitable cache quotas are allocated to different programs using cache coloring scheme and the unused LLC space is turned off to save energy. Even for four core systems, the implementation overhead of MASTER is only 0.8% of L2 size. We evaluate MASTER using out-of-order simulations with multiprogrammed workloads from SPEC2006 and compare it with conventional cache leakage energy-saving techniques. The results show that MASTER gives the highest saving in energy and does not harm performance or cause unfairness. For twoand four-core simulations, the average savings in memory subsystem (which includes LLC and main memory) energy over shared baseline LLC are 15% and 11%, respectively. Also, the average values of weighted speedup and fair speedup are close to one (≥0.98). --- paper_title: A Survey of Recent Prefetching Techniques for Processor Caches paper_content: As the trends of process scaling make memory systems an even more crucial bottleneck, the importance of latency hiding techniques such as prefetching grows further. However, naively using prefetching can harm performance and energy efficiency and, hence, several factors and parameters need to be taken into account to fully realize its potential. In this article, we survey several recent techniques that aim to improve the implementation and effectiveness of prefetching. We characterize the techniques on several parameters to highlight their similarities and differences. The aim of this survey is to provide insights to researchers into working of prefetching techniques and spark interesting future work for improving the performance advantages of prefetching even further. --- paper_title: Counter-Based Cache Replacement and Bypassing Algorithms paper_content: Recent studies have shown that, in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, motivating the design of alternative replacement algorithms to improve cache performance. In LRU replacement, a line, after its last use, remains in the cache for a long time until it becomes the LRU line. Such deadlines unnecessarily reduce the cache capacity available for other lines. In addition, in multilevel caches, temporal reuse patterns are often inverted, showing in the L1 cache but, due to the filtering effect of the L1 cache, not showing in the L2 cache. At the L2, these lines appear to be brought in the cache but are never reaccessed until they are replaced. 
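The color-based partitioning in the MASTER entry above can be pictured as a greedy allocation driven by each program's predicted miss curve; the curves, units, and color count below are hypothetical profiling output, and the greedy rule is a simplification of the scheme described above.

def allocate_colors(miss_curves, total_colors):
    # Hand out cache colors one at a time to whichever program saves the most misses
    # from an extra color; colors that yield no further benefit are left unassigned
    # and could be power-gated.
    alloc = {prog: 0 for prog in miss_curves}
    for _ in range(total_colors):
        best_prog, best_gain = None, 0
        for prog, curve in miss_curves.items():
            have = alloc[prog]
            if have + 1 < len(curve):
                gain = curve[have] - curve[have + 1]   # misses saved by one more color
                if gain > best_gain:
                    best_prog, best_gain = prog, gain
        if best_prog is None:          # nobody benefits: remaining colors can sleep
            break
        alloc[best_prog] += 1
    return alloc

# misses (in millions) versus number of colors, from hypothetical online profiling
curves = {"mcf": [90, 60, 45, 38, 36, 36], "libquantum": [50, 50, 50, 50, 50, 50]}
print(allocate_colors(curves, total_colors=8))   # the streaming program gets no colors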
These lines unnecessarily pollute the L2 cache. This paper proposes a new counter-based approach to deal with the above problems. For the former problem, we predict lines that have become dead and replace them early from the L2 cache. For the latter problem, we identify never-reaccessed lines, bypass the L2 cache, and place them directly in the L1 cache. Both techniques are achieved through a single counter-based mechanism. In our approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest such as certain cache accesses occurs. When the counter reaches a threshold, the line ";expires"; and becomes replaceable. Each line's threshold is unique and is dynamically learned. We propose and evaluate two new replacement algorithms: Access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 capacity-constrained SPEC2000 benchmarks by up to 48 percent and 15 percent on average (7 percent on average for the whole 21 Spec2000 benchmarks). Cache bypassing further reduces L2 cache pollution and improves the average speedups to 17 percent (8 percent for the whole 21 Spec2000 benchmarks). --- paper_title: SCIP: Selective cache insertion and bypassing to improve the performance of last-level caches paper_content: The design of an effective last-level cache (LLC) is crucial to the overall processor performance and, consequently, continues to be the center of substantial research. Unfortunately, LLCs in modern high-performance processors are not used efficiently. One major problem suffered by LLCs is their low hit rates caused by the large fraction of cache blocks that do not get re-accessed after being brought into the LLC following a cache miss. These blocks do not contribute any cache hits and usually induce cache pollution and thrashing. Cache bypassing presents an effective solution to this problem. Cache blocks that are predicted not to be accessed while residing in the cache are not inserted into the LLC following a miss, instead they bypass the LLC and are only inserted in the higher cache levels. This paper presents a simple, low-hardware overhead, yet effective, cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which to bypass it following a miss based on past access/bypass patterns. Our proposed algorithm is thoroughly evaluated using a detailed simulation environment where its effectiveness, performance-improvement capabilities, and robustness are demonstrated. Moreover, it is shown to outperform the state-of-the-art cache bypassing algorithm in both a uniprocessor and a multi-core processor settings. --- paper_title: Less reused filter: improving l2 cache performance via filtering less reused lines paper_content: The L2 cache is commonly managed using LRU policy. For workloads that have a working set larger than L2 cache, LRU behaves poorly, resulting in a great number of less reused lines that are never reused or reused for few times. In this case, the cache performance can be improved through retaining a portion of working set in cache for a period long enough. Previous schemes approach this by bypassing never reused lines. Nevertheless, severely constrained by the number of never reused lines, sometimes they deliver no benefit due to the lack of never reused lines. This paper proposes a new filtering mechanism that filters out the less reused lines rather than just never reused lines. 
The extended scope of bypassing provides more opportunities to fit the working set into cache. This paper also proposes a Less Reused Filter (LRF), a separate structure that precedes L2 cache, to implement the above mechanism. LRF employs a reuse frequency predictor to accurately identify the less reused lines from incoming lines. Meanwhile, based on our observation that most less reused lines have a short life span, LRF places the filtered lines into a small filter buffer to fully utilize them, avoiding extra misses. Our evaluation, for 24 SPEC 2000 benchmarks, shows that augmenting a 512KB LRU-managed L2 cache with a LRF having 32KB filter buffer reduces the average MPKI by 27.5%, narrowing the gap between LRU and OPT by 74.4%. --- paper_title: Improving Cache Management Policies Using Dynamic Reuse Distances paper_content: Cache management policies such as replacement, bypass, or shared cache partitioning have been relying on data reuse behavior to predict the future. This paper proposes a new way to use dynamic reuse distances to further improve such policies. A new replacement policy is proposed which prevents replacing a cache line until a certain number of accesses to its cache set, called a Protecting Distance (PD). The policy protects a cache line long enough for it to be reused, but not beyond that to avoid cache pollution. This can be combined with a bypass mechanism that also relies on dynamic reuse analysis to bypass lines with less expected reuse. A miss fetch is bypassed if there are no unprotected lines. A hit rate model based on dynamic reuse history is proposed and the PD that maximizes the hit rate is dynamically computed. The PD is recomputed periodically to track a program's memory access behavior and phases. Next, a new multi-core cache partitioning policy is proposed using the concept of protection. It manages lifetimes of lines from different cores (threads) in such a way that the overall hit rate is maximized. The average per-thread lifetime is reduced by decreasing the thread's PD. The single-core PD-based replacement policy with bypass achieves an average speedup of 4.2% over the DIP policy, while the average speedups over DIP are 1.5% for dynamic RRIP (DRRIP) and 1.6% for sampling dead-block prediction (SDP). The 16-core PD-based partitioning policy improves the average weighted IPC by 5.2%, throughput by 6.4% and fairness by 9.9% over thread-aware DRRIP (TA-DRRIP). The required hardware is evaluated and the overhead is shown to be manageable. --- paper_title: Global Priority Table for Last-Level Caches paper_content: Last-level caches (LLC) grow large with significant power consumption. As LLC's capacity increases, it becomes quite inefficient. As recent studies show, a large percent of cache blocks are dead during the cache time. There is a growing need for LLC management to reduce the number of dead block in the LLC. However, there is a significant power requirement for the dead block's in-placement and replacement operations. In this paper, we introduce a global priority table predictor, a technique which is used for determining a cache block's priority when it attempts to insert into the LLC. It is similar to previous predictors, such as reuse distance and dead block predictor. The global priority table is indexed by the hash value of the block address and stores the priority value of the associate cache block. The priority value can be used to drive a dead block replacement and bypass optimization. 
Through the priority table, a large number of dead blocks could be bypassed. It achieves an average reduction of 13.2% in the number of LLC miss for twenty single-thread workloads from the SPEC2006 suite and 29.9% for ten multi-programmed workloads. It also yields a geometric mean speedup of 8.6% for single-thread workloads and a geometric mean normalized weighted speedup of 39.1% for multi-programmed workloads. --- paper_title: SLIP: reducing wire energy in the memory hierarchy paper_content: Wire energy has become the major contributor to energy in large lower level caches. While wire energy is related to wire latency its costs are exposed differently in the memory hierarchy. We propose Sub-Level Insertion Policy (SLIP), a cache management policy which improves cache energy consumption by increasing the number of accesses from energy efficient locations while simultaneously decreasing intra-level data movement. In SLIP, each cache level is partitioned into several cache sublevels of differing sizes. Then, the recent reuse distance distribution of a line is used to choose an energy-optimized insertion and movement policy for the line. The policy choice is made by a hardware unit that predicts the number of accesses and inter-level movements. Using a full-system simulation including OS interactions and hardware overheads, we show that SLIP saves 35% energy at the L2 and 22% energy at the L3 level and performs 0.75% better than a regular cache hierarchy in a single core system. When configured to include a bypassing policy, SLIP reduces traffic to DRAM by 2.2%. This is achieved at the cost of storing 12b metadata per cache line (2.3% overhead), a 6b policy in the PTE, and 32b distribution metadata for each page in the DRAM (a overhead of 0.1%). Using SLIP in a multiprogrammed system saves 47% LLC energy, and reduces traffic to DRAM by 5.5%. --- paper_title: Optimal bypass monitor for high performance last-level caches paper_content: In the last-level cache, large amounts of blocks have reuse distances greater than the available cache capacity. Cache performance and efficiency can be improved if some subset of these distant reuse blocks can reside in the cache longer. The bypass technique is an effective and attractive solution that prevents the insertion of harmful blocks. --- paper_title: High performance cache replacement using re-reference interval prediction (RRIP) paper_content: Practical cache replacement policies attempt to emulate optimal replacement by predicting the re-reference interval of a cache block. The commonly used LRU replacement policy always predicts a near-immediate re-reference interval on cache hits and misses. Applications that exhibit a distant re-reference interval perform badly under LRU. Such applications usually have a working-set larger than the cache or have frequent bursts of references to non-temporal data (called scans). To improve the performance of such workloads, this paper proposes cache replacement using Re-reference Interval Prediction (RRIP). We propose Static RRIP (SRRIP) that is scan-resistant and Dynamic RRIP (DRRIP) that is both scan-resistant and thrash-resistant. Both RRIP policies require only 2-bits per cache block and easily integrate into existing LRU approximations found in modern processors. 
Our evaluations using PC games, multimedia, server and SPEC CPU2006 workloads on a single-core processor with a 2MB last-level cache (LLC) show that both SRRIP and DRRIP outperform LRU replacement on the throughput metric by an average of 4% and 10% respectively. Our evaluations with over 1000 multi-programmed workloads on a 4-core CMP with an 8MB shared LLC show that SRRIP and DRRIP outperform LRU replacement on the throughput metric by an average of 7% and 9% respectively. We also show that RRIP outperforms LFU, the state-of the art scan-resistant replacement algorithm to-date. For the cache configurations under study, RRIP requires 2X less hardware than LRU and 2.5X less hardware than LFU. --- paper_title: Hardware identification of cache conflict misses paper_content: This paper describes the Miss Classification Table, a simple mechanism that enables the processor or memory controller to identify each cache miss as either a conflict miss or a capacity (non-conflict) miss. The miss classification table works by storing part of the tag of the most recently evicted line of a cache set. If the next miss to that cache set has a matching tag, it is identified as a conflict miss. This technique correctly identifies 87% of misses in the worst case. Several applications of this information are demonstrated, including improvements to victim caching, next-line prefetching, cache exclusion, and a pseudo-associative cache. This paper also presents the Adaptive Miss Buffer (AMB), which combines several of these techniques, targeting each miss with the most appropriate optimization, all within a single small miss buffer. The AMB's combination of techniques achieves 16% better performance than any single technique alone. --- paper_title: A study of replacement algorithms for a virtual-storage computer paper_content: One of the basic limitations of a digital computer is the size of its available memory. ::: 1 ::: In most cases, it is neither feasible nor economical for a user to insist that every problem program fit into memory. The number of words of information in a program often exceeds the number of cells (i.e., word locations) in memory. The only way to solve this problem is to assign more than one program word to a cell. Since a cell can hold only one word at a time, extra words assigned to the cell must be held in external storage. Conventionally, overlay techniques are employed to exchange memory words and external-storage words whenever needed; this, of course, places an additional planning and coding burden on the programmer. For several reasons, it would be advantageous to rid the programmer of this function by providing him with a “virtual” memory larger than his program. An approach that permits him to use a sufficiently large address range can accomplish this objective, assuming that means are provided for automatic execution of the memory-overlay functions. --- paper_title: Exploiting Core Working Sets to Filter the L1 Cache with Random Sampling paper_content: Locality is often characterized by working sets, defined by Denning as the set of distinct addresses referenced within a certain window of time. This definition ignores the fact that dramatic differences exist between the usage patterns of frequently used data and transient data. We therefore propose to extend Denning's definition with that of core working sets, which identify blocks that are used most frequently and for the longest time. 
The concept of a core motivates the design of dual-cache structures that provide special treatment for the core. In particular, we present a probabilistic locality predictor for L1 caches that leverages the skewed popularity of blocks to distinguish transient cache insertions from more persistent ones. We further present a dual L1 design that inserts only frequently used blocks into a low-latency, low-power, direct-mapped main cache, while serving others from a small fully associative filter. To reduce the prohibitive cost of such a filter, we present a content addressable memory design that eliminates most of the costly lookups using a small auxiliary lookup table. The proposed design enables a 16K direct-mapped L1 cache, augmented with a small 2K filter, to outperform a 32K 4-way cache, while at the same time consumes 70-80 percent less dynamic power and 40 percent less static power. --- paper_title: Design and performance evaluation of a cache assist to implement selective caching paper_content: Conventional cache architectures exploit locality, but do so rather blindly. By forcing all references through a single structure, the cache's effectiveness on many references is reduced. This paper presents a cache assist namely the annex cache which implements a selective caching scheme. Except for filling a main cache at cold start, all entries come to the cache via the annex cache. Items referenced only rarely will be excluded from the main cache, eliminating several conflict misses. The basic premise is that an item deserves to be in the main cache only if it can prove its right to exist in the main cache by demonstrating locality. The annex cache combines the features of Jouppi's (1990) victim caches and McFarling's (1992) cache exclusion schemes. Extensive simulation studies for annex and victim caches using a variety of SPEC programs are presented in the paper. Annex caches were observed to be significantly better than conventional caches, better than victim caches in certain cases, and comparable to victim caches in other cases. --- paper_title: Load Miss Prediction - Exploiting Power Performance Trade-offs paper_content: Modern CPUs operate at GHz frequencies, but the latencies of memory accesses are still relatively large, in the order of hundreds of cycles. Deeper cache hierarchies with larger cache sizes can mask these latencies for codes with good data locality and reuse, such as structured dense matrix computations. However, cache hierarchies do not necessarily benefit sparse scientific computing codes, which tend to have limited data locality and reuse. We therefore propose a new memory architecture with a load miss predictor (LMP), which includes a data bypass cache and a predictor table, to reduce access latencies by determining whether a load should bypass the main cache hierarchy and issue an early load to main memory. Our architecture uses the L2 (and lower caches) as a victim cache for data removed from our bypass cache. We use cycle-accurate simulations, with SimpleScalar and Wattch to show that our LMP improves the performance of sparse codes, our application domain of interest, on average by 14%, with a 13.6% increase in power. When the LMP is used with dynamic voltage and frequency scaling (DVFS), performance can be improved by 8.7% with system power savings of 7.3% and energy reduction of 17.3% at 1800 MHz relative to the base system at 2000 MHz. 
Alternatively our LMP can be used to improve the performance of SPEC benchmarks by an average of 2.9 % at the cost of 7.1 % increase in average power. --- paper_title: A novel approach to cache block reuse predictions paper_content: We introduce a novel approach to predict whether a block should be allocated in the cache or not based on past reuse behavior during its lifetime in the cache. Our evaluation of the scheme shows that the prediction accuracy is between 66% and 94% across the applications and can potentially result in a cache miss rate reduction of between 1% and 32% with an average of 12%. We also find that with a modest hardware cost - a table of around 300 bytes - we can cut the miss rate with up to 14% compared to a cache with an always-allocate strategy --- paper_title: Compiler managed micro-cache bypassing for high performance EPIC processors paper_content: Advanced microprocessors have been increasing clock rates, well beyond the Gigahertz boundary. For such high performance microprocessors, a small and fast data micro cache (ucache) is important to overall performance, and proper management of it via load bypassing has a significant performance impact. In this paper, we propose and evaluate a hardware-software collaborative technique to manage ucache bypassing for EPIC processors. The hardware supports the ucache bypassing with a flag in the load instruction format, and the compiler employs static analysis and profiling to identify loads that should bypass the ucache. The collaborative method achieves a significant improvement in performance for the SpecInt2000 benchmarks. On average, about 40%, 30%, 24%, and 22% of load references are identified to bypass 256B, 1K, 4K, and 8K sized ucaches, respectively. This reduces the ucache miss rates by 39%, 32%, 28%, and 26%. The number of pipeline stalls from loads to their uses is reduced by 13%, 9%, 6%, and 5%. Meanwhile, the L1 and L2 cache misses remain largely unchanged. For the 256B ucache, bypassing improves overall performance on average by 5%. --- paper_title: Location-aware cache management for many-core processors with deep cache hierarchy paper_content: As cache hierarchies become deeper and the number of cores on a chip increases, managing caches becomes more important for performance and energy. However, current hardware cache management policies do not always adapt optimally to the applications behavior: e.g., caches may be polluted by data structures whose locality cannot be captured by the caches, and producer-consumer communication incurs multiple round trips of coherence messages per cache line transferred. We propose load and store instructions that carry hints regarding into which cache(s) the accessed data should be placed. Our instructions allow software to convey locality information to the hardware, while incurring minimal hardware cost and not affecting correctness. Our instructions provide a 1.07x speedup and a 1.24x energy efficiency boost, on average, according to simulations on a 64-core system with private L1 and L2 caches. With a large shared L3 cache added, the benefits increase, providing 1.33x energy reduction on average. --- paper_title: Energy Savings via Dead Sub-Block Prediction paper_content: Cache memories have traditionally been designed to exploit spatial locality by fetching entire cache lines from memory upon a miss. However, recent studies have shown that often the number of sub-blocks within a line that are actually used is low. 
Furthermore, those sub-blocks that are used are accessed only a few times before becoming dead (i.e., never accessed again). This results in considerable energy waste since 1) data not needed by the processor is brought into the cache, and 2) data is kept alive in the cache longer than necessary. We propose the Dead Sub-Block Predictor (DSBP) to predict which sub-blocks of a cache line will be actually used and how many times they will be used in order to bring into the cache only those sub-blocks that are necessary, and power them off after they are touched the predicted number of times. We also use DSBP to identify dead lines (i.e., all sub-blocks off) and augment the existing replacement policy by prioritizing dead lines for eviction. Our results show a 24% energy reduction for the whole cache hierarchy when averaged over the SPEC2000, SPEC2006 and NAS-NPB benchmarks. --- paper_title: Sampling Dead Block Prediction for Last-Level Caches paper_content: Last-level caches (LLCs) are large structures with significant power requirements. They can be quite inefficient. On average, a cache block in a 2MB LRU-managed LLC is dead 86% of the time, i.e., it will not be referenced again before it is evicted. This paper introduces sampling dead block prediction, a technique that samples program counters (PCs) to determine when a cache block is likely to be dead. Rather than learning from accesses and evictions from every set in the cache, a sampling predictor keeps track of a small number of sets using partial tags. Sampling allows the predictor to use far less state than previous predictors to make predictions with superior accuracy. Dead block prediction can be used to drive a dead block replacement and bypass optimization. A sampling predictor can reduce the number of LLC misses over LRU by 11.7% for memory-intensive single-thread benchmarks and 23% for multi-core workloads. The reduction in misses yields a geometric mean speedup of 5.9% for single-thread benchmarks and a geometric mean normalized weighted speedup of 12.5% for multi-core workloads. Due to the reduced state and number of accesses, the sampling predictor consumes only 3.1% of the dynamic power and 1.2% of the leakage power of a baseline 2MB LLC, comparing favorably with more costly techniques. The sampling predictor can even be used to significantly improve a cache with a default random replacement policy. --- paper_title: Adaptive Cache Bypassing for Inclusive Last Level Caches paper_content: Cache hierarchy designs, including bypassing, replacement, and the inclusion property, have significant performance impact. Recent works on high performance caches have shown that cache bypassing is an effective technique to enhance the last level cache (LLC) performance. However, commonly used inclusive cache hierarchy cannot benefit from this technique because bypassing inherently breaks the inclusion property. This paper presents a solution to enabling cache bypassing for inclusive caches. We introduce a bypass buffer to an LLC. Bypassed cache lines skip the LLC while their tags are stored in this bypass buffer. When a tag is evicted from the bypass buffer, it invalidates the corresponding cache lines in upper level caches to ensure the inclusion property. Our key insight is that the lifetime of a bypassed line, assuming a well-designed bypassing algorithm, should be short in upper level caches and is most likely dead when its tag is evicted from the bypass buffer.
Therefore, a small bypass buffer is sufficient to maintain the inclusion property and to reap most performance benefits of bypassing. Furthermore, the bypass buffer facilitates bypassing algorithms by providing the usage information of bypassed lines. We show that a top performing cache bypassing algorithm, which is originally designed for non-inclusive caches, performs comparably for inclusive caches equipped with our bypass buffer. The usage information collected from the bypass buffer also significantly reduces the cost of hardware implementation compared to the original design. --- paper_title: Bypass and insertion algorithms for exclusive last-level caches paper_content: Inclusive last-level caches (LLCs) waste precious silicon estate due to cross-level replication of cache blocks. As the industry moves toward cache hierarchies with larger inner levels, this wasted cache space leads to bigger performance losses compared to exclusive LLCs. However, exclusive LLCs make the design of replacement policies more challenging. While in an inclusive LLC a block can gather a filtered access history, this is not possible in an exclusive design because the block is de-allocated from the LLC on a hit. As a result, the popular least-recently-used replacement policy and its approximations are rendered ineffective and proper choice of insertion ages of cache blocks becomes even more important in exclusive designs. On the other hand, it is not necessary to fill every block into an exclusive LLC. This is known as selective cache bypassing and is not possible to implement in an inclusive LLC because that would violate inclusion. This paper explores insertion and bypass algorithms for exclusive LLCs. Our detailed execution-driven simulation results show that a combination of our best insertion and bypass policies delivers an improvement of up to 61.2% and on average (geometric mean) 3.4% in terms of instructions retired per cycle (IPC) for 97 single-threaded dynamic instruction traces spanning selected SPEC 2006 and server applications, running on a 2 MB 16-way exclusive LLC compared to a baseline exclusive design in the presence of well-tuned multi-stream hardware prefetchers. The corresponding improvements in throughput for 35 4-way multi-programmed workloads running with an 8 MB 16-way shared exclusive LLC are 20.6% (maximum) and 2.5% (geometric mean). --- paper_title: Introducing hierarchy-awareness in replacement and bypass algorithms for last-level caches paper_content: The replacement policies for the last-level caches (LLCs) are usually designed based on the access information available locally at the LLC. These policies are inherently sub-optimal due to lack of information about the activities in the inner-levels of the hierarchy. This paper introduces cache hierarchy-aware replacement (CHAR) algorithms for inclusive LLCs (or L3 caches) and applies the same algorithms to implement efficient bypass techniques for exclusive LLCs in a three-level hierarchy. In a hierarchy with an inclusive LLC, these algorithms mine the L2 cache eviction stream and decide if a block evicted from the L2 cache should be made a victim candidate in the LLC based on the access pattern of the evicted block. Ours is the first proposal that explores the possibility of using a subset of L2 cache eviction hints to improve the replacement algorithms of an inclusive LLC. 
The CHAR algorithm classifies the blocks residing in the L2 cache based on their reuse patterns and dynamically estimates the reuse probability of each class of blocks to generate selective replacement hints to the LLC. Compared to the static re-reference interval prediction (SRRIP) policy, our proposal offers an average reduction of 10.9% in LLC misses and an average improvement of 3.8% in instructions retired per cycle (IPC) for twelve single-threaded applications. The corresponding reduction in LLC misses for one hundred 4-way multi-programmed workloads is 6.8% leading to an average improvement of 3.9% in through-put. Finally, our proposal achieves an 11.1% reduction in LLC misses and a 4.2% reduction in parallel execution cycles for six 8-way threaded shared memory applications compared to the SRRIP policy. In a cache hierarchy with an exclusive LLC, our CHAR proposal offers an effective algorithm for selecting the subset of blocks (clean or dirty) evicted from the L2 cache that need not be written to the LLC and can be bypassed. Compared to the TC-AGE policy (analogue of SRRIP for exclusive LLC), our best exclusive LLC proposal improves average throughput by 3.2% while saving an average of 66.6% of data transactions from the L2 cache to the on-die interconnect for one hundred 4-way multi-programmed workloads. Compared to an inclusive LLC design with an identical hierarchy, this corresponds to an average throughput improvement of 8.2% with only 17% more data write transactions originating from the L2 cache. --- paper_title: Using cache mapping to improve memory performance handheld devices paper_content: Processors such as the Intel StrongARM SA-1110 and the Intel XScale provide flexible control over the cache management to achieve better cache utilization. Programs can specify the cache mapping policy for each virtual page, i.e. mapping it to the main cache, the mini-cache, or neither. For the latter case, the page is marked as non-cacheable. In this paper, we use memory profiling to guide such page-based cache mapping. We model the cache mapping problem and prove that finding the optimal cache mapping is NP-hard. We then present a heuristic to select the mapping. Execution time measurement shows that our heuristics can improve the performance from 1% to 21% for a set of test programs. As a byproduct of performance enhancement, we also save the energy by 4% to 28%. --- paper_title: OAP: an obstruction-aware cache management policy for STT-RAM last-level caches paper_content: Emerging memory technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor designs. Among various emerging memory technologies, Spin-Torque Transfer RAM (STT-RAM) has the benefits of fast read latency, low leakage power, and high density, and therefore has been investigated as a promising candidate for last-level cache (LLC)1. One of the major disadvantages for STT-RAM is the latency and energy overhead associated with the write operations. In particular, a long-latency write operation to STT-RAM cache may obstruct other cache accesses and result in severe performance degradation. Consequently, mitigation techniques to minimize the write overhead are required in order to successfully adopt this new technology for cache design. In this paper, we propose an obstruction-aware cache management policy called OAP. OAP monitors the cache to periodically detect LLC-obstruction processes, and manage the cache accesses from different processes. 
The experimental results on a 4-core architecture with an 8MB STT-RAM L3 cache show that the performance can be improved by 14% on average and up to 42%, with a reduction of energy consumption by 64%. --- paper_title: BEAR: techniques for mitigating bandwidth bloat in gigascale DRAM caches paper_content: Die stacking memory technology can enable gigascale DRAM caches that can operate at 4x-8x higher bandwidth than commodity DRAM. Such caches can improve system performance by servicing data at a faster rate when the requested data is found in the cache, potentially increasing the memory bandwidth of the system by 4x-8x. Unfortunately, a DRAM cache uses the available memory bandwidth not only for data transfer on cache hits, but also for other secondary operations such as cache miss detection, fill on cache miss, and writeback lookup and content update on dirty evictions from the last-level on-chip cache. Ideally, we want the bandwidth consumed for such secondary operations to be negligible, and have almost all the bandwidth be available for transfer of useful data from the DRAM cache to the processor. We evaluate a 1GB DRAM cache, architected as Alloy Cache, and show that even the most bandwidth-efficient proposal for DRAM cache consumes 3.8x bandwidth compared to an idealized DRAM cache that does not consume any bandwidth for secondary operations. We also show that redesigning the DRAM cache to minimize the bandwidth consumed by secondary operations can potentially improve system performance by 22%. To that end, this paper proposes Bandwidth Efficient ARchitecture (BEAR) for DRAM caches. BEAR integrates three components, one each for reducing the bandwidth consumed by miss detection, miss fill, and writeback probes. BEAR reduces the bandwidth consumption of DRAM cache by 32%, which reduces cache hit latency by 24% and increases overall system performance by 10%. BEAR, with negligible overhead, outperforms an idealized SRAM Tag-Store design that incurs an unacceptable overhead of 64 megabytes, as well as Sector Cache designs that incur an SRAM storage overhead of 6 megabytes. --- paper_title: A Technique for Improving Lifetime of Non-Volatile Caches Using Write-Minimization paper_content: While non-volatile memories (NVMs) provide high-density and low-leakage, they also have low write-endurance. This, along with the write-variation introduced by the cache management policies, can lead to very small cache lifetime. In this paper, we propose ENLIVE, a technique for ENhancing the LIfetime of non-Volatile cachEs. Our technique uses a small SRAM (static random access memory) storage, called HotStore. ENLIVE detects frequently written blocks and transfers them to the HotStore so that they can be accessed with smaller latency and energy. This also reduces the number of writes to the NVM cache which improves its lifetime. We present microarchitectural schemes for managing the HotStore. Simulations have been performed using an x86-64 simulator and benchmarks from SPEC2006 suite. We observe that ENLIVE provides higher improvement in lifetime and better performance and energy efficiency than two state-of-the-art techniques for improving NVM cache lifetime. ENLIVE provides 8.47×, 14.67× and 15.79× improvement in lifetime for two, four and eight core systems, respectively. In addition, it works well for a range of system and algorithm parameters and incurs only small overhead.
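To make the HotStore idea in the ENLIVE entry above more concrete, the sketch below gives a minimal behavioral model, not the authors' microarchitecture: it assumes a hypothetical per-block write counter, a made-up promotion threshold (HOT_THRESHOLD), and a tiny fully associative SRAM buffer, and it only illustrates how redirecting write-hot blocks away from the NVM array reduces the number of NVM writes.

```python
from collections import OrderedDict

HOT_THRESHOLD = 4    # hypothetical write-count threshold for promotion
HOTSTORE_SIZE = 8    # hypothetical number of entries in the small SRAM HotStore

class HotStoreModel:
    """Behavioral sketch: redirect frequently written blocks into a small SRAM buffer."""

    def __init__(self):
        self.write_counts = {}         # per-block write counters (conceptually per cache line)
        self.hotstore = OrderedDict()  # block -> data in LRU order; models the SRAM HotStore
        self.nvm_writes = 0            # writes that actually reach the NVM cache array

    def write(self, block, data):
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        if block in self.hotstore:
            self.hotstore[block] = data
            self.hotstore.move_to_end(block)       # update stays in SRAM, no NVM write
        elif self.write_counts[block] >= HOT_THRESHOLD:
            if len(self.hotstore) >= HOTSTORE_SIZE:
                self.hotstore.popitem(last=False)  # demote LRU entry; a real design writes it back
                self.nvm_writes += 1
            self.hotstore[block] = data            # promote the write-hot block into SRAM
        else:
            self.nvm_writes += 1                   # cold block: the write goes to the NVM array

m = HotStoreModel()
for _ in range(100):
    m.write(0xA0, "hot")    # one block written over and over
m.write(0xB0, "cold")
print(m.nvm_writes)         # 4: only the first few writes and the cold write touch the NVM
```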
--- paper_title: WADE: Writeback-aware dynamic cache management for NVM-based main memory system paper_content: Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC). In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory. The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical. The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps the best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads.
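As a rough illustration of the frequent-writeback partitioning that WADE describes, the following sketch keeps a saturating writeback counter per block tag and uses it to decide, at insertion time, whether a block joins the frequent-writeback list or the non-frequent list of a set. The threshold and list sizes are invented for illustration only, and the runtime adaptation of the best list sizes mentioned in the abstract is not modeled here.

```python
from collections import defaultdict, deque

WB_HOT_THRESHOLD = 2     # hypothetical saturating-counter threshold
FREQ_LIST_WAYS = 4       # assumed ways reserved for frequent-writeback blocks
OTHER_LIST_WAYS = 12     # remaining ways of an assumed 16-way set

writeback_counter = defaultdict(int)   # per-tag count of observed dirty evictions

def record_writeback(tag):
    """Called when a dirty block is evicted from the LLC and written to NVM."""
    writeback_counter[tag] = min(writeback_counter[tag] + 1, 7)   # 3-bit saturating counter

def insert(cache_set, tag):
    """Place an incoming block in the frequent or non-frequent writeback list of its set."""
    hot = writeback_counter[tag] >= WB_HOT_THRESHOLD
    lst, limit = (cache_set["freq"], FREQ_LIST_WAYS) if hot else (cache_set["other"], OTHER_LIST_WAYS)
    if len(lst) >= limit:
        victim = lst.popleft()           # evict the LRU block of that list only
        if victim.get("dirty"):
            record_writeback(victim["tag"])
    lst.append({"tag": tag, "dirty": False})

cache_set = {"freq": deque(), "other": deque()}
record_writeback(0x1F00); record_writeback(0x1F00)   # tag 0x1F00 observed writing back twice
insert(cache_set, 0x1F00)                            # lands in the protected frequent list
insert(cache_set, 0x2A00)                            # cold tag goes to the normal list
print(len(cache_set["freq"]), len(cache_set["other"]))   # -> 1 1
```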
--- paper_title: SBAC: a statistics based cache bypassing method for asymmetric-access caches paper_content: Asymmetric-access caches with emerging technologies, such as STT-RAM and RRAM, have become very competitive designs recently. Since the write operations consume more time and energy than read ones, data should bypass an asymmetric-access cache unless the locality can justify the data allocation. However, the asymmetric-access property is not well addressed in prior bypassing approaches, which are not energy efficient and induce non-trivial operation overhead. To overcome these problems, we propose a cache bypassing method, SBAC, based on data locality statistics of the whole cache rather than a single cache line's signature. We observe that the decision-making of SBAC is highly accurate and the optimization technique for SBAC works efficiently for multiple applications running concurrently. Experiments show that SBAC cuts down overall energy consumption by 22.3%, and reduces execution time by 8.3%. Compared to prior approaches, the design overhead of SBAC is trivial. --- paper_title: BEAR: techniques for mitigating bandwidth bloat in gigascale DRAM caches paper_content: Die stacking memory technology can enable gigascale DRAM caches that can operate at 4x-8x higher bandwidth than commodity DRAM. Such caches can improve system performance by servicing data at a faster rate when the requested data is found in the cache, potentially increasing the memory bandwidth of the system by 4x-8x. Unfortunately, a DRAM cache uses the available memory bandwidth not only for data transfer on cache hits, but also for other secondary operations such as cache miss detection, fill on cache miss, and writeback lookup and content update on dirty evictions from the last-level on-chip cache. Ideally, we want the bandwidth consumed for such secondary operations to be negligible, and have almost all the bandwidth be available for transfer of useful data from the DRAM cache to the processor. We evaluate a 1GB DRAM cache, architected as Alloy Cache, and show that even the most bandwidth-efficient proposal for DRAM cache consumes 3.8x bandwidth compared to an idealized DRAM cache that does not consume any bandwidth for secondary operations. We also show that redesigning the DRAM cache to minimize the bandwidth consumed by secondary operations can potentially improve system performance by 22%. To that end, this paper proposes Bandwidth Efficient ARchitecture (BEAR) for DRAM caches. BEAR integrates three components, one each for reducing the bandwidth consumed by miss detection, miss fill, and writeback probes. BEAR reduces the bandwidth consumption of DRAM cache by 32%, which reduces cache hit latency by 24% and increases overall system performance by 10%. BEAR, with negligible overhead, outperforms an idealized SRAM Tag-Store design that incurs an unacceptable overhead of 64 megabytes, as well as Sector Cache designs that incur an SRAM storage overhead of 6 megabytes. --- paper_title: Adaptive placement and migration policy for an STT-RAM-based hybrid cache paper_content: Emerging Non-Volatile Memories (NVM) such as Spin-Torque Transfer RAM (STT-RAM) and Resistive RAM (RRAM) have been explored as potential alternatives for traditional SRAM-based Last-Level-Caches (LLCs) due to the benefits of higher density and lower leakage power. However, NVM technologies have long latency and high energy overhead associated with the write operations. 
Consequently, a hybrid STT-RAM and SRAM based LLC architecture has been proposed in the hope of exploiting high density and low leakage power of STT-RAM and low write overhead of SRAM. Such a hybrid cache design relies on an intelligent block placement policy that makes good use of the characteristics of both STT-RAM and SRAM technology. --- paper_title: Bypassing method for STT-RAM based inclusive last-level cache paper_content: Non-volatile memories (NVMs), such as STT-RAM and PCM, have recently become very competitive designs for last-level caches (LLCs). To avoid cache pollution caused by unnecessary write operations, many cache-bypassing methods have been introduced. Among them, SBAC (a statistics-based cache bypassing method for asymmetric-access caches) is the most recent approach for NVMs and shows the lowest cache access latency. However, SBAC only works on non-inclusive caches, so it is not practical with state-of-the-art processors that employ inclusive LLCs. To overcome this limitation, we propose a novel cache scheme, called inclusive bypass tag cache (IBTC) for NVMs. The proposed IBTC with consideration for the characteristics of NVMs is integrated into LLC to maintain coherence of data in the inclusive LLC with a bypass method and the algorithm is introduced to handle the tag information for bypassed blocks with a minimal storage overhead. Experiments show that IBTC cuts down overall energy consumption by 17.4%, and increases the cache hit rate by 5.1%. --- paper_title: DASCA: Dead Write Prediction Assisted STT-RAM Cache Architecture paper_content: Spin-Transfer Torque RAM (STT-RAM) has been considered as a promising candidate for on-chip last-level caches, replacing SRAM for better energy efficiency, smaller die footprint, and scalability. However, it also introduces several new challenges into last-level cache design that need to be overcome for feasible deployment of STT-RAM caches. Among other things, mitigating the impact of slow and energy-hungry write operations is of the utmost importance. In this paper, we propose a new mechanism to reduce write activities of STT-RAM last-level caches. The key observation is that a significant amount of data written to last-level caches is not actually re-referenced again during the lifetime of the corresponding cache blocks. Such write operations, which we call dead writes, can bypass the cache without incurring extra misses by definition. Based on this, we propose Dead Write Prediction Assisted STT-RAM Cache Architecture (DASCA), which predicts and bypasses dead writes for write energy reduction. For this purpose, we first propose a novel classification of dead writes, which is composed of dead-on-arrival fills, dead-value fills, and closing writes, as a theoretical model for redundant write elimination. On top of that, we present a dead write predictor based on a state-of-the-art dead block predictor. Evaluations show that our architecture achieves an energy reduction of 68% (62%) in last-level caches and an additional energy reduction of 10% (16%) in main memory and even improves system performance by 6% (14%) on average compared to the STT-RAM baseline in a single-core (quad-core) system. --- paper_title: WADE: Writeback-aware dynamic cache management for NVM-based main memory system paper_content: Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. 
One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC). In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory. 1 The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical. The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5p for memory-intensive single-threaded benchmarks and 10.8p for multicore workloads. It yields a geometric mean speedup of 5.1p for single-thread applications and 7.6p for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1p for single-thread applications and 7.6p for multicore workloads. --- paper_title: Managing shared last-level cache in a heterogeneous multicore processor paper_content: Heterogeneous multicore processors that integrate CPU cores and data-parallel accelerators such as GPU cores onto the same die raise several new issues for sharing various on-chip resources. The shared last-level cache (LLC) is one of the most important shared resources due to its impact on performance. Accesses to the shared LLC in heterogeneous multicore processors can be dominated by the GPU due to the significantly higher number of threads supported. Under current cache management policies, the CPU applications' share of the LLC can be significantly reduced in the presence of competing GPU applications. For cache sensitive CPU applications, a reduced share of the LLC could lead to significant performance degradation. On the contrary, GPU applications can often tolerate increased memory access latency in the presence of LLC misses when there is sufficient thread-level parallelism. In this work, we propose Heterogeneous LLC Management (HeLM), a novel shared LLC management policy that takes advantage of the GPU's tolerance for memory access latency. HeLM is able to throttle GPU LLC accesses and yield LLC space to cache sensitive CPU applications. GPU LLC access throttling is achieved by allowing GPU threads that can tolerate longer memory access latencies to bypass the LLC. The latency tolerance of a GPU application is determined by the availability of thread-level parallelism, which can be measured at runtime as the average number of threads that are available for issuing. Our heterogeneous LLC management scheme outperforms LRU policy by 12.5% and TAP-RRIP by 5.6% for a processor with 4 CPU and 4 GPU cores. 
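The central decision in HeLM, as summarized above, is to let GPU requests bypass the shared LLC whenever the GPU has enough ready warps to hide the extra latency, while CPU requests keep using the LLC. The snippet below is only a schematic of that policy; the warp-count threshold and the sampling window are assumed values, not those used in the paper.

```python
TLP_THRESHOLD = 12      # hypothetical "enough ready warps to hide memory latency"

class HeLMLikePolicy:
    """Schematic of TLP-aware LLC bypassing for a CPU+GPU shared cache."""

    def __init__(self):
        self.ready_warp_samples = []

    def sample_ready_warps(self, n):
        # Periodically record how many warps were available for issue on the GPU cores.
        self.ready_warp_samples.append(n)
        if len(self.ready_warp_samples) > 64:        # keep a sliding window of samples
            self.ready_warp_samples.pop(0)

    def avg_tlp(self):
        s = self.ready_warp_samples
        return sum(s) / len(s) if s else 0.0

    def should_bypass_llc(self, requestor):
        if requestor == "CPU":
            return False                             # CPU requests always use the LLC
        return self.avg_tlp() >= TLP_THRESHOLD       # latency-tolerant GPU traffic skips the LLC

policy = HeLMLikePolicy()
for warps in (16, 14, 15, 13):
    policy.sample_ready_warps(warps)
print(policy.should_bypass_llc("GPU"))   # True: plenty of TLP, the GPU can absorb the latency
print(policy.should_bypass_llc("CPU"))   # False: cache-sensitive CPU traffic keeps its LLC space
```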
--- paper_title: Adaptive Cache Management for Energy-Efficient GPU Computing paper_content: With the SIMT execution model, GPUs can hide memory latency through massive multithreading for many applications that have regular memory access patterns. To support applications with irregular memory access patterns, cache hierarchies have been introduced to GPU architectures to capture temporal and spatial locality and mitigate the effect of irregular accesses. However, GPU caches exhibit poor efficiency due to the mismatch of the throughput-oriented execution model and its cache hierarchy design, which limits system performance and energy-efficiency. The massive amount of memory requests generated by GPUs cause cache contention and resource congestion. Existing CPU cache management policies that are designed for multicore systems can be suboptimal when directly applied to GPU caches. We propose a specialized cache management policy for GPGPUs. The cache hierarchy is protected from contention by the bypass policy based on reuse distance. Contention and resource congestion are detected at runtime. To avoid oversaturating on-chip resources, the bypass policy is coordinated with warp throttling to dynamically control the active number of warps. We also propose a simple predictor to dynamically estimate the optimal number of active warps that can take full advantage of the cache space and on-chip resources. Experimental results show that cache efficiency is significantly improved and on-chip resources are better utilized for cache sensitive benchmarks. This results in a harmonic mean IPC improvement of 74% and 17% (maximum 661% and 44% IPC improvement), compared to the baseline GPU architecture and optimal static warp throttling, respectively. --- paper_title: Locality-Driven Dynamic GPU Cache Bypassing paper_content: This paper presents novel cache optimizations for massively parallel, throughput-oriented architectures like GPUs. L1 data caches (L1 D-caches) are critical resources for providing high-bandwidth and low-latency data accesses. However, the high number of simultaneous requests from single-instruction multiple-thread (SIMT) cores makes the limited capacity of L1 D-caches a performance and energy bottleneck, especially for memory-intensive applications. We observe that the memory access streams to L1 D-caches for many applications contain a significant amount of requests with low reuse, which greatly reduce the cache efficacy. Existing GPU cache management schemes are either based on conditional/reactive solutions or hit-rate based designs specifically developed for CPU last level caches, which can limit overall performance. To overcome these challenges, we propose an efficient locality monitoring mechanism to dynamically filter the access stream on cache insertion such that only the data with high reuse and short reuse distances are stored in the L1 D-cache. Specifically, we present a design that integrates locality filtering based on reuse characteristics of GPU workloads into the decoupled tag store of the existing L1 D-cache through simple and cost-effective hardware extensions. Results show that our proposed design can dramatically reduce cache contention and achieve up to 56.8% and an average of 30.3% performance improvement over the baseline architecture, for a range of highly-optimized cache-unfriendly applications with minor area overhead and better energy efficiency.
Our design also significantly outperforms the state-of-the-art CPU and GPU bypassing schemes (especially for irregular applications), without generating extra L2 and DRAM level contention. --- paper_title: MRPB: Memory request prioritization for massively parallel processors paper_content: Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high performance for a broad range of programs. They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches, hoping to provide smoother reductions in memory access traffic and latency. Unfortunately, GPU caches often have mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs. We propose the memory request prioritization buffer (MRPB) to ease GPU programming and improve GPU performance. This hardware structure improves caching efficiency of massively parallel workloads by applying two prioritization methods—request reordering and cache bypassing—to memory requests before they access a cache. MRPB then releases requests into the cache in a more cache-friendly order. The result is drastically reduced cache contention and improved use of the limited per-thread cache capacity. For a simulated 16KB L1 cache, MRPB improves the average performance of the entire PolyBench and Rodinia suites by 2.65× and 1.27× respectively, outperforming a state-of-the-art GPU cache management technique.
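To make the MRPB idea above concrete, here is a toy model of a prioritization buffer that groups pending memory requests into per-warp FIFOs and drains one FIFO at a time, so that requests from the same warp reach the L1 together, and that lets new requests bypass the cache instead of stalling when the buffer is full. The queue count, depth, and drain order are arbitrary choices for illustration, not the hardware configuration evaluated in the paper.

```python
from collections import deque

QUEUES = 4          # assumed number of per-warp FIFOs in the buffer
QUEUE_DEPTH = 8     # assumed depth of each FIFO

class ToyMRPB:
    """Toy request-prioritization buffer: reorder by warp, bypass when full."""

    def __init__(self):
        self.fifos = [deque() for _ in range(QUEUES)]

    def enqueue(self, warp_id, addr):
        q = self.fifos[warp_id % QUEUES]
        if len(q) >= QUEUE_DEPTH:
            return ("bypass", addr)          # buffer full: send the request around the L1
        q.append((warp_id, addr))
        return ("buffered", addr)

    def drain_one(self):
        # Release requests one FIFO at a time so same-warp accesses hit the cache back to back.
        for q in self.fifos:
            if q:
                return q.popleft()
        return None

buf = ToyMRPB()
for warp in (0, 1, 0, 1):                    # interleaved warps, as issued by the SIMT core
    buf.enqueue(warp, 0x100 * warp)
order = [buf.drain_one() for _ in range(4)]
print(order)    # warp 0's requests drain together, then warp 1's: intra-warp locality is restored
```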
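The "Adaptive Cache Management for Energy-Efficient GPU Computing" entry above couples a reuse-distance-based bypass test with warp throttling. The fragment below sketches only the bypass half under made-up parameters: a block whose last observed reuse distance exceeds what the L1 can plausibly retain is not inserted. The warp-throttling coordination and the runtime predictor are deliberately left out.

```python
L1_CAPACITY_LINES = 256   # assumed number of lines the L1 D-cache can hold

access_clock = 0
last_access = {}          # block address -> logical time of its previous access
reuse_hist = {}           # block address -> last observed reuse distance (in accesses)

def observe(addr):
    """Record the reuse distance of addr, measured in intervening accesses."""
    global access_clock
    if addr in last_access:
        reuse_hist[addr] = access_clock - last_access[addr]
    last_access[addr] = access_clock
    access_clock += 1

def should_bypass_l1(addr):
    # Bypass blocks whose observed reuse distance is too long for the L1 to retain them.
    distance = reuse_hist.get(addr)
    if distance is None:
        return False                    # unknown reuse: give the block a chance
    return distance > L1_CAPACITY_LINES

for addr in range(1000):                # a streaming pass touches 1000 distinct blocks
    observe(addr)
observe(0)                              # block 0 returns after ~1000 intervening accesses
print(should_bypass_l1(0))              # True: its reuse distance exceeds the L1 capacity
```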
--- paper_title: Coordinated static and dynamic cache bypassing for GPUs paper_content: The massive parallel architecture enables graphics processing units (GPUs) to boost performance for a wide range of applications. Initially, GPUs only employ scratchpad memory as on-chip memory. Recently, to broaden the scope of applications that can be accelerated by GPUs, GPU vendors have used caches in conjunction with scratchpad memory as on-chip memory in the new generations of GPUs. Unfortunately, GPU caches face many performance challenges that arise due to excessive thread contention for cache resource. Cache bypassing, where memory requests can selectively bypass the cache, is one solution that can help to mitigate the cache resource contention problem. In this paper, we propose coordinated static and dynamic cache bypassing to improve application performance. At compile-time, we identify the global loads that indicate strong preferences for caching or bypassing through profiling. For the rest global loads, our dynamic cache bypassing has the flexibility to cache only a fraction of threads. In CUDA programming model, the threads are divided into work units called thread blocks. Our dynamic bypassing technique modulates the ratio of thread blocks that cache or bypass at run-time. We choose to modulate at thread block level in order to avoid the memory divergence problems. Our approach combines compile-time analysis that determines the cache or bypass preferences for global loads with run-time management that adjusts the ratio of thread blocks that cache or bypass.
Our coordinated static and dynamic cache bypassing technique achieves up to 2.28X (average 1.32X) performance speedup for a variety of GPU applications. --- paper_title: Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance paper_content: In a GPU, all threads within a warp execute the same instruction in lockstep. For a memory instruction, this can lead to memory divergence: the memory requests for some threads are serviced early, while the remaining requests incur long latencies. This divergence stalls the warp, as it cannot execute the next instruction until all requests from the current instruction complete. In this work, we make three new observations. First, GPGPU warps exhibit heterogeneous memory divergence behavior at the shared cache: some warps have most of their requests hit in the cache (high cache utility), while other warps see most of their requests miss (low cache utility). Second, a warp retains the same divergence behavior for long periods of execution. Third, due to high memory level parallelism, requests going to the shared cache can incur queuing delays as large as hundreds of cycles, exacerbating the effects of memory divergence. We propose a set of techniques, collectively called Memory Divergence Correction (MeDiC), that reduce the negative performance impact of memory divergence and cache queuing. MeDiC uses warp divergence characterization to guide three components: (1) a cache bypassing mechanism that exploits the latency tolerance of low cache utility warps to both alleviate queuing delay and increase the hit rate for high cache utility warps, (2) a cache insertion policy that prevents data from high cache utility warps from being prematurely evicted, and (3) a memory controller that prioritizes the few requests received from high cache utility warps to minimize stall time. We compare MeDiC to four cache management techniques, and find that it delivers an average speedup of 21.8%, and 20.1% higher energy efficiency, over a state-of-the-art GPU cache management mechanism across 15 different GPGPU applications. --- paper_title: DaCache: Memory Divergence-Aware GPU Cache Management paper_content: The lock-step execution model of GPU requires a warp to have the data blocks for all its threads before execution. However, there is a lack of salient cache mechanisms that can recognize the need of managing GPU cache blocks at the warp level for increasing the number of warps ready for execution. In addition, warp scheduling is very important for GPU-specific cache management to reduce both intra- and inter-warp conflicts and maximize data locality. In this paper, we propose a Divergence-Aware Cache (DaCache) management that can orchestrate L1D cache management and warp scheduling together for GPGPUs. In DaCache, the insertion position of an incoming data block depends on the fetching warp's scheduling priority. Blocks of warps with lower priorities are inserted closer to the LRU position of the LRU-chain so that they have shorter lifetime in cache. This fine-grained insertion policy is extended to prioritize coherent loads over divergent loads so that coherent loads are less vulnerable to both inter- and intra-warp thrashing. DaCache also adopts a constrained replacement policy with L1D bypassing to sustain a good supply of Fully Cached Warps (FCW), along with a dynamic mechanism to adjust FCW during runtime. 
Our experiments demonstrate that DaCache achieves 40.4% performance improvement over the baseline GPU and outperforms two state-of-the-art thrashing-resistant techniques RRIP and DIP by 40% and 24.9%, respectively. --- paper_title: Orchestrating Cache Management and Memory Scheduling for GPGPU Applications paper_content: Modern graphics processing units (GPUs) are delivering tremendous computing horsepower by running tens of thousands of threads concurrently. The massively parallel execution model has been effective to hide the long latency of off-chip memory accesses in graphics and other general computing applications exhibiting regular memory behaviors. With the fast-growing demand for general purpose computing on GPUs (GPGPU), GPU workloads are becoming highly diversified, and thus requiring a synergistic coordination of both computing and memory resources to unleash the computing power of GPUs. Accordingly, recent graphics processors begin to integrate an on-die level-2 (L2) cache. The huge number of threads on GPUs, however, poses significant challenges to L2 cache design. The experiments on a variety of GPGPU applications reveal that the L2 cache may or may not improve the overall performance depending on the characteristics of applications. In this paper, we propose efficient techniques to improve GPGPU performance by orchestrating both L2 cache and memory in a unified framework. The basic philosophy is to exploit the temporal locality among the massive number of concurrent memory requests and minimize the impact of memory divergence behaviors among simultaneously executed groups of threads. Our major contributions are twofold. First, a priority-based cache management is proposed to maximize the chance of frequently revisited data to be kept in the cache. Second, an effective memory scheduling is introduced to reorder memory requests in the memory controller according to the divergence behavior for reducing average waiting time of warps. Simulation results reveal that our techniques enhance the overall performance by 10% on average for memory intensive benchmarks, whereas the maximum gain can be up to 30%. --- paper_title: Adaptive Cache and Concurrency Allocation on GPGPUs paper_content: Memory bandwidth is critical to GPGPU performance. Exploiting locality in caches can better utilize memory bandwidth. However, memory requests issued by excessive threads cause cache thrashing and saturate memory bandwidth, degrading performance. In this paper, we propose adaptive cache and concurrency allocation (CCA) to prevent cache thrashing and improve the utilization of bandwidth and computational resources, hence improving performance. According to locality and reuse distance of access patterns in a GPGPU program, warps on a stream multiprocessor are dynamically divided into three groups: cached, bypassed, and waiting. The data cache accommodates the footprint of cached warps. Bypassed warps cannot allocate cache lines in the data cache to prevent cache thrashing, but are able to take advantage of available memory bandwidth and computational resource. Waiting warps are de-scheduled. Experimental results show that adaptive CCA can significantly improve benchmark performance, with 80 percent harmonic mean IPC improvement over the baseline. --- paper_title: An Efficient Compiler Framework for Cache Bypassing on GPUs paper_content: Graphics processing units (GPUs) have become ubiquitous for general purpose applications due to their tremendous computing power. 
Initially, GPUs only employ scratchpad memory as on-chip memory. Though scratchpad memory benefits many applications, it is not ideal for those general purpose applications with irregular memory accesses. Hence, GPU vendors have introduced caches in conjunction with scratchpad memory in the recent generations of GPUs. The caches on GPUs are highly configurable. The programmer or compiler can explicitly control cache access or bypass for global load instructions. This highly configurable feature of GPU caches opens up the opportunities for optimizing the cache performance. In this paper, we propose an efficient compiler framework for cache bypassing on GPUs. Our objective is to efficiently utilize the configurable cache and improve the overall performance for general purpose GPU applications. In order to achieve this goal, we first characterize GPU cache utilization and develop performance metrics to estimate the cache reuses and memory traffic. Next, we present efficient algorithms that judiciously select global load instructions for cache access or bypass. Finally, we present techniques to explore the unified cache and shared memory design space. We integrate our techniques into an automatic compiler framework that leverages parallel thread execution instruction set architecture to enable cache bypassing for GPUs. Experimental evaluation on NVIDIA GTX680 using a variety of applications demonstrates that compared to cache-all and bypass-all solutions, our techniques improve the performance from 4.6% to 13.1% for 16 KB L1 cache. --- paper_title: Adaptive and transparent cache bypassing for GPUs paper_content: In the last decade, GPUs have emerged to be widely adopted for general-purpose applications. To capture on-chip locality for these applications, modern GPUs have integrated a multilevel cache hierarchy, in an attempt to reduce the amount and latency of the massive and sometimes irregular memory accesses. However, inferior performance is frequently attained due to serious congestion in the caches resulting from the huge number of concurrent threads. In this paper, we propose a novel compile-time framework for adaptive and transparent cache bypassing on GPUs. It uses a simple yet effective approach to control the bypass degree to match the size of applications' runtime footprints. We validate the design on seven GPU platforms that cover all existing GPU generations using 16 applications from widely used GPU benchmarks. Experiments show that our design can significantly mitigate the negative impact due to small cache sizes and improve the overall performance. We analyze the performance across different platforms and applications. We also propose some optimization guidelines on how to efficiently use the GPU caches. --- paper_title: Locality-Driven Dynamic GPU Cache Bypassing paper_content: This paper presents novel cache optimizations for massively parallel, throughput-oriented architectures like GPUs. L1 data caches (L1 D-caches) are critical resources for providing high-bandwidth and low-latency data accesses. However, the high number of simultaneous requests from single-instruction multiple-thread (SIMT) cores makes the limited capacity of L1 D-caches a performance and energy bottleneck, especially for memory-intensive applications. We observe that the memory access streams to L1 D-caches for many applications contain a significant amount of requests with low reuse, which greatly reduce the cache efficacy. 
Existing GPU cache management schemes are either based on conditional/reactive solutions or hit-rate based designs specifically developed for CPU last level caches, which can limit overall performance. To overcome these challenges, we propose an efficient locality monitoring mechanism to dynamically filter the access stream on cache insertion such that only the data with high reuse and short reuse distances are stored in the L1 D-cache. Specifically, we present a design that integrates locality filtering based on reuse characteristics of GPU workloads into the decoupled tag store of the existing L1 D-cache through simple and cost-effective hardware extensions. Results show that our proposed design can dramatically reduce cache contention and achieve up to 56.8% and an average of 30.3% performance improvement over the baseline architecture, for a range of highly-optimized cache-unfriendly applications with minor area overhead and better energy efficiency. Our design also significantly outperforms the state-of-the-art CPU and GPU bypassing schemes (especially for irregular applications), without generating extra L2 and DRAM level contention. --- paper_title: Real-Time GPU Computing: Cache or No Cache? paper_content: Recent Graphics Processing Units (GPUs) have employed cache memories to boost performance. However, cache memories are well known to be harmful to time predictability for CPUs. For high-performance real-time systems using GPUs, it remains unknown whether or not cache memories should be employed. In this paper, we quantitatively compare the performance for GPUs with and without caches, and find that GPUs without the cache actually lead to better average-case performance, with higher time predictability. However, we also study a profiling-based cache bypassing method, which can use the L1 data cache more efficiently to achieve better average-case performance than that without the cache. Therefore, it seems still beneficial to employ caches for real-time computing on GPUs. --- paper_title: Reducing off-chip memory traffic by selective cache management scheme in GPGPUs paper_content: The performance of General Purpose Graphics Processing Units (GPGPUs) is frequently limited by the off-chip memory bandwidth. To mitigate this bandwidth wall problem, recent GPUs are equipped with on-chip L1 and L2 caches. However, there has been little work for better utilizing on-chip shared caches in GPGPUs. In this paper, we propose two cache management schemes: write-buffering and read-bypassing. The write buffering technique tries to utilize the shared cache for inter-block communication, and thereby reduces the DRAM accesses as much as the capacity of the cache. The read-bypassing scheme prevents the shared cache from being polluted by streamed data that are consumed only within a thread-block. The proposed schemes can be selectively applied to global memory instructions using newly defined cache operators. We evaluate the effects of the proposed schemes for a few GPGPU applications by simulations. We have shown that the off-chip memory accesses can be successfully reduced by the proposed techniques. We also analyze the effectiveness of these methods when the throughput gap between cores and off-chip memory becomes wider. --- paper_title: Adaptive GPU cache bypassing paper_content: Modern graphics processing units (GPUs) include hardware-controlled caches to reduce bandwidth requirements and energy consumption. 
However, current GPU cache hierarchies are inefficient for general purpose GPU (GPGPU) computing. GPGPU workloads tend to include data structures that would not fit in any reasonably sized caches, leading to very low cache hit rates. This problem is exacerbated by the design of current GPUs, which share small caches between many threads. Caching these streaming data structures needlessly burns power while evicting data that may otherwise fit into the cache. We propose a GPU cache management technique to improve the efficiency of small GPU caches while further reducing their power consumption. It adaptively bypasses the GPU cache for blocks that are unlikely to be referenced again before being evicted. This technique saves energy by avoiding needless insertions and evictions while avoiding cache pollution, resulting in better performance. We show that, with a 16KB L1 data cache, dynamic bypassing achieves similar performance to a double-sized L1 cache while reducing energy consumption by 25% and power by 18%. The technique is especially interesting for programs that do not use programmer-managed scratchpad memories. We give a case study to demonstrate the inefficiency of current GPU caches compared to programmer-managed scratchpad memories and show the extent to which cache bypassing can make up for the potential performance loss where the effort to program scratchpad memories is impractical. --- paper_title: Efficient utilization of GPGPU cache hierarchy paper_content: Recent GPUs are equipped with general-purpose L1 and L2 caches in an attempt to reduce memory bandwidth demand and improve the performance of some irregular GPGPU applications. However, due to the massive multithreading, GPGPU caches suffer from severe resource contention and low data-sharing which may degrade the performance instead. In this work, we propose three techniques to efficiently utilize and improve the performance of GPGPU caches. The first technique aims to dynamically detect and bypass memory accesses that show streaming behavior. In the second technique, we propose dynamic warp throttling via cores sampling (DWT-CS) to alleviate cache thrashing by throttling the number of active warps per core. DWT-CS monitors the MPKI at L1; when it exceeds a specific threshold, all GPU cores are sampled with different number of active warps to find the optimal number of warps that mitigates thrashing and achieves the highest performance. Our proposed third technique addresses the problem of GPU cache associativity since many GPGPU applications suffer from severe associativity stalls and conflict misses. Prior work proposed cache bypassing on associativity stalls. In this work, instead of bypassing, we employ a better cache indexing function, Pseudo Random Interleaving Cache (PRIC), that is based on polynomial modulus mapping, in order to fairly and evenly distribute memory accesses over cache sets. The proposed techniques improve the average performance of streaming and contention applications by 1.2X and 2.3X respectively. Compared to prior work, it achieves 1.7X and 1.5X performance improvement over Cache-Conscious Wavefront Scheduler and Memory Request Prioritization Buffer respectively. --- paper_title: MRPB: Memory request prioritization for massively parallel processors paper_content: Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high performance for a broad range of programs. 
They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches, hoping to provide smoother reductions in memory access traffic and latency. Unfortunately, GPU caches often have mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs. We propose the memory request prioritization buffer (MRPB) to ease GPU programming and improve GPU performance. This hardware structure improves caching efficiency of massively parallel workloads by applying two prioritization methods—request reordering and cache bypassing—to memory requests before they access a cache. MRPB then releases requests into the cache in a more cache-friendly order. The result is drastically reduced cache contention and improved use of the limited per-thread cache capacity. For a simulated 16KB L1 cache, MRPB improves the average performance of the entire PolyBench and Rodinia suites by 2.65× and 1.27× respectively, outperforming a state-of-the-art GPU cache management technique. --- paper_title: Adaptive Cache Management for Energy-Efficient GPU Computing paper_content: With the SIMT execution model, GPUs can hide memory latency through massive multithreading for many applications that have regular memory access patterns. To support applications with irregular memory access patterns, cache hierarchies have been introduced to GPU architectures to capture temporal and spatial locality and mitigate the effect of irregular accesses. However, GPU caches exhibit poor efficiency due to the mismatch of the throughput-oriented execution model and its cache hierarchy design, which limits system performance and energy-efficiency. The massive amount of memory requests generated by GPUs cause cache contention and resource congestion. Existing CPU cache management policies that are designed for multicore systems can be suboptimal when directly applied to GPU caches. We propose a specialized cache management policy for GPGPUs. The cache hierarchy is protected from contention by the bypass policy based on reuse distance. Contention and resource congestion are detected at runtime. To avoid oversaturating on-chip resources, the bypass policy is coordinated with warp throttling to dynamically control the active number of warps. We also propose a simple predictor to dynamically estimate the optimal number of active warps that can take full advantage of the cache space and on-chip resources. Experimental results show that cache efficiency is significantly improved and on-chip resources are better utilized for cache sensitive benchmarks. This results in a harmonic mean IPC improvement of 74% and 17% (maximum 661% and 44% IPC improvement), compared to the baseline GPU architecture and optimal static warp throttling, respectively. --- paper_title: Improving Cache Management Policies Using Dynamic Reuse Distances paper_content: Cache management policies such as replacement, bypass, or shared cache partitioning have been relying on data reuse behavior to predict the future. This paper proposes a new way to use dynamic reuse distances to further improve such policies. A new replacement policy is proposed which prevents replacing a cache line until a certain number of accesses to its cache set, called a Protecting Distance (PD). The policy protects a cache line long enough for it to be reused, but not beyond that to avoid cache pollution. 
This can be combined with a bypass mechanism that also relies on dynamic reuse analysis to bypass lines with less expected reuse. A miss fetch is bypassed if there are no unprotected lines. A hit rate model based on dynamic reuse history is proposed and the PD that maximizes the hit rate is dynamically computed. The PD is recomputed periodically to track a program's memory access behavior and phases. Next, a new multi-core cache partitioning policy is proposed using the concept of protection. It manages lifetimes of lines from different cores (threads) in such a way that the overall hit rate is maximized. The average per-thread lifetime is reduced by decreasing the thread's PD. The single-core PD-based replacement policy with bypass achieves an average speedup of 4.2% over the DIP policy, while the average speedups over DIP are 1.5% for dynamic RRIP (DRRIP) and 1.6% for sampling dead-block prediction (SDP). The 16-core PD-based partitioning policy improves the average weighted IPC by 5.2%, throughput by 6.4% and fairness by 9.9% over thread-aware DRRIP (TA-DRRIP). The required hardware is evaluated and the overhead is shown to be manageable. --- paper_title: Real-Time GPU Computing: Cache or No Cache? paper_content: Recent Graphics Processing Units (GPUs) have employed cache memories to boost performance. However, cache memories are well known to be harmful to time predictability for CPUs. For high-performance real-time systems using GPUs, it remains unknown whether or not cache memories should be employed. In this paper, we quantitatively compare the performance for GPUs with and without caches, and find that GPUs without the cache actually lead to better average-case performance, with higher time predictability. However, we also study a profiling-based cache bypassing method, which can use the L1 data cache more efficiently to achieve better average-case performance than that without the cache. Therefore, it seems still beneficial to employ caches for real-time computing on GPUs. --- paper_title: Coordinated static and dynamic cache bypassing for GPUs paper_content: The massive parallel architecture enables graphics processing units (GPUs) to boost performance for a wide range of applications. Initially, GPUs only employ scratchpad memory as on-chip memory. Recently, to broaden the scope of applications that can be accelerated by GPUs, GPU vendors have used caches in conjunction with scratchpad memory as on-chip memory in the new generations of GPUs. Unfortunately, GPU caches face many performance challenges that arise due to excessive thread contention for cache resource. Cache bypassing, where memory requests can selectively bypass the cache, is one solution that can help to mitigate the cache resource contention problem. In this paper, we propose coordinated static and dynamic cache bypassing to improve application performance. At compile-time, we identify the global loads that indicate strong preferences for caching or bypassing through profiling. For the rest global loads, our dynamic cache bypassing has the flexibility to cache only a fraction of threads. In CUDA programming model, the threads are divided into work units called thread blocks. Our dynamic bypassing technique modulates the ratio of thread blocks that cache or bypass at run-time. We choose to modulate at thread block level in order to avoid the memory divergence problems. 
Our approach combines compile-time analysis that determines the cache or bypass preferences for global loads with run-time management that adjusts the ratio of thread blocks that cache or bypass. Our coordinated static and dynamic cache bypassing technique achieves up to 2.28X (average 1.32X) performance speedup for a variety of GPU applications. --- paper_title: Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance paper_content: In a GPU, all threads within a warp execute the same instruction in lockstep. For a memory instruction, this can lead to memory divergence: the memory requests for some threads are serviced early, while the remaining requests incur long latencies. This divergence stalls the warp, as it cannot execute the next instruction until all requests from the current instruction complete. In this work, we make three new observations. First, GPGPU warps exhibit heterogeneous memory divergence behavior at the shared cache: some warps have most of their requests hit in the cache (high cache utility), while other warps see most of their requests miss (low cache utility). Second, a warp retains the same divergence behavior for long periods of execution. Third, due to high memory level parallelism, requests going to the shared cache can incur queuing delays as large as hundreds of cycles, exacerbating the effects of memory divergence. We propose a set of techniques, collectively called Memory Divergence Correction (MeDiC), that reduce the negative performance impact of memory divergence and cache queuing. MeDiC uses warp divergence characterization to guide three components: (1) a cache bypassing mechanism that exploits the latency tolerance of low cache utility warps to both alleviate queuing delay and increase the hit rate for high cache utility warps, (2) a cache insertion policy that prevents data from high cache utility warps from being prematurely evicted, and (3) a memory controller that prioritizes the few requests received from high cache utility warps to minimize stall time. We compare MeDiC to four cache management techniques, and find that it delivers an average speedup of 21.8%, and 20.1% higher energy efficiency, over a state-of-the-art GPU cache management mechanism across 15 different GPGPU applications. --- paper_title: RACB: Resource Aware Cache Bypass on GPUs paper_content: Caches are universally used in computing systems to hide long off-chip memory access latencies. Unlike CPUs, massive threads running simultaneously on GPUs bring a tremendous pressure on memory hierarchy. As a result, the limitation of cache resources becomes a bottleneck for a GPU to exploit thread-level parallelism (TLP) and memory-level parallelism (MLP) and achieve high performance. In this paper, we propose a mechanism to bypass L1D and L2 cache based on the availability of cache resources. Our proposed mechanism is based on the observation that a huge number of stalls coming from limited cache resources prohibit GPUs from providing a higher throughput. So we propose Resource Aware Cache Bypass (RACB) with minor hardware changes to eliminate such stalls to improve performance. We examine the effectiveness of this approach when applied to L1D and L2 cache separately as well as together. Evaluation results with NVIDIA Computing SDK show that RACB generally improves performance the most when applied to both L1D and L2 cache, which is up to 88.05% and on an average of 16.73%; additionally, energy is saved up to 22.35% and on an average of 5.88% with minor hardware overheads. 
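As a rough illustration of the resource-aware bypass idea in the RACB abstract above, the sketch below bypasses a miss to the next level (and ultimately to DRAM) whenever the modeled cache level has no free MSHR or allocatable line, instead of stalling. The structure names, resource counts, and the `route_request` helper are assumptions made for this example, not RACB's hardware design.

```python
# Illustrative sketch of a resource-aware bypass decision in the spirit of the
# RACB abstract above: bypass a level when its cache resources (MSHRs,
# allocatable lines) are exhausted, rather than stalling the request.
# Resource counts and field names are assumptions made for this example.

class CacheLevel:
    def __init__(self, name, num_mshrs, num_lines):
        self.name = name
        self.free_mshrs = num_mshrs       # outstanding-miss tracking entries
        self.free_lines = num_lines       # lines currently available for allocation

    def has_resources(self):
        return self.free_mshrs > 0 and self.free_lines > 0

    def reserve(self):
        self.free_mshrs -= 1
        self.free_lines -= 1


def route_request(addr, l1d, l2):
    """Return which level services the miss, bypassing levels that would stall."""
    for level in (l1d, l2):
        if level.has_resources():
            level.reserve()
            return f"addr {addr:#x}: allocated in {level.name}"
    return f"addr {addr:#x}: bypassed both levels, sent directly to DRAM"


if __name__ == "__main__":
    l1d = CacheLevel("L1D", num_mshrs=2, num_lines=2)
    l2 = CacheLevel("L2", num_mshrs=4, num_lines=4)
    for addr in (0x100, 0x140, 0x180, 0x1c0, 0x200, 0x240, 0x280):
        print(route_request(addr, l1d, l2))
```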
--- paper_title: DaCache: Memory Divergence-Aware GPU Cache Management paper_content: The lock-step execution model of GPU requires a warp to have the data blocks for all its threads before execution. However, there is a lack of salient cache mechanisms that can recognize the need of managing GPU cache blocks at the warp level for increasing the number of warps ready for execution. In addition, warp scheduling is very important for GPU-specific cache management to reduce both intra- and inter-warp conflicts and maximize data locality. In this paper, we propose a Divergence-Aware Cache (DaCache) management that can orchestrate L1D cache management and warp scheduling together for GPGPUs. In DaCache, the insertion position of an incoming data block depends on the fetching warp's scheduling priority. Blocks of warps with lower priorities are inserted closer to the LRU position of the LRU-chain so that they have shorter lifetime in cache. This fine-grained insertion policy is extended to prioritize coherent loads over divergent loads so that coherent loads are less vulnerable to both inter- and intra-warp thrashing. DaCache also adopts a constrained replacement policy with L1D bypassing to sustain a good supply of Fully Cached Warps (FCW), along with a dynamic mechanism to adjust FCW during runtime. Our experiments demonstrate that DaCache achieves 40.4% performance improvement over the baseline GPU and outperforms two state-of-the-art thrashing-resistant techniques RRIP and DIP by 40% and 24.9%, respectively. --- paper_title: Managing shared last-level cache in a heterogeneous multicore processor paper_content: Heterogeneous multicore processors that integrate CPU cores and data-parallel accelerators such as GPU cores onto the same die raise several new issues for sharing various on-chip resources. The shared last-level cache (LLC) is one of the most important shared resources due to its impact on performance. Accesses to the shared LLC in heterogeneous multicore processors can be dominated by the GPU due to the significantly higher number of threads supported. Under current cache management policies, the CPU applications' share of the LLC can be significantly reduced in the presence of competing GPU applications. For cache sensitive CPU applications, a reduced share of the LLC could lead to significant performance degradation. On the contrary, GPU applications can often tolerate increased memory access latency in the presence of LLC misses when there is sufficient thread-level parallelism. In this work, we propose Heterogeneous LLC Management (HeLM), a novel shared LLC management policy that takes advantage of the GPU's tolerance for memory access latency. HeLM is able to throttle GPU LLC accesses and yield LLC space to cache sensitive CPU applications. GPU LLC access throttling is achieved by allowing GPU threads that can tolerate longer memory access latencies to bypass the LLC. The latency tolerance of a GPU application is determined by the availability of thread-level parallelism, which can be measured at runtime as the average number of threads that are available for issuing. Our heterogeneous LLC management scheme outperforms LRU policy by 12.5% and TAP-RRIP by 5.6% for a processor with 4 CPU and 4 GPU cores. 
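The latency-tolerance test at the heart of the HeLM abstract above can be sketched as a simple runtime check: if the GPU currently has enough schedulable warps to hide a miss, its requests bypass the shared LLC and leave that space to cache-sensitive CPU applications. The warp-count threshold and the sampling interface below are illustrative assumptions, not the paper's mechanism.

```python
# Minimal sketch of TLP-driven LLC bypassing in the spirit of the HeLM abstract
# above. The warp-count threshold and the sampling function are assumptions
# introduced for illustration.

from statistics import mean

def average_ready_warps(samples):
    """Average number of warps available for issue, measured over recent cycles."""
    return mean(samples) if samples else 0.0

def gpu_request_bypasses_llc(ready_warp_samples, tlp_threshold=12):
    """GPU requests bypass the shared LLC when measured TLP can hide the miss latency."""
    return average_ready_warps(ready_warp_samples) >= tlp_threshold

if __name__ == "__main__":
    high_tlp = [14, 16, 13, 15]   # plenty of warps ready: misses are tolerable
    low_tlp = [3, 2, 4, 3]        # few warps ready: the GPU benefits from the LLC
    print(gpu_request_bypasses_llc(high_tlp))  # True  -> yield LLC space to the CPU
    print(gpu_request_bypasses_llc(low_tlp))   # False -> let GPU requests use the LLC
```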
--- paper_title: Full system simulation framework for integrated CPU/GPU architecture paper_content: The integrated CPU/GPU architecture brings performance advantage since the communication cost between the CPU and GPU is reduced, and also imposes new challenges in processor architecture design, especially in the management of shared memory resources, e.g., the last-level cache and memory bandwidth. Therefore, a micro-architecture level simulator is essential to facilitate researches in this direction. In this paper, we develop the first cycle-level full-system simulation framework for CPU-GPU integration with detailed memory models. With the simulation framework, we analyze the communication cost between the CPU and GPU for GPU workloads, and perform memory system characterization running both applications concurrently. --- paper_title: A Survey of Architectural Techniques for Managing Process Variation paper_content: Process variation—deviation in parameters from their nominal specifications—threatens to slow down and even pause technological scaling, and mitigation of it is the way to continue the benefits of chip miniaturization. In this article, we present a survey of architectural techniques for managing process variation (PV) in modern processors. We also classify these techniques based on several important parameters to bring out their similarities and differences. The aim of this article is to provide insights to researchers into the state of the art in PV management techniques and motivate them to further improve these techniques for designing PV-resilient processors of tomorrow. --- paper_title: A Survey of Recent Prefetching Techniques for Processor Caches paper_content: As the trends of process scaling make memory systems an even more crucial bottleneck, the importance of latency hiding techniques such as prefetching grows further. However, naively using prefetching can harm performance and energy efficiency and, hence, several factors and parameters need to be taken into account to fully realize its potential. In this article, we survey several recent techniques that aim to improve the implementation and effectiveness of prefetching. We characterize the techniques on several parameters to highlight their similarities and differences. The aim of this survey is to provide insights to researchers into working of prefetching techniques and spark interesting future work for improving the performance advantages of prefetching even further. --- paper_title: A Survey of CPU-GPU Heterogeneous Computing Techniques paper_content: As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these Processing Units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this article, we survey Heterogeneous Computing Techniques (HCTs) such as workload partitioning that enable utilizing both CPUs and GPUs to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating Heterogeneous Computing Systems (HCSs). 
We believe that this article will provide insights into the workings and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance. --- paper_title: A survey of architectural techniques for improving cache power efficiency paper_content: Modern processors are using increasingly larger sized on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. For providing an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches. ---
Title: A Survey of Cache Bypassing Techniques Section 1: Introduction Description 1: Provide an overview of the increasing demands on processor performance and cache utilization, and introduce the concept of cache bypassing as a solution. Section 2: Background and Motivation Description 2: Discuss relevant concepts, terminologies, and support for cache bypassing in commercial processors. Highlight the promises and challenges associated with cache bypassing. Section 3: Key Ideas and Classification of CBTs Description 3: Summarize the main ideas utilized by various cache bypassing techniques (CBTs) and classify the CBTs based on key parameters to highlight their features and objectives. Section 4: Working Strategies of CBTs for CPUs Description 4: Explore several techniques by organizing CBTs into groups based on reuse count, reuse distance, cache miss behavior, probabilistic approaches, and cache hierarchy reorganization or bypass buffer utilization. Section 5: CBTs Based on Reuse-Count Description 5: Detail techniques that utilize counter-based and predictor-based approaches for making cache bypassing decisions based on reuse count. Section 6: CBTs Based on Reuse-Distance Description 6: Discuss techniques that use reuse distance for optimizing replacement and performing bypass to avoid cache pollution. Section 7: CBTs Based on Cache Miss Behavior Description 7: Describe methods that classify and handle cache misses to make bypassing decisions and improve cache efficiency. Section 8: Probabilistic CBTs Description 8: Outline techniques that use probabilistic and random bypassing approaches to reduce cache overheads and optimize performance. Section 9: CBTs for Inclusive and Exclusive Cache Hierarchy Description 9: Investigate cache bypassing strategies tailored for inclusive and exclusive cache hierarchies, and evaluate them using different platforms, including real processors and analytical models. Section 10: CBTs for Specific Memory Technologies Description 10: Review CBTs designed for emerging memory technologies such as NVMs and DRAM and their benefits in specific contexts. Section 11: CBTs for GPUs and CPU-GPU Heterogeneous Systems Description 11: Explore CBTs tailored for GPUs and heterogeneous systems, detailing techniques based on reuse characteristics, memory divergence properties, and efficient cache and resource management. Section 12: Future Challenges and Conclusions Description 12: Discuss future challenges in designing CBTs for heterogeneous systems, integrating CBTs with other cache management techniques, and exploring new research areas such as fault tolerance and approximate computing.
A review of opinion mining methods for analyzing citizens' contributions in public policy debate
12
--- paper_title: Mapping eParticipation Research: Four Central Challenges paper_content: The emerging research area of eParticipation can be characterized as the study of technology-facilitated citizen participation in (democratic) deliberation and decision-making. Using conventional literature study techniques, we identify 105 articles that are considered to be highly relevant to eParticipation. We develop a definitional schema that suggests different ways of understanding an emerging socio-technical research area and use this schema to map the research contributions identified. This allows us to make an initial sketch of the scientific character of the area and its central concerns, theories, and methods. We extend the analysis to define four central research challenges for the field: understanding technology and participation; the strategic challenge; the design challenge; and the evaluation challenge. This article thus contributes to a developing account of eParticipation, which will help future researchers both to navigate the research area and to focus their research agendas. --- paper_title: Towards a systematic exploitation of web 2.0 and simulation modeling tools in public policy process paper_content: This paper describes a methodology for the systematic exploitation of the emerging web 2.0 social media by government organizations in the processes of public policies formulation, aiming to enhance e-participation, in combination with established simulation modeling techniques and tools. It is based on the concept of 'Policy Gadget' (Padget), which is a micro web application combining a policy message with underlying group knowledge in social media (in the form of content and user activities) and interacting with citizens in popular web 2.0 locations in order to get and convey their input to policy makers. Such 'Padgets' are created by a central platform-toolset and then deployed in many different Web 2.0 media. Citizens' input from them will be used in various simulation modeling techniques and tools (such as the 'Systems Dynamics'), which are going to simulate different policy options and estimate their outcomes and effectiveness. A use case scenario of the proposed methodology is presented, which outlines how it can be used in 'real life' public policy design problems. --- paper_title: The Transformational Effect of Web 2.0 Technologies on Government paper_content: Web 2.0 technologies are now being deployed in government settings. For example, public agencies have used blogs to communicate information on public hearings, wikis and RSS feeds to coordinate work, and wikis to internally share expertise, and intelligence information. The potential for Web 2.0 tools creates a public sector paradox. On the one hand, they have the potential to create real transformative opportunities related to key public sector issues of transparency, accountability, communication and collaboration, and to promote deeper levels of civic engagement. On the other hand, information flow within government, across government agencies and between government and the public is often highly restricted through regulations, specific reporting structures and therefore usually delayed through the filter of the bureaucratic constraints. What the emergent application and popularity of Web 2.0 tools show is that there is an apparent need within government to create, distribute and collect information outside the given hierarchical information flow. 
Clearly, these most recent Internet technologies are creating dramatic changes in the way people at a peer-to-peer production level communicate and collaborate over the Internet. And these have potentially transformative implications for the way public sector organizations do work and communicate with each other and with citizens. But they also create potential difficulties and challenges that have their roots in the institutional contexts these technologies are or will be deployed within. In other words, it is not the technology that hinders us from transformation and innovation – it is the organizational and institutional hurdles that need to be overcome. This paper provides an overview of the transformative organizational, technological and informational challenges ahead. --- paper_title: eParticipation: The Research Gaps paper_content: eParticipation is a challenging research domain comprising a large number of academic disciplines and existing in a complex social and political environment. In this paper we identify eParticipation research needs and barriers and in so doing indicate future research direction. We do this by first setting the context for eParticipation research. We then consider the current situation and analyse the challenges facing future research. The future research direction was identified through conducting workshops and analysing published papers. The results are six main research challenges: breadth of research field; research design; technology design; institutional resistance; equity, and theory. These six challenges are described in detail along with the research direction to address them. --- paper_title: An investigation of the use of structured e-forum for enhancing e-participation in parliaments paper_content: The e-participation research has investigated and suggested some Information and Communication Technologies (ICTs) tools such as e-forum, e-petition and e-community tools. This paper investigates the use of an advanced and more structured ICT tool, the 'structured e-forum', for supporting and enhancing e-participation and e-consultation in the legislation formation process in Parliaments. For this purpose, we designed and implemented an e-consultation pilot on a law under formation in the Greek Parliament, using a structured e-forum tool based on the Issue-Based Information Systems (IBISs) framework. This pilot has been evaluated using multiple methods (analysis of discussion tree, quantitative evaluation and qualitative evaluation). --- paper_title: Web data mining: exploring hyperlinks, contents, and usage data paper_content: This paper presents a review of the book "Web Data Mining - Exploring Hyperlinks, Contents, and Usage Data" by Bing Liu. The review concludes that the breadth and depth of this book makes it a required staple for every Web mining researcher, student, or practitioner. --- paper_title: Opinion mining and sentiment analysis paper_content: An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. 
The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided. --- paper_title: Opinion observer: analyzing and comparing opinions on the Web paper_content: The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he/she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperforms existing methods significantly. --- paper_title: Mining the peanut gallery: opinion extraction and semantic classification of product reviews paper_content: The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. 
But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful. --- paper_title: Sentiment Classification On Customer Feedback Data: Noisy Data, Large Feature Vectors, And The Role Of Linguistic Analysis paper_content: We demonstrate that it is possible to perform automatic sentiment classification in the very noisy domain of customer feedback data. We show that by using large feature vectors in combination with feature reduction, we can train linear support vector machines that achieve high classification accuracy on data that present classification challenges even for a human annotator. We also show that, surprisingly, the addition of deep linguistic analysis features to a set of surface level word n-gram features contributes consistently to classification accuracy in this domain. --- paper_title: Pulse: Mining Customer Opinions from Free Text paper_content: We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface. --- paper_title: Joint Extraction Of Entities And Relations For Opinion Recognition paper_content: We present an approach for the joint extraction of entities and relations in the context of opinion recognition and analysis. We identify two types of opinion-related entities --- expressions of opinions and sources of opinions --- along with the linking relation that exists between them. Inspired by Roth and Yih (2004), we employ an integer linear programming approach to solve the joint opinion recognition task, and show that global, constraint-based inference can significantly boost the performance of both relation extraction and the extraction of opinion-related entities. Performance further improves when a semantic role labeling system is incorporated. The resulting system achieves F-measures of 79 and 69 for entity and relation extraction, respectively, improving substantially over prior results in the area. --- paper_title: Yahoo! for amazon: Sentiment extraction from small talk on the web paper_content: Extracting sentiment from text is a hard semantic problem. We develop a methodology for extracting small investor sentiment from stock message boards. The algorithm comprises different classifier algorithms coupled together by a voting scheme. Accuracy levels are similar to widely used Bayes classifiers, but false positives are lower and sentiment accuracy higher. Time series and cross-sectional aggregation of message information improves the quality of the resultant sentiment index, particularly in the presence of slang and ambiguity. Empirical applications evidence a relationship with stock values---tech-sector postings are related to stock index levels, and to volumes and volatility. The algorithms may be used to assess the impact on investor opinion of management announcements, press releases, third-party news, and regulatory changes. 
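The classifier-voting idea mentioned in the "Yahoo! for amazon" abstract above (several classifier algorithms coupled together by a voting scheme) can be sketched with a few toy scorers and a majority vote. The word lists and the three individual classifiers below are placeholders invented for this example and are not the components used in that work.

```python
# Toy sketch of a voting ensemble for message-level sentiment, in the spirit of
# the voting scheme mentioned in the abstract above. The word lists and the
# three classifiers are illustrative placeholders only.

POSITIVE = {"bullish", "buy", "up", "gain", "strong", "good"}
NEGATIVE = {"bearish", "sell", "down", "loss", "weak", "bad"}

def tokenize(text):
    return [t.strip(".,!?;:") for t in text.lower().split()]

def lexicon_classifier(tokens):
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def exclamation_classifier(text):
    # Crude heuristic standing in for a second, independent classifier.
    return "positive" if "!" in text else "neutral"

def negation_aware_classifier(tokens):
    score, flipped = 0, False
    for t in tokens:
        if t in {"not", "no", "never"}:
            flipped = True
            continue
        if t in POSITIVE:
            score += -1 if flipped else 1
        elif t in NEGATIVE:
            score += 1 if flipped else -1
        flipped = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def vote(text):
    tokens = tokenize(text)
    labels = [lexicon_classifier(tokens),
              exclamation_classifier(text),
              negation_aware_classifier(tokens)]
    # Majority vote; without a majority the aggregator stays neutral.
    for label in ("positive", "negative"):
        if labels.count(label) >= 2:
            return label
    return "neutral"

if __name__ == "__main__":
    print(vote("Strong earnings, I am bullish on this stock!"))   # -> positive
    print(vote("bad quarter and a big loss, very bearish"))       # -> negative
```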
--- paper_title: Customizing Sentiment Classifiers to New Domains: a Case Study paper_content: Sentiment classification is a very domain-specific problem; classifiers trained in one domain do not perform well in others. Unfortunately, many domains are lacking in large amounts of labeled data for fully-supervised learning approaches. At the same time, sentiment classifiers need to be customizable to new domains in order to be useful in practice. We attempt to address these difficulties and constraints in this paper, where we survey four different approaches to customizing a sentiment classification system to a new target domain in the absence of large amounts of labeled data. We base our experiments on data from four different domains. After establishing that naive cross-domain classification results in poor classification accuracy, we compare results obtained by using each of the four approaches and discuss their advantages, disadvantages and performance. --- paper_title: Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification paper_content: Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains. --- paper_title: Knowledge Transfer and Opinion Detection in the TREC2006 Blog Track paper_content: The paper describes the opinion detection system developed in Carnegie Mellon University for TREC 2006 Blog track. The system performed a two-stage process: passage retrieval and opinion detection. Due to lack of training data for the TREC Blog corpus, online opinion reviews provided in other domains, such as movie review and product review, were used as the training data. Knowledge transfer was performed to make the cross-domain learning possible. Logistic regression ranked the sentence-level opinions vs. objective statements. The evaluation shows that the algorithm is effective in the task. --- paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews paper_content: This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". 
A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews. --- paper_title: Thumbs Up? Sentiment Classification Using Machine Learning Techniques paper_content: We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging. --- paper_title: Opinion mining and sentiment analysis paper_content: An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided. --- paper_title: Customizing Sentiment Classifiers to New Domains: a Case Study paper_content: Sentiment classification is a very domain-specific problem; classifiers trained in one domain do not perform well in others. Unfortunately, many domains are lacking in large amounts of labeled data for fully-supervised learning approaches. At the same time, sentiment classifiers need to be customizable to new domains in order to be useful in practice. We attempt to address these difficulties and constraints in this paper, where we survey four different approaches to customizing a sentiment classification system to a new target domain in the absence of large amounts of labeled data. We base our experiments on data from four different domains. After establishing that naive cross-domain classification results in poor classification accuracy, we compare results obtained by using each of the four approaches and discuss their advantages, disadvantages and performance. 
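The semantic-orientation measure described in the "Thumbs Up Or Thumbs Down?" abstract above, SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor"), can be written down directly once co-occurrence counts are available. The sketch below assumes such counts have already been gathered from some corpus or search interface; the helper names and the count values are invented for illustration and are not the paper's estimation procedure.

```python
import math

# Sketch of the semantic-orientation score described in the abstract above:
# SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor"),
# with PMI estimated from co-occurrence counts. The counts below are invented
# numbers used only to make the example runnable.

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information from raw counts, log base 2."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

def semantic_orientation(counts, phrase, total):
    """Positive result suggests 'thumbs up', negative suggests 'thumbs down'."""
    so_pos = pmi(counts[(phrase, "excellent")], counts[phrase], counts["excellent"], total)
    so_neg = pmi(counts[(phrase, "poor")], counts[phrase], counts["poor"], total)
    return so_pos - so_neg

if __name__ == "__main__":
    # Hypothetical corpus statistics for the phrase "subtle nuances".
    counts = {
        "subtle nuances": 800,
        "excellent": 50_000,
        "poor": 40_000,
        ("subtle nuances", "excellent"): 120,   # co-occurrences within a window
        ("subtle nuances", "poor"): 15,
    }
    so = semantic_orientation(counts, "subtle nuances", total=10_000_000)
    print(f"SO('subtle nuances') = {so:.2f}")   # > 0, i.e., oriented positively
```

Running the example prints a positive score, matching the abstract's intuition that phrases associated with "excellent" are oriented positively.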
--- paper_title: Effects Of Adjective Orientation And Gradability On Sentence Subjectivity paper_content: Subjectivity is a pragmatic, sentence-level feature that has important implications for text processing applications such as information extraction and information retrieval. We study the effects of dynamic adjectives, semantically oriented adjectives, and gradable adjectives on a simple subjectivity classifier, and establish that they are strong predictors of subjectivity. A novel trainable method that statistically combines two indicators of gradability is presented and evaluated, complementing existing automatic techniques for assigning orientation labels. --- paper_title: Automatic Identification Of Pro And Con Reasons In Online Reviews paper_content: In this paper, we present a system that automatically extracts the pros and cons from online reviews. Although many approaches have been developed for extracting opinions from text, our focus here is on extracting the reasons of the opinions, which may themselves be in the form of either fact or opinion. Leveraging online review sites with author-generated pros and cons, we propose a system for aligning the pros and cons to their sentences in review texts. A maximum entropy model is then trained on the resulting labeled set to subsequently extract pros and cons from online review sites that do not explicitly provide them. Our experimental results show that our resulting system identifies pros and cons with 66% precision and 76% recall. --- paper_title: Pulse: Mining Customer Opinions from Free Text paper_content: We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface. --- paper_title: Determining The Sentiment Of Opinions paper_content: Identifying sentiments (the affective parts of opinions) is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion. The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results. --- paper_title: Learning Extraction Patterns For Subjective Expressions paper_content: This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision. --- paper_title: Crystal: Analyzing Predictive Opinions on the Web paper_content: In this paper, we present an election prediction system (Crystal) based on web users’ opinions posted on an election prediction website. 
Given a prediction message, Crystal first identifies which party the message predicts to win and then aggregates prediction analysis results of a large amount of opinions to project the election results. We collect past election prediction messages from the Web and automatically build a gold standard. We focus on capturing lexical patterns that people frequently use when they express their predictive opinions about a coming election. To predict election results, we apply SVM-based supervised learning. To improve performance, we propose a novel technique which generalizes n-gram feature patterns. Experimental results show that Crystal significantly outperforms several baselines as well as a non-generalized n-gram approach. Crystal predicts future elections with 81.68% accuracy. --- paper_title: Automatically Assessing Review Helpfulness paper_content: User-supplied reviews are widely and increasingly used to enhance e-commerce and other websites. Because reviews can be numerous and varying in quality, it is important to assess how helpful each review is. While review helpfulness is currently assessed manually, in this paper we consider the task of automatically assessing it. Experiments using SVM regression on a variety of features over Amazon.com product reviews show promising results, with rank correlations of up to 0.66. We found that the most useful features include the length of the review, its unigrams, and its product rating. --- paper_title: Learning Subjective Nouns Using Extraction Pattern Bootstrapping paper_content: We explore the idea of creating a subjectivity classifier that uses lists of subjective nouns learned by bootstrapping algorithms. The goal of our research is to develop a system that can distinguish subjective sentences from objective sentences. First, we use two bootstrapping algorithms that exploit extraction patterns to learn sets of subjective nouns. Then we train a Naive Bayes classifier using the subjective nouns, discourse features, and subjectivity clues identified in prior research. The bootstrapping algorithms learned over 1000 subjective nouns, and the subjectivity classifier performed well, achieving 77% recall with 81% precision. --- paper_title: Just How Mad Are You? Finding Strong and Weak Opinion Clauses paper_content: There has been a recent swell of interest in the automatic identification and extraction of opinions and emotions in text. In this paper, we present the first experimental results classifying the strength of opinions and other types of subjectivity and classifying the subjectivity of deeply nested clauses. We use a wide range of features, including new syntactic features developed for opinion recognition. In 10-fold cross-validation experiments using support vector regression, we achieve improvements in mean-squared error over baseline ranging from 57% to 64%. --- paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews paper_content: This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g.,"subtle nuances") and a negative semantic orientation when it has bad associations (e.g.,"very cavalier"). 
In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word"excellent"minus the mutual information between the given phrase and the word"poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews. --- paper_title: WordNet: An Electronic Lexical Database paper_content: A teaching device to acquaint dental students and also patients with endodontic root canal techniques performed by dentists and utilizing an electronic oscillator having a scale reading in electric current measurement and a pair of electrical circuit conductors being connected at one end to the terminals of the oscillator and the opposite ends thereof respectively being connectable to one or more small diameter metal wires which simulate dental reamers and files which are movable in root canal-simulating passages of uniform diameter complementary to that of said wires and formed in a transparent model of a human tooth including a root and cusp thereon and mounted in a transparent enclosure in which the root portion of the tooth extends with the cusp of the model extending above the upper end of the enclosure. --- paper_title: Towards Answering Opinion Questions: Separating Facts From Opinions And Identifying The Polarity Of Opinion Sentences paper_content: Opinion question answering is a challenging task for natural language processing. In this paper, we discuss a necessary component for an opinion question answering system: separating opinions from fact, at both the document and sentence level. We present a Bayesian classifier for discriminating between documents with a preponderance of opinions such as editorials from regular news stories, and describe three unsupervised, statistical techniques for the significantly harder task of detecting opinions at the sentence level. We also present a first model for classifying opinion sentences as positive or negative in terms of the main perspective being expressed in the opinion. Results from a large collection of news stories and a human evaluation of 400 sentences are reported, indicating that we achieve very high performance in document classification (upwards of 97% precision and recall), and respectable performance in detecting opinions and classifying them at the sentence level as positive, negative, or neutral (up to 91% accuracy). --- paper_title: Sentiment Classification of Movie Reviews Using Contextual Valence Shifters paper_content: We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. 
We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone. --- paper_title: Feature Subsumption For Opinion Analysis paper_content: Lexical features are key to many approaches to sentiment analysis and opinion detection. A variety of representations have been used, including single words, multi-word Ngrams, phrases, and lexico-syntactic patterns. In this paper, we use a subsumption hierarchy to formally define different types of lexical features and their relationship to one another, both in terms of representational coverage and performance. We use the subsumption hierarchy in two ways: (1) as an analytic tool to automatically identify complex features that outperform simpler features, and (2) to reduce a feature set by removing unnecessary features. We show that reducing the feature set improves performance on three opinion classification tasks, especially when combined with traditional feature selection. --- paper_title: SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining paper_content: Opinion mining (OM) is a recent subdiscipline at the crossroads of information retrieval and computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. OM has a rich set of applications, ranging from tracking users’ opinions about products or about political candidates as expressed in online forums, to customer relationship management. In order to aid the extraction of opinions from text, recent research has tried to automatically determine the “PN-polarity” of subjective terms, i.e. identify whether a term that is a marker of opinionated content has a positive or a negative connotation. Research on determining whether a term is indeed a marker of opinionated content (a subjective term) or not (an objective term) has been, instead, much more scarce. In this work we describe SENTIWORDNET, a lexical resource in which each WORDNET synset s is associated to three numerical scoresObj(s), Pos(s) and Neg(s), describing how objective, positive, and negative the terms contained in the synset are. The method used to develop SENTIWORDNET is based on the quantitative analysis of the glosses associated to synsets, and on the use of the resulting vectorial term representations for semi-supervised synset classification. The three scores are derived by combining the results produced by a committee of eight ternary classifiers, all characterized by similar accuracy levels but different classification behaviour. SENTIWORDNET is freely available for research purposes, and is endowed with a Web-based graphical user interface. 
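The term-counting scheme with contextual valence shifters described in the preceding abstract lends itself to a compact illustration. The sketch below is not the authors' implementation: the word lists, the ±1 term scores, the three-token look-back window, and the 1.5/0.5 scaling factors are all illustrative assumptions.

    # Minimal sketch of term counting with contextual valence shifters (Python).
    # All lexicons and scaling factors below are illustrative placeholders.
    POSITIVE = {"good", "great", "excellent"}
    NEGATIVE = {"bad", "poor", "terrible"}
    NEGATIONS = {"not", "never", "no"}
    INTENSIFIERS = {"very", "extremely"}   # assumed boost factor 1.5
    DIMINISHERS = {"somewhat", "barely"}   # assumed damping factor 0.5

    def sentence_score(tokens):
        score = 0.0
        for i, tok in enumerate(tokens):
            polarity = 1.0 if tok in POSITIVE else -1.0 if tok in NEGATIVE else 0.0
            if polarity == 0.0:
                continue
            window = tokens[max(0, i - 3):i]            # look back a few words
            if any(w in NEGATIONS for w in window):     # negation reverses polarity
                polarity = -polarity
            if any(w in INTENSIFIERS for w in window):  # intensifier strengthens
                polarity *= 1.5
            if any(w in DIMINISHERS for w in window):   # diminisher weakens
                polarity *= 0.5
            score += polarity
        return score  # > 0 suggests positive, < 0 negative

    # Example: sentence_score("this is not a very good movie".split()) -> -1.5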
--- paper_title: Sentiment analysis: capturing favorability using natural language processing paper_content: This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative.The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles. --- paper_title: A holistic lexicon-based approach to opinion mining paper_content: One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly --- paper_title: ARSA: a sentiment-aware model for predicting sales performance using blogs paper_content: Due to its high popularity, Weblogs (or blogs in short) present a wealth of information that can be very helpful in assessing the general public's sentiments and opinions. In this paper, we study the problem of mining sentiment information from blogs and investigate ways to use such information for predicting product sales performance. Based on an analysis of the complex nature of sentiments, we propose Sentiment PLSA (S-PLSA), in which a blog entry is viewed as a document generated by a number of hidden sentiment factors. Training an S-PLSA model on the blog data enables us to obtain a succinct summary of the sentiment information embedded in the blogs. 
We then present ARSA, an autoregressive sentiment-aware model, to utilize the sentiment information captured by S-PLSA for predicting product sales performance. Extensive experiments were conducted on a movie data set. We compare ARSA with alternative models that do not take into account the sentiment information, as well as a model with a different feature selection method. Experiments confirm the effectiveness and superiority of the proposed approach. --- paper_title: Mining WordNet For A Fuzzy Sentiment: Sentiment Tag Extraction From WordNet Glosses paper_content: Many of the tasks required for semantic tagging of phrases and texts rely on a list of words annotated with some semantic features. We present a method for extracting sentiment-bearing adjectives from WordNet using the Sentiment Tag Extraction Program (STEP). We did 58 STEP runs on unique non-intersecting seed lists drawn from manually annotated list of positive and negative adjectives and evaluated the results against other manually annotated lists. The 58 runs were then collapsed into a single set of 7, 813 unique words. For each word we computed a Net Overlap Score by subtracting the total number of runs assigning this word a negative sentiment from the total of the runs that consider it positive. We demonstrate that Net Overlap Score can be used as a measure of the words degree of membership in the fuzzy category of sentiment: the core adjectives, which had the highest Net Overlap scores, were identified most accurately both by STEP and by human annotators, while the words on the periphery of the category had the lowest scores and were associated with low rates of inter-annotator agreement. --- paper_title: WordNet: An Electronic Lexical Database paper_content: A teaching device to acquaint dental students and also patients with endodontic root canal techniques performed by dentists and utilizing an electronic oscillator having a scale reading in electric current measurement and a pair of electrical circuit conductors being connected at one end to the terminals of the oscillator and the opposite ends thereof respectively being connectable to one or more small diameter metal wires which simulate dental reamers and files which are movable in root canal-simulating passages of uniform diameter complementary to that of said wires and formed in a transparent model of a human tooth including a root and cusp thereon and mounted in a transparent enclosure in which the root portion of the tooth extends with the cusp of the model extending above the upper end of the enclosure. --- paper_title: Expanding Domain Sentiment Lexicon through Double Propagation paper_content: In most sentiment analysis applications, the sentiment lexicon plays a key role. However, it is hard, if not impossible, to collect and maintain a universal sentiment lexicon for all application domains because different words may be used in different domains. The main existing technique extracts such sentiment words from a large domain corpus based on different conjunctions and the idea of sentiment coherency in a sentence. In this paper, we propose a novel propagation approach that exploits the relations between sentiment words and topics or product features that the sentiment words modify, and also sentiment words and product features themselves to extract new sentiment words. As the method propagates information through both sentiment words and features, we call it double propagation. 
The extraction rules are designed based on relations described in dependency trees. A new method is also proposed to assign polarities to newly discovered sentiment words in a domain. Experimental results show that our approach is able to extract a large number of new sentiment words. The polarity assignment method is also effective. --- paper_title: Fully Automatic Lexicon Expansion For Domain-Oriented Sentiment Analysis paper_content: This paper proposes an unsupervised lexicon building method for the detection of polar clauses, which convey positive or negative aspects in a specific domain. The lexical entries to be acquired are called polar atoms, the minimum human-understandable syntactic structures that specify the polarity of clauses. As a clue to obtain candidate polar atoms, we use context coherency, the tendency for same polarities to appear successively in contexts. Using the overall density and precision of coherency in the corpus, the statistical estimation picks up appropriate polar atoms among candidates, without any manual tuning of the threshold values. The experimental results show that the precision of polarity assignment with the automatically acquired lexicon was 94% on average, and our method is robust for corpora in diverse domains and for the size of the initial lexicon. --- paper_title: Yahoo! for amazon: Sentiment extraction from small talk on the web paper_content: Extracting sentiment from text is a hard semantic problem. We develop a methodology for extracting small investor sentiment from stock message boards. The algorithm comprises different classifier algorithms coupled together by a voting scheme. Accuracy levels are similar to widely used Bayes classifiers, but false positives are lower and sentiment accuracy higher. Time series and cross-sectional aggregation of message information improves the quality of the resultant sentiment index, particularly in the presence of slang and ambiguity. Empirical applications evidence a relationship with stock values---tech-sector postings are related to stock index levels, and to volumes and volatility. The algorithms may be used to assess the impact on investor opinion of management announcements, press releases, third-party news, and regulatory changes. --- paper_title: Mining product reputations on the Web paper_content: Knowing the reputations of your own and/or competitors' products is important for marketing and customer relationship management. It is, however, very costly to collect and analyze survey data manually. This paper presents a new framework for mining product reputations on the Internet. It automatically collects people's opinions about target products from Web pages, and it uses text mining techniques to obtain the reputations of those products.On the basis of human-test samples, we generate in advance syntactic and linguistic rules to determine whether any given statement is an opinion or not, as well as whether such any opinion is positive or negative in nature. We first collect statements regarding target products using a general search engine, and then, using the rules, extract opinions from among them and attach three labels to each opinion, labels indicating the positive/negative determination, the product name itself, and an numerical value expressing the degree of system confidence that the statement is, in fact, an opinion. 
The labeled opinions are then input into an opinion database.The mining of reputations, i.e., the finding of statistically meaningful information included in the database, is then conducted. We specify target categories using label values (such as positive opinions of product A) and perform four types of text mining: extraction of 1) characteristic words, 2) co-occurrence words, 3) typical sentences, for individual target categories, and 4) correspondence analysis among multiple target categories.Actual marketing data is used to demonstrate the validity and effectiveness of the framework, which offers a drastic reduction in the overall cost of reputation analysis over that of conventional survey approaches and supports the discovery of knowledge from the pool of opinions on the web. --- paper_title: Sentiment analyzer: extracting sentiments about a given topic using natural language processing techniques paper_content: We present sentiment analyzer (SA) that extracts sentiment (or opinion) about a subject from online text documents. Instead of classifying the sentiment of an entire document about a subject, SA detects all references to the given subject, and determines sentiment in each of the references using natural language processing (NLP) techniques. Our sentiment analysis consists of 1) a topic specific feature term extraction, 2) sentiment extraction, and 3) (subject, sentiment) association by relationship analysis. SA utilizes two linguistic resources for the analysis: the sentiment lexicon and the sentiment pattern database. The performance of the algorithms was verified on online product review articles ("digital camera" and "music" reviews), and more general documents including general Webpages and news articles. --- paper_title: Determining Term Subjectivity And Term Orientation For Opinion Mining paper_content: Opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. To aid the extraction of opinions from text, recent work has tackled the issue of determining the orientation of “subjective” terms contained in text, i.e. deciding whether a term that carries opinionated content has a positive or a negative connotation. This is believed to be of key importance for identifying the orientation of documents, i.e. determining whether a document expresses a positive or negative opinion about its subject matter. We contend that the plain determination of the orientation of terms is not a realistic problem, since it starts from the nonrealistic assumption that we already know whether a term is subjective or not; this would imply that a linguistic resource that marks terms as “subjective” or “objective” is available, which is usually not the case. In this paper we confront the task of deciding whether a given term has a positive connotation, or a negative connotation, or has no subjective connotation at all; this problem thus subsumes the problem of determining subjectivity and the problem of determining orientation. We tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection. Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone. --- paper_title: Using WordNet to Measure Semantic Orientations of Adjectives paper_content: Current WordNet-based measures of distance or similarity focus almost exclusively on WordNet’s taxonomic relations. 
This effectively restricts their applicability to the syntactic categories of noun and verb. We investigate a graph-theoretic model of WordNet’s most important relation—synonymy—and propose measures that determine the semantic orientation of adjectives for three factors of subjective meaning. Evaluation against human judgments shows the effectiveness of the resulting measures. --- paper_title: Hidden sentiment association in Chinese web opinion mining. WWW paper_content: The boom of product review websites, blogs and forums on the web has attracted many research efforts on opinion mining. Recently, there was a growing interest in the finer-grained opinion mining, which detects opinions on different review features as opposed to the whole review level. The researches on feature-level opinion mining mainly rely on identifying the explicit relatedness between product feature words and opinion words in reviews. However, the sentiment relatedness between the two objects is usually complicated. For many cases, product feature words are implied by the opinion words in reviews. The detection of such hidden sentiment association is still a big challenge in opinion mining. Especially, it is an even harder task of feature-level opinion mining on Chinese reviews due to the nature of Chinese language. In this paper, we propose a novel mutual reinforcement approach to deal with the feature-level opinion mining problem. More specially, 1) the approach clusters product features and opinion words simultaneously and iteratively by fusing both their content information and sentiment link information. 2) under the same framework, based on the product feature categories and opinion word groups, we construct the sentiment association set between the two groups of data objects by identifying their strongest n sentiment links. Moreover, knowledge from multi-source is incorporated to enhance clustering in the procedure. Based on the pre-constructed association set, our approach can largely predict opinions relating to different product features, even for the case without the explicit appearance of product feature words in reviews. Thus it provides a more accurate opinion evaluation. The experimental results demonstrate that our method outperforms the state-of-art algorithms. --- paper_title: ARSA: a sentiment-aware model for predicting sales performance using blogs paper_content: Due to its high popularity, Weblogs (or blogs in short) present a wealth of information that can be very helpful in assessing the general public's sentiments and opinions. In this paper, we study the problem of mining sentiment information from blogs and investigate ways to use such information for predicting product sales performance. Based on an analysis of the complex nature of sentiments, we propose Sentiment PLSA (S-PLSA), in which a blog entry is viewed as a document generated by a number of hidden sentiment factors. Training an S-PLSA model on the blog data enables us to obtain a succinct summary of the sentiment information embedded in the blogs. We then present ARSA, an autoregressive sentiment-aware model, to utilize the sentiment information captured by S-PLSA for predicting product sales performance. Extensive experiments were conducted on a movie data set. We compare ARSA with alternative models that do not take into account the sentiment information, as well as a model with a different feature selection method. Experiments confirm the effectiveness and superiority of the proposed approach. 
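One family of techniques cited above scores a word by its position in a lexical graph, e.g. the graph-theoretic treatment of WordNet synonymy just described. The sketch below is a generic instantiation of that idea, not the measure of any particular paper: it orients a word by comparing its shortest-path distance, through assumed synonym links, to the reference words "good" and "bad".

    # Sketch: orienting an adjective by synonymy-graph distance to "good" vs "bad".
    # The tiny graph below is an illustrative stand-in for real WordNet synonym links.
    from collections import deque

    SYNONYMS = {
        "good": {"decent", "fine"},
        "decent": {"good", "fine"},
        "fine": {"good", "decent", "ok"},
        "ok": {"fine", "mediocre"},
        "mediocre": {"ok", "poor"},
        "poor": {"mediocre", "bad"},
        "bad": {"poor"},
    }

    def distance(graph, src, dst):
        """Breadth-first shortest-path length; None if dst is unreachable."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return None

    def orientation(graph, word, pos="good", neg="bad"):
        """Positive result: word sits closer to `pos`; negative: closer to `neg`."""
        d_pos = distance(graph, word, pos)
        d_neg = distance(graph, word, neg)
        span = distance(graph, pos, neg)
        if None in (d_pos, d_neg, span) or span == 0:
            return None
        return (d_neg - d_pos) / span

    # Example: orientation(SYNONYMS, "mediocre") -> (2 - 3) / 5 = -0.2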
--- paper_title: Topic sentiment mixture: modeling facets and opinions in weblogs paper_content: In this paper, we define the problem of topic-sentiment analysis on Weblogs and propose a novel probabilistic model to capture the mixture of topics and sentiments simultaneously. The proposed Topic-Sentiment Mixture (TSM) model can reveal the latent topical facets in a Weblog collection, the subtopics in the results of an ad hoc query, and their associated sentiments. It could also provide general sentiment models that are applicable to any ad hoc topics. With a specifically designed HMM structure, the sentiment models and topic models estimated with TSM can be utilized to extract topic life cycles and sentiment dynamics. Empirical experiments on different Weblog datasets show that this approach is effective for modeling the topic facets and sentiments and extracting their dynamics from Weblog collections. The TSM model is quite general; it can be applied to any text collections with a mixture of topics and sentiments, thus has many potential applications, such as search result summarization, opinion tracking, and user behavior prediction. --- paper_title: Extracting knowledge from evaluative text paper_content: Capturing knowledge from free-form evaluative texts about an entity is a challenging task. New techniques of feature extraction, polarity determination and strength evaluation have been proposed. Feature extraction is particularly important to the task as it provides the underpinnings of the extracted knowledge. The work in this paper introduces an improved method for feature extraction that draws on an existing unsupervised method. By including user-specific prior knowledge of the evaluated entity, we turn the task of feature extraction into one of term similarity by mapping crude (learned) features into a user-defined taxonomy of the entity's features. Results show promise both in terms of the accuracy of the mapping as well as the reduction in the semantic redundancy of crude features. --- paper_title: Expanding Domain Sentiment Lexicon through Double Propagation paper_content: In most sentiment analysis applications, the sentiment lexicon plays a key role. However, it is hard, if not impossible, to collect and maintain a universal sentiment lexicon for all application domains because different words may be used in different domains. The main existing technique extracts such sentiment words from a large domain corpus based on different conjunctions and the idea of sentiment coherency in a sentence. In this paper, we propose a novel propagation approach that exploits the relations between sentiment words and topics or product features that the sentiment words modify, and also sentiment words and product features themselves to extract new sentiment words. As the method propagates information through both sentiment words and features, we call it double propagation. The extraction rules are designed based on relations described in dependency trees. A new method is also proposed to assign polarities to newly discovered sentiment words in a domain. Experimental results show that our approach is able to extract a large number of new sentiment words. The polarity assignment method is also effective. --- paper_title: Structured Models for Fine-to-Coarse Sentiment Analysis paper_content: In this paper we investigate a structured model for jointly classifying the sentiment of text at varying levels of granularity. 
Inference in the model is based on standard sequence classification techniques using constrained Viterbi to ensure consistent solutions. The primary advantage of such a model is that it allows classification decisions from one level in the text to influence decisions at another. Experiments show that this method can significantly reduce classification error relative to models trained in isolation. --- paper_title: Sentiment analyzer: extracting sentiments about a given topic using natural language processing techniques paper_content: We present sentiment analyzer (SA) that extracts sentiment (or opinion) about a subject from online text documents. Instead of classifying the sentiment of an entire document about a subject, SA detects all references to the given subject, and determines sentiment in each of the references using natural language processing (NLP) techniques. Our sentiment analysis consists of 1) a topic specific feature term extraction, 2) sentiment extraction, and 3) (subject, sentiment) association by relationship analysis. SA utilizes two linguistic resources for the analysis: the sentiment lexicon and the sentiment pattern database. The performance of the algorithms was verified on online product review articles ("digital camera" and "music" reviews), and more general documents including general Webpages and news articles. --- paper_title: Extracting Product Features And Opinions From Reviews paper_content: Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products.Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. --- paper_title: Determining Term Subjectivity And Term Orientation For Opinion Mining paper_content: Opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. To aid the extraction of opinions from text, recent work has tackled the issue of determining the orientation of “subjective” terms contained in text, i.e. deciding whether a term that carries opinionated content has a positive or a negative connotation. This is believed to be of key importance for identifying the orientation of documents, i.e. determining whether a document expresses a positive or negative opinion about its subject matter. We contend that the plain determination of the orientation of terms is not a realistic problem, since it starts from the nonrealistic assumption that we already know whether a term is subjective or not; this would imply that a linguistic resource that marks terms as “subjective” or “objective” is available, which is usually not the case. In this paper we confront the task of deciding whether a given term has a positive connotation, or a negative connotation, or has no subjective connotation at all; this problem thus subsumes the problem of determining subjectivity and the problem of determining orientation. We tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection. 
Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone. --- paper_title: A holistic lexicon-based approach to opinion mining paper_content: One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly --- paper_title: Sentiment Learning on Product Reviews via Sentiment Ontology Tree paper_content: Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products. ---
Title: A Review of Opinion Mining Methods for Analyzing Citizens' Contributions in Public Policy Debate
Section 1: Introduction
Description 1: Provide background information on eParticipation, its context in both Web 1.0 and Web 2.0, and the challenges posed by large quantities of unstructured textual contributions from citizens.
Section 2: Opinion Mining Objectives and Concepts
Description 2: Explain the basic concepts and objectives of opinion mining, the computational processing of opinions, and the overall categorization of opinion mining research into document/sentence-level and feature-based sentiment analysis.
Section 3: Document and Sentence Level Sentiment Analysis
Description 3: Review the main research stream that deals with sentiment analysis at both the document and sentence levels, including methodologies and tools used for classifying sentiments.
Section 4: Document-Level Sentiment Analysis
Description 4: Discuss supervised and unsupervised learning methods for sentiment analysis at the document level, including the significance of domain adaptation and the importance of polar words.
Section 5: Sentence-Level Sentiment Analysis
Description 5: Highlight the methods used for sentence-level subjectivity and sentiment classification, including bootstrapping techniques and handling of compound sentences.
Section 6: Polar Words
Description 6: Describe the different approaches for developing lists of polar (opinion) words, including manual, dictionary-based, and corpus-based methods.
Section 7: Feature-based Sentiment Analysis
Description 7: Elaborate on methods for identifying and analyzing specific object features commented on within opinionated texts to determine positive, negative, or neutral sentiments.
Section 8: Feature Extraction
Description 8: Detail the techniques for extracting object features from texts using unsupervised learning, and discuss handling of synonym issues.
Section 9: Identification of Opinion Orientation
Description 9: Provide insights into identifying the orientation of opinions expressed on particular features within a sentence, including lexicon-based approaches and handling of negations and but-clauses.
Section 10: Ontology-based Sentiment Analysis
Description 10: Discuss the use of domain-specific knowledge through ontologies for feature-based sentiment analysis, outlining the modules involved in ontology-based systems.
Section 11: Sentiment Analysis in project PADGETS
Description 11: Examine the novel approach used in the PADGETS project, incorporating Social Network Analysis theory to handle sentiment classification in languages with limited linguistic resources.
Section 12: Conclusions
Description 12: Summarize the importance of using opinion mining methods for analyzing citizens' textual contributions in public policy debates and outline a basic framework to integrate these methods effectively in eParticipation processes.
Large sets of t-designs through partitionable sets: A survey
11
--- paper_title: On the maximum number of disjoint triple systems paper_content: Let D(λ;v) denote the maximum number of mutually disjoint S(λ;2,3,v). We prove that D(2;v) = (v−2)/2 for all v ≡ 0 or 4 (mod 6), v ≡ 0 and that D(6;v) = (v−2)/6 for all v ≡ 2 (mod 6). --- paper_title: Some balanced complete block designs paper_content: Let q = 6t ± 1, ν = 2q + 2. The $\binom{\nu}{3}$ triples on ν marks may be partitioned into q sets, each forming a BIBD of parameters (ν,3,2). Related results, some of them known, are also discussed briefly. --- paper_title: Covering all triples on n marks by disjoint Steiner systems paper_content: Abstract Let q be a number all whose prime factors divide integers of the form 2^s − 1, s odd. If n = q + 2, the $\binom{n}{3}$ triples on n marks can be partitioned into q sets, each forming a Steiner triple system. --- paper_title: On large sets of disjoint steiner triple systems, IV paper_content: Abstract To construct large sets of disjoint STS(3n) (i.e., LTS(3n)), we introduce a new kind of combinatorial designs. Let S be a set of n elements. If x ∈ S, we denote an n × n square array on S by A_x, if for every w ∈ S\{x} the following conditions are satisfied: A_x = [a_{yz}(x)] (y, z ∈ S), a_{xx}(x) = x, a_{ww}(x) ≠ w, a_{xw}(x) = a_{wx}(x) = x, and {a_{wz}(x) | z ∈ S} = {a_{yw}(x) | y ∈ S} = S. Let j ∈ {1, 2}, A_j = [a_{yz}[j]] (y, z ∈ S) be a Latin square of order n based on S with n parallel transversals including the diagonal one. Two squares A_x and A_{x′} on the same S are called disjoint, if a_{yz}(x) ≠ a_{yz}(x′) whenever y, z ∈ S\{x, x′}; two squares A_x and A_j on the same S are called disjoint, if a_{yz}(x) ≠ a_{yz}[j] whenever y, z ∈ S\{x}; and two squares A_1 and A_2 on the same S are called disjoint, if a_{yz}[1] ≠ a_{yz}[2] whenever y ≠ z. It is a set of n + 2 pairwise disjoint squares A_x (x runs over S), A_1 and A_2 on S as mentioned above that is very useful to construct LTS(3n), and such a set we denote by LDS(n). The essence in the relation between LDS(n) and LTS(3n) is the following theorem which is established in Section 2: Theorem. If there exist both an LDS(n) and an LTS(n + 2), then there exists an LTS(3n) also. The set of integers n for which LDS(n) exist is denoted by D. In the other parts of this paper, the following results are given: (1) If n ∈ D, and q = 2^α (α is an integer greater than 1), or q ∈ {5, 7, 11, 19}, then qn ∈ D. (2) If p^α is a prime power, p > 2 and p^α ∈ D, then 3p^α ∈ D. (3) If q is a prime power greater than 4 and 1 + n ∈ D, then 1 + qn ∈ D. (4) If t is a nonnegative integer, then 7 + 12t ∈ D and 5 + 8t ∈ D. --- paper_title: Halving the Complete Design paper_content: Abstract Let X be a finite set of cardinality v. We denote the set of all k-subsets of X by $\binom{X}{k}$. In this paper we consider the problem of partitioning $\binom{X}{k}$ into two parts of equal size, each of which is the block set of a 2-(v, k, λ) design. We determine necessary and sufficient conditions on v for the existence of such a partition when k = 3 or 4. We also construct partitions for higher values of k and infinitely many values of v. The case where k = 3 has been solved for all values of v by Dehon [5]. The case where v = 2k has been solved for all values of k by Alltop [1]. The remaining results are new. The technique used is similar to that used by Denniston in his construction of a 4-(12,5,4) design without repeated blocks. We also prove an interesting corollary to Baranyai's theorem giving necessary and sufficient conditions for the existence of a partition of $\binom{X}{k}$ into 1-(v, k, λ) designs.
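For orientation across the entries above and below, the halving problem is the N = 2 instance of a large set LS[N](t, k, v), i.e. a partition of all k-subsets of a v-set into N mutually disjoint t-(v, k, λ) designs. The trivial necessary conditions quoted later in this reference list can be stated compactly as

\[
N \;\Big|\; \binom{v-i}{k-i} \quad \text{for } i = 0, 1, \ldots, t,
\qquad \text{where } \lambda = \binom{v-t}{k-t} \Big/ N,
\]

so, in particular, halving $\binom{X}{k}$ into two 2-designs requires $\binom{v}{k}$, $\binom{v-1}{k-1}$ and $\binom{v-2}{k-2}$ to all be even.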
--- paper_title: All block designs with b = $\binom{v}{k}$/2 exist paper_content: Abstract In this paper, we establish the correctness of a conjecture by Hartman on the existence of large sets of size two for t = 2. Also, we obtain some partial results for t > 2. --- paper_title: Extending t-designs paper_content: Abstract Three extension theorems for t-designs are proved; two for t even, and one for t odd. Another theorem guaranteeing that certain t-designs be (t + 1)-designs is presented. The extension theorem for odd t is used to show that every group of odd order 2k + 1, k ≠ 2^r − 1, acts as an automorphism group of a 2-(2k + 2, k + 1, λ) design consisting of exactly one half of the (k + 1)-subsets. Although the question of the existence of a 6-(14, 7, 4) design is not settled, certain requisite properties of the 4-designs on 12 elements derived from such a design are established. All of these results depend heavily upon generalizations of block intersection number equations of N. S. Mendelsohn. --- paper_title: Parallelisms of complete designs paper_content: Introduction 1. The existence theorem Appendix: the integrity theorem for network flows 2. The parallelogram property Appendix: the binary perfect code theorem Appendix: association schemes and metrically regular graphs 3. Steiner points and Veblen points Appendix: Steiner systems 4. Minimal edge-colourings of complete graphs Appendix: latin squares, SDRs and permanents 5. Biplanes and metric regularity Appendix: symmetric designs 6. Automorphism groups Appendix: multiply transitive groups 7. Resolutions and partition systems Bibliography Index. --- paper_title: Partitions of the 4-subsets of a 13-set into disjoint projective planes paper_content: Abstract For 1 ⩽ t < k < v, let S(t, k, v) denote a Steiner system and let P_k(v) be the set of all k-subsets of the set {1, 2, …, v}. We partition P_4(13) into 55 mutually disjoint S(2, 4, 13)'s (projective planes). This is the first known example of a complete partition of P_k(v) into disjoint S(t, k, v)'s for k ⩾ 4 and t ⩾ 2.
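The 13-set partition just cited is easy to check arithmetically; the following is a worked verification of the stated numbers rather than new material:

\[
|P_4(13)| = \binom{13}{4} = 715,
\qquad
b\bigl(S(2,4,13)\bigr) = \frac{\binom{13}{2}}{\binom{4}{2}} = \frac{78}{6} = 13,
\qquad
715 = 55 \times 13,
\]

so 55 mutually disjoint S(2, 4, 13)'s use every 4-subset exactly once; in the large-set notation used elsewhere in this list, the partition is an LS[55](2, 4, 13).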
--- paper_title: New Large Sets of t-Designs paper_content: Abstract: We introduce generalizations of earlier direct methods for constructing large sets of t-designs. These are based on assembling systematically orbits of t-homogeneous permutation groups in their induced actions on k-subsets. By means of these techniques and the known recursive methods we construct an extensive number of new large sets, including new infinite families. In particular, a new series of LS[3](2(2 + m), 8·3^m − 2, 16·3^m − 3) is obtained. This also provides the smallest known ν for a t-(ν, k, λ) design when t ≥ 16. We present our results compactly for ν ≤ 61, in tables derived from Pascal's triangle modulo appropriate primes. © 2000 John Wiley & Sons, Inc. J Combin Designs 9: 40–59, 2001 --- paper_title: Large sets of 3-designs from psl(2, q), with block sizes 4 and 5 paper_content: We determine the distribution of 3-(q + 1, k, λ) designs, with k ∈ {4, 5}, among the orbits of k-element subsets under the action of PSL(2, q), for q ≡ 3 (mod 4), on the projective line. As a consequence, we give necessary and sufficient conditions for the existence of a uniformly-PSL(2, q) large set of 3-(q + 1, k, λ) designs, with k ∈ {4, 5} and q ≡ 3 (mod 4). © 1995 John Wiley & Sons, Inc. --- paper_title: A Few More Large Sets of t-Designs paper_content: We construct several new large sets of t-designs that are invariant under Frobenius groups, and discuss their consequences. These large sets give rise to further new large sets by means of known recursive constructions including an infinite family of large sets of 3-(v, 4, λ) designs. © 1998 John Wiley & Sons, Inc. J Combin Designs 6: 293–308, 1998 --- paper_title: Locally trivial t-designs and t-designs without repeated blocks paper_content: We simplify our construction [12] of non-trivial t-designs without repeated blocks for arbitrary t. We survey known results on partitions of the set of all (t + 1)-subsets of a ν-set into S(λ; t, t + 1, ν) for the smallest λ allowed by the obvious necessary conditions. We also obtain some new results on this problem. In particular, we construct such partitions for t = 4 and λ = 60 whenever ν = 60u + 4, u a positive integer with gcd(u, 60) = 1 or 2. Sixty is the smallest possible λ for such ν. --- paper_title: More on halving the complete designs paper_content: Abstract A large set of disjoint S(λ; t, k, v) designs, denoted by LS(λ; t, k, v), is a partition of the k-subsets of a v-set into S(λ; t, k, v) designs. In this paper, we develop some recursive methods to construct large sets of t-designs.
As a consequence, we show that a conjecture of Hartman on halving complete designs is true for t = 2 and 3 ⩽ k ⩽ 15. --- paper_title: More on the Existence of Large Sets of t-Designs of Prime Sizes paper_content: In this paper, we employ the known recursive construction methods to obtain some new existence results for large sets of t-designs of prime sizes. We also present a new recursive construction which leads to more comprehensive theorems on large sets of sizes two and three. As an application, we show that for infinitely many values of block size, the trivial necessary conditions for the existence of large sets of 2-designs of size three are sufficient. --- paper_title: Root cases of large sets of t-designs paper_content: A large set of t-(v, k, λ) designs of size N, denoted by LS[N](t, k, v), is a partition of all k-subsets of a v-set into N disjoint t-(v, k, λ) designs, where N = $\binom{v-t}{k-t}$/λ. A set of trivial necessary conditions for the existence of an LS[N](t, k, v) is N | $\binom{v-i}{k-i}$ for i = 0, …, t. In this paper we extend some of the recursive methods for constructing large sets of t-designs of prime sizes. By utilizing these methods we show that for the construction of all possible large sets with the given N, t, and k, it suffices to construct a finite number of large sets which we call root cases. As a result, we show that the trivial necessary conditions for the existence of LS[3](2, k, v) are sufficient for k ≤ 80.
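Because the trivial necessary conditions quoted in the entry above drive much of the existence discussion in this reference list, a small executable check is easy to write. The sketch below (Python 3.8+, for math.comb) only tests the divisibility conditions; it says nothing about actual existence.

    # Trivial necessary conditions for a large set LS[N](t, k, v):
    # N must divide C(v - i, k - i) for every i = 0, ..., t.
    from math import comb

    def trivial_conditions_hold(N, t, k, v):
        return all(comb(v - i, k - i) % N == 0 for i in range(t + 1))

    # The 13-set example cited earlier corresponds to LS[55](2, 4, 13):
    print(trivial_conditions_hold(55, 2, 4, 13))   # True
    # Halving (N = 2) the 3-subsets of a 9-set fails, since C(7, 1) = 7 is odd:
    print(trivial_conditions_hold(2, 2, 3, 9))     # False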
--- paper_title: Halving the Complete Design paper_content: Abstract: Let X be a finite set of cardinality v. We denote the set of all k-subsets of X by (X choose k). In this paper we consider the problem of partitioning (X choose k) into two parts of equal size, each of which is the block set of a 2-(v, k, λ) design. We determine necessary and sufficient conditions on v for the existence of such a partition when k = 3 or 4. We also construct partitions for higher values of k and infinitely many values of v. The case where k = 3 has been solved for all values of v by Dehon [5]. The case where v = 2k has been solved for all values of k by Alltop [1]. The remaining results are new. The technique used is similar to that used by Denniston in his construction of a 4-(12, 5, 4) design without repeated blocks. We also prove an interesting corollary to Baranyai's theorem giving necessary and sufficient conditions for the existence of a partition of (X choose k) into 1-(v, k, λ) designs. --- paper_title: All block designs with b = (v choose k)/2 exist paper_content: Abstract: In this paper, we establish the correctness of a conjecture by Hartman on the existence of large sets of size two for t = 2. Also, we obtain some partial results for t > 2. --- paper_title: More on the Existence of Large Sets of t-Designs of Prime Sizes paper_content: In this paper, we employ the known recursive construction methods to obtain some new existence results for large sets of t-designs of prime sizes. We also present a new recursive construction which leads to more comprehensive theorems on large sets of sizes two and three. As an application, we show that for infinitely many values of block size, the trivial necessary conditions for the existence of large sets of 2-designs of size three are sufficient. --- paper_title: Root cases of large sets of t-designs paper_content: A large set of t-(v, k, λ) designs of size N, denoted by LS[N](t, k, v), is a partition of all k-subsets of a v-set into N disjoint t-(v, k, λ) designs, where N = (v − t choose k − t)/λ. A set of trivial necessary conditions for the existence of an LS[N](t, k, v) is that N divides (v − i choose k − i) for i = 0, …, t. In this paper we extend some of the recursive methods for constructing large sets of t-designs of prime sizes. By utilizing these methods we show that for the construction of all possible large sets with the given N, t, and k, it suffices to construct a finite number of large sets which we call root cases. As a result, we show that the trivial necessary conditions for the existence of LS[3](2, k, v) are sufficient for k ≤ 80. ---
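Since several of the abstracts above revolve around whether these trivial divisibility conditions are also sufficient (Hartman's halving conjecture, settled for t = 2), a small script can make the conditions concrete. The Python sketch below is an illustration under the definitions just quoted; the function name and the parameter choices are mine, not taken from the cited papers.

from math import comb

def admissible(N: int, t: int, k: int, v: int) -> bool:
    # Trivial necessary conditions for a large set LS[N](t, k, v):
    # N must divide C(v - i, k - i) for every i = 0, ..., t.
    return all(comb(v - i, k - i) % N == 0 for i in range(t + 1))

if __name__ == "__main__":
    # Candidate orders v for halving the complete design with block size 4
    # (N = 2, t = 2, k = 4).
    N, t, k = 2, 2, 4
    print([v for v in range(k + 1, 60) if admissible(N, t, k, v)])

Given the sufficiency result for t = 2 cited above, the printed values of v are exactly the orders for which such a halving exists.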
Title: Large sets of t-designs through partitionable sets: A survey
Section 1: Introduction
Description 1: Introduce the topic of large sets of t-designs, provide historical background, significance, and the scope of the paper.
Section 2: Definitions and Preliminaries
Description 2: Define key terms like t-design, block, simple design, and necessary conditions for existence. Introduce essential notation and provide preliminary examples.
Section 3: Review of the known large sets
Description 3: Give a brief account of known results on the existence of large sets of t-designs using various methods before focusing on partitionable sets.
Section 4: The necessary conditions
Description 4: Discuss the necessary conditions for the existence of LS[N](t, k, v) and provide alternative descriptions and examples.
Section 5: The approach of partitionable sets
Description 5: Explain the approach of (N, t)-partitionable sets and its significance. Introduce key lemmas and the method for constructing large sets using partitionable sets.
Section 6: General recursive constructions
Description 6: Present recursive constructions for large sets of any size using the approach of (N, t)-partitionable sets.
Section 7: Large sets of prime sizes
Description 7: Discuss recursive constructions specifically for large sets of prime sizes and highlight key theorems and their applications.
Section 8: Root cases of large sets of prime sizes
Description 8: Define root cases and show how specific large sets suffice for constructing larger sets, present important theorems and proofs.
Section 9: More results on large sets of sizes two and three
Description 9: Provide detailed results and constructions for large sets of sizes two and three. Discuss related theorems and their implications.
Section 10: Existence results
Description 10: Summarize the existence results obtained using partitionable sets, discuss significant conjectures, and present the best known results related to these conjectures.
Section 11: Open Problems
Description 11: Present and discuss open problems in the field, particularly focusing on determining root cases for large sets of various sizes and generalizing theorems for prime power sizes.
A review of some recent advances in causal inference
16
--- paper_title: Algorithms of causal inference for the analysis of effective connectivity among brain regions paper_content: In recent years, powerful general algorithms of causal inference have been developed. In particular, in the framework of Pearl’s causality, algorithms of inductive causation (IC and IC*) provide a procedure to determine which causal connections among nodes in a network can be inferred from empirical observations even in the presence of latent variables, indicating the limits of what can be learned without active manipulation of the system. These algorithms can in principle become important complements to established techniques such as Granger causality and Dynamic Causal Modeling (DCM) to analyze causal influences (effective connectivity) among brain regions. However, their application to dynamic processes has not been yet examined. Here we study how to apply these algorithms to time-varying signals such as electrophysiological or neuroimaging signals. We propose a new algorithm which combines the basic principles of the previous algorithms with Granger causality to obtain a representation of the causal relations suited to dynamic processes. Furthermore, we use graphical criteria to predict dynamic statistical dependencies between the signals from the causal structure. We show how some problems for causal inference from neural signals (e.g., measurement noise, hemodynamic responses, and time aggregation) can be understood in a general graphical approach. Focusing on the effect of spatial aggregation, we show that when causal inference is performed at a coarser scale than the one at which the neural sources interact, results strongly depend on the degree of integration of the neural sources aggregated in the signals, and thus characterize more the intra-areal properties than the interactions among regions. We finally discuss how the explicit consideration of latent processes contributes to understand Granger causality and DCM as well as to distinguish functional and effective connectivity. --- paper_title: Using Causal Discovery Algorithms to Learn About Our Planet’s Climate paper_content: Causal discovery is the process of identifying potential cause-and-effect relationships from observed data. We use causal discovery to construct networks that track interactions around the globe based on time series data of atmospheric fields, such as daily geopotential height data. The key idea is to interpret large-scale atmospheric dynamical processes as information flow around the globe and to identify the pathways of this information flow using causal discovery and graphical models. We first review the basic concepts of using causal discovery, specifically constraint-based structure learning of probabilistic graphical models. Then we report on our recent progress, including some results on anticipated changes in the climate’s network structure for a warming climate and computational advances that allow us to move to three-dimensional networks. --- paper_title: Network modelling methods for FMRI. paper_content: There is great interest in estimating brain “networks” from FMRI data. This is often attempted by identifying a set of functional “nodes” (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. 
Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. --- paper_title: De-Novo Learning of Genome-Scale Regulatory Networks in S. cerevisiae paper_content: De-novo reverse-engineering of genome-scale regulatory networks is a fundamental problem of biological and translational research. One of the major obstacles in developing and evaluating approaches for de-novo gene network reconstruction is the absence of high-quality genome-scale gold-standard networks of direct regulatory interactions. To establish a foundation for assessing the accuracy of de-novo gene network reverse-engineering, we constructed high-quality genome-scale gold-standard networks of direct regulatory interactions in Saccharomyces cerevisiae that incorporate binding and gene knockout data. Then we used 7 performance metrics to assess accuracy of 18 statistical association-based approaches for de-novo network reverse-engineering in 13 different datasets spanning over 4 data types. We found that most reconstructed networks had statistically significant accuracies. We also determined which statistical approaches and datasets/data types lead to networks with better reconstruction accuracies. While we found that de-novo reverse-engineering of the entire network is a challenging problem, it is possible to reconstruct sub-networks around some transcription factors with good accuracy. The latter transcription factors can be identified by assessing their connectivity in the inferred networks. Overall, this study provides the gene network reverse-engineering community with a rigorous assessment of the accuracy of S. cerevisiae gene network reconstruction and variability in performance of various approaches for learning both the entire network and sub-networks around transcription factors. 
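The network-modelling abstracts above single out partial correlation and constraint-based conditional-independence testing as the workhorses of association-based network estimation. As a rough illustration of those two ingredients only (my own sketch, not code from any of the cited packages or papers), in Python:

import numpy as np
from scipy import stats

def partial_correlations(data: np.ndarray) -> np.ndarray:
    # Partial correlation of each pair of variables given all remaining ones,
    # obtained by rescaling the (pseudo-)inverse of the sample covariance matrix.
    # data has shape (n_samples, n_variables).
    precision = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

def fisher_z_pvalue(r: float, n: int, cond_set_size: int) -> float:
    # Two-sided p-value for a zero (partial) correlation via the Fisher z
    # transform, the test typically plugged into PC-style skeleton searches.
    z = 0.5 * np.log((1.0 + r) / (1.0 - r))
    return 2.0 * (1.0 - stats.norm.cdf(abs(z) * np.sqrt(n - cond_set_size - 3)))

Thresholding the matrix returned by partial_correlations (or the corresponding p-values) at a chosen significance level gives an undirected conditional-independence graph of the kind compared in the studies above.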
--- paper_title: Atypical Effective Connectivity of Social Brain Networks in Individuals with Autism paper_content: Abstract Failing to engage in joint attention is a strong marker of impaired social cognition associated with autism spectrum disorder (ASD). The goal of this study was to localize the source of impaired joint attention in individuals with ASD by examining both behavioral and fMRI data collected during various tasks involving eye gaze, directional cuing, and face processing. The tasks were designed to engage three brain networks associated with social cognition [face processing, theory of mind (TOM), and action understanding]. The behavioral results indicate that even high-functioning individuals with ASD perform less accurately and more slowly than neurotypical (NT) controls when processing eyes, but not when processing a directional cue (an arrow) that did not involve eyes. Behavioral differences between the NT and ASD groups were consistent with differences in the effective connectivity of FACE, TOM, and ACTION networks. An independent multiple-sample greedy equivalence search was used to examine these... --- paper_title: Causal Discovery for Climate Research Using Graphical Models paper_content: Causal discovery seeks to recover cause‐effect relationships from statistical data using graphical models. One goal of this paper is to provide an accessible introduction to causal discovery methods for climate scientists, with a focus on constraint-based structure learning. Second, in a detailed case study constraintbased structure learning is applied to derive hypotheses of causal relationships between four prominent modes of atmospheric low-frequency variability in boreal winter including the Western Pacific Oscillation (WPO), Eastern Pacific Oscillation (EPO), Pacific‐North America (PNA) pattern, and North Atlantic Oscillation (NAO). The results are shown in the form of static and temporal independence graphs also known as Bayesian Networks. It is found that WPO and EPO are nearly indistinguishable from the cause‐ effect perspective as strong simultaneous coupling is identified between the two. In addition, changes in the state of EPO (NAO) may cause changes in the state of NAO (PNA) approximately 18 (3‐6) days later. These results are not only consistent with previous findings on dynamical processes connecting different low-frequency modes (e.g., interaction between synoptic and low-frequency eddies) but also provide the basis for formulating new hypotheses regarding the time scale and temporal sequencing of dynamical processes responsible for these connections. Last, the authors propose to use structure learning for climate networks, which are currently based primarily on correlation analysis. While correlation-based climate networks focus on similarity between nodes, independence graphs would provide an alternative viewpoint by focusing on information flow in the network. --- paper_title: A statistical problem for inference to regulatory structure from associations of gene expression measurements with microarrays paper_content: Motivation: One approach to inferring genetic regulatory structure from microarray measurements of mRNA transcript hybridization is to estimate the associations of gene expression levels measured in repeated samples. The associations may be estimated by correlation coefficients or by conditional frequencies (for discretized measurements) or by some other statistic. 
Although these procedures have been successfully applied to other areas, their validity when applied to microarray measurements has yet to be tested. Results: This paper describes an elementary statistical difficulty for all such procedures, no matter whether based on Bayesian updating, conditional independence testing, or other machine learning procedures such as simulated annealing or neural net pruning. The difficulty obtains if a number of cells from a common population are aggregated in a measurement of expression levels. Although there are special cases where the conditional associations are preserved under aggregation, in general inference of genetic regulatory structure based on conditional association is unwarranted. --- paper_title: Six problems for causal inference from fMRI paper_content: Abstract Neuroimaging (e.g. fMRI) data are increasingly used to attempt to identify not only brain regions of interest (ROIs) that are especially active during perception, cognition, and action, but also the qualitative causal relations among activity in these regions (known as effective connectivity; Friston, 1994). Previous investigations and anatomical and physiological knowledge may somewhat constrain the possible hypotheses, but there often remains a vast space of possible causal structures. To find actual effective connectivity relations, search methods must accommodate indirect measurements of nonlinear time series dependencies, feedback, multiple subjects possibly varying in identified regions of interest, and unknown possible location-dependent variations in BOLD response delays. We describe combinations of procedures that under these conditions find feed-forward sub-structure characteristic of a group of subjects. The method is illustrated with an empirical data set and confirmed with simulations of time series of non-linear, randomly generated, effective connectivities, with feedback, subject to random differences of BOLD delays, with regions of interest missing at random for some subjects, measured with noise approximating the signal to noise ratio of the empirical data. --- paper_title: Inferring causal impact using Bayesian structural time-series models paper_content: An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. In order to allocate a given budget optimally, for example, an advertiser must determine the incremental contributions that different advertising campaigns have made to web searches, product installs, or sales. This paper proposes to infer causal impact on the basis of a diffusion-regression state-space model that predicts the counterfactual market response that would have occurred had no intervention taken place. In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on synthetic data. We then demonstrate its practical utility by evaluating the effect of an online advertising campaign on search-related site visits.
We discuss the strengths and limitations of our approach in improving the accuracy of causal attribution, power analyses, and principled budget allocation. --- paper_title: Causal Stability Ranking paper_content: Genotypic causes of a phenotypic trait are typically determined via randomized controlled intervention experiments. Such experiments are often prohibitive with respect to durations and costs. We therefore consider inferring stable rankings of genes, according to their causal effects on a phenotype, from observational data only. Our method allows for efficient design and prioritization of future experiments, and due to its generality it is usable for a broad spectrum of applications. --- paper_title: From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression data paper_content: Background: The use of correlation networks is widespread in the analysis of gene expression and proteomics data, even though it is known that correlations not only confound direct and indirect associations but also provide no means to distinguish between cause and effect. For "causal" analysis typically the inference of a directed graphical model is required. However, this is rather difficult due to the curse of dimensionality. Results: We propose a simple heuristic for the statistical learning of a high-dimensional "causal" network. The method first converts a correlation network into a partial correlation graph. Subsequently, a partial ordering of the nodes is established by multiple testing of the log-ratio of standardized partial variances. This allows identifying a directed acyclic causal network as a subgraph of the partial correlation network. We illustrate the approach by analyzing a large Arabidopsis thaliana expression data set. Conclusion: The proposed approach is a heuristic algorithm that is based on a number of approximations, such as substituting lower order partial correlations by full order partial correlations. Nevertheless, for small samples and for sparse networks the algorithm not only yields sensible first order approximations of the causal structure in high-dimensional genomic data but is also computationally highly efficient. Availability and Requirements: The method is implemented in the "GeneNet" R package (version 1.2.0), available from CRAN and from http://strimmerlab.org/software/genets/. The software includes an R script for reproducing the network analysis of the Arabidopsis thaliana data. --- paper_title: Predicting causal effects in large-scale systems from observational data paper_content: Supplementary Figure 1 Comparing IDA, Lasso and Elastic-net on the five DREAM4 networks of size 10 with multifactorial data. Supplementary Table 1 Comparing IDA, Lasso and Elastic-net to random guessing on the Hughes et al. data. Supplementary Table 2 Comparing IDA, Lasso and Elastic-net to random guessing on the five DREAM4 networks of size 10, using the multifactorial data as observational data. Supplementary Methods --- paper_title: Causality: Models, Reasoning, and Inference paper_content: 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9.
Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. --- paper_title: Causal inference in statistics: An Overview paper_content: Abstract: This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called “causal effects” or “policy evaluation”) (2) queries about probabilities of counterfactuals, (including assessment of “regret,” “attribution” or “causes of effects”) and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both. --- paper_title: Structural Equations with Latent Variables paper_content: Model Notation, Covariances, and Path Analysis. Causality and Causal Models. Structural Equation Models with Observed Variables. The Consequences of Measurement Error. Measurement Models: The Relation Between Latent and Observed Variables. Confirmatory Factor Analysis. The General Model, Part I: Latent Variable and Measurement Models Combined. The General Model, Part II: Extensions. Appendices. Distribution Theory. References. Index. --- paper_title: Causal inference in statistics: An Overview paper_content: Abstract: This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. 
In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called “causal effects” or “policy evaluation”) (2) queries about probabilities of counterfactuals, (including assessment of “regret,” “attribution” or “causes of effects”) and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both. --- paper_title: Marginal Structural Models to Estimate the Causal Effect of Zidovudine on the Survival of HIV-Positive Men paper_content: Standard methods for survival analysis, such as the time-dependent Cox model, may produce biased effect estimates when there exist time-dependent confounders that are themselves affected by previous treatment or exposure. Marginal structural models are a new class of causal models the parameters of --- paper_title: Marginal structural models and causal inference in epidemiology paper_content: In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal models that allow for improved adjustment of confounding in those situations. The parameters of a marginal structural model can be consistently estimated using a new class of estimators, the inverse-probability-of-treatment weighted estimators. --- paper_title: Estimating the effect of joint interventions from observational data in sparse high-dimensional settings paper_content: We consider the estimation of joint causal effects from observational data. In particular, we propose new methods to estimate the effect of multiple simultaneous interventions (e.g., multiple gene knockouts), under the assumption that the observational data come from an unknown linear structural equation model with independent errors. We derive asymptotic variances of our estimators when the underlying causal structure is partly known, as well as high-dimensional consistency when the causal structure is fully unknown and the joint distribution is multivariate Gaussian. We also propose a generalization of our methodology to the class of nonparanormal distributions. We evaluate the estimators in simulation studies and also illustrate them on data from the DREAM4 challenge. --- paper_title: Comment: Graphical Models, Causality and Intervention paper_content: I am grateful for the opportunity to respond to these two excellent papers. Although graphical models are intuitively compelling for conceptualizing statistical associations, the scientific community generally views such models with hesitancy and suspicion. The two papers before us demonstrate the use of graphs specifically, directed acyclic graphs (DAGs) -as a mathematical tool of great versatility and thus promise to make graphical languages more common in statistical analysis. In fact, I find my own views in such close agreement with those of the authors that any attempt on my part to comment directly on their work would amount to sheer repetition. 
Instead, as the editor suggested, I would like to provide a personal perspective on current and future developments in the areas of graphical and causal modeling. A complementary account of the evolution of belief networks is given in Pearl (1993a). I will focus on the connection between graphical models and the notion of causality in statistical analysis. This connection has been treated very cautiously in the papers before us. In Lauritzen and Spiegelhalter (1988), the graphs were called "causal networks," for which the authors were criticized; they have agreed to refrain from using the word "causal." In the current paper, Spiegelhalter et al. deemphasize the causal interpretation of the arcs in favor of the "irrelevance" interpretation. I think this retreat is regrettable for two reasons: first, causal associations are the primary source of judgments about irrelevance, and, second, rejecting the causal interpretation of arcs prevents us from using graphical models for making legitimate predictions about the effect of actions. Such predictions are indispensable in applications such as treatment management and policy analysis. I would like to supplement the discussion with an account of how causal models and graphical models are related. It is generally accepted that, because they provide information about the dynamics of the system under study, causal models, regardless of how they are discovered or tested, are more useful than associational models. In other words, whereas the joint distribution --- paper_title: Causation, prediction, and search paper_content: What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. --- paper_title: Causal inference in statistics: An Overview paper_content: Abstract: This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data.
Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called “causal effects” or “policy evaluation”) (2) queries about probabilities of counterfactuals, (including assessment of “regret,” “attribution” or “causes of effects”) and (3) queries about direct and indirect effects (also known as “mediation”). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both. --- paper_title: Learning Equivalence Classes of Bayesian Networks Structures paper_content: Approaches to learning Bayesian networks from data typically combine a scoring function with a heuristic search procedure. Given a Bayesian network structure, many of the scoring functions derived in the literature return a score for the entire equivalence class to which the structure belongs. When using such a scoring function, it is appropriate for the heuristic search algorithm to search over equivalence classes of Bayesian networks as opposed to individual structures. We present the general formulation of a search space for which the states of the search correspond to equivalence classes of structures. Using this space, any one of a number of heuristic search algorithms can easily be applied. We compare greedy search performance in the proposed search space to greedy search performance in a search space for which the states correspond to individual Bayesian network structures. --- paper_title: A Characterization of Markov Equivalence Classes for Acyclic Digraphs paper_content: Undirected graphs and acyclic digraphs (ADGs), as well as their mutual extension to chain graphs, are widely used to describe dependencies among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building Bayesian networks for expert systems. Whereas the undirected graph associated with a dependence model is uniquely determined, there may, however, be many ADGs that determine the same dependence (= Markov) model. Thus, the family of all ADGs with a given set of vertices is naturally partitioned into Markov-equivalence classes, each class being associated with a unique statistical model. Statistical procedures, such as model selection or model averaging, that fail to take into account these equivalence classes, may incur substantial computational or other inefficiencies. Here it is shown that each Markov-equivalence class is uniquely determined by a single chain graph, the essential graph, that is itself simultaneously Markov equivalent to all ADGs in the equivalence class. 
Essential graphs are characterized, a polynomial-time algorithm for their construction is given, and their applications to model selection and other statistical questions are described. --- paper_title: Causal Inference using Graphical Models with the R Package pcalg paper_content: The pcalg package for R can be used for the following two purposes: Causal structure learning and estimation of causal effects from observational data. In this document, we give a brief overview of the methodology, and demonstrate the package’s functionality in both toy examples and applications. --- paper_title: PC algorithm for nonparanormal graphical models paper_content: The PC algorithm uses conditional independence tests for model selection in graphical modeling with acyclic directed graphs. In Gaussian models, tests of conditional independence are typically based on Pearson correlations, and high-dimensional consistency results have been obtained for the PC algorithm in this setting. Analyzing the error propagation from marginal to partial correlations, we prove that high-dimensional consistency carries over to a broader class of Gaussian copula or nonparanormal models when using rank-based measures of correlation. For graph sequences with bounded degree, our consistency result is as strong as prior Gaussian results. In simulations, the 'Rank PC' algorithm works as well as the 'Pearson PC' algorithm for normal data and considerably better for non-normal data, all the while incurring a negligible increase of computation time. While our interest is in the PC algorithm, the presented analysis of error propagation could be applied to other algorithms that test the vanishing of low-order partial correlations. --- paper_title: Estimating high-dimensional directed acyclic graphs with the PC-algorithm paper_content: We consider the PC-algorithm Spirtes et. al. (2000) for estimating the skeleton of a very high-dimensional acyclic directed graph (DAG) with corresponding Gaussian distribution. The PC-algorithm is computationally feasible for sparse problems with many nodes, i.e. variables, and it has the attractive property to automatically achieve high computational efficiency as a function of sparseness of the true underlying DAG. We prove consistency of the algorithm for very high-dimensional, sparse DAGs where the number of nodes is allowed to quickly grow with sample size n, as fast as O(n^a) for any 0<a<infinity. The sparseness assumption is rather minimal requiring only that the neighborhoods in the DAG are of lower order than sample size n. We empirically demonstrate the PC-algorithm for simulated data and argue that the algorithm is rather insensitive to the choice of its single tuning parameter. --- paper_title: Order-independent constraint-based causal structure learning paper_content: We consider constraint-based methods for causal structure learning, such as the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al., 1993, 2000; Richardson, 1996; Colombo et al., 2012; Claassen et al., 2013). The first step of all these algorithms consists of the adjacency search of the PC-algorithm. The PC-algorithm is known to be order-dependent, in the sense that the output can depend on the order in which the variables are given. This order-dependence is a minor issue in low-dimensional settings. We show, however, that it can be very pronounced in high-dimensional settings, where it can lead to highly variable results. 
We propose several modifications of the PC-algorithm (and hence also of the other algorithms) that remove part or all of this order-dependence. All proposed modifications are consistent in high-dimensional settings under the same conditions as their original counterparts. We compare the PC-, FCI-, and RFCI-algorithms and their modifications in simulation studies and on a yeast gene expression data set. We show that our modifications yield similar performance in low-dimensional settings and improved performance in high-dimensional settings. All software is implemented in the R-package pcalg. --- paper_title: Causation, prediction, and search paper_content: What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. --- paper_title: Causal Inference using Graphical Models with the R Package pcalg paper_content: The pcalg package for R can be used for the following two purposes: Causal structure learning and estimation of causal effects from observational data. In this document, we give a brief overview of the methodology, and demonstrate the package’s functionality in both toy examples and applications. --- paper_title: Optimal structure identification with greedy search paper_content: In this paper we prove the so-called "Meek Conjecture". In particular, we show that if a DAG H is an independence map of another DAG G, then there exists a finite sequence of edge additions and covered edge reversals in G such that (1) after each edge modification H remains an independence map of G and (2) after all modifications G =H. As shown by Meek (1997), this result has an important consequence for Bayesian approaches to learning Bayesian networks from data: in the limit of large sample size, there exists a two-phase greedy search algorithm that---when applied to a particular sparsely-connected search space---provably identifies a perfect map of the generative distribution if that perfect map is a DAG. We provide a new implementation of the search space, using equivalence classes as states, for which all operators used in the greedy search can be scored efficiently using local functions of the nodes in the domain. 
Finally, using both synthetic and real-world datasets, we demonstrate that the two-phase greedy approach leads to good solutions when learning with finite sample sizes. --- paper_title: Local causal and Markov blanket induction for causal discovery and feature selection for classification. Part II: Analysis and extensions paper_content: We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes/effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. ::: ::: Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. ::: ::: In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning. --- paper_title: Understanding consistency in hybrid causal structure learning paper_content: We consider causal structure learning from observational data. The main existing approaches can be classified as constraint-based, score-based and hybrid methods, where the latter combine aspects of both constraint-based and score-based approaches. Hybrid methods often apply a greedy search on a restricted search space, where the restricted space is estimated using a constraint-based method. The restriction on the search space is a fundamental principle of the hybrid methods and makes them computationally efficient. However, this can come at the cost of inconsistency or at least at the cost of a lack of consistency proofs. In this paper, we demonstrate such inconsistency in an explicit example. In spite of the lack of consistency results, many hybrid methods have empirically been shown to outperform consistent score-based methods such as greedy equivalence search (GES). ::: We present a consistent hybrid method, called adaptively restricted GES (ARGES). It is a modification of GES, where the restriction on the search space depends on an estimated conditional independence graph and also changes adaptively depending on the current state of the algorithm. 
Although the adaptive modification is necessary to achieve consistency in general, our empirical results show that it has a relatively minor effect. This provides an explanation for the empirical success of (inconsistent) hybrid methods. --- paper_title: The max-min hill-climbing Bayesian network structure learning algorithm paper_content: We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation MMHC outperforms on average and in terms of various metrics several prototypical and state-of-the-art algorithms, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at http://www.dsl-lab.org/supplements/mmhc_paper/mmhc_index.html. --- paper_title: Estimating high-dimensional intervention effects from observational data paper_content: We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts. For each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since that is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production. --- paper_title: Estimating the effect of joint interventions from observational data in sparse high-dimensional settings paper_content: We consider the estimation of joint causal effects from observational data. In particular, we propose new methods to estimate the effect of multiple simultaneous interventions (e.g., multiple gene knockouts), under the assumption that the observational data come from an unknown linear structural equation model with independent errors. We derive asymptotic variances of our estimators when the underlying causal structure is partly known, as well as high-dimensional consistency when the causal structure is fully unknown and the joint distribution is multivariate Gaussian. We also propose a generalization of our methodology to the class of nonparanormal distributions. 
We evaluate the estimators in simulation studies and also illustrate them on data from the DREAM4 challenge. --- paper_title: A direct method for estimating a causal ordering in a linear non-Gaussian acyclic model paper_content: Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables. In such frameworks, linear acyclic models are typically used to model the data-generating process of variables. Recently, it was shown that use of non-Gaussianity identifies a causal ordering of variables in a linear acyclic model without using any prior knowledge on the network structure, which is not the case with conventional methods. However, existing estimation methods are based on iterative search algorithms and may not converge to a correct solution in a finite number of steps. In this paper, we propose a new direct method to estimate a causal ordering based on non-Gaussianity. In contrast to the previous methods, our algorithm requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model. --- paper_title: Nonlinear causal discovery with additive noise models paper_content: The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities. --- paper_title: Order-independent constraint-based causal structure learning paper_content: We consider constraint-based methods for causal structure learning, such as the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al., 1993, 2000; Richardson, 1996; Colombo et al., 2012; Claassen et al., 2013). The first step of all these algorithms consists of the adjacency search of the PC-algorithm. The PC-algorithm is known to be order-dependent, in the sense that the output can depend on the order in which the variables are given. This order-dependence is a minor issue in low-dimensional settings. We show, however, that it can be very pronounced in high-dimensional settings, where it can lead to highly variable results. We propose several modifications of the PC-algorithm (and hence also of the other algorithms) that remove part or all of this order-dependence. All proposed modifications are consistent in high-dimensional settings under the same conditions as their original counterparts. We compare the PC-, FCI-, and RFCI-algorithms and their modifications in simulation studies and on a yeast gene expression data set. We show that our modifications yield similar performance in low-dimensional settings and improved performance in high-dimensional settings. 
All software is implemented in the R-package pcalg. --- paper_title: Identifiability of Gaussian structural equation models with equal error variances paper_content: We consider structural equation models in which variables can be written as a function of their parents and noise terms, which are assumed to be jointly independent. Corresponding to each structural equation model is a directed acyclic graph describing the relationships between the variables. In Gaussian structural equation models with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes, assuming faithfulness. In this work, we prove full identifiability in the case where all noise variables have the same variance: the directed acyclic graph can be recovered from the joint Gaussian distribution. Our result has direct implications for causal inference: if the data follow a Gaussian structural equation model with equal error variances, then, assuming that all variables are observed, the causal structure can be inferred from observational data only. We propose a statistical method and an algorithm based on our theoretical findings. --- paper_title: Predicting causal effects in large-scale systems from observational data paper_content: Supplementary Figure 1 Comparing IDA, Lasso and Elastic-net on the five DREAM4 networks of size 10 with multifactorial data. Supplementary Table 1 Comparing IDA, Lasso and Elastic-net to random guessing on the Hughes et al. data. Supplementary Table 2 Comparing IDA, Lasso and Elastic-net to random guessing on the five DREAM4 networks of size 10, using the multifactorial data as observational data. Supplementary Methods --- paper_title: Estimating high-dimensional intervention effects from observational data paper_content: We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts. For each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since that is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production. --- paper_title: Marginal structural models and causal inference in epidemiology paper_content: In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal models that allow for improved adjustment of confounding in those situations. 
The parameters of a marginal structural model can be consistently estimated using a new class of estimators, the inverse-probability-of-treatment weighted estimators. --- paper_title: Estimating the effect of joint interventions from observational data in sparse high-dimensional settings paper_content: We consider the estimation of joint causal effects from observational data. In particular, we propose new methods to estimate the effect of multiple simultaneous interventions (e.g., multiple gene knockouts), under the assumption that the observational data come from an unknown linear structural equation model with independent errors. We derive asymptotic variances of our estimators when the underlying causal structure is partly known, as well as high-dimensional consistency when the causal structure is fully unknown and the joint distribution is multivariate Gaussian. We also propose a generalization of our methodology to the class of nonparanormal distributions. We evaluate the estimators in simulation studies and also illustrate them on data from the DREAM4 challenge. --- paper_title: Estimating the effect of joint interventions from observational data in sparse high-dimensional settings paper_content: We consider the estimation of joint causal effects from observational data. In particular, we propose new methods to estimate the effect of multiple simultaneous interventions (e.g., multiple gene knockouts), under the assumption that the observational data come from an unknown linear structural equation model with independent errors. We derive asymptotic variances of our estimators when the underlying causal structure is partly known, as well as high-dimensional consistency when the causal structure is fully unknown and the joint distribution is multivariate Gaussian. We also propose a generalization of our methodology to the class of nonparanormal distributions. We evaluate the estimators in simulation studies and also illustrate them on data from the DREAM4 challenge. --- paper_title: Functional Discovery via a Compendium of Expression Profiles paper_content: Ascertaining the impact of uncharacterized perturbations on the cell is a fundamental problem in biology. Here, we describe how a single assay can be used to monitor hundreds of different cellular functions simultaneously. We constructed a reference database or "compendium" of expression profiles corresponding to 300 diverse mutations and chemical treatments in S. cerevisiae, and we show that the cellular pathways affected can be determined by pattern matching, even among very subtle profiles. The utility of this approach is validated by examining profiles caused by deletions of uncharacterized genes: we identify and experimentally confirm that eight uncharacterized open reading frames encode proteins required for sterol metabolism, cell wall function, mitochondrial respiration, or protein synthesis. We also show that the compendium can be used to characterize pharmacological perturbations by identifying a novel target of the commonly used drug dyclonine. --- paper_title: Generating realistic in silico gene networks for performance assessment of reverse engineering methods paper_content: Reverse engineering methods are typically first tested on simulated data from in silico networks, for systematic and efficient performance assessment, before an application to real biological networks. 
In this paper, we present a method for generating biologically plausible in silico networks, which allow realistic performance assessment of network inference algorithms. Instead of using random graph models, which are known to only partly capture the structural properties of biological networks, we generate network structures by extracting modules from known biological interaction networks. Using the yeast transcriptional regulatory network as a test case, we show that extracted modules have a biologically plausible connectivity because they preserve functional and structural properties of the original network. Our method was selected to generate the "gold standard" networks for the gene network reverse engineering challenge of the third DREAM conference (Dialogue on Reverse Engineering Assessment and Methods 2008, Cambridge, MA). --- paper_title: Order-independent constraint-based causal structure learning paper_content: We consider constraint-based methods for causal structure learning, such as the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al., 1993, 2000; Richardson, 1996; Colombo et al., 2012; Claassen et al., 2013). The first step of all these algorithms consists of the adjacency search of the PC-algorithm. The PC-algorithm is known to be order-dependent, in the sense that the output can depend on the order in which the variables are given. This order-dependence is a minor issue in low-dimensional settings. We show, however, that it can be very pronounced in high-dimensional settings, where it can lead to highly variable results. We propose several modifications of the PC-algorithm (and hence also of the other algorithms) that remove part or all of this order-dependence. All proposed modifications are consistent in high-dimensional settings under the same conditions as their original counterparts. We compare the PC-, FCI-, and RFCI-algorithms and their modifications in simulation studies and on a yeast gene expression data set. We show that our modifications yield similar performance in low-dimensional settings and improved performance in high-dimensional settings. All software is implemented in the R-package pcalg. --- paper_title: Causal Stability Ranking paper_content: Genotypic causes of a phenotypic trait are typically determined via randomized controlled intervention experiments. Such experiments are often prohibitive with respect to durations and costs. We therefore consider inferring stable rankings of genes, according to their causal effects on a phenotype, from observational data only. Our method allows for efficient design and prioritization of future experiments, and due to its generality it is useable for a broad spectrum of applications. --- paper_title: Predicting causal effects in large-scale systems from observational data paper_content: Supplementary Figure 1 Comparing IDA, Lasso and Elastic-net on the five DREAM4 networks of size 10 with multifactorial data. Supplementary Table 1 Comparing IDA, Lasso and Elastic-net to random guessing on the Hughes et al. data. Supplementary Table 2 Comparing IDA, Lasso and Elastic-net to random guessing on the five DREAM4 networks of size 10, using the multifactorial data as observational data. Supplementary Methods --- paper_title: Estimation of causal effects using linear non-Gaussian causal models with hidden variables paper_content: The task of estimating causal effects from non-experimental data is notoriously difficult and unreliable. 
Nevertheless, precisely such estimates are commonly required in many fields including economics and social science, where controlled experiments are often impossible. Linear causal models (structural equation models), combined with an implicit normality (Gaussianity) assumption on the data, provide a widely used framework for this task. We have recently described how non-Gaussianity in the data can be exploited for estimating causal effects. In this paper we show that, with non-Gaussian data, causal inference is possible even in the presence of hidden variables (unobserved confounders), even when the existence of such variables is unknown a priori. Thus, we provide a comprehensive and complete framework for the estimation of causal effects between the observed variables in the linear, non-Gaussian domain. Numerical simulations demonstrate the practical implementation of the proposed method, with full Matlab code available for all simulations. --- paper_title: Causal Inference in the Presence of Latent Variables and Selection Bias paper_content: We show that there is a general, informative and reliable procedure for discovering causal relations when, for all the investigator knows, both latent variables and selection bias may be at work. Given information about conditional independence and dependence relations between measured variables, even when latent variables and selection bias may be present, there are sufficient conditions for reliably concluding that there is a causal path from one variable to another, and sufficient conditions for reliably concluding when no such causal path exists. --- paper_title: Constructing Separators and Adjustment Sets in Ancestral Graphs paper_content: Ancestral graphs (AGs) are graphical causal models that can represent uncertainty about the presence of latent confounders, and can be inferred from data. Here, we present an algorithmic framework for efficiently testing, constructing, and enumerating m-separators in AGs. Moreover, we present a new constructive criterion for covariate adjustment in directed acyclic graphs (DAGs) and maximal ancestral graphs (MAGs) that characterizes adjustment sets as m-separators in a subgraph. Jointly, these results allow to find all adjustment sets that can identify a desired causal effect with multivariate exposures and outcomes in the presence of latent confounding. Our results generalize and improve upon several existing solutions for special cases of these problems. --- paper_title: Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs paper_content: The investigation of directed acyclic graphs (DAGs) encoding the same Markov property, that is the same conditional independence relations of multivariate observational distributions, has a long tradition; many algorithms exist for model selection and structure learning in Markov equivalence classes. In this paper, we extend the notion of Markov equivalence of DAGs to the case of interventional distributions arising from multiple intervention experiments. We show that under reasonable assumptions on the intervention experiments, interventional Markov equivalence defines a finer partitioning of DAGs than observational Markov equivalence and hence improves the identifiability of causal models. 
We give a graph theoretic criterion for two DAGs being Markov equivalent under interventions and show that each interventional Markov equivalence class can, analogously to the observational case, be uniquely represented by a chain graph called interventional essential graph (also known as CPDAG in the observational case). These are key insights for deriving a generalization of the Greedy Equivalence Search algorithm aimed at structure learning from interventional data. This new algorithm is evaluated in a simulation study. --- paper_title: Estimation of a Structural Vector Autoregression Model Using Non-Gaussianity paper_content: Analysis of causal effects between continuous-valued variables typically uses either autoregressive models or structural equation models with instantaneous effects. Estimation of Gaussian, linear structural equation models poses serious identifiability problems, which is why it was recently proposed to use non-Gaussian models. Here, we show how to combine the non-Gaussian instantaneous model with autoregressive models. This is effectively what is called a structural vector autoregression (SVAR) model, and thus our work contributes to the long-standing problem of how to estimate SVARs. We show that such a non-Gaussian model is identifiable without prior knowledge of network structure. We propose computationally efficient methods for estimating the model, as well as methods to assess the significance of the causal influences. The model is successfully applied on financial and brain imaging data. --- paper_title: Causal inference using invariant prediction: identification and confidence intervals paper_content: What is the difference between a prediction that is made with a causal model and that with a non-causal model? Suppose that we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments. --- paper_title: Algorithms for large scale markov blanket discovery paper_content: This paper presents a number of new algorithms for discovering the Markov Blanket of a target variable T from training data. The Markov Blanket can be used for variable selection for classification, for causal discovery, and for Bayesian Network learning.
We introduce a low-order polynomial algorithm and several variants that soundly induce the Markov Blanket under certain broad conditions in datasets with thousands of variables and compare them to other state-of-the-art local and global methods with excellent results. --- paper_title: Search for additive nonlinear time series causal models paper_content: Pointwise consistent, feasible procedures for estimating contemporaneous linear causal structure from time series data have been developed using multiple conditional independence tests, but no such procedures are available for non-linear systems. We describe a feasible procedure for learning a class of non-linear time series structures, which we call additive non-linear time series. We show that for data generated from stationary models of this type, two classes of conditional independence relations among time series variables and their lags can be tested efficiently and consistently using tests based on additive model regression. Combining results of statistical tests for these two classes of conditional independence relations and the temporal structure of time series data, a new consistent model specification procedure is able to extract relatively detailed causal information. We investigate the finite sample behavior of the procedure through simulation, and illustrate the application of this method through analysis of the possible causal connections among four ocean indices. Several variants of the procedure are also discussed. --- paper_title: Causal Discovery for Climate Research Using Graphical Models paper_content: Causal discovery seeks to recover cause‐effect relationships from statistical data using graphical models. One goal of this paper is to provide an accessible introduction to causal discovery methods for climate scientists, with a focus on constraint-based structure learning. Second, in a detailed case study constraintbased structure learning is applied to derive hypotheses of causal relationships between four prominent modes of atmospheric low-frequency variability in boreal winter including the Western Pacific Oscillation (WPO), Eastern Pacific Oscillation (EPO), Pacific‐North America (PNA) pattern, and North Atlantic Oscillation (NAO). The results are shown in the form of static and temporal independence graphs also known as Bayesian Networks. It is found that WPO and EPO are nearly indistinguishable from the cause‐ effect perspective as strong simultaneous coupling is identified between the two. In addition, changes in the state of EPO (NAO) may cause changes in the state of NAO (PNA) approximately 18 (3‐6) days later. These results are not only consistent with previous findings on dynamical processes connecting different low-frequency modes (e.g., interaction between synoptic and low-frequency eddies) but also provide the basis for formulating new hypotheses regarding the time scale and temporal sequencing of dynamical processes responsible for these connections. Last, the authors propose to use structure learning for climate networks, which are currently based primarily on correlation analysis. While correlation-based climate networks focus on similarity between nodes, independence graphs would provide an alternative viewpoint by focusing on information flow in the network. --- paper_title: Cyclic Causal Discovery from Continuous Equilibrium Data paper_content: We propose a method for learning cyclic causal models from a combination of observational and interventional equilibrium data. 
Novel aspects of the proposed method are its ability to work with continuous data (without assuming linearity) and to deal with feedback loops. Within the context of biochemical reactions, we also propose a novel way of modeling interventions that modify the activity of compounds instead of their abundance. For computational reasons, we approximate the nonlinear causal mechanisms by (coupled) local linearizations, one for each experimental condition. We apply the method to reconstruct a cellular signaling network from the flow cytometry data measured by Sachs et al. (2005). We show that our method finds evidence in the data for feedback loops and that it gives a more accurate quantitative description of the data at comparable model complexity. --- paper_title: Functional Discovery via a Compendium of Expression Profiles paper_content: Ascertaining the impact of uncharacterized perturbations on the cell is a fundamental problem in biology. Here, we describe how a single assay can be used to monitor hundreds of different cellular functions simultaneously. We constructed a reference database or "compendium" of expression profiles corresponding to 300 diverse mutations and chemical treatments in S. cerevisiae, and we show that the cellular pathways affected can be determined by pattern matching, even among very subtle profiles. The utility of this approach is validated by examining profiles caused by deletions of uncharacterized genes: we identify and experimentally confirm that eight uncharacterized open reading frames encode proteins required for sterol metabolism, cell wall function, mitochondrial respiration, or protein synthesis. We also show that the compendium can be used to characterize pharmacological perturbations by identifying a novel target of the commonly used drug dyclonine. --- paper_title: Local causal and Markov blanket induction for causal discovery and feature selection for classification. Part II: Analysis and extensions paper_content: We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes/effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. ::: ::: Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. 
Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning. --- paper_title: On Causal Discovery with Cyclic Additive Noise Models paper_content: We study a particular class of cyclic causal models, where each variable is a (possibly nonlinear) function of its parents and additive noise. We prove that the causal graph of such models is generically identifiable in the bivariate, Gaussian-noise case. We also propose a method to learn such models from observational data. In the acyclic case, the method reduces to ordinary regression, but in the more challenging cyclic case, an additional term arises in the loss function, which makes it a special case of nonlinear independent component analysis. We illustrate the proposed method on synthetic data. --- paper_title: Identification of Conditional Interventional Distributions paper_content: The subject of this paper is the elucidation of effects of actions from causal assumptions represented as a directed graph, and statistical knowledge given as a probability distribution. In particular, we are interested in predicting distributions on post-action outcomes given a set of measurements. We provide a necessary and sufficient graphical condition for the cases where such distributions can be uniquely computed from the available information, as well as an algorithm which performs this computation whenever the condition holds. Furthermore, we use our results to prove completeness of do-calculus [Pearl, 1995] for the same identification problem, and show applications to sequential decision making. --- paper_title: Learning Sparse Causal Models is not NP-hard paper_content: This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. It does not contradict any known hardness results, and does not solve an NP-hard problem: it just proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks. --- paper_title: Integrating locally learned causal structures with overlapping variables paper_content: In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there have been no such reliable procedures for distributed data with overlapping variables.
We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form and evaluate its performance using synthetic and real-world data against causal discovery algorithms for single datasets and applying Structural EM, a heuristic DAG structure learning procedure for data with missing values, to the concatenated data. --- paper_title: Order-independent constraint-based causal structure learning paper_content: We consider constraint-based methods for causal structure learning, such as the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al., 1993, 2000; Richardson, 1996; Colombo et al., 2012; Claassen et al., 2013). The first step of all these algorithms consists of the adjacency search of the PC-algorithm. The PC-algorithm is known to be order-dependent, in the sense that the output can depend on the order in which the variables are given. This order-dependence is a minor issue in low-dimensional settings. We show, however, that it can be very pronounced in high-dimensional settings, where it can lead to highly variable results. We propose several modifications of the PC-algorithm (and hence also of the other algorithms) that remove part or all of this order-dependence. All proposed modifications are consistent in high-dimensional settings under the same conditions as their original counterparts. We compare the PC-, FCI-, and RFCI-algorithms and their modifications in simulation studies and on a yeast gene expression data set. We show that our modifications yield similar performance in low-dimensional settings and improved performance in high-dimensional settings. All software is implemented in the R-package pcalg. --- paper_title: P-values for high-dimensional regression paper_content: Assigning significance in high-dimensional regression is challenging. Most computationally efficient selection algorithms cannot guard against inclusion of noise variables. Asymptotically valid p-values are not available. An exception is a recent proposal by Wasserman and Roeder (2008) which splits the data into two parts. The number of variables is then reduced to a manageable size using the first split, while classical variable selection techniques can be applied to the remaining variables, using the data from the second split. This yields asymptotic error control under minimal conditions. It involves, however, a one-time random split of the data. Results are sensitive to this arbitrary choice: it amounts to a `p-value lottery' and makes it difficult to reproduce results. Here, we show that inference across multiple random splits can be aggregated, while keeping asymptotic control over the inclusion of noise variables. We show that the resulting p-values can be used for control of both family-wise error (FWER) and false discovery rate (FDR). In addition, the proposed aggregation is shown to improve power while reducing the number of falsely selected variables substantially. --- paper_title: Learning Causal Structure from Overlapping Variable Sets paper_content: We present an algorithm name cSAT+ for learning the causal structure in a domain from datasets measuring different variable sets. The algorithm outputs a graph with edges corresponding to all possible pairwise causal relations between two variables, named Pairwise Causal Graph (PCG). 
Examples of interesting inferences include the induction of the absence or presence of some causal relation between two variables never measured together. cSAT+ converts the problem to a series of SAT problems, obtaining leverage from the efficiency of state-ofthe-art solvers. In our empirical evaluation, it is shown to outperform ION, the first algorithm solving a similar but more general problem, by two orders of magnitude. --- paper_title: Causation, prediction, and search paper_content: What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. --- paper_title: Causal Inference using Graphical Models with the R Package pcalg paper_content: The pcalg package for R can be used for the following two purposes: Causal structure learning and estimation of causal effects from observational data. In this document, we give a brief overview of the methodology, and demonstrate the package’s functionality in both toy examples and applications. --- paper_title: Transportability from Multiple Environments with Limited Experiments: Completeness Results paper_content: This paper addresses the problem of mz-transportability, that is, transferring causal knowledge collected in several heterogeneous domains to a target domain in which only passive observations and limited experimental data can be collected. The paper first establishes a necessary and sufficient condition for deciding the feasibility of mz-transportability, i.e., whether causal effects in the target domain are estimable from the information available. It further proves that a previously established algorithm for computing transport formula is in fact complete, that is, failure of the algorithm implies non-existence of a transport formula. Finally, the paper shows that the do-calculus is complete for the mz-transportability class. --- paper_title: Inferring causal impact using Bayesian structural time-series models paper_content: An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. 
In order to allocate a given budget optimally, for example, an advertiser must determine the incremental contributions that different advertising campaigns have made to web searches, product installs, or sales. This paper proposes to infer causal impact on the basis of a diffusion-regression state-space model that predicts the counterfactual market response that would have occurred had no intervention taken place. In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on synthetic data. We then demonstrate its practical utility by evaluating the effect of an online advertising campaign on search-related site visits. We discuss the strengths and limitations of our approach in improving the accuracy of causal attribution, power analyses, and principled budget allocation. --- paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables paper_content: We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg. --- paper_title: Towards integrative causal analysis of heterogeneous data sets and studies paper_content: We present methods able to predict the presence and strength of conditional and unconditional dependencies (correlations) between two variables Y and Z never jointly measured on the same samples, based on multiple data sets measuring a set of common variables. The algorithms are specializations of prior work on learning causal structures from overlapping variable sets. This problem has also been addressed in the field of statistical matching. The proposed methods are applied to a wide range of domains and are shown to accurately predict the presence of thousands of dependencies. Compared against prototypical statistical matching algorithms and within the scope of our experiments, the proposed algorithms make predictions that are better correlated with the sample estimates of the unknown parameters on test data; this is particularly the case when the number of commonly measured variables is low.
The enabling idea behind the methods is to induce one or all causal models that are simultaneously consistent with (fit) all available data sets and prior knowledge and reason with them. This allows constraints stemming from causal assumptions (e.g., Causal Markov Condition, Faithfulness) to propagate. Several methods have been developed based on this idea, for which we propose the unifying name Integrative Causal Analysis (INCA). A contrived example is presented demonstrating the theoretical potential to develop more general methods for co-analyzing heterogeneous data sets. The computational experiments with the novel methods provide evidence that causally-inspired assumptions such as Faithfulness often hold to a good degree of approximation in many real systems and could be exploited for statistical inference. Code, scripts, and data are available at www.mensxmachina.org. ---
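The constraint-based entries above (PC, FCI, RFCI and their order-independent variants) all rest on one primitive: a conditional-independence test, in the Gaussian case a Fisher z-test on a partial correlation, driving an adjacency search that deletes an edge as soon as some separating set is found. The following Python sketch is only a minimal illustration of that primitive and of a deliberately naive skeleton search (the real PC algorithm restricts conditioning sets to current adjacencies, and the pcalg package referenced above provides the production implementations); the function names, the thresholds alpha and max_cond_size, and the toy three-variable chain are illustrative assumptions, not taken from the cited work.

```python
import itertools
import numpy as np
from scipy import stats

def partial_correlation(corr, i, j, S):
    """Partial correlation of variables i and j given the index set S,
    read off the inverse of the corresponding sub-correlation matrix."""
    idx = [i, j] + list(S)
    prec = np.linalg.pinv(corr[np.ix_(idx, idx)])
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def gauss_ci_test(corr, n, i, j, S, alpha=0.01):
    """Fisher z-test; returns True if i and j are judged independent given S."""
    r = np.clip(partial_correlation(corr, i, j, S), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))        # Fisher z-transform of the partial correlation
    stat = np.sqrt(n - len(S) - 3) * abs(z)    # approximately standard normal under independence
    pval = 2 * (1 - stats.norm.cdf(stat))
    return pval > alpha

def skeleton(data, alpha=0.01, max_cond_size=2):
    """Naive PC-style adjacency search: drop the edge i-j as soon as some
    conditioning set S (here drawn from all other variables) separates them."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    adj = np.ones((p, p), dtype=bool) & ~np.eye(p, dtype=bool)
    sepset = {}
    for size in range(max_cond_size + 1):
        for i, j in itertools.combinations(range(p), 2):
            if not adj[i, j]:
                continue
            others = [k for k in range(p) if k not in (i, j)]
            for S in itertools.combinations(others, size):
                if gauss_ci_test(corr, n, i, j, S, alpha):
                    adj[i, j] = adj[j, i] = False
                    sepset[(i, j)] = S
                    break
    return adj, sepset

# Toy chain X0 -> X1 -> X2: the spurious edge X0-X2 should be removed given S = (X1,).
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = 0.8 * x0 + rng.normal(size=5000)
x2 = 0.8 * x1 + rng.normal(size=5000)
adj, sepset = skeleton(np.column_stack([x0, x1, x2]))
print(adj)      # edges X0-X1 and X1-X2 remain
print(sepset)   # expected: {(0, 2): (1,)}
```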
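The IDA-related entries above ("Estimating high-dimensional intervention effects from observational data", "Predicting causal effects in large-scale systems from observational data") exploit a basic fact about linear Gaussian structural equation models: the total causal effect of X on Y equals the coefficient of X in a regression of Y on X together with the parents of X. Because an estimated CPDAG only pins down the possible parent sets of X, IDA reports the multiset of such adjusted coefficients. The sketch below, again purely illustrative, shows only the adjustment step on simulated data from a known three-variable DAG; the edge weights and the hypothetical list of candidate parent sets are our own choices rather than output of the actual algorithm.

```python
import numpy as np

def adjusted_effect(data, x, y, parents_of_x):
    """Coefficient of x in the least-squares regression of y on (x, pa(x)).
    For a linear Gaussian SEM this equals the total causal effect of x on y
    (set to 0 by convention if y is itself a parent of x)."""
    if y in parents_of_x:
        return 0.0
    cols = [x] + list(parents_of_x)
    X = np.column_stack([data[:, cols], np.ones(len(data))])   # add an intercept
    beta, *_ = np.linalg.lstsq(X, data[:, y], rcond=None)
    return beta[0]

# Toy DAG: X0 -> X1 (0.9), X0 -> X2 (0.7), X1 -> X2 (0.5); X0 confounds X1 -> X2.
rng = np.random.default_rng(1)
n = 20_000
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + rng.normal(size=n)
x2 = 0.7 * x0 + 0.5 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

naive = adjusted_effect(data, x=1, y=2, parents_of_x=[])       # confounded, far from 0.5
adjusted = adjusted_effect(data, x=1, y=2, parents_of_x=[0])   # close to the true effect 0.5
print(naive, adjusted)

# IDA proper does not know pa(X1); it enumerates the parent sets consistent with an
# estimated CPDAG and reports the resulting multiset of adjusted coefficients:
candidate_parent_sets = [[], [0]]   # hypothetical output of the local CPDAG search
print(sorted(adjusted_effect(data, 1, 2, S) for S in candidate_parent_sets))
```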
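The marginal structural model entry above introduces inverse-probability-of-treatment weighting. A minimal sketch of the idea, restricted to a single binary point treatment (the time-varying setting that actually motivates MSMs is more involved): each unit is weighted by the inverse of the probability of the treatment it actually received given the confounder, which removes the bias of the naive difference in means. All numbers in this simulation are illustrative assumptions, and the true propensity is used directly; in practice it would be estimated, for example by logistic regression of the treatment on the confounders.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Confounder L raises both the chance of treatment A and the outcome Y.
L = rng.binomial(1, 0.5, size=n)
pA = np.where(L == 1, 0.8, 0.2)                 # P(A = 1 | L)
A = rng.binomial(1, pA)
Y = 2.0 * A + 3.0 * L + rng.normal(size=n)      # true causal effect of A on Y is 2.0

naive = Y[A == 1].mean() - Y[A == 0].mean()     # biased upward by confounding (about 3.8)

# Inverse-probability-of-treatment weights: 1 / P(A = a_i | L_i) for the observed a_i.
w = np.where(A == 1, 1.0 / pA, 1.0 / (1.0 - pA))
iptw = (np.average(Y[A == 1], weights=w[A == 1])
        - np.average(Y[A == 0], weights=w[A == 0]))

print(f"naive difference in means: {naive:.2f}")
print(f"IPTW estimate:             {iptw:.2f}")   # close to the true effect 2.0
```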
Title: A Review of Some Recent Advances in Causal Inference Section 1: Causal versus Non-causal Research Questions Description 1: This section discusses the difference between causal and non-causal research questions using hypothetical examples. Section 2: Observational versus Experimental Data Description 2: This section highlights the distinction between observational and experimental data and explains their implications on answering causal questions. Section 3: Problem Formulation Description 3: This section formulates the problem setting and outlines the necessity of moving from the observational world to the experimental world using causal assumptions. Section 4: Graph Terminology Description 4: This section defines various graph terminologies, including directed and undirected edges, paths, and directed acyclic graphs (DAGs). Section 5: Structural Equation Model (SEM) Description 5: This section provides an overview of structural equation models (SEM), their properties, and the role of DAGs in representing causal structures. Section 6: Post-intervention Distributions and Causal Effects Description 6: This section explains how interventions are represented in SEMs and defines the computation of causal effects and the total effect of interventions. Section 7: Causal Structure Learning Description 7: This section discusses methods for learning causal DAGs from observational data, including constraint-based, score-based, and hybrid methods. Section 8: Constraint-based Methods Description 8: This section explains constraint-based methods such as the PC algorithm for learning CPDAGs by exploiting conditional independence constraints. Section 9: Score-based Methods Description 9: This section covers score-based methods like the GES algorithm that search for optimally scoring DAGs. Section 10: Hybrid Methods Description 10: This section explores hybrid methods that combine constraint-based and score-based methods to learn CPDAGs. Section 11: Learning SEMs with Additional Restrictions Description 11: This section discusses approaches that impose additional restrictions on SEMs to identify causal DAGs. Section 12: IDA Description 12: This section details the IDA method for estimating the effect of single interventions when the causal structure is unknown. Section 13: JointIDA Description 13: This section explains the JointIDA method for estimating the effects of multiple simultaneous interventions. Section 14: Application Description 14: This section highlights the practical application of IDA and JointIDA methods on real data, including gene expression data. Section 15: Extensions Description 15: This section mentions various extensions of the discussed methods, such as local causal structure learning, handling hidden variables, time series data, and heterogeneous data. Section 16: Summary Description 16: This section summarizes the estimation of causal effects from observational data and emphasizes the role of randomized controlled experiments for validation.
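Sections 5 and 6 of the outline above (structural equation models and post-intervention distributions) can be made concrete with a small simulation: in a linear Gaussian SEM, the do-operator replaces the structural equation of the intervened variable by a constant, and the resulting total effect equals the sum, over directed paths, of the products of edge coefficients. The sketch below is a hedged illustration only; the graph and its edge weights are arbitrary choices, not taken from any of the cited papers.

```python
import numpy as np

# Edge weights B[j, k] for the DAG X0 -> X1 (0.6), X0 -> X2 (0.4),
# X1 -> X2 (0.8), X2 -> X3 (1.5), X1 -> X3 (-0.3), variables in topological order.
B = np.zeros((4, 4))
B[1, 0], B[2, 0], B[2, 1], B[3, 2], B[3, 1] = 0.6, 0.4, 0.8, 1.5, -0.3

def simulate(n, rng, do=None):
    """Draw n samples from X_j = sum_k B[j, k] X_k + N_j with standard normal noise.
    `do` is an optional dict {variable: value}: the structural equation of an
    intervened variable is deleted and its value held fixed (Pearl's do-operator)."""
    X = np.zeros((n, 4))
    for j in range(4):
        if do is not None and j in do:
            X[:, j] = do[j]
        else:
            X[:, j] = X @ B[j, :] + rng.normal(size=n)
    return X

rng = np.random.default_rng(3)
y_do1 = simulate(200_000, rng, do={1: 1.0})[:, 3].mean()
y_do0 = simulate(200_000, rng, do={1: 0.0})[:, 3].mean()
print(y_do1 - y_do0)                 # empirical total effect of X1 on X3

# Analytic check: paths X1 -> X3 and X1 -> X2 -> X3 give -0.3 + 0.8 * 1.5 = 0.9.
print(B[3, 1] + B[2, 1] * B[3, 2])
```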
From Logic to Biology via Physics: a survey
9
--- paper_title: Life Itself: A Comprehensive Inquiry Into the Nature, Origin, and Fabrication of Life paper_content: 1. Prolegomenon2. Strategic Considerations: The Special and the General3. Some Necessary Epistemological Considerations4. The Concept of State5. Entailment Without States: Relational Biology6. Analytic and Synthetic Models7. On Simulation8. Machines and Mechanisms9. Relational Theory of Machines10. Life Itself: The Preliminary Steps11. Relational Biology and Biology --- paper_title: Laplace, Turing and the "imitation game " impossible geometry: randomness, determinism and programs in Turing's test 1. paper_content: From the physico-mathematical view point, the imitation game between man and machine, proposed by Turing in his 1950 paper for the journal "Mind", is a game between a discrete and a continuous system. Turing stresses several times the laplacian nature of his discrete-state machine, yet he tries to show the undetectability of a functional imitation, by his machine, of a system (the brain) that, in his words, is not a discrete-state machine, as it is sensitive to limit conditions. We shortly compare this tentative imitation with Turing's mathematical modelling of morphogenesis (his 1952 paper, focusing on continuous systems, as he calls non-linear dynamics, which are sensitive to initial conditions). On the grounds of recent knowledge about dynamical systems, we show the detectability of a Turing Machine from many dynamical processes. Turing's hinted distinction between imitation and modelling is developed, jointly to a discussion on the repeatability of computational processes in relation to physical systems. The main references are of a physico-mathematical nature, but the analysis is purely conceptual. --- paper_title: Model theory: an introduction paper_content: This book is a modern introduction to model theory which stresses applications to algebra throughout the text. The first half of the book includes classical material on model construction techniques, type spaces, prime models, saturated models, countable models, and indiscernibles and their applications. The author also includes an introduction to stability theory beginning with Morley's Categoricity Theorem and concentrating on omega-stable theories. One significant aspect of this text is the inclusion of chapters on important topics not covered in other introductory texts, such as omega-stable groups and the geometry of strongly minimal sets. The author then goes on to illustrate how these ingredients are used in Hrushovski's applications to diophantine geometry. David Marker is Professor of Mathematics at the University of Illinois at Chicago. His main area of research involves mathematical logic and model theory, and their applications to algebra and geometry. This book was developed from a series of lectures given by the author at the Mathematical Sciences Research Institute in 1998. --- paper_title: A computable expression of closure to efficient causation paper_content: In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. 
An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create crucial some obstacles to computability. --- paper_title: SOME BRIDGING RESULTS AND CHALLENGES IN CLASSICAL, QUANTUM AND COMPUTATIONAL RANDOMNESS paper_content: We encountered randomness in our dierent elds of interest, as unpredictable phenomena are omnipresent in natural and articial processes. In classical physical systems (and by this we mean also relativistic ones) randomness may be dened as 'deterministic unpredictability'. That is, since Poincare's results (on the Three Body Problem) and his invention of the geometry of dynamical systems, deterministic systems include various forms of chaotic ones, from weak (mixing) systems to ones highly sensitive to border conditions, where random behaviours are part of the deterministic evolutions. Randomness got a new status with the birth of quantum mechanics: access to information on a given systems passes through a nondeterministic process (measurement). In computer sciences, randomness is at the core of algorithmic information theory, all the while nondeterministic algorithms and networks present crucial random aspects. Finally, an extensive use of randomness is made also in biology. Thus we wondered: all these dierent sciences refer to a concept of randomness, but is it really the same concept? And if they are dierent concepts, what is the relation between them? --- paper_title: Biological Phase Spaces and Enablement paper_content: This chapter analyzes, in terms of critical transitions, the phase spaces of biological dynamics. The phase space is the space where the scientific description and determination of a phenomenon is given.We first hint to the historical path that lead physics to give a central role to the construction of a sound notion of phase space, as a condition of possibility for physico-mathematical analyses to be developed. We then argue that one major aspect of biological evolution is the continual change of the pertinent phase space and the unpredictability of these changes. This analysis will be based on the role of theoretical symmetries in biology and on their critical instability along evolution. Our hypothesis deeply modifies the tools and concepts used in physical theorizing, when adapted to biology. In particular, we argue that causality has to be understood differently, and we discuss two notions to do so: differential causality and enablement. In this context constraints play a key role: on one side, they restrict possibilities, on the other, they also enable biological systems to integrate changing constraints in their organization, by correlated variations, in un-prestatable ways. This corresponds to the formation of new phenotypes and organisms. --- paper_title: Comparing Symmetries in Models and Simulations paper_content: Computer simulations brought remarkable novelties to knowledge construction. In this chapter, we first distinguish between mathematical modeling, computer implementations of these models and purely computational approaches. 
In all three cases, different answers are provided to the questions the observer may have concerning the processes under investigation. These differences will be highlighted by looking at the different theoretical symmetries of each frame. In the latter case, the peculiarities of agent-based or object oriented languages allow to discuss the role of phase spaces in mathematical analyses of physical versus biological dynamics. Symmetry breaking and randomness are finally correlated in the various contexts where they may be observed. --- paper_title: Symmetry and Symmetry Breakings in Physics paper_content: Symmetries play a major theoretical role in physics, in particular since the work by E. Noether and H. Weyl in the first half of last century. --- paper_title: Scaling and Scale Symmetries in Biological Systems paper_content: This chapter reviews experimental results showing scaling, as a fundamental form of “theoretical symmetry” in biology. Allometry and scaling are the transformations of quantitative biological observables engendered by considering organisms of different sizes and at different scales, respectively. We then analyze anatomical fractal-like structures, the latter being ubiquitous in organs’ shape, yet with a fair amount of variability. We also discuss some observed temporal fractal like structures in biological time series. In the final part, we will provide some examples of space-time and of network configurations and dynamics. --- paper_title: Phase Transitions and Renormalization Group paper_content: 1. Quantum Field Theory and Renormalization Group 2. Gaussian Expectation Values. Steepest Descent Method . 3. Universality and Continuum Limit 4. Classical Statistical Physics: One Dimension 5. Continuum Limit and Path Integral 6. Ferromagnetic Systems. Correlations 7. Phase transitions: Generalities and Examples 8. Quasi-Gaussian Approximation: Universality, Critical Dimension 9. Renormalization Group: General Formulation 10. Perturbative Renormalization Group: Explicit Calculations 11. Renormalization group: N-component fields 12. Statistical Field Theory: Perturbative Expansion 13. The sigma4 Field Theory near Dimension 4 14. The O(N) Symmetric (phi2)2 Field Theory: Large N Limit 15. The Non-Linear sigma-Model 16. Functional Renormalization Group Appendix --- paper_title: Mean field theory, the Ginzburg criterion, and marginal dimensionality of phase transitions paper_content: By applying a real space version of the Ginzburg criterion, the role of fluctuations and thence the self‐consistency of mean field theory are assessed in a simple fashion for a variety of phase transitions. It is shown that in using this approach the concept of ’’marginal dimensionality’’ emerges in a natural way. For example, it is shown that for many homogeneous structural transformations the marginal dimensionality is two, so that mean field theory will be valid for real three‐dimensional systems. It is suggested that this simple self‐consistent approach to Landau theory should be incorporated in the teaching of elementary phase transition phenomena. --- paper_title: Critical Phase Transitions paper_content: In this chapter, we first present the basic principles of a relatively new area of physics, the analysis of critical phase transitions and more generally the theory of criticality. 
Then, we will introduce some mathematical methods that set the physics of criticality on robust grounds.We will also discuss briefly some variation on the theme of criticality such as self-organized criticality, often used in theoretical approaches to biology. Following the current analyses in physics, we present them here as point-wise transitions, with respect to (usually) one control parameter. This will constitute an opening towards the approach to criticality in the following chapters seen as an “extended” phenomenon in biology, that, we propose, is ranging on a non-trivial interval of definition. --- paper_title: Renormalization group theory: Its basis and formulation in statistical physics paper_content: Lengths of both sides of each end plane of a light-intensity distribution uniformizing element receiving a light flux and having an F-number of 1 are set to ½ of those of a reflecting surface of a reflecting optical-spatial modulator element, position information of uniformed light fluxes output from the light-intensity distribution uniformizing element is Fourier-transformed into diverging angle information indicated by incident light fluxes output from a first group of lenses, a relay deformed diaphragm intercepts an interference component of each incident light flux, which is expected to interfere with an outgoing light flux, to produce asymmetric light fluxes, the asymmetric light fluxes are incident on the reflecting optical-spatial modulator element, a projection lens deformed diaphragm removes stray light from outgoing light fluxes output from the reflecting optical-spatial modulator element, and an image is displayed according to the outgoing light fluxes. --- paper_title: Theoretical principles for biology: Variation paper_content: Abstract Darwin introduced the concept that random variation generates new living forms. In this paper, we elaborate on Darwin's notion of random variation to propose that biological variation should be given the status of a fundamental theoretical principle in biology. We state that biological objects such as organisms are specific objects. Specific objects are special in that they are qualitatively different from each other. They can undergo unpredictable qualitative changes, some of which are not defined before they happen. We express the principle of variation in terms of symmetry changes, where symmetries underlie the theoretical determination of the object. We contrast the biological situation with the physical situation, where objects are generic (that is, different objects can be assumed to be identical) and evolve in well-defined state spaces. We derive several implications of the principle of variation, in particular, biological objects show randomness, historicity and contextuality. We elaborate on the articulation between this principle and the two other principles proposed in this special issue: the principle of default state and the principle of organization. --- paper_title: Theoretical principles for biology: Organization paper_content: In the search of a theory of biological organisms, we propose to adopt organization as a theoretical principle. Organization constitutes an overarching hypothesis that frames the intelligibility of biological objects, by characterizing their relevant aspects. After a succinct historical survey on the understanding of organization in the organicist tradition, we offer a specific characterization in terms of closure of constraints. 
We then discuss some implications of the adoption of organization as a principle and, in particular, we focus on how it fosters an original approach to biological stability, as well as and its interplay with variation. --- paper_title: A 2-dimensional Geometry for Biological Time paper_content: This chapter proposes a mathematical schema for describing some features of biological time. The key point is that the usual physical (linear) representation of time is insufficient, in our view, for the understanding key phenomena of life, such as rhythms, both physical (circadian, seasonal, . . . rhythms) and properly biological ones (heart beating, respiration, metabolic, . . . ). In particular, the role of biological rhythms do not seem to have a counterpart in the mathematical formalization of physical clocks, which are based on frequencies along the usual (possibly thermodynamical, thus oriented) time. We then suggest a functional representation of biological time by a 2-dimensional manifold as a mathematical frame for accommodating autonomous biological rhythms. The “visual” representation of rhythms so obtained, in particular heart beatings, will provide, by a few examples, hints towards possible applications of our approach to the understanding of interspecific differences or intraspecific pathologies. The 3-dimensional embedding space, needed for purely mathematical reasons, allows to introduce a suitable extra-dimension for “representation time”, with a cognitive significance. --- paper_title: Biological Order as a Consequence of Randomness: Anti-entropy and Symmetry Changes paper_content: In this chapter, we introduce the notion and the analysis of phenotypic complexity, as anti-entropy, proposed in [Bailly & Longo, 2009] and develop further theoretical consequences. In particular, we analyze how randomness, an essential component of biological variability, is associated to the growth of biological organization, both in evolution and in ontogenesis. Our approach, in particular, will focus on the role of global entropy production and will provide a tool for a mathematical understanding of some fundamental observations by S.J. Gould on how phenotypic complexity increases, on average, along random evolutionary paths, without a bias towards an increase. We also propose a preliminary analysis of biological regenerative processes, which allows to associate entropy production of adults to anti-entropy, by considering “collisions” between entropy and anti-entropy. Lastly, we analyze the situation in terms of theoretical symmetries, in order to further specify the biological meaning of anti-entropy as well as its strong correlations to randomness. --- paper_title: From Bottom-Up Approaches to Levels of Organization and Extended Critical Transitions paper_content: Biological thinking is structured by the notion of level of organization. We will show that this notion acquires a precise meaning in critical phenomena: they disrupt, by the appearance of infinite quantities, the mathematical (possibly equational) determination at a given level, when moving at an ``higher'' one. As a result, their analysis cannot be called genuinely bottom-up, even though it remains upward in a restricted sense. At the same time, criticality and related phenomena are very common in biology. Because of this, we claim that bottom-up approaches are not sufficient, in principle, to capture biological phenomena. 
In the second part of this paper, following the work of Francis Bailly, we discuss a strong criterion of level transition. The core idea of the criterion is to start from the breaking of the symmetries and determination at a "first" level in order to "move" at the others. If biological phenomena have multiple, sustained levels of organization in this sense, then they should be interpreted as extended critical transitions. --- paper_title: Phase Transitions and Renormalization Group paper_content: 1. Quantum Field Theory and Renormalization Group 2. Gaussian Expectation Values. Steepest Descent Method. 3. Universality and Continuum Limit 4. Classical Statistical Physics: One Dimension 5. Continuum Limit and Path Integral 6. Ferromagnetic Systems. Correlations 7. Phase transitions: Generalities and Examples 8. Quasi-Gaussian Approximation: Universality, Critical Dimension 9. Renormalization Group: General Formulation 10. Perturbative Renormalization Group: Explicit Calculations 11. Renormalization group: N-component fields 12. Statistical Field Theory: Perturbative Expansion 13. The sigma^4 Field Theory near Dimension 4 14. The O(N) Symmetric (phi^2)^2 Field Theory: Large N Limit 15. The Non-Linear sigma-Model 16. Functional Renormalization Group Appendix --- paper_title: From Physics to Biology by Extending Criticality and Symmetry Breakings paper_content: Symmetries play a major role in physics, in particular since the work by E. Noether and H. Weyl in the first half of last century. Herein, we briefly review their role by recalling how symmetry changes allow to conceptually move from classical to relativistic and quantum physics. We then introduce our ongoing theoretical analysis in biology and show that symmetries play a radically different role in this discipline, when compared to those in current physics. By this comparison, we stress that symmetries must be understood in relation to conservation and stability properties, as represented in the theories. We posit that the dynamics of biological organisms, in their various levels of organization, are not "just" processes, but permanent (extended, in our terminology) critical transitions and, thus, symmetry changes. Within the limits of a relative structural stability (or interval of viability), variability is at the core of these transitions. --- paper_title: Protention and retention in biological systems paper_content: This article proposes an abstract mathematical frame for describing some features of cognitive and biological time. We focus here on the so called “extended present” as a result of protentional and retentional activities (memory and anticipation). Memory, as retention, is treated in some physical theories (relaxation phenomena, which will inspire our approach), while protention (or anticipation) seems outside the scope of physics. We then suggest a simple functional representation of biological protention. This allows us to introduce the abstract notion of “biological inertia”. --- paper_title: Mathematics and the Natural Sciences: The Physical Singularity of Life paper_content: Introduction Incompleteness and Indetermination Space and Time in Biology Invariances and Symmetries Causality and Symmetries Extended Criticality Randomness and Determination Towards a Conclusion. --- paper_title: Biological Phase Spaces and Enablement paper_content: This chapter analyzes, in terms of critical transitions, the phase spaces of biological dynamics.
The phase space is the space where the scientific description and determination of a phenomenon is given.We first hint to the historical path that lead physics to give a central role to the construction of a sound notion of phase space, as a condition of possibility for physico-mathematical analyses to be developed. We then argue that one major aspect of biological evolution is the continual change of the pertinent phase space and the unpredictability of these changes. This analysis will be based on the role of theoretical symmetries in biology and on their critical instability along evolution. Our hypothesis deeply modifies the tools and concepts used in physical theorizing, when adapted to biology. In particular, we argue that causality has to be understood differently, and we discuss two notions to do so: differential causality and enablement. In this context constraints play a key role: on one side, they restrict possibilities, on the other, they also enable biological systems to integrate changing constraints in their organization, by correlated variations, in un-prestatable ways. This corresponds to the formation of new phenotypes and organisms. --- paper_title: Randomness and multilevel interactions in biology paper_content: The dynamic instability of living systems and the "superposition" of different forms of randomness are viewed, in this paper, as components of the contingently changing, or even increasing, organization of life through ontogenesis or evolution. To this purpose, we first survey how classical and quantum physics define randomness differently. We then discuss why this requires, in our view, an enriched understanding of the effects of their concurrent presence in biological systems' dynamics. Biological randomness is then presented not only as an essential component of the heterogeneous determination and intrinsic unpredictability proper to life phenomena, due to the nesting of, and interaction between many levels of organization, but also as a key component of its structural stability. We will note as well that increasing organization, while increasing "order", induces growing disorder, not only by energy dispersal effects, but also by increasing variability and differentiation. Finally, we discuss the cooperation between diverse components in biological networks; this cooperation implies the presence of constraints due to the particular nature of bio-entanglement and bio-resonance, two notions to be reviewed and defined in the paper. --- paper_title: Biological Order as a Consequence of Randomness: Anti-entropy and Symmetry Changes paper_content: In this chapter, we introduce the notion and the analysis of phenotypic complexity, as anti-entropy, proposed in [Bailly & Longo, 2009] and develop further theoretical consequences. In particular, we analyze how randomness, an essential component of biological variability, is associated to the growth of biological organization, both in evolution and in ontogenesis. Our approach, in particular, will focus on the role of global entropy production and will provide a tool for a mathematical understanding of some fundamental observations by S.J. Gould on how phenotypic complexity increases, on average, along random evolutionary paths, without a bias towards an increase. We also propose a preliminary analysis of biological regenerative processes, which allows to associate entropy production of adults to anti-entropy, by considering “collisions” between entropy and anti-entropy. 
Lastly, we analyze the situation in terms of theoretical symmetries, in order to further specify the biological meaning of anti-entropy as well as its strong correlations to randomness. --- paper_title: From energetics to ecosystems : the dynamics and structure of ecological systems paper_content: SECTION I.- A Process-Oriented Approach to the Multispecies Functional Response.- Homage to Yodzis and Innes 1992: Scaling up Feeding-Based Population Dynamics to Complex Ecological Networks.- Food Webs, Body Size and the Curse of the Latin Binomial.- An Energetic Framework for Trophic Control.- SECTION II.- Experimental Studies of Food Webs: Causes and Consequences of Trophic Interactions.- Interplay Between Scale, Resolution, Life History and Food Web Properties.- Heteroclinic Cycles in the Rain Forest: Insights from Complex Dynamics.- Emergence in Ecological Systems.- Dynamic Signatures of Real and Model Ecosystems.- SECTION III.- Evolutionary Branching of Single Traits.- Feedback Effects Between the Food Chain and Induced Defense Strategies.- Evolutionary Demography: The Invasion Exponent and the Effective Population Density in Nonlinear Matrix Models.- Of Experimentalists, Empiricists, and Theoreticians. --- paper_title: Modeling mammary organogenesis from biological first principles: Cells and their physical constraints paper_content: In multicellular organisms, relations among parts and between parts and the whole are contextual and interdependent. These organisms and their cells are ontogenetically linked: an organism starts as a cell that divides producing non-identical cells, which organize in tri-dimensional patterns. These association patterns and cells types change as tissues and organs are formed. This contextuality and circularity makes it difficult to establish detailed cause and effect relationships. Here we propose an approach to overcome these intrinsic difficulties by combining the use of two models; 1) an experimental one that employs 3D culture technology to obtain the structures of the mammary gland, namely, ducts and acini, and 2) a mathematical model based on biological principles. The typical approach for mathematical modeling in biology is to apply mathematical tools and concepts developed originally in physics or computer sciences. Instead, we propose to construct a mathematical model based on proper biological principles. Specifically, we use principles identified as fundamental for the elaboration of a theory of organisms, namely i) the default state of cell proliferation with variation and motility and ii) the principle of organization by closure of constraints. This model has a biological component, the cells, and a physical component, a matrix which contains collagen fibers. Cells display agency and move and proliferate unless constrained; they exert mechanical forces that i) act on collagen fibers and ii) on other cells. As fibers organize, they constrain the cells on their ability to move and to proliferate. The model exhibits a circularity that can be interpreted in terms of closure of constraints. Implementing the mathematical model shows that constraints to the default state are sufficient to explain ductal and acinar formation, and points to a target of future research, namely, to inhibitors of cell proliferation and motility generated by the epithelial cells. 
The success of this model suggests a step-wise approach whereby additional constraints imposed by the tissue and the organism could be examined in silico and rigorously tested by in vitro and in vivo experiments, in accordance with the organicist perspective we embrace. --- paper_title: From Physics to Biology by Extending Criticality and Symmetry Breakings paper_content: Symmetries play a major role in physics, in particular since the work by E. Noether and H. Weyl in the first half of last century. Herein, we briefly review their role by recalling how symmetry changes allow to conceptually move from classical to relativistic and quantum physics. We then introduce our ongoing theoretical analysis in biology and show that symmetries play a radically different role in this discipline, when compared to those in current physics. By this comparison, we stress that symmetries must be understood in relation to conservation and stability properties, as represented in the theories. We posit that the dynamics of biological organisms, in their various levels of organization, are not "just" processes, but permanent (extended, in our terminology) critical transitions and, thus, symmetry changes. Within the limits of a relative structural stability (or interval of viability), variability is at the core of these transitions. --- paper_title: Critique of Pure Reason paper_content: ion of space and attend merely to the act by which we determine the internal sense according to its form, is that which produces the conception of succession. The understanding, therefore, does by no means find in the internal sense any such synthesis of the manifold, but produces it, in that it affects this sense. At the same time, how “I who think” is distinct from the “I” which intuites itself (other modes of intuition being cogitable as at least possible), and yet one and the same with this latter as the same subject; how, therefore, I am able to say: “I, as an intelligence and thinking subject, cognize myself as an object thought, so far as I am, moreover, given to myself in intuition– only, like other phenomena, not as I am in myself, and as considered by the understanding, but merely as I appear”– is a question that has in it neither more nor less difficulty than the question– “How can I be an object to myself?” or this– “How I can be an object of my own intuition and internal perceptions?” But that such must be the fact, if we admit that space is merely a pure form of the phenomena of external sense, can be clearly proved by the consideration that we cannot represent time, which is not an object of external intuition, in any other way than under the image of a line, which we draw in thought, a mode of representation without which we could not cognize the unity of its dimension, and also that we are necessitated to take our determination of periods of time, or of points of time, for all our internal perceptions from the changes which we perceive in outward things. It follows that we must arrange the determinations of the internal sense, as phenomena in time, exactly in the same manner as we arrange those of the external senses in space. 
And consequently, if we grant, respecting this latter, that by means of them we know objects only in so far as we are affected externally, we must also confess, with regard to the internal sense, that by means of it we intuite ourselves only as we are internally affected by ourselves; in other words, as regards internal intuition, we cognize our own subject only as phenomenon, and not as it is in itself.[23] [22] Motion of an object in space does not belong to a pure science, consequently not to geometry; because, that a thing is movable cannot be known a priori, but only from experience. But motion, considered as the description of a space, is a pure act of the successive synthesis of the manifold in external intuition by means of productive imagination, and belongs not only to geometry, but even to transcendental philosophy. [23] I do not see why so much difficulty should be found in admitting that our internal sense is affected by ourselves. Every act of attention exemplifies it. In such an act the understanding determines the internal sense by the synthetical conjunction which it cogitates, conformably to the internal intuition which corresponds to the manifold in the synthesis of the understanding. How much the mind is usually affected thereby every one will be able to perceive in himself. --- paper_title: Symmetry and Symmetry Breakings in Physics paper_content: Symmetries play a major theoretical role in physics, in particular since the work by E. Noether and H. Weyl in the first half of last century. ---
Title: From Logic to Biology via Physics: a survey Section 1: Introduction Description 1: This section provides an analysis of the interdisciplinary approach necessary to gain insights into biological phenomena, drawing particularly from physical theorizing. Section 2: A Definition of Life? Description 2: This section discusses the challenges of defining life, emphasizing operational notions over ideal definitions and exploring the role of evolutionary theory. Section 3: Symmetry breakings and randomness Description 3: This section examines how symmetry changes and randomness in biological systems compare to those in physical theories, and their relevance to biological coherence structures. Section 4: Symmetries and theoretical extensions of physical theories Description 4: This section highlights the differences in the role of symmetries between biological and physical theories and introduces the concept of variability as a primary invariant in biology. Section 5: Extended criticality Description 5: This section proposes the concept of extended critical transitions to analyze biological variability and coherence structures beyond pointwise criticality in physics. Section 6: More on critical phase transitions in physics Description 6: This section delves deeper into the properties of critical phase transitions in physics and their theoretical significance. Section 7: Variability and stability Description 7: This section discusses how symmetry changes and variability relate to structural stability and autonomy in biological organisms, despite their inherent instability. Section 8: Remarks on reductionism and renormalization Description 8: This section addresses the complexity of biological phenomena and critiques the reductionist approach, suggesting that biological systems may require new theoretical frameworks beyond existing physical theories. Section 9: Phase spaces and enablement Description 9: This section explores the limits of physical methods for describing biological phase spaces and introduces enablement as a unique form of causation in biological processes.
Applications of Trajectory Data in Transportation: Literature Review and Maryland Case Study
16
--- paper_title: ESTIMATING ORIGIN-DESTINATION MATRICES FROM ROADSIDE SURVEY DATA paper_content: The purpose of this study is to develop a valid and efficient method for estimating origin-destination tables from roadside survey data. Roadside surveys, whether conducted by interviews or postcard mailback methods, typically have in common the sampling of trip origin and destination information at survey stations. These survey stations are generally located where roads cross "screenlines," which are imaginary barriers drawn to intercept the trip types of interest. Such surveys also include counts of traffic volumes, by which the partial origin-destination (O-D) tables obtained at the different stations can be expanded and combined to obtain the complete O-D table which represents travel throughout the entire study area. The procedure used to expand the sample O-D information from the survey stations must recognize and deal appropriately with a number of problems: (i) The "double counting" problem: Long-distance trips may pass through more than one survey station location; thus certain trips have the possibility of being sampled and expanded more than once, leading to a potentially serious overrepresentation of long-distance trips in the complete expanded trip table. (ii) The "leaky screenline" problem: Some route choices, particularly those using very lightly traveled roads, may miss the survey stations entirely, leading to an underestimation of certain O-D patterns, or to distorted estimates if such sites are arbitrarily coupled with actual nearby station locations. (iii) The efficient use of the data: There is a need to adjust expansion factors to compensate for double counting and leaky screenlines. How can this be accomplished such that all of the data obtained at the stations are used without loss of information? (iv) The consequences of uncertainty and unknown travel behavior: Since the O-D data and other sampled variables are subject to random error, and since in general the probability of encountering a long-distance trip at some survey stations is affected by traveler route-choice behavior, which is not understood, the sample expansion procedure must rely on the use of erroneous input data and questionable assumptions. The preferred procedure must minimize, rather than amplify, the effects of such input errors. Here, five alternate methods for expanding roadside survey data in an unbiased manner are proposed and evaluated. In all cases, it is assumed that traveler route choice generally follows the pattern described by Dial's multipath assignment method. All methods are applied to a simple hypothetical network in order to examine their efficiency and error amplification properties. The evaluation of the five methods reveals that their performance properties vary considerably and that no single method is best in all circumstances. A microcomputer program has been provided as a tool to facilitate comparison among methods and to select the most appropriate expansion method for a particular application. --- paper_title: The most likely trip matrix estimated from traffic counts paper_content: For a large number of applications conventional methods for estimating an origin destination matrix become too expensive to use. Two models, based on information minimisation and entropy maximisation principles, have been developed by the authors to estimate an O-D matrix from traffic counts. The models assume knowledge of the paths followed by the vehicles over the network.
The models then use the traffic counts to estimate the most likely O-D matrix consistent with the link volumes available and any prior information about the trip matrix. Both models can be used to update and improve a previous O-D matrix. An algorithm to find a solution to the model is then described. The models have been tested with artificial data and performed reasonably well. Further research is being carried out to validate the models with real data. --- paper_title: Development of origin–destination matrices using mobile phone call data paper_content: In this research, we propose a methodology to develop OD matrices using mobile phone Call Detail Records (CDR) and limited traffic counts. CDR, which consist of time stamped tower locations with caller IDs, are analyzed first and trips occurring within certain time windows are used to generate tower-to-tower transient OD matrices for different time periods. These are then associated with corresponding nodes of the traffic network and converted to node-to-node transient OD matrices. The actual OD matrices are derived by scaling up these node-to-node transient OD matrices. An optimization based approach, in conjunction with a microscopic traffic simulation platform, is used to determine the scaling factors that result in the best matches with the observed traffic counts. The methodology is demonstrated using CDR from 2.87 million users of Dhaka, Bangladesh over a month and traffic counts from 13 key locations over 3 days of that month. The applicability of the methodology is supported by a validation study. --- paper_title: ESTIMATION OF AN ORIGIN-DESTINATION MATRIX WITH RANDOM LINK CHOICE PROPORTIONS: A STATISTICAL APPROACH paper_content: The use of statistical modeling in the estimation of Origin-Destination (OD) matrix from traffic counts is reviewed. In particular, statistical models that consider explicitly the presence of measurement and sampling errors in the observed link flows are discussed. This paper proposes treating the link choice proportions as random variables. Accordingly, new statistical models are formulated and the corresponding Maximum Likelihood Estimator and Bayesian Estimator of the OD matrix are developed. The accuracies of these estimators are compared with those obtained by previous methods. --- paper_title: Nonresponse Rates and Nonresponse Bias in Household Surveys paper_content: Many surveys of the U.S. household population are experiencing higher refusal rates. Nonresponse can, but need not, induce nonresponse bias in survey estimates. Recent empirical findings illustrate cases when the linkage between nonresponse rates and nonresponse biases is absent. Despite this, professional standards continue to urge high response rates. Statistical expressions of nonresponse bias can be translated into causal models to guide hypotheses about when nonresponse causes bias. Alternative designs to measure nonresponse bias exist, providing different but incomplete information about the nature of the bias. A synthesis of research studies estimating nonresponse bias shows that the bias is often present. A logical question at this moment in history is what advantage probability sample surveys have if they suffer from high nonresponse rates. Since postsurvey adjustment for nonresponse requires auxiliary variables, the answer depends on the nature of the design and the quality of the auxiliary variables.
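The two entries above describe estimating the most likely O-D matrix consistent with observed link counts (information-minimisation/entropy-maximisation) and, in the CDR study, scaling transient matrices until assigned flows match counts. The Python sketch below only illustrates the shared core idea of iteratively rescaling a prior matrix toward observed link volumes; the function name, the array layout, and the usage-weighted averaging of correction factors are simplifying assumptions, not the estimators of the cited papers.

```python
import numpy as np

def adjust_od_to_counts(prior_od, link_use, counts, n_iter=200, tol=1e-3):
    """Iteratively rescale a prior O-D matrix so that the link flows it
    implies move toward observed link counts (illustrative heuristic only).

    prior_od : (Z, Z) prior trip matrix, e.g., an outdated survey matrix.
    link_use : (L, Z, Z) share of i->j trips using each counted link,
               assumed known from a route-choice/assignment model.
    counts   : (L,) observed link volumes.
    """
    od = prior_od.astype(float).copy()
    for _ in range(n_iter):
        implied = np.einsum('lij,ij->l', link_use, od)   # flow implied on each counted link
        if np.max(np.abs(implied - counts)) < tol:
            break
        ratio = np.where(implied > 0, counts / np.maximum(implied, 1e-12), 1.0)
        usage = link_use.sum(axis=0)                     # how strongly each O-D pair is observed
        weighted = np.einsum('lij,l->ij', link_use, ratio)
        factor = np.where(usage > 0, weighted / np.maximum(usage, 1e-12), 1.0)
        od *= factor                                     # pairs unseen by any counter keep their prior
    return od
```

O-D pairs that never cross a counted link keep their prior values, which mirrors the role of prior information stressed in the entropy-maximisation entry.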
--- paper_title: Feasibility and Benefits of Advanced Four-Step and Activity-Based Travel Demand Models for Maryland paper_content: This research identifies current and emerging transportation planning policy issues faced by the Maryland State Highway Administration (SHA) and other State agencies. After reviewing the recently-developed Maryland Statewide Transportation Model (MSTM), the University of Maryland (UMD) research team summarizes the capability of the MSTM to analyze these planning and policy issues and prioritizes various model improvement needs. Based on identified planning and policy analysis needs, a thorough synthesis of existing transportation data sources, and proposed MSTM applications in Maryland, a Strategic MSTM Improvement Plan (SMIP) is developed to guide short- and long-run MSTM improvement and applications. This research also explores the feasibility of incorporating departure time choices into the MSTM by developing a prototype time-of-day choice model. --- paper_title: The path most traveled: Travel demand estimation using big data resources paper_content: Abstract Rapid urbanization is placing increasing stress on already burdened transportation infrastructure. Ubiquitous mobile computing and the massive data it generates presents new opportunities to measure the demand for this infrastructure, diagnose problems, and plan for the future. However, before these benefits can be realized, methods and models must be updated to integrate these new data sources into existing urban and transportation planning frameworks for estimating travel demand and infrastructure usage. While recent work has made great progress extracting valid and useful measurements from new data resources, few present end-to-end solutions that transform and integrate raw, massive data into estimates of travel demand and infrastructure performance. Here we present a flexible, modular, and computationally efficient software system to fill this gap. Our system estimates multiple aspects of travel demand using call detail records (CDRs) from mobile phones in conjunction with open- and crowdsourced geospatial data, census records, and surveys. We bring together numerous existing and new algorithms to generate representative origin–destination matrices, route trips through road networks constructed using open and crowd-sourced data repositories, and perform analytics on the system’s output. We also present an online, interactive visualization platform to communicate these results to researchers, policy makers, and the public. We demonstrate the flexibility of this system by performing analyses on multiple cities around the globe. We hope this work will serve as unified and comprehensive guide to integrating new big data resources into customary transportation demand modeling. --- paper_title: Learning travel recommendations from user-generated GPS traces paper_content: The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). 
Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness. --- paper_title: Estimation of Statewide Origin–Destination Truck Flows From Large Streams of GPS Data: Application for Florida Statewide Model paper_content: This paper demonstrates the use of large streams of truck GPS data from the American Transportation Research Institute for the estimation of statewide freight truck flows in Florida. Raw GPS data streams, which comprised more than 145 million GPS records, were used to derive a database of more than 1.2 million truck trips that started or ended in Florida. The paper sheds light on the extent to which the trips derived from the GPS data captured the observed truck traffic flows in Florida. The paper includes insights into (a) the truck type composition, (b) the proportion of the observed truck traffic flows covered by the data, and (c) the geographical differences in the coverage. The paper applies origin–destination (O-D) matrix estimation to combine the GPS data with the observed truck traffic volumes at different locations within and outside Florida to derive an O-D table of truck flows within, into, and out of the state. The procedures, implementation details, and findings discussed in the paper are exp... --- paper_title: Development of origin–destination matrices using mobile phone call data paper_content: Abstract In this research, we propose a methodology to develop OD matrices using mobile phone Call Detail Records (CDR) and limited traffic counts. CDR, which consist of time stamped tower locations with caller IDs, are analyzed first and trips occurring within certain time windows are used to generate tower-to-tower transient OD matrices for different time periods. These are then associated with corresponding nodes of the traffic network and converted to node-to-node transient OD matrices. The actual OD matrices are derived by scaling up these node-to-node transient OD matrices. An optimization based approach, in conjunction with a microscopic traffic simulation platform, is used to determine the scaling factors that result best matches with the observed traffic counts. The methodology is demonstrated using CDR from 2.87 million users of Dhaka, Bangladesh over a month and traffic counts from 13 key locations over 3 days of that month. The applicability of the methodology is supported by a validation study. 
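As a minimal sketch of the first step in the mobile phone O-D entry above (turning time-stamped tower observations into tower-to-tower transient O-D counts), the snippet below is one possible reading. The one-hour gap rule, the hour-of-day keying, and the record layout are assumptions for illustration; the subsequent scaling of these transient matrices against traffic counts via simulation-based optimization is not shown.

```python
from collections import defaultdict
from datetime import timedelta

def transient_od_from_cdr(records, max_gap_hours=1.0):
    """Aggregate call detail records into tower-to-tower transient O-D counts.

    records : iterable of (user_id, timestamp, tower_id) with datetime timestamps.
    A move between two different towers observed within max_gap_hours is
    counted as one transient trip (an illustrative rule, not the cited one).
    Returns {(hour_of_day, origin_tower, destination_tower): count}.
    """
    by_user = defaultdict(list)
    for user_id, ts, tower in records:
        by_user[user_id].append((ts, tower))

    od = defaultdict(int)
    gap = timedelta(hours=max_gap_hours)
    for observations in by_user.values():
        observations.sort(key=lambda obs: obs[0])        # chronological order per user
        for (t0, a), (t1, b) in zip(observations, observations[1:]):
            if a != b and (t1 - t0) <= gap:
                od[(t1.hour, a, b)] += 1
    return od
```

Scaling these transient counts up to actual trips would then follow count-matching logic of the kind sketched after the entropy-maximisation entry.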
--- paper_title: Building practical origin-destination (od/trip) matrices from automatically collected GPS data paper_content: Origin-destination matrices are key elements in transport models; however, they are costly to collect and process. Typical survey methods such as roadside interviews, or car park surveys, are expensive and disruptive. The instrument can introduce bias, by travellers avoiding the interview sites. Increasingly, authorities refuse to have live surveys on their roads due to Health and Safety concerns. Other methods for collecting origins and destinations, such as Automatic Number Plate Recognition and mailbacks to vehicle owners, suffer from privacy concerns. However, data sources exist that are routinely collected, which contain inherent information on origins and destinations. These are the locational points of GPS Tracker services, produced by vehicles fitted with advanced GPS devices and collected by organisations like TrafficMaster and iTIS. As the vehicles poll every few seconds, a huge amount of travel data is in principle available, including origins and destinations of the equipped vehicles. This data has successfully been used in the past to generate speed maps for whole regions; the increasing rate of take up of these devices means that coverage and sample sizes are now such that more detailed analyses are possible. The authors describe five applications where they have used commercially available, automatically collected GPS data to derive origin destination trip matrices. These are: 1) developing HGV origin destination matrices; 2) investigation of ODs and routing patterns for evidence against a proposed Motorway Service Area; 3) trip distribution patterns of short trips (i.e. junction hopping) and through trips using the urban motorway network; 4) roundabout turning proportions study; and, 5) roadside interviews (RSI) stability analyses between different survey years. The applications give an insight into data quality and sample sizes, resulting in the ability to advise on the applicability of automatically collected GPS data for the creation of OD trip matrices. The authors are currently investigating whether and how additional travel characteristics can be determined, for example trip purpose (from land uses at origin and destination, or from the timing and regularity of the trip). --- paper_title: Time-evolving O-D matrix estimation using high-speed GPS data streams paper_content: We proposed a data driven method to incrementally build O-D matrices. The Regions of Interest on this matrix are defined using spatial clustering. The sample's attributes are discretized using a novel multidimensional hierarchy. The target's p.d.f. is approximated using multidimensional histograms. This methodology is validated using 1 million real-world trips. Portable digital devices equipped with GPS antennas are ubiquitous sources of continuous information for location-based Expert and Intelligent Systems. The availability of these traces of human mobility patterns is growing explosively. To mine this data is a fascinating challenge which can produce a big impact on both travelers and transit agencies. This paper proposes a novel incremental framework to maintain statistics on the urban mobility dynamics over a time-evolving origin-destination (O-D) matrix. The main motivation behind such a task is to be able to learn from the location-based samples which are continuously being produced, independently of their source, dimensionality or (high) communication rate.
By doing so, the authors aimed to obtain a generalist framework capable of summarizing relevant context-aware information which is able to follow, as closely as possible, the stochastic dynamics of human mobility behavior. Its potential impact ranges across Expert Systems for decision support in multiple industries, from demand estimation for public transportation planning to travel time prediction for intelligent routing systems, among others. The proposed methodology settles on three steps: (i) Half-Space trees are used to divide the city area into dense subregions of equal mass. The uncovered regions form an O-D matrix which can be updated by transforming the trees' leaves into conditional nodes (and vice-versa). The (ii) Partitioning Incremental Algorithm is then employed to discretize the target variable's historical values on each matrix cell. Finally, a (iii) dimensional hierarchy is defined to discretize the domains of the independent variables depending on the cell's samples. A Taxi Network running in a mid-sized city in Portugal was selected as a case study. The Travel Time Estimation (TTE) problem was regarded as a real-world application. Experiments using one million data samples were conducted to validate the methodology. The results obtained highlight the straightforward contribution of this method: it is capable of resisting the drift while still approximating context-aware solutions through a multidimensional discretization of the feature space. It is a step ahead in estimating the real-time mobility dynamics, regardless of its application field. --- paper_title: Origin–destination trips by purpose and time of day inferred from mobile phone data paper_content: In this work, the authors present methods to estimate average daily origin–destination trips from triangulated mobile phone records of millions of anonymized users. These records are first converted into clustered locations at which users engage in activities for an observed duration. These locations are inferred to be home, work, or other depending on observation frequency, day of week, and time of day, and represent a user's origins and destinations. Since the arrival time and duration at these locations reflect the observed (based on phone usage) rather than true arrival time and duration of a user, the authors probabilistically infer departure time using survey data on trips in major US cities. Trips are then constructed for each user between two consecutive observations in a day. These trips are multiplied by expansion factors based on the population of a user's home Census Tract and divided by the number of days on which the authors observed the user, distilling average daily trips. Aggregating individuals' daily trips by Census Tract pair, hour of the day, and trip purpose results in trip matrices that form the basis for much of the analysis and modeling that inform transportation planning and investments. The applicability of the proposed methodology is supported by validation against the temporal and spatial distributions of trips reported in local and national surveys. --- paper_title: Urban traffic modelling and prediction using large scale taxi GPS traces paper_content: Monitoring, predicting and understanding traffic conditions in a city is an important problem for city planning and environmental monitoring. GPS-equipped taxis can be viewed as pervasive sensors and the large-scale digital traces produced allow us to have a unique view of the underlying dynamics of a city's road network.
In this paper, we propose a method to construct a model of traffic density based on large scale taxi traces. This model can be used to predict future traffic conditions and estimate the effect of emissions on the city's air quality. We argue that considering traffic density on its own is insufficient for a deep understanding of the underlying traffic dynamics, and hence propose a novel method for automatically determining the capacity of each road segment. We evaluate our methods on a large scale database of taxi GPS logs and demonstrate their outstanding performance. --- paper_title: Activity-Based Human Mobility Patterns Inferred from Mobile Phone Data: A Case Study of Singapore paper_content: In this study, with Singapore as an example, we demonstrate how we can use mobile phone call detail record (CDR) data, which contains millions of anonymous users, to extract individual mobility networks comparable to the activity-based approach. Such an approach is widely used in the transportation planning practice to develop urban micro simulations of individual daily activities and travel; yet it depends highly on detailed travel survey data to capture individual activity-based behavior. We provide an innovative data mining framework that synthesizes the state-of-the-art techniques in extracting mobility patterns from raw mobile phone CDR data, and design a pipeline that can translate the massive and passive mobile phone records to meaningful spatial human mobility patterns readily interpretable for urban and transportation planning purposes. With growing ubiquitous mobile sensing, and shrinking labor and fiscal resources in the public sector globally, the method presented in this research can be used as a low-cost alternative for transportation and planning agencies to understand the human activity patterns in cities, and provide targeted plans for future sustainable development. --- paper_title: Unveiling the complexity of human mobility by querying and mining massive trajectory data paper_content: The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people's travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. 
M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability. --- paper_title: Unravelling daily human mobility motifs paper_content: Human mobility is differentiated by time scales. While the mechanism for long time scales has been studied, the underlying mechanism on the daily scale is still unrevealed. Here, we uncover the mechanism responsible for the daily mobility patterns by analysing the temporal and spatial trajectories of thousands of persons as individual networks. Using the concept of motifs from network theory, we find only 17 unique networks are present in daily mobility and they follow simple rules. These networks, called here motifs, are sufficient to capture up to 90 per cent of the population in surveys and mobile phone datasets for different countries. Each individual exhibits a characteristic motif, which seems to be stable over several months. Consequently, daily human mobility can be reproduced by an analytically tractable framework for Markov chains by modelling periods of high-frequency trips followed by periods of lower activity as the key ingredient. --- paper_title: Trajectory Data Mining: An Overview paper_content: The advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data, which represent the mobility of a diversity of moving objects, such as people, vehicles, and animals. Many techniques have been proposed for processing, managing, and mining trajectory data in the past decade, fostering a broad range of applications. In this article, we conduct a systematic survey on the major research into trajectory data mining, providing a panorama of the field as well as the scope of its research topics. Following a road map from the derivation of trajectory data, to trajectory data preprocessing, to trajectory data management, and to a variety of mining tasks (such as trajectory pattern mining, outlier detection, and trajectory classification), the survey explores the connections, correlations, and differences among these existing techniques. This survey also introduces the methods that transform trajectories into other data formats, such as graphs, matrices, and tensors, to which more data mining and machine learning techniques can be applied. Finally, some public trajectory datasets are presented. This survey can help shape the field of trajectory data mining, providing a quick understanding of this field to the community. --- paper_title: Time-evolving O-D matrix estimation using high-speed GPS data streams paper_content: We proposed a data driven method to incrementally build O-D matrices.The Regions of Interest on this matrix are defined using spatial clustering.The sample's attributes are discretized using a novel multidimensional hierarchy.The target's p.d.f. is approximated using multidimensional histograms.This methodology is validated using 1 million of real-world trips. Portable digital devices equipped with GPS antennas are ubiquitous sources of continuous information for location-based Expert and Intelligent Systems. 
The availability of these traces on the human mobility patterns is growing explosively. To mine this data is a fascinating challenge which can produce a big impact on both travelers and transit agencies.This paper proposes a novel incremental framework to maintain statistics on the urban mobility dynamics over a time-evolving origin-destination (O-D) matrix. The main motivation behind such task is to be able to learn from the location-based samples which are continuously being produced, independently on their source, dimensionality or (high) communicational rate. By doing so, the authors aimed to obtain a generalist framework capable of summarizing relevant context-aware information which is able to follow, as close as possible, the stochastic dynamics on the human mobility behavior. Its potential impact ranges Expert Systems for decision support across multiple industries, from demand estimation for public transportation planning till travel time prediction for intelligent routing systems, among others.The proposed methodology settles on three steps: (i) Half-Space trees are used to divide the city area into dense subregions of equal mass. The uncovered regions form an O-D matrix which can be updated by transforming the trees'leaves into conditional nodes (and vice-versa). The (ii) Partioning Incremental Algorithm is then employed to discretize the target variable's historical values on each matrix cell. Finally, a (iii) dimensional hierarchy is defined to discretize the domains of the independent variables depending on the cell's samples.A Taxi Network running on a mid-sized city in Portugal was selected as a case study. The Travel Time Estimation (TTE) problem was regarded as a real-world application. Experiments using one million data samples were conducted to validate the methodology. The results obtained highlight the straightforward contribution of this method: it is capable of resisting to the drift while still approximating context-aware solutions through a multidimensional discretization of the feature space. It is a step ahead in estimating the real-time mobility dynamics, regardless of its application field. --- paper_title: Urban traffic modelling and prediction using large scale taxi GPS traces paper_content: Monitoring, predicting and understanding traffic conditions in a city is an important problem for city planning and environmental monitoring. GPS-equipped taxis can be viewed as pervasive sensors and the large-scale digital traces produced allow us to have a unique view of the underlying dynamics of a city's road network. In this paper, we propose a method to construct a model of traffic density based on large scale taxi traces. This model can be used to predict future traffic conditions and estimate the effect of emissions on the city's air quality. We argue that considering traffic density on its own is insufficient for a deep understanding of the underlying traffic dynamics, and hence propose a novel method for automatically determining the capacity of each road segment. We evaluate our methods on a large scale database of taxi GPS logs and demonstrate their outstanding performance. --- paper_title: Using Bluetooth to track mobility patterns: depicting its potential based on various case studies paper_content: During the past years the interest in the exploitation of mobility information has increased significantly. A growing number of companies and research institutions are interested in the analysis of mobility data with demand of a high level of spatial detail. 
Means of tracking persons in our environment can nowadays be fulfilled by utilizing several technologies, for example the Bluetooth technology, offering means to obtain movement data. This paper gives an overview of four case studies in the field of Bluetooth tracking which were conducted in order to provide helpful insights on movement aspects for decision makers in their specific microcosm. Aim is to analyse spatio-temporal validity of Bluetooth tracking, and in doing so, to describe the potential of Bluetooth in pedestrian mobility mining. --- paper_title: Visually exploring movement data via similarity-based analysis paper_content: Data analysis and knowledge discovery over moving object databases discovers behavioral patterns of moving objects that can be exploited in applications like traffic management and location-based services. Similarity search over trajectories is imperative for supporting such tasks. Related works in the field, mainly inspired from the time-series domain, employ generic similarity metrics that ignore the peculiarity and complexity of the trajectory data type. Aiming at providing a powerful toolkit for analysts, in this paper we propose a framework that provides several trajectory similarity measures, based on primitive (space and time) as well as on derived parameters of trajectories (speed, acceleration, and direction), which quantify the distance between two trajectories and can be exploited for trajectory data mining, including clustering and classification. We evaluate the proposed similarity measures through an extensive experimental study over synthetic (for measuring efficiency) and real (for assessing effectiveness) trajectory datasets. In particular, the latter could serve as an iterative, combinational knowledge discovery methodology enhanced with visual analytics that provides analysts with a powerful tool for "hands-on" analysis for trajectory data. --- paper_title: Learning travel recommendations from user-generated GPS traces paper_content: The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. 
Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness. --- paper_title: Discovering urban activity patterns in cell phone data paper_content: Massive and passive data such as cell phone traces provide samples of the whereabouts and movements of individuals. These are a potential source of information for models of daily activities in a city. The main challenge is that phone traces have low spatial precision and are sparsely sampled in time, which requires a precise set of techniques for mining hidden valuable information they contain. Here we propose a method to reveal activity patterns that emerge from cell phone data by analyzing relational signatures of activity time, duration, and land use. First, we present a method of how to detect stays and extract a robust set of geolocated time stamps that represent trip chains. Second, we show how to cluster activities by combining the detected trip chains with land use data. This is accomplished by modeling the dependencies between activity type, trip scheduling, and land use types via a Relational Markov Network. We apply the method to two different kinds of mobile phone datasets from the metropolitan areas of Vienna, Austria and Boston, USA. The former data includes information from mobility management signals, while the latter are usual Call Detail Records. The resulting trip sequence patterns and activity scheduling from both datasets agree well with their respective city surveys, and we show that the inferred activity clusters are stable across different days and both cities. This method to infer activity patterns from cell phone data allows us to use these as a novel and cheaper data source for activity-based modeling and travel behavior studies. --- paper_title: Understanding individual routing behaviour paper_content: Knowing how individuals move between places is fundamental to advance our understanding of human mobility (Gonzalez et al. 2008 Nature 453, 779–782. (doi:10.1038/nature06958)), improve our urban in... --- paper_title: Collective benefits in traffic during mega events via the use of information technologies paper_content: Information technologies today can inform each of us about the best alternatives for shortest paths from origins to destinations, but they do not contain incentives or alternatives that manage the information efficiently to get collective benefits. To obtain such benefits, we need to have not only good estimates of how the traffic is formed but also to have target strategies to reduce enough vehicles from the best possible roads in a feasible way. The opportunity is that during large events the traffic inconveniences in large cities are unusually high, yet temporary, and the entire population may be more willing to adopt collective recommendations for social good. In this paper, we integrate for the first time big data resources to quantify the impact of events and propose target strategies for collective good at urban scale. In the context of the Olympic Games in Rio de Janeiro, we first predict the expected increase in traffic. To that end, we integrate data from: mobile phones, Airbnb, Waze, and transit information, with game schedules and information of venues. Next, we evaluate the impact of the Olympic Games to the travel of commuters, and propose different route choice scenarios during the peak hours. 
Moreover, we gather information on the trips that contribute the most to the global congestion and that could be redirected from vehicles to transit. Interestingly, we show that (i) following new route alternatives during the event with individual shortest path can save more collective travel time than keeping the routine routes, uncovering the positive value of information technologies during events; (ii) with only a small proportion of people selected from specific areas switching from driving to public transport, the collective travel time can be reduced to a great extent. Results are presented on-line for the evaluation of the public and policy makers. --- paper_title: Discrete Choice Analysis: Theory and Application to Travel Demand paper_content: This book, which is intended as a graduate level text and a general professional reference, presents the methods of discrete choice analysis and their applications in the modeling of transportation systems. The first seven chapters provide a basic introduction to discrete choice analysis that covers the material needed to apply basic binary and multiple choice models. The chapters are as follows: introduction; review of the statistics of model estimation; theories of individual choice behavior; binary choice models; multinomial choice; aggregate forecasting techniques; and tests and practical issues in developing discrete choice models. The rest of the chapters cover more advanced material and culminate in the development of a complete travel demand model system presented in chapter 11. The advanced chapters are as follows: theory of sampling; aggregation and sampling of alternatives; models of multidimensional choice and the nested logit model; and systems of models. The last chapter (12) presents an overview of current research frontiers. --- paper_title: OPTIMIZATION OF GRID TRANSIT SYSTEM IN HETEROGENEOUS URBAN ENVIRONMENT paper_content: Current analytic models for optimizing urban transit systems tend to sacrifice geographic realism and detail in order to obtain their solutions. The model presented here shows how an optimization approach can be successful without oversimplifying the spatial characteristics and demand patterns of urban areas. This model is designed to optimize a grid transit system in a heterogeneous urban environment whose demand and supply characteristics may vary arbitrarily among adjacent zones. Network characteristics (route and station locations) and operating headways are found that minimize the total cost, including supplier and user costs. Irregular many-to-many demand patterns, zonal variations in route costs, and vehicle capacity constraints are considered in a sequential optimization process. --- paper_title: Data-Driven Transit Network Design From Mobile Phone Trajectories paper_content: This paper presents a data-driven method for transit network design that relies on a large sample of user location data available from mobile phone telecommunication networks. Such data provide opportunistic sensing and the means for transit operators to match supply with mobility demand inferred from mobile phone locations. In contrast to previous methods of transit network design, the proposed method is entirely data driven, leveraging the large-sample properties of disaggregate mobile phone network data and mobility pattern mining. The method works by deriving frequent patterns of movements from anonymized mobile phone location data and merging them to generate candidate route designs. 
Additional routines for optimal route selection and service frequency setting are then employed to select a network configuration made up of routes that maximizes systemwide traveler utility. Using data from half a million mobile phone users in Abidjan from the telco operator Orange, we demonstrated to provide resource-neutral system improvement of 27% in terms of end-user journey times. --- paper_title: Transit Network Design And Scheduling: a Global Review paper_content: This paper presents a global review of the crucial strategic and tactical steps of transit planning: the design and scheduling of the network. These steps influence directly the quality of service through coverage and directness concerns but also the economic profitability of the system since operational costs are highly dependent on the network structure. We first exhibit the context and the goals of strategic and tactical transit planning. We then establish a terminology proposal in order to name sub-problems and thereby structure the review. Then, we propose a classification of 69 approaches dealing with the design, frequencies setting, timetabling of transit lines and their combinations. We provide a descriptive analysis of each work so as to highlight their main characteristics in the frame of a two-fold classification referencing both the problem tackled and the solution method used. Finally, we expose recent context evolutions and identify some trends for future research. This paper aims to contribute to unification of the field and constitutes a useful complement to the few existing reviews. --- paper_title: OPTIMAL TRANSIT ROUTE NETWORK DESIGN PROBLEM: ALGORITHMS, IMPLEMENTATIONS, AND NUMERICAL RESULTS paper_content: Previous approaches used to solve the transit route network design problem (TRNDP) can be classified into three categories: 1) Practical guidelines and ad hoc procedures; 2) Analytical optimization models for idealized situations; and 3) Meta-heuristic approaches for more practical problems. When the TRNDP is solved for a network of realistic size in which many parameters need to be determined, it is a combinatorial and NP-hard problem in nature and several sources of nonlinearities and non-convexities involved preclude guaranteed globally optimal solution algorithms. As a result, the meta-heuristic approaches, which are able to pursue reasonably good local (possibly global) optimal solutions and deal with simultaneous design of the transit route network and determination of its associated service frequencies, become necessary. The objective of this research is to systematically study the optimal TRNDP using hybrid heuristic algorithms at the distribution node level without aggregating the travel demand zones into a single node. A multi-objective nonlinear mixed integer model is formulated for the TRNDP. The proposed solution framework consists of three main components: an Initial Candidate Route Set Generation Procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a Network Analysis Procedure (NAP) that determines transit trips for the TRNDP with variable demand, assigns these transit trips, determines service frequencies and computes performance measures; and a Heuristic Search Procedure (HSP) that guides the search techniques. 
Five heuristic algorithms, including the genetic algorithm, local search, simulated annealing, random search and tabu search, are employed as the solution methods for finding an optimal set of routes from the huge solution space. For the TRNDP with small network, the exhaustive search method is also used as a benchmark to examine the efficiency and measure the quality of the solutions obtained by using these heuristic algorithms. Several C++ program codes are developed to implement these algorithms for the TRNDP with both fixed and variable transit demand. Comprehensive experimental networks are used and successfully tested. Sensitivity analyses for each algorithm are conducted and model comparisons are performed. Numerical results are presented and the multi-objective decision making nature of the TRNDP is explored. Related characteristics underlying the TRNDP are identified, inherent tradeoffs are described and the redesign of the existing transit network is also discussed. --- paper_title: Minimizing Transfer Times in Public Transit Network with Genetic Algorithm paper_content: This paper presents a systemwide approach based on a genetic algorithm for the optimization of bus transit system transfer times. The algorithm attempts to find the best feasible solution for the transfer time optimization problem by shifting existing timetables. It makes use of existing scheduled timetables and ridership data at all transfer locations and takes into consideration the randomness of bus arrivals. The complexity of the problem is mainly due to the use of a large set of binary and discrete variables. The combinatorial nature of the problem results in a significant computational burden, and thus it is difficult to solve with classical methods. Scheduling data from Broward County Transit, Florida, were used to calculate total transfer times for the existing and proposed systems. Results showed that the algorithm produced significant transfer time savings. --- paper_title: Optimization of grid bus transit systems with elastic demand paper_content: Current analytic models for optimizing urban bus transit systems tend to sacrifice geographic realism and detail in order to obtain their solutions. The models presented here shows how an optimization approach can be successful without oversimplifying spatial characteristics and demand patterns of urban areas and how a grid bus transit system in a heterogeneous urban environment with elastic demand is optimized. The demand distribution over the service region is discrete, which can realistically represent geographic variation. Optimal network characteristics (route and station spacings), operating headways and fare are found, which maximize the total operator profit and social welfare. Irregular service regions, many-to-many demand patterns, and vehicle capacity constraints are considered in a sequential optimization process. The numerical results show that at the optima the operator profit and social welfare functions are rather flat with respect to route spacing and headway, thus facilitating the tailoring of design variables to the actual street network and particular operating schedule without a substantial decrease in profit. The sensitivities of the design variables to some important exogenous factors are also presented. 
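The transfer-time entry above shifts existing timetables with a genetic algorithm over binary and discrete variables. As a much simpler, hedged stand-in for that idea, the sketch below hill-climbs over one-minute departure offsets with a deterministic wait model; the data structures, the wait formula, and the search procedure are illustrative assumptions rather than the cited formulation.

```python
import random

def total_transfer_wait(offsets, transfers, headways):
    """Total passenger-minutes of transfer wait given per-route offsets (minutes).

    transfers : list of (from_route, to_route, transferring_riders) tuples.
    headways  : {route: headway in minutes}.
    A rider delivered by from_route waits until the next departure of to_route
    (a deterministic simplification that ignores arrival randomness).
    """
    total = 0.0
    for frm, to, riders in transfers:
        wait = (offsets[to] - offsets[frm]) % headways[to]
        total += riders * wait
    return total

def shift_timetables(routes, transfers, headways, iters=5000, seed=0):
    """Hill-climb over one-minute timetable shifts to reduce transfer waits."""
    rng = random.Random(seed)
    offsets = {r: 0 for r in routes}
    best = total_transfer_wait(offsets, transfers, headways)
    for _ in range(iters):
        r = rng.choice(routes)                 # routes is a list of route ids
        previous = offsets[r]
        offsets[r] = (previous + rng.choice((-1, 1))) % headways[r]
        candidate = total_transfer_wait(offsets, transfers, headways)
        if candidate <= best:
            best = candidate
        else:
            offsets[r] = previous              # revert a worsening move
    return offsets, best
```

A genetic algorithm, as in the cited work, would instead evolve a population of offset vectors, but the objective being evaluated has the same shape.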
--- paper_title: AllAboard: a system for exploring urban mobility and optimizing public transport using cellphone data paper_content: The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens’ interactions and use data-driven insights to better plan and manage services. In this context, transit operators can leverage pervasive mobile sensing to better match observed demand for travel with their service offerings. With large-scale data on mobility patterns, operators can move away from the costly and resource-intensive transportation planning processes prevalent in the West, to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. --- paper_title: Data-Driven Transit Network Design From Mobile Phone Trajectories paper_content: This paper presents a data-driven method for transit network design that relies on a large sample of user location data available from mobile phone telecommunication networks. Such data provide opportunistic sensing and the means for transit operators to match supply with mobility demand inferred from mobile phone locations. In contrast to previous methods of transit network design, the proposed method is entirely data driven, leveraging the large-sample properties of disaggregate mobile phone network data and mobility pattern mining. The method works by deriving frequent patterns of movements from anonymized mobile phone location data and merging them to generate candidate route designs. Additional routines for optimal route selection and service frequency setting are then employed to select a network configuration made up of routes that maximizes systemwide traveler utility. Using data from half a million mobile phone users in Abidjan from the telco operator Orange, we demonstrate that the method provides a resource-neutral system improvement of 27% in terms of end-user journey times.
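As a rough illustration of the frequent-movement-pattern step underlying the data-driven design method above, the sketch below bins anonymized origin-destination fixes into grid cells and keeps the pairs observed often enough to serve as candidate corridors. The cell size, support threshold and coordinates are assumptions for illustration, not the Abidjan/Orange pipeline described in the cited work.

```python
"""Sketch of a frequent origin-destination pattern count: coarsen positions to grid
cells and keep the cell pairs seen at least min_support times as candidate corridors.
Toy data and thresholds only."""
from collections import Counter

CELL = 0.01  # assumed grid size in degrees used to coarsen positions


def to_cell(lat, lon):
    return (round(lat / CELL), round(lon / CELL))


def frequent_od_pairs(trips, min_support=2):
    """trips: iterable of ((lat, lon), (lat, lon)) origin/destination fixes."""
    counts = Counter((to_cell(*o), to_cell(*d)) for o, d in trips)
    return [(od, n) for od, n in counts.most_common() if n >= min_support]


if __name__ == "__main__":
    toy_trips = [
        ((5.3365, -4.0267), (5.3590, -4.0083)),
        ((5.3360, -4.0270), (5.3585, -4.0080)),  # same corridor as the trip above
        ((5.3097, -4.0127), (5.3365, -4.0267)),
    ]
    for od, n in frequent_od_pairs(toy_trips):
        print(f"candidate corridor {od} observed {n} times")
```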
--- paper_title: Discovering Trip Patterns from Incomplete Passenger Trajectories for Inter-zonal Bus Line Planning paper_content: Collecting the trajectories occurring in the city and mining the patterns implied in the trajectories can support the ITS (Intelligent Transportation System) applications and foster the development of smart cities. For improving the operations of inter-zonal buses in the cities, we define a new trip pattern, i.e., frequent bus passenger trip patterns for bus lines (FBPT4BL patterns in short). We utilize the passenger trajectories from bus smart card data and propose a two-phase approach to mine FBPT4BL patterns and then recommend inter-zonal bus lines. We conduct extensive experiments on the real data from the Beijing Public Transport Group. By comparing the experimental results with the actual operation of inter-zonal buses at the Beijing Public Transport Group, we verify the validity of our proposed method. --- paper_title: B-Planner: Planning Bidirectional Night Bus Routes Using Large-Scale Taxi GPS Traces paper_content: Taxi GPS traces can inform us the human mobility patterns in modern cities. Instead of leveraging the costly and inaccurate human surveys about people's mobility, we intend to explore the night bus route planning issue by using taxi GPS traces. Specifically, we propose a two-phase approach for bidirectional night bus route planning. In the first phase, we develop a process to cluster “hot” areas with dense passenger pick up/drop off and then propose effective methods to split big hot areas into clusters and identify a location in each cluster as a candidate bus stop. In the second phase, given the bus route origin, destination, candidate bus stops, and bus operation time constraints, we derive several effective rules to build the bus route graph and prune invalid stops and edges iteratively. Based on this graph, we further develop a bidirectional probability-based spreading algorithm to generate candidate bus routes automatically. We finally select the best bidirectional bus route, which expects the maximum number of passengers under the given conditions and constraints. To validate the effectiveness of the proposed approach, extensive empirical studies are performed on a real-world taxi GPS data set, which contains more than 1.57 million night passenger delivery trips, generated by 7600 taxis in a month. --- paper_title: Performance Measures to Characterize Corridor Travel Time Delay Based on Probe Vehicle Data paper_content: Anonymous probe vehicle data are being collected on roadways throughout the United States. These data are incorporated into local and statewide mobility reports to measure the performance of highways and arterial systems. Predefined spatially located segments, known as traffic message channels (TMCs), are spatially and temporally joined with probe vehicle speed data. Through the analysis of these data, transportation agencies have been developing agencywide travel time performance measures. One widely accepted performance measure is travel time reliability, which is calculated along a series of TMCs. When reliable travel times are not achieved because of incidents and recurring congestion, it is desirable to understand the time and the location of these occurrences so that the corridor can be proactively managed. This research emphasizes a visually intuitive methodology that aggregates a series of TMC segments based on a cursory review of congestion hotspots within a corridor. Instead of a fixed congestio... 
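The probe-based corridor performance measures discussed above typically reduce to a handful of travel time reliability statistics. The sketch below computes mean travel time, the 95th-percentile travel time, the travel time index, the planning time index and the buffer index from a list of observed travel times; the free-flow time and observations are hypothetical, and the formulas follow commonly used FHWA-style definitions rather than any single cited study.

```python
"""Minimal computation of common probe-based travel time reliability measures.
Toy inputs; definitions follow the usual FHWA-style formulations."""
import statistics


def percentile(values, p):
    """Simple linear-interpolation percentile (p in [0, 100])."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)


def reliability_measures(travel_times_min, free_flow_min):
    mean_tt = statistics.mean(travel_times_min)
    tt95 = percentile(travel_times_min, 95)
    return {
        "mean_tt": mean_tt,
        "tt95": tt95,
        "travel_time_index": mean_tt / free_flow_min,
        "planning_time_index": tt95 / free_flow_min,
        "buffer_index": (tt95 - mean_tt) / mean_tt,
    }


if __name__ == "__main__":
    # Hypothetical per-vehicle corridor travel times (minutes) over one peak period.
    tts = [12.1, 11.8, 13.0, 15.2, 12.4, 18.9, 12.0, 14.3, 22.5, 12.7]
    print(reliability_measures(tts, free_flow_min=10.0))
```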
--- paper_title: Empirical flow-density and speed-spacing relationships: Evidence of vehicle length dependency paper_content: Traffic flow theory has come to a point where conventional, fixed time averaged data are limiting our insight into critical behavior both at the macroscopic and microscopic scales. This paper develops a methodology to measure relationships of density and vehicle spacing on freeways. These relationships are central to most traffic flow theories but have historically been difficult to measure empirically. The work leads to macroscopic flow-density and microscopic speed-spacing relationships in the congested regime derived entirely from dual loop detector data and then verified against the NGSIM data set. The methodology eliminates the need to seek out stationary conditions and yields clean relationships that do not depend on prior assumptions of the curve shape before fitting the data. Upon review of the clean empirical relationships a key finding of this work is the fact that many of the critical parameters of the macroscopic flow-density and microscopic speed-spacing relationships depend on vehicle length, e.g., upstream moving waves should travel through long vehicles faster than through short vehicles. Thus, the commonly used assumption of a homogeneous vehicle fleet likely obscures these important phenomena. More broadly, if waves travel faster or slower depending on the length of the vehicles through which the waves pass, then the way traffic is modeled should be updated to explicitly account for inhomogeneous vehicle lengths. --- paper_title: Quantifying Loop Detector Sensitivity and Correcting Detection Problems on Freeways paper_content: Loop detectors are the most commonly used vehicle detector for freeway management. A loop detector consists of a physical loop of wire embedded in the pavement connected to a sensor located in a nearby cabinet. The sensor detects the presence or absence of vehicles over the loop and typically allows a user to manually select the sensitivity level of operation to accommodate for a wide range of responsiveness from the physical loop. In conventional practice, however, it is difficult to know the physical loop’s responsiveness, which makes selecting the appropriate sensitivity level difficult. If the sensitivity and responsiveness are poorly matched it will degrade the detector’s data and the performance of applications that use the data, including: traffic management, control, and traveler information. To resolve this often overlooked problem, this paper presents an algorithm to assess how well a loop detector’s sensitivity is set by calculating the daily median on-time from the data reported by the loop detector. The algorithm can be incorporated into conventional controller software or run off-line. The result can be used both to correct the detector on-times for an inappropriate sensitivity setting in software (e.g., through a multiplicative correction factor) and to trigger an alarm to dispatch a technician to adjust the hardware sensitivity. Plotting the daily median on-time over months or years can show how the detector performance evolves. The approach is then transposed to dual-loop detectors to identify and correct for inaccurate spacing between the paired detectors. Finally, the methodology is evaluated by comparing the loop detector speeds against the concurrent velocities from a GPS-equipped probe vehicle. 
Although the focus of this paper is on loop detectors, with only minor modification the algorithm should also be applicable to other detector technologies that emulate loop detector operation, e.g., side-fire microwave radar. --- paper_title: Work Zone Performance Measurement Using Probe Data paper_content: This report assesses the potential for probe data to support work zone performance measurement programs. It includes an overview of probe data and the advantages and disadvantages of probe data sources relative to traditional fixed sensors. It identifies when and how probe data sources can be used to support work zone performance measures. It also characterizes the applicability of different types of probe data to help manage different types of work zones. The report then exemplifies this information by presenting summaries of projects that made use of probe data for work zone performance measures or examined the capabilities and limitations of probe vehicle data. A particular focus was on a recent Maryland State Highway Administration project that provided a comprehensive example of the use of probe data sources to compute the performance measures by developing a web-based work zone performance measure application. --- paper_title: Probe Vehicle-Based Statewide Mobility Performance Measures for Decision Makers paper_content: Decision makers in state transportation agencies typically manage budgets approaching or exceeding $1 billion. Historically, the data used to make investment decisions have been quite coarse and have been typically based on short-term volume counts fed into models to forecast performance. As a result, it is not uncommon for construction projects to address needs that were forecast to be a priority 5 to 10 years earlier, while more pressing congestion challenges go unmet. It is essential that long-term planning begins to be supplemented by more current performance measures. The emerging private-sector probe vehicle data obtained from commercial providers offer an opportunity to augment traditional forward-looking planning models with performance measures that reflect the conditions motorists are experiencing today. This paper proposes scalable, analytical probe data-reduction techniques to create technically sound, yet visually intuitive, system-performance measures of current freeway conditions. These typ... --- paper_title: Algorithm for Detector-Error Screening on Basis of Temporal and Spatial Information paper_content: Although average effective vehicle length (AEVL) has been recognized as one of the most popular methods for detecting data errors, how to set proper thresholds so as to prevent false alarms and missed detections remains a challenging ongoing issue. This study proposed a sequential screening algorithm that employed multiple comparisons with the best statistics to compare concurrently the estimated AEVLs between lanes and stations for assessment of the data quality of a target detector. With both the temporal and spatial information, the proposed method can reliably generate a confidence interval and determine whether the target detector is malfunctioning or in need of calibration. The proposed algorithm was tested with 2 weeks of detector data from Ocean City, Maryland. The analysis results demonstrate the effectiveness of the proposed sequential screening algorithm and its potential for field applications. 
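The detector-error screening idea above rests on the average effective vehicle length (AEVL) implied by each aggregation interval. A minimal sketch is given below, assuming a fixed plausibility band in place of the cited paper's multiple-comparison statistics across lanes and stations; the band, units and sample intervals are illustrative only.

```python
"""Sketch of detector screening based on average effective vehicle length (AEVL):
per interval, L_eff = speed * occupancy / flow, and a detector whose daily median
L_eff drifts outside a plausible band is flagged. Simplified stand-in for the cited
screening algorithm; all numbers are hypothetical."""
import statistics

PLAUSIBLE_LENGTH_M = (4.5, 9.0)  # assumed acceptable range for the median L_eff


def effective_length(speed_kmh, occupancy_frac, flow_vph):
    """Effective vehicle length (metres) implied by one aggregation interval."""
    if flow_vph <= 0:
        return None
    speed_m_per_h = speed_kmh * 1000.0
    return speed_m_per_h * occupancy_frac / flow_vph


def screen_detector(intervals):
    """intervals: list of (speed_kmh, occupancy_frac, flow_vph) tuples for one day."""
    lengths = [effective_length(*iv) for iv in intervals]
    lengths = [x for x in lengths if x is not None]
    if not lengths:
        return None, "no data"
    med = statistics.median(lengths)
    lo, hi = PLAUSIBLE_LENGTH_M
    return med, "ok" if lo <= med <= hi else "flag for calibration"


if __name__ == "__main__":
    day = [(95.0, 0.08, 1200), (90.0, 0.10, 1500), (60.0, 0.15, 1400)]
    print(screen_detector(day))
```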
--- paper_title: Comparing INRIX speed data against concurrent loop detector stations over several months paper_content: Abstract Many real-time traffic-monitoring applications only require speed or travel time. In recent years INRIX Traffic has started collecting and selling real-time speed data collected from “a variety of sources.” The clients include direct to consumer and operating agencies alike. So far the INRIX speed data have received little independent evaluation in the literature, with only a few published studies. The current study exploits a unique juncture as the Ohio Department of Transportation transitioned from loop detectors to third party traffic data for real time management. The two traffic surveillance systems operated concurrently for about half a year in Columbus, Ohio, USA. This paper uses two months of the concurrent data to evaluate INRIX performance on 14 mi of I-71, including both recurrent and non-recurrent events. The work compared reported speeds from INRIX against the concurrent loop detector data, as detailed herein. Three issues became apparent: First, the reported INRIX speeds tend to lag the loop detector measurements by almost 6 min. This latency appears to be within INRIX specifications, but from an operational standpoint it is important that time sensitive applications account for it, e.g., traffic responsive ramp metering. Second, although INRIX reports speed every minute, most of the time the reported speed is identical to the previous sample, suggesting that INRIX is effectively calculating the speeds over a longer period than it uses to report the speeds. This work observed an effective average sampling period of 3–5 min, with many periods of repeated reported speed lasting in excess of 10 min. Third, although INRIX reports two measures of confidence, these confidence measures do not appear to reflect the latency or the occurrence of repeated INRIX reported speeds. --- paper_title: Traffic Flow Dynamics: Data, Models and Simulation paper_content: This instructional guide describes the use of simulation and mathematical models in determining traffic flow dynamics. Part 1 focuses on data collection with chapters on trajectory and floating-car data, cross-sectional data, and spatiotemporal reconstruction of the traffic state. Part 2 presents various mathematical models including: continuity equations, the Lighthill–Whitham–Richards Model, macroscopic models, car-following models, lane-changing models, stability analysis, and phase diagrams. Part 3 outlines traffic flow theory applications. Chapters include: travel time estimation, fuel consumption and emissions, and optimization. This textbook includes problems and solutions and an accompanying website, www.traffic-flow-dynamics.org, provides some interactive content. --- paper_title: Data Assimilation: The Ensemble Kalman Filter paper_content: Statistical definitions.- Analysis scheme.- Sequential data assimilation.- Variational inverse problems.- Nonlinear variational inverse problems.- Probabilistic formulation.- Generalized Inverse.- Ensemble methods.- Statistical optimization.- Sampling strategies for the EnKF.- Model errors.- Square Root Analysis schemes.- Rank issues.- Spurious correlations, localization, and inflation.- An ocean prediction system.- Estimation in an oil reservoir simulator. 
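For the ensemble Kalman filter referenced above, the core analysis step can be written in a few lines. The sketch below is the textbook stochastic EnKF with perturbed observations; the state dimension, observation operator H, noise covariance and toy ensemble are illustrative and are not tied to any of the cited applications.

```python
"""Minimal stochastic ensemble Kalman filter (EnKF) analysis step with perturbed
observations. Illustrative dimensions and data only."""
import numpy as np


def enkf_update(ensemble, H, y, obs_cov, rng):
    """ensemble: (n_state, n_members); H: (n_obs, n_state); y: (n_obs,)."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    # Ensemble statistics.
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                                   # anomalies
    P = A @ A.T / (n_members - 1)                         # sample covariance
    # Kalman gain.
    S = H @ P @ H.T + obs_cov
    K = P @ H.T @ np.linalg.solve(S, np.eye(n_obs))
    # Perturbed observations, one draw per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), obs_cov, n_members).T
    return ensemble + K @ (Y - H @ ensemble)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = rng.normal(50.0, 5.0, size=(3, 40))   # e.g. speeds on 3 cells, 40 members
    H = np.array([[1.0, 0.0, 0.0]])             # observe the first cell only
    y = np.array([42.0])
    R = np.array([[4.0]])
    updated = enkf_update(ens, H, y, R, rng)
    print("prior mean:", ens.mean(axis=1), "posterior mean:", updated.mean(axis=1))
```

Sampling errors in the ensemble covariance are usually tamed with localization and inflation, which is one reason operational implementations are considerably more involved than this sketch.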
--- paper_title: Calibration Framework based on Bluetooth Sensors for Traffic State Estimation Using a Velocity based Cell Transmission Model paper_content: The velocity based cell transmission model (CTM-v) is a discrete time dynamical model that mimics the evolution of the traffic velocity field on highways. In this paper the CTM-v model is used together with an ensemble Kalman filter (EnKF) for the purpose of velocity sensor data assimilation. We present a calibration framework for the CTM-v and EnKF. The framework consists of two separate phases. The first phase is the calibration of the parameters of the fundamental diagram and the second phase is the calibration of demand and filter parameters. Results from the calibrated model are presented for a highway stretch north of Stockholm, Sweden. --- paper_title: Travel time prediction using the GPS test vehicle and Kalman filtering techniques paper_content: A sudden traffic surge immediately after special events (e.g., conventions, concerts) can create substantial traffic congestion in the area where the events are held. It is desired that the special events related traffic performance can be measured so that the traffic flow can be improved via some existing methods such as a temporary traffic signal timing adjustment. This paper focuses on the study of the arterial travel time prediction using the Kalman filtering and estimation technique, and a graduation ceremony is chosen as our case study. The Global Positioning System (GPS) test vehicle technique is used to collect after events travel time data. Based on the real-time data collected, a discrete-time Kalman filter is then applied to predict travel time exiting the area under study. An assessment of the performance and its effectiveness at the test site are investigated. The approaches to further improve the accuracy of the prediction error are also discussed. --- paper_title: Kalman Filter Approach to Speed Estimation Using Single Loop Detector Measurements under Congested Conditions paper_content: The ability to measure or estimate accurate speed data are of great importance to a large number of transportation system operations applications. Estimating speeds from the widely used single inductive loop sensor has been a difficult, yet important challenge for transportation engineers. Based on empirical evidence observed from sensor data collected in two metropolitan regions in Virginia and California, this research developed a Kalman filter model to perform speed estimation for congested traffic. Taking advantage of the coexistence of dual loop and single loop stations in many freeway management systems, a calibration procedure was developed to seed and initiate the algorithm. Finally, the paper presents an evaluation that illustrates that the proposed algorithm can produce acceptable speed estimates under congested traffic conditions, consistently outperforming the conventional g-factor approach. --- paper_title: A STATISTICAL ALGORITHM FOR ESTIMATING SPEED FROM SINGLE LOOP VOLUME AND OCCUPANCY MEASUREMENTS paper_content: This paper presents an algorithm for estimating mean traffic speed using volume and occupancy data from a single inductance loop. The algorithm is based on the statistics of the measurements obtained from a traffic management system. The algorithm produces an estimate of speed and provides a reliability test for the speed estimate. 
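The single-loop speed estimation problem addressed in the references above starts from the classic g-factor relation, speed roughly equal to flow times g divided by occupancy, where g is an assumed average effective vehicle length. The sketch below combines that raw per-interval estimate with a scalar random-walk Kalman filter; g, the noise variances and the sample intervals are assumptions, and this is a simplified stand-in for the cited algorithms rather than a reimplementation of them.

```python
"""Sketch of single-loop speed estimation: raw speed ~ flow * g / occupancy, smoothed
by a scalar Kalman filter with a random-walk speed model. Assumed parameters and data."""

G_METRES = 6.0  # assumed average effective vehicle length (the "g-factor")


def raw_speed_kmh(flow_vph, occupancy_frac):
    """Naive per-interval speed estimate from a single loop."""
    if occupancy_frac <= 0:
        return None
    return flow_vph * G_METRES / occupancy_frac / 1000.0  # m/h -> km/h


def kalman_smooth(measurements, q=4.0, r=25.0, x0=80.0, p0=100.0):
    """Scalar Kalman filter; process variance q and measurement variance r are assumed."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                       # predict (random-walk model)
        if z is not None:
            k = p / (p + r)             # Kalman gain
            x = x + k * (z - x)         # update with the raw estimate
            p = (1.0 - k) * p
        out.append(x)
    return out


if __name__ == "__main__":
    intervals = [(1200, 0.09), (1300, 0.10), (1250, 0.20), (400, 0.30)]
    raw = [raw_speed_kmh(q_, o) for q_, o in intervals]
    print("raw:", [round(v, 1) for v in raw])
    print("smoothed:", [round(v, 1) for v in kalman_smooth(raw)])
```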
--- paper_title: Trajectory Data Mining: An Overview paper_content: The advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data, which represent the mobility of a diversity of moving objects, such as people, vehicles, and animals. Many techniques have been proposed for processing, managing, and mining trajectory data in the past decade, fostering a broad range of applications. In this article, we conduct a systematic survey on the major research into trajectory data mining, providing a panorama of the field as well as the scope of its research topics. Following a road map from the derivation of trajectory data, to trajectory data preprocessing, to trajectory data management, and to a variety of mining tasks (such as trajectory pattern mining, outlier detection, and trajectory classification), the survey explores the connections, correlations, and differences among these existing techniques. This survey also introduces the methods that transform trajectories into other data formats, such as graphs, matrices, and tensors, to which more data mining and machine learning techniques can be applied. Finally, some public trajectory datasets are presented. This survey can help shape the field of trajectory data mining, providing a quick understanding of this field to the community. --- paper_title: Assessing Expected Accuracy of Probe Vehicle Travel Time Reports paper_content: The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced travel information systems. Past research in the literature has provided contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is highly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sample bias. --- paper_title: OPTIMIZATION OF GRID TRANSIT SYSTEM IN HETEROGENEOUS URBAN ENVIRONMENT paper_content: Current analytic models for optimizing urban transit systems tend to sacrifice geographic realism and detail in order to obtain their solutions. The model presented here shows how an optimization approach can be successful without oversimplifying the spatial characteristics and demand patterns of urban areas. This model is designed to optimize a grid transit system in a heterogeneous urban environment whose demand and supply characteristics may vary arbitrarily among adjacent zones. Network characteristics (route and station locations) and operating headways are found that minimize the total cost, including supplier and user costs. 
Irregular many-to-many demand patterns, zonal variations in route costs, and vehicle capacity constraints are considered in a sequential optimization process. --- paper_title: Instantaneous Emission Modeling with GPS-Based Vehicle Activity Data: Results of Diesel Trucks for One-Day Trips paper_content: This paper presents an instantaneous analysis for traffic emissions using GPS-based vehicle activity data. The different driving conditions, including real-time and average speed, short-time stops and long-time stops, acceleration and deceleration, etc., are extracted from GPS data. The hot emission, cold-start emission and idling emission, varied by nitrogen compounds and particulate matter are calculated, respectively, in terms of the driving condition and vehicle characteristics. Results simulated based on a one-day trip activity dataset show that trucks spend most kilometers on national roads, followed by municipal and provincial roads. The number of short-time stops is significantly higher than long-time stops, and the time spent for long-time stops is higher than short-time duration. The hot emission accounts for the largest proportion of emissions, and the idling emission also contribute substantially. Results of sensitivity analyses indicate that pollutions in urban area from freight transport can be significantly decreased by increasing the vehicle classes and guiding the heavy trucks out of the region. --- paper_title: The effects of route choice decisions on vehicle energy consumption and emissions paper_content: Motorists typically select routes that minimize their travel time or generalized cost. This may entail traveling on longer but faster routes. This raises questions concerning whether traveling along a longer but faster route results in energy and/or air quality improvements. We investigate the impacts of route choice decisions on vehicle energy consumption and emission rates for different vehicle types using microscopic and macroscopic emission estimation tools. The results demonstrate that the faster highway route choice is not always the best from an environmental and energy consumption perspective. Specifically, significant improvements to energy and air quality can be achieved when motorists utilize a slower arterial route although they incur additional travel time. The study also demonstrates that macroscopic emission estimation tools (e.g., MOBILE6) can produce erroneous conclusions given that they ignore transient vehicle behavior along a route. The findings suggest that an emission- and energy-optimized traffic assignment can significantly improve emissions over the standard user equilibrium and system optimum assignment formulations. Finally, the study demonstrates that a small portion of the entire trip involves high engine-load conditions that produce significant increases in emissions; demonstrating that by minimizing high-emitting driving behavior, air quality can be improved significantly. --- paper_title: DEVELOPMENT OF VT-MICRO MODEL FOR ESTIMATING HOT STABILIZED LIGHT DUTY VEHICLE AND TRUCK EMISSIONS paper_content: Abstract The paper applies a framework for developing microscopic emission models (VT-Micro model version 2.0) for assessing the environmental impacts of transportation projects. The original VT-Micro model was developed using chassis dynamometer data on nine light duty vehicles. The VT-Micro model is expanded by including data from 60 light duty vehicles and trucks. 
Statistical clustering techniques are applied to group vehicles into homogenous categories. Specifically, classification and regression tree algorithms are utilized to classify the 60 vehicles into 5 LDV and 2 LDT categories. In addition, the framework accounts for temporal lags between vehicle operational variables and measured vehicle emissions. The VT-Micro model is validated by comparing against laboratory measurements with prediction errors within 17%. --- paper_title: Statistical Vehicle Specific Power Profiling for Urban Freeways paper_content: Vehicle Specific Power (VSP) is conventionally defined to represent the instantaneous vehicle engine power. It has been widely utilized to reveal the impact of vehicle operating conditions on emission and energy consumption estimates that are dependent upon speed, roadway grade and acceleration or deceleration on the basis of the second-by-second vehicle operation. VSP has hence been incorporated into a key contributing factor in the vehicle emission models including Motor Vehicle Emission Simulator (MOVES). To facilitate the preparation of MOVES vehicle operating mode distribution inputs, an enhanced understanding and modeling of VSP distribution versus roadway grade become indispensable. This paper presents a study in which previous studies are extended by deeply investigating the characteristics of VSP distributions and their impacts due to varying freeway grades, as well as time-of-day factors. Afterwards, statistical distribution models with a scope of bins is identified through a goodness of fit testing approach by using the sample data collected from the interstate freeway segments in Cincinnati area. The Global Positioning System (GPS) data were collected at a selected length of 30 km urban freeway for AM, PM and Mid-day periods. The datasets representing the vehicle operating conditions for VSP calculation are then extracted from the GPS trajectory data. The distribution fit modeling results demonstrated that the Wakeby distribution with five parameters dominates the most fitting parameters with the samples. In addition, the speed variation lies behind the time of day differences is also identified to be a contributing factor of urban freeway VSP distribution. --- paper_title: Model-based traffic control for balanced reduction of fuel consumption , emissions , and travel time ∗ paper_content: Abstract In this paper we integrate the macroscopic traffic flow model METANET with the microscopic dynamic emission and fuel consumption model VT-Micro. We use the integrated models in the model predictive control (MPC) framework to reduce exhaust emissions, fuel consumption, and travel time using dynamic speed limit control. With simulation experiments we demonstrate the countereffects and conflicting nature of the different traffic control objectives. Our simulation results indicate that a model-based traffic control approach, particularly MPC, can be used to obtain a balanced trade-off between the conflicting traffic control objectives. --- paper_title: How Much Does Traffic Congestion Increase Fuel Consumption and Emissions? Applying Fuel Consumption Model to NGSIM Trajectory Data paper_content: The fuel consumption of vehicular traffic (and associated CO2 emissions) on a given road section depends strongly on the velocity profiles of the vehicles. The basis for a detailed estimation is therefore the consumption rate as a function of instantaneous velocity and acceleration. 
This paper will present a model for the instantaneous fuel consumption that includes vehicle properties, engine properties, and gear-selection schemes and implement it for different passenger car types representing the vehicle fleet under consideration. The paper will apply the model to trajectories from microscopic traffic simulation. The proposed model can directly be used in a microscopic traffic simulation software to calculate fuel consumption and derived emission such as carbon dioxide. Next to travel times, the fuel consumption is an important measure for the performance of future Intelligent Transportation Systems. Furthermore, the model is applied to real traffic situations by taking the velocity and acceleration as input from several sets of the NGSIM trajectory data. Dedicated data processing and smoothing algorithms have been applied to the NGSIM data to suppress the data noise that is multiplied by the necessary differentiations for obtaining more realistic velocity and acceleration time series. On the road sections covered by the NGSIM data, we found that traffic congestion typically lead to an increase of fuel consumption of the order of 80% while the traveling time has increased by a factor of up to 4. We conclude that the influence of congestion on fuel consumption is distinctly lower than that on travel time. --- paper_title: A data-driven optimization-based approach for siting and sizing of electric taxi charging stations paper_content: Abstract This paper presents a data-driven optimization-based approach to allocate chargers for battery electric vehicle (BEV) taxis throughout a city with the objective of minimizing the infrastructure investment. To account for charging congestion, an M / M / x / s queueing model is adopted to estimate the probability of BEV taxis being charged at their dwell places. By means of regression and logarithmic transformation, the charger allocation problem is formulated as an integer linear program (ILP), which can be solved efficiently using Gurobi solver. The proposed method is applied using large-scale GPS trajectory data collected from the taxi fleet of Changsha, China. The key findings from the results include the following: (1) the dwell pattern of the taxi fleet determines the siting of charging stations; (2) by providing waiting spots, in addition to charging spots, the utilization of chargers increases and the number of required chargers at each site decreases; and (3) the tradeoff between installing more chargers versus providing more waiting spaces can be quantified by the cost ratio of chargers and parking spots. --- paper_title: Learning transportation mode from raw gps data for geographic applications on the web paper_content: Geographic information has spawned many novel Web applications where global positioning system (GPS) plays important roles in bridging the applications and end users. Learning knowledge from users' raw GPS data can provide rich context information for both geographic and mobile applications. However, so far, raw GPS data are still used directly without much understanding. In this paper, an approach based on supervised learning is proposed to automatically infer transportation mode from raw GPS data. The transportation mode, such as walking, driving, etc., implied in a user's GPS data can provide us valuable knowledge to understand the user. It also enables context-aware computing based on user's present transportation mode and design of an innovative user interface for Web users. 
Our approach consists of three parts: a change point-based segmentation method, an inference model and a post-processing algorithm based on conditional probability. The change point-based segmentation method was compared with two baselines including uniform duration based and uniform length based methods. Meanwhile, four different inference models including Decision Tree, Bayesian Net, Support Vector Machine (SVM) and Conditional Random Field (CRF) are studied in the experiments. We evaluated the approach using the GPS data collected by 45 users over six months period. As a result, beyond other two segmentation methods, the change point based method achieved a higher degree of accuracy in predicting transportation modes and detecting transitions between them. Decision Tree outperformed other inference models over the change point based segmentation method. --- paper_title: Optimization of Charging Infrastructure for Electric Taxis paper_content: For the transportation sector, electromobility presents a chance both to abolish oil dependency and to open the possibility of using sustainable energy sources. In the case of taxis, the substitution of electric vehicles for conventional vehicles with an internal combustion engine may be especially favorable because the driving patterns involve several waiting periods, which can be used for recharging the battery. An infrastructure consisting of charging stations at taxi stands needs to be developed to ensure the energy supply of these taxis. Designing a charging infrastructure requires the development of an optimization method to maximize the economic benefit of the whole system, consisting of electric taxis (e-taxis) and charging stations (CSs). The number of charging stations should be kept as minimal as possible to reduce costs. Simultaneously, the energy supply for these taxis has to be ensured to enable high mileage and earnings. This study introduces an event-based simulation of the e-taxis' mileag... --- paper_title: Transportation mode detection using mobile phones and GIS information paper_content: The transportation mode such as walking, cycling or on a train denotes an important characteristic of the mobile user's context. In this paper, we propose an approach to inferring a user's mode of transportation based on the GPS sensor on her mobile device and knowledge of the underlying transportation network. The transportation network information considered includes real time bus locations, spatial rail and spatial bus stop information. We identify and derive the relevant features related to transportation network information to improve classification effectiveness. This approach can achieve over 93.5% accuracy for inferring various transportation modes including: car, bus, aboveground train, walking, bike, and stationary. Our approach improves the accuracy of detection by 17% in comparison with the GPS only approach, and 9% in comparison with GPS with GIS models. The proposed approach is the first to distinguish between motorized transportation modes such as bus, car and aboveground train with such high accuracy. Additionally, if a user is travelling by bus, we provide further information about which particular bus the user is riding. Five different inference models including Bayesian Net, Decision Tree, Random Forest, Naive Bayesian and Multilayer Perceptron, are tested in the experiments. The final classification system is deployed and available to the public. --- paper_title: Where do cyclists ride? 
A route choice model developed with revealed preference GPS data paper_content: To better understand bicyclists’ preferences for facility types, GPS units were used to observe the behavior of 164 cyclists in Portland, Oregon, USA for several days each. Trip purpose and several other trip-level variables recorded by the cyclists, and the resulting trips were coded to a highly detailed bicycle network. The authors used the 1449 non-exercise, utilitarian trips to estimate a bicycle route choice model. The model used a choice set generation algorithm based on multiple permutations of path attributes and was formulated to account for overlapping route alternatives. The findings suggest that cyclists are sensitive to the effects of distance, turn frequency, slope, intersection control (e.g. presence or absence of traffic signals), and traffic volumes. In addition, cyclists appear to place relatively high value on off-street bike paths, enhanced neighborhood bikeways with traffic calming features (aka “bicycle boulevards”), and bridge facilities. Bike lanes more or less exactly offset the negative effects of adjacent traffic, but were no more or less attractive than a basic low traffic volume street. Finally, route preferences differ between commute and other utilitarian trips; cyclists were more sensitive to distance and less sensitive to other infrastructure characteristics for commute trips. --- paper_title: Mapping cyclist activity and injury risk in a network combining smartphone GPS data and bicycle counts. paper_content: In recent years, the modal share of cycling has been growing in North American cities. With the increase of cycling, the need of bicycle infrastructure and road safety concerns have also raised. Bicycle flows are an essential component in safety analysis. The main objective of this work is to propose a methodology to estimate and map bicycle volumes and cyclist injury risk throughout the entire network of road segments and intersections on the island of Montreal, achieved by combining smartphone GPS traces and count data. In recent years, methods have been proposed to estimate average annual daily bicycle (AADB) volume and injury risk estimates at both the intersection and segment levels using bicycle counts. However, these works have been limited to small samples of locations for which count data is available. In this work, a methodology is proposed to combine short- and long-term bicycle counts with GPS data to estimate AADB volumes along segments and intersections in the entire network. As part of the validation process, correlation is observed between AADB values obtained from GPS data and AADB values from count data, with R-squared values of 0.7 for signalized intersections, 0.58 for non-signalized intersections and between 0.48 and 0.76 for segments with and without bicycle infrastructure. The methodology is also validated through the calibration of safety performance functions using both sources of AADB estimates, from counts and from GPS data. Using the validated AADB estimates, the factors associated with injury risk were identified using data from the entire population of intersections and segments throughout Montreal. Bayesian injury risk maps are then generated and the concentrations of expected injuries and risk at signalized intersections are identified. Signalized intersections, which are often located at the intersection of major arterials, witness 4 times more injuries and 2.5 times greater risk than non-signalized intersections. 
A similar observation can be made for arterials which not only have a higher concentration of injuries but also injury rates (risk). On average, streets with cycle tracks have a greater concentration of injuries due to greater bicycle volumes, however, and in accordance with recent works, the individual risk per cyclist is lower, justifying the benefits of cycle tracks. --- paper_title: Understanding the Impact of Electric Vehicle Driving Experience on Range Anxiety paper_content: Objective: The objective of the present research was to increase understanding of the phenomenon of range anxiety and to determine the degree to which practical experience with battery electric vehicles (BEVs) reduces different levels of range anxiety. Background: Limited range is a challenge for BEV users. A frequently discussed phenomenon in this context is range anxiety. There is some evidence suggesting that range anxiety might only be a problem for inexperienced BEV drivers, and therefore, might decrease with practical experience. Method: We compared 12 motorists with high BEV driving experience (M = 60.500 km) with 12 motorists, who had never driven a BEV before. The test drive was designed to lead to a critical range situation (remaining range < trip length). We examined range appraisal and range stress (i.e., range anxiety) on different levels (cognitive, emotional and behavioral). Results: Experienced BEV drivers exhibited less negative range appraisal and range anxiety than inexperienced BEV drivers, revealing significant, strong effects for all but one variable. Conclusion: Hence, BEV driving experience (defined as absolute km driven with a BEV) seems to be one important variable that predicts less range anxiety. Application: In order to reduce range anxiety in BEV drivers even when there is a critical range situation, it is important to increase efficiency and effectiveness of the learning process. --- paper_title: Classifying pedestrian movement behaviour from GPS trajectories using visualization and clustering paper_content: The quantity and quality of spatial data are increasing rapidly. This is particularly evident in the case of movement data. Devices capable of accurately recording the position of moving entities have become ubiquitous and created an abundance of movement data. Valuable knowledge concerning processes occurring in the physical world can be extracted from these large movement data sets. Geovisual analytics offers powerful techniques to achieve this. This article describes a new geovisual analytics tool specifically designed for movement data. The tool features the classic space-time cube augmented with a novel clustering approach to identify common behaviour. These techniques were used to analyse pedestrian movement in a city environment which revealed the effectiveness of the tool for identifying spatiotemporal patterns. --- paper_title: Using Bluetooth to track mobility patterns: depicting its potential based on various case studies paper_content: During the past years the interest in the exploitation of mobility information has increased significantly. A growing number of companies and research institutions are interested in the analysis of mobility data with demand of a high level of spatial detail. Means of tracking persons in our environment can nowadays be fulfilled by utilizing several technologies, for example the Bluetooth technology, offering means to obtain movement data. 
This paper gives an overview of four case studies in the field of Bluetooth tracking which were conducted in order to provide helpful insights on movement aspects for decision makers in their specific microcosm. Aim is to analyse spatio-temporal validity of Bluetooth tracking, and in doing so, to describe the potential of Bluetooth in pedestrian mobility mining. --- paper_title: Health and environmental benefits related to electric vehicle introduction in EU countries paper_content: Abstract Introduction of electric vehicles (EV) can help to reduce CO2-emissions and the dependence on petroleum products. However, sometimes relatively larger air pollutant emissions from certain power plants can offset the benefits of replacing internal combustion engine (ICE) cars with EV. The goal of this study was to compare the societal impact (climate change & health effects) of EV introduction in the EU-27 under different scenarios for electricity production. The analysis shows that countries that rely on low air pollutant emitting fuel mixes may gain millions of Euro/annum in terms of avoided external costs. Benefits extend across the EU, especially for emissions in small countries. Transport pollution affects the local scale, while electricity pollution has a regional reach. Other European countries, that depend on more polluting fuel mixes, may not benefit at all from introducing EV. Data on the present fuel mix were available for Belgium, France, Portugal, Denmark and the UK on a detailed time scale (5–30′ basis) and show that the time dependent variation of external cost for charging EV is dwarfed compared to the overall gain for introducing EV. The largest benefit is found in not driving an ICE car and avoiding local combustion related emissions. Data on the present fuel mix were also available for Romania on a detailed time scale (10′) and show that the variation in external costs is relatively larger than for the other countries and at some moments it may be worth the effort, at least in theory, to reschedule EV loading schemes taking into account social impact analysis. --- paper_title: Understanding mobility based on GPS data paper_content: Both recognizing human behavior and understanding a user's mobility from sensor data are critical issues in ubiquitous computing systems. As a kind of user behavior, the transportation modes, such as walking, driving, etc., that a user takes, can enrich the user's mobility with informative knowledge and provide pervasive computing systems with more context information. In this paper, we propose an approach based on supervised learning to infer people's motion modes from their GPS logs. The contribution of this work lies in the following two aspects. On one hand, we identify a set of sophisticated features, which are more robust to traffic condition than those other researchers ever used. On the other hand, we propose a graph-based post-processing algorithm to further improve the inference performance. This algorithm considers both the commonsense constraint of real world and typical user behavior based on location in a probabilistic manner. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change point-based segmentation method and Decision Tree-based inference model, the new features brought an eight percent improvement in inference accuracy over previous result, and the graph-based post-processing achieve a further four percent enhancement. 
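The GPS-based mode-inference work above follows a feature-extraction-plus-classifier pattern. The sketch below computes segment-level speed and acceleration features from raw fixes and applies a hand-written threshold tree as a stand-in for the trained decision tree used in the cited studies; the thresholds, feature set and example trace are made up for illustration.

```python
"""Feature extraction for GPS-based transportation mode inference. The classify()
thresholds are illustrative placeholders for a learned decision tree."""
import math


def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6371000.0 * 2 * math.asin(math.sqrt(a))


def segment_features(fixes):
    """fixes: list of (t_seconds, lat, lon). Returns mean/max speed and mean |accel|."""
    speeds = []  # (time, speed m/s) for each consecutive pair of fixes
    for (t0, *p0), (t1, *p1) in zip(fixes, fixes[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append((t1, haversine_m(p0, p1) / dt))
    accels = [abs(v1 - v0) / (t1 - t0)
              for (t0, v0), (t1, v1) in zip(speeds, speeds[1:]) if t1 > t0]
    vals = [v for _, v in speeds]
    return {
        "mean_speed": sum(vals) / len(vals),
        "max_speed": max(vals),
        "mean_abs_accel": sum(accels) / len(accels) if accels else 0.0,
    }


def classify(f):
    """Toy threshold tree (speeds in m/s): placeholder for a trained classifier."""
    if f["max_speed"] < 2.5:
        return "walk"
    if f["max_speed"] < 7.0:
        return "bike"
    return "bus" if f["mean_abs_accel"] > 0.6 else "car"


if __name__ == "__main__":
    trace = [(0, 59.3300, 18.0600), (10, 59.3301, 18.0603), (20, 59.3303, 18.0607)]
    feats = segment_features(trace)
    print(feats, "->", classify(feats))
```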
--- paper_title: Sensing Human Activity: GPS Tracking paper_content: The enhancement of GPS technology enables the use of GPS devices not only as navigation and orientation tools, but also as instruments used to capture travelled routes: as sensors that measure activity on a city scale or the regional scale. TU Delft developed a process and database architecture for collecting data on pedestrian movement in three European city centres, Norwich, Rouen and Koblenz, and in another experiment for collecting activity data of 13 families in Almere (The Netherlands) for one week. The question posed in this paper is: what is the value of GPS as ‘sensor technology’ measuring activities of people? The conclusion is that GPS offers a widely useable instrument to collect invaluable spatial-temporal data on different scales and in different settings adding new layers of knowledge to urban studies, but the use of GPS-technology and deployment of GPS-devices still offers significant challenges for future research. --- paper_title: Bicycle Route Choice Data Collection using GPS-Enabled Smartphones paper_content: The proliferation of consumer-grade smartphones with global position satellite (GPS) location capabilities opens a new data collection method for researchers. When confronted with a lack of data on bicycle routes preferred by local cyclists, the San Francisco County Transportation Authority (SFCTA) developed a freely downloadable iPhone/Android smartphone “app” called CycleTracks to collect actual bicycle routes traversed by city cyclists. Cooperation with local bicycle advocacy groups, along with social media and email campaigns, encouraged use of the app by regular citizenry. Several rounds of pre-release testing showed that making the app start up quickly, and minimizing battery usage during recording, were critical to getting good data. Once installed on a user's smartphone, a single “tap” would start and stop recording a bicycle trip; after completing a trip, the app automatically uploaded the track to a central database/web server, via the phone's built-in data plan. Approximately 5,000 usable bicycle trips were collected from hundreds of users in the region. Demographic data was optionally provided by some users, and showed a bias toward frequent cyclists, and toward male users (even more than cycling is already male-dominated in the region's most recent household travel survey). A bicycle route choice model developed using the data revealed sensitivity to slope, presence of bike lanes and/or bike route designations, trip purpose, and gender. The bike route choice model is now being integrated into San Francisco's regional travel model. --- paper_title: Safety route guidance system using participatory sensing paper_content: This study proposes a safety route guidance system used after a natural disaster by gathering sensing data from pedestrians' smartphones. In order to find the problems caused by a disaster such as collapsed house, fissures and floods, the system uses the information retrieved from pedestrians' smartphones with GPS receivers and accelerometers. In addition, the system does not use the information from the map data before the disaster for detecting the geographical information in case of large-scale disasters. Therefore, the map data after a disaster is mapped with only the information after the disaster. As a result, pedestrians' moving paths and walking states are evaluated and used for evacuation guidance. 
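The participatory-sensing guidance system above needs some way to turn crowdsourced GPS and accelerometer samples into a hazard map before evacuation routes can be scored. The sketch below shows one simple possibility: bin samples into grid cells and flag cells whose average accelerometer-derived roughness exceeds a threshold. The grid size, threshold and sample values are assumptions and this is not the cited system's algorithm.

```python
"""Toy aggregation step for participatory-sensing hazard mapping: bin (lat, lon,
roughness) samples into grid cells and mark cells with high average roughness as
potentially blocked or damaged. Assumed grid size, threshold and samples."""
from collections import defaultdict

CELL_DEG = 0.001           # assumed grid resolution (roughly 100 m)
ROUGHNESS_THRESHOLD = 2.0  # assumed accelerometer roughness cut-off


def cell_of(lat, lon):
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))


def hazard_map(samples):
    """samples: iterable of (lat, lon, roughness). Returns {cell: status}."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, rough in samples:
        acc = sums[cell_of(lat, lon)]
        acc[0] += rough
        acc[1] += 1
    return {
        cell: ("hazard" if total / n > ROUGHNESS_THRESHOLD else "passable")
        for cell, (total, n) in sums.items()
    }


if __name__ == "__main__":
    crowd = [
        (35.6580, 139.7016, 0.4),
        (35.6581, 139.7017, 0.6),
        (35.6610, 139.7040, 3.1),  # e.g. pedestrians stepping over debris
        (35.6611, 139.7041, 2.8),
    ]
    for cell, status in hazard_map(crowd).items():
        print(cell, status)
```

The resulting per-cell statuses could then feed the edge costs of a routing step such as the multi-objective evacuation planning discussed in the following entries.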
--- paper_title: An Evacuation Route Planning for Safety Route Guidance System after Natural Disaster Using Multi-objective Genetic Algorithm paper_content: When a natural disaster occurred, some roads cannot be used anymore and sometimes blocked. Also, survivors and refugees cannot follow the evacuation procedures by just using default maps after disaster. A previous study proposed a safety route guidance system that can be used after natural disasters by using participatory sensing. The system estimates safe routes and generates an evacuation map by collecting GPS data and accelerometer data from pedestrians' smartphone. However, the system does not base on default map data. After that, the system evaluates the safety of each route. However, the previous study did not propose a method of finding evacuation routes from the users' current location to their destination. Therefore, in this study, we proposed a method of evacuation route planning. We have implemented Multi-Objective Genetic Algorithm (MOGA) into the route planning methodology. The proposed system has three objective functions, which are: evacuation distance, evacuation time and safety of evacuation route. Also, we proposed a new safety evaluation method. As a result, this study gives a better reflection of the change of road conditions. Also, the safety evaluation values are more useful than the previous study's evaluation method of the route. Moreover, the system can provide evacuation routes with different characteristics to users. As a result, the users can select a route which is suitable for their situation. --- paper_title: Intelligent system for urban emergency management during large-scale disaster paper_content: The frequency and intensity of natural disasters has significantly increased over the past decades and this trend is predicted to continue. Facing these possible and unexpected disasters, urban emergency management has become the especially important issue for the whole governments around the world. In this paper, we present a novel intelligent system for urban emergency management during the large-scale disasters. The proposed system stores and manages the global positioning system (GPS) records from mobile devices used by approximately 1.6 million people throughout Japan over one year. By mining and analyzing population movements after the Great East Japan Earthquake, our system can automatically learn a probabilistic model to better understand and simulate human mobility during the emergency situations. Based on the learning model, population mobility in various urban areas impacted by the earthquake throughout Japan can be automatically simulated or predicted. On the basis of such kind of system, it is easy for us to find some new features or population mobility patterns after the recent and unprecedented composite disasters, which are likely to provide valuable experience and play a vital role for future disaster management worldwide. --- paper_title: Traffic Monitoring immediately after a major natural disaster as revealed by probe data – A case in Ishinomaki after the Great East Japan Earthquake paper_content: This study analyzes how people behaved and traffic congestion expanded immediately after the Great East Japan Earthquake on March 11, 2011 using information such as probe vehicle and smartphone GPS data. One of the cities most seriously damaged during the earthquake was Ishinomaki. 
Understanding human evacuation behavior and observing road network conditions are key for the creation of effective evacuation support plans and operations. In many cases, however, a major natural disaster destroys most infrastructure sensors and detailed dynamic information on people’s movements cannot be recorded. Following the Great East Japan Earthquake, vehicle detectors did not work due to the severe tsunami and electric power failure. Therefore, information was only available from individuals’ probe vehicles and smartphone GPS data. These probe data, along with disaster measurements such as water immersion levels, revealed the sudden transition of vehicle speed (i.e., it eventually slowed to less than walking speed and a serious gridlock phenomenon in the Ishinomaki central area occurred). These quantitative findings, which could not be identified without probe data, should be utilized during future disaster mitigation planning. --- paper_title: Introducing naturalistic cycling data: what factors influence bicyclists' safety in the real world? paper_content: Presently, the collection and analysis of naturalistic data is the most credited method for understanding road user behavior and improving traffic safety. Such methodology was developed for motorized vehicles, such as cars and trucks, and is still largely applied to those vehicles. However, a reasonable question is whether bicycle safety can also benefit from the naturalistic methodology, once collection and analyses are properly ported from motorized vehicles to bicycles. This paper answers this question by showing that instrumented bicycles can also collect analogous naturalistic data. In addition, this paper shows how naturalistic cycling data from 16 bicyclists can be used to estimate risk while cycling. The results show that cycling near an intersection increased the risk of experiencing a critical event by four times, and by twelve times when the intersection presented some form of visual occlusion (e.g., buildings and hedges). Poor maintenance of the road increased the risk tenfold. Furthermore, the risk of experiencing a critical event was twice as large when at least one pedestrian or another bicyclist crossed the bicyclist’s trajectory. Finally, this study suggests the two most common scenarios for bicycle accidents, which result from different situations and thus require different countermeasures. The findings presented in this paper show that bicycle safety can benefit from the naturalistic methodology, which provides data able to guide development and evaluation of (intelligent) countermeasures to increase cycling safety. --- paper_title: Categorizing bicycling environments using GPS-based public bicycle speed data paper_content: A promising alternative transportation mode to address growing transportation and environmental issues is bicycle transportation, which is human-powered and emission-free. To increase the use of bicycles, it is fundamental to provide bicycle-friendly environments. The scientific assessment of a bicyclist’s perception of roadway environment, safety and comfort is of great interest. This study developed a methodology for categorizing bicycling environments defined by the bicyclist’s perceived level of safety and comfort. Second-by-second bicycle speed data were collected using global positioning systems (GPS) on public bicycles. A set of features representing the level of bicycling environments was extracted from the GPS-based bicycle speed and acceleration data. 
These data were used as inputs for the proposed categorization algorithm. A support vector machine (SVM), which is a well-known heuristic classifier, was adopted in this study. A promising rate of 81.6% for correct classification demonstrated the technical feasibility of the proposed algorithm. In addition, a framework for bicycle traffic monitoring based on data and outcomes derived from this study was discussed, which is a novel feature for traffic surveillance and monitoring. --- paper_title: Mapping cyclist activity and injury risk in a network combining smartphone GPS data and bicycle counts. paper_content: In recent years, the modal share of cycling has been growing in North American cities. With the increase of cycling, the need of bicycle infrastructure and road safety concerns have also raised. Bicycle flows are an essential component in safety analysis. The main objective of this work is to propose a methodology to estimate and map bicycle volumes and cyclist injury risk throughout the entire network of road segments and intersections on the island of Montreal, achieved by combining smartphone GPS traces and count data. In recent years, methods have been proposed to estimate average annual daily bicycle (AADB) volume and injury risk estimates at both the intersection and segment levels using bicycle counts. However, these works have been limited to small samples of locations for which count data is available. In this work, a methodology is proposed to combine short- and long-term bicycle counts with GPS data to estimate AADB volumes along segments and intersections in the entire network. As part of the validation process, correlation is observed between AADB values obtained from GPS data and AADB values from count data, with R-squared values of 0.7 for signalized intersections, 0.58 for non-signalized intersections and between 0.48 and 0.76 for segments with and without bicycle infrastructure. The methodology is also validated through the calibration of safety performance functions using both sources of AADB estimates, from counts and from GPS data. Using the validated AADB estimates, the factors associated with injury risk were identified using data from the entire population of intersections and segments throughout Montreal. Bayesian injury risk maps are then generated and the concentrations of expected injuries and risk at signalized intersections are identified. Signalized intersections, which are often located at the intersection of major arterials, witness 4 times more injuries and 2.5 times greater risk than non-signalized intersections. A similar observation can be made for arterials which not only have a higher concentration of injuries but also injury rates (risk). On average, streets with cycle tracks have a greater concentration of injuries due to greater bicycle volumes, however, and in accordance with recent works, the individual risk per cyclist is lower, justifying the benefits of cycle tracks. --- paper_title: Analysis of cyclist behavior using naturalistic data: data processing for model development paper_content: Cycling has been increasingly popular in many cities over the past decades because of its benefits for both environment and human health. However, there is still lack of knowledge on the characteristics specific to this traveler group and recent promotion of bicycle use in transport policies has even expanded the demand for understanding cyclist behavior and bicycle dynamics. 
It is believed that such understanding can further facilitate the evaluation and improvement of cycling safety as well as accessibility on the network. This paper therefore presents an essential methodological framework for processing and analyzing naturalistic data collected by commuter cyclists in Stockholm equipped with portable GPS devices. On one hand, the GPS coordinates are filtered by the Kalman smoothing algorithm to obtain accurate and consistent estimates of cyclists' position, speed and acceleration. On the other hand, locally weighted regression is applied to abstract gradient profiles of cycling paths using data of both altitude and travel distance. After information estimation, the characteristics of cyclist acceleration behavior are then analyzed using statistical approaches. The results show that the acceleration profiles have a linear correlation with the total variance in speed during acceleration or deceleration. The data is finally applied to identify cyclist acceleration models proposed for the development of cycling simulation. --- paper_title: Risky riding: Naturalistic methods comparing safety behavior from conventional bicycle riders and electric bike riders. paper_content: As electric bicycles (e-bikes) have emerged as a new transportation mode, their role in transportation systems and their impact on users have become important issues for policy makers and engineers. Little safety-related research has been conducted in North America or Europe because of their relatively small numbers. This work describes the results of a naturalistic GPS-based safety study between regular bicycle (i.e., standard bicycle) and e-bike riders in the context of a unique bikesharing system that allows comparisons between instrumented bike technologies. We focus on rider safety behavior under four situations: (1) riding in the correct direction on directional roadway segments, (2) speed on on-road and shared use paths, (3) stopping behavior at stop-controlled intersections, and (4) stopping behavior at signalized intersections. We find that, with few exceptions, riders of e-bikes behave very similarly to riders of bicycles. Violation rates were very high for both vehicles. Riders of regular bicycles and e-bikes both ride wrong-way on 45% and 44% of segments, respectively. We find that average on-road speeds of e-bike riders (13.3 kph) were higher than regular bicyclists (10.4 kph) but shared use path (greenway) speeds of e-bike riders (11.0 kph) were lower than regular bicyclists (12.6 kph); both significantly different at >95% confidence. At stop-controlled intersections, both bicycle and e-bike riders violate the stop signs at a similar rate, with bicycles violating stop signs at a slightly higher rate at low speed thresholds (∼80% violations at 6 kph, 40% violations at 11 kph). Bicycles and e-bikes violate traffic signals at similar rates (70% violation rate). These findings suggest that, among the same population of users, e-bike riders exhibit nearly identical safety behavior as regular bike riders and should be regulated in similar ways. Users of both technologies have very high violation rates of traffic control devices, and interventions should occur to improve compliance. --- paper_title: Modeling work zone crash frequency by quantifying measurement errors in work zone length. paper_content: Work zones are temporary traffic control zones that can potentially cause safety problems. 
Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence. --- paper_title: Predicting Motor Vehicle Crashes Using Support Vector Machine Models paper_content: Crash prediction models have been very popular in highway safety analyses. However, in highway safety research, the prediction of outcomes is seldom, if ever, the only research objective when estimating crash prediction models. Only very few existing methods can be used to efficiently predict motor vehicle crashes. Thus, there is a need to examine new methods for better predicting motor vehicle crashes. The objective of this study is to evaluate the application of Support Vector Machine (SVM) models for predicting motor vehicle crashes. SVM models, which are based on the statistical learning theory, are a new class of models that can be used for predicting values. To accomplish the objective of this study, Negative Binomial (NB) regression and SVM models were developed and compared using data collected on rural frontage roads in Texas. Several models were estimated using different sample sizes. The study shows that SVM models predict crash data more effectively and accurately than traditional NB models. In addition, SVM models do not over-fit the data and offer similar, if not better, performance than Back-Propagation Neural Network (BPNN) models documented in previous research. Given this characteristic and the fact that SVM models are faster to implement than BPNN models, it is suggested to use these models if the sole purpose of the study consists of predicting motor vehicle crashes. --- paper_title: A GPS APPROACH FOR THE ANALYSIS OF CAR FOLLOWING BEHAVIOR paper_content: The objective of this research study was to develop a GPS methodology for the collection of car following data under actual highway driving conditions. The methodology involved the application of GPS hardware and software to simultaneously collect speed and location information for two test vehicles. Vehicle position information was then linearly referenced using a GIS. The data were analyzed using both numerical and graphical methods to examine the relationship between the relative positions, speeds, and accelerations of the vehicles. 
Based on this research, a simplified, cost-effective methodology for the collection, reduction, and analysis of car following behavior is presented. --- paper_title: Predicting Work Zone Collision Probabilities via Clustering: Application in Optimal Deployment of Highway Response Teams paper_content: This paper proposes a clustering approach to predict the probability of a collision occurring in the proximity of planned road maintenance operations (i.e., work zones). The proposed method is applied to over 54,000 short-term work zones in the state of Maryland and demonstrates an ability to predict work zone collision probabilities. One of the key applications of this work is using the predicted probabilities at the operational level to help allocate highway response teams. To this end, a two-stage stochastic program is used to locate response vehicles on the Maryland highway network in order to minimize expected response times. --- paper_title: Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis. paper_content: Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives. --- paper_title: Work zone safety analysis and modeling: a state-of-the-art review. paper_content: OBJECTIVE ::: Work zone safety is one of the top priorities for transportation agencies. 
In recent years, a considerable volume of research has sought to determine work zone crash characteristics and causal factors. Unlike other non-work zone-related safety studies (on both crash frequency and severity), there has not yet been a comprehensive review and assessment of methodological approaches for work zone safety. To address this deficit, this article aims to provide a comprehensive review of the existing extensive research efforts focused on work zone crash-related analysis and modeling, in the hopes of providing researchers and practitioners with a complete overview. ::: ::: ::: METHODS ::: Relevant literature published in the last 5 decades was retrieved from the National Work Zone Crash Information Clearinghouse and the Transport Research International Documentation database and other public digital libraries and search engines. Both peer-reviewed publications and research reports were obtained. Each study was carefully reviewed, and those that focused on either work zone crash data analysis or work zone safety modeling were identified. The most relevant studies are specifically examined and discussed in the article. ::: ::: ::: RESULTS ::: The identified studies were carefully synthesized to understand the state of knowledge on work zone safety. Agreement and inconsistency regarding the characteristics of the work zone crashes discussed in the descriptive studies were summarized. Progress and issues about the current practices on work zone crash frequency and severity modeling are also explored and discussed. The challenges facing work zone safety research are then presented. ::: ::: ::: CONCLUSIONS ::: The synthesis of the literature suggests that the presence of a work zone is likely to increase the crash rate. Crashes are not uniformly distributed within work zones and rear-end crashes are the most prevalent type of crashes in work zones. There was no across-the-board agreement among numerous papers reviewed on the relationship between work zone crashes and other factors such as time, weather, victim severity, traffic control devices, and facility types. Moreover, both work zone crash frequency and severity models still rely on relatively simple modeling techniques and approaches. In addition, work zone data limitations have caused a number of challenges in analyzing and modeling work zone safety. Additional efforts on data collection, developing a systematic data analysis framework, and using more advanced modeling approaches are suggested as future research tasks. --- paper_title: Safety Models for Rural Freeway Work Zones paper_content: Construction and maintenance work zones have traditionally been hazardous locations within the highway environment. Studies show that the accident rates during road construction are generally higher than during periods of regular traffic operations. The increase in the number of crashes may be attributed to (a) general disruption to the flowing traffic due to sudden discontinuities caused by closed lanes, (b) improper lane merging maneuvers, (c) the presence of heavy construction equipment within the work area, (d) inappropriate use of traffic control devices, and (e) poor traffic management. Research was conducted to develop regression models predicting the expected number of crashes at work zones on rural, two-lane freeway segments. Crashes on approaches to work zones and those inside the work zones were analyzed separately. 
For developing these models, an extensive database was obtained, including freeway data, crash data, and work zone characteristics. Negative binomial models were developed with aver... --- paper_title: Analysis of Crash Frequency in Work Zones with Focus on Police Enforcement paper_content: Highway work zone safety has been a concern nationwide and will likely draw ever increasing attention as more highway funds are invested in the maintenance of existing highways. To improve work zone safety, the Indiana Department of Transportation (DOT) established a special fund for work zone patrolling, and this study was commissioned to help the Indiana DOT achieve the maximum safety benefits within its budget constraint. With help from the Indiana DOT, a survey of project engineers was conducted to collect work zone information. The findings from the survey were linked with other available data. A random-effect negative binomial model was developed to identify the contributing factors and to estimate crash frequency in highway work zones. The results from the model provided insight for better understanding of crashes in work zones. Various factors, including roadway information, traffic volume, work zone-specific features, and police presence, were identified as things that affect crash frequency in w... --- paper_title: OpenStreetMap: User-Generated Street Maps paper_content: The OpenStreetMap project is a knowledge collective that provides user-generated street maps. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes. A considerable number of contributors edit the world map collaboratively using the OSM technical infrastructure, and a core group, estimated at approximately 40 volunteers, dedicate their time to creating and improving OSM's infrastructure, including maintaining the server, writing the core software that handles the transactions with the server, and creating cartographical outputs. There's also a growing community of software developers who develop software tools to make OSM data available for further use across different application domains, software platforms, and hardware devices. The OSM project's hub is the main OSM Web site. --- paper_title: Hidden Markov map matching through noise and sparseness paper_content: The problem of matching measured latitude/longitude points to roads is becoming increasingly important. This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude/longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work. 
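The hidden Markov map-matching abstract above describes the core recipe well enough to sketch: Gaussian emission probabilities based on the distance from each GPS fix to a candidate road segment, transition probabilities that penalize disagreement between the straight-line distance of consecutive fixes and the corresponding route distance on the road network, and a Viterbi pass to pick the most likely segment sequence. The Python fragment below is only a hedged, minimal illustration of that general idea, not the authors' implementation; the noise parameters (SIGMA_GPS, BETA), the route_dist callback, and the toy candidate data are assumptions introduced purely for the example.

import math

# Toy illustration of HMM map matching (not the cited paper's code).
# Each GPS fix has candidate road segments with a perpendicular distance (meters).
# route_dist below is a stand-in for a road-network shortest-path query.

SIGMA_GPS = 5.0    # assumed GPS noise standard deviation [m]
BETA = 3.0         # assumed transition scale [m]

def emission_logp(dist_to_road):
    # Gaussian likelihood of observing the fix given the candidate segment.
    return -0.5 * (dist_to_road / SIGMA_GPS) ** 2

def transition_logp(great_circle_d, route_d):
    # Penalize candidates whose on-network distance disagrees with the
    # straight-line distance between consecutive fixes (exponential model).
    return -abs(great_circle_d - route_d) / BETA

def viterbi(candidates, great_circle, route_dist):
    """candidates[t] = list of (segment_id, dist_to_road) for GPS fix t."""
    trellis = [{seg: (emission_logp(d), [seg]) for seg, d in candidates[0]}]
    for t in range(1, len(candidates)):
        layer = {}
        for seg, d in candidates[t]:
            best = max(
                (score
                 + transition_logp(great_circle[t - 1], route_dist(prev, seg))
                 + emission_logp(d),
                 path + [seg])
                for prev, (score, path) in trellis[-1].items()
            )
            layer[seg] = best
        trellis.append(layer)
    return max(trellis[-1].values())[1]

if __name__ == "__main__":
    # Two fixes, two candidate segments each; route_dist is a toy lookup table.
    cands = [[("A", 4.0), ("B", 20.0)], [("A", 15.0), ("C", 3.0)]]
    gc = [50.0]   # straight-line distance between the two fixes [m]
    toy_routes = {("A", "A"): 0.0, ("A", "C"): 55.0, ("B", "A"): 120.0, ("B", "C"): 200.0}
    print(viterbi(cands, gc, lambda p, q: toy_routes[(p, q)]))  # -> ['A', 'C']

In a real pipeline the candidate segments and route distances would come from a road network such as the OpenStreetMap data cited above, rather than from a hand-written lookup table.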
--- paper_title: Visually exploring movement data via similarity-based analysis paper_content: Data analysis and knowledge discovery over moving object databases discovers behavioral patterns of moving objects that can be exploited in applications like traffic management and location-based services. Similarity search over trajectories is imperative for supporting such tasks. Related works in the field, mainly inspired from the time-series domain, employ generic similarity metrics that ignore the peculiarity and complexity of the trajectory data type. Aiming at providing a powerful toolkit for analysts, in this paper we propose a framework that provides several trajectory similarity measures, based on primitive (space and time) as well as on derived parameters of trajectories (speed, acceleration, and direction), which quantify the distance between two trajectories and can be exploited for trajectory data mining, including clustering and classification. We evaluate the proposed similarity measures through an extensive experimental study over synthetic (for measuring efficiency) and real (for assessing effectiveness) trajectory datasets. In particular, the latter could serve as an iterative, combinational knowledge discovery methodology enhanced with visual analytics that provides analysts with a powerful tool for "hands-on" analysis for trajectory data. --- paper_title: A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise paper_content: Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLAR-ANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. --- paper_title: Using Bluetooth to track mobility patterns: depicting its potential based on various case studies paper_content: During the past years the interest in the exploitation of mobility information has increased significantly. A growing number of companies and research institutions are interested in the analysis of mobility data with demand of a high level of spatial detail. Means of tracking persons in our environment can nowadays be fulfilled by utilizing several technologies, for example the Bluetooth technology, offering means to obtain movement data. This paper gives an overview of four case studies in the field of Bluetooth tracking which were conducted in order to provide helpful insights on movement aspects for decision makers in their specific microcosm. 
Aim is to analyse spatio-temporal validity of Bluetooth tracking, and in doing so, to describe the potential of Bluetooth in pedestrian mobility mining. --- paper_title: Development of origin–destination matrices using mobile phone call data paper_content: Abstract In this research, we propose a methodology to develop OD matrices using mobile phone Call Detail Records (CDR) and limited traffic counts. CDR, which consist of time stamped tower locations with caller IDs, are analyzed first and trips occurring within certain time windows are used to generate tower-to-tower transient OD matrices for different time periods. These are then associated with corresponding nodes of the traffic network and converted to node-to-node transient OD matrices. The actual OD matrices are derived by scaling up these node-to-node transient OD matrices. An optimization based approach, in conjunction with a microscopic traffic simulation platform, is used to determine the scaling factors that result best matches with the observed traffic counts. The methodology is demonstrated using CDR from 2.87 million users of Dhaka, Bangladesh over a month and traffic counts from 13 key locations over 3 days of that month. The applicability of the methodology is supported by a validation study. --- paper_title: Circos: An information aesthetic for comparative genomics paper_content: We created a visualization tool called Circos to facilitate the identification and analysis of similarities and differences arising from comparisons of genomes. Our tool is effective in displaying variation in genome structure and, generally, any other kind of positional relationships between genomic intervals. Such data are routinely produced by sequence alignments, hybridization arrays, genome mapping, and genotyping studies. Circos uses a circular ideogram layout to facilitate the display of relationships between pairs of positions by the use of ribbons, which encode the position, size, and orientation of related genomic elements. Circos is capable of displaying data as scatter, line, and histogram plots, heat maps, tiles, connectors, and text. Bitmap or vector images can be created from GFF-style data inputs and hierarchical configuration files, which can be easily generated by automated tools, making Circos suitable for rapid deployment in data analysis and reporting pipelines. --- paper_title: Scikit-learn: Machine Learning in Python paper_content: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net. --- paper_title: Optimal Allocation of Truck Inspection Stations Based on k-Shortest Paths paper_content: Overweight trucks damage the road infrastructure, contribute to greenhouse gas emissions, and represent a potential treat to traffic safety. An efficient way of reducing the number of overweight trucks is to implement weigh-in-motion (WIM) systems that are designed to record axle and gross vehicle weights as they pass over a sensor. 
Although they are effective in detecting overweight trucks, WIM systems are costly and only their efficient allocation can justify the investment. So far several models were developed to optimize the location of WIM checkpoints, but they were all built on a simplifying assumption that trucks travel along the shortest paths between their origins and destinations. This paper proposes a model that allocates WIM checkpoints while considering that overweight trucks try to bypass checkpoints along the shortest unmonitored detours. The problem is formulated as a binary program and applied to minimize the damage due to overweight trucks. This proposed model is applied in a realistic case study on the road network of Nevada. The results imply that considerable savings can be achieved by optimally allocating WIM checkpoints and that the proposed model can provide a valuable decision support for government agencies involved in road infrastructure maintenance and control. --- paper_title: The avoidance of weigh stations in Virginia by overweight trucks. paper_content: The primary objective of this research was to examine the avoidance of weigh stations in Virginia by overweight trucks. Secondary objectives were (1) to determine the magnitude of overweight truck activity on selected routes and (2) to compare traffic loading data collected using static scales with enforcement with data collected using weigh-in-motion without enforcement. Two weigh stations on 1-81 were studied for weigh station avoidance. It was found that 11 and 14 percent (respectively) of the trucks on routes used to bypass the Stephens City and Troutville stations were overweight. At the Stephens City station, 50 percent of the runbys (which are trucks that travel past the weigh station without being weighed because the entrance lane to the station is filled with a queue of trucks) were overweight on Sunday night. Based on the number and percentage of overweight runbys, there is a need to increase the truck weighing capacity of this weigh station. From 12 to 27 percent of the trucks on two primary routes and one interstate route were overweight. Traffic loadings collected with WIM without enforcement are 30 to 60 percent higher than loadings collected using static scales and enforcement. --- paper_title: Evasive flow capture: Optimal location of weigh-in-motion systems, tollbooths, and security checkpoints paper_content: The flow-capturing problem FCP consists of locating facilities to maximize the number of flow-based customers that encounter at least one of these facilities along their predetermined travel paths. The FCP literature assumes that if a facility is located along or "close enough" to a predetermined path of a flow of customers, that flow is considered captured. However, existing models for the FCP do not consider targeted users who behave noncooperatively by changing their travel paths to avoid fixed facilities. Examples of facilities that targeted subjects may have an incentive to avoid include weigh-in-motion stations used to detect and fine overweight trucks, tollbooths, and security and safety checkpoints. This article introduces a new type of flow-capturing model, called the "evasive flow-capturing problem" EFCP, which generalizes the FCP and has relevant applications in transportation, revenue management, and security and safety management. 
We formulate deterministic and stochastic versions of the EFCP, analyze their structural properties, study exact and approximate solution techniques, and show an application to a real-world transportation network. © 2014 Wiley Periodicals, Inc. NETWORKS, Vol. 651, 22-42. 2015 --- paper_title: OPTIMIZATION OF GRID TRANSIT SYSTEM IN HETEROGENEOUS URBAN ENVIRONMENT paper_content: Current analytic models for optimizing urban transit systems tend to sacrifice geographic realism and detail in order to obtain their solutions. The model presented here shows how an optimization approach can be successful without oversimplifying the spatial characteristics and demand patterns of urban areas. This model is designed to optimize a grid transit system in a heterogeneous urban environment whose demand and supply characteristics may vary arbitrarily among adjacent zones. Network characteristics (route and station locations) and operating headways are found that minimize the total cost, including supplier and user costs. Irregular many-to-many demand patterns, zonal variations in route costs, and vehicle capacity constraints are considered in a sequential optimization process. --- paper_title: DEVELOPMENT OF VT-MICRO MODEL FOR ESTIMATING HOT STABILIZED LIGHT DUTY VEHICLE AND TRUCK EMISSIONS paper_content: Abstract The paper applies a framework for developing microscopic emission models (VT-Micro model version 2.0) for assessing the environmental impacts of transportation projects. The original VT-Micro model was developed using chassis dynamometer data on nine light duty vehicles. The VT-Micro model is expanded by including data from 60 light duty vehicles and trucks. Statistical clustering techniques are applied to group vehicles into homogenous categories. Specifically, classification and regression tree algorithms are utilized to classify the 60 vehicles into 5 LDV and 2 LDT categories. In addition, the framework accounts for temporal lags between vehicle operational variables and measured vehicle emissions. The VT-Micro model is validated by comparing against laboratory measurements with prediction errors within 17%. ---
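The DBSCAN and scikit-learn entries earlier in this reference list point to a step that recurs in several of the trajectory applications above: clustering raw GPS fixes to recover stop locations or activity hot spots. A minimal, hedged sketch of that step using scikit-learn's DBSCAN with the haversine metric is given below; the epsilon radius, min_samples value, and sample coordinates are illustrative assumptions rather than values taken from any of the cited studies.

import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative GPS fixes (lat, lon) in degrees; in practice these would come
# from the probe-vehicle or smartphone traces described above.
points_deg = np.array([
    [39.2904, -76.6122], [39.2905, -76.6121], [39.2906, -76.6123],  # cluster 1
    [38.9784, -76.4922], [38.9785, -76.4921],                        # cluster 2
    [39.5000, -77.0000],                                             # isolated fix
])

EARTH_RADIUS_M = 6_371_000.0
eps_meters = 200.0  # assumed neighborhood radius for "same place"

# DBSCAN with the haversine metric expects coordinates in radians and an
# eps expressed as an angle (meters divided by the Earth radius).
db = DBSCAN(
    eps=eps_meters / EARTH_RADIUS_M,
    min_samples=2,
    metric="haversine",
).fit(np.radians(points_deg))

print(db.labels_)  # e.g. [0 0 0 1 1 -1]

Label -1 marks points DBSCAN treats as noise; all other points are grouped into density-connected clusters without having to specify the number of clusters in advance.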
Title: Applications of Trajectory Data in Transportation: Literature Review and Maryland Case Study
Section 1: Introduction
Description 1: Introduce the importance of trajectory data in transportation and the objectives of the study.
Section 2: Literature Review
Description 2: Review the existing literature on the use of trajectory data in transportation across six major areas of transportation engineering.
Section 3: Demand Estimation
Description 3: Discuss how trajectory data is utilized for estimating transportation demand.
Section 4: Modeling Human Behavior
Description 4: Explore the use of trajectory data to quantify human behavior in transportation.
Section 5: Designing Public Transit
Description 5: Examine how trajectory data contributes to designing public transit systems.
Section 6: Traffic Performance Measurement and Prediction
Description 6: Analyze the role of trajectory data in measuring and predicting traffic performance.
Section 7: Environmental Impact
Description 7: Discuss the applications of trajectory data in assessing and mitigating transportation's environmental impact.
Section 8: Safety
Description 8: Review the use of trajectory data to enhance transportation safety and emergency response.
Section 9: Data
Description 9: Provide an overview of the dataset used in the study.
Section 10: Preprocessing
Description 10: Describe the preprocessing steps required for using the trajectory data.
Section 11: Database
Description 11: Explain the database setup used for storing and querying the trajectory data.
Section 12: Penetration Rate
Description 12: Detail the calculation and significance of the dataset's penetration rate.
Section 13: Methods
Description 13: Outline the machine learning and data visualization methods used in the study.
Section 14: Illustrative Case Study
Description 14: Present several applications of trajectory data in transportation using a case study from Maryland.
Section 15: Discussion
Description 15: Summarize the key findings and discuss potential challenges and recommendations for using trajectory data.
Section 16: Conclusions
Description 16: Conclude the paper by synthesizing the innovative applications of trajectory data and its future potential.
Robot Collisions: A Survey on Detection, Isolation, and Identification
27
--- paper_title: Collision detection, isolation and identification for humanoids paper_content: High-performance collision handling, which is divided into the five phases of detection, isolation, estimation, classification and reaction, is a fundamental robot capability for safe and sensitive operation/interaction in unknown environments. For complex humanoid robots, collision handling is obviously significantly more complex than for classical static manipulators. In particular, the robot stability during the collision reaction phase has to be carefully designed and relies on high fidelity contact information that is generated during the first three phases. In this paper, a unified realtime algorithm is presented for determining unknown contact forces and contact locations for humanoid robots based on proprioceptive sensing only, i.e. joint position, velocity and torque, as well as force/torque sensing along the structure. The proposed scheme is based on nonlinear model-based momentum observers that are able to recover the unknown contact forces and the respective locations. The dynamic loads acting on internal force/torque sensors are also corrected based on a novel nonlinear compensator. The theoretical capabilities of the presented methods are evaluated in simulation with the Atlas robot. In summary, we propose a full solution to the problem of collision detection, collision isolation and collision identification for the general class of humanoid robots. --- paper_title: Residual-based contacts estimation for humanoid robots paper_content: The residual method for detecting contacts is a promising approach to allow physical interaction tasks with humanoid robots. Nevertheless, the classical formulation, as developed for fixed-base robots, cannot be directly applied to floating-base systems. This paper presents a novel formulation of the residual based on the floating-base dynamics modeling of humanoids. This leads to the definition of the internal and external residual. The first estimates the joints' effort due to the external perturbation acting on the robot. The latter is an estimation of the external forces acting on the floating base of the robot. The potential of the method is shown by proposing a simple internal residual-based reaction strategy, and a procedure for estimating the contact point that combines both the internal and external residuals. --- paper_title: Pushing a robot along - A natural interface for human-robot interaction paper_content: Humans use direct physical interactions to move objects and guide people, and the same should be done with robots. However, most of today's mobile robots use non-backdrivable motors for locomotion, making them potentially dangerous in case of collision. This paper presents a robot, named AZIMUT-3, equipped with differential elastic actuators that are backdrivable and torque controlled, capable of being force-guided. Real world results demonstrate that AZIMUT-3 can move efficiently in response to physical commands given by a human pushing the robot in the intended direction. --- paper_title: An atlas of physical human-robot interaction paper_content: Abstract A broad spectrum of issues have to be addressed in order to tackle the problem of a safe and dependable physical Human–Robot Interaction (pHRI). In the immediate future, metrics related to safety and dependability have to be found in order to successfully introduce robots in everyday environments. 
While there are certainly also “cognitive” issues involved, due to the human perception of the robot (and vice versa), and other objective metrics related to fault detection and isolation, our discussion focuses on the peculiar aspects of “physical” interaction with robots. In particular, safety and dependability are the underlying evaluation criteria for mechanical design, actuation, and control architectures. Mechanical and control issues are discussed with emphasis on techniques that provide safety in an intrinsic way or by means of control components. Attention is devoted to dependability, mainly related to sensors, control architectures, and fault handling and tolerance. Suggestions are provided to draft metrics for evaluating safety and dependability in pHRI, and references to the works of the scientific groups involved in the pHRI research complete the study. The present atlas is a result of the EURON perspective research project “Physical Human–Robot Interaction in anthropic DOMains (PHRIDOM)”, aimed at charting the new territory of pHRI, and constitutes the scientific basis for the ongoing STReP project “Physical Human–Robot Interaction: depENDability and Safety (PHRIENDS)”, aimed at developing key components for the next generation of robots, designed to share their environment with people. --- paper_title: Simultaneous estimation of aerodynamic and contact forces in flying robots: Applications to metric wind estimation and collision detection paper_content: In this paper, we extend our previous external wrench estimation scheme for flying robots with an aerodynamic model such that we are able to simultaneously estimate aerodynamic and contact forces online. This information can be used to identify the metric wind velocity vector via model inversion. Noticeably, we are still able to accurately sense collision forces at the same time. Discrimination between the two is achieved by identifying the natural contact frequency characteristics for both “interaction cases”. This information is then used to design suitable filters that are able to separate the aerodynamic from the collision forces for subsequent use. Now, the flying system is able to correctly respond to typical contact forces and does not accidentally “hallucinate” contacts due to a misinterpretation of wind disturbances. Overall, this paper generalizes our previous results towards significantly more complex environments. --- paper_title: An approach to collision detection and recovery motion in industrial robot paper_content: The authors describe a method of collision detection and the following continued avoidance motion. In an industrial robot, if there is an unknown obstacle on the trajectory, it is required to detect the collision against the obstacle and to avoid it autonomously. In the proposed method, an extra sensor such as the tactile sensor or the visual sensor for detecting the collision is not necessary. Instead of these sensors, the collision is detected by the signal of the disturbance observer. The algorithm for how the robot knows the collision is coming and avoids the obstacle is shown. This algorithm is simple and easy to apply to the robot controller. Several experimental results are shown. > --- paper_title: A failure-to-safety "Kyozon" system with simple contact detection and stop capabilities for safe human-autonomous robot coexistence paper_content: In this paper, we discuss a method to achieve safe autonomous robot system coexistence (or Kyozon in Japanese). 
First, we clarify human pain tolerance and point out that a robot working next to an operator should be covered with a soft material. Thus, we propose a concept and a design method of covering a robot with a viscoelastic material to achieve both impact force attenuation and contact sensitivity, keeping within the human pain tolerance limit. We stress the necessity of a simple robot system from the viewpoint of reliability. We provide a method of sensing contact force without any force sensors by monitoring the direct drive motor current and velocity of the robot. Finally, we covered a two-link arm manipulator with the optimum soft covering material, and applied the developed manipulator system to practical coexistence tasks. --- paper_title: Robot Motion Planning paper_content: 1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References. --- paper_title: Artificial Skin in Robotics paper_content: Artificial Skin - A comprehensive interface for system-environment interaction - This thesis investigates a multifunctional artificial skin as touch sensitive whole-body cover for robotic systems. To further the evolution from tactile sensors to an implementable artificial skin a general concept for the design process is derived. A standard test procedure is proposed to evaluate the performance. The artificial skin contributes to a safe and intuitive physical human robot interaction. --- paper_title: OpenRAVE : A Planning Architecture for Autonomous Robotics paper_content: One of the challenges in developing real-world autonomous robots is the need for integrating and rigorously testing high-level scripting, motion planning, perception, and control algorithms. For this purpose, we introduce an open-source cross-platform software architecture called OpenRAVE, the Open Robotics and Animation Virtual Environment. OpenRAVE is targeted for real-world autonomous robot applications, and includes a seamless integration of 3-D simulation, visualization, planning, scripting and control. A plugin architecture allows users to easily write custom controllers or extend functionality. With OpenRAVE plugins, any planning algorithm, robot controller, or sensing subsystem can be distributed and dynamically loaded at run-time, which frees developers from struggling with monolithic code-bases. Users of OpenRAVE can concentrate on the development of planning and scripting aspects of a problem without having to explicitly manage the details of robot kinematics and dynamics, collision detection, world updates, and robot control. The OpenRAVE architecture provides a flexible interface that can be used in conjunction with other popular robotics packages such as Player and ROS because it is focused on autonomous motion planning and high-level scripting rather than low-level control and message protocols. OpenRAVE also supports a powerful network scripting environment which makes it simple to control and monitor robots and change execution flow during run-time. One of the key advantages of open component architectures is that they enable the robotics research community to easily share and compare algorithms. 
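The "Kyozon" abstract above reports sensing contact force without a force sensor by monitoring the motor current and velocity of a direct-drive joint. A minimal sketch of that idea for a single joint is given below: the torque inferred from current is compared with the torque explained by a simple inertia-plus-friction model, and the residual is attributed to contact. The torque constant, inertia, friction, threshold, and sample values are assumptions chosen only to make the example self-contained; they are not parameters from the paper.

# Hedged sketch of current/velocity-based contact detection for one joint.

KT = 0.8              # assumed torque constant [Nm/A]
J = 0.05              # assumed joint inertia [kg m^2]
B = 0.02              # assumed viscous friction [Nm s/rad]
TAU_THRESHOLD = 1.5   # assumed contact threshold [Nm]
DT = 0.001            # control period [s]

def external_torque_estimate(current_a, omega, omega_prev):
    """Estimate external (contact) torque from motor current and velocity."""
    tau_motor = KT * current_a
    alpha = (omega - omega_prev) / DT      # numerical acceleration
    tau_model = J * alpha + B * omega      # torque explained by the model
    return tau_motor - tau_model           # residual attributed to contact

def contact_detected(current_a, omega, omega_prev):
    return abs(external_torque_estimate(current_a, omega, omega_prev)) > TAU_THRESHOLD

if __name__ == "__main__":
    # Free motion: current roughly matches the model, so no contact is flagged.
    print(contact_detected(current_a=0.30, omega=1.0, omega_prev=0.999))  # False
    # Extra current at constant speed: residual exceeds the threshold.
    print(contact_detected(current_a=3.00, omega=1.0, omega_prev=1.0))    # True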
--- paper_title: Elastic Strips: A Framework for Motion Generation in Human Environments paper_content: Robotic applications are expanding into dynamic, unstructured, and populated environments. Mechanisms specifically designed to address the challenges arising in these environments, such as humanoid robots, exhibit high kinematic complexity. This creates the need for new algorithmic approaches to motion generation, capable of performing task execution and real-time obstacle avoidance in high-dimensional configuration spaces. The elastic strip framework presented in this paper enables the execution of a previously planned motion in a dynamic environment for robots with many degrees of freedom. To modify a motion in reaction to changes in the environment, real-time obstacle avoidance is combined with desired posture behavior. The modification of a motion can be performed in a task-consistent manner, leaving task execution unaffected by obstacle avoidance and posture behavior. The elastic strip framework also encompasses methods to suspend task behavior when its execution becomes inconsistent with other const... --- paper_title: A depth space approach to human-robot collision avoidance paper_content: In this paper a real-time collision avoidance approach is presented for safe human-robot coexistence. The main contribution is a fast method to evaluate distances between the robot and possibly moving obstacles (including humans), based on the concept of depth space. The distances are used to generate repulsive vectors that are used to control the robot while executing a generic motion task. The repulsive vectors can also take advantage of an estimation of the obstacle velocity. In order to preserve the execution of a Cartesian task with a redundant manipulator, a simple collision avoidance algorithm has been implemented where different reaction behaviors are set up for the end-effector and for other control points along the robot structure. The complete collision avoidance framework, from perception of the environment to joint-level robot control, is presented for a 7-dof KUKA Light-Weight-Robot IV using the Microsoft Kinect sensor. Experimental results are reported for dynamic environments with obstacles and a human. --- paper_title: Safe human-robot-cooperation: image-based collision detection for industrial robots paper_content: This paper analyzes the problem of sensor-based collision detection for an industrial robotic manipulator. A method to perform collision tests based on images taken from several stationary cameras in the work cell is presented. The collision test works entirely based on the images, and does not construct a representation of the Cartesian space. It is shown how to perform a collision test for all possible robot configurations using only a single set of images taken simultaneously. --- paper_title: Force/tactile sensor for robotic applications paper_content: Abstract The paper describes the detailed design and the prototype characterization of a novel tactile sensor 1 for robotic applications. The sensor is based on a two-layer structure, i.e. a printed circuit board with optoelectronic components below a deformable silicon layer with a suitably designed geometry. The mechanical structure of the sensor has been optimized in terms of geometry and material physical properties to provide the sensor with different capabilities. 
The first capability is to work as a six-axis force/torque sensor; additionally, the sensor can be used as a tactile sensor providing spatially distributed information that is exploited to estimate the geometry of the contact with a stiff external object. An analytical physical model and a complete experimental characterization of the sensor are presented. --- paper_title: Real-time collision avoidance in teleoperated whole-sensitive robot arm manipulators paper_content: The problem of generating collision-free motion in an operator-assisted teleoperated robot arm manipulator system is discussed. The concentration is on several system requirements: real-time operation, a guarantee of collision-free motion for the entire body of the arm manipulator, and an ability to handle obstacles of arbitrary shapes. The suggested methodology draws on recent work on motion planning with incomplete information for whole-sensitive robots. --- paper_title: Real-time optimization-based planning in dynamic environments using GPUs paper_content: We present a novel algorithm to compute collision-free trajectories in dynamic environments. Our approach is general and does not require a priori knowledge about the obstacles or their motion. We use a replanning framework that interleaves optimization-based planning with execution. Furthermore, we describe a parallel formulation that exploits a high number of cores on commodity graphics processors (GPUs) to compute a high-quality path in a given time interval. We derive bounds on how parallelization can improve the responsiveness of the planner and the quality of the trajectory. --- paper_title: Directions Toward Effective Utilization of Tactile Skin: A Review paper_content: A wide variety of tactile (touch) sensors exist today for robotics and related applications. They make use of various transduction methods, smart materials and engineered structures, complex electronics, and sophisticated data processing. While highly useful in themselves, effective utilization of tactile sensors in robotics applications has been slow to come and largely remains elusive today. This paper surveys the state of the art and the research issues in this area, with the emphasis on effective utilization of tactile sensors in robotic systems. One specific issue with the use of tactile sensing in robotics is that the sensors have to be spread along the robot body, the way the human skin is, thus dictating varied 3-D spatio-temporal requirements, decentralized and distributed control, and handling of multiple simultaneous tactile contacts. Satisfying these requirements poses challenges to making the tactile sensor modality a reality. Overcoming these challenges requires dealing with issues such as sensor placement, electronic/mechanical hardware, methods to access and acquire signals, automatic calibration techniques, and algorithms to process and interpret sensing data in real time. We survey this field from a system perspective, recognizing the fact that the system performance tends to depend on how its various components are put together. It is hoped that the survey will be of use to practitioners designing tactile sensing hardware (whole-body or large-patch sensor coverage), and to researchers working on cognitive robotics involving tactile sensing. --- paper_title: CHOMP: Gradient optimization techniques for efficient motion planning paper_content: Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. 
In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot. --- paper_title: Reactive mobile manipulation using dynamic trajectory tracking paper_content: A solution to the trajectory tracking problem for mobile manipulators is proposed, that allows for the base to be influenced by a reactive, obstacle avoidance behavior. Given a trajectory for the gripper to follow, a tracking algorithm for the manipulator is designed, and at the same time the base motions are generated in such a way that the base is coordinated with the gripper. Furthermore, it is shown that the method allows arbitrary upper and lower bounds on the gripper-base distance to be set and this can be achieved without introducing deadlocks into the system. The solution also ensures that the control effort, spent on slow base motions, is kept small. --- paper_title: A Human-Aware Manipulation Planner paper_content: With recent advances in safe and compliant hardware and control, robots are close to finding their places in our homes. As the safety barrier between humans and robots is beginning to fade, the necessity to design pertinent robot behavior in human environments is becoming a crucial step. In order to obtain a safe, comfortable, and socially acceptable interaction, the robot should be engineered from top to bottom by considering the presence of the human. In this paper, we present a manipulation planning framework and its implementation human-aware manipulation planner. This planner generates paths not only safe but comfortable and “socially acceptable” as well by reasoning explicitly on human's kinematics, vision field, posture, and preferences. The planner, which is applied into “robot handing over an object” scenarios, breaks the human centric interaction that depends mostly on human effort and allows the robot to take initiative by computing automatically where the interaction takes place, thus decreasing the cognitive weight of interaction on human side. --- paper_title: Collision detection system for manipulator based on adaptive impedance control law paper_content: In this paper, we propose a collision detection system based on a nonlinear adaptive impedance control law. The collision detection detects collisions of a manipulator with its environment without using external sensors. The adaptive impedance control law is employed to estimate the dynamic parameters of the manipulators, and allows the manipulator to have interaction with its environment. 
The system detects collisions based on the difference between the actual input torque to the manipulator and the reference input torque, which is calculated based on the estimated parameters of the manipulator dynamics. The manipulator stops when a collision is detected. The collision detection system is implemented in an industrial manipulator, and experimental results illustrate the validity of the proposed system. --- paper_title: Real-Time Obstacle Avoidance for Manipulators and Mobile Robots paper_content: This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the artificial potential field concept. Collision avoidance, traditionally considered a high level planning problem, can be effectively distributed between different levels of control, allowing real-time robot operations in a complex environment. This method has been extended to moving obstacles by using a time-varying artificial potential field. We have applied this obstacle avoidance scheme to robot arm mechanisms and have used a new approach to the general problem of real-time manipulator control. We reformulated the manipulator control problem as direct control of manipulator motion in operational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kinematic transformation. Outside the obstacles' regions of influence, we caused the end effector to move in a straight line with an... --- paper_title: STOMP: Stochastic trajectory optimization for motion planning paper_content: We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produce an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in. --- paper_title: Human-robot collaborative manipulation planning using early prediction of human motion paper_content: In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. 
Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration. --- paper_title: Global path planning using artificial potential fields paper_content: The author describes a path planning technique for robotic manipulators and mobile robots in the presence of stationary obstacles. The planning consists of applying potential fields around configuration-space obstacles and using these fields to select a safe path for the robot to follow. The advantage of using potential fields in path planning is that they offer a relatively fast and effective way to solve for safe paths around obstacles. In the proposed method of path planning, a trial path is chosen and then modified under the influence of the potential field until an appropriate path is found. By considering the entire path, the problem of being trapped in a local minimum is greatly reduced, allowing the method to be used for global planning. The algorithm was tried with success on many different realistic planning problems. By way of illustration, the algorithm is applied to a two-dimensional revolute manipulator, a mobile robot capable of translation only, and a mobile robot capable of both translation and rotation. --- paper_title: Fast vision-based minimum distance determination between known and unknown objects paper_content: We present a method for quickly determining the minimum distance between multiple known and multiple unknown objects within a camera image. Known objects are objects with known geometry, position, orientation, and configuration. Unknown objects are objects which have to be detected by a vision sensor but with unknown geometry, position, orientation and configuration. The known objects are modeled and expanded in 3D and then projected into a camera image. The camera image is classified into object areas including known and unknown objects and into non-object areas. The distance is conservatively estimated by searching for the largest expansion radius where the projected model does not intersect the object areas classified as unknown in the camera image. The method requires only minimal computation times and can be used for surveillance and safety applications. --- paper_title: Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm paper_content: A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported.
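Several of the collision-detection entries above (notably the DLR-III work just cited, and the momentum-based methods that recur later in this reference list) rely on a generalized-momentum residual that acts as a first-order filtered estimate of the external joint torque. The sketch below is not code from any of the cited papers; it is a minimal, single-joint illustration of that observer, with an arbitrary gain, threshold, and toy plant chosen only for the example.

```python
import numpy as np

class MomentumObserver:
    """Generalized-momentum residual observer: r follows tau_ext with first-order dynamics."""
    def __init__(self, n_joints, gain=50.0):
        self.K = gain * np.eye(n_joints)
        self.integral = None          # running integral of (tau_m + C^T qd - g + r)
        self.r = np.zeros(n_joints)   # residual, approximates the external joint torque
        self.p0 = None                # initial generalized momentum

    def update(self, p, tau_m, coriolis_T_qd, gravity, dt):
        # p = M(q) qd is the generalized momentum at the current sample
        if self.p0 is None:
            self.p0 = p.copy()
            self.integral = np.zeros_like(p)
        self.integral = self.integral + (tau_m + coriolis_T_qd - gravity + self.r) * dt
        self.r = self.K @ (p - self.p0 - self.integral)
        return self.r

# toy single-joint demo: unit inertia, no gravity/Coriolis, external torque pulse at t in [1.0, 1.5) s
if __name__ == "__main__":
    dt, m = 1e-3, 1.0
    obs = MomentumObserver(1, gain=50.0)
    qd = np.zeros(1)
    for k in range(3000):
        t = k * dt
        tau_m = np.array([0.2 * np.sin(t)])                    # commanded motor torque
        tau_ext = np.array([1.0 if 1.0 <= t < 1.5 else 0.0])   # unknown collision torque
        qdd = (tau_m + tau_ext) / m                            # simulated plant
        qd = qd + qdd * dt
        r = obs.update(m * qd, tau_m, np.zeros(1), np.zeros(1), dt)
        if abs(r[0]) > 0.5:                                    # simple detection threshold
            print(f"collision flagged at t={t:.3f} s, residual={r[0]:.2f} Nm")
            break
```

In this toy run the 1 Nm pulse applied at t = 1 s drives the residual above the 0.5 Nm threshold within a few tens of milliseconds, which is the behavior that the residual-based detectors cited above exploit, without joint acceleration or external force sensing.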
--- paper_title: Sensorless Robot Collision Detection and Hybrid Force/Motion Control paper_content: We consider the problem of real-time detection of collisions between a robot manipulator and obstacles of unknown geometry and location in the environment without the use of extra sensors. The idea is to handle a collision at a generic point along the robot as a fault of its actuating system. A previously developed dynamic FDI (fault detection and isolation) technique is used, which does not require acceleration or force measurements. The actual robot link that has collided can also be identified. Once contact has been detected, it is possible to switch to a suitably defined hybrid force/motion controller that enables to keep the contact, while sliding on the obstacle, and to regulate the interaction force. Simulation results are shown for a two-link planar robot. --- paper_title: The role of the robot mass and velocity in physical human-robot interaction - Part II: Constrained blunt impacts paper_content: Accidents occurring with classical industrial robots often lead to fatal injuries. Presumably, this is to a great extent caused by the possibility of clamping the human in the confined workspace of the robot. Before generally allowing physical cooperation of humans and robots in future applications it is therefore absolutely crucial to analyze this extremely dangerous situation. In this paper we will investigate many aspects relevant to this sort of injury mechanisms and discuss the importance to domestic environments or production assistants. Since clamped impacts are intrinsically more dangerous than free ones it is fundamental to discuss and evaluate metrics to ensure safe interaction if clamping is possible. We compare various robots with respect to their injury potential leading to a main safety requirement of robot design: Reduce the intrinsic injury potential of a robot by reducing its weight. --- paper_title: Using tactile sensation for learning contact knowledge: Discriminate collision from physical interaction paper_content: Detecting and interpreting contacts is a crucial aspect of physical Human-Robot Interaction. In order to discriminate between intended and unintended contact types, we derive a set of linear and non-linear features based on physical contact model insights and from observing real impact data that may even rely on proprioceptive sensation only. We implement a classification system with a standard non-linear Support Vector Machine and show empirically both in simulations and on a real robot the high accuracy in off- as well as on-line settings of the system. We argue that these successful results are based on our feature design derived from first principles. --- paper_title: Exploiting robot redundancy in collision detection and reaction paper_content: We present a method that allows automatic reaction of a robot to physical collisions, while preserving as much as possible the execution of a Cartesian task for which the robot is kinematically redundant. The work is motivated by human-robot interaction scenarios, where ensuring safety is of primary concern whereas preserving task performance is an appealing secondary goal. Unexpected collisions may occur anywhere along the manipulator structure. Their fast detection is realized using our previous momentum-based method, which does not require any external sensing. 
The reaction torque applied to the joints reduces the effective robot inertia seen at the contact and lets the robot safely move away from the collision area. If we wish, however, to continue the execution of a Cartesian trajectory, robot redundancy can be exploited by projecting the reaction torque into the null space of a dynamic task matrix so as not to affect the original end-effector motion. This leads to the use of the so-called dynamically consistent approach to redundancy resolution, which is further elaborated in the paper. A partial task relaxation strategy can also be devised, with the objective of keeping contact forces below a user-defined safety threshold. Simulation results are reported for the 7R KUKA/DLR lightweight robot arm. --- paper_title: Modular state-based behavior control for safe human-robot interaction: A lightweight control architecture for a lightweight robot paper_content: In this paper we present a novel control architecture for realizing human-friendly behaviors and intuitive state based programming. The design implements strategies that take advantage of sophisticated soft-robotics features for providing reactive, robust, and safe robot actions in dynamic environments. Quick access to the various functionality of the robot enables the user to develop flexible hybrid state automata for programming complex robot behaviors. The real-time robot control takes care of all safety critical aspects and provides reactive reflexes that directly respond to external stimuli. --- paper_title: Collision detection and reaction: A contribution to safe physical Human-Robot Interaction paper_content: In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented. --- paper_title: External force estimation using joint torque sensors and its application to impedance control of a robot manipulator paper_content: This paper proposes an algorithm to estimate external forces exerted on the end-effector of a robot manipulator using information from joint torque sensors (JTS). The algorithm is the combination of Time Delay Estimation (TDE) and input estimation technique where the external force is considered as an unknown input to the robot manipulator. Based on TDE's idea, the estimator which does not require an accurate dynamics model of the robot manipulator is developed.
The simultaneous input and state estimation (SISE) is used to reduce not only nonlinear uncertainties of the robot dynamics but also the noise of measurements. The performance of the proposed estimation algorithm is evaluated through an application to impedance control of a four degree-of-freedom manipulator. --- paper_title: Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm paper_content: A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported. --- paper_title: An adapt-and-detect actuator FDI scheme for robot manipulators paper_content: An adaptive scheme is presented for actuator fault detection and isolation (FDI) in robotic systems, based on the use of generalized momenta and of a suitable overparametrization of the uncertain robot dynamics. This allows to obtain an accurate and reliable detection and isolation of possibly concurrent faults also during the parameter adaptation phase. Experimental results are reported for a planar robot under gravity, considering partial, total, or bias-type failures of the motor torques. --- paper_title: Simultaneous estimation of aerodynamic and contact forces in flying robots: Applications to metric wind estimation and collision detection paper_content: In this paper, we extend our previous external wrench estimation scheme for flying robots with an aerodynamic model such that we are able to simultaneously estimate aerodynamic and contact forces online. This information can be used to identify the metric wind velocity vector via model inversion. Noticeably, we are still able to accurately sense collision forces at the same time. Discrimination between the two is achieved by identifying the natural contact frequency characteristics for both “interaction cases”. This information is then used to design suitable filters that are able to separate the aerodynamic from the collision forces for subsequent use. Now, the flying system is able to correctly respond to typical contact forces and does not accidentally “hallucinate” contacts due to a misinterpretation of wind disturbances. Overall, this paper generalizes our previous results towards significantly more complex environments. --- paper_title: Collision detection and reaction: A contribution to safe physical Human-Robot Interaction paper_content: In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. 
Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented. --- paper_title: Cartesian contact force estimation for robotic manipulators using Kalman filters and the generalized momentum paper_content: Estimating contact forces and torques in Cartesian space enables force-controlled robotic applications as well as collision detection without costly additional sensing. A new approach towards online estimation of contact forces and torques at the tool center point from motor torques as well as joint angles and speeds is presented, which is based on analyzing the generalized momentum of the manipulator. Existing generalized momentum based methods are extended by designing a Kalman filter to estimate the generalized momentum as well as contact forces and torques simultaneously. This approach introduces additional degrees of freedom in the design of the estimator that are exploited to increase robustness with respect to disturbances such as uncertainty in joint friction. The method is verified by simulation results obtained from a dynamic model of an ABB YuMi, a dual-arm collaborative robot with 7DOF each arm. --- paper_title: A Mathematical Introduction to Robotic Manipulation paper_content: INTRODUCTION: Brief History. Multifingered Hands and Dextrous Manipulation. Outline of the Book. Bibliography. RIGID BODY MOTION: Rigid Body Transformations. Rotational Motion in R3. Rigid Motion in R3. Velocity of a Rigid Body. Wrenches and Reciprocal Screws. MANIPULATOR KINEMATICS: Introduction. Forward Kinematics. Inverse Kinematics. The Manipulator Jacobian. Redundant and Parallel Manipulators. ROBOT DYNAMICS AND CONTROL: Introduction. Lagrange's Equations. Dynamics of Open-Chain Manipulators. Lyapunov Stability Theory. Position Control and Trajectory Tracking. Control of Constrained Manipulators. MULTIFINGERED HAND KINEMATICS: Introduction to Grasping. Grasp Statics. Force-Closure. Grasp Planning. Grasp Constraints. Rolling Contact Kinematics. HAND DYNAMICS AND CONTROL: Lagrange's Equations with Constraints. Robot Hand Dynamics. Redundant and Nonmanipulable Robot Systems. Kinematics and Statics of Tendon Actuation. Control of Robot Hands. NONHOLONOMIC BEHAVIOR IN ROBOTIC SYSTEMS: Introduction. Controllability and Frobenius' Theorem. Examples of Nonholonomic Systems. Structure of Nonholonomic Systems. NONHOLONOMIC MOTION PLANNING: Introduction. Steering Model Control Systems Using Sinusoids. General Methods for Steering. Dynamic Finger Repositioning. FUTURE PROSPECTS: Robots in Hazardous Environments. Medical Applications for Multifingered Hands. Robots on a Small Scale: Microrobotics. APPENDICES: Lie Groups and Robot Kinematics. A Mathematica Package for Screw Calculus. Bibliography.
Index Each chapter also includes a Summary, Bibliography, and Exercises --- paper_title: Variable impedance actuators: A review paper_content: Variable Impedance Actuators (VIA) have received increasing attention in recent years as many novel applications involving interactions with an unknown and dynamic environment including humans require actuators with dynamics that are not well-achieved by classical stiff actuators. This paper presents an overview of the different VIAs developed and proposes a classification based on the principles through which the variable stiffness and damping are achieved. The main classes are active impedance by control, inherent compliance and damping actuators, inertial actuators, and combinations of them, which are then further divided into subclasses. This classification allows for designers of new devices to orientate and take inspiration and users of VIA's to be guided in the design and implementation process for their targeted application. --- paper_title: The KUKA-DLR Lightweight Robot arm - a new reference platform for robotics research and manufacturing paper_content: Transforming research results into marketable products requires considerable endurance and a strong sense of entrepreneurship. The KUKA Lightweight Robot (LWR) is the latest outcome of a bilateral research collaboration between KUKA Roboter, Augsburg, and the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), Wessling. The LWR has unique characteristics including a low mass-payload ratio and a programmable, active compliance which enables researchers and engineers to develop new industrial and service robotics applications with unprecedented performance, making it a unique reference platform for robotics research and future manufacturing. The stages of product genesis, the most innovative features and first application examples are presented. --- paper_title: An Acceleration-based State Observer for Robot Manipulators with Elastic Joints paper_content: Robots that use cycloidal gears, belts, or long shafts for transmitting motion from the motors to the driven rigid links display visco-elastic phenomena that can be assumed to be concentrated at the joints. For the design of advanced, possibly nonlinear, trajectory tracking control laws that are able to fully counteract the vibrations due to joint elasticity, full state feedback is needed. However, no robot with elastic joints has sensors available for its whole state, i.e., for measuring positions and velocities of both motors and links. Several nonlinear observers have been proposed in the past, assuming different reduced sets of measurements. We introduce here a new observer which uses only motor position sensing, together with accelerometers suitably mounted on the links of the robot arm. Its main advantage is that the error dynamics on the estimated state is independent from the dynamic parameters of the robot links, and can be tuned with standard decentralized linear techniques (locally to each joint). We present an experimental validation of this observer for the three base joints of a KUKA KR15/2 industrial robot and illustrate the control use of the obtained results. --- paper_title: Collision detection and reaction: A contribution to safe physical Human-Robot Interaction paper_content: In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. 
Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented. --- paper_title: Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm paper_content: A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported. --- paper_title: Sensorless Robot Collision Detection and Hybrid Force/Motion Control paper_content: We consider the problem of real-time detection of collisions between a robot manipulator and obstacles of unknown geometry and location in the environment without the use of extra sensors. The idea is to handle a collision at a generic point along the robot as a fault of its actuating system. A previously developed dynamic FDI (fault detection and isolation) technique is used, which does not require acceleration or force measurements. The actual robot link that has collided can also be identified. Once contact has been detected, it is possible to switch to a suitably defined hybrid force/motion controller that enables to keep the contact, while sliding on the obstacle, and to regulate the interaction force. Simulation results are shown for a two-link planar robot. --- paper_title: A modified newton-euler method for dynamic computations in robot fault detection and control paper_content: We present a modified recursive Newton-Euler method for computing some dynamic expressions that arise in two problems of fault detection and control of serial robot manipulators, and which cannot be evaluated numerically using the standard method. The two motivating problems are: i) the computation of the residual vector that allows accurate detection of actuator faults or unexpected collisions using only robot proprioceptive measurements, and ii) the evaluation of a passivity-based trajectory tracking control law.
The modified Newton-Euler algorithm generates factorization matrices of the Coriolis and centrifugal terms that satisfy the skew-symmetric property. The computational advantages with respect to numerical evaluation of symbolically obtained dynamic expressions is illustrated on a 7R DLR lightweight manipulator. --- paper_title: Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm paper_content: A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported. --- paper_title: Collision detection and reaction: A contribution to safe physical Human-Robot Interaction paper_content: In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented. --- paper_title: Localizing external contact using proprioceptive sensors: The Contact Particle Filter paper_content: In order for robots to interact safely and intelligently with their environment they must be able to reliably estimate and localize external contacts. This paper introduces CPF, the Contact Particle Filter, which is a general algorithm for detecting and localizing external contacts on rigid body robots without the need for external sensing. CPF finds external contact points that best explain the observed external joint torque, and returns sensible estimates even when the external torque measurement is corrupted with noise. We demonstrate the capability of the CPF to track multiple external contacts on a simulated Atlas robot, and compare our work to existing approaches.
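The Contact Particle Filter entry above frames collision isolation as finding the contact location that best explains the observed external joint torque. The fragment below is only a brute-force, planar toy version of that idea (a 2R arm with assumed link lengths l1, l2, candidate contact points sampled along each link, and the contact force constrained to the link normal); CPF itself uses a particle filter over full 3D rigid-body models, which this sketch does not attempt to reproduce.

```python
import numpy as np

l1, l2 = 0.5, 0.4          # assumed link lengths of a planar 2R arm [m]
q1, q2 = 0.4, 0.6          # current joint configuration [rad]
s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)

def contact_jacobian(link, d):
    """2x2 Jacobian of a contact point located d metres along the given link."""
    if link == 1:
        return np.array([[-d * s1, 0.0],
                         [ d * c1, 0.0]])
    return np.array([[-l1 * s1 - d * s12, -d * s12],
                     [ l1 * c1 + d * c12,  d * c12]])

def link_normal(link):
    # unit normal of the link segment in the plane (force assumed purely normal)
    return np.array([-s1, c1]) if link == 1 else np.array([-s12, c12])

# ground truth used to fabricate the "measured" external torque: 2 N normal push, 0.35 m along link 2
d_true = 0.35
tau_ext = contact_jacobian(2, d_true).T @ (2.0 * link_normal(2))

best = None
for link, length in ((1, l1), (2, l2)):
    for d in np.linspace(0.02, length, 40):
        g = contact_jacobian(link, d).T @ link_normal(link)   # joint torque per unit normal force
        alpha = float(g @ tau_ext) / float(g @ g)             # least-squares force magnitude
        err = np.linalg.norm(alpha * g - tau_ext)             # how well this hypothesis explains tau_ext
        if best is None or err < best[0]:
            best = (err, link, d, alpha)

err, link, d, alpha = best
print(f"best hypothesis: link {link}, d = {d:.3f} m, |f| = {alpha:.2f} N (fit error {err:.2e})")
```

Despite the heavy simplifications, the scan recovers the simulated contact on link 2 near d = 0.35 m, which mirrors the isolation step the CPF abstract describes.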
--- paper_title: Collision detection, isolation and identification for humanoids paper_content: High-performance collision handling, which is divided into the five phases detection, isolation, estimation, classification and reaction, is a fundamental robot capability for safe and sensitive operation/interaction in unknown environments. For complex humanoid robots collision handling is obviously significantly more complex than for classical static manipulators. In particular, the robot stability during the collision reaction phase has to be carefully designed and relies on high fidelity contact information that is generated during the first three phases. In this paper, a unified realtime algorithm is presented for determining unknown contact forces and contact locations for humanoid robots based on proprioceptive sensing only, i.e. joint position, velocity and torque, as well as force/torque sensing along the structure. The proposed scheme is based on nonlinear model-based momentum observers that are able to recover the unknown contact forces and the respective locations. The dynamic loads acting on internal force/torque sensors are also corrected based on a novel nonlinear compensator. The theoretical capabilities of the presented methods are evaluated in simulation with the Atlas robot. In summary, we propose a full solution to the problem of collision detection, collision isolation and collision identification for the general class of humanoid robots. --- paper_title: Localizing external contact using proprioceptive sensors: The Contact Particle Filter paper_content: In order for robots to interact safely and intelligently with their environment they must be able to reliably estimate and localize external contacts. This paper introduces CPF, the Contact Particle Filter, which is a general algorithm for detecting and localizing external contacts on rigid body robots without the need for external sensing. CPF finds external contact points that best explain the observed external joint torque, and returns sensible estimates even when the external torque measurement is corrupted with noise. We demonstrate the capability of the CPF to track multiple external contacts on a simulated Atlas robot, and compare our work to existing approaches. --- paper_title: Estimation of contact forces using a virtual force sensor paper_content: Physical human-robot collaboration is characterized by a suitable exchange of contact forces between human and robot, which can occur in general at any point along the robot structure. If the contact location and the exchanged forces were known in real time, a safe and controlled collaboration could be established. We present a novel approach that allows localizing the contact between a robot and human parts with a depth camera, while determining in parallel the joint torques generated by the physical interaction using the so-called residual method. The combination of such exteroceptive sensing and model-based techniques is sufficient, under suitable conditions, for a reliable estimation of the actual exchanged force at the contact, realizing thus a virtual force sensor. Multiple contacts can be handled as well. We validate quantitatively the proposed estimation method with a number of static experiments on a KUKA LWR. An illustration of the use of estimated contact forces in the realization of collaborative behaviors is given, reporting preliminary experiments on a generalized admittance control scheme at the contact point.
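Using generic symbols rather than the notation of the cited paper, the identification step described in the virtual-force-sensor entry above can be summarized as inverting the contact Jacobian transpose once the contact location is known (for example from the depth camera):

$$
r \;\approx\; \tau_{\text{ext}} \;=\; J_c^{\top}(q)\,F_{\text{ext}}
\qquad\Longrightarrow\qquad
\hat F_{\text{ext}} \;=\; \big(J_c^{\top}(q)\big)^{+}\, r ,
$$

where r is the momentum-observer residual, J_c(q) is the Jacobian of the observed contact point, and the Moore-Penrose pseudoinverse recovers only those wrench components that actually produce joint torques.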
--- paper_title: New insights concerning intrinsic joint elasticity for safety paper_content: In this paper we present various new insights on the effect intrinsic joint elasticity has on safety in pHRI. We address the fact that the intrinsic safety of elastic mechanisms has been discussed rather one sided in favor of this new designs and intend to give a more differentiated view on the problem. An important result is that intrinsic joint elasticity does not reduce the Head Injury Criterion or impact forces compared to conventional actuation with some considerable elastic behavior in the joint, if considering full scale robots. We also elaborate conditions under which intrinsically compliant actuation is potentially more dangerous than rigid one. Furthermore, we present collision detection and reaction schemes for such mechanisms and verify their effectiveness experimentally. --- paper_title: Nonlinear decoupled motion-stiffness control and collision detection/reaction for the VSA-II variable stiffness device paper_content: Variable Stiffness Actuation (VSA) devices are being used to jointly address the issues of safety and performance in physical human-robot interaction. With reference to the VSA-II prototype, we present a feedback linearization approach that allows the simultaneous decoupling and accurate tracking of motion and stiffness reference profiles. The operative condition that avoids control singularities is characterized. Moreover, a momentum-based collision detection scheme is introduced, which does not require joint torque sensing nor information on the time-varying stiffness of the device. Based on the residual signal, a collision reaction strategy is presented that takes advantage of the proposed nonlinear control to rapidly let the arm bounce away after detecting the impact, while limiting contact forces through a sudden reduction of the stiffness. Simulations results are reported to illustrate the performance and robustness of the overall approach. Extensions to the multidof case of robot manipulators equipped with VSA-II devices are also considered. --- paper_title: Friction observer and compensation for control of robots with joint torque measurement paper_content: In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach. --- paper_title: Series elastic actuators paper_content: It is traditional to make the interface between an actuator and its load as stiff as possible. Despite this tradition, reducing interface stiffness offers a number of advantages, including greater shock tolerance, lower reflected inertia, more accurate and stable force control, less inadvertent damage to the environment, and the capacity for energy storage. As a trade-off, reducing interface stiffness also lowers zero motion force bandwidth. 
In this paper, the authors propose that for natural tasks, zero motion force bandwidth isn't everything, and incorporating series elasticity as a purposeful element within the actuator is a good idea. The authors use the term elasticity instead of compliance to indicate the presence of a passive mechanical spring in the actuator. After a discussion of the trade-offs inherent in series elastic actuators, the authors present a control system for their use under general force or impedance control. The authors conclude with test results from a revolute series-elastic actuator meant for the arms of the MIT humanoid robot Cog and for a small planetary rover. --- paper_title: Inertial Properties in Robotic Manipulation: An Object-Level Framework paper_content: Consideration of dynamics is critical in the analysis, design, and control of robot systems. This article presents an extensive study of the dynamic properties of several important classes of robotic structures and proposes a number of general dynamic strategies for their coordination and control. This work is a synthesis of both previous and new results developed within the task-oriented operational space formulation. Here we introduce a unifying framework for the analysis and control of robotic systems, beginning with an analysis of inertial properties based on two models that independently describe the mass and inertial characteristics associated with linear and angular motions. To visualize these properties, we propose a new geometric representation, termed the belted ellipsoid, that displays the magnitudes of the mass/inertial properties directly rather than their square roots. Our study of serial macro/mini structures is based on two models of redundant mechanisms. The first is a description of th... --- paper_title: Collision detection and reaction: A contribution to safe physical Human-Robot Interaction paper_content: In the framework of physical human-robot interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring safety to the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when being able to naturally stop the robot in autonomous motion, a quantitative analysis of different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reactions methods prove to work very reliably and are effective in reducing contact forces far below any level which is dangerous to humans. Evaluations of impacts between robot and human arm or chest up to a maximum robot velocity of 2.7 m/s are presented. --- paper_title: The KUKA-DLR Lightweight Robot arm - a new reference platform for robotics research and manufacturing paper_content: Transforming research results into marketable products requires considerable endurance and a strong sense of entrepreneurship. The KUKA Lightweight Robot (LWR) is the latest outcome of a bilateral research collaboration between KUKA Roboter, Augsburg, and the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), Wessling.
The LWR has unique characteristics including a low mass-payload ratio and a programmable, active compliance which enables researchers and engineers to develop new industrial and service robotics applications with unprecedented performance, making it a unique reference platform for robotics research and future manufacturing. The stages of product genesis, the most innovative features and first application examples are presented. ---
Title: Robot Collisions: A Survey on Detection, Isolation, and Identification Section 1: Introduction Description 1: Introduce the topic of robot collisions and the motivation behind studying detection, isolation, and identification of these collisions. Section 2: Collision Event Pipeline and State of the Art Description 2: Discuss the proposed collision event pipeline, detailing the phases from precollision to postcollision, and review the current state of the art in robot collision handling. Section 3: Contribution Description 3: Outline the unique contributions of the paper, including the proposed collision event pipeline, developed methods, and experimental comparisons. Section 4: Robot Dynamics and Properties Description 4: Summarize the dynamic modeling and relevant properties of rigid and flexible joint robots, incorporating collision dynamics. Section 4.1: Rigid Robots Description 4.1: Discuss the dynamics and properties specific to rigid robots. Section 4.2: Robots with Flexible Joints Description 4.2: Discuss the dynamics and properties specific to robots with flexible joints. Section 5: Collision Monitoring Methods Description 5: Describe various collision monitoring methods such as energy observers, torque estimations, and momentum-based observers, highlighting their advantages and disadvantages. Section 5.1: Estimation of Power via Energy Observer Description 5.1: Explain the use of energy observers to estimate external joint torque during collisions. Section 5.2: Direct Estimation of External Joint Torque Description 5.2: Discuss methods for algebraic estimation of external joint torque. Section 5.3: Monitoring via Inverse Dynamics Description 5.3: Outline the method of using inverse dynamics to monitor collisions. Section 5.4: Estimation via Joint Velocity Observer Description 5.4: Detail the use of joint velocity observers for collision detection. Section 5.5: Estimation via Momentum Observer Description 5.5: Explain the momentum observer approach to collision detection and the advantages this method offers. Section 5.6: Computational Issues Description 5.6: Discuss computational considerations for implementing the discussed monitoring methods. Section 6: Detection, Isolation, and Identification Description 6: Review methods for collision detection, isolation, and identification explaining how each phase can be addressed and the role of monitoring signals. Section 6.1: Detection Description 6.1: Explain how collision detection is determined using monitoring functions and address robustness issues. Section 6.2: Isolation Description 6.2: Detail the process of identifying the specific robot link involved in a collision. Section 6.3: Identification Description 6.3: Discuss techniques to estimate the collision joint torques and contact wrenches acting on the robot structure. Section 7: Comparison of Methods for Rigid Robots Description 7: Provide a systematic comparison of various collision monitoring methods, considering their computational effort, required measurements, and performance. Section 8: Extension to Robots with Flexible Joints Description 8: Explain how the collision monitoring methods can be extended to robots with flexible joints and discuss the different types of flexible joint robot implementations. Section 9: Experimental Evaluation Description 9: Present experimental results to validate the theoretical findings and compare the performance of different monitoring methods. 
Section 9.1: Observer Dynamics Description 9.1: Compare the dynamics and performance of velocity and momentum observers. Section 9.2: Observer Errors and Thresholding Description 9.2: Discuss the impact of various errors and the process of threshold tuning for robust collision detection. Section 9.3: Performance of the Link Momentum Observer Description 9.3: Analyze the performance, timing properties, and practical aspects of the link momentum observer for collision detection. Section 9.4: Collision Detection with an Energy Observer Description 9.4: Showcase the performance of the energy observer in detecting collisions during various robot activities. Section 9.5: Collision Detection/Isolation with a Link Momentum Observer Description 9.5: Validate the effectiveness of the link momentum observer for both detecting and isolating collisions through experimental setups. Section 9.6: Estimating the External Force and Contact Point Description 9.6: Demonstrate the capability of estimating external forces and the exact contact point during collisions. Section 10: Conclusion Description 10: Summarize the findings, emphasize the significance of the collision event pipeline, and highlight the practical implications of the developed methods for real-world applications.
A Survey of Potential Security Issues in Existing Wireless Sensor Network Protocols
11
--- paper_title: An Overview of Privacy and Security Issues in the Internet of Things paper_content: While the general definition of the Internet of Things (IoT) is almost mature, roughly defining it as an information network connecting virtual and physical objects, there is a consistent lack of consensus around technical and regulatory solutions. There is no doubt, though, that the new paradigm will bring forward a completely new host of issues because of its deep impact on all aspects of human life. In this work, the authors outline the current technological and technical trends and their impacts on the security, privacy, and governance. The work is split into short- and long-term analysis where the former is focused on already or soon available technology, while the latter is based on vision concepts. Also, an overview of the vision of the European Commission on this topic will be provided. --- paper_title: Network Challenges for Cyber Physical Systems with Tiny Wireless Devices: A Case Study on Reliable Pipeline Condition Monitoring paper_content: The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. 
As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: Towards an Analysis of Security Issues, Challenges, and Open Problems in the Internet of Things paper_content: The Internet of Things (IoT) devices have become popular in diverse domains such as e-Health, e-Home, e-Commerce, and e-Trafficking, etc. With increased deployment of IoT devices in the real world, they can be, and in some cases, already are subject to malicious attacks to compromise the security and privacy of the IoT devices. While a number of researchers have explored such security challenges and open problems in IoT, there is an unfortunate lack of a systematic study of the security challenges in the IoT landscape. In this paper, we aim at bridging this gap by conducting a thorough analysis of IoT security challenges and problems. We present a detailed analysis of IoT attack surfaces, threat models, security issues, requirements, forensics, and challenges. We also provide a set of open problems in IoT security and privacy to guide the attention of researchers into solving the most critical problems. --- paper_title: Cyber-physical systems: the next computing revolution paper_content: Cyber-physical systems (CPS) are physical and engineered systems whose operations are monitored, coordinated, controlled and integrated by a computing and communication core. Just as the internet transformed how humans interact with one another, cyber-physical systems will transform how we interact with the physical world around us. Many grand challenges await in the economically vital domains of transportation, health-care, manufacturing, agriculture, energy, defense, aerospace and buildings. The design, construction and verification of cyber-physical systems pose a multitude of technical challenges that must be addressed by a cross-disciplinary community of researchers and educators. --- paper_title: A survey of security issues in wireless sensor networks paper_content: Wireless Sensor Networks (WSNs) are used in many applications in military, ecological, and health-related areas. These applications often include the monitoring of sensitive information such as enemy movement on the battlefield or the location of personnel in a building. Security is therefore important in WSNs. However, WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the use of insecure wireless communication channels. These constraints make security in WSNs a challenge. In this article we present a survey of security issues in WSNs. First we outline the constraints, security requirements, and attacks with their corresponding countermeasures in WSNs. We then present a holistic view of security issues. These issues are classified into five categories: cryptography, key management, secure routing, secure data aggregation, and intrusion detection. 
Along the way we highlight the advantages and disadvantages of various WSN security protocols and further compare and evaluate these protocols based on each of these five categories. We also point out the open research issues in each subarea and conclude with possible future research directions on security in WSNs. --- paper_title: Cyber–Physical Security of a Smart Grid Infrastructure paper_content: It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure. --- paper_title: New Generation Sensor Web Enablement paper_content: Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. --- paper_title: Towards an Analysis of Security Issues, Challenges, and Open Problems in the Internet of Things paper_content: The Internet of Things (IoT) devices have become popular in diverse domains such as e-Health, e-Home, e-Commerce, and e-Trafficking, etc. With increased deployment of IoT devices in the real world, they can be, and in some cases, already are subject to malicious attacks to compromise the security and privacy of the IoT devices. While a number of researchers have explored such security challenges and open problems in IoT, there is an unfortunate lack of a systematic study of the security challenges in the IoT landscape. 
In this paper, we aim at bridging this gap by conducting a thorough analysis of IoT security challenges and problems. We present a detailed analysis of IoT attack surfaces, threat models, security issues, requirements, forensics, and challenges. We also provide a set of open problems in IoT security and privacy to guide the attention of researchers into solving the most critical problems. --- paper_title: Protocols for self-organization of a wireless sensor network paper_content: We present a suite of algorithms for self-organization of wireless sensor networks in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. --- paper_title: Principles of Information Security paper_content: Explore the field of information security and assurance with this valuable resource that focuses on both the managerial and technical aspects of the discipline. Principles of Information Security, Third Edition builds on internationally recognized standards and bodies of knowledge to provide the knowledge and skills that information systems students need for their future roles as business decision-makers. Coverage includes key knowledge areas of the CISSP (Certified Information Systems Security Professional), as well as risk management, cryptography, physical security, and more. The third edition has retained the real-world examples and scenarios that made previous editions so successful, but has updated the content to reflect technologys latest capabilities and trends. With this emphasis on currency and comprehensive coverage, readers can feel confident that they are using a standards-based, content-driven resource to prepare them for their work in the field. --- paper_title: A Survey on Wireless Security: Technical Challenges, Recent Advances and Future Trends paper_content: This paper examines the security vulnerabilities and threats imposed by the inherent open nature of wireless communications and to devise efficient defense mechanisms for improving the wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity and availability issues. Next, a comprehensive overview of security attacks encountered in wireless networks is presented in view of the network protocol architecture, where the potential security threats are discussed at each protocol layer. We also provide a survey of the existing security protocols and algorithms that are adopted in the existing wireless network standards, such as the Bluetooth, Wi-Fi, WiMAX, and the long-term evolution (LTE) systems. Then, we discuss the state-of-the-art in physical-layer security, which is an emerging technique of securing the open communications environment against eavesdropping attacks at the physical layer. We also introduce the family of various jamming attacks and their counter-measures, including the constant jammer, intermittent jammer, reactive jammer, adaptive jammer and intelligent jammer. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. 
Finally, some technical challenges which remain unresolved at the time of writing are summarized and the future trends in wireless security are discussed. --- paper_title: TinySec: a link layer security architecture for wireless sensor networks paper_content: We introduce TinySec, the first fully-implemented link layer security architecture for wireless sensor networks. In our design, we leverage recent lessons learned from design vulnerabilities in security protocols for other wireless networks such as 802.11b and GSM. Conventional security protocols tend to be conservative in their security guarantees, typically adding 16--32 bytes of overhead. With small memories, weak processors, limited energy, and 30 byte packets, sensor networks cannot afford this luxury. TinySec addresses these extreme resource constraints with careful design; we explore the tradeoffs among different cryptographic primitives and use the inherent sensor network limitations to our advantage when choosing parameters to find a sweet spot for security, packet overhead, and resource requirements. TinySec is portable to a variety of hardware and radio platforms. Our experimental results on a 36 node distributed sensor network application clearly demonstrate that software based link layer protocols are feasible and efficient, adding less than 10% energy, latency, and bandwidth overhead. --- paper_title: SPINS: security protocols for sensor networks paper_content: As sensor networks edge closer towards wide-spread deployment, security issues become a central concern. So far, much research has focused on making sensor networks feasible and useful, and has not concentrated on security. We present a suite of security building blocks optimized for resource-constrained environments and wireless communication. SPINS has two secure building blocks: SNEP and μTESLA SNEP provides the following important baseline security primitives: Data confidentiality, two-party data authentication, and data freshness. A particularly hard problem is to provide efficient broadcast authentication, which is an important mechanism for sensor networks. μTESLA is a new protocol which provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols. --- paper_title: LEAP+: Efficient security mechanisms for large-scale distributed sensor networks paper_content: We describe LEAPp (Localized Encryption and Authentication Protocol), a key management protocol for sensor networks that is designed to support in-network processing, while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node. The design of the protocol is motivated by the observation that different types of messages exchanged between sensor nodes have different security requirements, and that a single keying mechanism is not suitable for meeting these different security requirements. LEAPp supports the establishment of four types of keys for each sensor node: an individual key shared with the base station, a pairwise key shared with another sensor node, a cluster key shared with multiple neighboring nodes, and a global key shared by all the nodes in the network. 
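To make the four-key hierarchy just listed concrete, the sketch below derives an individual, pairwise, cluster, and global key from a pre-loaded initial key using HMAC-SHA256. It is only a rough sketch of the LEAP+ bootstrapping idea under assumed inputs; the labels, constants, and helper names are illustrative, and the protocol's key erasure and re-keying steps are omitted.

```python
import hmac, hashlib

def prf(key: bytes, data: bytes) -> bytes:
    """Keyed pseudo-random function (HMAC-SHA256 stands in for LEAP+'s PRF)."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Pre-loaded initial key; in LEAP+ it is erased after the bootstrapping phase.
K_I = b"\x01" * 32

def individual_key(node_id: int) -> bytes:
    # Key shared only between the node and the base station.
    return prf(K_I, b"individual|" + node_id.to_bytes(2, "big"))

def pairwise_key(node_a: int, node_b: int) -> bytes:
    # Key shared by two neighbouring nodes, derived via node_b's master key.
    # Both endpoints must agree on who plays node_b for the result to match.
    master_b = prf(K_I, node_b.to_bytes(2, "big"))
    return prf(master_b, node_a.to_bytes(2, "big"))

def cluster_key(cluster_head: int, nonce: bytes) -> bytes:
    # Key a node shares with all its neighbours, refreshed with a nonce.
    return prf(prf(K_I, cluster_head.to_bytes(2, "big")), b"cluster|" + nonce)

def global_key(epoch: int) -> bytes:
    # Network-wide key for base-station broadcasts, re-keyed per epoch.
    return prf(K_I, b"global|" + epoch.to_bytes(4, "big"))

if __name__ == "__main__":
    print(pairwise_key(3, 7).hex()[:16])
```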
LEAPp also supports (weak) local source authentication without precluding in-network processing. Our performance analysis shows that LEAPp is very efficient in terms of computational, communication, and storage costs. We analyze the security of LEAPp under various attack models and show that LEAPp is very effective in defending against many sophisticated attacks, such as HELLO flood attacks, node cloning attacks, and wormhole attacks. A prototype implementation of LEAPp on a sensor network testbed is also described. --- paper_title: A power efficient link-layer security protocol (LLSP) for wireless sensor networks paper_content: Wireless sensor networks (WSNs) rely on wireless communications, which is by nature a broadcast medium and is more vulnerable to security attacks than its wired counterpart due to lack of a physical boundary. Anybody with an appropriate transceiver can eavesdrop on, intercept, inject, and even alter the transmitted data. Security services are, therefore, urgently needed to ensure effective access control and information confidentiality. WSNs are severally energy constrained since many sensor networks are designed to operate unattended for a long time and battery recharging or replacement may be infeasible or impossible. To optimize the limited capability of the sensor nodes, security requirements are generally abandoned. This leaves WSNs under security attacks, which could cause more battery power consumption and reduced lifespan of the WSNs. In the worst case, an adversary may be able to undetectably take control of some sensor nodes, compromise the cryptographic keys and reprogram the sensor nodes. In this paper, a power efficient link-layer security protocol (LLSP) is proposed. LLSP provides node authentication, message integrity check, and message semantic security at a minimal cost by minimizing the security overhead in data packet and applying only symmetric security algorithms. The security analysis and performance analysis show that LLSP is secure and computationally efficient --- paper_title: SIGF: a family of configurable, secure routing protocols for wireless sensor networks paper_content: As sensor networks are deployed in adversarial environments and used for critical applications such as battlefield surveillance and medical monitoring, security weaknesses become a big concern. The severe resource constraints of WSNs give rise to the need for resource bound security solutions.In this paper we present SIGF (Secure Implicit Geographic Forwarding), a configurable secure routing protocol family for wireless sensor networks that provides "good enough" security and high performance. By avoiding or limiting shared state, the protocols prevent many common attacks against routing, and contain others to the local neighborhood.SIGF makes explicit the tradeoff between security provided and state which must be stored and maintained. It comprises three protocols, each forming a basis for the next: SIGF-0 keeps no state, but provides probabilistic defenses; SIGF-1 uses local history and reputation to protect against certain attacks; and SIGF-2 uses neighborhood-shared state to provide stronger security guarantees.Our performance evaluation shows that SIGF achieves high packet delivery ratios with low overhead and end-to-end delay. 
We evaluate the security of SIGF protocols under various security attacks and show that it effectively contains the damage from compromised nodes and defends against black hole, selective forwarding, Sybil, and some denial of service attacks. --- paper_title: Smart home energy management system using IEEE 802.15.4 and zigbee paper_content: Wireless personal area network and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Working Group has defined no less than different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attentions due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation as well as industrial plant management. We present the design of a multi-sensing, heating and airconditioning system and actuation application - the home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management "Smart Energy" applications needed in a smart energy based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduced the proposed home energy control systems design that provides intelligent services for users and we demonstrate its implementation using a real testbad. --- paper_title: Real-time enabled IEEE 802.15.4 sensor networks in industrial automation paper_content: Sensor networks have been investigated in many scenarios and a good number of protocols have been developed. With the standardization of the IEEE 802.15.4 protocol, sensor networks became also an interesting topic in industrial automation. Here, the main focus is on real-time capabilities and reliability. We analyzed the IEEE 802.15.4 standard both in a simulation environment and analytically to figure out to which degree the standard fulfills these specific requirements. Our results can be used for planning and deploying IEEE 802.15.4 based sensor networks with specific performance demands. Furthermore, we clearly identified specific protocol limitations that prevent its applicability for delay bounded real-time applications. We therefore propose some protocol modifications that enable real-time operation based on standard IEEE 802.15.4 compliant sensor hardware. --- paper_title: On experimentally evaluating the impact of security on IEEE 802.15.4 networks paper_content: IEEE 802.15.4 addresses low-rate wireless personal area networks, enables low power devices, and includes a number of security provisions and options (the security sublayer). Security competes with performance for the scarce resources of low power, low cost sensor devices. 
So, a proper design of efficient and secure applications requires to know the impact that IEEE 802.15.4 security services have on the protocol performance. In this paper we present the preliminary results of a research activity aimed at quantitatively evaluating such impact from different standpoints including memory consumption, network performance, and energy consumption. The evaluation exploits a free implementation of the IEEE 802.15.4 security sublayer. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: Energy-efficient link-layer jamming attacks against wireless sensor network MAC protocols paper_content: A typical wireless sensor node has little protection against radio jamming. The situation becomes worse if energy-efficient jamming can be achieved by exploiting knowledge of the data link layer. Encrypting the packets may help prevent the jammer from taking actions based on the content of the packets, but the temporal arrangement of the packets induced by the nature of the protocol might unravel patterns that the jammer can take advantage of even when the packets are encrypted. By looking at the packet interarrival times in three representative MAC protocols, S-MAC, LMAC and B-MAC, we derive several jamming attacks that allow the jammer to jam S-MAC, LMAC and B-MAC energy-efficiently. The jamming attacks are based on realistic assumptions. The algorithms are described in detail and simulated. The effectiveness and efficiency of the attacks are examined. Careful analysis of other protocols belonging to the respective categories of S-MAC, LMAC and B-MAC reveal that those protocols are, to some extent, also susceptible to our attacks. The result of this investigation provides new insights into the security considerations of MAC protocols. --- paper_title: Versatile low power media access for wireless sensor networks paper_content: We propose B-MAC, a carrier sense media access protocol for wireless sensor networks that provides a flexible interface to obtain ultra low power operation, effective collision avoidance, and high channel utilization. 
To achieve low power operation, B-MAC employs an adaptive preamble sampling scheme to reduce duty cycle and minimize idle listening. B-MAC supports on-the-fly reconfiguration and provides bidirectional interfaces for system services to optimize performance, whether it be for throughput, latency, or power conservation. We build an analytical model of a class of sensor network applications. We use the model to show the effect of changing B-MAC's parameters and predict the behavior of sensor network applications. By comparing B-MAC to conventional 802.11-inspired protocols, specifically SMAC, we develop an experimental characterization of B-MAC over a wide range of network conditions. We show that B-MAC's flexibility results in better packet delivery rates, throughput, latency, and energy consumption than S-MAC. By deploying a real world monitoring application with multihop networking, we validate our protocol design and model. Our results illustrate the need for flexible protocols to effectively realize energy efficient sensor network applications. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: 6LoWPAN: a study on QoS security threats and countermeasures using intrusion detection system approach paper_content: Fuelled to bring the Internet of Things concept to real life, the Internet Engineering Task Force is working on 6LoWPAN, in which the standard allows a vast number of smart objects to be deployed in local wireless sensor networks (WSNs) using the huge address space of IPv6 for data and information harvesting through the Internet. From the security point of view, 6LoWPAN/WSN will be open to security threats from the local network itself and the Internet. Cryptography techniques applied as the front line of defence or deterrent can easily be broken because of the weak secure nature of LoWPAN devices and the wireless environment. Compromised nodes could lead to insider attacks without being detected by any cryptography checking. An intrusion detection system (IDS) is, primarily needed as a second line of defence to monitor the network operations and raise an alarm in case of any anomaly. 
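The adaptive preamble sampling (low-power listening) scheme summarized above for B-MAC trades a long transmit preamble for short, infrequent receive checks. The toy model below shows why the check interval has a sweet spot rather than a single best value; all timing and traffic parameters are assumed figures, and the formula is a simplification rather than the analytical model from the B-MAC paper.

```python
def lpl_duty_cycle(check_interval_s: float,
                   sample_time_s: float,
                   rx_packets_per_s: float,
                   tx_packets_per_s: float,
                   packet_time_s: float) -> float:
    """Rough radio duty cycle under low-power listening (illustrative only)."""
    # Periodic channel sampling: wake up briefly once per check interval.
    listen = sample_time_s / check_interval_s
    # The sender's preamble must span one full check interval so the
    # receiver's sample cannot miss it.
    preamble_s = check_interval_s
    # A receiver that detects a preamble waits on average half of it,
    # then receives the packet itself.
    rx = rx_packets_per_s * (preamble_s / 2 + packet_time_s)
    # A transmitter sends the full preamble plus the packet.
    tx = tx_packets_per_s * (preamble_s + packet_time_s)
    return min(1.0, listen + rx + tx)

# Longer check intervals cut idle listening but stretch every preamble.
for interval in (0.02, 0.1, 0.5):
    print(interval, round(lpl_duty_cycle(interval, 0.003, 0.1, 0.1, 0.004), 4))
```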
This paper analyses potential security threats in 6LoWPAN and reviews the current countermeasures, in particular, the IDS-based solutions for countering insider/internal threats. Additionally, it discovers three novel QoS-related security threats, namely rank attack, local repair attack, and resource depleting attack, which are more seriously affecting the routing protocol for low-power and lossy network, the routing protocol used to establish 6LoWPAN network topology. A new two-layer IDS concept is introduced as a countermeasure method for securing the routing protocol for low-power and lossy network-built network topology from the internal QoS attacks. Potential research works are also presented to provide baseline reference to researchers in this field. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Standardized Protocol Stack for the Internet of (Important) Things paper_content: We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality. --- paper_title: 6LoWPAN: a study on QoS security threats and countermeasures using intrusion detection system approach paper_content: Fuelled to bring the Internet of Things concept to real life, the Internet Engineering Task Force is working on 6LoWPAN, in which the standard allows a vast number of smart objects to be deployed in local wireless sensor networks (WSNs) using the huge address space of IPv6 for data and information harvesting through the Internet. From the security point of view, 6LoWPAN/WSN will be open to security threats from the local network itself and the Internet. 
Cryptography techniques applied as the front line of defence or deterrent can easily be broken because of the weak secure nature of LoWPAN devices and the wireless environment. Compromised nodes could lead to insider attacks without being detected by any cryptography checking. An intrusion detection system (IDS) is, primarily needed as a second line of defence to monitor the network operations and raise an alarm in case of any anomaly. This paper analyses potential security threats in 6LoWPAN and reviews the current countermeasures, in particular, the IDS-based solutions for countering insider/internal threats. Additionally, it discovers three novel QoS-related security threats, namely rank attack, local repair attack, and resource depleting attack, which are more seriously affecting the routing protocol for low-power and lossy network, the routing protocol used to establish 6LoWPAN network topology. A new two-layer IDS concept is introduced as a countermeasure method for securing the routing protocol for low-power and lossy network-built network topology from the internal QoS attacks. Potential research works are also presented to provide baseline reference to researchers in this field. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Standardized Protocol Stack for the Internet of (Important) Things paper_content: We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality. --- paper_title: RPL: IPv6 Routing Protocol for LOW Power and Lossy Networks paper_content: Today, Low Power and Lossy Networks (LLNs) represent one of the most interesting research areas. 
They include Wireless Personal Area Networks (WPANs), low-power Power Line Communication (PLC) networks and Wireless Sensor Networks (WSNs). Such networks are often optimized to save energy, support traffic patterns different from the standard unicast communication, run routing protocols over link layers with restricted frame-sizes and many others [14]. This paper presents the IPv6 Routing Protocol for Low power and Lossy Networks (RPL) [19], which has been designed to overcome routing issues in LLNs. It implements measures to reduce energy consumption such as dynamic sending rate of control messages and addressing topology inconsistencies only when data packets have to be sent. The protocol makes use of IPv6 and supports not only traffic in the upward direction, but also traffic flowing from a gateway node to all other network participants. This paper focuses on the employment of RPL in WSNs and gives a brief overview of the protocol’s performance in two different testbeds. --- paper_title: Routing without routes: the backpressure collection protocol paper_content: Current data collection protocols for wireless sensor networks are mostly based on quasi-static minimum-cost routing trees. We consider an alternative, highly-agile approach called backpressure routing, in which routing and forwarding decisions are made on a per-packet basis. Although there is a considerable theoretical literature on backpressure routing, it has not been implemented on practical systems to date due to concerns about packet looping, the effect of link losses, large packet delays, and scalability. Addressing these concerns, we present the Backpressure Collection Protocol (BCP) for sensor networks, the first ever implementation of dynamic backpressure routing in wireless networks. In particular, we demonstrate for the first time that replacing the traditional FIFO queue service in backpressure routing with LIFO queues reduces the average end-to-end packet delays for delivered packets drastically (75% under high load, 98% under low load). Further, we improve backpressure scalability by introducing a new concept of floating queues into the backpressure framework. Under static network settings, BCP shows a more than 60% improvement in max-min rate over the state of the art Collection Tree Protocol (CTP). We also empirically demonstrate the superior delivery performance of BCP in highly dynamic network settings, including conditions of extreme external interference and highly mobile sinks. --- paper_title: Trust-based backpressure routing in wireless sensor networks paper_content: In this paper, we apply a vector autoregression VAR based trust model over the backpressure collection protocol BCP, a collection mechanism based on dynamic backpressure routing in wireless sensor networks WSNs. The backpressure scheduling is known for being throughput optimal. In the presence of malicious nodes, the throughput optimality no longer holds. This affects the network performance in collection tree applications of sensor networks. We apply an autoregression based scheme to embed trust into the link weights, so that the trusted links are scheduled. We have evaluated our work in a real sensor network testbed and shown that by carefully setting the trust parameters, substantial benefit in terms of throughput can be obtained with minimal overheads.
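The two abstracts above describe backpressure forwarding driven by queue differentials and a trust weighting layered on top of it. The sketch below combines the two in the simplest possible way, scaling each queue-differential weight by a per-neighbour trust score; the weight formula and field names are assumptions for illustration, not the exact BCP or VAR-trust formulation.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    backlog: int        # neighbour's reported queue length
    link_rate: float    # estimated packets/s over the link
    trust: float        # behaviour-based trust score in [0, 1]

def choose_next_hop(own_backlog: int, neighbors: list[Neighbor]):
    """Pick the neighbour with the largest positive trust-scaled backpressure
    weight; return None to hold the packet if no weight is positive."""
    best, best_w = None, 0.0
    for n in neighbors:
        weight = (own_backlog - n.backlog) * n.link_rate * n.trust
        if weight > best_w:
            best, best_w = n, weight
    return best

nbrs = [Neighbor(1, backlog=4, link_rate=8.0, trust=0.9),
        Neighbor(2, backlog=1, link_rate=9.0, trust=0.2)]  # low-trust, sink-like node
best = choose_next_hop(own_backlog=6, neighbors=nbrs)
print(best.node_id if best else "hold packet")  # -> 1: the low-trust neighbour loses
```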
Our results show that even when 50% of network nodes are malicious, VAR trust offers approximately 73% throughput and ensures reliable routing, with a small trade-off in the end-to-end packet delay and energy consumptions. --- paper_title: Collection tree protocol paper_content: This paper presents and evaluates two principles for wireless routing protocols. The first is datapath validation: data traffic quickly discovers and fixes routing inconsistencies. The second is adaptive beaconing: extending the Trickle algorithm to routing control traffic reduces route repair latency and sends fewer beacons. We evaluate datapath validation and adaptive beaconing in CTP Noe, a sensor network tree collection protocol. We use 12 different testbeds ranging in size from 20--310 nodes, comprising seven platforms, and six different link layers, on both interference-free and interference-prone channels. In all cases, CTP Noe delivers > 90% of packets. Many experiments achieve 99.9%. Compared to standard beaconing, CTP Noe sends 73% fewer beacons while reducing topology repair latency by 99.8%. Finally, when using low-power link layers, CTP Noe has duty cycles of 3% while supporting aggregate loads of 30 packets/minute. --- paper_title: Simulation and Evaluation of CTP and Secure-CTP Protocols paper_content: The paper discusses characteristics and qualities of two routing protocols - Collection Tree Protocol and its secure modification. The original protocol, as well as other protocols for wireless sensors, solves only problems of radio communication and limited resources. Our design of the secure protocol tries to solve also the essential security objectives. For the evaluation of properties of our protocol in large networks, a TOSSIM simulator was used. Our effort was to show the influence of the modification of the routing protocol to its behavior and quality of routing trees. We have proved that adding security into protocol design does not necessarily mean higher demands for data transfer, power consumption or worse protocol efficiency. In the paper, we manifest that security in the protocol may be achieved with low cost and may offer similar performance as the original protocol. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area.
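CTP's adaptive beaconing, described above, extends the Trickle algorithm: the beacon interval doubles while the topology looks consistent and collapses back to its minimum when the datapath reveals a problem. The sketch below captures that timer behaviour with illustrative constants; it is not CTP's actual implementation.

```python
import random

class TrickleTimer:
    """Trickle-style adaptive beacon interval in the spirit of CTP's adaptive
    beaconing (simplified; interval bounds are illustrative, in seconds)."""
    def __init__(self, i_min: float = 0.125, i_max: float = 512.0):
        self.i_min, self.i_max = i_min, i_max
        self.interval = i_min

    def next_beacon_delay(self) -> float:
        # Fire at a random point in the second half of the current interval,
        # then back off exponentially while the topology looks consistent.
        delay = random.uniform(self.interval / 2, self.interval)
        self.interval = min(self.interval * 2, self.i_max)
        return delay

    def reset(self):
        # Called on an inconsistency (e.g., a routing loop detected on the
        # datapath or a much better parent appearing): beacon aggressively.
        self.interval = self.i_min

t = TrickleTimer()
print([round(t.next_beacon_delay(), 3) for _ in range(4)])
t.reset()   # datapath validation found a stale route
```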
This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: CoAP: An Application Protocol for Billions of Tiny Internet Nodes paper_content: The Constrained Application Protocol (CoAP) is a transfer protocol for constrained nodes and networks, such as those that will form the Internet of Things. Much like its older and heavier cousin HTTP, CoAP uses the REST architectural style. Based on UDP and unencumbered by historical baggage, however, CoAP aims to achieve its modest goals with considerably less complexity. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: Security services and enhancements in the IEEE 802.15.4 wireless sensor networks paper_content: The IEEE 802.15.4 specification defines medium access control (MAC) layer and physical layer for wireless sensor networks. Furthermore, security mechanisms are also defined in the specification. This paper first surveys security services provided in the IEEE 802.15.4 wireless sensor networks. Then, some security enhancements are proposed to prevent same-nonce attack, denial-of-service attack, reply-protection attack, ACK attack, etc. --- paper_title: Message Denial and Alteration on IEEE 802.15.4 Low-Power Radio Networks paper_content: The severely constrained resources present in many IEEE 802.15.4 wireless nodes limits the available security protocols which these nodes can run. Selecting protection for the resource-constrained devices requires understanding the attacks which can be performed. This paper presents several simple attacks which allow a reader of any background to understand the vulnerability of IEEE 802.15.4 networks. Denial of Service (DoS), Passive Listening, and Man In The Middle (MITM) attacks are demonstrated running from a simple and low-cost platform. Considerations for real-life deployments of the attacks covers issues such as defeating channel hopping or attacking at a distance. The ease of performing the attacks demonstrates why security is critical on all networks. Protection against the attacks through both academic and industry-developed standards is briefly discussed. 
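Since CoAP's compactness is the point of the abstract above, a minimal framing example may help: the sketch encodes a confirmable GET with Uri-Path options following the RFC 7252 byte layout. It is a hand-rolled illustration, not a full client, and it assumes path segments shorter than 13 bytes (extended option lengths are omitted).

```python
import struct

def coap_get(message_id: int, uri_path: str) -> bytes:
    """Encode a minimal confirmable CoAP GET: 4-byte header, no token,
    one Uri-Path option (number 11) per path segment, no payload."""
    ver, con_type, tkl = 1, 0, 0          # version 1, CON message, empty token
    header = struct.pack("!BBH",
                         (ver << 6) | (con_type << 4) | tkl,
                         0x01,            # code 0.01 = GET
                         message_id)
    options, last_opt = b"", 0
    for segment in uri_path.strip("/").split("/"):
        delta = 11 - last_opt             # option delta relative to previous option
        options += bytes([(delta << 4) | len(segment)]) + segment.encode()
        last_opt = 11
    return header + options               # no payload, so no 0xFF marker

msg = coap_get(0x1234, "sensors/temp")
print(msg.hex())   # 40 01 1234 b7 "sensors" 04 "temp"
```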
--- paper_title: Effects of Denial-of-Sleep Attacks on Wireless Sensor Network MAC Protocols paper_content: Wireless platforms are becoming less expensive and more powerful, enabling the promise of widespread use for everything from health monitoring to military sensing. Like other networks, sensor networks are vulnerable to malicious attack. However, the hardware simplicity of these devices makes defense mechanisms designed for traditional networks infeasible. This paper explores the denial-of-sleep attack, in which a sensor node's power supply is targeted. Attacks of this type can reduce the sensor lifetime from years to days and have a devastating impact on a sensor network. This paper classifies sensor network denial-of-sleep attacks in terms of an attacker's knowledge of the medium access control (MAC) layer protocol and ability to bypass authentication and encryption protocols. Attacks from each classification are then modeled to show the impacts on four sensor network MAC protocols, i.e., Sensor MAC (S-MAC), Timeout MAC (T-MAC), Berkeley MAC (B-MAC), and Gateway MAC (G-MAC). Implementations of selected attacks on S-MAC, T-MAC, and B-MAC are described and analyzed in detail to validate their effectiveness and analyze their efficiency. Our analysis shows that the most efficient attack on S-MAC can keep a cluster of nodes awake 100% of the time by an attacker that sleeps 99% of the time. Attacks on T-MAC can keep victims awake 100% of the time while the attacker sleeps 92% of the time. A framework for preventing denial-of-sleep attacks in sensor networks is also introduced. With full protocol knowledge and an ability to penetrate link-layer encryption, all wireless sensor network MAC protocols are susceptible to a full domination attack, which reduces the network lifetime to the minimum possible by maximizing the power consumption of the nodes' radio subsystem. Even without the ability to penetrate encryption, subtle attacks can be launched, which reduce the network lifetime by orders of magnitude. If sensor networks are to meet current expectations, they must be robust in the face of network attacks to include denial-of-sleep. --- paper_title: Energy-efficient link-layer jamming attacks against wireless sensor network MAC protocols paper_content: A typical wireless sensor node has little protection against radio jamming. The situation becomes worse if energy-efficient jamming can be achieved by exploiting knowledge of the data link layer. Encrypting the packets may help prevent the jammer from taking actions based on the content of the packets, but the temporal arrangement of the packets induced by the nature of the protocol might unravel patterns that the jammer can take advantage of even when the packets are encrypted. By looking at the packet interarrival times in three representative MAC protocols, S-MAC, LMAC and B-MAC, we derive several jamming attacks that allow the jammer to jam S-MAC, LMAC and B-MAC energy-efficiently. The jamming attacks are based on realistic assumptions. The algorithms are described in detail and simulated. The effectiveness and efficiency of the attacks are examined. Careful analysis of other protocols belonging to the respective categories of S-MAC, LMAC and B-MAC reveal that those protocols are, to some extent, also susceptible to our attacks. The result of this investigation provides new insights into the security considerations of MAC protocols. 
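A quick calculation shows why the denial-of-sleep results above translate into lifetimes dropping from years to days: node lifetime is dominated by the radio duty cycle. The current draws and battery capacity used below are assumed, mote-class ballpark figures, not measurements from the cited work.

```python
def lifetime_days(battery_mah: float, duty_cycle: float,
                  active_ma: float = 20.0, sleep_ma: float = 0.02) -> float:
    """Back-of-the-envelope node lifetime for a given radio duty cycle."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma / 24.0

battery = 2500.0                                  # roughly two AA cells
print(round(lifetime_days(battery, 0.01), 1))     # ~1% duty cycle: over a year
print(round(lifetime_days(battery, 1.00), 1))     # kept awake by the attack: days
```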
--- paper_title: Protection Against Packet Fragmentation Attacks at 6LoWPAN Adaptation Layer paper_content: The IPv6 over low-power wireless personal area network (6LoWPAN) typically includes devices that work together to connect the physical environment to real-world applications, e.g., wireless sensors. However, since, in some cases, security may be requested at the application layer as need, and then, security problems should be identified such as security threats model, threats analysis, attack scenarios, and light-weight security algorithms etc. This paper presents an analysis of security threats to the 6LoWPAN adaptation layer from the point of view of IP packet fragmentation attacks. And to protect replay attacks being occurred by IP packet fragmentations and to guarantee packet freshness, we also propose a protection mechanism against packet fragmentation attacks. The mechanism uses timestamp and nonce options that are added to the fragmented packets at the 6LoWPAN adaptation layer. --- paper_title: Securing communication in 6LoWPAN with compressed IPsec paper_content: Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. It may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between IP enabled sensor networks and the traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec. Our extension supports both IPsec's Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms. --- paper_title: A Network Access Control Framework for 6LoWPAN Networks paper_content: Low power over wireless personal area networks (LoWPAN), in particular wireless sensor networks, represent an emerging technology with high potential to be employed in critical situations like security surveillance, battlefields, smart-grids, and in e-health applications. The support of security services in LoWPAN is considered a challenge. First, this type of networks is usually deployed in unattended environments, making them vulnerable to security attacks. Second, the constraints inherent to LoWPAN, such as scarce resources and limited battery capacity, impose a careful planning on how and where the security services should be deployed. Besides protecting the network from some well-known threats, it is important that security mechanisms be able to withstand attacks that have not been identified before. One way of reaching this goal is to control, at the network access level, which nodes can be attached to the network and to enforce their security compliance. 
This paper presents a network access security framework that can be used to control the nodes that have access to the network, based on administrative approval, and to enforce security compliance to the authorized nodes. --- paper_title: Attack Model and Detection Scheme for Botnet on 6LoWPAN paper_content: Recently, Botnet has been used to launch spam-mail, key-logging, and DDoS attacks. Botnet is a network of bots which are controlled by attacker. A lot of detection mechanisms have been proposed to detect Botnet on wired network. However, in IP based sensor network environment, there is no detection mechanism for Botnet attacks. In this paper, we analyze the threat of Botnet on 6LoWPAN and propose a mechanism to detect Botnet on 6LoWPAN. --- paper_title: 6LoWPAN fragmentation attacks and mitigation mechanisms paper_content: 6LoWPAN is an IPv6 adaptation layer that defines mechanisms to make IP connectivity viable for tightly resource-constrained devices that communicate over low power, lossy links such as IEEE 802.15.4. It is expected to be used in a variety of scenarios ranging from home automation to industrial control systems. To support the transmission of IPv6 packets exceeding the maximum frame size of the link layer, 6LoWPAN defines a packet fragmentation mechanism. However, the best effort semantics for fragment transmissions, the lack of authentication at the 6LoWPAN layer, and the scarce memory resources of the networked devices render the design of the fragmentation mechanism vulnerable. In this paper, we provide a detailed security analysis of the 6LoWPAN fragmentation mechanism. We identify two attacks at the 6LoWPAN design-level that enable an attacker to (selectively) prevent correct packet reassembly on a target node at considerably low cost. Specifically, an attacker can mount our identified attacks by only sending a single protocol-compliant 6LoWPAN fragment. To counter these attacks, we propose two complementary, lightweight defense mechanisms, the content chaining scheme and the split buffer approach. Our evaluation shows the practicality of the identified attacks as well as the effectiveness of our proposed defense mechanisms at modest trade-offs. --- paper_title: Evaluating sinkhole defense techniques in RPL networks paper_content: In this work, we present the results of a study on the detrimental effects of sinkhole attacks on Wireless Sensor Networks (WSNs) which employ the Routing Protocol for LLNs (Low-power and Lossy Networks). A sinkhole is a compromised node which attempts to capture traffic with the intent to drop messages, thus degrading the end-to-end delivery performance, that is, reducing the number of messages successfully delivered to their destination. The mechanism by which the sinkhole captures traffic is by advertising an attractive route to its neighbors. We evaluate two countermeasures addressing the sinkhole problem: a parent fail-over and a rank authentication technique. We show via simulation that while each technique, applied alone, does not work all that well, the combination of the two techniques significantly improves the performance of a network under attack. We also demonstrate that, with the defenses described, increasing the density of the network can combat a penetration of sinkholes nodes, without needing to identify the sinkholes. 
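One of the defenses named above, content chaining, ties the fragments of a datagram together so that a spoofed fragment can be rejected before it poisons the reassembly buffer. The sketch below chains fragments with SHA-256 digests in that spirit; the exact per-fragment token format and placement in the cited work differ.

```python
import hashlib

def chain_fragments(fragments: list[bytes]) -> list[bytes]:
    """Append to every fragment the SHA-256 digest of the (already extended)
    next fragment, working backwards, so fragment i vouches for fragment i+1."""
    out = [fragments[-1]]
    for frag in reversed(fragments[:-1]):
        out.insert(0, frag + hashlib.sha256(out[0]).digest())
    return out

def verify_next(current: bytes, candidate_next: bytes) -> bool:
    # The receiver accepts a candidate successor only if it matches the
    # digest carried by the fragment it already holds.
    return current[-32:] == hashlib.sha256(candidate_next).digest()

frags = chain_fragments([b"frag0-header", b"frag1-data", b"frag2-data"])
print(verify_next(frags[0], frags[1]))         # True
print(verify_next(frags[0], b"spoofed data"))  # False: injected fragment rejected
```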
--- paper_title: Enhancing RPL Resilience Against Routing Layer Insider Attacks paper_content: To gather and transmit data, low cost wireless devices are often deployed in open, unattended and possibly hostile environment, making them particularly vulnerable to physical attacks. Resilience is needed to mitigate such inherent vulnerabilities and risks related to security and reliability. In this paper, Routing Protocol for Low-Power and Lossy Networks (RPL) is studied in presence of packet dropping malicious compromised nodes. Random behavior and data replication have been introduced to RPL to enhance its resilience against such insider attacks. The classical RPL and its resilient variants have been analyzed through Cooja simulations and hardware emulation. Resilient techniques introduced to RPL have enhanced significantly the resilience against attacks providing route diversification to exploit the redundant topology created by wireless communications. In particular, the proposed resilient RPL exhibits better performance in terms of delivery ratio (up to 40%), fairness and connectivity while staying energy efficient. --- paper_title: A Security Framework for Routing over Low Power and Lossy Networks paper_content: This document presents a security framework for routing over low power ::: and lossy networks. The development builds upon previous work on ::: routing security and adapts the assessments to the issues and ::: constraints specific to low power and lossy networks. A systematic ::: approach is used in defining and evaluating the security threats and ::: identifying applicable countermeasures. These assessments provide the ::: basis of the security recommendations for incorporation into low ::: power, lossy network routing protocols. As an illustration, this ::: framework is applied to RPL. --- paper_title: Mitigation of topological inconsistency attacks in RPL-based low-power lossy networks paper_content: The RPL is a routing protocol for low-power and lossy networks. A malicious node can manipulate header options used by RPL to create topological inconsistencies, thereby causing denial of service attacks, reducing channel availability, increasing control message overhead, and increasing energy consumption at the targeted node and its neighborhood. RPL overcomes these topological inconsistencies via a fixed threshold, upon reaching which all subsequent packets with erroneous header options are ignored. However, this threshold value is arbitrarily chosen, and the performance can be improved by taking into account network characteristics. To address this, we present a mitigation strategy that allows nodes to dynamically adapt against a topological inconsistency attack based on the current network conditions. Results from our experiments show that our approach outperforms the fixed threshold and mitigates these attacks without significant overhead. Copyright © 2015John Wiley & Sons, Ltd. --- paper_title: Addressing DODAG inconsistency attacks in RPL networks paper_content: RPL is a routing protocol for low-power and lossy constrained node networks. A malicious node can manipulate header options used by RPL to track DODAG inconsistencies, thereby causing denial of service attacks, increased control message overhead, and black-holes at the targeted node. RPL counteracts DODAG inconsistencies by using a fixed threshold, upon reaching which all subsequent packets with erroneous header options are ignored. 
However, the fixed threshold is arbitrary and does not resolve the black-hole issue either. To address this we present a mitigation strategy that allows nodes to dynamically adapt against a DODAG inconsistency attack. We also present the forced black-hole attack problem and a solution that can be used to mitigate it. Results from our experiments show that our proposed approach mitigates these attacks without any significant overhead. --- paper_title: TinySec: a link layer security architecture for wireless sensor networks paper_content: We introduce TinySec, the first fully-implemented link layer security architecture for wireless sensor networks. In our design, we leverage recent lessons learned from design vulnerabilities in security protocols for other wireless networks such as 802.11b and GSM. Conventional security protocols tend to be conservative in their security guarantees, typically adding 16--32 bytes of overhead. With small memories, weak processors, limited energy, and 30 byte packets, sensor networks cannot afford this luxury. TinySec addresses these extreme resource constraints with careful design; we explore the tradeoffs among different cryptographic primitives and use the inherent sensor network limitations to our advantage when choosing parameters to find a sweet spot for security, packet overhead, and resource requirements. TinySec is portable to a variety of hardware and radio platforms. Our experimental results on a 36 node distributed sensor network application clearly demonstrate that software based link layer protocols are feasible and efficient, adding less than 10% energy, latency, and bandwidth overhead. --- paper_title: Preventing wormhole attacks on wireless ad hoc networks: a graph theoretic approach paper_content: We study the problem of characterizing the wormhole attack, an attack that can be mounted on a wide range of wireless network protocols without compromising any cryptographic quantity or network node. A wormhole, in essence, creates a communication link between an origin and a destination point that could not exist with the use of the regular communication channel. Hence, a wormhole modifies the connectivity matrix of the network, and can be described by a graph abstraction of the ad hoc network. Making use of geometric random graphs induced by the communication range constraint of the nodes, we present the necessary and sufficient conditions for detecting and defending against wormholes. Using our theory, we also present a defense mechanism based on local broadcast keys. We believe our work is the first one to present analytical calculation of the probabilities of detection. We also present simulation results to illustrate our theory. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. 
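The papers above replace RPL's fixed cap on inconsistency-signalling packets with a threshold that adapts to current conditions. The sketch below is one way such an adaptive allowance could look, scaling the cap with recent forwarding load; the adaptation rule and constants are assumptions for illustration, not the scheme evaluated in the cited work.

```python
class InconsistencyGuard:
    """Rate-limit packets whose header options signal a DODAG inconsistency.
    Instead of a fixed cap per reset window, the allowance adapts to how much
    legitimate traffic the node has recently forwarded."""
    def __init__(self, base_limit: int = 20):   # assumed default cap
        self.base_limit = base_limit
        self.flagged = 0

    def allowance(self, forwarded_last_window: int) -> int:
        # A node forwarding little legitimate traffic has no reason to see
        # many inconsistency flags, so it tolerates proportionally fewer.
        return max(2, min(self.base_limit, forwarded_last_window // 10))

    def accept(self, forwarded_last_window: int) -> bool:
        self.flagged += 1
        return self.flagged <= self.allowance(forwarded_last_window)

    def window_reset(self):
        self.flagged = 0

g = InconsistencyGuard()
print([g.accept(forwarded_last_window=50) for _ in range(7)])  # 5 True, then False
```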
As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: Wormhole attack prevention mechanism for RPL based LLN network paper_content: Smart metering application has received a lot of attention from the research community lately. Usually, LLN based network runs on RPL protocol which constructs a DAG structure for its normal operation. In this paper, we devise a wormhole attack scenario in such a network. Furthermore, we propose a Merkle tree based authentication protocol which runs on the notion of constructing a tree of hashed security information. We evaluated the approach by formulating the wormhole problem as graph theoretic problem and have shown the effectiveness of the proposed Merkle tree based approach for authenticating communications. Furthermore, we perform simulations in NS 2 to observe the network performance by adopting Merkle tree based to prevent from disrupting the links and observed boost in throughput, reduction in jitter and end to end delay. In the end, we have taken the step towards optimizing the performance of the algorithm by proposing an effective tree traversal algorithm which works on the notion of electing nodes as root in a large scale network from where a Merkle tree will be originated this will assist in managing the authentication in a huge sized network broken down into many trees avoiding wormhole attacks in the network. --- paper_title: Routing Attacks and Countermeasures in the RPL-Based Internet of Things paper_content: The Routing Protocol for Low-Power and Lossy Networks (RPL) is a novel routing protocol standardized for constrained environments such as 6LoWPAN networks. Providing security in IPv6/RPL connected 6LoWPANs is challenging because the devices are connected to the untrusted Internet and are resource constrained, the communication links are lossy, and the devices use a set of novel IoT technologies such as RPL, 6LoWPAN, and CoAP/CoAPs. In this paper we provide a comprehensive analysis of IoT technologies and their new security capabilities that can be exploited by attackers or IDSs. One of the major contributions in this paper is our implementation and demonstration of well-known routing attacks against 6LoWPAN networks running RPL as a routing protocol. We implement these attacks in the RPL implementation in the Contiki operating system and demonstrate these attacks in the Cooja simulator. Furthermore, we highlight novel security features in the IPv6 protocol and exemplify the use of these features for intrusion detection in the IoT by implementing a lightweight heartbeat protocol. --- paper_title: SVELTE: Real-time intrusion detection in the Internet of Things paper_content: In the Internet of Things (IoT), resource-constrained things are connected to the unreliable and untrusted Internet via IPv6 and 6LoWPAN networks. Even when they are secured with encryption and authentication, these things are exposed both to wireless attacks from inside the 6LoWPAN network and from the Internet. 
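The wormhole countermeasure above authenticates communications against a Merkle tree built over hashed security information. The sketch below shows only the root construction over assumed per-node leaves; authentication-path generation and verification, and the tree-partitioning optimisation discussed in the abstract, are left out.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold hashed leaves pairwise up to a single root, duplicating the last
    node on odd-sized levels. Each leaf would hold a node's hashed identity
    or key material in the scheme sketched above."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [f"node-{i}|pubinfo".encode() for i in range(5)]
root = merkle_root(leaves)
# A verifier that trusts only `root` can later check a claimed member against
# an authentication path of sibling hashes (path handling omitted for brevity).
print(root.hex()[:16])
```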
Since these attacks may succeed, Intrusion Detection Systems (IDS) are necessary. Currently, there are no IDSs that meet the requirements of the IPv6-connected IoT since the available approaches are either customized for Wireless Sensor Networks (WSN) or for the conventional Internet. In this paper we design, implement, and evaluate a novel intrusion detection system for the IoT that we call SVELTE. In our implementation and evaluation we primarily target routing attacks such as spoofed or altered information, sinkhole, and selective-forwarding. However, our approach can be extended to detect other attacks. We implement SVELTE in the Contiki OS and thoroughly evaluate it. Our evaluation shows that in the simulated scenarios, SVELTE detects all malicious nodes that launch our implemented sinkhole and/or selective forwarding attacks. However, the true positive rate is not 100%, i.e., we have some false alarms during the detection of malicious nodes. Also, SVELTE's overhead is small enough to deploy it on constrained nodes with limited energy and memory capacity. --- paper_title: Topology Authentication in RPL paper_content: The Routing Protocol for Low-Power and Lossy Networks (RPL) is a proposed standard by the Internet Engineering Task Force (IETF). Although RPL defines basic security modes, it is still subject to topology attacks. VeRA is an authentication scheme which protects against attacks, based on the version number and rank. This work presents two rank attacks which are not mitigated by VeRA. In the first attack, the adversary can decrease its rank arbitrarily. Hence, it can impersonate even the root node. In the second attack, the adversary can decrease its rank to that of any node within its access range. We present an enhancement of VeRA to mitigate the first attack. Additionally, a basic approach for mitigating the second attack is introduced. --- paper_title: VeRA - Version Number and Rank Authentication in RPL paper_content: Designing a routing protocol for large low-power and lossy networks (LLNs), consisting of thousands of constrained nodes and unreliable links, presents new challenges. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), have been developed by the IETF ROLL Working Group as a preferred routing protocol to provide IPv6 routing functionality in LLNs. RPL provides path diversity by building and maintaining directed acyclic graphs (DAG) rooted at one (or more) gateway. However, an adversary that impersonates a gateway or has compromised one of the nodes close to the gateway can divert a large part of network traffic forward itself and/or exhaust the nodes' batteries. Therefore in RPL, special security care must be taken when the Destination Oriented Directed Acyclic Graph (DODAG) root is updating the Version Number by which reconstruction of the routing topology can be initiated. The same care also must be taken to prevent an internal attacker (compromised DODAG node) to publish decreased Rank value, which causes a large part of the DODAG to connect to the DODAG root via the attacker and give it the ability to eavesdrop a large part of the network traffic forward itself. Unfortunately, the currently available security services in RPL will not protect against a compromised internal node that can construct and disseminate fake messages. In this paper, a new security service is described that prevents any misbehaving node from illegitimately increasing the Version Number and compromise illegitimate decreased Rank values. 
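The version-number protection described for VeRA rests on a one-way hash chain: the root commits to the end of the chain, and announcing version i means disclosing a deeper chain element that anyone can verify but nobody else can produce. The sketch below shows just that chain; VeRA's accompanying rank authentication (per-version MAC chains) is omitted, and the seed and chain length are illustrative.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

N = 100                      # maximum number of version updates
seed = b"root-secret"        # known only to the DODAG root

def commitment() -> bytes:
    # Distributed (authenticated) at bootstrap: the end of the hash chain.
    x = seed
    for _ in range(N):
        x = H(x)
    return x

def disclose(version: int) -> bytes:
    # Root side: releasing version i means revealing the (N - i)-th chain element.
    x = seed
    for _ in range(N - version):
        x = H(x)
    return x

def verify(commit: bytes, disclosed: bytes, version: int) -> bool:
    # Node side: hash the disclosed value `version` more times; only the root,
    # which knows the seed, can produce a preimage this deep in the chain.
    x = disclosed
    for _ in range(version):
        x = H(x)
    return x == commit

commit = commitment()
print(verify(commit, disclose(3), 3))   # True
print(verify(commit, b"forged", 3))     # False: the version number cannot be
                                        # illegitimately increased
```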
--- paper_title: Securing the Backpressure Algorithm for Wireless Networks paper_content: The backpressure algorithm is known to provide throughput optimality in routing and scheduling decisions for multi-hop networks with dynamic traffic. The essential assumption in the backpressure algorithm is that all nodes are benign and obey the algorithm rules governing the information exchange and underlying optimization needs. Nonetheless, such an assumption does not always hold in realistic scenarios, especially in the presence of security attacks with intent to disrupt network operations. In this paper, we propose a novel mechanism, called virtual trust queuing , to protect backpressure algorithm based routing and scheduling protocols against various insider threats. Our objective is not to design yet another trust-based routing to heuristically bargain security and performance, but to develop a generic solution with strong guarantees of attack resilience and throughput performance in the backpressure algorithm. To this end, we quantify a node's algorithm-compliance behavior over time and construct a virtual trust queue that maintains deviations of a give node from expected algorithm outcomes. We show that by jointly stabilizing the virtual trust queue and the real packet queue, the backpressure algorithm not only achieves resilience, but also sustains the throughput performance under an extensive set of security attacks. Our proposed solution clears a major barrier for practical deployment of backpressure algorithm for secure wireless applications. --- paper_title: Trust-based backpressure routing in wireless sensor networks paper_content: In this paper, we apply a vector autoregression VAR based trust model over the backpressure collection protocol BCP, a collection mechanism based on dynamic backpressure routing in wireless sensor networks WSNs. The backpressure scheduling is known for being throughput optimal. In the presence of malicious nodes, the throughput optimality no longer holds. This affects the network performance in collection tree applications of sensor networks. We apply an autoregression based scheme to embed trust into the link weights, so that the trusted links are scheduled. We have evaluated our work in a real sensor network testbed and shown that by carefully setting the trust parameters, substantial benefit in terms of throughput can be obtained with minimal overheads. Our results show that even when 50% of network nodes are malicious, VAR trust offers approximately 73% throughput and ensures reliable routing, with a small trade-off in the end-to-end packet delay and energy consumptions. --- paper_title: Kinesis: a security incident response and prevention system for wireless sensor networks paper_content: This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. 
It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates. --- paper_title: Simulation and Evaluation of CTP and Secure-CTP Protocols paper_content: The paper discusses characteristics and qualities of two routing protocols - Collection Tree Protocol and its secure modification. The original protocol, as well as other protocols for wireless sensors, solves only problems of ra- dio communication and limited resources. Our design of the secure protocol tries to solve also the essential security ob- jectives. For the evaluation of properties of our protocol in large networks, a TOSSIM simulator was used. Our effort was to show the influence of the modification of the routing protocol to its behavior and quality of routing trees. We have proved that adding security into protocol design does not necessarily mean higher demands for data transfer, power consumption or worse protocol efficiency. In the paper, we manifest that security in the protocol may be achieved with low cost and may offer similar performance as the original protocol. --- paper_title: LESS: Lightweight Establishment of Secure Session: A Cross-Layer Approach Using CoAP and DTLS-PSK Channel Encryption paper_content: Secure yet lightweight protocol for communication over the Internet is a pertinent problem for constrained environments in the context of Internet of Things (IoT) / Machine to Machine (M2M) applications. This paper extends the initial approaches published in [1], [2] and presents a novel cross-layer lightweight implementation to establish a secure channel. It distributes the responsibility of communication over secure channel in between the application and transport layers. Secure session establishment is performed using a payload embedded challenge response scheme over the Constrained Application Protocol (CoAP) [3]. Record encryption mechanism of Datagram Transport Layer Security (DTLS) [4] with Pre-Shared Key (PSK) [5] is used for encrypted exchange of application layer data. The secure session credentials derived from the application layer is used for encrypted exchange over the transport layer. The solution is designed in such a way that it can easily be integrated with an existing system deploying CoAP over DTLS-PSK. The proposed method is robust under different security attacks like replay attack, DoS and chosen cipher text. The improved performance of the proposed solution is established with comparative results and analysis. --- paper_title: Security for the Internet of Things: A Survey of Existing Protocols and Open Research Issues paper_content: The Internet of Things (IoT) introduces a vision of a future Internet where users, computing systems, and everyday objects possessing sensing and actuating capabilities cooperate with unprecedented convenience and economical benefits. 
As with the current Internet architecture, IP-based communication protocols will play a key role in enabling the ubiquitous connectivity of devices in the context of IoT applications. Such communication technologies are being developed in line with the constraints of the sensing platforms likely to be employed by IoT applications, forming a communications stack able to provide the required power-efficiency, reliability, and Internet connectivity. As security will be a fundamental enabling factor of most IoT applications, mechanisms must also be designed to protect communications enabled by such technologies. This survey analyzes existing protocols and mechanisms to secure communications in the IoT, as well as open research issues. We analyze how existing approaches ensure fundamental security requirements and protect communications on the IoT, together with the open challenges and strategies for future research work in the area. This is, as far as our knowledge goes, the first survey with such goals. --- paper_title: Lithe: Lightweight Secure CoAP for the Internet of Things paper_content: The Internet of Things (IoT) enables a wide range of application scenarios with potentially critical actuating and sensing tasks, e.g., in the e-health domain. For communication at the application layer, resource-constrained devices are expected to employ the constrained application protocol (CoAP) that is currently being standardized at the Internet Engineering Task Force. To protect the transmission of sensitive information, secure CoAP mandates the use of datagram transport layer security (DTLS) as the underlying security protocol for authenticated and confidential communication. DTLS, however, was originally designed for comparably powerful devices that are interconnected via reliable, high-bandwidth links. In this paper, we present Lithe-an integration of DTLS and CoAP for the IoT. With Lithe, we additionally propose a novel DTLS header compression scheme that aims to significantly reduce the energy consumption by leveraging the 6LoWPAN standard. Most importantly, our proposed DTLS header compression scheme does not compromise the end-to-end security properties provided by DTLS. Simultaneously, it considerably reduces the number of transmitted bytes while maintaining DTLS standard compliance. We evaluate our approach based on a DTLS implementation for the Contiki operating system. Our evaluation results show significant gains in terms of packet size, energy consumption, processing time, and network-wide response times when compressed DTLS is enabled. --- paper_title: Simulation of Attacks for Security in Wireless Sensor Network paper_content: The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. 
This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. --- paper_title: Powertrace: Network-level Power Profiling for Low-power Wireless Networks paper_content: Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, ::: which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy ::: is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To ::: demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are ::: able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox. ---
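The Powertrace entry above rests on state-based energy accounting. As a rough, hedged illustration of that bookkeeping (the current-draw figures below are assumed example values, not numbers taken from the paper), energy per component is simply time-in-state multiplied by current draw and supply voltage.

```python
# Back-of-the-envelope, energest-style energy accounting (illustrative only).
CURRENT_MA = {"cpu": 1.8, "lpm": 0.0545, "tx": 17.4, "rx": 18.8}  # assumed values, mA
VOLTAGE = 3.0  # assumed supply voltage, volts

def energy_mj(seconds_in_state):
    """Total energy in millijoules for the given per-state durations (seconds)."""
    return sum(seconds_in_state.get(state, 0.0) * CURRENT_MA[state] * VOLTAGE
               for state in CURRENT_MA)

# e.g. a 10 s interval spent mostly in low-power mode, with the radio on briefly
print(energy_mj({"cpu": 0.4, "lpm": 9.6, "tx": 0.02, "rx": 0.15}))
```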
Title: A Survey of Potential Security Issues in Existing Wireless Sensor Network Protocols Section 1: Introduction Description 1: Introduce the significance of WSNs, their applications, and the overarching security concerns in such networks. Section 2: Security Features of WSNs Description 2: Discuss the unique features that make WSNs different from wired networks and prevent their security mechanisms. Section 3: Security Requirements Description 3: Elaborate on traditional and additional security requirements necessary for WSNs, classified based on data, access, and device levels. Section 4: Security Vulnerabilities Description 4: Present the security vulnerabilities of WSNs, categorized as external or internal attacks, and their impact. Section 5: Security Mechanisms Description 5: Describe the range of security mechanisms that can be applied at different layers of the WSN stack, followed by examples of proposed protocols. Section 6: WSN Standards and Protocols Description 6: Provide an overview of selected WSN standards and protocols essential for understanding their security requirements and challenges. Section 7: Attacks and Security Mechanisms for PHY and MAC Layer Communication Protocols Description 7: Offer an analysis of security attacks and mechanisms specifically targeting PHY and MAC layer communication protocols in WSNs. Section 8: Attacks and Security Mechanisms for Network Layer Communication Protocols Description 8: Examine the network layer protocols, focusing on the threats and security mechanisms associated with each. Section 9: Attacks and Security Mechanisms for CoAP Application Layer Communication Protocol Description 9: Discuss the security mechanisms for the CoAP application layer protocol, including the use of DTLS and its limitations. Section 10: Security Attacks Evaluation in Cooja Description 10: Present an evaluation framework using the Cooja network simulator to assess the impact of attacks on WSNs and propose recommendations for designing countermeasures. Section 11: Conclusion Description 11: Summarize the key findings of the survey and emphasize the necessity for automated response mechanisms and the use of simulation tools like Cooja in developing new security defenses.
Security Models in Vehicular Ad-hoc Networks: A Survey
8
--- paper_title: Overview of security issues in Vehicular Ad-hoc Networks paper_content: Vehicular ad-hoc networks (VANETs) are a promising communication scenario. Several new applications are envisioned, which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Moreover, VANETs face several security threats. As VANETs present some unique features (e.g. high mobility of nodes, geographic extension, etc.) traditional security mechanisms are not always suitable. Because of that, a plethora of research contributions have been presented so far. This chapter aims to describe and analyze the most representative VANET security developments. --- paper_title: Privacy Protection Through k-anonymity in Location-based Services paper_content: The advent of Location-based Services (LBS), especially in wireless communications systems, has raised a growing concern for user about his privacy. As for every location-based query, the user has to reveal his location coordinates (through technologies like Global Positioning Systems); if this information could be revealed to anybody, it becomes a privacy breach. To take care of these issues, several techniques have come up among which k-anonymity has been most widely used and studied in different forms and different contexts. In this paper, we have reviewed the application of k-anonymity for LBS and its recent advancements. While doing so, we have recognized three perspectives for the applicability of k-anonymity for LBS: the application of k-anonymity based on the architecture, based on the algorithms for anonymization, and based on the types of k-anonymity (according to the different query processing techniques). Hence, the review has been done within the framework of these perspectives. These... --- paper_title: An Efficient Anonymous Authentication Protocol for Secure Vehicular Communications paper_content: As vehicular communications bring the promise of improved road safety and optimized road traffic through cooperative systems applications, it becomes a prerequisite to make vehicular communications secure for the successful deployment of vehicular ad hoc networks. In this paper, we propose an efficient authentication protocol with anonymous public key certificates for secure vehicular communications. The proposed protocol follows a system model to issue on-the-fly anonymous public key certificates to vehicles by road-side units. In order to design an efficient authentication protocol, we consider a key-insulated signature scheme for certifying anonymous public keys of vehicles to the system model. We demonstrate experimental results to confirm that the proposed protocol has better performance than other protocols based on group signature schemes. --- paper_title: Security in automotive bus systems paper_content: This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues. 
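As a hedged illustration of the spatial-cloaking flavour of k-anonymity discussed in the location-privacy entry above, here is a generic sketch; it is not any specific paper's algorithm, and the step size, square-region shape, and search bound are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def cloak(query_user, others, k, step=50.0, max_half=5000.0):
    """Grow a square around the querying user until it covers at least k users
    (querier included); return the region, or None if k-anonymity is unreachable."""
    half = step
    while half <= max_half:
        inside = [p for p in others
                  if abs(p.x - query_user.x) <= half and abs(p.y - query_user.y) <= half]
        if len(inside) + 1 >= k:          # +1 for the querying user itself
            return (query_user.x - half, query_user.y - half,
                    query_user.x + half, query_user.y + half)
        half += step                      # expand until k-anonymity holds
    return None

# The LBS query then carries the returned region instead of exact coordinates.
print(cloak(Point(0, 0), [Point(30, 40), Point(-120, 60), Point(200, -10)], k=3))
```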
--- paper_title: ASPE: attribute-based secure policy enforcement in vehicular ad hoc networks paper_content: Vehicular ad hoc networks (VANETs) are usually operated among vehicles moving at high speeds, and thus their communication relations can be changed frequently. In such a highly dynamic environment, establishing trust among vehicles is difficult. To solve this problem, we propose a flexible, secure and decentralized attribute based secure key management framework for VANETs. Our solution is based on attribute based encryption (ABE) to construct an attribute based security policy enforcement (ASPE) framework. ASPE considers various road situations as attributes. These attributes are used as encryption keys to secure the transmitted data. ASPE is flexible in that it can dynamically change encryption keys depending on the VANET situations. At the same time, ASPE naturally incorporates data access control policies on the transmitted data. ASPE provides an integrated solution to involve data access control, key management, security policy enforcement, and secure group formation in highly dynamic vehicular communication environments. Our performance evaluations show that ASPE is efficient and it can handle large amount of data encryption/decryption flows in VANETs. --- paper_title: Secure Vehicular Communication Systems: Design and Architecture paper_content: Significant developments have taken place over the past few years in the area of vehicular communication systems. Now, it is well understood in the community that security and protection of private user information are a prerequisite for the deployment of the technology. This is so precisely because the benefits of VC systems, with the mission to enhance transportation safety and efficiency, are at stake. Without the integration of strong and practical security and privacy enhancing mechanisms, VC systems can be disrupted or disabled, even by relatively unsophisticated attackers. We address this problem within the SeVeCom project, having developed a security architecture that provides a comprehensive and practical solution. We present our results in a set of two articles in this issue. In this first one, we analyze threats and types of adversaries, identify security and privacy requirements, and present a spectrum of mechanisms to secure VC systems. We provide a solution that can be quickly adopted and deployed. In the second article we present our progress toward the implementation of our architecture and results on the performance of the secure VC system, along with a discussion of upcoming research challenges and our related current results. --- paper_title: A Secure Cooperative Approach for Nonline-of-Sight Location Verification in VANET paper_content: In vehicular ad hoc networks (VANETs), network services and applications (e.g., safety messages) require an exchange of vehicle and event location information. The data are exchanged among vehicles within each vehicle's respective radio communication range through direct communication. In reality, direct communication is susceptible to interference and blocked by physical obstacles, which prevent the proper exchange of information about localization information. Obstacles can create a state of nonline of sight (NLOS) between two vehicles, which restricts direct communication even when corresponding vehicles exist within each other's physical communication range, thus preventing them from exchanging proper data and affecting the localization services' integrity and reliability. 
Dealing with such obstacles is a challenge in VANETs as moving obstacles such as trucks are parts of the network and have the same characteristics of a VANET node (e.g., high-speed mobility and change of driving behavior). In this paper, we present a location verification protocol among cooperative neighboring vehicles to overcome an NLOS condition and secure the integrity of localization services for VANETs. The simulation results showed improvement in neighborhood awareness under NLOS conditions. A solution such as that we propose will help to maintain localization service integrity and reliability. --- paper_title: An Efficient Trust Management System for Balancing the Safety and Location Privacy in VANETs paper_content: In VANETs, how to determine the trustworthiness of event messages has received a great deal of attention in recent years for improving the safety and location privacy of vehicles. Among these research studies, the accuracy and delay of trustworthiness decision are both important problems. In this paper, we propose a road-side unit (RSU) and beacon-based trust management system, called RaBTM, which aims to propagate message opinions quickly and thwart internal attackers from sending or forwarding forged messages in privacy-enhanced VANETs. To evaluate the performance and efficiency of the proposed system, we conducted a set of simulations under alteration attacks and bogus message attacks with various adversary ratios. The simulation results show that the proposed system RaBTM is highly resilient to adversarial attacks and performs at least 15% better than weighted vote (WV) scheme. --- paper_title: Selective and confidential message exchange in vehicular ad hoc networks paper_content: Vehicular Ad-hoc Networks are a promising and increasingly important paradigm. Their applications range from safety enhancement to mobile entertainment services. However, their deployment requires several security issues to be resolved, particularly, since they rely on insecure wireless communication. In this paper, we propose a cryptographic-based access control framework for vehicles to securely exchange messages in a controlled fashion by integrating moving object modeling techniques with cryptographic policies. --- paper_title: PAAVE: Protocol for Anonymous Authentication in Vehicular Networks Using Smart Cards paper_content: Vehicular communications are envisioned to play a substantial role in providing safety in transportation by means of safety message exchange. However, the deployment of vehicular networks is strongly dependent on security and privacy features. In this paper, we present a Protocol for Anonymous Authentication in Vehicular Networks (PAAVE) to address the issue of privacy preservation with authority traceability in vehicular ad hoc networks (VANETs). The proposed protocol is based on smart cards to generate on-the-fly anonymous keys between vehicles and Roadside units (RSUs). PAAVE is lightweight and provides fast anonymous authentication and location privacy while requiring a vehicle to store one cryptographic key. We demonstrate the merits gained by the proposed protocol through extensive analysis and show that PAAVE outperforms existing schemes. --- paper_title: AMOEBA: Robust Location Privacy Scheme for VANET paper_content: Communication messages in vehicular ad hoc networks (VANET) can be used to locate and track vehicles. While tracking can be beneficial for vehicle navigation, it can also lead to threats on location privacy of vehicle user. 
In this paper, we address the problem of mitigating unauthorized tracking of vehicles based on their broadcast communications, to enhance the user location privacy in VANET. Compared to other mobile networks, VANET exhibits unique characteristics in terms of vehicular mobility constraints, application requirements such as a safety message broadcast period, and vehicular network connectivity. Based on the observed characteristics, we propose a scheme called AMOEBA, that provides location privacy by utilizing the group navigation of vehicles. By simulating vehicular mobility in freeways and streets, the performance of the proposed scheme is evaluated under VANET application constraints and two passive adversary models. We make use of vehicular groups for anonymous access to location based service applications in VANET, for user privacy protection. The robustness of the user privacy provided is considered under various attacks. --- paper_title: AMOEBA: Robust Location Privacy Scheme for VANET paper_content: Communication messages in vehicular ad hoc networks (VANET) can be used to locate and track vehicles. While tracking can be beneficial for vehicle navigation, it can also lead to threats on location privacy of vehicle user. In this paper, we address the problem of mitigating unauthorized tracking of vehicles based on their broadcast communications, to enhance the user location privacy in VANET. Compared to other mobile networks, VANET exhibits unique characteristics in terms of vehicular mobility constraints, application requirements such as a safety message broadcast period, and vehicular network connectivity. Based on the observed characteristics, we propose a scheme called AMOEBA, that provides location privacy by utilizing the group navigation of vehicles. By simulating vehicular mobility in freeways and streets, the performance of the proposed scheme is evaluated under VANET application constraints and two passive adversary models. We make use of vehicular groups for anonymous access to location based service applications in VANET, for user privacy protection. The robustness of the user privacy provided is considered under various attacks. --- paper_title: V-Tokens for Conditional Pseudonymity in VANETs paper_content: Privacy is an important requirement in vehicle networks, because vehicles broadcast detailed location information. Also of importance is accountability due to safety critical applications. Conditional pseudonymity, i.e., usage of resolvable pseudonyms, is a common approach to address both. Often, resolvability of pseudonyms is achieved by authorities maintaining pseudonym- identity mappings. However, these mappings are privacy sensitive and require strong protection to prevent abuse or leakage. We present a new approach that does not rely on pseudonym-identity mappings to be stored by any party. Resolution information is directly embedded in pseudonyms and can only be accessed when multiple authorities cooperate. Our privacy-preserving pseudonym issuance protocol ensures that pseudonyms contain valid resolution information but prevents issuing authorities from creating pseudonym-identity mappings. --- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. 
We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: PAACP: A portable privacy-preserving authentication and access control protocol in vehicular ad hoc networks paper_content: Recently, several studies addressed security and privacy issues in vehicular ad hoc networks (VANETs). Most of them focused on safety applications. As VANETs will be available widely, it is anticipated that Internet services could be accessed through VANETs in the near future. Thus, non-safety applications for VANETs would rise in popularity. This paper proposes a novel portable privacy-preserving authentication and access control protocol, named PAACP, for non-safety applications in VANETs. In addition to the essential support of authentication, key establishment, and privacy preservation, PAACP is developed to provide sophisticated differentiated service access control, which will facilitate the deployment of a variety of non-safety applications. Besides, the portability feature of PAACP can eliminate the backend communications with service providers. Therefore, better performance and scalability can be achieved in PAACP. --- paper_title: Secure Vehicle-to-roadside Communication Protocol Using Certificate-based Cryptosystem paper_content: AbstractAs various applications of vehicular ad hoc networks (VANETs) have been proposed, security has become one of the big research challenges and is receiving increasing attention. In this paper, we propose a secure and an efficient vehicle-to-roadside communication protocol by using the recently developed concepts of a certificate-based cryptosystem. The proposed approach combines the best aspects of identity-based public key cryptography approaches (implicit certification) and traditional public key infrastructure approaches (no key escrow). As compared with the previous works, which were implemented with the traditional public key infrastructure and identity-based public key cryptography, the proposed approach is more secure and efficient. --- paper_title: V-Tokens for Conditional Pseudonymity in VANETs paper_content: Privacy is an important requirement in vehicle networks, because vehicles broadcast detailed location information. Also of importance is accountability due to safety critical applications. Conditional pseudonymity, i.e., usage of resolvable pseudonyms, is a common approach to address both. Often, resolvability of pseudonyms is achieved by authorities maintaining pseudonym- identity mappings. However, these mappings are privacy sensitive and require strong protection to prevent abuse or leakage. We present a new approach that does not rely on pseudonym-identity mappings to be stored by any party. Resolution information is directly embedded in pseudonyms and can only be accessed when multiple authorities cooperate. Our privacy-preserving pseudonym issuance protocol ensures that pseudonyms contain valid resolution information but prevents issuing authorities from creating pseudonym-identity mappings. 
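The split-trust idea in the V-Tokens entry above, where pseudonym resolution is possible only when multiple authorities cooperate, can be illustrated with a toy two-authority XOR secret sharing; the actual V-Token construction encrypts resolution information under the authorities' keys and is considerably more involved, so this is only a sketch of the underlying principle.

```python
import os

def split(secret):
    """Split `secret` into two shares; either share alone is uniformly random."""
    share_a = os.urandom(len(secret))
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))
    return share_a, share_b

def resolve(share_a, share_b):
    """Recombining both authorities' shares recovers the identity tag."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

identity_tag = b"vehicle-0042"            # hypothetical resolution tag
a, b = split(identity_tag)                # one share per authority
print(resolve(a, b) == identity_tag)      # True: cooperation resolves the pseudonym
# A single share reveals nothing: it is statistically independent of the tag.
```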
--- paper_title: Eviction of Misbehaving and Faulty Nodes in Vehicular Networks paper_content: Vehicular networks (VNs) are emerging, among civilian applications, as a convincing instantiation of the mobile networking technology. However, security is a critical factor and a significant challenge to be met. Misbehaving or faulty network nodes have to be detected and prevented from disrupting network operation, a problem particularly hard to address in the life-critical VN environment. Existing networks rely mainly on node certificate revocation for attacker eviction, but the lack of an omnipresent infrastructure in VNs may unacceptably delay the retrieval of the most recent and relevant revocation information; this will especially be the case in the early deployment stages of such a highly volatile and large-scale system. In this paper, we address this specific problem. We propose protocols, as components of a framework, for the identification and local containment of misbehaving or faulty nodes, and then for their eviction from the system. We tailor our design to the VN characteristics and analyze our system. Our results show that the distributed approach to contain nodes and contribute to their eviction is efficiently feasible and achieves a sufficient level of robustness. --- paper_title: A Secure Cooperative Approach for Nonline-of-Sight Location Verification in VANET paper_content: In vehicular ad hoc networks (VANETs), network services and applications (e.g., safety messages) require an exchange of vehicle and event location information. The data are exchanged among vehicles within each vehicle's respective radio communication range through direct communication. In reality, direct communication is susceptible to interference and blocked by physical obstacles, which prevent the proper exchange of information about localization information. Obstacles can create a state of nonline of sight (NLOS) between two vehicles, which restricts direct communication even when corresponding vehicles exist within each other's physical communication range, thus preventing them from exchanging proper data and affecting the localization services' integrity and reliability. Dealing with such obstacles is a challenge in VANETs as moving obstacles such as trucks are parts of the network and have the same characteristics of a VANET node (e.g., high-speed mobility and change of driving behavior). In this paper, we present a location verification protocol among cooperative neighboring vehicles to overcome an NLOS condition and secure the integrity of localization services for VANETs. The simulation results showed improvement in neighborhood awareness under NLOS conditions. A solution such as that we propose will help to maintain localization service integrity and reliability. --- paper_title: An Efficient Trust Management System for Balancing the Safety and Location Privacy in VANETs paper_content: In VANETs, how to determine the trustworthiness of event messages has received a great deal of attention in recent years for improving the safety and location privacy of vehicles. Among these research studies, the accuracy and delay of trustworthiness decision are both important problems. In this paper, we propose a road-side unit (RSU) and beacon-based trust management system, called RaBTM, which aims to propagate message opinions quickly and thwart internal attackers from sending or forwarding forged messages in privacy-enhanced VANETs. To evaluate the performance and efficiency of the proposed system, we conducted a set of simulations under alteration attacks and bogus message attacks with various adversary ratios. The simulation results show that the proposed system RaBTM is highly resilient to adversarial attacks and performs at least 15% better than weighted vote (WV) scheme. --- paper_title: AMOEBA: Robust Location Privacy Scheme for VANET paper_content: Communication messages in vehicular ad hoc networks (VANET) can be used to locate and track vehicles. While tracking can be beneficial for vehicle navigation, it can also lead to threats on location privacy of vehicle user. 
Based on the observed characteristics, we propose a scheme called AMOEBA, that provides location privacy by utilizing the group navigation of vehicles. By simulating vehicular mobility in freeways and streets, the performance of the proposed scheme is evaluated under VANET application constraints and two passive adversary models. We make use of vehicular groups for anonymous access to location based service applications in VANET, for user privacy protection. The robustness of the user privacy provided is considered under various attacks. --- paper_title: Defense against Sybil attack in vehicular ad hoc network based on roadside unit support paper_content: In this paper, we propose a timestamp series approach to defend against Sybil attack in a vehicular ad hoc network (VANET) based on roadside unit support. The proposed approach targets the initial deployment stage of VANET when basic roadside unit (RSU) support infrastructure is available and a small fraction of vehicles have network communication capability. Unlike previously proposed schemes that require a dedicated vehicular public key infrastructure to certify individual vehicles, in our approach RSUs are the only components issuing the certificates. Due to the differences of moving dynamics among vehicles, it is rare to have two vehicles passing by multiple RSUs at exactly the same time. By exploiting this spatial and temporal correlation between vehicles and RSUs, two messages will be treated as Sybil attack issued by one vehicle if they have the similar timestamp series issued by RSUs. The timestamp series approach needs neither vehicular-based public-key infrastructure nor Internet accessible RSUs, which makes it an economical solution suitable for the initial stage of VANET. --- paper_title: Detecting and correcting malicious data in VANETs paper_content: In order to meet performance goals, it is widely agreed that vehicular ad hoc networks (VANETs) must rely heavily on node-to-node communication, thus allowing for malicious data traffic. At the same time, the easy access to information afforded by VANETs potentially enables the difficult security goal of data validation. We propose a general approach to evaluating the validity of VANET data. In our approach a node searches for possible explanations for the data it has collected based on the fact that malicious nodes may be present. Explanations that are consistent with the node's model of the VANET are scored and the node accepts the data as dictated by the highest scoring explanations. Our techniques for generating and scoring explanations rely on two assumptions: 1) nodes can tell "at least some" other nodes apart from one another and 2) a parsimony argument accurately reflects adversarial behavior in a VANET. We justify both assumptions and demonstrate our approach on specific VANETs. --- paper_title: A Security Framework with Strong Non-Repudiation and Privacy in VANETs paper_content: This paper proposes a security framework with strong non-repudiation and privacy using new approach of ID-based cryptosystem in VANETs. To remove the overheads of certificate management in PKI, security frameworks using an ID-based cryptosystem are proposed. These systems, however, cannot guarantee strong non-repudiation and private communication since they suffer from the inherent weakness of an ID-based cryptosystem like the key escrow problem. 
The key idea of this paper is that the ID of the third-party is used as the verifier of vehicle's ID and self-generated RSA public key instead of using the ID of the peers. Our scheme provides strong nonrepudiation and privacy preservation without the inherent weaknesses of an ID-based cryptosystem in VANETs. Also, the proposed scheme is efficient in terms of signature and verification time for safety-related applications. --- paper_title: Overview of security issues in Vehicular Ad-hoc Networks paper_content: Vehicular ad-hoc networks (VANETs) are a promising communication scenario. Several new applications are envisioned, which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Moreover, VANETs face several security threats. As VANETs present some unique features (e.g. high mobility of nodes, geographic extension, etc.) traditional security mechanisms are not always suitable. Because of that, a plethora of research contributions have been presented so far. This chapter aims to describe and analyze the most representative VANET security developments. --- paper_title: ASPE: attribute-based secure policy enforcement in vehicular ad hoc networks paper_content: Vehicular ad hoc networks (VANETs) are usually operated among vehicles moving at high speeds, and thus their communication relations can be changed frequently. In such a highly dynamic environment, establishing trust among vehicles is difficult. To solve this problem, we propose a flexible, secure and decentralized attribute based secure key management framework for VANETs. Our solution is based on attribute based encryption (ABE) to construct an attribute based security policy enforcement (ASPE) framework. ASPE considers various road situations as attributes. These attributes are used as encryption keys to secure the transmitted data. ASPE is flexible in that it can dynamically change encryption keys depending on the VANET situations. At the same time, ASPE naturally incorporates data access control policies on the transmitted data. ASPE provides an integrated solution to involve data access control, key management, security policy enforcement, and secure group formation in highly dynamic vehicular communication environments. Our performance evaluations show that ASPE is efficient and it can handle large amount of data encryption/decryption flows in VANETs. --- paper_title: SAT: situation-aware trust architecture for vehicular networks paper_content: Establishing trust in vehicular networks is a critical but also difficult task. In this position paper, we present a new trust architecture and model - Situation-Aware Trust (SAT) - to address several important trust issues in vehicular networks that we believe are essential to overcome the weaknesses of the current vehicular network security and trust models. Our model also strengthens the tie between Internet infrastructure. The new SAT includes three main components: (a) an attribute based policy control model for highly dynamic communication environments, (b) a proactive trust model to build trust among vehicles and prevent the breakage of the existing trust, and (c) a social network based trust system to enhance trust and to allow the set up of a decentralized trust framework when the vehicular network is under infrastructure failure or under attacks. 
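To ground the attribute-based policy idea that ASPE and SAT build on (road situations as attributes gating access to data), here is a minimal policy-tree evaluation sketch. It covers only the access decision, not attribute-based encryption itself, and the attribute names are invented for illustration.

```python
# Evaluate an AND/OR policy tree over a set of possessed attributes.
def satisfies(policy, attributes):
    """policy is either an attribute string, or a tuple ('AND'|'OR', [subpolicies])."""
    if isinstance(policy, str):
        return policy in attributes
    op, children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)

policy = ("AND", ["highway", ("OR", ["rush_hour", "accident_reported"])])
print(satisfies(policy, {"highway", "accident_reported"}))   # True: access granted
print(satisfies(policy, {"urban", "rush_hour"}))             # False: access denied
```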
--- paper_title: An Efficient Anonymous Authentication Protocol for Secure Vehicular Communications paper_content: As vehicular communications bring the promise of improved road safety and optimized road traffic through cooperative systems applications, it becomes a prerequisite to make vehicular communications secure for the successful deployment of vehicular ad hoc networks. In this paper, we propose an efficient authentication protocol with anonymous public key certificates for secure vehicular communications. The proposed protocol follows a system model to issue on-the-fly anonymous public key certificates to vehicles by road-side units. In order to design an efficient authentication protocol, we consider a key-insulated signature scheme for certifying anonymous public keys of vehicles to the system model. We demonstrate experimental results to confirm that the proposed protocol has better performance than other protocols based on group signature schemes. --- paper_title: DESCV--A Secure Wireless Communication Scheme for Vehicle ad hoc Networking paper_content: As an indispensable part of intelligent transportation system (ITS), inter-vehicle communication (IVC) emerges as an important research topic. The inter-vehicle communication works based on vehicular ad hoc networking (VANET), and provides communications among different vehicles. The wide applications of VANET helps to improve driving safety with the help of traffic information updates. To ensure that messages can be delivered effectively, the security in VANET becomes a critical issue. Conventional security systems rely heavily on centralized infrastructure to perform security operations such as key assignment and management, which may not suit well for VANET due to its high mobility and ad hoc links. Some works suggested that vehicles should be connected to fixed devices such as road side units (RSUs), but this requires deployment of a large number of costly RSUs in a specific area. This paper is focused on the issues on decentralized IVC without fixed infrastructure and proposes a method for Dynamic Establishment of Secure Communications in VANET (DESCV), which works in particular well for VANET communication key management when centralistic infrastructure or RSU is not available. We will demonstrate through synergy analysis and simulations that DESCV performs well in providing secure communications among vehicles traveling at a relative velocity as high as 240 km/h. --- paper_title: V-Tokens for Conditional Pseudonymity in VANETs paper_content: Privacy is an important requirement in vehicle networks, because vehicles broadcast detailed location information. Also of importance is accountability due to safety critical applications. Conditional pseudonymity, i.e., usage of resolvable pseudonyms, is a common approach to address both. Often, resolvability of pseudonyms is achieved by authorities maintaining pseudonym- identity mappings. However, these mappings are privacy sensitive and require strong protection to prevent abuse or leakage. We present a new approach that does not rely on pseudonym-identity mappings to be stored by any party. Resolution information is directly embedded in pseudonyms and can only be accessed when multiple authorities cooperate. Our privacy-preserving pseudonym issuance protocol ensures that pseudonyms contain valid resolution information but prevents issuing authorities from creating pseudonym-identity mappings. 
--- paper_title: A mechanism to enforce privacy in vehicle-to-infrastructure communication paper_content: Privacy-related issues are crucial for the wide diffusion of Vehicular Communications (VC). In particular, traffic analysis is one of the subtler threats to privacy in VC. In this paper we first briefly review current work in literature addressing privacy issues and survey vehicular mobility models. Then we present VIPER: a Vehicle-to-Infrastructure communication Privacy Enforcement pRotocol. VIPER is inspired by solutions provided for the Internet (mixes) and cryptography (universal re-encryption). The protocol is shown to be resilient to traffic analysis attacks and analytical results suggest that it also performs well with respect to key performance indicators: queue occupancy, message path length and message delivery time; simulation results support our analytical findings. Finally, a comprehensive analysis has been performed to assess the overhead introduced by our mechanism. Simulation results show that the overhead introduced by VIPER in terms of extra bits required, computations, time delay, and message overhead is feasible even for increasing requirements on the security of the underlying cryptographic mechanisms. --- paper_title: Autonomous Certification with List-Based Revocation for Secure V2V Communication paper_content: Privacy and authenticity are two essential security attributes of secure Vehicle-to-Vehicle communications. Pseudonymous Public Key Infrastructure (PPKI), an extension of standard PKI, has been proposed to achieve these security attributes. In Pseudonymous PKI, a user needs certificates or pseudonyms periodically from the Certificate Authority (CA) to authenticate messages anonymously. But the infrastructure presence to communicate with the CA may not be ubiquitous, at least in the initial development phases of vehicular communication. Another proposal, PKI+ reduces dependence on the CA by allowing users to generate pseudonyms autonomously. However, user revocation in PKI+ is rather inconvenient, since it requires the entire network of non-revoked users to be reconfigured after each such event. In this paper, we propose PKI++, an improvement over PKI+, which brings together the desirable features of PKI and PKI+, namely autonomous certification and list-based revocation. We compare the proposed algorithm with PKI and PKI+, and show revocation to be less costly in PKI++. --- paper_title: Mutual Identification and Key Exchange Scheme in Secure VANETs Based on Group Signature paper_content: This paper proposes an identification and key exchange scheme in secure VANETs based on group signature. Security requirements such as authentication, conditional privacy, non-repudiation, and confidentiality are required to satisfy various vehicular applications. Although the existing group signature schemes are suitable for secure vehicular communications, they do not provide mutual identification and key exchange for data confidentiality. The principal idea of this paper is that the proposed scheme allows only one credential to authenticate ephemeral Diffie-Hellman parameters generated for all the session keys. Our scheme achieves security requirements for various VANET-based applications. 
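The ephemeral Diffie-Hellman step that the mutual identification scheme above wraps in a group-signature credential can be sketched as follows. The modulus is a deliberately small toy Mersenne prime and the exchange omits the authentication of the transmitted values, so this is illustrative only and not the paper's construction.

```python
import hashlib
import secrets

P = 2**127 - 1   # toy Mersenne prime modulus (NOT a secure group; illustration only)
G = 3            # toy generator

def ephemeral_keypair():
    x = secrets.randbelow(P - 2) + 1      # ephemeral private exponent
    return x, pow(G, x, P)                # (private, public)

a_priv, a_pub = ephemeral_keypair()       # vehicle A
b_priv, b_pub = ephemeral_keypair()       # vehicle B
# In the scheme, each side would sign its public value with its anonymous
# group-signature credential before sending; here we only derive the session key.
k_a = hashlib.sha256(pow(b_pub, a_priv, P).to_bytes(16, "big")).digest()
k_b = hashlib.sha256(pow(a_pub, b_priv, P).to_bytes(16, "big")).digest()
print(k_a == k_b)                         # True: both sides share the same key
```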
--- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: Efficient and robust pseudonymous authentication in VANET paper_content: Effective and robust operations, as well as security and privacy are critical for the deployment of vehicular ad hoc networks (VANETs). Efficient and easy-to-manage security and privacy-enhancing mechanisms are essential for the wide-spread adoption of the VANET technology. In this paper, we are concerned with this problem; and in particular, how to achieve efficient and robust pseudonym-based authentication. We design mechanisms that reduce the security overhead for safety beaconing, and retain robustness for transportation safety, even in adverse network settings. Moreover, we show how to enhance the availability and usability of privacy-enhancing VANET mechanisms: Our proposal enables vehicle on-board units to generate their own pseudonyms, without affecting the system security. --- paper_title: AEMA: An Aggregated Emergency Message Authentication Scheme for Enhancing the Security of Vehicular Ad Hoc Networks paper_content: To achieve efficient authentication on emergency events in vehicular ad hoc networks, we introduce a novel aggregated emergency message authentication (AEMA) scheme to validate an emergency event. We make use of syntactic aggregation and cryptographic aggregation techniques to dramatically reduce the transmission cost, and adopt batch verification technique for efficient emergency messages verification. Compared with existing emergency message authentication approaches, our scheme shows the superiority on generality, enhanced security and efficiency. --- paper_title: PAACP: A portable privacy-preserving authentication and access control protocol in vehicular ad hoc networks paper_content: Recently, several studies addressed security and privacy issues in vehicular ad hoc networks (VANETs). Most of them focused on safety applications. As VANETs will be available widely, it is anticipated that Internet services could be accessed through VANETs in the near future. Thus, non-safety applications for VANETs would rise in popularity. This paper proposes a novel portable privacy-preserving authentication and access control protocol, named PAACP, for non-safety applications in VANETs. In addition to the essential support of authentication, key establishment, and privacy preservation, PAACP is developed to provide sophisticated differentiated service access control, which will facilitate the deployment of a variety of non-safety applications. Besides, the portability feature of PAACP can eliminate the backend communications with service providers. 
Therefore, better performance and scalability can be achieved in PAACP. --- paper_title: Secure Vehicle-to-roadside Communication Protocol Using Certificate-based Cryptosystem paper_content: AbstractAs various applications of vehicular ad hoc networks (VANETs) have been proposed, security has become one of the big research challenges and is receiving increasing attention. In this paper, we propose a secure and an efficient vehicle-to-roadside communication protocol by using the recently developed concepts of a certificate-based cryptosystem. The proposed approach combines the best aspects of identity-based public key cryptography approaches (implicit certification) and traditional public key infrastructure approaches (no key escrow). As compared with the previous works, which were implemented with the traditional public key infrastructure and identity-based public key cryptography, the proposed approach is more secure and efficient. --- paper_title: A Security Framework with Strong Non-Repudiation and Privacy in VANETs paper_content: This paper proposes a security framework with strong non-repudiation and privacy using new approach of ID-based cryptosystem in VANETs. To remove the overheads of certificate management in PKI, security frameworks using an ID-based cryptosystem are proposed. These systems, however, cannot guarantee strong non-repudiation and private communication since they suffer from the inherent weakness of an ID-based cryptosystem like the key escrow problem. The key idea of this paper is that the ID of the third-party is used as the verifier of vehicle's ID and self-generated RSA public key instead of using the ID of the peers. Our scheme provides strong nonrepudiation and privacy preservation without the inherent weaknesses of an ID-based cryptosystem in VANETs. Also, the proposed scheme is efficient in terms of signature and verification time for safety-related applications. --- paper_title: Secure, selective group broadcast in vehicular networks using dynamic attribute based encryption paper_content: Ciphertext-policy attribute-based encryption (CP-ABE) provides an encrypted access control mechanism for broadcasting messages. Basically, a sender encrypts a message with an access control policy tree which is logically composed of attributes; receivers are able to decrypt the message when their attributes satisfy the policy tree. A user's attributes stand for the properties that he currently owns. A user should keep his attributes up-to-date. However, this is not easy in CP-ABE because whenever one attribute changes, the entire private key, which is based on all the attributes, must be changed. In this paper, we introduce fading function, which renders attributes “dynamic” and allows users to update each attribute separately. We study how choosing fading rate for fading function affects the efficiency and security. We also compare our design with CP-ABE and find our scheme performs significantly better under certain circumstance. --- paper_title: A novel secure communication scheme in vehicular ad hoc networks paper_content: Vehicular networks are very likely to become the most pervasive and applicable of mobile ad hoc networks in this decade. Vehicular Ad hoc NETwork (VANET) has become a hot emerging research subject, but few academic publications describing its security infrastructure. In this paper, we review the secure infrastructure of VANET, some potential applications and interesting security challenges. 
To cope with these security challenges, we propose a novel secure scheme for vehicular communication on VANETs. The proposed scheme not only protects the privacy but also maintains the liability in the secure communications by using session keys. We also analyze the robustness of the proposed scheme. --- paper_title: Eviction of Misbehaving and Faulty Nodes in Vehicular Networks paper_content: Vehicular networks (VNs) are emerging, among civilian applications, as a convincing instantiation of the mobile networking technology. However, security is a critical factor and a significant challenge to be met. Misbehaving or faulty network nodes have to be detected and prevented from disrupting network operation, a problem particularly hard to address in the life-critical VN environment. Existing networks rely mainly on node certificate revocation for attacker eviction, but the lack of an omnipresent infrastructure in VNs may unacceptably delay the retrieval of the most recent and relevant revocation information; this will especially be the case in the early deployment stages of such a highly volatile and large-scale system. In this paper, we address this specific problem. We propose protocols, as components of a framework, for the identification and local containment of misbehaving or faulty nodes, and then for their eviction from the system. We tailor our design to the VN characteristics and analyze our system. Our results show that the distributed approach to contain nodes and contribute to their eviction is efficiently feasible and achieves a sufficient level of robustness. --- paper_title: On Data-Centric Trust Establishment in Ephemeral Ad Hoc Networks paper_content: We argue that the traditional notion of trust as a relation among entities, while useful, becomes insufficient for emerging data-centric mobile ad hoc networks. In these systems, setting the data trust level equal to the trust level of the data- providing entity would ignore system salient features, rendering applications ineffective and systems inflexible. This would be even more so if their operation is ephemeral, i.e., characterized by short-lived associations in volatile environments. In this paper, we address this challenge by extending the traditional notion of trust to data-centric trust: trustworthiness attributed to node-reported data per se. We propose a framework for data-centric trust establishment: First, trust in each individual piece of data is computed; then multiple, related but possibly contradictory, data are combined; finally, their validity is inferred by a decision component based on one of several evidence evaluation techniques. We consider and evaluate an instantiation of our framework in vehicular networks as a case study. Our simulation results show that our scheme is highly resilient to attackers and converges stably to the correct decision. --- paper_title: A Secure Cooperative Approach for Nonline-of-Sight Location Verification in VANET paper_content: In vehicular ad hoc networks (VANETs), network services and applications (e.g., safety messages) require an exchange of vehicle and event location information. The data are exchanged among vehicles within each vehicle's respective radio communication range through direct communication. In reality, direct communication is susceptible to interference and blocked by physical obstacles, which prevent the proper exchange of information about localization information. 
Obstacles can create a state of nonline of sight (NLOS) between two vehicles, which restricts direct communication even when corresponding vehicles exist within each other's physical communication range, thus preventing them from exchanging proper data and affecting the localization services' integrity and reliability. Dealing with such obstacles is a challenge in VANETs as moving obstacles such as trucks are part of the network and have the same characteristics as a VANET node (e.g., high-speed mobility and change of driving behavior). In this paper, we present a location verification protocol among cooperative neighboring vehicles to overcome an NLOS condition and secure the integrity of localization services for VANETs. The simulation results showed improvement in neighborhood awareness under NLOS conditions. A solution such as the one we propose will help to maintain localization service integrity and reliability. --- paper_title: An Efficient Trust Management System for Balancing the Safety and Location Privacy in VANETs paper_content: In VANETs, how to determine the trustworthiness of event messages has received a great deal of attention in recent years for improving the safety and location privacy of vehicles. Among these research studies, the accuracy and delay of the trustworthiness decision are both important problems. In this paper, we propose a road-side unit (RSU) and beacon-based trust management system, called RaBTM, which aims to propagate message opinions quickly and thwart internal attackers from sending or forwarding forged messages in privacy-enhanced VANETs. To evaluate the performance and efficiency of the proposed system, we conducted a set of simulations under alteration attacks and bogus message attacks with various adversary ratios. The simulation results show that the proposed system RaBTM is highly resilient to adversarial attacks and performs at least 15% better than the weighted vote (WV) scheme. --- paper_title: An ID-based Framework Achieving Privacy and Non-Repudiation in Vehicular Ad Hoc Networks paper_content: Security requirements should be integrated into the design of vehicular ad hoc networks (VANETs), which bear unique features like road-safety and life-critical message dissemination. In this paper, we propose a security framework for VANETs to achieve privacy desired by vehicles and non-repudiation required by authorities, in addition to satisfying fundamental security requirements including authentication, message integrity and confidentiality. The proposed framework employs an ID-based cryptosystem where certificates are not needed for authentication. It increases the communication efficiency for VANET applications where the real-time constraint on message delivery should be guaranteed. We also briefly review the requirements for VANET security and verify the fulfillment of our proposed framework against these requirements. --- paper_title: Detecting and correcting malicious data in VANETs paper_content: In order to meet performance goals, it is widely agreed that vehicular ad hoc networks (VANETs) must rely heavily on node-to-node communication, thus allowing for malicious data traffic. At the same time, the easy access to information afforded by VANETs potentially enables the difficult security goal of data validation. We propose a general approach to evaluating the validity of VANET data. In our approach a node searches for possible explanations for the data it has collected based on the fact that malicious nodes may be present.
Explanations that are consistent with the node's model of the VANET are scored and the node accepts the data as dictated by the highest scoring explanations. Our techniques for generating and scoring explanations rely on two assumptions: 1) nodes can tell "at least some" other nodes apart from one another and 2) a parsimony argument accurately reflects adversarial behavior in a VANET. We justify both assumptions and demonstrate our approach on specific VANETs. --- paper_title: Selective and confidential message exchange in vehicular ad hoc networks paper_content: Vehicular Ad-hoc Networks are a promising and increasingly important paradigm. Their applications range from safety enhancement to mobile entertainment services. However, their deployment requires several security issues to be resolved, particularly, since they rely on insecure wireless communication. In this paper, we propose a cryptographic-based access control framework for vehicles to securely exchange messages in a controlled fashion by integrating moving object modeling techniques with cryptographic policies. --- paper_title: Secure Traffic Data Propagation in Vehicular Ad hoc Networks paper_content: In vehicular ad hoc network, vehicles can share traffic/emergency information. The information should not be modified/manipulated during transmission without detection. We present two novel approaches to provide reliable traffic information propagation: two-directional data verification, and time-based data verification. The traffic message is sent through two (spatially or temporally spaced) channels. A recipient vehicle verifies the message integrity by checking if data received from both channels are matched. Compared with the popular public-key based security systems, the proposed approaches are much simpler and cheaper to implement, especially during the initial transition stage when a mature VANET network infrastructure does not exist. --- paper_title: Support of Anonymity in VANETs - Putting Pseudonymity into Practice paper_content: Despite great advantages of vehicular ad hoc networks (VANETs), they also introduce challenges with respect to security and privacy. Today, people are more and more concerned about their privacy. Using unique identifiers for communication, a vehicle can easily be located and tracked. Alternatively, a technical solution to protect drivers' privacy is the use of changing pseudonyms. Existing work mainly focuses on algorithms for pseudonym change and neglect practical implications and realizability. For deployment and integration of pseudonymity into a VANET communication system, several issues need to be solved. This paper analyzes the practical challenges and proposes protocol- and implementation-related solutions necessary to turn pseudonymity support into practice. Finally, the paper concludes by means of analysis and measurements that the burden of pseudonymity can be alleviated at reasonable costs and compromises in anonymity support. --- paper_title: PAAVE: Protocol for Anonymous Authentication in Vehicular Networks Using Smart Cards paper_content: Vehicular communications are envisioned to play a substantial role in providing safety in transportation by means of safety message exchange. However, the deployment of vehicular networks is strongly dependent on security and privacy features. 
In this paper, we present a Protocol for Anonymous Authentication in Vehicular Networks (PAAVE) to address the issue of privacy preservation with authority traceability in vehicular ad hoc networks (VANETs). The proposed protocol is based on smart cards to generate on-the-fly anonymous keys between vehicles and Roadside units (RSUs). PAAVE is lightweight and provides fast anonymous authentication and location privacy while requiring a vehicle to store one cryptographic key. We demonstrate the merits gained by the proposed protocol through extensive analysis and show that PAAVE outperforms existing schemes. --- paper_title: On the efficiency of secure beaconing in VANETs paper_content: Direct inter-vehicle communication enables numerous safety applications like intersection collision warning. Beacons - periodic one-hop link-layer broadcast messages containing, e.g., location, heading, and speed - are the basis for many such applications. For security, current work often requires all messages to be signed and to carry a certificate to ensure integrity and authenticity. However, high beacon frequency of 1 - 10 Hz and dense traffic situations lead to significant communication and computational overhead. In this paper, we propose several mechanisms to significantly reduce this overhead while maintaining a comparable level of security. The general idea is to omit signatures, certificates, or certificate verification in situations where they are not necessarily required. This creates a security-performance trade-off that we analyze in detail. The results show that significant savings can be achieved with only small impact on security. --- paper_title: AMOEBA: Robust Location Privacy Scheme for VANET paper_content: Communication messages in vehicular ad hoc networks (VANET) can be used to locate and track vehicles. While tracking can be beneficial for vehicle navigation, it can also lead to threats on location privacy of vehicle user. In this paper, we address the problem of mitigating unauthorized tracking of vehicles based on their broadcast communications, to enhance the user location privacy in VANET. Compared to other mobile networks, VANET exhibits unique characteristics in terms of vehicular mobility constraints, application requirements such as a safety message broadcast period, and vehicular network connectivity. Based on the observed characteristics, we propose a scheme called AMOEBA, that provides location privacy by utilizing the group navigation of vehicles. By simulating vehicular mobility in freeways and streets, the performance of the proposed scheme is evaluated under VANET application constraints and two passive adversary models. We make use of vehicular groups for anonymous access to location based service applications in VANET, for user privacy protection. The robustness of the user privacy provided is considered under various attacks. --- paper_title: Deploying Proxy Signature in VANETs paper_content: We introduce a verifiable, self authenticating, and anonymous message delivery protocol for VANET communications using different implementations of proxy signature scheme, where RSU-to-OBU, OBU-to- RSU, and OBU-to-OBU message delivery issues have been addressed. An RSU-to-OBU message delivery scheme is developed, in which a message is protected against potential forgery launched by a malicious RSU. Also, a new proxy signature based approach is provided for message integrity and anonymity for the OBU message delivery. The total process is accountable. 
The security analysis confirms the validity of the proposed protocol. --- paper_title: ID-based Safety Message Authentication for Security and Trust in Vehicular Networks paper_content: We present a safety message authentication scheme for vehicular ad hoc networks using an ID-based signature and verification mechanism. An ID-based technique offers a certificate-less public key verification, while a proxy signature provides flexibilities in message authentication and trust management. In this scheme, we incorporate an ID-based proxy signature framework with the standard ECDSA for VANET's road-side unit (RSU) originated safety application messages. Also, forwarding of signed messages are specially handled to ensure the trust and authentication of RSU's application messages. We claim that this scheme is resilient against all major security threats and also efficient in terms of computation complexity. --- paper_title: Defense against Sybil attack in vehicular ad hoc network based on roadside unit support paper_content: In this paper, we propose a timestamp series approach to defend against Sybil attack in a vehicular ad hoc network (VANET) based on roadside unit support. The proposed approach targets the initial deployment stage of VANET when basic roadside unit (RSU) support infrastructure is available and a small fraction of vehicles have network communication capability. Unlike previously proposed schemes that require a dedicated vehicular public key infrastructure to certify individual vehicles, in our approach RSUs are the only components issuing the certificates. Due to the differences of moving dynamics among vehicles, it is rare to have two vehicles passing by multiple RSUs at exactly the same time. By exploiting this spatial and temporal correlation between vehicles and RSUs, two messages will be treated as Sybil attack issued by one vehicle if they have the similar timestamp series issued by RSUs. The timestamp series approach needs neither vehicular-based public-key infrastructure nor Internet accessible RSUs, which makes it an economical solution suitable for the initial stage of VANET. --- paper_title: Probabilistic key distribution in vehicular networks with infrastructure support paper_content: We propose a probabilistic key distribution protocol for vehicular network that alleviates the burden of traditional public-key infrastructures. Roadside units act as trusted nodes and are used for secret-sharing among vehicles in their vicinity. Secure communication is immediately possible between these vehicles with high probability. Our performance evaluation, which uses both analysis and simulation, shows that high reliability and short dissemination time can be achieved with low complexity. ---
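Several of the Sybil and misbehaviour defences collected above rely on lightweight consistency checks rather than on heavyweight cryptography; the RSU-supported Sybil defence, for instance, treats two senders as one attacker when they present near-identical series of RSU-issued timestamps. The sketch below illustrates only that comparison step; the record layout, the skew tolerance and the thresholds are assumptions made for the example, not the exact test defined in the cited paper.

```python
from typing import List, Tuple

# Hypothetical RSU-issued record: (rsu_id, timestamp in seconds).
TimestampSeries = List[Tuple[str, float]]

def series_similarity(a: TimestampSeries, b: TimestampSeries,
                      max_skew: float = 2.0) -> float:
    """Fraction of RSU visits that the two series share within max_skew seconds."""
    b_by_rsu = {rsu: t for rsu, t in b}
    matches = sum(1 for rsu, t in a
                  if rsu in b_by_rsu and abs(t - b_by_rsu[rsu]) <= max_skew)
    denom = max(len(a), len(b))
    return matches / denom if denom else 0.0

def suspected_sybil(a: TimestampSeries, b: TimestampSeries,
                    min_common_rsus: int = 3, threshold: float = 0.8) -> bool:
    """Flag two senders as a suspected Sybil pair when they present
    near-identical timestamp series issued by several common RSUs."""
    common_rsus = {rsu for rsu, _ in a} & {rsu for rsu, _ in b}
    return len(common_rsus) >= min_common_rsus and series_similarity(a, b) >= threshold

# Example: two "different" senders whose certificates trace the same RSU path.
v1 = [("RSU-7", 100.0), ("RSU-8", 161.5), ("RSU-9", 224.0)]
v2 = [("RSU-7", 100.4), ("RSU-8", 162.0), ("RSU-9", 224.9)]
print(suspected_sybil(v1, v2))  # True
```

In a deployment the skew tolerance and thresholds would have to be tuned to RSU spacing and traffic speed, since legitimate platooning vehicles can also share much of their timestamp series.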
Title: Security Models in Vehicular Ad-hoc Networks: A Survey Section 1: Background. Vehicular Ad-hoc Networks and Security Needs Description 1: In this section, a brief overview of the main elements appearing in vehicular ad-hoc networks (VANETs) is given, along with the security-related needs usually considered in these networks. Section 2: Survey Overview and Scope Description 2: This section provides an overview of the survey, its scope, and the aspects of security models that are considered. Section 3: Review on Security Models Description 3: This section presents a revision of the security models considered in VANET-related recent works, addressing the assumptions about the security of vehicular devices, RSUs, TTPs, and attacker features. Section 4: Standards Position on Security Models Description 4: This section describes the positions held by relevant standards, such as ISO 21217 and IEEE 1609.2, regarding security models in VANETs. Section 5: Analysis and Discussion Description 5: This section analyzes the findings from the survey and standards review, discussing the relevance of the surveyed contributions and the compatibility of different security models. Section 6: Basic Approach to Compare Security Models Description 6: This section proposes a systematic classification approach to compare security models, highlighting similarities and differences between different contributions. Section 7: Related Work Description 7: This section reviews previous works related to security issues in vehicular ad-hoc networks, providing context and contrast to the presented survey. Section 8: Conclusions and Future Work Description 8: This section summarizes the conclusions drawn from the survey and analysis, and suggests directions for future research in the area of security models for vehicular networks.
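Several of the trust systems referenced in this survey (the data-centric trust framework and the RSU/beacon-based RaBTM scheme, for example) combine multiple, possibly contradictory reports about one event before deciding whether to accept it. The fragment below is a minimal weighted-vote sketch of that evidence-combination step, under an assumed report format and hand-picked weights; it is not the decision logic of any particular cited paper.

```python
def event_trust(reports, prior=0.5):
    """Weighted-vote combination of reports about a single event.

    reports: iterable of (says_event_is_true, reporter_weight) pairs, where the
    weight reflects how much the receiver trusts that reporter or data source."""
    support = sum(w for vote, w in reports if vote)
    against = sum(w for vote, w in reports if not vote)
    total = support + against
    return prior if total == 0 else support / total

def accept_event(reports, threshold=0.6):
    """Accept the reported event only if the combined trust clears a threshold."""
    return event_trust(reports) >= threshold

# Example: three vehicles confirm a hazard, one low-weight report denies it.
reports = [(True, 0.9), (True, 0.7), (True, 0.6), (False, 0.2)]
print(accept_event(reports))  # True (combined trust is about 0.92)
```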
An Overview of Quality-of-Service Routing for the Next Generation High-Speed Networks: Problems and Solutions
11
--- paper_title: Routing subject to quality of service constraints in integrated communication networks paper_content: With increasingly diverse QOS requirements, it is impractical to continue to rely on conventional routing paradigms that emphasize the search for an optimal path based on a predetermined metric, or a particular function of multiple metrics. Modern routing strategies must not only be adaptive to network changes but also offer considerable economy of scope. We consider the problem of routing in networks subject to QOS constraints. After providing an overview of prior routing work, we define various QOS constraints. We present a call architecture that may be used for QOS matching and a connection management mechanism for network resource allocation. We discuss fallback routing, and review some existing routing frameworks. We also present a new rule-based, call-by-call source routing strategy for integrated communication networks. --- paper_title: Competitive routing of virtual circuits with unknown duration paper_content: In this paper we present a strategy to route unknown duration virtual circuits in a high-speed communication network. Previous work on virtual circuit routing concentrated on the case where the call duration is known in advance. We show that by allowing O(log n) reroutes per call, we can achieve O(log n) competitive ratio with respect to the maximum load (congestion) for the unknown duration case, where n is the number of nodes in the network. This is in contrast to the Ω(n) lower bound on the competitive ratio for this case if no rerouting is allowed (Azar et al., 1992, Proc. 33rd IEEE Annual Symposium of Foundations of Computer Science, pp. 218–225). Our routing algorithm can be also applied in the context of machine load balancing of tasks with unknown duration. We present an algorithm that makes O(log n) reassignments per task and achieves O(log n) competitive ratio with respect to the load, where n is the number of parallel machines. For a special case of unit load tasks we design a constant competitive algorithm. The previously known algorithms that achieve up to polylogarithmic competitive ratio for load balancing of tasks with unknown duration dealt only with special cases of related machines case and unit-load tasks with restricted assignment (Azar et al., 1993, Proc. Workshop on Algorithms and Data Structures, pp. 119–130; Azar et al., 1992, Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms, pp. 203–210). --- paper_title: On finding multi-constrained paths paper_content: New emerging distributed multimedia applications provide guaranteed end-to-end quality of service (QoS) and have stringent constraints on delay, delay-jitter, cost, etc. The task of QoS routing is to find a route in the network which has sufficient resources to satisfy the constraints. The delay-cost-constrained routing problem is NP-complete. We propose a heuristic algorithm for this problem. The idea is to first reduce the NP-complete problem to a simpler one which can be solved in polynomial time, and then solve the new problem by either an extended Dijkstra's algorithm or an extended Bellman-Ford algorithm. We prove the correctness of our algorithm by showing that a solution for the simpler problem must also be a solution for the original problem. The performance of the algorithm is studied by both theoretical analysis and simulation.
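The multi-constrained path abstract directly above describes the common tactic of reducing the NP-complete delay-cost problem to a single-metric problem that an (extended) Dijkstra or Bellman-Ford algorithm can handle. The sketch below shows one simple instance of that idea: collapse delay and cost into a weighted sum, route on the combined weight, and keep the result only if the delay bound holds. The adjacency-list format and the mixing parameter alpha are assumptions for illustration, not the exact reduction used in the cited paper.

```python
import heapq

def shortest_path(graph, src, dst, weight):
    """Dijkstra on graph[u] = [(v, delay, cost), ...] under a scalar weight(delay, cost)."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, delay, cost in graph.get(u, []):
            nd = d + weight(delay, cost)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst != src and dst not in prev:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def path_metrics(graph, path):
    """Total (delay, cost) along a path in the same adjacency-list format."""
    edges = {(u, v): (delay, cost) for u in graph for v, delay, cost in graph[u]}
    delay = sum(edges[(u, v)][0] for u, v in zip(path, path[1:]))
    cost = sum(edges[(u, v)][1] for u, v in zip(path, path[1:]))
    return delay, cost

def delay_constrained_least_cost(graph, src, dst, delay_bound, alpha=0.5):
    """Heuristic: route on alpha*delay + (1-alpha)*cost, then test the delay bound."""
    path = shortest_path(graph, src, dst, lambda d, c: alpha * d + (1 - alpha) * c)
    if path is None:
        return None
    delay, _ = path_metrics(graph, path)
    return path if delay <= delay_bound else None

# Example: edges given as (neighbor, delay, cost).
g = {"s": [("a", 2, 5), ("b", 5, 1)], "a": [("t", 2, 5)], "b": [("t", 5, 1)], "t": []}
print(delay_constrained_least_cost(g, "s", "t", delay_bound=12))  # ['s', 'b', 't']
print(delay_constrained_least_cost(g, "s", "t", delay_bound=6))   # None
```

The second call shows why such reductions are only heuristics: a feasible request can be rejected because the single combined metric prefers a cheaper but slower route (here it misses the feasible low-delay path s-a-t).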
--- paper_title: Multicast Routing with End-to-End Delay and Delay Variation Constraints paper_content: We study the problem of constructing multicast trees to meet the quality of service requirements of real-time interactive applications operating in high-speed packet-switched environments. In particular, we assume that multicast communication depends on: (1) bounded delay along the paths from the source to each destination and (2) bounded variation among the delays along these paths. We first establish that the problem of determining such a constrained tree is NP-complete. We then present a heuristic that demonstrates good average case behavior in terms of the maximum interdestination delay variation. The heuristic achieves its best performance under conditions typical of multicast scenarios in high speed networks. We also show that it is possible to dynamically reorganize the initial tree in response to changes in the destination set, in a way that is minimally disruptive to the multicast session. --- paper_title: Multi-path routing combined with resource reservation paper_content: In high-speed networks it is desirable to interleave routing and resource (such as bandwidth) reservation. The PNNI standard for private ATM networks is an example of an algorithm that does this using a sequential crank-back mechanism. We suggest the implementation of resource reservation along several routes in parallel. We present an analytical model that demonstrates that when there are several routes to the destination it pays to attempt reservation along more than a single route. Following this analytic observation, we present a family of algorithms that route and reserve resources along parallel subroutes. The algorithms of the family represent different trade-offs between the speed and the quality of the established route. The presented algorithms are simulated against several legacy algorithms, including the PNNI crank-back, and exhibit higher network utilization and faster connection set-up time. --- paper_title: A Scheme for Real-Time Channel Establishment in Wide-Area Networks paper_content: Multimedia communication involving digital audio and/or digital video has rather strict delay requirements. A real-time channel is defined as a simplex connection between a source and a destination characterized by parameters representing the performance requirements of the client. A study is made of the feasibility of providing real-time services on a packet-switched store-and-forward wide-area network with general topology. A description is given of a scheme for the establishment of channels with deterministic or statistical delay bounds, and the results of the simulation experiments run to evaluate it are presented. The results are judged encouraging: the approach satisfies the guarantees even in worst case situations, uses the network's resources to a fair extent, and efficiently handles channels with a variety of offered load and burstiness characteristics. Also, the packet transmission overhead is quite low, and the channel establishment overhead is small enough to be acceptable in most practical cases. --- paper_title: RSVP: a new resource ReSerVation Protocol paper_content: A resource reservation protocol (RSVP), a flexible and scalable receiver-oriented simplex protocol, is described.
RSVP provides receiver-initiated reservations to accommodate heterogeneity among receivers as well as dynamic membership changes; separates the filters from the reservation, thus allowing channel changing behavior; supports a dynamic and robust multipoint-to-multipoint communication model by taking a soft-state approach in maintaining resource reservations; and decouples the reservation and routing functions. A simple network configuration with five hosts connected by seven point-to-point links and three switches is presented to illustrate how RSVP works. Related work and unresolved issues are discussed. --- paper_title: A Distributed Algorithm for Delay-Constrained Unicast Routing paper_content: We study the NP-hard delay-constrained least cost (DCLC) path problem. A solution to this problem is needed to provide real-time communication service to connection-oriented applications, such as video and voice. We propose a simple, distributed heuristic solution, called the delay-constrained unicast routing (DCUR) algorithm. DCUR requires limited network state information to be kept at each node: a cost vector and a delay vector. We prove DCUR's correctness by showing that it is always capable of constructing a loop-free delay-constrained path within finite time, if such a path exists. The worst case message complexity of DCUR is O(|V|²) messages, where |V| is the number of nodes. However, simulation results show that, on the average, DCUR requires much fewer messages. Therefore, DCUR scales well to large networks. We also use simulation to compare DCUR to the optimal algorithm, and to the least delay path algorithm. Our results show that DCUR's path costs are within 10% of those of the optimal solution. --- paper_title: A distributed route-selection scheme for establishing real-time channels paper_content: To guarantee the delivery of real-time messages before their deadline, a real-time channel or connection must be established before the transmission of any real-time messages. During this channel establishment phase, one must first select a route between the source and destination of this channel and then reserve sufficient resources along this route so that the end-to-end delay over the selected route may not exceed the user-specified delay bound. --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. --- paper_title: Routing with end to end QoS guarantees in broadband networks paper_content: We consider routing schemes for connections with end to end delay requirements, and investigate several fundamental problems. First, we focus on networks which employ rate-based schedulers and hence map delay guarantees into nodal rate guarantees, as done with the guaranteed service class proposed for the Internet.
We consider first the basic problem of identifying a feasible route for the connection, for which a straightforward, yet computationally costly solution exists. Accordingly, we establish several ε-optimal solutions that offer substantially lower computational complexity. We then consider the more general problem of optimizing the route choice in terms of balancing loads and accommodating multiple connections, for which we formulate and validate several optimal algorithms. We discuss the implementation of such schemes in the context of link-state and distance-vector protocols. Next, we consider the fundamental problem of constrained path optimization. This problem, typical of QoS routing, is NP-hard. While standard approximation methods exist, their complexity may often be prohibitive in terms of scalability. Such approximations do not make use of the particular properties of large-scale networks, such as the fact that the path selection process is typically presented with a hierarchical, aggregated topology. By exploiting the structure of such topologies, we obtain an ε-optimal algorithm for the constrained shortest path problem, which offers a substantial improvement in terms of scalability. --- paper_title: Analysis and simulation of a fair queueing algorithm paper_content: We discuss gateway queueing algorithms and their role in controlling congestion in datagram networks. A fair queueing algorithm, based on an earlier suggestion by Nagle, is proposed. Analysis and simulations are used to compare this algorithm to other congestion control schemes. We find that fair queueing provides several important advantages over the usual first-come-first-serve queueing algorithm: fair allocation of bandwidth, lower delay for sources using less than their full share of bandwidth, and protection from ill-behaved sources. --- paper_title: QoS routing in networks with uncertain parameters paper_content: This article considers the problem of routing connections with QoS requirements across networks, when the information available for making routing decisions is inaccurate.
--- paper_title: On finding multi-constrained paths paper_content: New emerging distributed multimedia applications provide guaranteed end-to-end quality of service (QoS) and have stringent constraints on delay, delay-jitter, cost, etc. The task of QoS routing is to find a route in the network which has sufficient resources to satisfy the constraints. The delay-cost-constrained routing problem is NP-complete. We propose a heuristic algorithm for this problem. The idea is to first reduce the NP-complete problem to a simpler one which can be solved in polynomial time, and then solve the new problem by either an extended Dijkstra's algorithm or an extended Bellman-Ford algorithm. We prove the correctness of our algorithm by showing that a solution for the simpler problem must also be a solution for the original problem. The performance of the algorithm is studied by both theoretical analysis and simulation. --- paper_title: A distributed route-selection scheme for establishing real-time channels paper_content: To guarantee the delivery of real-time messages before their deadline, a real-time channel or connection must be established before the transmission of any real-time messages. During this channel establishment phase, one must first select a route between the source and destination of this channel and then reserve sufficient resources along this route so that the end-to-end delay over the selected route may not exceed the user-specified delay bound. --- paper_title: Competitive routing of virtual circuits with unknown duration paper_content: In this paper we present a strategy to route unknown duration virtual circuits in a high-speed communication network. Previous work on virtual circuit routing concentrated on the case where the call duration is known in advance. We show that by allowing O(logn) reroutes per call, we can achieve O(logn) competitive ratio with respect to the maximum load (congestion) for the unknown duration case, where n is the number of nodes in the network. This is in contrast to the ?(n) lower bound on the competitive ratio for this case if no rerouting is allowed (Azar et al., 1992, Proc. 33rd IEEE Annual Symposium of Foundations of Computer Science, pp. 218?225). Our routing algorithm can be also applied in the context of machine load balancing of tasks with unknown duration. We present an algorithm that makes O(logn) reassignments per task and achieves O(logn) competitive ratio with respect to the load, where n is the number of parallel machines. For a special case of unit load tasks we design a constant competitive algorithm. The previously known algorithms that achieve up to polylogarithmic competitive ratio for load balancing of tasks with unknown duration dealt only with special cases of related machines case and unit-load tasks with restricted assignment (Azar et al., 1993, Proc. Workshop on Algorithms and Data Structures, pp. 119?130; Azar et al., 1992, Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms, pp. 203?210). --- paper_title: Multicast Routing for Multimedia Communication paper_content: The authors present heuristics for multicast tree construction for communication that depends on: bounded end-to-end delay along the paths from source to each destination and minimum cost of the multicast tree, where edge cost and edge delay can be independent metrics. The problem of computing such a constrained multicast tree is NP-complete. 
It is shown that the heuristics demonstrate good average case behavior in terms of cost, as determined by simulations on a large number of graphs. > --- paper_title: A source-based algorithm for delay-constrained minimum-cost multicasting paper_content: A new heuristic algorithm is presented for constructing minimum-cost multicast trees with delay constraints. The new algorithm can set variable delay bounds on destinations and handles two variants of the network cost optimization goal: one minimizing the total cost (total bandwidth utilization) of the tree, and another minimizing the maximal link cost (the most congested link). Instead of the single-pass tree construction approach used in most previous heuristics, the new algorithm is based on a feasible search optimization method which starts with the minimum-delay tree and monotonically decreases the cost by iterative improvement of the delay-bounded tree. The optimality of the costs of the delay-bounded trees obtained with the new algorithm is analyzed by simulation. Depending on how tight the delay bounds are, the costs of the multicast trees obtained with the new algorithm are shown to be very close to the costs of the trees obtained by the Kou, Markowsky and Berman's algorithm (1981). --- paper_title: Multicast Routing with End-to-End Delay and Delay Variation Constraints paper_content: We study the problem or constructing multicast trees to meet the quality of service requirements of real-time interactive applications operating in high-speed packet-switched environments. In particular, we assume that multicast communication depends on: (1) bounded delay along the paths from the source to each destination and (2) bounded variation among the delays along these paths. We first establish that the problem of determining such a constrained tree is NP-complete. We then present a heuristic that demonstrates good average case behavior in terms of the maximum interdestination delay variation. The heuristic achieves its best performance under conditions typical of multicast scenarios in high speed networks. We also show that it is possible to dynamically reorganize the initial tree in response to changes in the destination set, in a way that is minimally disruptive to the multicast session. --- paper_title: Competitive routing of virtual circuits with unknown duration paper_content: In this paper we present a strategy to route unknown duration virtual circuits in a high-speed communication network. Previous work on virtual circuit routing concentrated on the case where the call duration is known in advance. We show that by allowing O(logn) reroutes per call, we can achieve O(logn) competitive ratio with respect to the maximum load (congestion) for the unknown duration case, where n is the number of nodes in the network. This is in contrast to the ?(n) lower bound on the competitive ratio for this case if no rerouting is allowed (Azar et al., 1992, Proc. 33rd IEEE Annual Symposium of Foundations of Computer Science, pp. 218?225). Our routing algorithm can be also applied in the context of machine load balancing of tasks with unknown duration. We present an algorithm that makes O(logn) reassignments per task and achieves O(logn) competitive ratio with respect to the load, where n is the number of parallel machines. For a special case of unit load tasks we design a constant competitive algorithm. 
The previously known algorithms that achieve up to polylogarithmic competitive ratio for load balancing of tasks with unknown duration dealt only with special cases of related machines case and unit-load tasks with restricted assignment (Azar et al., 1993, Proc. Workshop on Algorithms and Data Structures, pp. 119?130; Azar et al., 1992, Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms, pp. 203?210). --- paper_title: On finding multi-constrained paths paper_content: New emerging distributed multimedia applications provide guaranteed end-to-end quality of service (QoS) and have stringent constraints on delay, delay-jitter, cost, etc. The task of QoS routing is to find a route in the network which has sufficient resources to satisfy the constraints. The delay-cost-constrained routing problem is NP-complete. We propose a heuristic algorithm for this problem. The idea is to first reduce the NP-complete problem to a simpler one which can be solved in polynomial time, and then solve the new problem by either an extended Dijkstra's algorithm or an extended Bellman-Ford algorithm. We prove the correctness of our algorithm by showing that a solution for the simpler problem must also be a solution for the original problem. The performance of the algorithm is studied by both theoretical analysis and simulation. --- paper_title: A route pre-computation algorithm for integrated services networks paper_content: We provide an algorithm for computing best paths on a graph where edges have a multidimensional cost, one dimension representing delay, the others representing available capacity. Best paths are those which guarantee maximum capacity with least possible delay. The complexity of the algorithm is of the order ofO(V3) in the bidimensional case, for a graph withV vertices. The results can be used for routing connections with guaranteed capacity in a communication network. --- paper_title: Competitive Routing of Virtual Circuits in ATM networks paper_content: Classical routing and admission control strategies achieve provably good performance by relying on an assumption that the virtual circuits arrival pattern can be described by some a priori known probabilistic model. A new on-line routing framework, based on the notion of competitive analysis, was proposed. This framework is geared toward design of strategies that have provably good performance even in the case where there are no statistical assumptions on the arrival pattern and parameters of the virtual circuits. The on-line strategies motivated by this framework are quite different from the min-hop and reservation-based strategies. This paper surveys the on-line routing framework, the proposed routing and admission control strategies, and discusses some of the implementation issues. > --- paper_title: Competitive routing of virtual circuits with unknown duration paper_content: In this paper we present a strategy to route unknown duration virtual circuits in a high-speed communication network. Previous work on virtual circuit routing concentrated on the case where the call duration is known in advance. We show that by allowing O(logn) reroutes per call, we can achieve O(logn) competitive ratio with respect to the maximum load (congestion) for the unknown duration case, where n is the number of nodes in the network. This is in contrast to the ?(n) lower bound on the competitive ratio for this case if no rerouting is allowed (Azar et al., 1992, Proc. 33rd IEEE Annual Symposium of Foundations of Computer Science, pp. 218?225). 
Our routing algorithm can be also applied in the context of machine load balancing of tasks with unknown duration. We present an algorithm that makes O(logn) reassignments per task and achieves O(logn) competitive ratio with respect to the load, where n is the number of parallel machines. For a special case of unit load tasks we design a constant competitive algorithm. The previously known algorithms that achieve up to polylogarithmic competitive ratio for load balancing of tasks with unknown duration dealt only with special cases of related machines case and unit-load tasks with restricted assignment (Azar et al., 1993, Proc. Workshop on Algorithms and Data Structures, pp. 119?130; Azar et al., 1992, Proc. 3rd ACM-SIAM Symposium on Discrete Algorithms, pp. 203?210). --- paper_title: Routing virtual circuits with timing requirements in virtual path based ATM networks paper_content: Real-time communication with performance guarantees is expected to become an important feature of future computer networks. Given an ATM network topology, its virtual path (VP) layout, and its traffic demands, we consider in this paper the problem of selecting for each virtual circuit (VC) with timing requirements a route (i.e., a sequence of VPs) along which sufficient resources are available to meet the user-specified end-to-end delay requirements. Our objective is to (i) provide the timing guarantee for the VC, while not jeopardizing the QoS guarantees to other existing VCs. We adopt the real-time channel model to characterize the traffic characteristics and the timing requirements of a VC. We then propose a distributed VC routing scheme based on the distributed Bellman-Ford algorithm to identify an "appropriate" route through the network. By "appropriate", we mean that the route traverses a minimum number of VPs among all possible routes that have sufficient resources over to fulfill the end-to-end timing requirement of the VC. To ensure that there are sufficient bandwidths over all the VPs along the selected route, we incorporate in our proposed scheme a priority assignment method to calculate the minimum worst-case traversal time which messages of a VC will experience on a VP along which the VC is routed. We also comment on the performance of, and the message overhead incurred in the proposed scheme. --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. --- paper_title: Multicast routing in datagram internetworks and extended LANs paper_content: Multicasting, the transmission of a packet to a group of hosts, is an important service for improving the efficiency and robustness of distributed systems and applications. 
Although multicast capability is available and widely used in local area networks, when those LANs are interconnected by store-and-forward routers, the multicast service is usually not offered across the resulting internetwork . To address this limitation, we specify extensions to two common internetwork routing algorithms—distance-vector routing and link-state routing—to support low-delay datagram multicasting beyond a single LAN. We also describe modifications to the single-spanning-tree routing algorithm commonly used by link-layer bridges, to reduce the costs of multicasting in large extended LANs. Finally, we discuss how the use of multicast scope control and hierarchical multicast routing allows the multicast service to scale up to large internetworks. --- paper_title: Shortest connection networks and some generalizations paper_content: The basic problem considered is that of interconnecting a given set of terminals with a shortest possible network of direct links. Simple and practical procedures are given for solving this problem both graphically and computationally. It develops that these procedures also provide solutions for a much broader class of problems, containing other examples of practical interest. --- paper_title: Multicast Routing for Multimedia Communication paper_content: The authors present heuristics for multicast tree construction for communication that depends on: bounded end-to-end delay along the paths from source to each destination and minimum cost of the multicast tree, where edge cost and edge delay can be independent metrics. The problem of computing such a constrained multicast tree is NP-complete. It is shown that the heuristics demonstrate good average case behavior in terms of cost, as determined by simulations on a large number of graphs. > --- paper_title: A source-based algorithm for delay-constrained minimum-cost multicasting paper_content: A new heuristic algorithm is presented for constructing minimum-cost multicast trees with delay constraints. The new algorithm can set variable delay bounds on destinations and handles two variants of the network cost optimization goal: one minimizing the total cost (total bandwidth utilization) of the tree, and another minimizing the maximal link cost (the most congested link). Instead of the single-pass tree construction approach used in most previous heuristics, the new algorithm is based on a feasible search optimization method which starts with the minimum-delay tree and monotonically decreases the cost by iterative improvement of the delay-bounded tree. The optimality of the costs of the delay-bounded trees obtained with the new algorithm is analyzed by simulation. Depending on how tight the delay bounds are, the costs of the multicast trees obtained with the new algorithm are shown to be very close to the costs of the trees obtained by the Kou, Markowsky and Berman's algorithm (1981). --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. 
Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. --- paper_title: QoSMIC: quality of service sensitive multicast Internet protocol paper_content: In this paper, we present, QoSMIC, a multicast protocol for the Internet that supports QoS-sensitive routing, and minimizes the importance of a priori configuration decisions (such as core selection). The protocol is resource-efficient, robust, flexible, and scalable. In addition, our protocol is provably loop-free.Our protocol starts with a resources-saving tree (Shared Tree) and individual receivers switch to a QoS-competitive tree (Source-Based Tree) when necessary. In both trees, the new destination is able to choose the most promising among several paths. An innovation is that we use dynamic routing information without relying on a link state exchange protocol to provide it. Our protocol limits the effect of pre-configuration decisions drastically, by separating the management from the data transfer functions; administrative routers are not necessarily part of the tree. This separation increases the robustness, and flexibility of the protocol. Furthermore, QoSMIC is able to adapt dynamically to the conditions of the network.The QoSMIC protocol introduces several new ideas that make it more flexible than other protocols proposed to date. In fact, many of the other protocols, (such as YAM, PIMSM, BGMP, CBT) can be seen as special cases of QoSMIC. This paper presents the motivation behind, and the design of QoSMIC, and provides both analytical and experimental results to support our claims. --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. --- paper_title: Building shared trees using a one-to-many joining mechanism paper_content: This paper presents a new approach for building shared trees which have the capability of providing multiple routes from the joining node onto an existing tree. The approach follows a design parameter of CBT and PIM in that it operates independently of any unicast routing protocol. However, a paradigm shift is introduced such that trees are built in an on-demand basis through the use of a one-to-many joining mechanism. In addition, the paper presents optimisations of the new mechanism to help constrain its impact in the case where many receivers exist for a given multicast group. --- paper_title: QoS routing in networks with uncertain parameters paper_content: This article considers the problem of routing connections with QoS requirements across networks, when the information available for making routing decisions is inaccurate. 
This uncertainty about the actual state of a network component arises naturally in a number of different environments, which are reviewed in the paper. The goal of the route selection process is then to identify a path that is most likely to satisfy the QoS requirements. For end to end delay guarantees, this problem is intractable. However we show that by decomposing the end-to-end constraint into local delay constraints, efficient and tractable solutions can be established. We first consider the simpler problem of decomposing the end-to-end constraint into local constraints, for a given path. We show that, for general distributions, this problem is also intractable. Nonetheless, by defining a certain class of probability distributions, which posses a certain convexity property, and restricting ourselves to that class, we are able to establish efficient and exact solutions. Moreover, we show that typical distributions would belong to that class. We then consider the general problem, of combined path optimization and delay decomposition. We present an efficient solution scheme for the above class of probability distributions. Our solution is similar to that of the restricted shortest-path problem, which renders itself to near-optimal approximations of polynomial complexity. We also show that yet simpler solutions exist in the special case of uniform distributions. --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. --- paper_title: Multi-path routing combined with resource reservation paper_content: In high-speed networks it is desirable to interleave routing and resource (such as bandwidth) reservation. The PNNI standard for private ATM networks is an example of an algorithm that does this using a sequential crank-back mechanism. We suggest the implementation of resource reservation along several routes in parallel. We present an analytical model that demonstrates that when there are several routes to the destination it pays to attempt reservation along more than a single route. Following this analytic observation, we present a family of algorithms that route and reserve resources along parallel subroutes. The algorithms of the family represent different trade-offs between the speed and the quality of the established route. The presented algorithms are simulated against several legacy algorithms, including the PNNI crank-back, and exhibit higher network utilization and faster connection set-up time. 
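The abstract completed above frames QoS routing under inaccurate state information as choosing the path most likely to satisfy the end-to-end delay bound. One crude way to make that concrete, assuming each link only advertises a delay interval, is a Monte Carlo estimate of the success probability of each candidate path; this is an illustration only, not the convexity-based decomposition of the end-to-end constraint developed in the cited paper.

```python
import random

def meets_deadline_probability(link_delay_ranges, delay_bound, trials=20000, seed=1):
    """Monte Carlo estimate of Pr[sum of link delays <= delay_bound], assuming each
    link delay is uniform on its advertised (low, high) interval."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(rng.uniform(lo, hi) for lo, hi in link_delay_ranges)
        hits += total <= delay_bound
    return hits / trials

def most_likely_path(candidate_paths, delay_bound):
    """Pick the candidate path most likely to satisfy the end-to-end bound."""
    return max(candidate_paths, key=lambda p: meets_deadline_probability(p, delay_bound))

# Example: two 3-hop candidates with uncertain per-link delays (in ms).
p1 = [(2, 6), (3, 9), (1, 4)]
p2 = [(1, 3), (5, 14), (2, 5)]
print(most_likely_path([p1, p2], delay_bound=15))
```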
--- paper_title: QoS routing via multiple paths using bandwidth reservation paper_content: The authors address the problem of computing a multipath, consisting of possibly overlapping paths, to transmit data from the source node s to the destination node d over a computer network while ensuring deterministic bounds on end-to-end delay or delivery rate.They consider two generic routing problems within the framework wherein bandwidth can be reserved, and guaranteed, once reserved, on various links of the communication network. The first problem requires that a message of finite length be transmitted from s to d within {tau} units of time. The second problem requires that a sequential message of r units be transmitted at a rate of {eta} such that maximum time difference between two units that are received out of order is no more than q. They propose a polynomial-time algorithm to the first problem based on an adaptation of the classical Ford-Fulkerson`s method. They present simulation results to illustrate the applicability of the proposed algorithm. They show the second problem to be NP-complete, and propose a polynomial-time approximately solution. --- paper_title: A distributed route-selection scheme for establishing real-time channels paper_content: To guarantee the delivery of real-time messages before their deadline, a real-time channel or connection must be established before the transmission of any real-time messages. During this channel establishment phase, one must first select a route between the source and destination of this channel and then reserve sufficient resources along this route so that the end-to-end delay over the selected route may not exceed the user-specified delay bound. --- paper_title: Simulation study of the capacity effects of dispersity routing for fault tolerant realtime channels paper_content: The paper presents a simulation study of the use of dispersity routing to provide fault tolerance on top of a connection oriented realtime service such as that provided by the Tenet scheme. A framework to study the dispersity schemes is presented. The simulations show that the dispersity schemes, by dividing the connection's traffic among multiple paths in the network, have a beneficent effect on the capacity of the network. Thus, for certain classes of dispersity schemes, we obtain a small improvement in fault tolerance as well as an improvement in the number of connections that the network can support. For other classes of dispersity schemes, greater improvement in service may be purchased at the cost of decrease in capacity. The paper explores the tradeoffs available through exhaustive simulations. We conclude that dispersity routing is a flexible approach to increasing the fault tolerance of realtime connections, which can provide a range of improvements in service with a corresponding range of costs. --- paper_title: Distributed quality-of-service routing in high-speed networks based on selective probing paper_content: We propose an integrated QoS routing framework based on selective probing for high-speed packet-switching networks. The framework is fully distributed and depends only on the local state maintained at every individual node. By using controlled diffusion computations, the framework captures the common messaging and computational structure of distributed QoS routing, and allows an efficient implementation due to its simplicity. 
Different distributed routing algorithms (DRAs) can be quickly developed by specifying only a few well-defined constraint-dependent parameters within the framework. Our simulation shows that the overhead of the proposed algorithms is stable and modest. ---
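Several of the references above (multi-path routing combined with resource reservation, QoS routing via multiple paths, and dispersity routing) spread a single connection's bandwidth demand over several routes. The sketch below shows the simplest greedy version of that split, driven only by each candidate path's bottleneck capacity; the data format and the greedy rule are assumptions made for illustration rather than any cited algorithm.

```python
def split_reservation(paths, request):
    """Greedily split a bandwidth request over candidate paths.

    paths: list of paths, each a list of per-hop available bandwidths.
    Returns [(path_index, reserved_bandwidth), ...] or None if infeasible."""
    bottlenecks = sorted(((min(p), i) for i, p in enumerate(paths)), reverse=True)
    plan, remaining = [], request
    for capacity, i in bottlenecks:
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        if take > 0:
            plan.append((i, take))
            remaining -= take
    return plan if remaining <= 0 else None

# Example: a 10 Mb/s request split over two candidate paths.
print(split_reservation([[6, 8, 7], [5, 5, 9]], request=10))  # [(0, 6), (1, 4)]
```

A real scheme would also have to reserve the chosen amounts hop by hop and release them on failure, which is where the crank-back and parallel-probing mechanisms discussed in these references come in.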
Title: An Overview of Quality-of-Service Routing for the Next Generation High-Speed Networks: Problems and Solutions Section 1: Introduction Description 1: Introduce the need for Quality-of-Service routing in next-generation high-speed networks and outline the paper's focus. Section 2: Maintenance of State Information Description 2: Discuss the tasks involved in routing, such as maintaining and updating state information, and explain the concepts of local and global state. Section 3: Routing Problems Description 3: Define and differentiate between unicast and multicast routing problems, and introduce the sub-classes within these two major categories. Section 4: Routing Strategies Description 4: Describe various routing strategies including source routing, distributed routing, and hierarchical routing, and discuss their respective strengths and weaknesses. Section 5: Unicast Routing Algorithms Description 5: Survey existing unicast routing algorithms, presenting specific algorithms, their methodologies, and comparative analysis of their pros and cons. Section 6: Distributed Routing Algorithms Description 6: Present distributed routing algorithms, including their specific methodologies and the analysis of their distinctive properties. Section 7: Hierarchical Routing Algorithms Description 7: Describe hierarchical routing protocols, focusing on examples like PNNI, and illustrate the routing processes within hierarchical networks. Section 8: Multicast Routing Algorithms Description 8: Examine multicast routing algorithms, focusing on problems such as bandwidth-constrained and delay-constrained multicast routing and the heuristic solutions proposed for these NP-complete problems. Section 9: QoS Routing and Other Network Components Description 9: Explore how QoS routing interacts with other network components like resource reservation, admission control, and QoS negotiation. Section 10: Future Directions Description 10: Identify and discuss potential future research areas and directions for improving QoS routing, handling imprecise state information, and integrating with best-effort traffic. Section 11: Summary Description 11: Summarize the key findings and insights of the paper, emphasizing the importance of efficient QoS routing in the future high-speed networks.
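The multicast routing section of the outline above (and the delay- and delay-variation-constrained references that precede it) can be illustrated with a very small heuristic: take the least-delay path from the source to every destination, merge those paths into a tree, and reject the tree if either the end-to-end delay bound or the inter-destination delay-variation bound is violated. The code below is only that illustration, with an assumed adjacency-list format; the cited heuristics are considerably more refined about cost and about repairing an infeasible tree.

```python
import heapq

def delay_shortest_paths(graph, src):
    """Dijkstra over per-link delays; graph[u] = [(v, delay), ...]."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, delay in graph.get(u, []):
            if d + delay < dist.get(v, float("inf")):
                dist[v], prev[v] = d + delay, u
                heapq.heappush(pq, (d + delay, v))
    return dist, prev

def multicast_tree(graph, src, destinations, delay_bound, variation_bound):
    """Merge least-delay source-to-destination paths into a tree; return its edge
    set only if both the delay bound and the delay-variation bound hold."""
    dist, prev = delay_shortest_paths(graph, src)
    if any(d not in dist or dist[d] > delay_bound for d in destinations):
        return None
    delays = [dist[d] for d in destinations]
    if max(delays) - min(delays) > variation_bound:
        return None
    edges = set()
    for d in destinations:
        node = d
        while node != src:
            edges.add((prev[node], node))
            node = prev[node]
    return edges

# Example: two destinations reached within a delay bound of 5 and variation bound of 2.
g = {"s": [("a", 1), ("b", 4)], "a": [("d1", 2), ("b", 1)], "b": [("d2", 1)], "d1": [], "d2": []}
print(multicast_tree(g, "s", ["d1", "d2"], delay_bound=5, variation_bound=2))
```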
A review of methods for spike sorting: the detection and classification of neural action potentials
21
--- paper_title: An Iterative Spike Separation Technique paper_content: A spike separation technique which combines data processing methods with extracellular probing techniques to allow simultaneous observation of multiple neural events is presented. A preparation, the locust ventral cord, allows spike separation. Experimental results and simulation indicate the usefulness of the method for this preparation. A feature of the data processing method allows the experimenter to direct the machine classification by an initial classification. Subsequently, the machine returns an indication of the quality of classification, allowing a reclassification or termination. --- paper_title: Simultaneous Studies of Firing Patterns in Several Neurons. paper_content: A tungsten microelectrode with several small holes burnt in the vinyl insulation enables the action potentials from several adjacent neurons to be observed simultaneously. A digital computer is used to separate the contributions of each neuron by examining and classifying the waveforms of the action potentials. These methods allow studies to be made of interactions between neurons that lie close together. --- paper_title: A Comparison of Techniques for Classification of Multiple Neural Signals paper_content: A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two-channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. --- paper_title: The real-time sorting of neuro-electric action potentials in multiple unit studies paper_content: Description of a method by which action potentials recorded simultaneously can be sorted in a moderate size machine in real-time and on-line. --- paper_title: Real-Time Classification of Multiunit Neural Signals Using Reduced Feature Sets paper_content: Classification of characteristic neural spike shapes in multi-unit recordings is performed in real time using a reduced feature set. A model of uncorrelated signal-related noise is used to reduce the feature set by choosing a subset of aperiodic samples which is effective for discrimination between signals by a nearest-mean algorithm. Initial signal classes are determined by an unsupervised clustering algorithm applied to the reduced features of the learning set events. Classification is carried out in real time using a distance measure derived for the reduced feature set. Examples of separation and correlation of multiunit activity from cat and frog visual systems are described. --- paper_title: A Comparison of Techniques for Classification of Multiple Neural Signals paper_content: A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two-channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. 
The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. --- paper_title: A totally automated system for the detection and classification of neural spikes paper_content: A system for neural spike detection and classification is presented which does not require a priori assumptions about spike shape or timing. The system is divided into two parts: a learning subsystem and a real-time detection and classification subsystem. The learning subsystem, comprising a feature learning phase and a template learning phase, extracts templates for each separate spike class. The real-time detection and classification subsystem identifies spikes in the noisy neural trace and sorts them into classes, based on the templates and the statistics of the background noise. Comparisons are made among three different schemes for the real-time detection and classification subsystem. Performance of the system is illustrated by using it to classify spikes in segments of neural activity recorded from monkey motor cortex and from guinea pig and ferret auditory cortexes. --- paper_title: Automatic nerve impulse identification and separation. paper_content: A heuristic method was developed to identify and to separate automatically unit nerve impulses from a multiunit recording. Up to 20 distinct units can be identified. The method can sequentially decompose superimposed nerve impulses if the rapidly changing region of at least one of them is relatively undistorted. The identification and separation procedure has been successfully applied to the extracellularly recorded neural activity associated with the shadow reflex pathway of the barnacle. The limitations of the procedure are discussed and additional applications of the technique are presented. --- paper_title: Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability paper_content: Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. --- paper_title: Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables paper_content: We discuss Bayesian methods for model averaging and model selection among Bayesian-network models with hidden variables. 
In particular, we examine large-sample approximations for the marginal likelihood of naive-Bayes models in which the root node is hidden. Such models are useful for clustering or unsupervised learning. We consider a Laplace approximation and the less accurate but more computationally efficient approximation known as the Bayesian Information Criterion (BIC), which is equivalent to Rissanen's (1987) Minimum Description Length (MDL). Also, we consider approximations that ignore some off-diagonal elements of the observed information matrix and an approximation proposed by Cheeseman and Stutz (1995). We evaluate the accuracy of these approximations using a Monte-Carlo gold standard. In experiments with artificial and real examples, we find that (1) none of the approximations are accurate when used for model averaging, (2) all of the approximations, with the exception of BIC/MDL, are accurate for model selection, (3) among the accurate approximations, the Cheeseman–Stutz and Diagonal approximations are the most computationally efficient, (4) all of the approximations, with the exception of BIC/MDL, can be sensitive to the prior distribution over model parameters, and (5) the Cheeseman–Stutz approximation can be more accurate than the other approximations, including the Laplace approximation, in situations where the parameters in the maximum a posteriori configuration are near a boundary. --- paper_title: A Comparison of Techniques for Classification of Multiple Neural Signals paper_content: A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two-channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. --- paper_title: Optimal discrimination and classification of neuronal action potential waveforms from multiunit, multichannel recordings using software-based linear filters paper_content: Describes advanced protocols for the discrimination and classification of neuronal spike waveforms within multichannel electrophysiological recordings. The programs are capable of detecting and classifying the spikes from multiple, simultaneously active neurons, even in situations where there is a high degree of spike waveform superposition on the recording channels. The protocols are based on the derivation of an optimal linear filter for each individual neuron. Each filter is tuned to selectively respond to the spike waveform generated by the corresponding neuron, and to attenuate noise and the spike waveforms from all other neurons. The protocol is essentially an extension of earlier work (S. Andreassen et al., 1979; W.M. Roberts and D.K. Hartline, 1975; R.B. Stein et al., 1979). However, the protocols extend the power and utility of the original implementations in two significant respects. First, a general single-pass automatic template estimation algorithm was derived and implemented. Second, the filters were implemented within a software environment providing a greatly enhanced functional organization and user interface. 
The utility of the analysis approach was demonstrated on samples of multiunit electrophysiological recordings from the cricket abdominal nerve cord. --- paper_title: Multidimensional binary search trees used for associative searching paper_content: This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n-record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given. --- paper_title: Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability paper_content: Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. --- paper_title: Fast K-dimensional tree algorithms for nearest neighbor search with application to vector quantization encoding paper_content: Fast search algorithms are proposed and studied for vector quantization encoding using the K-dimensional (K-d) tree structure. Here, the emphasis is on the optimal design of the K-d tree for efficient nearest neighbor search in multidimensional space under a bucket-Voronoi intersection search framework. 
Efficient optimization criteria and procedures are proposed for designing the K-d tree, for the case when the test data distribution is available (as in vector quantization application in the form of training data) as well as for the case when the test data distribution is not available and only the Voronoi intersection information is to be used. The criteria and bucket-Voronoi intersection search procedure are studied in the context of vector quantization encoding of speech waveform. They are empirically observed to achieve constant search complexity for O(log N) tree depths and are found to be more efficient in reducing the search complexity. A geometric interpretation is given for the maximum product criterion, explaining reasons for its inefficiency with respect to the optimization criteria. --- paper_title: Recognition of multiunit neural signals paper_content: An essential step in studying nerve cell interaction during information processing is the extracellular microelectrode recording of the electrical activity of groups of adjacent cells. The recording usually contains the superposition of the spike trains produced by a number of neurons in the vicinity of the electrode. It is therefore necessary to correctly classify the signals generated by these different neurons. This problem is considered, and a new classification scheme is developed which does not require human supervision. A learning stage is first applied on the beginning portion of the recording to estimate the typical spike shapes of the different neurons. As for the classification stage, a method is developed which specifically considers the case when spikes overlap temporally. The method minimizes the probability of error, taking into account the statistical properties of the discharges of the neurons. The method is tested on a real recording as well as on synthetic data. --- paper_title: Detection, classification, and superposition resolution of action potentials in multiunit single-channel recordings by an on-line real-time neural network paper_content: Determination of single-unit spike trains from multiunit recordings obtained during extracellular recording has been the focus of many studies over the last two decades. In multiunit recordings, superpositions can occur with high frequency if the firing rates of the neurons are high or correlated, making superposition resolution imperative for accurate spike train determination. In this work, a connectionist neural network (NN) was applied to the spike sorting challenge. A novel training scheme was developed which enabled the NN to resolve some superpositions using single-channel recordings. Simulated multiunit spike trains were constructed from templates and noise segments that were extracted from real extracellular recordings. The simulations were used to determine the performances of the NN and a simple matched template filter (MTF), which was used as a basis for comparison. The network performed as well as the MTF in identifying nonoverlapping spikes, and was significantly better in resolving superpositions and rejecting noise. An on-line, real-time implementation of the NN discriminator, using a high-speed digital signal processor mounted inside an IBM-PC, is now in use in six laboratories. 
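Several of the abstracts above (for instance, the matched template filter used as the baseline for the neural-network discriminator) classify a detected spike by comparing it against stored template waveforms. The snippet below is a minimal, illustrative template-matching classifier in NumPy; the residual-distance threshold, array shapes, and function name are assumptions for demonstration and are not taken from the cited work.

```python
import numpy as np

def classify_spike(waveform, templates, max_dist=5.0):
    """Assign a detected spike to the nearest stored template (illustrative sketch).

    waveform : 1-D array of samples around the detected spike peak.
    templates: 2-D array, one stored mean waveform per putative unit.
    Returns the index of the best-matching template, or None when even the
    best match exceeds the (assumed) residual-distance threshold.
    """
    # Sum of squared residuals between the spike and every template.
    residuals = np.sum((templates - waveform) ** 2, axis=1)
    best = int(np.argmin(residuals))
    return best if residuals[best] <= max_dist else None

# Toy usage: two noiseless "templates" and a noisy copy of the second one.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)
templates = np.stack([np.sin(2 * np.pi * t), np.exp(-10 * (t - 0.3) ** 2)])
spike = templates[1] + 0.05 * rng.standard_normal(32)
print(classify_spike(spike, templates))  # -> 1
```

Real systems such as those cited above additionally whiten the noise, align spikes before matching, and handle temporally overlapping spikes, none of which this sketch attempts.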
--- paper_title: Tetrodes markedly improve the reliability and yield of multiple single-unit isolation from multi-unit recordings in cat striate cortex paper_content: The majority of techniques for separating multiple single-unit spike trains from a multi-unit recording rely on the assumption that different cells exhibit action potentials having unique amplitudes and waveforms. When this assumption fails, due to the similarity of spike shape among different cells or to the presence of complex spikes with declining intra-burst amplitude, these methods lead to errors in classification. In an effort to avoid these errors, the stereotrode (McNaughton et al., 1983) and later the tetrode (O'Keefe and Reece, 1993; Wilson and McNaughton, 1993) recording techniques were developed. Because the latter technique has been applied primarily to the hippocampus, we sought to evaluate its performance in the neocortex. Multi-unit recordings, using single tetrodes, were made at 28 sites in area 17 of 3 anesthetized cats. Neurons were activated with moving bars and square wave gratings. Single units were separated by identification of clusters in 2-D projections of either peak-to-peak amplitude, spike width, spike area, or the 1st versus 2nd principal components of the waveforms recorded on each channel. Using tetrodes, we recorded a total of 154 single cells (mean = 5.4, max = 9). By cross-checking the performance of the tetrode with the stereotrode and electrode, we found that the best of the 6 possible stereotrode pairs and the best of 4 possible electrodes from each tetrode yielded 102 (mean = 3.6, max = 7) and 95 (mean = 3.4, max = 6) cells, respectively. Moreover, we found that the number of cells isolated at each site by the tetrode was greater than the stereotrode or electrode in 16/28 and 28/28 cases, respectively. Thus, both stereotrodes, and particularly electrodes, often lumped 2 or more cells in a single cluster that could be easily separated by the tetrode. We conclude that tetrode recording currently provides the best and most reliable method for the isolation of multiple single units in the neocortex using a single probe. --- paper_title: The stereotrode: A new technique for simultaneous isolation of several single units in the central nervous system from multiple unit records paper_content: A new method is described for the recording and discrimination of extracellular action potentials in CNS regions with high cellular packing density or where there is intrinsic variation in action potential amplitude during burst discharge. The method is based on the principle that cells with different ratios of distances from two electrode tips will have different spike-amplitude ratios when recorded on two channels. The two channel amplitude ratio will remain constant regardless of intrinsic variation in the absolute amplitude of the signals. The method has been applied to the rat hippocampal formation, from which up to 5 units have been simultaneously isolated. The construction of the electrodes is simple, relatively fast, and reliable, and their low tip impedances result in excellent signal to noise characteristics. --- paper_title: Blind separation of auditory event-related brain responses into independent components paper_content: Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. 
We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states. --- paper_title: Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability paper_content: Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. --- paper_title: Recognition of multiunit neural signals paper_content: An essential step in studying nerve cell interaction during information processing is the extracellular microelectrode recording of the electrical activity of groups of adjacent cells. The recording usually contains the superposition of the spike trains produced by a number of neurons in the vicinity of the electrode. It is therefore necessary to correctly classify the signals generated by these different neurons. This problem is considered, and a new classification scheme is developed which does not require human supervision. A learning stage is first applied on the beginning portion of the recording to estimate the typical spike shapes of the different neurons. 
As for the classification stage, a method is developed which specifically considers the case when spikes overlap temporally. The method minimizes the probability of error, taking into account the statistical properties of the discharges of the neurons. The method is tested on a real recording as well as on synthetic data. --- paper_title: Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability paper_content: Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. --- paper_title: Tetrodes markedly improve the reliability and yield of multiple single-unit isolation from multi-unit recordings in cat striate cortex paper_content: The majority of techniques for separating multiple single-unit spike trains from a multi-unit recording rely on the assumption that different cells exhibit action potentials having unique amplitudes and waveforms. When this assumption fails, due to the similarity of spike shape among different cells or to the presence of complex spikes with declining intra-burst amplitude, these methods lead to errors in classification. In an effort to avoid these errors, the stereotrode (McNaughton et al., 1983) and later the tetrode (O'Keefe and Reece, 1993; Wilson and McNaughton, 1993) recording techniques were developed. Because the latter technique has been applied primarily to the hippocampus, we sought to evaluate its performance in the neocortex. Multi-unit recordings, using single tetrodes, were made at 28 sites in area 17 of 3 anesthetized cats. Neurons were activated with moving bars and square wave gratings. Single units were separated by identification of clusters in 2-D projections of either peak-to-peak amplitude, spike width, spike area, or the 1st versus 2nd principal components of the waveforms recorded on each channel. Using tetrodes, we recorded a total of 154 single cells (mean = 5.4, max = 9). By cross-checking the performance of the tetrode with the stereotrode and electrode, we found that the best of the 6 possible stereotrode pairs and the best of 4 possible electrodes from each tetrode yielded 102 (mean = 3.6, max = 7) and 95 (mean = 3.4, max = 6) cells, respectively. Moreover, we found that the number of cells isolated at each site by the tetrode was greater than the stereotrode or electrode in 16/28 and 28/28 cases, respectively. 
Thus, both stereotrodes, and particularly electrodes, often lumped 2 or more cells in a single cluster that could be easily separated by the tetrode. We conclude that tetrode recording currently provides the best and most reliable method for the isolation of multiple single units in the neocortex using a single probe. --- paper_title: A Comparison of Techniques for Classification of Multiple Neural Signals paper_content: A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two-channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. ---
Title: A review of methods for spike sorting: the detection and classification of neural action potentials Section 1: Introduction Description 1: Introduce the challenge of detecting neural spike activity, the importance of spike sorting, and the goals of this review. Section 2: Measuring neural activity Description 2: Discuss the historical background and the basic techniques and challenges of measuring neural activity with electrodes. Section 3: The basic problems in spike sorting Description 3: Illustrate the fundamental issues in spike sorting, including background noise, overlapping spikes, and the identification of different neurons. Section 4: Threshold detection Description 4: Describe the simplest method of spike detection using voltage thresholds and discuss its advantages and limitations. Section 5: Types of detection errors Description 5: Discuss the common errors in spike detection, such as false positives and false negatives, and misclassification due to overlaps. Section 6: Detecting and classifying multiple spike shapes Description 6: Review methods for detecting and classifying spikes of different shapes, starting from simple feature analysis to more complex techniques. Section 7: Feature analysis Description 7: Explain the use and selection of features for spike shape characterization and their role in spike sorting. Section 8: Principal component analysis Description 8: Discuss the method of principal component analysis (PCA) for automatically choosing features and its application in spike sorting. Section 9: Cluster analysis Description 9: Describe methods for automatically determining cluster boundaries and classifying data into clusters. Section 10: Bayesian clustering and classification Description 10: Review the probabilistic approach to clustering and classification using Bayesian methods. Section 11: Automatic removal of outliers Description 11: Discuss strategies for minimizing the impact of outliers on spike classification. Section 12: Bayesian classification Description 12: Explain the advantages of Bayesian classification in quantifying classification certainty and monitoring isolation quality. Section 13: Clustering in higher dimensions and template matching Description 13: Describe how clustering and template matching can be extended to higher dimensions for improved classification accuracy. Section 14: Choosing the number of classes Description 14: Discuss the challenges and methods for determining the optimal number of classes in clustering approaches. Section 15: Estimating spike shapes with interpolation Description 15: Explain methods involving interpolation and regularization to improve spike shape estimation and classification accuracy. Section 16: Filter-based methods Description 16: Describe the usage of optimal filtering methods for spike sorting and their comparative accuracy. Section 17: Overlapping spikes Description 17: Review methods specifically designed to handle the classification and decomposition of overlapping spikes. Section 18: Multiple electrodes Description 18: Explore the use and advantages of multiple electrode recordings for spike sorting. Section 19: Independent component analysis Description 19: Discuss the application of independent component analysis (ICA) for separating mixed signals in multi-electrode recordings. 
Section 20: Related problems in spike sorting Description 20: Identify and explain challenges that can affect spike sorting methods, such as burst-firing neurons, electrode drift, non-stationary background noise, and spike alignment. Section 21: Summary Description 21: Summarize the review, compare various methods, and discuss the current state and future direction of spike sorting techniques.
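Sections 8 and 9 of the outline above describe the common pipeline of projecting spike waveforms onto their principal components and then clustering in that low-dimensional feature space. The sketch below is a generic PCA-plus-k-means illustration of that pipeline using NumPy only; the number of components, cluster count, iteration limit, and data shapes are assumptions for demonstration and are not prescribed by the survey or the cited papers.

```python
import numpy as np

def pca_features(spikes, n_components=2):
    """Project aligned spike waveforms (n_spikes x n_samples) onto the
    top principal components of the waveform ensemble."""
    centered = spikes - spikes.mean(axis=0)
    # Rows of vt are the principal directions (right singular vectors).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def kmeans(features, k=2, iters=50, seed=0):
    """Plain k-means in feature space; returns integer cluster labels."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1
        )
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy usage with two synthetic spike shapes plus additive noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)
shapes = np.stack([np.sin(2 * np.pi * t), -0.7 * np.sin(2 * np.pi * t)])
spikes = np.vstack([shapes[i % 2] + 0.1 * rng.standard_normal(40) for i in range(100)])
labels = kmeans(pca_features(spikes), k=2)
```

As the outline's later sections note, a fixed cluster count and hard assignments are the weak points of this simple scheme; Bayesian clustering and outlier handling address exactly those issues.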
Pedestrian Protection Systems: Issues, Survey, and Challenges
9
--- paper_title: Artificial Vision in Road Vehicles paper_content: The last few decades have witnessed the birth and growth of a new sensibility to transportation efficiency. In particular the need for efficient and improved people and goods mobility has pushed researchers to address the problem of intelligent transportation systems. This paper surveys the most advanced approaches to (partial) customization of the road following task, using on-board systems based on artificial vision. The functionalities of lane detection, obstacle detection and pedestrian detection are described and classified, and their possible application in future road vehicles is discussed. --- paper_title: WORK TRIPS AND SAFETY OF BICYCLISTS paper_content: A large number of commuters use bicycles to get to work in India. However, there are no special facilities for them on Indian roads and they are involved in a disproportionate share of road crashes. Though many countries around the world have put in place policies for integrating bicycle traffic on all arterial roads, there is no such move in India yet. The enactment of such policies is necessary as cycling and walking, separately or in conjunction with public transport, offer significant positive health gains and reduction in pollution and accidents. For such policies to be successful Indian professionals have to develop designs of road cross-sections and infrastructure that suit our special needs. --- paper_title: Sensor-based pedestrian protection paper_content: Pedestrian accidents represent the second-largest source of traffic-related injuries and fatalities, after accidents involving car passengers. Children are especially at risk. A complementary approach to accident prevention is to focus on sensor-based solutions, which let vehicles "look ahead" and detect pedestrians in their surroundings. The article investigates the state of the art in this domain, reviewing passive, video-based approaches and approaches involving active sensors (radar and laser range finders). --- paper_title: Pedestrian collision avoidance systems: a survey of computer vision based recent studies paper_content: This paper gives a survey of recent research on pedestrian collision avoidance systems. Collision avoidance not only requires detection of pedestrians, but also collision prediction using pedestrian dynamics and behavior analysis. The paper reviews various approaches based on cues such as shape, motion, and stereo used for detecting pedestrians from visible as well as non-visible light sensors. This is followed by the study of research dealing with probabilistic modeling of pedestrian behavior for predicting collisions between pedestrian and vehicle. The literature review is also condensed in tabular form for quick reference. --- paper_title: Strengthening the prevention and care of injuries worldwide paper_content: The global burden of injuries is enormous, but has often been overlooked in attempts to improve health. We review measures that would strengthen existing efforts to prevent and treat injuries worldwide. Scientifically-based efforts to understand risk factors for the occurrence of injury are needed and they must be translated into prevention programmes that are well designed and assessed. Areas for potential intervention include environmental modification, improved engineering features of motor vehicle and other products, and promotion of safe behaviours through social marketing, legislation, and law enforcement. 
Treatment efforts need to better define the most high-yield services and to promote these in the form of essential health services. To achieve these changes, there is a need to strengthen the capacity of national institutions to do research on injury control; to design and implement countermeasures that address injury risk factors and deficiencies in injury treatment; and to assess the effectiveness of such countermeasures. Although much work remains to be done in high-income countries, even greater attention is needed in less-developed countries, where injury rates are higher, few injury control activities have been undertaken, and where most of the world's population lives. In almost all areas, injury rates are especially high in the most vulnerable sections of the community, including those of low socioeconomic status. Injury control activities should, therefore, be undertaken in a context of attention to human rights and other broad social issues. --- paper_title: Review of Urban Transportation in India paper_content: Cities play a vital role in promoting economic growth and prosperity. The development of cities largely depends upon their physical, social, and institutional infrastructure. In this context, the importance of interurban transportation is paramount. This article provides an overview of urban transport issues in India. Rather than covering every aspect of urban transportation, it primarily focuses on those areas that are important from a policy point of view. The article first reviews the trends of vehicular growth and availability of transport infrastructure in Indian cities. This is followed by a discussion on the nature and magnitude of urban transport problems such as congestion, pollution, and road accidents. Building on this background, the article proposes policy measures to improve urban transportation in India. --- paper_title: Designing road vehicles for pedestrian protection paper_content: Collisions between pedestrians and road vehicles present a major challenge for public health, trauma medicine, and traffic safety professionals. More than a third of the 1.2 million people killed and the 10 million injured annually in road traffic crashes worldwide are pedestrians.1 Compared with injured vehicle occupants, pedestrians sustain more multisystem injuries, with concomitantly higher injury severity scores and mortality.2 Although a disproportionately large number of these crashes occur in developing and transitional countries, pedestrian casualties also represent a huge societal cost in industrialised nations. 
In Britain pedestrian injuries are more than twice as likely to be fatal as injuries to vehicle occupants3 and result in an average cost to society of £57 400, nearly twice that of injuries to vehicle occupants.4 Summary points: Pedestrian-vehicle crashes are responsible for more than a third of all traffic related fatalities and injuries worldwide; lower limb trauma is the commonest pedestrian injury, while head injury is responsible for most pedestrian fatalities; standardised tests that simulate the most common pedestrian-vehicle crashes are being used to evaluate vehicle countermeasures to reduce pedestrian injury; energy absorbing components such as compliant bumpers, dynamically raised bonnets, and windscreen airbags are being developed for improved pedestrian protection. Despite the size of the pedestrian injury problem, research to reduce traffic related injuries has concentrated almost exclusively on increasing the survival rates for vehicle occupants. Most attempts made to reduce pedestrian injuries have focused solely on isolation techniques such as pedestrian bridges, public education, and traffic regulations and have not included changes to vehicle design. The lack of effort devoted to vehicle modifications for pedestrian safety has stemmed primarily from a societal view that the injury caused by a large, rigid vehicle hitting a small, fragile pedestrian cannot be significantly reduced by alterations to the vehicle structure. Crash engineers, … --- paper_title: Crosswalk Markings and the Risk of Pedestrian–Motor Vehicle Collisions in Older Pedestrians paper_content: CONTEXT: Motor vehicles struck and killed 4739 pedestrians in the United States in the year 2000. Older pedestrians are at especially high risk. OBJECTIVE: To determine whether crosswalk markings at urban intersections influence the risk of injury to older pedestrians. DESIGN: Case-control study in which the units of study were crossing locations. SETTING: Six cities in Washington and California, with case accrual from February 1995 through January 1999. PARTICIPANTS: A total of 282 case sites were street-crossing locations at an intersection where a pedestrian aged 65 years or older had been struck by a motor vehicle while crossing the street; 564 control sites were other nearby crossings that were matched to case sites based on street classification. Trained observers recorded environmental characteristics, vehicular traffic flow and speed, and pedestrian use at each site on the same day of the week and time of day as when the case event had occurred. MAIN OUTCOME MEASURE: Risk of pedestrian-motor vehicle collision involving an older pedestrian. RESULTS: After adjusting for pedestrian flow, vehicle flow, crossing length, and signalization, risk of a pedestrian-motor vehicle collision was 2.1-fold greater (95% confidence interval, 1.1-4.0) at sites with a marked crosswalk. Almost all of the excess risk was due to 3.6-fold (95% confidence interval, 1.7-7.9) higher risk associated with marked crosswalks at sites with no traffic signal or stop sign. CONCLUSIONS: Crosswalk markings appear associated with increased risk of pedestrian-motor vehicle collision to older pedestrians at sites where no signal or stop sign is present to halt traffic. --- paper_title: SEEING CROSSWALKS IN A NEW LIGHT paper_content: The U.S. 
Federal Highway Administration (FHWA) has made improved pedestrian safety a priority and seeks to achieve a 10% reduction in pedestrian fatalities by 2007. Research staff from the FHWA Office of Safety Research and Development at the Turner-Fairbank Highway Research Center are investigating new ways to help reach this goal. Among the research topics under study is evaluation of countermeasures to improve safety in pedestrian crossings that are not controlled by traffic signals. This includes in-roadway warning lights in which each side of a crosswalk is lined with a series of amber lights embedded in the roadway that face oncoming traffic. The lights are visible to approaching drivers as a warning that a pedestrian is in or near the marked crosswalk. This article describes an FHWA study that examined pedestrian and driver behavior at crosswalks in Alexandria, Virginia, during daylight and dark conditions over the course of 1 year. Further evaluations will be conducted immediately before, immediately after, and 1 year after in-roadway warning lights are installed. --- paper_title: Safety Analysis of Marked Versus Unmarked Crosswalks in 30 Cities paper_content: Deciding where to mark crosswalks is only one consideration in selecting appropriate solutions to improve pedestrian safety and access. Five years of pedestrian crashes at 1,000 marked crosswalks and 1,000 matched unmarked comparison sites were analyzed. More substantial improvements were recommended to provide for safer pedestrian crossings. --- paper_title: A review of evidence-based traffic engineering measures designed to reduce pedestrian-motor vehicle crashes. paper_content: We provide a brief critical review and assessment of engineering modifications to the built environment that can reduce the risk of pedestrian injuries. In our review, we used the Transportation Research Information Services database to conduct a search for studies on engineering countermeasures documented in the scientific literature. We classified countermeasures into 3 categories: speed control, separation of pedestrians from vehicles, and measures that increase the visibility and conspicuity of pedestrians. We determined the measures and settings with the greatest potential for crash prevention. Our review, which emphasized inclusion of studies with adequate methodological designs, showed that modification of the built environment can substantially reduce the risk of pedestrian-vehicle crashes. --- paper_title: Structural Hood and Hinge Concepts for Pedestrian Protection paper_content: Future legislation for pedestrian protection in Europe and Japan considers standardized test methods and test requirements relevant for a type approval. The first phase of legal introduction starts in 2005 and a more stringent second phase will follow in 2010. This paper consists of three main chapters. The chapter "Requirements" starts with a summary of the pedestrian protection-related requirements for head impact and its conflicting requirements for the vehicle handling and driving. The second chapter "Hood Concepts" discusses how the hood design could become compatible with the pedestrian protection requirements. Concepts for the hood design fulfilling both the European and the Japanese requirements are described. The impact of the hood design parameters on the head impact performance is shown and different concept solutions are presented. The third chapter "Hood Hinge Concepts" examines the hinge performance for pedestrian protection in detail. 
The mounting points of the hood, such as hinges, latches and bumper stops, are the most critical points for head impact. Different hinge concepts and their impact on the head impact performance are shown. The influence of the hinge parameters on the acceleration curves and the HPC values is discussed and conclusions for the hinge design as well as for the vehicle structure are drawn. --- paper_title: Comprehensive approach to increased pedestrian safety in pedestrian—car accidents paper_content: At the Ford Forschungszentrum Aachen a finite element pedestrian humanoid model for use in pedestrian accident simulations was constructed that is capable of being used as a vehicle engineering development tool. To further improve the understanding of the kinematics of pedestrian accidents and to optimise the computer simulation program it is necessary to collect a set of highly detailed real world data. At present that data is either unavailable, or not sufficiently accurate for this purpose. To meet these targets an interdisciplinary study has been established. In parallel a demonstrator vehicle has been built to show future technologies in pedestrian safety. --- paper_title: Designing road vehicles for pedestrian protection paper_content: Collisions between pedestrians and road vehicles present a major challenge for public health, trauma medicine, and traffic safety professionals. More than a third of the 1.2 million people killed and the 10 million injured annually in road traffic crashes worldwide are pedestrians.1 Compared with injured vehicle occupants, pedestrians sustain more multisystem injuries, with concomitantly higher injury severity scores and mortality.2 Although a disproportionately large number of these crashes occur in developing and transitional countries, pedestrian casualties also represent a huge societal cost in industrialised nations. In Britain pedestrian injuries are more than twice as likely to be fatal as injuries to vehicle occupants3 and result in an average cost to society of £57 400, nearly twice that of injuries to vehicle occupants.4 Summary points: Pedestrian-vehicle crashes are responsible for more than a third of all traffic related fatalities and injuries worldwide; lower limb trauma is the commonest pedestrian injury, while head injury is responsible for most pedestrian fatalities; standardised tests that simulate the most common pedestrian-vehicle crashes are being used to evaluate vehicle countermeasures to reduce pedestrian injury; energy absorbing components such as compliant bumpers, dynamically raised bonnets, and windscreen airbags are being developed for improved pedestrian protection. Despite the size of the pedestrian injury problem, research to reduce traffic related injuries has concentrated almost exclusively on increasing the survival rates for vehicle occupants. Most attempts made to reduce pedestrian injuries have focused solely on isolation techniques such as pedestrian bridges, public education, and traffic regulations and have not included changes to vehicle design. The lack of effort devoted to vehicle modifications for pedestrian safety has stemmed primarily from a societal view that the injury caused by a large, rigid vehicle hitting a small, fragile pedestrian cannot be significantly reduced by alterations to the vehicle structure. 
Crash engineers, … --- paper_title: Impact of Pedestrian Presence on Movement of Left-Turning Vehicles: Method, Preliminary Results & Possible Use in Intersection Decision Support paper_content: Warning systems are being developed for left-turning vehicles at intersections where protected left-turns are not warranted or cannot be provided, based on limitations of right of way or intersection capacity. These are meant to provide warnings to left-turning vehicles of vehicles approaching from the opposite direction, when the time to turn may be deemed unsafe. To implement these warning systems, it is necessary to estimate, in near real time, the probability of conflict between the two approaching vehicles. A study is being conducted with the help of video and radar at various intersections, to obtain estimates of turning time and acceptable gaps for drivers under various situations. Initial pilot observations indicate that the presence of pedestrians in intersections had an immediate and substantial impact on movement of left-turning vehicles. From a preliminary systematic video analysis of the intersection, in the presence of pedestrians on the destination crosswalk, the mean and standard deviation of both the gap and gap components (i.e., turning time and the "buffer" between the turning vehicle and the next oncoming vehicle) increased. On the basis of these observations, pedestrian-detection mechanisms may be useful in such intersection warning systems, with a threshold for warning that is adjusted for pedestrians when they are present or in the vicinity of the destination crosswalk. --- paper_title: Using occupancy grids for mobile robot perception and navigation paper_content: An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids for mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered. --- paper_title: Observations of Driver Time Gap Acceptance at Intersections in Left-Turn Across-Path-Opposite-Direction Scenarios paper_content: Intersection collision warning systems can potentially reduce the number of collisions and associated losses. A critical design aspect of these systems is the selection of warning criteria, which represent a set of conditions and parameters under which the decision and the timing to issue warnings are determined. Proper warning criteria allow the generation of timely signals for drivers while minimizing false and nuisance alarms. The paper describes the development of a methodology to observe and analyze the selection of time gaps exhibited by driver behaviors in a real-world setting. The data collection procedures and analysis techniques are explained for left-turn across-path-opposite-direction scenarios, which constitute more than a quarter of crossing path crashes at intersections. 
Exemplar data sets from an urban, signalized intersection are used to illustrate methods of deriving time gap acceptance behaviors. The extracted information can serve as the basis for selecting gap acceptance thresholds in warning criteria, and the demonstrated methodology can be applied in the development of intersection collision warning systems. --- paper_title: Effects of traffic density on communication requirements for cooperative intersection collision avoidance systems (CICAS) paper_content: Cooperative Intersection Collision Avoidance Systems (CICAS) are likely to be among the first and most quality-of-service-critical users of wireless DSRC communications. Vehicle and infrastructure systems need to exchange data about the state of the intersection with high reliability and low latency. This paper estimates the number of vehicles that would need to share the DSRC safety communication channel at an intersection under worst-case conditions in rural, suburban and urban environments, to provide a starting point for defining the capacity that the wireless channel needs to provide in order to support CICAS services. --- paper_title: Vehicle Surround Capture: Survey of Techniques and a Novel Omni-Video-Based Approach for Dynamic Panoramic Surround Maps paper_content: Awareness of what surrounds a vehicle directly affects the safe driving and maneuvering of an automobile. This paper focuses on the capture of vehicle surroundings using video inputs. Surround information or maps can help in studies of driver behavior as well as provide critical input in the development of effective driver assistance systems. A survey of literature related to surround analysis is presented, emphasizing detecting objects such as vehicles, pedestrians, and other obstacles. Omni cameras, which give a panoramic view of the surroundings, can be useful for visualizing and analyzing the nearby surroundings of the vehicle. The concept of a Dynamic Panoramic Surround (DPS) map that shows the nearby surroundings of the vehicle and detects the objects of importance on the road is introduced. A novel approach for synthesizing the DPS using stereo and motion analysis of video images from a pair of omni cameras on the vehicle is developed. Successful generation of the DPS in experimental runs on an instrumented vehicle test bed is demonstrated. These experiments prove the basic feasibility and show promise of the omni-camera-based DPS capture algorithm to provide useful semantic descriptors of the state of moving vehicles and obstacles in the vicinity of a vehicle. --- paper_title: Pedestrian detection in transit bus application: sensing technologies and safety solutions paper_content: Pedestrian safety is a primary traffic issue in urban environments. The use of modern sensing technologies to improve pedestrian safety has remained an active research topic for years. A variety of sensing technologies have been developed for pedestrian detection. The application of pedestrian detection on transit vehicle platforms is desirable and feasible in the near future. In this paper, potential sensing technologies are first reviewed for their advantages and limitations. Several sensors are then chosen for further experimental testing and evaluation. A reliable sensing system will require a combination of multiple sensors to deal with near-range detection in stationary conditions and longer-range detection in moving conditions. 
A vehicle-infrastructure integrated solution is suggested for pedestrian detection in the transit bus application. --- paper_title: Intelligent Vehicle Technology And Trends paper_content: This book, a groundbreaking resource, offers professionals a comprehensive overview of cutting-edge intelligent vehicle (IV) systems aimed at providing enhanced safety, greater productivity, and less stress for drivers. Rather than bogging readers down with difficult technical discourse, this easy to understand book presents a conceptual and realistic view of how IV systems work and the issues involved with their introduction into road vehicles. Offering a thorough understanding of how electronics and electronic systems must work within automobiles, heavy trucks and buses, the book examines: (1) real world IV products, along with practical issues, including cost, market aspects, driver interface, and user acceptance; (2) current systems such as adaptive cruise control, lane departure warning, and forward collision mitigation; (3) the next wave of driver assist systems, including pedestrian detection, lane-keeping assistance, and seamless information flow between road vehicle and the road infrastructure; (4) traffic assist systems, in which intelligent vehicles automatically coordinate their movements to improve traffic flow; and (5) a view of the future of this rapidly evolving technological area. --- paper_title: Obstacle Detection and Pedestrian Recognition Using A 3D PMD Camera paper_content: This paper presents a 3D-camera system and appropriate algorithms for the image processing to provide pedestrian recognition. According to international legislative proposals the automotive industry is forced to take action in the area of protecting vulnerable road users. The presented photonic mixer device (PMD) sensor system is able to fulfil specific requirements of a pedestrian protection assistant. It consists of a sensor with a resolution of 64 × 16 pixels, lightsources, and the image processing unit. The light emitting diodes (LEDs) of the lightsources emit a modulated signal in the infrared (IR) spectrum. The sensor calculates the object distance by means of the phase of the reflected signal. The range of the system is approximately 15 m for pedestrians and up to 30 m for objects with high reflectivity, e.g. cars with number plates. The horizontal field-of-view is 55°. The presented image processing unit consists of two main steps. In the first step, a segmentation method that is both robust and efficient is performed to create reliable detections of the objects and a useful description of their projection in the image plane. It uses the linking pyramid method regarding the partition of the distance image and a contour-based grouping algorithm, where the objects are described using the chain code representation. In the second step a classification of the resulting objects is carried out. Depending on the distance of the objects which have to be classified, a shape- or motion-based verification is applied adaptively. Both approaches discussed in this paper deliver very good results at the corresponding distance and represent a solid foundation for further work. --- paper_title: Vehicle mounted wide FOV stereo for traffic and pedestrian detection paper_content: This paper describes an approach for detecting objects in front of an automobile using wide field of view stereo with a pair of omni cameras. Several configurations are suggested for effective detection of vehicles and pedestrians. 
The omni cameras are calibrated using sets of parallel lines on a parking lot. The calibration is used to rectify the omni images. Stereo matching is performed on the rectified images to detect other vehicles and pedestrians. Experimental results show promise of detecting these objects on the road. --- paper_title: Pedestrian detection using stereo night vision paper_content: This paper presents a method for pedestrian detection using a stereo night vision system installed on the vehicle. Motion information extracted from an image sequence is used to locate possible pedestrians, as moving people have motions that are not consistent with the movement of the background. Experimental results are shown to validate our approach and comparisons have been carried out between our approach and frame-by-frame based pattern recognition approaches. --- paper_title: Active frame subtraction for pedestrian detection from images of moving camera paper_content: This paper presents a method for active background subtraction in sequential images taken from a moving camera on the vehicle. The active subtraction is carried out by estimating camera motion using a gyrosensor. Applying the proposed method to contour matching, we present a fast and robust system for detecting moving pedestrians. --- paper_title: A multi-resolution approach for infrared vision-based pedestrian detection paper_content: This paper presents the improvements of a system for pedestrian detection in infrared images. The system is based on a multi-resolution localization of warm symmetrical objects with specific size and aspect ratio; the multi-resolution approach allows to detect both close and far pedestrians. A match against a set of 3D models encoding human shape's morphological and thermal characteristics is used as a validation process to remove false positives. No temporal correlation, nor motion cues are used for the processing that is based on the analysis of single frames only. --- paper_title: Robust Moving Object Detection at Distance in the Visible Spectrum and Beyond Using A Moving Camera paper_content: Automatic detection of moving objects at distance and in all weather conditions is a critical task in many visionbased safety applications such as video surveillance and vehicle forewarn collision warning. In such applications, prior knowledge about the object class (vehicle, pedestrian, tree, etc.) and imaging conditions (shadow, depth) is unavailable. What makes the task even more challenging is when the camera is non-stationary, e.g., mounted on a moving vehicle. The essential problem in this case lies in distinguishing between camera-induced motion and independent motion. This paper proposes a robust algorithm for automatic moving object detection at distance. The camera is mounted on a moving vehicle and operates in both day and night time. Through the utilization of the focus of expansion (FOE) and its associated residual map, the proposed method is able to detect and separate independently moving objects (IMOs) from the "moving" background caused by the camera motion. Experimentations on numerous realworld driving videos have shown the effectiveness of the proposed technique. Moving objects such as pedestrians and vehicles up to 40 meters away from the camera have been reliably detected at 10 frames per second on a 1.8GHz PC. 
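The focus-of-expansion (FOE) idea summarized in the entry just above can be made concrete with a short sketch. The snippet below is only an illustrative reconstruction, not the cited authors' implementation: it assumes sparse optical-flow vectors are already available (here they are synthetic), estimates the FOE by linear least squares from the constraint that pure ego-motion flow is radial about the FOE, and flags points whose flow deviates from that radial direction as candidate independently moving objects. The flow model, noise levels, and the 0.2 residual threshold are arbitrary assumptions made for the demo.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares estimate of the focus of expansion (FOE).

    Under pure camera translation, the flow (u, v) at pixel (x, y) is
    (ideally) collinear with the ray from the FOE to (x, y):
        u * (y - foe_y) - v * (x - foe_x) = 0
    which is linear in (foe_x, foe_y). A robust fit (e.g. RANSAC) would be
    preferable in practice; plain least squares keeps the sketch short.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])            # coefficients of (foe_x, foe_y)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                               # (foe_x, foe_y)

def motion_residuals(points, flows, foe):
    """|sin(angle)| between each flow vector and the radial direction from the FOE.

    Large residuals indicate flow that the estimated ego-motion cannot explain,
    i.e. candidate independently moving objects (IMOs).
    """
    d = points - foe                         # radial directions
    cross = flows[:, 0] * d[:, 1] - flows[:, 1] * d[:, 0]
    norm = np.linalg.norm(flows, axis=1) * np.linalg.norm(d, axis=1) + 1e-9
    return np.abs(cross) / norm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    foe_true = np.array([320.0, 240.0])
    pts = rng.uniform([0, 0], [640, 480], size=(200, 2))
    # Background flow: radial expansion away from the FOE, plus noise.
    flow = 0.03 * (pts - foe_true) + rng.normal(0, 0.02, size=(200, 2))
    # Inject an "independently moving object": 10 points with lateral flow.
    flow[:10] += np.array([5.0, 0.0])

    foe_est = estimate_foe(pts, flow)
    res = motion_residuals(pts, flow, foe_est)
    imo_mask = res > 0.2                     # threshold is scene-dependent
    print("estimated FOE:", foe_est.round(1))
    print("flagged points:", np.flatnonzero(imo_mask)[:15])
```

In a real pipeline the same residual test would be applied to feature points tracked between consecutive frames, with the threshold tuned per scene and per flow magnitude.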
--- paper_title: Pedestrian recognition in urban traffic using a vehicle based multilayer laserscanner paper_content: Vehicle-mounted laser scanners are able to observe the vehicles environment in order to detect, track and classify the surrounding objects and thus providing data for active safety systems. The latest development of IBEO combines several innovations. The receiver diodes are arranged in an array, which enables simultaneous measurements in 4 horizontal planes, e.g. to compensate pitching of the vehicle. In addition a multi target capability is integrated. This technique enables the detection of two distances with a single measurement, thus enhancing the robustness against rain. This paper introduces improved high speed object detection and high performance object tracking algorithms for real-time data processing. Additionally a classification of the road users is possible. A system architecture for detection and modelling of dynamic traffic scenes is introduced in order to provide a general idea of the different tasks necessary to reach the aim of a complete environmental model using a sensor for a wide range of applications. --- paper_title: Pedestrian detection using sparse Gabor filter and support vector machine paper_content: Vehicle warning and control systems are the key component of ITS. Pedestrian detection is an important research content of vehicle active safety. The central idea behind such pedestrian safety systems is to protect the pedestrian from injuries. In this paper, we address the problem of pedestrian represent and detection where the motion cue is not used. Inspired by the work proposed by Zehang Sun [2004], we proposed a pedestrian feature representation approach based on sparse Gabor filters (SGF) learning from examples. In the phase of pedestrian detection, we used support vector machine to detect the pedestrian. Promising results demonstrate the potential of the proposed framework. --- paper_title: Object tracking and classification using laserscanners-pedestrian recognition in urban environment paper_content: Current car safety systems are passive systems. Modern car assistance systems are based only on vehicle data. Future safety systems will also include object recognition in the near frontal area of the vehicle to detect dangerous situations. Therefore, special sensors and algorithms are needed. The paper discusses a system using a laserscanner and a video camera. --- paper_title: Pedestrian Detection for Intelligent Vehicles Based on Active Contour Models and Stereo Vision paper_content: Recently, the focus of safety systems for intelligent vehicles has been on researching and developing Advanced Driver Assistance Systems (ADAS). Most efforts have been concentrated at the driver, not taking into account the protection of the most vulnerable road users. This paper describes a pedestrian detection algorithm based on stereo vision. The use of visual information is a promising approach to cope with the different appearances of pedestrians and changes of illumination in cluttered environments. Active contour models are used to detect and track people from the images taken by an on-board vision system, performing contour extraction in sequential frames. --- paper_title: Stereo- and neural network-based pedestrian detection paper_content: In this paper, we present a real-time pedestrian detection system that uses a pair of moving cameras to detect both stationary and moving pedestrians in crowded environments. 
This is achieved through stereo-based segmentation and neural network-based recognition. Stereo-based segmentation allows us to extract objects from a changing background; neural network-based recognition allows us to identify pedestrians in various poses, shapes, sizes, clothing, occlusion status. The experiments on a large number of urban street scenes demonstrate the feasibility of the approach in terms of pedestrian detection rate and frame processing rate. --- paper_title: A Trainable System for Object Detection paper_content: This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. We also describe a real-time application of our person detection system as part of a driver assistance system. --- paper_title: A shape-independent method for pedestrian detection with far-infrared images paper_content: Nighttime driving is more dangerous than daytime driving-particularly for senior drivers. Three to four times as many driving-related deaths occur at night than in the daytime. To improve the safety of night driving, automatic pedestrian detection based on infrared images has drawn increased attention because pedestrians tend to stand out more against the background in infrared images than they do in visible light images. Nevertheless, pedestrian detection in infrared images is by no means trivial-many of the known difficulties carry over from visible light images, such as image variability occasioned by pedestrians being in different poses. Typically, several different pedestrian templates have to be used in order to deal with a range of poses. Furthermore, pedestrian detection is difficult because of poor infrared image quality (low resolution, low contrast, few distinguishable feature points, little texture information, etc.) and misleading signals. To address these problems, this paper introduces a shape-independent pedestrian-detection method. Our segmentation algorithm first estimates pedestrians' horizontal locations through projection-based horizontal segmentation and then determines pedestrians' vertical locations through brightness/bodyline-based vertical segmentation. Our classification method defines multidimensional histogram-, inertia-, and contrast-based classification features. The features are shape-independent, complementary to one another, and capture the statistical similarities of image patches containing pedestrians with different poses. Thus, our pedestrian-detection system needs only one pedestrian template-corresponding to a generic walking pose-and avoids brute-force searching for pedestrians throughout whole images, which typically involves brightness-similarity comparisons between candidate image patches and a multiplicity of pedestrian templates. 
Our pedestrian-detection system is neither based on tracking nor does it depend on camera calibration to determine the relationship between an object's height and its vertical image locations. Thus, it is less restricted in applicability. Even if much work is still needed to bridge the gap between present pedestrian-detection performance and the high reliability required for real-world applications, our pedestrian-detection system is straightforward and provides encouraging results in improving speed, reliability, and simplicity. --- paper_title: Pedestrian detection for driving assistance systems: single-frame classification and system level performance paper_content: We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented with a discussion about the remaining gap to meet a daytime normal weather condition production system. --- paper_title: Vision-based pedestrian detection: the PROTECTOR system paper_content: This paper presents the results of the first large-scale field tests on vision-based pedestrian protection from a moving vehicle. Our PROTECTOR system combines pedestrian detection, trajectory estimation, risk assessment and driver warning. The paper pursues a "system approach" related to the detection component. An optimization scheme models the system as a succession of individual modules and finds a good overall parameter setting by combining individual ROCs using a convex-hull technique. On the experimental side, we present a methodology for the validation of the pedestrian detection performance in an actual vehicle setting. We hope this test methodology to contribute towards the establishment of benchmark testing, enabling this application to mature. We validate the PROTECTOR system using the proposed methodology and present interesting quantitative results based on tens of thousands of images from hours of driving. Although results are promising, more research is needed before such systems can be placed at the hands of ordinary vehicle drivers. --- paper_title: Pedestrian detection with convolutional neural networks paper_content: This paper presents a novel pedestrian detection method based on the use of a convolutional neural network (CNN) classifier. Our method achieves high accuracy by automatically optimizing the feature representation to the detection task and regularizing the neural network. We evaluate the proposed method on a difficult database containing pedestrians in a city environment with no restrictions on pose, action, background and lighting conditions. The false positive rate (FPR) of the proposed CNN classifier is less than 1/5-th of the FPR of a support vector machine (SVM) classifier using Haar-wavelet features when the detection rate is 90%. The accuracy of the SVM classifier using the features learnt by the CNN is equivalent to the accuracy of the CNN, confirming the importance of automatically optimized features. The computational demand of the CNN classifier is, however, more than an order of magnitude lower than that of the SVM, irrespective of the type of features used. 
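Several of the surrounding entries (the PROTECTOR system, the CNN classifier above, and the HOG/SVM infrared detector cited below) share the same sliding-window classification pattern: compute a descriptor for each candidate window and score it with a trained classifier. The sketch below is a minimal, generic example of that pattern using HOG features and a linear SVM. It assumes scikit-image and scikit-learn are available; the window size, HOG parameters, and toy training data are illustrative choices, not the settings used in any of the cited papers.

```python
import numpy as np
from skimage.feature import hog          # scikit-image
from sklearn.svm import LinearSVC        # scikit-learn

WIN = (128, 64)                          # (rows, cols): a common pedestrian window size

def window_descriptor(patch):
    """HOG descriptor for one grayscale window with values in [0, 1]."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_classifier(pos_patches, neg_patches):
    """Fit a linear SVM on HOG descriptors of pedestrian / non-pedestrian windows."""
    X = np.array([window_descriptor(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    clf = LinearSVC(C=0.01)
    clf.fit(X, y)
    return clf

def detect(image, clf, stride=16, thresh=0.5):
    """Slide the window over a grayscale image; keep boxes whose SVM score exceeds thresh."""
    H, W = image.shape
    boxes = []
    for r in range(0, H - WIN[0] + 1, stride):
        for c in range(0, W - WIN[1] + 1, stride):
            desc = window_descriptor(image[r:r + WIN[0], c:c + WIN[1]])
            score = clf.decision_function([desc])[0]
            if score > thresh:
                boxes.append((r, c, WIN[0], WIN[1], float(score)))
    return boxes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy stand-in data: random-noise "negatives" and gradient-structured "positives".
    neg = [rng.random(WIN) for _ in range(40)]
    pos = [np.tile(np.linspace(0, 1, WIN[1]), (WIN[0], 1)) + 0.1 * rng.random(WIN)
           for _ in range(40)]
    clf = train_classifier(pos, neg)
    scene = rng.random((240, 320))
    scene[50:178, 100:164] = pos[0]      # paste one "pedestrian-like" patch
    print(detect(scene, clf, stride=16, thresh=0.0)[:5])
```

A real detector would train on labeled pedestrian and non-pedestrian crops, scan an image pyramid rather than a single scale, and apply non-maximum suppression to overlapping detections.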
--- paper_title: Pedestrian detection and tracking with night vision paper_content: This paper presents a method for pedestrian detection and tracking using a night vision video camera installed on the vehicle. To deal with the nonrigid nature of human appearance on the road, a two-step detection/tracking method is proposed. The detection phase is performed by a support vector machine (SVM) with size-normalized pedestrian candidates and the tracking phase is a combination of Kalman filter prediction and mean shift tracking. The detection phase is further strengthened by information obtained by a road detection module that provides key information for pedestrian validation. Experimental comparisons have been carried out on gray-scale SVM recognition vs. binary SVM recognition and entire body detection vs. upper body detection. --- paper_title: Detecting pedestrians using patterns of motion and appearance paper_content: This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. Past approaches have built detectors based on appearance information, but ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20×15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: i) development of a representation of image motion which is extremely efficient, and ii) implementation of a state of the art pedestrian detection system which operates on low resolution images under difficult conditions (such as rain and snow). --- paper_title: Pedestrian Detection using Infrared images and Histograms of Oriented Gradients paper_content: This paper presents a complete method for pedestrian detection applied to infrared images. First, we study an image descriptor based on histograms of oriented gradients (HOG), associated with a Support Vector Machine (SVM) classifier, and evaluate its efficiency. After having tuned the HOG descriptor and the classifier, we include this method in a complete system which deals with stereo infrared images. This approach gives good results for window classification, and a preliminary test applied on a video sequence proves that this approach is very promising. Over the last few years, the development of driving assistance systems has been very active in order to increase the safety of the vehicle and its environment. At the present time, the main objective in this domain is to provide the driver with information concerning the environment and any potential hazard. One useful piece of information is the detection and localization of a pedestrian in front of the vehicle. Detecting pedestrians is a very difficult problem that has essentially been addressed using vision sensors, image processing and pattern recognition techniques. In particular, detecting pedestrians in images is a complex challenge due to their appearance and pose variability. In the context of daylight vision, several approaches have been proposed based on different image processing techniques or machine learning. Recently, owing to the development of low-cost infrared cameras, night vision systems have gained more and more interest, thus increasing the need for automatic detection of pedestrians at night. This problem of detecting pedestrians from infrared images has been investigated by various research teams in recent years. The main methodology is based on extracting cues (symmetry, shape-independent --- paper_title: Pedestrian detection using sparse Gabor filter and support vector machine paper_content: Vehicle warning and control systems are the key component of ITS. Pedestrian detection is an important research content of vehicle active safety. The central idea behind such pedestrian safety systems is to protect the pedestrian from injuries. In this paper, we address the problem of pedestrian representation and detection where the motion cue is not used. Inspired by the work proposed by Zehang Sun [2004], we propose a pedestrian feature representation approach based on sparse Gabor filters (SGF) learned from examples. In the pedestrian detection phase, we use a support vector machine to detect the pedestrian. Promising results demonstrate the potential of the proposed framework. --- paper_title: Stereo- and neural network-based pedestrian detection paper_content: In this paper, we present a real-time pedestrian detection system that uses a pair of moving cameras to detect both stationary and moving pedestrians in crowded environments. This is achieved through stereo-based segmentation and neural network-based recognition. Stereo-based segmentation allows us to extract objects from a changing background; neural network-based recognition allows us to identify pedestrians in various poses, shapes, sizes, clothing, and occlusion states. The experiments on a large number of urban street scenes demonstrate the feasibility of the approach in terms of pedestrian detection rate and frame processing rate. --- paper_title: A Trainable System for Object Detection paper_content: This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. We also describe a real-time application of our person detection system as part of a driver assistance system. --- paper_title: Pedestrian Detection Using SVM and Multi-Feature Combination paper_content: This paper describes a comprehensive combination of feature extraction methods for vision-based pedestrian detection in the framework of intelligent transportation systems. The basic components of pedestrians are first located in the image and then combined with an SVM-based classifier. This poses the problem of pedestrian detection in real, cluttered road images. Candidate pedestrians are located using a subtractive clustering attention mechanism based on stereo vision.
A by-components learning approach is proposed in order to better deal with pedestrians variability, illumination conditions, partial occlusions, and rotations. Extensive comparisons have been carried out using different feature extraction methods, as a key to image understanding in real traffic conditions. A database containing thousands of pedestrian samples extracted from real traffic images has been created for learning purposes, either at daytime and nighttime. The results achieved up to date show interesting conclusions that suggest a combination of feature extraction methods as an essential clue for enhanced detection performance --- paper_title: Vision-based pedestrian detection: the PROTECTOR system paper_content: This paper presents the results of the first large-scale field tests on vision-based pedestrian protection from a moving vehicle. Our PROTECTOR system combines pedestrian detection, trajectory estimation, risk assessment and driver warning. The paper pursues a "system approach" related to the detection component. An optimization scheme models the system as a succession of individual modules and finds a good overall parameter setting by combining individual ROCs using a convex-hull technique. On the experimental side, we present a methodology for the validation of the pedestrian detection performance in an actual vehicle setting. We hope this test methodology to contribute towards the establishment of benchmark testing, enabling this application to mature. We validate the PROTECTOR system using the proposed methodology and present interesting quantitative results based on tens of thousands of images from hours of driving. Although results are promising, more research is needed before such systems can be placed at the hands of ordinary vehicle drivers. --- paper_title: A Cascaded Classifier for Pedestrian Detection paper_content: In a pedestrian detection system, the most critical requirement is to quickly and reliably determine whether a candidate region contains a pedestrian. It is essential to design an effective classifier for pedestrian detection. Until now, most of the existing pedestrian detection systems only adopt a single and non-cascaded classifier. However, since the scene is complex and the candidate regions are too many (in our experiments, there are more than 40,000 candidate regions); it is difficult to make the recognition both accurate and fast with such a non-cascaded classifier. In this paper, we present a cascaded classifier for pedestrian detection. The cascaded classifier combines a statistical learning classifier and a support vector machine classifier. The statistical learning classifier is used to select preliminary candidates, and then the support vector machine classifier is applied to do a further acknowledgement. This kind of cascaded architecture can take both advantages of the two classifiers, so the detecting rate and detecting speed can be balanced. Experimental results illustrate that the cascaded classifier is effective for a real-time detection --- paper_title: Pedestrian detection with convolutional neural networks paper_content: This paper presents a novel pedestrian detection method based on the use of a convolutional neural network (CNN) classifier. Our method achieves high accuracy by automatically optimizing the feature representation to the detection task and regularizing the neural network. 
We evaluate the proposed method on a difficult database containing pedestrians in a city environment with no restrictions on pose, action, background and lighting conditions. The false positive rate (FPR) of the proposed CNN classifier is less than 1/5-th of the FPR of a support vector machine (SVM) classifier using Haar-wavelet features when the detection rate is 90%. The accuracy of the SVM classifier using the features learnt by the CNN is equivalent to the accuracy of the CNN, confirming the importance of automatically optimized features. The computational demand of the CNN classifier is, however, more than an order of magnitude lower than that of the SVM, irrespective of the type of features used. --- paper_title: An Experimental Study on Pedestrian Classification paper_content: Detecting people in images is key for several important application domains in computer vision. This paper presents an in-depth experimental study on pedestrian classification; multiple feature-classifier combinations are examined with respect to their ROC performance and efficiency. We investigate global versus local and adaptive versus nonadaptive features, as exemplified by PCA coefficients, Haar wavelets, and local receptive fields (LRFs). In terms of classifiers, we consider the popular support vector machines (SVMs), feedforward neural networks, and k-nearest neighbor classifier. Experiments are performed on a large data set consisting of 4,000 pedestrian and more than 25,000 nonpedestrian (labeled) images captured in outdoor urban environments. Statistically meaningful results are obtained by analyzing performance variances caused by varying training and test sets. Furthermore, we investigate how classification performance and training sample size are correlated. Sample size is adjusted by increasing the number of manually labeled training data or by employing automatic bootstrapping or cascade techniques. Our experiments show that the novel combination of SVMs with LRF features performs best. A boosted cascade of Haar wavelets can, however, reach quite competitive results, at a fraction of computational cost. The data set used in this paper is made public, establishing a benchmark for this important problem --- paper_title: Pedestrian detection and tracking with night vision paper_content: This paper presents a method for pedestrian detection and tracking using a night vision video camera installed on the vehicle. To deal with the nonrigid nature of human appearance on the road, a two-step detection/tracking method is proposed. The detection phase is performed by a support vector machine (SVM) with size-normalized pedestrian candidates and the tracking phase is a combination of Kalman filter prediction and mean shift tracking. The detection phase is further strengthened by information obtained by a road detection module that provides key information for pedestrian validation. Experimental comparisons have been carried out on gray-scale SVM recognition vs. binary SVM recognition and entire body detection vs. upper body detection. --- paper_title: Detecting pedestrians using patterns of motion and appearance paper_content: This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. 
Past approaches have built detectors based on appearance information, but ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20×15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: i) development of a representation of image motion which is extremely efficient, and ii) implementation of a state of the art pedestrian detection system which operates on low resolution images under difficult conditions (such as rain and snow). --- paper_title: Pedestrian Detection using Infrared images and Histograms of Oriented Gradients paper_content: This paper presents a complete method for pedestrian detection applied to infrared images. First, we study an image descriptor based on histograms of oriented gradients (HOG), associated with a Support Vector Machine (SVM) classifier, and evaluate its efficiency. After having tuned the HOG descriptor and the classifier, we include this method in a complete system which deals with stereo infrared images. This approach gives good results for window classification, and a preliminary test applied on a video sequence proves that this approach is very promising. Over the last few years, the development of driving assistance systems has been very active in order to increase the safety of the vehicle and its environment. At the present time, the main objective in this domain is to provide the driver with information concerning the environment and any potential hazard. One useful piece of information is the detection and localization of a pedestrian in front of the vehicle. Detecting pedestrians is a very difficult problem that has essentially been addressed using vision sensors, image processing and pattern recognition techniques. In particular, detecting pedestrians in images is a complex challenge due to their appearance and pose variability. In the context of daylight vision, several approaches have been proposed based on different image processing techniques or machine learning. Recently, owing to the development of low-cost infrared cameras, night vision systems have gained more and more interest, thus increasing the need for automatic detection of pedestrians at night. This problem of detecting pedestrians from infrared images has been investigated by various research teams in recent years. The main methodology is based on extracting cues (symmetry, shape-independent --- paper_title: Pedestrian Detection with Radar and Computer Vision paper_content: This paper presents a method for detecting people on-board a moving vehicle. The perception of the environment is performed through the fusion of an automotive radar sensor and a monocular vision system. The fusion uses a two-step approach for efficient object detection. In the first step, a target-list is generated from the radar sensor. The items in the list are hypotheses for the presence of people. In a second step, the hypotheses are verified by the vision system. This method is considerably faster than a sole image processing solution. --- paper_title: Real-time Pedestrian Detection Using LIDAR and Convolutional Neural Networks paper_content: This paper presents a novel real-time pedestrian detection system utilizing a LIDAR-based object detector and convolutional neural network (CNN)-based image classifier.
Our method achieves over 10 frames/second processing speed by constraining the search space using the range information from the LIDAR. The image region candidates detected by the LIDAR are confirmed for the presence of pedestrians by a convolutional neural network classifier. Our CNN classifier achieves high accuracy at a low computational cost thanks to its ability to automatically learn a small number of highly discriminating features. The focus of this paper is the evaluation of the effect of region of interest (ROI) detection on system accuracy and processing speed. The evaluation results indicate that the use of the LIDAR-based ROI detector can reduce the number of false positives by a factor of 2 and reduce the processing time by a factor of 4. The single frame detection accuracy of the system is above 90% when there is 1 false positive per second. --- paper_title: Image registration methods: a survey paper_content: This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. The registration geometrically align two images (the reference and sensed images). The reviewed approaches are classified according to their nature (areabased and feature-based) and according to four basic steps of image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of image registration and outlook for the future research are discussed too. The major goal of the paper is to provide a comprehensive reference source for the researchers involved in image registration, regardless of particular application areas. q 2003 Elsevier B.V. All rights reserved. --- paper_title: Multi sensor based tracking of pedestrians: a survey of suitable movement models paper_content: This article presents a multi sensor approach for driver assistance systems: the detection and tracking of pedestrians in a road environment. A multi sensor system consisting of a far infrared camera and a laser scanning device is used for the detection and precise localization of pedestrians. Kalman filter based data fusion handles the combination of the sensor information of the infrared camera and of the laser scanner. Arranging a set of Kalman filters in parallel, a multi sensor/multi target tracking system was created. The usage of suitable movement models has a great influence on the performance of the tracking system. Several types of models are discussed focussing on the typical behavior of pedestrians in road environments. The multi sensor/multi target tracking system is installed on a test vehicle to obtain practical results which is discussed in this article too. --- paper_title: Low-level Pedestrian Detection by means of Visible and Far Infra-red Tetra-vision paper_content: This article presents a tetra-vision (4 cameras) system for the detection of pedestrians by the means of the simultaneous use of one far infra-red and one visible cameras stereo pairs. The main idea is to exploit both the advantages of far infra-red and visible cameras trying at the same time to benefit from the use of each system. Initially, the two stereo flows are independently processed, then the results are fused together. 
The final result of this low-level processing is a list of obstacles that have a shape and a size compatible with the presence of a potential pedestrian. In addition, the system is able to remove the background from the detected obstacles to simplify a possible further high level processing. The developed system has been installed on an experimental vehicle and preliminarily tested in different situations. --- paper_title: A Comparison of Color and Infrared Stereo Approaches to Pedestrian Detection paper_content: This paper presents an analysis of color and infrared stereo approaches to pedestrian detection. We design a four camera experimental testbed consisting of two color and two infrared cameras that allows for synchronous capture and direct frame-by-frame comparison of pedestrian detection approaches. We incorporate this four camera system in a test vehicle and conduct comparative experiments of stereo-based approaches to obstacle detection using color and infrared imagery. A detailed analysis of these experiments shows the robustness of both color and infrared stereo imagery to generate the dense stereo maps necessary for robust object detection and motivates investigation of color and infrared features that can be used to further classify detected obstacles into pedestrian regions. The complementary nature of color and infrared features gives rise to a discussion of a feature fusion techniques, including a cross-spectral stereo solution to pedestrian detection. --- paper_title: On Color-, Infrared-, and Multimodal-Stereo Approaches to Pedestrian Detection paper_content: This paper presents an analysis of color-, infrared-, and multimodal-stereo approaches to pedestrian detection. We design a four-camera experimental testbed consisting of two color and two infrared cameras for capturing and analyzing various configuration permutations for pedestrian detection. We incorporate this four-camera system in a test vehicle and conduct comparative experiments of stereo-based approaches to obstacle detection using unimodal color and infrared imageries. A detailed analysis of the color and infrared features used to classify detected obstacles into pedestrian regions is used to motivate the development of a multimodal solution to pedestrian detection. We propose a multimodal trifocal framework consisting of a stereo pair of color cameras coupled with an infrared camera. We use this framework to combine multimodal-image features for pedestrian detection and to demonstrate that the detection performance is significantly higher when color, disparity, and infrared features are used together. This result motivates experiments and discussion toward achieving multimodal-feature combination using a single color and a single infrared camera arranged in a cross-spectral stereo pair. We demonstrate an approach to registering multiple objects across modalities and provide an experimental analysis that highlights issues and challenges of pursuing the cross-spectral approach to multimodal and multiperspective pedestrian analysis. --- paper_title: Development of the side component of the transit integrated collision warning system paper_content: This paper describes the development activities leading up to field testing of the transit integrated collision warning system, with special attention to the side component. 
Two buses, one each in California and Pennsylvania, have been outfitted with sensors, cameras, computers, and driver-vehicle interfaces in order to detect threats and generate appropriate warnings. The overall project goals, integrated concept, side component features, and future plans are documented here. --- paper_title: Multimodal Stereo Image Registration for Pedestrian Detection paper_content: This paper presents an approach for the registration of multimodal imagery for pedestrian detection when the significant depth differences of objects in the scene preclude a global alignment assumption. Using maximization-of-mutual-information matching techniques and sliding correspondence windows over calibrated image pairs, we demonstrate successful registration of color and thermal data. We develop a robust method using disparity voting for determining the registration of each object in the scene and provide a statistically based measure for evaluating the match confidence. Testing shows successful registration in complex scenes with multiple people at different depths and levels of occlusion --- paper_title: Vision-based pedestrian detection: the PROTECTOR system paper_content: This paper presents the results of the first large-scale field tests on vision-based pedestrian protection from a moving vehicle. Our PROTECTOR system combines pedestrian detection, trajectory estimation, risk assessment and driver warning. The paper pursues a "system approach" related to the detection component. An optimization scheme models the system as a succession of individual modules and finds a good overall parameter setting by combining individual ROCs using a convex-hull technique. On the experimental side, we present a methodology for the validation of the pedestrian detection performance in an actual vehicle setting. We hope this test methodology to contribute towards the establishment of benchmark testing, enabling this application to mature. We validate the PROTECTOR system using the proposed methodology and present interesting quantitative results based on tens of thousands of images from hours of driving. Although results are promising, more research is needed before such systems can be placed at the hands of ordinary vehicle drivers. --- paper_title: Multi-cue Pedestrian Detection and Tracking from a Moving Vehicle paper_content: This paper presents a multi-cue vision system for the real-time detection and tracking of pedestrians from a moving vehicle. The detection component involves a cascade of modules, each utilizing complementary visual criteria to successively narrow down the image search space, balancing robustness and efficiency considerations. Novel is the tight integration of the consecutive modules: (sparse) stereo-based ROI generation, shape-based detection, texture-based classification and (dense) stereo-based verification. For example, shape-based detection activates a weighted combination of texture-based classifiers, each attuned to a particular body pose. ::: ::: Performance of individual modules and their interaction is analyzed by means of Receiver Operator Characteristics (ROCs). A sequential optimization technique allows the successive combination of individual ROCs, providing optimized system parameter settings in a systematic fashion, avoiding ad-hoc parameter tuning. Application-dependent processing constraints can be incorporated in the optimization procedure. 
Results from extensive field tests in difficult urban traffic conditions suggest system performance is at the leading edge. --- paper_title: Real-time stereo vision for urban traffic scene understanding paper_content: This paper presents a precise correlation-based stereo vision approach that allows real-time interpretation of traffic scenes and autonomous Stop&Go on a standard PC. The high speed is achieved by means of a multiresolution analysis. It delivers the stereo disparities with sub-pixel accuracy and allows precise distance estimates. Traffic applications using this method are described. --- paper_title: SAVE-U: First Experiences with a Pre-Crash System for Enhancing Pedestrian Safety paper_content: This paper presents first results of the project SAVE-U (Sensors and system Architecture for VulnerablE road Users protection). It has been funded by the European Commission (EC) in the period between 2002 and 2005 (5th IST framework program). The goal of this project is to enhance the safety of pedestrians in hazardous traffic situations. To be able to do this, special active countermeasures (actuators) can be integrated into the vehicles. This paper deals with a sensors system which is able to deploy such actuators in critical situations in a suitable way for the driver. Radar sensors (SiemensVDO/ Germany), color cameras and infrared cameras (CEA/ France) mounted into demonstrator vehicles observe the scene in the frontal area. The information of the individual sensors is fused and applied to a deployment algorithm for a braking system. In case of a dangerous situation the vehicle brakes automatically to decrease the vehicle speed and to increase the safety of the pedestrian. Two car manufacturers (Volkswagen and DaimlerChrysler, both Germany) equipped their cars with this sensor platform for recognizing pedestrians and developed algorithms to provide their individual protection strategy. Both vehicles are tested by MIRA/ UK. This paper presents the first results of tests with the SAVE-U system in the Volkswagen vehicle. --- paper_title: Radar sensors and sensor platform used for pedestrian protection in the EC-funded project SAVE-U paper_content: The automotive industry and sensor suppliers have responded to the European Commission's request to reduce the road fatalities by 50% until 2010 and they are developing advanced systems for road safety. Short range radar (SRR) sensors in the 24 GHz domain allow coverage of the surroundings of the vehicle, complementing the existing 77 GHz radars used for adaptive cruise control (ACC) systems since 1999. This paper deals with a special variant of pre-crash systems, namely: "pre-crash for vulnerable road users protection". The target is the protection and collision mitigation of pedestrians and bicyclists versus a vehicle. To achieve trigger information for automatic protection systems (like automatic braking or other reversible systems) as well as warning information for drivers, a high performance sensor platform is necessary. This paper presents approaches from the EC-funded project SAVE-U (sensors and system architecture for vulnerable road users protection) of the fifth framework program of the European Commission. The sensor platform consists of short range radar sensors, cameras in the visible and infrared domain. The focus is located on a high- and low-level data fusion architecture to fulfill the strong requirements.
The intension of this paper is mainly to describe the 24 GHz short range radar sensor development for the detection capability of pedestrians in ranges up to 30 m and the sensor fusion technique. --- paper_title: An Experimental Study on Pedestrian Classification paper_content: Detecting people in images is key for several important application domains in computer vision. This paper presents an in-depth experimental study on pedestrian classification; multiple feature-classifier combinations are examined with respect to their ROC performance and efficiency. We investigate global versus local and adaptive versus nonadaptive features, as exemplified by PCA coefficients, Haar wavelets, and local receptive fields (LRFs). In terms of classifiers, we consider the popular support vector machines (SVMs), feedforward neural networks, and k-nearest neighbor classifier. Experiments are performed on a large data set consisting of 4,000 pedestrian and more than 25,000 nonpedestrian (labeled) images captured in outdoor urban environments. Statistically meaningful results are obtained by analyzing performance variances caused by varying training and test sets. Furthermore, we investigate how classification performance and training sample size are correlated. Sample size is adjusted by increasing the number of manually labeled training data or by employing automatic bootstrapping or cascade techniques. Our experiments show that the novel combination of SVMs with LRF features performs best. A boosted cascade of Haar wavelets can, however, reach quite competitive results, at a fraction of computational cost. The data set used in this paper is made public, establishing a benchmark for this important problem --- paper_title: Framework for real-time behavior interpretation from traffic video paper_content: Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information as compared to other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It also is capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian-vehicle interaction and vehicle-checkpost interactions. --- paper_title: Detecting Moving Shadows: Algorithms and Evaluation paper_content: Moving shadows need careful consideration in the development of robust dynamic scene analysis systems. 
Moving shadow detection is critical for accurate object detection in video streams since shadow points are often misclassified as object points, causing errors in segmentation and tracking. Many algorithms have been proposed in the literature that deal with shadows. However, a comparative evaluation of the existing approaches is still lacking. In this paper, we present a comprehensive survey of moving shadow detection approaches. We organize contributions reported in the literature in four classes two of them are statistical and two are deterministic. We also present a comparative empirical evaluation of representative algorithms selected from these four classes. Novel quantitative (detection and discrimination rate) and qualitative metrics (scene and object independence, flexibility to shadow situations, and robustness to noise) are proposed to evaluate these classes of algorithms on a benchmark suite of indoor and outdoor video sequences. These video sequences and associated "ground-truth" data are made available at http://cvrr.ucsd.edu/aton/shadow to allow for others in the community to experiment with new algorithms and metrics. --- paper_title: Moving shadow and object detection in traffic scenes paper_content: We present an algorithm for segmentation of traffic scenes that distinguishes moving objects from their moving cast shadows. A fading memory estimator calculates mean and variance of all three color components for each background pixel. Given the statistics for a background pixel, simple rules for calculating its statistics when covered by a shadow are used. Then, MAP classification decisions are made for each pixel. In addition to the color features, we examine the use of neighborhood information to produce smoother classification. We also propose the use of temporal information by modifying class a priori probabilities based on predictions from the previous frame. --- paper_title: Distributed video networks for incident detection and management paper_content: We describe a novel architecture for developing distributed video networks for incident detection and management. The networks utilize both rectilinear and omnidirectional cameras. It is recognized that robust and reliable segmentation of automobiles and shadows is critical in our application. We describe new segmentation procedure and present experimental results to support the basic feasibility and utility of the algorithms. --- paper_title: Distributed interactive video arrays for event capture and enhanced situational awareness paper_content: Video surveillance activity has dramatically increased over the past few years. Earlier work dealt mostly with single stationary cameras, but the recent trend is toward active multicamera systems. Such systems offer several advantages over single camera systems - multiple overlapping views for obtaining 3D information and handling occlusions, multiple nonoverlapping cameras for covering wide areas, and active pan-tilt-zoom (PTZ) cameras for observing object details. To address these issues, we have developed a multicamera video surveillance approach, called distributed interactive video array. The DIVA framework provides multiple levels of semantically meaningful information ("situational" awareness) to match the needs of multiple remote observers. We have designed DIVA-based systems that can track and identify vehicles and people, monitor perimeters and bridges, and analyze activities. 
A new video surveillance approach employing a large-scale cluster of video sensors demonstrates the promise of multicamera arrays for homeland security. --- paper_title: Computer vision algorithms for intersection monitoring paper_content: The goal of this project is to monitor activities at traffic intersections for detecting/predicting situations that may lead to accidents. Some of the key elements for robust intersection monitoring are camera calibration, motion tracking, incident detection, etc. In this paper, we consider the motion-tracking problem. A multilevel tracking approach using Kalman filter is presented for tracking vehicles and pedestrians at intersections. The approach combines low-level image-based blob tracking with high-level Kalman filtering for position and shape estimation. An intermediate occlusion-reasoning module serves the purpose of detecting occlusions and filtering relevant measurements. Motion segmentation is performed by using a mixture of Gaussian models which helps us achieve fairly reliable tracking in a variety of complex outdoor scenes. A visualization module is also presented. This module is very useful for visualizing the results of the tracker and serves as a platform for the incident detection module. --- paper_title: Analysis and query of person-vehicle interactions in homography domain paper_content: This paper presents an efficient and robust paradigm for analysis and query of moving-object interactions in planar homography domain.People and vehicle activities/interactions are analyzed for situational awareness by using a multi-perspective approach.Planar homography constraints are exploited to extract view-invariant object features including footage area and velocity of objects on the ground plane. Spatio-temporal relationships between person-and vehicle-tracks are represented by a semantic event grammar. Semantic-level information of the situation is achieved with the anticipation of possible directions of near-future tracks using piecewise velocity history. An efficient query paradigm is proposed by histogram-based approximation of probability density functions of objects and by quad-tree indexing. Experimental data show promising results.Our framework can be applied to applications for enhanced situational awareness such as disaster prevention,human interactions in structured environments,and crowd movement analysis in wide-view areas. --- paper_title: Database-centered architecture for traffic incident detection, management, and analysis paper_content: This paper presents various issues related to the development of an integrated software architecture for a traffic incident monitoring, mitigation, and analysis system. The novel concept of using a set of distributed databases, having many different functions and types, is proposed for distributed coordination of sensing, control and analysis algorithms. This coordination paradigm using databases makes the whole architecture robust by providing means to efficiently manage current and past states of the monitored environment and the monitoring system. Use of a semantic event/activity database in the integrated architecture also provides high level abstractions through its query language to model traffic incidents and traffic behaviors. We also present experimental results of the use of a concrete database-centered architecture and algorithms in identifying important traffic flow events (such as tail-gating, exit from a ramp, etc.). 
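The intersection-monitoring entry above combines mixture-of-Gaussians segmentation with Kalman filtering for position and shape estimation. As a hedged illustration of the tracking half of that kind of pipeline, the sketch below implements a generic constant-velocity Kalman filter for a single blob centroid; the process and measurement noise values and the synthetic trajectory are assumptions, and data association, occlusion reasoning, and shape estimation from the cited works are deliberately omitted.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for a tracked blob centroid.

    State: [x, y, vx, vy]; measurement: [x, y] (e.g., a blob centroid from
    background subtraction). Noise levels here are illustrative only.
    """

    def __init__(self, xy0, dt=1.0, q=1.0, r=4.0):
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])
        self.P = np.diag([r, r, 25.0, 25.0])
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        # Propagate the state and covariance one time step forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                    # predicted position

    def update(self, z):
        # Correct the prediction with a new centroid measurement.
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                    # filtered position

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = np.stack([np.arange(30) * 3.0 + 10, np.arange(30) * 1.5 + 40], axis=1)
    meas = truth + rng.normal(0, 2.0, truth.shape)   # noisy centroid detections
    kf = ConstantVelocityKalman(meas[0])
    for z in meas[1:]:
        kf.predict()
        est = kf.update(z)
    print("final estimate:", est.round(1), "truth:", truth[-1])
```

One filter instance per tracked object, combined with a gating and association step, gives the kind of multilevel tracker described in that entry; the night-vision entry earlier in this list pairs the same prediction step with mean-shift refinement instead of a raw centroid measurement.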
--- paper_title: Pedestrian detection in crowded scenes paper_content: In this paper, we address the problem of detecting pedestrians in crowded real-world scenes with severe overlaps. Our basic premise is that this problem is too difficult for any type of model or feature alone. Instead, we present an algorithm that integrates evidence in multiple iterations and from different sources. The core part of our method is the combination of local and global cues via probabilistic top-down segmentation. Altogether, this approach allows examining and comparing object hypotheses with high precision down to the pixel level. Qualitative and quantitative results on a large data set confirm that our method is able to reliably detect pedestrians in crowded scenes, even when they overlap and partially occlude each other. In addition, the flexible nature of our approach allows it to operate on very small training sets. --- paper_title: Framework for real-time behavior interpretation from traffic video paper_content: Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information as compared to other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It also is capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian-vehicle interaction and vehicle-checkpost interactions. --- paper_title: Hardware-friendly pedestrian detection and impact prediction paper_content: We present a system for pedestrian detection and impact prediction, from a frontal camera situated on a moving vehicle. The system combines together the output of several algorithms to form a reliable detection and positioning of pedestrians. One of the important contributions of this paper is a highly-efficient algorithm for classification of pedestrian images using a learned set of features, each feature based on a 5/spl times/5 pixels shape. The learning of the features is done using AdaBoost and genetic-like algorithms. The described application was developed as a part of the CAMELLIA project, thus all the algorithms used in this application are designed to use a special set of low level image processing operations provided by the smart imaging core developed in the project. 
Fusion of the various algorithms results and tracking of pedestrians is done using particle filtering, providing a good tool to predict the future movement of pedestrians, in order to estimate impact probability. --- paper_title: Capturing interactions in pedestrian walking behavior in a discrete choice framework paper_content: In this paper we propose a general framework for pedestrian walking behavior, based on discrete choice modeling. Two main behaviors are identified: unconstrained and constrained. The constrained patterns are further classified into attractive interactions and repulsive interactions. The formers are captured by a leader-follower model while the latters through a collision avoidance model. The spatial correlation between the alternatives is taken into account defining a cross nested logit model. Quantitative analysis is performed by maximum likelihood estimation on a real dataset of pedestrian trajectories, manually tracked from video sequences. --- paper_title: A Markovian model of pedestrian behavior paper_content: In this paper a statistical model of pedestrian behavior is proposed. This model is intended to be an important part of a study on the feasibility of car-to-pedestrian accident prediction. The proposed approach is phenomenological as it is based on a four discrete states Markov chain. These four states: "standing still", "walking", "jogging", and "running" are related to the pedestrian pace. First, given the former pedestrian state the current one is calculated. Then, the pedestrian speed vector is split up into the norm and the angle. Information on the statistical distribution of these quantities is available. Their values follow from the present pedestrian discrete state. The proposed model has been compared with related work. It has been used to generate statistically significant pedestrian trajectories and to predict car-to-pedestrian impacts. Simulation results are given based on an evaluation database of car and pedestrian accidents. --- paper_title: Modeling of Pedestrian Behavior and Its Applications to Spatial Evaluation paper_content: A computer simulation of pedestrian flow is an effective method for examining relationships between pedestrian behavior and pedestrian-usable space within a modeled space. In this paper, a model describing pedestrian behavior is proposed by making use of the concept of mental stress. Unknown parameters of the model are estimated using observed data from real pedestrian behavior. The resultant model can describe differences of pedestrian behavior arising from differing walkers' characteristics. Through the results of simulations of the proposed model, we examine comfort and efficiency of pedestrian space. --- paper_title: A discrete choice pedestrian behavior model for pedestrian detection in visual tracking systems paper_content: Different approaches to the moving object detection in multi-object tracking systems use dynamic-based models. In this paper we propose the use of a discrete choice model (DCM) of pedestrian behavior and its application to the problem of the target detection in the particular case of pedestrian tracking. We analyze real scenarios assuming to have a calibrated monocular camera, allowing a unique correspondence between the image plan and the top view reconstruction of the scene.
In our approach we first initialize a large number of hypothetical moving points on the top view plan and we track their corresponding projections on the image plan by means of a simple correlation method. The resulting displacement vectors are then re-projected on the top view and pre-filtered using distance and angular thresholds. The pre-filtered trajectories are the inputs for the discrete choice behavioral filter used to decide whether the pre-filtered targets are real pedestrians or not. --- paper_title: Spatial and Probabilistic Modelling of Pedestrian Behaviour paper_content: This paper investigates the combination of spatial and probabilistic models for reasoning about pedestrian behaviour in visual surveillance systems. Models are learnt by a multi-step unsupervised method and they are used for trajectory labelling and atypical behaviour detection. --- paper_title: Behavioral Priors for Detection and Tracking of Pedestrians in Video Sequences paper_content: In this paper we address the problem of detection and tracking of pedestrians in complex scenarios. The inclusion of prior knowledge is more and more crucial in scene analysis to guarantee flexibility and robustness, necessary to have reliability in complex scenes. We aim to combine image processing methods with behavioral models of pedestrian dynamics, calibrated on real data. We introduce Discrete Choice Models (DCM) for pedestrian behavior and we discuss their integration in a detection and tracking context. The obtained results show how it is possible to combine both methodologies to improve the performances of such systems in complex sequences. --- paper_title: Direction Estimation of Pedestrian from Images paper_content: Abstract : The capability of estimating the walking direction of people would be useful in many applications such as those involving autonomous cars and robots. We introduce an approach for estimating the walking direction of people from images, based on learning the correct classification of a still image by using SVMs. We find that the performance of the system can be improved by classifying each image of a walking sequence and combining the outputs of the classifier. Experiments were performed to evaluate our system and estimate the trade-off between number of images in walking sequences and performance. --- paper_title: Avoiding cars and pedestrians using velocity obstacles and motion prediction paper_content: Vehicle navigation in dynamic environments is an important challenge, especially when the motion of the objects populating the environment is unknown. Traditional motion planning approaches are too slow to be applied in real-time to this domain, hence, new techniques are needed. Recently, iterative planning has emerged as a promising approach. Nevertheless, existing iterative methods do not provide a way to estimating the future behaviour of moving obstacles and to use the resulting estimates in trajectory computation. This paper presents an iterative planning approach that addresses these two issues. It consists of two complementary methods: 1) A motion prediction method which learns typical behaviours of objects in a given environment. 2) An iterative motion planning technique based on the concept of Velocity Obstacles. --- paper_title: A Markovian model of pedestrian behavior paper_content: In this paper a statistical model of pedestrian behavior is proposed. This model is intended to be an important part of a study on the feasibility of car-to-pedestrian accident prediction. 
The proposed approach is phenomenological as it is based on a four discrete states Markov chain. These four states: "standing still", "walking", "jogging", and "running" are related to the pedestrian pace. First, given the former pedestrian state the current one is calculated. Then, the pedestrian speed vector is split up into the norm and the angle. Information on the statistical distribution of these quantities is available. Their values follow from the present pedestrian discrete state. The proposed model has been compared with related work. It has been used to generate statistically significant pedestrian trajectories and to predict car-to-pedestrian impacts. Simulation results are given based on an evaluation database of car and pedestrian accidents. --- paper_title: Looking-In and Looking-Out of a Vehicle: Computer-Vision-Based Enhanced Vehicle Safety paper_content: This paper presents investigations into the role of computer-vision technology in developing safer automobiles. We consider vision systems, which cannot only look out of the vehicle to detect and track roads and avoid hitting obstacles or pedestrians but simultaneously look inside the vehicle to monitor the attentiveness of the driver and even predict her intentions. In this paper, a systems-oriented framework for developing computer-vision technology for safer automobiles is presented. We will consider three main components of the system: environment, vehicle, and driver. We will discuss various issues and ideas for developing models for these main components as well as activities associated with the complex task of safe driving. This paper includes a discussion of novel sensory systems and algorithms for capturing not only the dynamic surround information of the vehicle but also the state, intent, and activity patterns of drivers. ---
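As an illustrative companion to the Markovian pedestrian-behavior entries above, the short sketch below samples a pace sequence from a four-state Markov chain over "standing still", "walking", "jogging", and "running"; the transition matrix, speed statistics, and heading noise are hypothetical placeholders, since the abstracts do not report the estimated values.

    import numpy as np

    rng = np.random.default_rng(1)

    states = ["standing still", "walking", "jogging", "running"]
    # Hypothetical transition matrix (rows sum to 1); not the values estimated in the cited work.
    T = np.array([[0.80, 0.18, 0.015, 0.005],
                  [0.10, 0.80, 0.08, 0.02],
                  [0.05, 0.25, 0.60, 0.10],
                  [0.02, 0.08, 0.30, 0.60]])
    # Hypothetical mean speed (m/s) and standard deviation per state for the speed-vector norm.
    speed_stats = {0: (0.0, 0.0), 1: (1.4, 0.3), 2: (2.5, 0.4), 3: (4.0, 0.6)}

    def sample_trajectory(n_steps=20, start_state=1):
        """Sample a sequence of (pace state, speed norm, heading angle) triples."""
        s = start_state
        heading = rng.uniform(0, 2 * np.pi)
        out = []
        for _ in range(n_steps):
            s = rng.choice(4, p=T[s])                   # next discrete pace state
            mu, sigma = speed_stats[s]
            speed = max(0.0, rng.normal(mu, sigma))     # speed-vector norm
            heading += rng.normal(0.0, 0.2)             # small random change of direction
            out.append((states[s], speed, heading))
        return out

    for step in sample_trajectory(5):
        print(step)

Sampling many such trajectories is the kind of statistically significant trajectory generation the Markovian entries describe for predicting car-to-pedestrian impacts.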
Title: Pedestrian Protection Systems: Issues, Survey, and Challenges Section 1: Introduction Description 1: Provide background information, motivation for pedestrian protection systems, and outline the structure of the paper. Section 2: Approaches for Improving Pedestrian Safety Description 2: Discuss various methods to improve pedestrian safety, including infrastructure design, passive safety systems, and active safety systems. Section 3: Infrastructure Design Enhancements Description 3: Explain design enhancements in infrastructure to reduce pedestrian-related accidents, including speed control, pedestrian-vehicle separation, and visibility measures. Section 4: Passive Safety Systems Involving Vehicle Design Description 4: Cover the impact of vehicle design on pedestrian injuries and summarize design solutions like collision-absorbing components and airbags. Section 5: Active Safety Systems Based on Pedestrian Detection Description 5: Explore systems using sensors and computer vision algorithms to detect pedestrians and initiate protective actions. Section 6: Pedestrian Detection for Active Safety Systems Description 6: Detail various methods and technologies used for pedestrian detection in active safety systems, focusing on sensor modalities, placement, and configurations. Section 7: Infrastructure-Based Solutions for Pedestrian Detection Description 7: Describe pedestrian detection systems based on infrastructure-mounted sensors and their application in traffic monitoring and collision warning. Section 8: Approaches for Collision Prediction Description 8: Present methods for predicting collisions, including deterministic and stochastic models, and discuss the integration of these models with pedestrian detection systems. Section 9: Concluding Remarks Description 9: Summarize the findings, discuss the state of research, and outline the future directions for pedestrian protection systems research.
Mobile Edge Computing: Survey and Research Outlook
16
--- paper_title: The Tactile Internet: Applications and Challenges paper_content: Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough, human tactile to visual feedback control, will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies. --- paper_title: Cloud computing: state-of-the-art and research challenges paper_content: Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area. --- paper_title: Fog and IoT: An Overview of Research Opportunities paper_content: Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT. --- paper_title: Above the Clouds: A Berkeley View of Cloud Computing paper_content: Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it.
They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. --- paper_title: eyeDentify: Multimedia Cyber Foraging from a Smartphone paper_content: The recent introduction of smartphones has resulted in an explosion of innovative mobile applications. The computational requirements of many of these applications, however, can not be met by the smartphone itself. The compute power of the smartphone can be enhanced by distributing the application over other compute resources. Existing solutions comprise of a light weight client running on the smartphone and a heavy weight compute server running on, for example, a cloud. This places the user in a dependent position, however, because the user only controls the client application. In this paper, we follow a different model, called cyber foraging, that gives users full control over all parts of the application. We have implemented the model using the Ibis middleware. We evaluate the model using an innovative application in the domain of multimedia computing, and show that cyber foraging increases the application's responsiveness and accuracy whilst decreasing its energy usage. --- paper_title: Accurate, Low-Energy Trajectory Mapping for Mobile Devices paper_content: CTrack is an energy-efficient system for trajectory mapping using raw position tracks obtained largely from cellular base station fingerprints. Trajectory mapping, which involves taking a sequence of raw position samples and producing the most likely path followed by the user, is an important component in many location-based services including crowd-sourced traffic monitoring, navigation and routing, and personalized trip management. Using only cellular (GSM) fingerprints instead of power-hungry GPS and WiFi radios, the marginal energy consumed for trajectory mapping is zero. This approach is non-trivial because we need to process streams of highly inaccurate GSM localization samples (average error of over 175 meters) and produce an accurate trajectory. 
CTrack meets this challenge using a novel two-pass Hidden Markov Model that sequences cellular GSM fingerprints directly without converting them to geographic coordinates, and fuses data from low-energy sensors available on most commodity smart-phones, including accelerometers (to detect movement) and magnetic compasses (to detect turns). We have implemented CTrack on the Android platform, and evaluated it on 126 hours (1,074 miles) of real driving traces in an urban environment. We find that CTrack can retrieve over 75% of a user's drive accurately in the median. An important by-product of CTrack is that even devices with no GPS or WiFi (constituting a significant fraction of today's phones) can contribute and benefit from accurate position data. --- paper_title: Enabling Real-Time Context-Aware Collaboration through 5G and Mobile Edge Computing paper_content: Creating context-aware ad hoc collaborative systems remains to be one of the primary hurdles hampering the ubiquitous deployment of IT and communication services. Especially under mission-critical scenarios, these services must often adhere to strict timing deadlines. We believe empowering such realtime collaboration systems requires context-aware application platforms working in conjunction with ultra-low latency data transmissions. In this paper, we make a strong case that this could be accomplished by combining the novel communication architectures being proposed for 5G with the principles of Mobile Edge Computing (MEC). We show that combining 5G with MEC would enable inter- and intra-domain use cases that are otherwise not feasible. --- paper_title: Cloud computing: state-of-the-art and research challenges paper_content: Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area. --- paper_title: Fog and IoT: An Overview of Research Opportunities paper_content: Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT. 
--- paper_title: From Augmented Reality to Augmented Computing: A Look at Cloud-Mobile Convergence paper_content: There has been considerable number of virtual and augmented reality applications designed and developed for mobile devices. However the state-of-the-art systems are commonly confined by several limitations. In this position paper the concept ”Cloud-Mobile Convergence for Virtual Reality (CMCVR)” is presented. CMCVR envisions effective and user-friendly integration of the mobile device and cloud-based resources. Through the proposed framework, mobile devices could be augmented to deliver some user experiences comparable to those offered by fixed systems. A preliminary research that follows the CMCVR paradigm is also described. --- paper_title: Fog computing and its role in the internet of things paper_content: Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs). --- paper_title: Above the Clouds: A Berkeley View of Cloud Computing paper_content: Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. 
From a hardware point of view, three aspects are new in Cloud Computing. --- paper_title: A Survey of Fog Computing: Concepts, Applications and Issues paper_content: Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing. --- paper_title: A System Architecture for Context-Aware Mobile Computing paper_content: Computer applications traditionally expect a static execution environment. However, this precondition is generally not possible for mobile systems, where the world around an application is constantly changing. This thesis explores how to support and also exploit the dynamic configurations and social settings characteristic of mobile systems. More specifically, it advances the following goals: (1) enabling seamless interaction across devices; (2) creating physical spaces that are responsive to users; and (3) building applications that are aware of the context of their use. Examples of these goals are: continuing in your office a program started at home; using a PDA to control someone else's windowing UI; automatically canceling phone forwarding upon return to your office; having an airport overhead-display highlight the flight information viewers are likely to be interested in; easily locating and using the nearest printer or fax machine; and automatically turning off a PDA's audible e-mail notification when in a meeting. The contribution of this thesis is an architecture to support context-aware computing; that is, application adaptation triggered by such things as the location of use, the collection of nearby people, the presence of accessible devices and other kinds of objects, as well as changes to all these things over time. Three key issues are addressed: (1) the information needs of applications, (2) where applications get various pieces of information and (3) how information can be efficiently distributed. A dynamic environment communication model is introduced as a general mechanism for quickly and efficiently learning about changes occurring in the environment in a fault tolerant manner. For purposes of scalability, multiple dynamic environment servers store user, device, and, for each geographic region, context information. In order to efficiently disseminate information from these components to applications, a dynamic collection of multicast groups is employed. The thesis also describes a demonstration system based on the Xerox PARCTAB, a wireless palmtop computer. --- paper_title: Offloading Guidelines for Augmented Reality Applications on Wearable Devices paper_content: As Augmented Reality (AR) gets popular on wearable devices such as Google Glass, various AR applications have been developed by leveraging synergetic benefits beyond the single technologies.
However, the poor computational capability and limited power capacity of current wearable devices degrade runtime performance and sustainability. Computational offloading strategy has been proposed to outsource computation to remote cloud for improving performance. Nevertheless, comparing with mobile devices, the wearable devices have their specific limitations, which induce additional problems and require new thoughts of computational offloading. In this paper, we propose several guidelines of computational offloading for AR applications on wearable devices based on our practical experiences of designing and developing AR applications on Google Glass. The guidelines have been adopted and proved by our application prototypes. --- paper_title: Security and privacy in mobile cloud computing paper_content: With the development of cloud computing and mobility, mobile cloud computing has emerged and become a focus of research. By the means of on-demand self-service and extendibility, it can offer the infrastructure, platform, and software services in a cloud to mobile users through the mobile network. Security and privacy are the key issues for mobile cloud computing applications, and still face some enormous challenges. In order to facilitate this emerging domain, we firstly in brief review the advantages and system model of mobile cloud computing, and then pay attention to the security and privacy in the mobile cloud computing. By deeply analyzing the security and privacy issues from three aspects: mobile terminal, mobile network and cloud, we give the current security and privacy approaches. --- paper_title: What Will 5G Be? paper_content: What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. --- paper_title: The Promise of Edge Computing paper_content: The success of the Internet of Things and rich cloud services have helped create the need for edge computing, in which data processing occurs in part at the network edge, rather than completely in the cloud. Edge computing could address concerns such as latency, mobile devices' limited battery life, bandwidth costs, security, and privacy. --- paper_title: Mobile-Edge Computing Architecture: The role of MEC in the Internet of Things paper_content: Mobile-Edge computing (MEC) is an emerging technology currently recognized as a key enabler for 5G networks. 
Compatible with current 4G networks, MEC will address many key uses of the 5G system, motivated by the massive diffusion of the Internet of Things (IoT). This article aims to provide a tutorial on MEC technology and an overview of the MEC framework and architecture recently defined by the European Telecommunications Standards Institute (ETSI) MEC Industry Specification Group (ISG) standardization organization. We provide some examples of MEC deployment, with special reference to IoT cases, since IoT is recognized as a main driver for 5G. Finally, we discuss the main benefits and challenges for MEC moving toward 5G. --- paper_title: A survey on mobile edge computing paper_content: Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed. --- paper_title: Mobile Edge Computing: A Taxonomy paper_content: Mobile Edge Computing proposes co-locating computing and storage resources at base stations of cellular networks. It is seen as a promising technique to alleviate utilization of the mobile core and to reduce latency for mobile end users. Due to the fact that Mobile Edge Computing is a novel approach not yet deployed in real-life networks, recent work discusses merely general and non-technical ideas and concepts. This paper introduces a taxonomy for Mobile Edge Computing applications and analyzes chances and limitations from a technical point of view. Application types which profit from edge deployment are identified and discussed. Furthermore, these applications are systematically classified based on technical metrics. --- paper_title: Edge computing enabling the Internet of Things paper_content: Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. 
On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper. --- paper_title: Collaborative Mobile Edge Computing in 5G Networks: New Paradigms, Scenarios, and Challenges paper_content: MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases ranging from mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem. --- paper_title: A Survey of Mobile Cloud Computing Application Models paper_content: Smart phones are now capable of supporting a wide range of applications, many of which demand an ever increasing computational power. This poses a challenge because smart phones are resource-constrained devices with limited computation power, memory, storage, and energy. Fortunately, the cloud computing technology offers virtually unlimited dynamic resources for computation, storage, and service provision. Therefore, researchers envision extending cloud computing services to mobile devices to overcome the smartphones constraints. The challenge in doing so is that the traditional smartphone application models do not support the development of applications that can incorporate cloud computing features and requires specialized mobile cloud application models. This article presents mobile cloud architecture, offloading decision affecting entities, application models classification, the latest mobile cloud application models, their critical analysis and future research directions. --- paper_title: Energy efficiency of mobile clients in cloud computing paper_content: Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing.
Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions. --- paper_title: Optimal Joint Scheduling and Cloud Offloading for Mobile Applications paper_content: Cloud offloading is an indispensable solution to supporting computationally demanding applications on resource constrained mobile devices. In this paper, we introduce the concept of wireless aware joint scheduling and computation offloading (JSCO) for multi-component applications, where an optimal decision is made on which components need to be offloaded as well as the scheduling order of these components. The JSCO approach allows for more degrees of freedom in the solution by moving away from a compiler pre-determined scheduling order for the components towards a more wireless aware scheduling order. For some component dependency graph structures, the proposed algorithm can shorten execution times by parallel processing appropriate components in the mobile and cloud. We define a net utility that trades-off the energy saved by the mobile, subject to constraints on the communication delay, overall application execution time, and component precedence ordering. The linear optimization problem is solved using real data measurements obtained from running multi-component applications on an HTC smartphone and the Amazon EC2, using WiFi for cloud offloading. The performance is further analyzed using various component dependency graph topologies and sizes. Results show that the energy saved increases with longer application runtime deadline, higher wireless rates, and smaller offload data sizes. --- paper_title: Energy-efficient soft real-time CPU scheduling for mobile multimedia systems paper_content: This paper presents GRACE-OS, an energy-efficient soft real-time CPU scheduler for mobile devices that primarily run multimedia applications. The major goal of GRACE-OS is to support application quality of service and save energy. To achieve this goal, GRACE-OS integrates dynamic voltage scaling into soft real-time scheduling and decides how fast to execute applications in addition to when and how long to execute them. GRACE-OS makes such scheduling decisions based on the probability distribution of application cycle demands, and obtains the demand distribution via online profiling and estimation. We have implemented GRACE-OS in the Linux kernel and evaluated it on an HP laptop with a variable-speed CPU and multimedia codecs. Our experimental results show that (1) the demand distribution of the studied codecs is stable or changes smoothly. This stability implies that it is feasible to perform stochastic scheduling and voltage scaling with low overhead; (2) GRACE-OS delivers soft performance guarantees by bounding the deadline miss ratio under application-specific requirements; and (3) GRACE-OS reduces CPU idle time and spends more busy time in lower-power speeds. Our measurement indicates that compared to deterministic scheduling and voltage scaling, GRACE-OS saves energy by 7% to 72% while delivering statistical performance guarantees.
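As a back-of-the-envelope companion to the energy-efficiency entries above, the sketch below compares the energy a handset would spend computing a job locally against the energy spent shipping the job's input data to the cloud and idling while it runs remotely; the linear model and every numerical value are illustrative assumptions, not measurements from the cited papers.

    def local_energy(cycles, cpu_speed_hz, cpu_power_w):
        """Energy (J) to run `cycles` CPU cycles on the handset."""
        return cpu_power_w * cycles / cpu_speed_hz

    def offload_energy(bytes_to_send, uplink_bps, tx_power_w, idle_power_w, cloud_time_s):
        """Energy (J) to transmit the input data plus idle energy while the cloud computes."""
        return tx_power_w * (8 * bytes_to_send) / uplink_bps + idle_power_w * cloud_time_s

    # Hypothetical numbers for a compute-heavy job (all values are illustrative assumptions).
    cycles = 5e9            # total CPU cycles of the job
    cpu_speed = 1.5e9       # handset CPU speed (Hz)
    cpu_power = 1.0         # active CPU power (W)
    data = 1e6              # input data to offload (bytes)
    uplink = 5e6            # uplink rate (bit/s)
    tx_power = 1.3          # radio transmit power (W)
    idle_power = 0.05       # handset idle power while waiting (W)
    cloud_time = 0.5        # cloud execution time (s)

    e_local = local_energy(cycles, cpu_speed, cpu_power)
    e_offload = offload_energy(data, uplink, tx_power, idle_power, cloud_time)
    print(f"local: {e_local:.2f} J, offload: {e_offload:.2f} J, "
          f"offloading saves energy: {e_offload < e_local}")

With these particular numbers offloading wins, but shrinking the uplink rate or growing the input data quickly reverses the conclusion, which is exactly the sensitivity to workload and communication patterns that the energy-efficiency entry highlights.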
--- paper_title: Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing paper_content: 2014 IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2014, Toronto, ON, 27 April-2 May 2014 --- paper_title: Computation offloading decisions for reducing completion time paper_content: We analyze the conditions in which offloading computation reduces completion time. We extend the existing literature by deriving an inequality (Eq. 4) that relates computation offloading system parameters to the bits per instruction ratio of a computational job. This ratio is the inverse of the arithmetic intensity. We then discuss how this inequality can be used to determine the computations that can benefit from offloading as well as the computation offloading systems required to make offloading beneficial for particular computations. --- paper_title: A Simple Transmit Diversity Technique for Wireless Communications paper_content: This paper presents a simple two-branch transmit diversity scheme. Using two transmit antennas and one receive antenna the scheme provides the same diversity order as maximal-ratio receiver combining (MRRC) with one transmit antenna, and two receive antennas. It is also shown that the scheme may easily be generalized to two transmit antennas and M receive antennas to provide a diversity order of 2M. The new scheme does not require any bandwidth expansion or any feedback from the receiver to the transmitter and its computation complexity is similar to MRRC. --- paper_title: Massive MIMO for Next Generation Wireless Systems paper_content: Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
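For reference alongside the transmit-diversity entry above, the two-branch Alamouti scheme it describes can be restated compactly (standard textbook form, not quoted from the abstract), with h_1, h_2 the flat-fading channel gains from the two transmit antennas and n_1, n_2 the receiver noise samples over two consecutive symbol periods:

    \[
    \text{antenna 1: } s_1,\; -s_2^{*}, \qquad \text{antenna 2: } s_2,\; s_1^{*},
    \]
    \[
    r_1 = h_1 s_1 + h_2 s_2 + n_1, \qquad r_2 = -h_1 s_2^{*} + h_2 s_1^{*} + n_2,
    \]
    \[
    \tilde{s}_1 = h_1^{*} r_1 + h_2 r_2^{*} = \bigl(|h_1|^2 + |h_2|^2\bigr)\, s_1 + h_1^{*} n_1 + h_2 n_2^{*}, \qquad
    \tilde{s}_2 = h_2^{*} r_1 - h_1 r_2^{*} = \bigl(|h_1|^2 + |h_2|^2\bigr)\, s_2 + h_2^{*} n_1 - h_1 n_2^{*}.
    \]

Each combined symbol is scaled by |h_1|^2 + |h_2|^2, the same two-branch gain as MRRC with a single transmit antenna; repeating the combining at each of M receive antennas yields the diversity order of 2M noted in the abstract.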
--- paper_title: Alternating Minimization Algorithms for Hybrid Precoding in Millimeter Wave MIMO Systems paper_content: Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input–multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights. --- paper_title: Small-Scale Spectrum Aggregation and Sharing paper_content: New spectrum bands together with flexible spectrum management are treated as one of the key technical enablers for achievement of the so-called key-performance indicators defined for 5G wireless networks. In this paper, we deal with the small-scale spectrum aggregation and sharing, where a set of even very narrow and disjoint frequency bands closely located on the frequency axis can be utilized simultaneously. We first discuss how such a scheme can be applied to various multicarrier systems, focusing on the non-contiguous orthogonal frequency division multiplexing and non-contiguous filter-bank multicarrier technique. We propose an interference model that takes into account the limitations of both transmitter and receiver frequency selectivity, and apply it to our 5G link-optimization framework, what differentiates our work from other standard approaches to link adaptation. We present the results of hardware experiments to validate assumed theoretical interference models. Finally, we solve the optimization problem subject to the constraints of maximum interference induced to the protected legacy systems (GSM and UMTS). Results confirm that small-scale spectrum aggregation can provide high throughput even when the 5G system operates in a dense heterogeneous network. 
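Returning to the hybrid-precoding entry above, which casts the design as factorizing a fully digital precoder into an analog and a digital stage, the toy loop below alternates a least-squares digital update with a phase-only analog update; it is a simple heuristic sketch of the alternating-minimization idea under assumed dimensions, not the manifold-optimization or semidefinite-relaxation algorithms proposed in that paper.

    import numpy as np

    def altmin_hybrid_precoder(F_opt, n_rf, iters=100, seed=0):
        # Approximate F_opt (n_t x n_s) as F_RF @ F_BB, where the analog stage
        # F_RF is restricted to unit-modulus (phase-only) entries.
        # Simple alternating heuristic, for illustration only.
        rng = np.random.default_rng(seed)
        n_t, _ = F_opt.shape
        F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, n_rf)))   # random initial phases
        for _ in range(iters):
            # Digital stage: unconstrained least squares given the current analog stage.
            F_BB, *_ = np.linalg.lstsq(F_RF, F_opt, rcond=None)
            # Analog stage: keep only the phases of the unconstrained fit.
            F_RF = np.exp(1j * np.angle(F_opt @ F_BB.conj().T))
        F_BB, *_ = np.linalg.lstsq(F_RF, F_opt, rcond=None)          # final digital update
        return F_RF, F_BB

    # Toy usage: 16 antennas, 2 streams, 4 RF chains, random target precoder.
    rng = np.random.default_rng(1)
    F_opt = rng.standard_normal((16, 2)) + 1j * rng.standard_normal((16, 2))
    F_RF, F_BB = altmin_hybrid_precoder(F_opt, n_rf=4)
    print(np.linalg.norm(F_opt - F_RF @ F_BB) / np.linalg.norm(F_opt))  # relative factorization error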
--- paper_title: Multi-Cell MIMO Cooperative Networks: A New Look at Interference paper_content: This paper presents an overview of the theory and currently known techniques for multi-cell MIMO (multiple input multiple output) cooperation in wireless networks. In dense networks where interference emerges as the key capacity-limiting factor, multi-cell cooperation can dramatically improve the system performance. Remarkably, such techniques literally exploit inter-cell interference by allowing the user data to be jointly processed by several interfering base stations, thus mimicking the benefits of a large virtual MIMO array. Multi-cell MIMO cooperation concepts are examined from different perspectives, including an examination of the fundamental information-theoretic limits, a review of the coding and signal processing algorithmic developments, and, going beyond that, consideration of very practical issues related to scalability and system-level integration. A few promising and quite fundamental research avenues are also suggested. --- paper_title: Spectrum Refarming: A New Paradigm of Spectrum Sharing for Cellular Networks paper_content: Spectrum refarming (SR) refers to a radio resource management technique which supports different generations of cellular networks to operate in the same radio spectrum. In this paper, an underlay SR model is proposed for an Orthogonal Frequency Division Multiple Access (OFDMA) system to share the spectrum of a Code Division Multiple Access (CDMA) system through intelligently exploiting the interference margin provided by the CDMA system when operating with a low system load. The asymptotic signal-to-interference-plus-noise ratio (SINR) of the CDMA system is used to quantify the interference margin, which interestingly does not depend on the instantaneous information (spreading codes and channel state information) of the CDMA system, thanks to its internal power control. By using the transmit power constraints together with the derived interference margin, the uplink resource allocation problem for OFDMA system is formulated and solved through dual decomposition method. The proposed SR system only requires high level system parameters from the CDMA system, hence, the upgrading of legacy CDMA system is not needed. Simulation results have verified our theoretical analysis, and validated the effectiveness of the proposed resource allocation algorithm and its capability to protect the legacy CDMA users. --- paper_title: Femtocells: Past, Present, and Future paper_content: Femtocells, despite their name, pose a potentially large disruption to the carefully planned cellular networks that now connect a majority of the planet's citizens to the Internet and with each other. Femtocells - which by the end of 2010 already outnumbered traditional base stations and at the time of publication are being deployed at a rate of about five million a year - both enhance and interfere with this network in ways that are not yet well understood. Will femtocells be crucial for offloading data and video from the creaking traditional network? Or will femtocells prove more trouble than they are worth, undermining decades of careful base station deployment with unpredictable interference while delivering only limited gains? Or possibly neither: are femtocells just a "flash in the pan"; an exciting but short-lived stage of network evolution that will be rendered obsolete by improved WiFi offloading, new backhaul regulations and/or pricing, or other unforeseen technological developments? 
This tutorial article overviews the history of femtocells, demystifies their key aspects, and provides a preview of the next few years, which the authors believe will see a rapid acceleration towards small cell technology. In the course of the article, we also position and introduce the articles that headline this special issue. --- paper_title: Indoor Millimeter Wave MIMO: Feasibility and Performance paper_content: In this paper, we investigate spatial multiplexing at millimeter (mm) wave carrier frequencies for short-range indoor applications by quantifying fundamental limits in line-of-sight (LOS) environments and then investigating performance in the presence of multipath and LOS blockage. Our contributions are summarized as follows. For linear arrays with constrained form factor, an asymptotic analysis based on the properties of prolate spheroidal wave functions shows that a sparse array producing a spatially uncorrelated channel matrix effectively provides the maximum number of spatial degrees of freedom in a LOS environment, although substantial beamforming gains can be obtained by using denser arrays. This motivates our proposed mm-wave MIMO architecture, which utilizes arrays of subarrays to provide both directivity and spatial multiplexing gains. System performance is evaluated in a simulated indoor environment using a ray-tracing model that incorporates multipath effects and potential LOS blockage. Eigenmode transmission with waterfilling power allocation serves as a performance benchmark, and is compared to the simpler scheme of beamsteering transmission with MMSE reception and a fixed signal constellation. Our numerical results provide insight into the spatial variations of attainable capacity within a room, and the combinations of beamsteering and spatial multiplexing used in different scenarios. --- paper_title: Modeling and Analysis of K-Tier Downlink Heterogeneous Cellular Networks paper_content: Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.
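To complement the stochastic-geometry entry above, the snippet below estimates downlink coverage probability by Monte Carlo for a single-tier Poisson network with nearest-BS association and Rayleigh fading; the density, path-loss exponent, and SINR threshold are arbitrary illustrative values, and the closed-form multi-tier analysis of the paper is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def coverage_probability(density=1e-5, alpha=4.0, tau_db=0.0,
                             radius=5000.0, trials=2000):
        # Monte Carlo estimate of P[SIR > tau] for a single-tier Poisson network
        # (interference-limited), nearest-BS association, Rayleigh fading.
        tau = 10 ** (tau_db / 10)
        area = np.pi * radius ** 2
        covered = 0
        for _ in range(trials):
            n = rng.poisson(density * area)
            if n < 2:
                continue                               # essentially never happens at this density
            r = radius * np.sqrt(rng.random(n))        # BS distances, uniform in a disc
            g = rng.exponential(1.0, n)                # Rayleigh fading -> exponential power
            rx = g * r ** (-alpha)                     # unit Tx power, power-law path loss
            serving = np.argmin(r)                     # associate with the nearest BS
            sir = rx[serving] / (rx.sum() - rx[serving])
            covered += sir > tau
        return covered / trials

    print(coverage_probability())

Sweeping tau_db reproduces the familiar coverage-versus-threshold curve that the paper derives in closed form for the general K-tier case.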
--- paper_title: User-Centric Intercell Interference Nulling for Downlink Small Cell Networks paper_content: Small cell networks are regarded as a promising candidate to meet the exponential growth of mobile data traffic in cellular networks. With a dense deployment of access points, spatial reuse will be improved, and uniform coverage can be provided. However, such performance gains cannot be achieved without effective intercell interference management. In this paper, a novel interference coordination strategy, called user-centric intercell interference nulling, is proposed for small cell networks. A main merit of the proposed strategy is its ability to effectively identify and mitigate the dominant interference for each user. Different from existing works, each user selects the coordinating base stations (BSs) based on the relative distance between the home BS and the interfering BSs, called the interference nulling (IN) range, and thus interference nulling adapts to each user's own interference situation. By adopting a random spatial network model, we derive an approximate expression of the successful transmission probability to the typical user, which is then used to determine the optimal IN range. Simulation results shall confirm the tightness of the approximation, and demonstrate significant performance gains (about 35–40%) of the proposed coordination strategy, compared with the non-coordination case. Moreover, it is shown that the proposed strategy outperforms other interference nulling methods. Finally, the effect of imperfect channel state information (CSI) is investigated, where CSI is assumed to be obtained via limited feedback. It is shown that the proposed coordination strategy still provides significant performance gains even with a moderate number of feedback bits. --- paper_title: Energy-Optimal Mobile Cloud Computing under Stochastic Wireless Channel paper_content: This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel. --- paper_title: An analysis of power consumption in a smartphone paper_content: Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity.
This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device's main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device's application processor. --- paper_title: Processor Design for Portable Systems paper_content: Processors used in portable systems must provide highly energy-efficient operation, due to the importance of battery weight and size, without compromising high performance when the user requires it. The user-dependent modes of operation of a processor in portable systems are described and separate metrics for energy efficiency for each of them are found to be required. A variety of well known low-power techniques are re-evaluated against these metrics and in some cases are not found to be appropriate, leading to a set of energy-efficient design principles. Also, the importance of idle energy reduction and the joint optimization of hardware and software will be examined for achieving the ultimate in low-energy, high-performance design. --- paper_title: The Energy/Frequency Convexity Rule: Modeling and Experimental Validation on Mobile Devices paper_content: This paper provides both theoretical and experimental evidence for the existence of an Energy/Frequency Convexity Rule, which relates energy consumption and CPU frequency on mobile devices. We monitored a typical smartphone running a specific computing-intensive kernel of multiple nested loops written in C using a high-resolution power gauge. Data gathered during a week-long acquisition campaign suggest that energy consumed per input element is strongly correlated with CPU frequency, and, more interestingly, the curve exhibits a clear minimum over a 0.2 GHz to 1.6 GHz window. We provide and motivate an analytical model for this behavior, which fits well with the data. Our work should be of clear interest to researchers focusing on energy usage and minimization for mobile devices, and provide new insights for optimization opportunities. --- paper_title: Energy-Optimal Mobile Cloud Computing under Stochastic Wireless Channel paper_content: This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition.
We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel. --- paper_title: Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing paper_content: Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios. --- paper_title: Joint allocation of computation and communication resources in multiuser mobile cloud computing paper_content: Mobile cloud computing is offering a very powerful storage and computational facility to enhance the capabilities of resource-constrained mobile handsets. However, full exploitation of the cloud computing capabilities can be achieved only if the allocation of radio and computational capabilities is performed jointly. In this paper, we propose a method to jointly optimize the transmit power, the number of bits per symbol and the CPU cycles assigned to each application in order to minimize the power consumption at the mobile side, under an average latency constraint dictated by the application requirements. 
We consider the case of a set of mobile handsets served by a single cloud and we show that the optimization leads to a one-to-one relationship between the transmit power and the percentage of CPU cycles assigned to each user. Based on our optimization, we propose then a computation scheduling technique and verify the stability of the computations queue. Then we show how these queues are affected by the degrees of freedom of the channels between mobile handsets and server. --- paper_title: A Stochastic Model to Investigate Data Center Performance and QoS in IaaS Cloud Computing Systems paper_content: Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to the federation with other clouds. Performance evaluation of cloud computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding quality of service (QoS) experienced by users. Such analyses are not feasible by simulation or on-the-field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on stochastic reward nets (SRNs), that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers to opportunely set the data center parameters under different working conditions. --- paper_title: Xen and the art of virtualization paper_content: Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. --- paper_title: Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading paper_content: Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. 
Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function , which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation. --- paper_title: Energy Efficient Mobile Cloud Computing Powered by Wireless Energy Transfer paper_content: Achieving long battery lives or even self sustainability has been a long standing challenge for designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to or offloads computation from a mobile to the cloud; the mobile uses harvested energy to compute given data either locally or by offloading. A framework for energy efficient computing is proposed that comprises a set of policies for controlling CPU cycles for the mode of local computing, time division between MPT and offloading for the other mode of offloading, and mode selection. Given the CPU-cycle statistics information and channel state information (CSI), the policies aim at maximizing the probability of successfully computing given data, called computing probability , under the energy harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. Furthermore, given non-causal CSI, the said analytical framework is further developed to support computation load allocation over multiple channel realizations, which further increases the computing probability. Last, simulation demonstrates the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control. 
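Several of the preceding entries revolve around the same basic comparison: the energy a device spends computing a task locally under a deadline versus the energy it spends shipping the task input to the cloud. The sketch below is a minimal, hypothetical version of that comparison, assuming the common dynamic-power model E = kappa * cycles * f^2 for local execution and a fixed-power Shannon-rate link for offloading; it is not the optimal policy derived in any of the cited papers, and all constants are illustrative.

```python
import math

def local_energy(cycles, deadline_s, kappa=1e-27):
    """Dynamic CPU energy when running at the slowest frequency meeting the deadline.

    Uses the common model E = kappa * cycles * f^2 with f = cycles / deadline;
    kappa is an illustrative effective switched-capacitance constant.
    """
    f = cycles / deadline_s
    return kappa * cycles * f ** 2

def offload_energy(bits, deadline_s, tx_power_w=0.1, bandwidth_hz=1e6,
                   channel_gain=1e-6, noise_w=1e-9):
    """Energy to upload the task input within the deadline, or None if infeasible.

    Assumes a Shannon-capacity link at fixed transmit power (illustrative values);
    cloud execution time and result download are neglected for simplicity.
    """
    rate = bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_w)
    upload_time = bits / rate
    if upload_time > deadline_s:
        return None
    return tx_power_w * upload_time

def choose_mode(cycles, bits, deadline_s):
    """Threshold-style decision: pick whichever feasible mode costs less energy."""
    e_local = local_energy(cycles, deadline_s)
    e_off = offload_energy(bits, deadline_s)
    if e_off is None or e_local <= e_off:
        return "local", e_local
    return "offload", e_off

if __name__ == "__main__":
    for bits in (1e5, 1e6, 1e7):
        print(bits, choose_mode(cycles=5e8, bits=bits, deadline_s=0.5))
```

In this toy model the decision reduces to a threshold in the input data size for a fixed cycle count, which loosely mirrors the threshold-structured offloading policies described in the entries above.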
--- paper_title: Modeling of the resource allocation in cloud computing centers paper_content: Cloud computing offers on-demand network access to the computing resources through virtualization. This paradigm shifts the computer resources to the cloud, which results in cost savings as the users leasing instead of owning these resources. Clouds will also provide power constrained mobile users accessibility to the computing resources. In this paper, we develop performance models of these systems. We assume that jobs arrive to the system according to a Poisson process and they may have quite general service time distributions. Each job may consist of multiple numbers of tasks with each task requiring a virtual machine (VM) for its execution. The size of a job is determined by the number of its tasks, which may be a constant or a variable. The jobs with variable sizes may generate new tasks during their service times. In the case of constant job size, we allow different classes of jobs, with each class being determined through their arrival and service rates and number of tasks in a job. In the variable case a job generates randomly new tasks during its service time. The latter requires dynamic assignment of VMs to a job, which will be needed in providing service to mobile users. We model the systems with both constant and variable size jobs using birth-death processes. In the case of constant job size, we determined joint probability distribution of the number of jobs from each class in the system, job blocking probabilities and distribution of the utilization of resources for systems with both homogeneous and heterogeneous types of VMs. We have also analyzed tradeoffs for turning idle servers off for power saving. In the case of variable job sizes, we have determined distribution of the number of jobs in the system and average service time of a job for systems with both infinite and finite amount of resources. We have presented numerical results and any approximations are verified by simulation. The results of the paper may be used in the dimensioning of cloud computing centers. --- paper_title: Above the Clouds: A Berkeley View of Cloud Computing paper_content: Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. 
When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. --- paper_title: Power provisioning for a warehouse-sized computer paper_content: Large-scale Internet services require a computing infrastructure that can be appropriately described as a warehouse-sized computing system. The cost of building datacenter facilities capable of delivering a given power capacity to such a computer can rival the recurring energy consumption costs themselves. Therefore, there are strong economic incentives to operate facilities as close as possible to maximum capacity, so that the non-recurring facility costs can be best amortized. That is difficult to achieve in practice because of uncertainties in equipment power ratings and because power consumption tends to vary significantly with the actual computing activity. Effective power provisioning strategies are needed to determine how much computing equipment can be safely and efficiently hosted within a given power budget. In this paper we present the aggregate power usage characteristics of large collections of servers (up to 15 thousand) for different classes of applications over a period of approximately six months. Those observations allow us to evaluate opportunities for maximizing the use of the deployed power capacity of datacenters, and assess the risks of over-subscribing it. We find that even in well-tuned applications there is a noticeable gap (7-16%) between achieved and theoretical aggregate peak power usage at the cluster level (thousands of servers). The gap grows to almost 40% in whole datacenters. This headroom can be used to deploy additional compute equipment within the same power budget with minimal risk of exceeding it. We use our modeling framework to estimate the potential of power management schemes to reduce peak power and energy usage. We find that the opportunities for power and energy savings are significant, but greater at the cluster-level (thousands of servers) than at the rack-level (tens). Finally we argue that systems need to be power efficient across the activity range, and not only at peak performance levels. --- paper_title: Energy-Optimal Mobile Cloud Computing under Stochastic Wireless Channel paper_content: This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies.
Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel. --- paper_title: DREAM: Dynamic Resource and Task Allocation for Energy Minimization in Mobile Cloud Systems paper_content: To cope with increasing energy consumption in mobile devices, the mobile cloud offloading has received considerable attention from its ability to offload processing tasks of mobile devices to cloud servers, and previous studies have focused on single type tasks in fixed network environments. However, real network environments are spatio-temporally varying, and typical mobile devices have not only various types of tasks, e.g., network traffic, cloud offloadable/nonoffloadable workloads but also capabilities of CPU frequency scaling and network interface selection between WiFi and cellular. In this paper, we first jointly consider the following three dynamic problems in real mobile environments: 1) cloud offloading policy, i.e., determining to use local CPU resources or cloud resources; 2) allocation of tasks to transmit through networks and to process in local CPU; and 3) CPU clock speed and network interface controls. We propose a DREAM algorithm by invoking the Lyapunov optimization and mathematically prove that it minimizes CPU and network energy for given delay constraints. Trace-driven simulation based on real measurements demonstrates that DREAM can save over 35% of total energy than existing algorithms with the same delay. We also design DREAM architecture and demonstrate the applicability of DREAM in practice. --- paper_title: Mobile-Edge Computing: Partial Computation Offloading Using Dynamic Voltage Scaling paper_content: The incorporation of dynamic voltage scaling technology into computation offloading offers more flexibilities for mobile edge computing. In this paper, we investigate partial computation offloading by jointly optimizing the computational speed of smart mobile device (SMD), transmit power of SMD, and offloading ratio with two system design objectives: energy consumption of SMD minimization (ECM) and latency of application execution minimization (LM). Considering the case that the SMD is served by a single cloud server, we formulate both the ECM problem and the LM problem as nonconvex problems. To tackle the ECM problem, we recast it as a convex one with the variable substitution technique and obtain its optimal solution. To address the nonconvex and nonsmooth LM problem, we propose a locally optimal algorithm with the univariate search technique. Furthermore, we extend the scenario to a multiple cloud servers system, where the SMD could offload its computation to a set of cloud servers. In this scenario, we obtain the optimal computation distribution among cloud servers in closed form for the ECM and LM problems. 
Finally, extensive simulations demonstrate that our proposed algorithms can significantly reduce the energy consumption and shorten the latency with respect to the existing offloading schemes. --- paper_title: QoE-Aware Computation Offloading Scheduling to Capture Energy-Latency Tradeoff in Mobile Clouds paper_content: Computation offloading is a promising application of mobile clouds that can save energy of mobile devices via optimal transmission scheduling of mobile-to-cloud task offloading. Existing approaches to computation offloading have addressed various aspects of the tradeoff between energy consumption and application latency, but none of them explicitly considered the dependency in optimization on the mobile user''s context, e.g., user tendency, the remaining battery level. This paper captures such a user-centric perspective in the energy-latency tradeoff via a quality-of-experience (QoE) based cost function, and formulates the problem of data offloading scheduling as dynamic programming (DP). To derive the optimal schedule, we first introduce a database-assisted optimal DP algorithm and then propose a suboptimal but computationally-efficient approximate DP (ADP) algorithm based on the limited lookahead technique. An extensive numerical analysis has revealed that the ADP algorithm achieves near-optimal performance incurring only 2.27% extra cost on average than the optimum, and enhances QoE by up to 4.46 times compared to the energy-only scheduling. --- paper_title: Inter-Layer Per-Mobile Optimization of Cloud Mobile Computing: A Message-Passing Approach paper_content: Cloud mobile computing enables the offloading of computation-intensive applications from a mobile device to a cloud processor via a wireless interface. In light of the strong interplay between offloading decisions at the application layer and physical-layer parameters, which determine the energy and latency associated with the mobile-cloud communication, this paper investigates the inter-layer optimization of fine-grained task offloading across both layers. In prior art, this problem was formulated, under a serial implementation of processing and communication, as a mixed integer program, entailing a complexity that is exponential in the number of tasks. In this work, instead, algorithmic solutions are proposed that leverage the structure of the call graphs of typical applications by means of message passing on the call graph, under both serial and parallel implementations of processing and communication. For call trees, the proposed solutions have a linear complexity in the number of tasks, and efficient extensions are presented for more general call graphs that include "map" and "reduce"-type tasks. Moreover, the proposed schemes are optimal for the serial implementation, and provide principled heuristics for the parallel implementation. Extensive numerical results yield insights into the impact of inter-layer optimization and on the comparison of the two implementations. --- paper_title: Delay-Optimal Computation Task Scheduling for Mobile-Edge Computing Systems paper_content: Mobile-edge computing (MEC) emerges as a promising paradigm to improve the quality of computation experience for mobile devices. Nevertheless, the design of computation task scheduling policies for MEC systems inevitably encounters a challenging two-timescale stochastic optimization problem. 
Specifically, in the larger timescale, whether to execute a task locally at the mobile device or to offload a task to the MEC server for cloud computing should be decided, while in the smaller timescale, the transmission policy for the task input data should adapt to the channel side information. In this paper, we adopt a Markov decision process approach to handle this problem, where the computation tasks are scheduled based on the queueing state of the task buffer, the execution state of the local processing unit, as well as the state of the transmission unit. By analyzing the average delay of each task and the average power consumption at the mobile device, we formulate a power-constrained delay minimization problem, and propose an efficient one-dimensional search algorithm to find the optimal task scheduling policy. Simulation results are provided to demonstrate the capability of the proposed optimal stochastic task scheduling policy in achieving a shorter average execution delay compared to the baseline policies. --- paper_title: Communicating While Computing: Distributed mobile cloud computing over 5G heterogeneous networks paper_content: Current estimates of mobile data traffic in the years to come foresee a 1,000-fold increase of mobile data traffic in 2020 with respect to 2010, or, equivalently, a doubling of mobile data traffic every year. This unprecedented growth demands a significant increase of wireless network capacity. Even if the current evolution of fourth-generation (4G) systems and, in particular, the advancements of the long-term evolution (LTE) standardization process foresees a significant capacity improvement with respect to third-generation (3G) systems, the European Telecommunications Standards Institute (ETSI) has established a roadmap toward the fifth-generation (5G) system, with the aim of deploying a commercial system by the year 2020 [1]. The European Project named "Mobile and Wireless Communications Enablers for the 2020 Information Society" (METIS), launched in 2012, represents one of the first international and large-scale research projects on fifth generation (5G) [2]. In parallel with this unparalleled growth of data traffic, our everyday life experience shows an increasing habit to run a plethora of applications specifically devised for mobile devices (smartphones, tablets, laptops) for entertainment, health care, business, social networking, traveling, news, etc. However, the spectacular growth in wireless traffic generated by this lifestyle is not matched with a parallel improvement of mobile handsets' batteries, whose lifetime is not improving at the same pace [3]. This determines a widening gap between the energy required to run sophisticated applications and the energy available on the mobile handset. A possible way to overcome this obstacle is to enable the mobile devices, whenever possible and convenient, to offload their most energy-consuming tasks to nearby fixed servers. This strategy has been studied for a long time and is reported in the literature under different names, such as cyberforaging [4] or computation offloading [5], [6]. In recent years, a strong impulse to computation offloading has come through cloud computing (CC), which enables the users to utilize resources on demand.
The resources made available by a cloud service provider are: 1) infrastructures, such as network devices, storage, servers, etc., 2) platforms, such as operating systems, offering an integrated environment for developing and testing custom applications, and 3) software, in the form of application programs. These three kinds of services are labeled, respectively, as infrastructure as a service, platform as a service, and software as a service. In particular, one of the key features of CC is virtualization, which makes it possible to run multiple operating systems and multiple applications over the same machine (or set of machines), while guaranteeing isolation and protection of the programs and their data. Through virtualization, the number of virtual machines (VMs) can scale on demand, thus improving the overall system computational efficiency. Mobile CC (MCC) is a specific case of CC where the user accesses the cloud services through a mobile handset [5]. The major limitations of today's MCC are the energy consumption associated to the radio access and the latency experienced in reaching the cloud provider through a wide area network (WAN). Mobile users located at the edge of macrocellular networks are particularly disadvantaged in terms of power consumption and, furthermore, it is very difficult to control latency over a WAN. As pointed out in [7]-[9], humans are acutely sensitive to delay and jitter: as latency increases, interactive response suffers. Since the interaction times foreseen in 5G systems, in particular in the so-called tactile Internet [10], are quite small (in the order of milliseconds), a strict latency control must be somehow incorporated in near future MCC. Meeting this constraint requires a deep rethinking of the overall service chain, from the physical layer up to virtualization. --- paper_title: Optimal Joint Scheduling and Cloud Offloading for Mobile Applications paper_content: Cloud offloading is an indispensable solution to supporting computationally demanding applications on resource constrained mobile devices. In this paper, we introduce the concept of wireless aware joint scheduling and computation offloading (JSCO) for multi-component applications, where an optimal decision is made on which components need to be offloaded as well as the scheduling order of these components. The JSCO approach allows for more degrees of freedom in the solution by moving away from a compiler pre-determined scheduling order for the components towards a more wireless aware scheduling order. For some component dependency graph structures, the proposed algorithm can shorten execution times by parallel processing appropriate components in the mobile and cloud. We define a net utility that trades off the energy saved by the mobile, subject to constraints on the communication delay, overall application execution time, and component precedence ordering. The linear optimization problem is solved using real data measurements obtained from running multi-component applications on an HTC smartphone and the Amazon EC2, using WiFi for cloud offloading. The performance is further analyzed using various component dependency graph topologies and sizes. Results show that the energy saved increases with longer application runtime deadline, higher wireless rates, and smaller offload data sizes.
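The JSCO entry above works with component dependency graphs, which are small enough in many applications that the joint decision can be illustrated by brute force. The sketch below enumerates all local/cloud placements of a hypothetical four-component DAG, charges a fixed transfer delay and transmit energy whenever a dependency edge crosses the device/cloud boundary, and keeps the cheapest placement that meets a deadline. The timing model is deliberately simplified (serial execution along one precedence order) and every number is made up; it is not the formulation of the cited paper.

```python
import itertools

# Hypothetical 4-component application: per-component local time (s), local
# energy (J), and cloud execution time (s).
TASKS = {  # name: (local_time, local_energy, cloud_time)
    "A": (0.4, 0.8, 0.10), "B": (0.6, 1.2, 0.15),
    "C": (0.5, 1.0, 0.12), "D": (0.3, 0.6, 0.08),
}
EDGES = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]  # precedence DAG
XFER = 0.2       # extra delay (s) whenever dependent components run on different sides
TX_ENERGY = 0.3  # mobile energy (J) charged per boundary-crossing edge
DEADLINE = 1.6

def finish_times(placement):
    """Earliest finish time of each component for a given local/cloud placement."""
    order = ["A", "B", "C", "D"]  # a topological order of the DAG above
    done = {}
    for t in order:
        preds = [u for (u, v) in EDGES if v == t]
        ready = max((done[u] + (XFER if placement[u] != placement[t] else 0.0)
                     for u in preds), default=0.0)
        runtime = TASKS[t][0] if placement[t] == "local" else TASKS[t][2]
        done[t] = ready + runtime
    return done

def best_assignment():
    """Exhaustive search over 2^n placements; fine for a handful of components."""
    best = None
    for choice in itertools.product(("local", "cloud"), repeat=len(TASKS)):
        placement = dict(zip(TASKS, choice))
        makespan = max(finish_times(placement).values())
        energy = sum(TASKS[t][1] for t in TASKS if placement[t] == "local")
        energy += TX_ENERGY * sum(placement[u] != placement[v] for u, v in EDGES)
        if makespan <= DEADLINE and (best is None or energy < best[0]):
            best = (energy, makespan, placement)
    return best

if __name__ == "__main__":
    print(best_assignment())
```

Exhaustive search is only viable for a handful of components; the works surveyed here replace it with linear programming formulations, polynomial-time approximation schemes, or heuristic schedulers.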
--- paper_title: Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing paper_content: 2014 IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2014, Toronto, ON, 27 April-2 May 2014 --- paper_title: Hermes: Latency Optimal Task Assignment for Resource-constrained Mobile Computing paper_content: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either save local resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that can take into account the application-specific profile, availability of computational resources, and link connectivity, and find a balance between energy consumption costs of mobile devices and latency for delay-sensitive applications. We formulate an NP-hard problem to minimize the application latency while meeting prescribed resource utilization constraints. Different from most of existing works that either rely on the integer programming solver, or on heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS). We identify for a subset of problem instances, where the application task graphs can be described as serial trees, Hermes provides a solution with latency no more than $(1+\epsilon)$ times of the minimum while incurring complexity that is polynomial in problem size and $\frac{1}{\epsilon}$ . We further propose an online algorithm to learn the unknown dynamic environment and guarantee that the performance gap compared to the optimal strategy is bounded by a logarithmic function with time. Evaluation is done by using real data set collected from several benchmarks, and is shown that Hermes improves the latency by $16$ percent compared to a previously published heuristic and increases CPU computing time by only $0.4$ percent of overall latency. --- paper_title: A Dynamic Offloading Algorithm for Mobile Computing paper_content: Offloading is an effective method for extending the lifetime of handheld mobile devices by executing some components of applications remotely (e.g., on the server in a data center or in a cloud). In this article, to achieve energy saving while satisfying given application execution time requirement, we present a dynamic offloading algorithm, which is based on Lyapunov optimization. The algorithm has low complexity to solve the offloading problem (i.e., to determine which software components to execute remotely given available wireless network connectivity). Performance evaluation shows that the proposed algorithm saves more energy than the existing algorithm while meeting the requirement of application execution time. --- paper_title: Collaborative Task Execution in Mobile Cloud Computing Under a Stochastic Wireless Channel paper_content: This paper investigates collaborative task execution between a mobile device and a cloud clone for mobile applications under a stochastic wireless channel. A mobile application is modeled as a sequence of tasks that can be executed on the mobile device or on the cloud clone. We aim to minimize the energy consumption on the mobile device while meeting a time deadline, by strategically offloading tasks to the cloud. We formulate the collaborative task execution as a constrained shortest path problem. 
We derive a one-climb policy by characterizing the optimal solution and then propose an enumeration algorithm for the collaborative task execution in polynomial time. Further, we apply the LARAC algorithm to solving the optimization problem approximately, which has lower complexity than the enumeration algorithm. Simulation results show that the approximate solution of the LARAC algorithm is close to the optimal solution of the enumeration algorithm. In addition, we consider a probabilistic time deadline, which is transformed to hard deadline by Markov inequality. Moreover, compared to the local execution and the remote execution, the collaborative task execution can significantly save the energy consumption on the mobile device, prolonging its battery life. --- paper_title: Energy Delay Tradeoff in Cloud Offloading for Multi-Core Mobile Devices paper_content: Cloud offloading is considered a promising approach for both energy conservation and storage/computation enhancement for resource-limited mobile devices. In this paper, we present a Lyapunov optimization-based scheme for cloud offloading scheduling, as well as download scheduling for cloud execution output, for multiple applications running in a mobile device with a multi-core CPU. We derive an online algorithm and prove performance bounds for the proposed algorithm with respect to average power consumption and average queue length, which is indicative of delay, and reveal the fundamental tradeoff between the two optimization goals. The performance of the proposed online scheduling scheme is validated with trace-driven simulations. --- paper_title: Cloud Offloading for Multi-Radio Enabled Mobile Devices paper_content: The advent of 5G networking technologies has increased the expectations from mobile devices, in that, more sophisticated, computationally intense applications are expected to be delivered on the mobile device which are themselves getting smaller and sleeker. This predicates a need for offloading computationally intense parts of the applications to a resource strong cloud. Parallely, in the wireless networking world, the trend has shifted to multi-radio (as opposed to multi-channel) enabled communications. In this paper, we provide a comprehensive computation offloading solution that uses the multiple radio links available for associated data transfer, optimally. Our contributions include: a comprehensive model for the energy consumption from the perspective of the mobile device; the formulation of the joint optimization problem to minimize the energy consumed as well as allocating the associated data transfer optimally through the available radio links and an iterative algorithm that converges to a locally optimal solution. Simulations on an HTC phone, running a 14-component application and using the Amazon EC2 as the cloud, show that the solution obtained through the iterative algorithm consumes only 3% more energy than the optimal solution (obtained via exhaustive search). --- paper_title: Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing paper_content: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. 
We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases. --- paper_title: Joint allocation of computation and communication resources in multiuser mobile cloud computing paper_content: Mobile cloud computing is offering a very powerful storage and computational facility to enhance the capabilities of resource-constrained mobile handsets. However, full exploitation of the cloud computing capabilities can be achieved only if the allocation of radio and computational capabilities is performed jointly. In this paper, we propose a method to jointly optimize the transmit power, the number of bits per symbol and the CPU cycles assigned to each application in order to minimize the power consumption at the mobile side, under an average latency constraint dictated by the application requirements. We consider the case of a set of mobile handsets served by a single cloud and we show that the optimization leads to a one-to-one relationship between the transmit power and the percentage of CPU cycles assigned to each user. Based on our optimization, we propose then a computation scheduling technique and verify the stability of the computations queue. Then we show how these queues are affected by the degrees of freedom of the channels between mobile handsets and server. --- paper_title: Joint Energy Minimization and Resource Allocation in C-RAN with Mobile Cloud paper_content: Cloud radio access network (C-RAN) has emerged as a potential candidate of the next generation access network technology to address the increasing mobile traffic, while mobile cloud computing (MCC) offers a prospective solution to the resource-limited mobile user in executing computation intensive tasks. Taking full advantages of above two cloud-based techniques, C-RAN with MCC are presented in this paper to enhance both performance and energy efficiencies. In particular, this paper studies the joint energy minimization and resource allocation in C-RAN with MCC under the time constraints of the given tasks. We first review the energy and time model of the computation and communication. Then, we formulate the joint energy minimization into a non-convex optimization with the constraints of task executing time, transmitting power, computation capacity and fronthaul data rates. This non-convex optimization is then reformulated into an equivalent convex problem based on weighted minimum mean square error (WMMSE). The iterative algorithm is finally given to deal with the joint resource allocation in C-RAN with mobile cloud. 
Simulation results confirm that the proposed energy minimization and resource allocation solution can improve the system performance and save energy. --- paper_title: Game-theoretic Analysis of Computation Offloading for Cloudlet-based Mobile Cloud Computing paper_content: Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Numerical extensive simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance. --- paper_title: Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading paper_content: Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function , which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation. 
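One of the entries above analyzes multi-user offloading as a game and argues convergence through the finite improvement property. The sketch below is a toy version of that idea rather than the cited formulation: each user either computes locally at a fixed, hypothetical cost or offloads over a shared channel whose cost grows with the number of offloaders, and users take turns playing best responses until nobody can lower their own cost.

```python
def offload_cost(num_offloaders, base=1.0, congestion=0.6):
    """Cost of offloading when `num_offloaders` users share the wireless channel."""
    return base + congestion * (num_offloaders - 1)

def best_response_dynamics(local_costs, max_rounds=100):
    """Iterate unilateral best responses until no user wants to switch.

    local_costs[i] is user i's cost of computing locally (illustrative numbers).
    Returns the final strategy profile: True means 'offload'.
    """
    offload = [False] * len(local_costs)
    for _ in range(max_rounds):
        changed = False
        for i, c_local in enumerate(local_costs):
            others = sum(offload) - offload[i]
            # Cost user i would pay if it offloaded, given the others' choices.
            c_off = offload_cost(others + 1)
            better = c_off < c_local
            if better != offload[i]:
                offload[i] = better
                changed = True
        if not changed:  # no profitable unilateral deviation: an equilibrium
            return offload
    return offload

if __name__ == "__main__":
    print(best_response_dynamics([0.8, 1.5, 2.0, 2.5, 3.0]))
```

Because the congestion cost depends only on how many users offload, this toy game behaves like a congestion game, which is why the sequential best-response loop terminates here.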
--- paper_title: Optimal admission control policy for mobile cloud computing hotspot with cloudlet paper_content: We consider an admission control problem and adaptive resource allocation for running mobile applications on a cloudlet. We formulate an optimization problem for dynamic resource sharing of mobile users in mobile cloud computing (MCC) hotspot with a cloudlet as a semi-Markov decision process (SMDP). SMDP is transformed into a linear programming (LP) model and it is solved to obtain an optimal solution. In the optimization model, the quality of service (QoS) for different classes of mobile user is taken into account under resource constraints (i.e., bandwidth and server). The numerical results are presented to illustrate that the proposed admission control scheme can achieve a desirable performance and improve throughput of an MCC hotspot significantly. --- paper_title: Joint Subcarrier and CPU Time Allocation for Mobile Edge Computing paper_content: In mobile edge computing systems, mobile devices can offload compute-intensive tasks to a nearby cloudlet,so as to save energy and extend battery life. Unlike a fully-fledged cloud, a cloudlet is a small-scale datacenter deployed at a wireless access point, and thus is highly constrained by both radio and compute resources. We show in this paper that separately optimizing the allocation of either compute or radio resource, as most existing works did, is highly suboptimal: the congestion of compute resource leads to the waste of radio resource, and vice versa. To address this problem, we propose a joint scheduling algorithm that allocates both radio and compute resources coordinately. Specifically, we consider a cloudlet in an Orthogonal Frequency-Division Multiplexing Access (OFDMA) system with multiple mobile devices, where we study subcarrier allocation for task offloading and CPU time allocation for task execution in the cloudlet. Simulation results show that the proposed algorithm significantly outperforms per-resource optimization, accommodating more offloading requests while achieving salient energy saving. --- paper_title: Multi-User Computation Partitioning for Latency Sensitive Mobile Cloud Applications paper_content: Elastic partitioning of computations between mobile devices and cloud is an important and challenging research topic for mobile cloud computing. Existing works focus on the single-user computation partitioning, which aims to optimize the application completion time for one particular single user. These works assume that the cloud always has enough resources to execute the computations immediately when they are offloaded to the cloud. However, this assumption does not hold for large scale mobile cloud applications. In these applications, due to the competition for cloud resources among a large number of users, the offloaded computations may be executed with certain scheduling delay on the cloud. Single user partitioning that does not take into account the scheduling delay on the cloud may yield significant performance degradation. In this paper, we study, for the first time, multi-user computation partitioning problem (MCPP), which considers the partitioning of multiple users’ computations together with the scheduling of offloaded computations on the cloud resources. Instead of pursuing the minimum application completion time for every single user, we aim to achieve minimum average completion time for all the users, based on the number of provisioned resources on the cloud. 
We show that MCPP is different from and more difficult than the classical job scheduling problems. We design an offline heuristic algorithm, namely SearchAdjust, to solve MCPP. We demonstrate through benchmarks that SearchAdjust outperforms both the single user partitioning approaches and classical job scheduling approaches by 10 percent on average in terms of application delay. Based on SearchAdjust, we also design an online algorithm for MCPP that can be easily deployed in practical systems. We validate the effectiveness of our online algorithm using real world load traces. --- paper_title: Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing paper_content: Mobile cloud computing (MCC), as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under a hard constraint on application completion time remains a challenging issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing. --- paper_title: Joint scheduling of communication and computation resources in multiuser wireless application offloading paper_content: We consider a system where multiple users are connected to a small cell base station enhanced with computational capabilities. Instead of doing the computation locally at the handset, the users offload the computation of full applications or pieces of code to the small cell base station. In this scenario, this paper provides a strategy to allocate the uplink, downlink, and remote computational resources. The goal is to improve the quality of experience of the users, while achieving energy savings with respect to the case in which the applications run locally at the mobile terminals. More specifically, we focus on minimizing a cost function that depends on the latencies experienced by the users and provide an algorithm to minimize the latency experienced by the worst case user, under a target energy saving constraint per user.
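The last entry above minimizes the latency of the worst-case user by allocating radio and computational resources jointly. The sketch below captures only the computation side of that idea under strong simplifications: each user's upload time is taken as fixed, the small-cell server's CPU capacity is split among the users, and a bisection on the common latency target finds the smallest value for which the required CPU shares fit. Energy constraints and the uplink/downlink allocation of the cited paper are omitted, and all numbers are illustrative.

```python
def min_worst_case_latency(upload_s, cycles, capacity_hz, tol=1e-6):
    """Minimize the maximum (upload + remote execution) latency across users.

    upload_s[i]: time user i needs to ship its input over the radio link (s).
    cycles[i]:   CPU cycles user i's task needs at the server.
    capacity_hz: total server CPU capacity to be split among the users.
    Bisection on the common latency target T: user i needs a CPU share of
    cycles[i] / (T - upload_s[i]); T is feasible iff the shares fit in capacity_hz.
    """
    def needed_capacity(t):
        total = 0.0
        for u, c in zip(upload_s, cycles):
            if t <= u:
                return float("inf")  # no time left for computation at this target
            total += c / (t - u)
        return total

    lo, hi = max(upload_s), max(upload_s) + sum(cycles) / capacity_hz + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if needed_capacity(mid) <= capacity_hz:
            hi = mid   # feasible: try a smaller worst-case latency
        else:
            lo = mid   # infeasible: relax the target
    shares = [c / (hi - u) for u, c in zip(upload_s, cycles)]
    return hi, shares

if __name__ == "__main__":
    latency, shares = min_worst_case_latency(
        upload_s=[0.05, 0.10, 0.02], cycles=[4e8, 2e8, 6e8], capacity_hz=3e9)
    print(round(latency, 4), [round(s / 1e9, 3) for s in shares])
```

The feasibility check is monotone in the latency target, which is what makes the bisection valid in this simplified setting.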
--- paper_title: Exploring device-to-device communication for mobile cloud computing paper_content: With the popularity of smartphones and explosion of mobile applications, mobile devices become the prevalent computing platform for convenient communication and rich entertainment. Mobile cloud computing (MCC) is proposed to overcome the limited resources of mobile systems. However, when users access MCC through wireless networks, cellular network is likely to be overloaded and Wi-Fi connectivity is intermittent. Therefore, device-to-device (D2D) communication is exploited as an alternative for MCC. An important issue in exploring D2D communication for MCC is how users can detect and utilize the computing resources on other mobile devices. In this paper, we propose two mobile cloud access schemes: optimal and periodic access schemes, and study the corresponding performance of mobile cloud computing (i.e., mobile cloud size, node's serviceable time percentage, and task success rate). We find that optimally, node's serviceable time percentage and task success rate approach 1. Using more practical periodic access scheme, node's serviceable time percentage and task success rate are determined by the ratio of contact and inter-contact time between two nodes. --- paper_title: Power-Delay Tradeoff in Multi-User Mobile-Edge Computing Systems paper_content: Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an [O (1/V); O (V)] tradeoff with V as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance. --- paper_title: Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading paper_content: Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. 
Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints. --- paper_title: Energy Efficient Cooperative Computing in Mobile Wireless Sensor Networks paper_content: Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution. --- paper_title: Energy-traffic tradeoff cooperative offloading for mobile cloud computing paper_content: This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable trade- off between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN. --- paper_title: Decentralized and Optimal Resource Cooperation in Geo-Distributed Mobile Cloud Computing paper_content: Mobile cloud computing is a key enabling technology in the era of the Internet of Things. Geo-distributed mobile cloud computing (GMCC) is a new scenario that adds geography consideration in mobile cloud computing. In GMCC, users are able to access cloud resources that are geographically close to their mobile devices. 
This is expected to reduce the communication delay and the service providers’ cost compared with the traditional centralized approach. In this paper, we focus on resource sharing through the cooperation among the service providers in geo-distributed mobile cloud computing. Then, we propose two different strategies for efficient resource cooperation in geographically distributed data centers. Furthermore, we present a coalition game theoretical approach to deal with the competition and the cooperation among the service providers. Utility functions have been specifically considered to incorporate the cost related to virtual machine migration and resource utilization. Illustrative results indicate that our proposed schemes are able to efficiently utilize limited resources with quality-of-service consideration. --- paper_title: Joint offloading decision and resource allocation for mobile cloud with computing access point paper_content: We consider a mobile cloud computing system consisting of multiple users, one computing access point (CAP), and one remote cloud server. The CAP can either process the received tasks from mobile users or offload them to the cloud. We aim to jointly optimize the offloading decisions of all users and the CAP, together with communication and processing resource allocation, to minimize the overall cost of energy, computation, and the maximum delay among all users. It is shown that the problem can be formulated as a non-convex quadratically constrained quadratic program, which is NP-hard in general. We further propose an efficient solution to this problem by semidefinite relaxation and a novel randomization mapping method. Our simulation results show that the proposed algorithm gives nearly optimal performance with only a small number of randomization iterations. --- paper_title: Challenges on wireless heterogeneous networks for mobile cloud computing paper_content: Mobile cloud computing (MCC) is an appealing paradigm enabling users to enjoy the vast computation power and abundant network services ubiquitously with the support of remote cloud. However, the wireless networks and mobile devices have to face many challenges due to the limited radio resources, battery power and communications capabilities, which may significantly impede the improvement of service qualities. Heterogeneous Network (HetNet), which has multiple types of low power radio access nodes in addition to the traditional macrocell nodes in a wireless network, is widely accepted as a promising way to satisfy the unrelenting traffic demand. In this article, we first introduce the framework of HetNet for MCC, identifying the main functional blocks. Then, the current state of the art techniques for each functional block are briefly surveyed, and the challenges for supporting MCC applications in HetNet under our proposed framework are discussed. We also envision the future for MCC in HetNet before drawing the conclusion. --- paper_title: A Framework for Cooperative Resource Management in Mobile Cloud Computing paper_content: Mobile cloud computing is an emerging technology to improve the quality of mobile services. In this paper, we consider the resource (i.e., radio and computing resources) sharing problem to support mobile applications in a mobile cloud computing environment. In such an environment, mobile cloud service providers can cooperate (i.e., form a coalition) to create a resource pool to share their own resources with each other. 
As a result, the resources can be better utilized and the revenue of the mobile cloud service providers can be increased. To maximize the benefit of the mobile cloud service providers, we propose a framework for resource allocation to the mobile applications, and revenue management and cooperation formation among service providers. For resource allocation to the mobile applications, we formulate and solve optimization models to obtain the optimal number of application instances that can be supported to maximize the revenue of the service providers while meeting the resource requirements of the mobile applications. For sharing the revenue generated from the resource pool (i.e., revenue management) among the cooperative mobile cloud service providers in a coalition, we apply the concepts of core and Shapley value from cooperative game theory as a solution. Based on the revenue shares, the mobile cloud service providers can decide whether to cooperate and share the resources in the resource pool or not. Also, the provider can optimize the decision on the amount of resources to contribute to the resource pool. --- paper_title: Mobility-Induced Service Migration in Mobile Micro-Clouds paper_content: Mobile micro-cloud is an emerging technology in distributed computing, which is aimed at providing seamless computing/data access to the edge of the network when a centralized service may suffer from poor connectivity and long latency. Different from the traditional cloud, a mobile micro-cloud is smaller and deployed closer to users, typically attached to a cellular base station or wireless network access point. Due to the relatively small coverage area of each base station or access point, when a user moves across areas covered by different base stations or access points which are attached to different micro-clouds, issues of service performance and service migration become important. In this paper, we consider such migration issues. We model the general problem as a Markov decision process (MDP), and show that, in the special case where the mobile user follows a one-dimensional asymmetric random walk mobility model, the optimal policy for service migration is a threshold policy. We obtain the analytical solution for the cost resulting from arbitrary thresholds, and then propose an algorithm for finding the optimal thresholds. The proposed algorithm is more efficient than standard mechanisms for solving MDPs. --- paper_title: A game theoretic resource allocation for overall energy minimization in mobile cloud computing system paper_content: Cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. From an environmental impact perspective, any energy dissipation related to computation should be counted. To achieve energy sustainability, it is important reducing the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost of mobile cloud users because the pricing model of cloud services is pay-by-usage. In this paper, we propose a game-theoretic approach to optimize the overall energy in a mobile cloud computing system. 
We formulate the energy minimization problem as a congestion game, where each mobile device is a player and its strategy is to select one of the servers to offload the computation while minimizing the overall energy consumption. We prove that the Nash equilibrium always exists in this game and propose an efficient algorithm that could achieve the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and an approach which only tries to reduce the energy of mobile devices alone. --- paper_title: A Cooperative Scheduling Scheme of Local Cloud and Internet Cloud for Delay-Aware Mobile Cloud Computing paper_content: With the proliferation of mobile applications, Mobile Cloud Computing (MCC) has been proposed to help mobile devices save energy and improve computation performance. To further improve the quality of service (QoS) of MCC, cloud servers can be deployed locally so that the latency is decreased. However, the computational resource of the local cloud is generally limited. In this paper, we design a threshold-based policy to improve the QoS of MCC by cooperation of the local cloud and Internet cloud resources, which takes advantage of the low latency of the local cloud and the abundant computational resources of the Internet cloud simultaneously. This policy also applies a priority queue in terms of delay requirements of applications. The optimal thresholds, which depend on the traffic load, are obtained via a proposed algorithm. Numerical results show that the QoS can be greatly enhanced with the assistance of the Internet cloud when the local cloud is overloaded. Better QoS is achieved if the local cloud orders tasks according to their delay requirements, where delay-sensitive applications are executed ahead of delay-tolerant applications. Moreover, the optimal thresholds of the policy have a sound impact on the QoS of the system. --- paper_title: Energy-Optimal Mobile Cloud Computing under Stochastic Wireless Channel paper_content: This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case, sequentially reconfigure the CPU frequency or, in the latter case, dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed light on system implementation of mobile cloud computing under stochastic wireless channel.
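The threshold structure described in the energy-optimal offloading entry above can be illustrated with a small sketch that compares a convex DVFS energy model for local execution against a monomial transmission-energy model for cloud execution, and offloads whenever the radio energy is cheaper. This is not the paper's derivation; the constants kappa, lam and mono_order, and all workload figures, are illustrative assumptions.

```python
def local_energy(cycles, deadline_s, kappa=1e-27):
    """Energy (J) to run `cycles` locally within the deadline under a DVFS model
    where energy per cycle grows with the square of the clock frequency.
    kappa is an illustrative effective-capacitance constant."""
    freq = cycles / deadline_s                 # slowest constant clock that meets the deadline
    return kappa * cycles * freq ** 2          # = kappa * cycles**3 / deadline_s**2

def offload_energy(bits, deadline_s, lam=1e-19, mono_order=3.0):
    """Energy (J) to ship `bits` to the cloud within the deadline, using a
    monomial transmit-energy model E = lam * rate**mono_order * time
    (lam and mono_order stand in for the channel/energy-model parameters)."""
    rate = bits / deadline_s                   # constant-rate transmission over the whole deadline
    return lam * rate ** mono_order * deadline_s

def decide(bits, cycles, deadline_s):
    """Return ('offload' or 'local', energy of the chosen option)."""
    e_loc = local_energy(cycles, deadline_s)
    e_off = offload_energy(bits, deadline_s)
    return ("offload", e_off) if e_off < e_loc else ("local", e_loc)

if __name__ == "__main__":
    # Sweep the data-consumption rate L/T: small inputs favour offloading,
    # large inputs favour local execution, i.e. a threshold on L/T emerges.
    cycles, deadline = 8e8, 0.5
    for bits in (1e5, 1e6, 5e6, 2e7):
        choice, energy = decide(bits, cycles, deadline)
        print("L/T = %10.0f bit/s -> %-7s (%.3e J)" % (bits / deadline, choice, energy))
```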
--- paper_title: Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing paper_content: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases. --- paper_title: Modeling and Characterizing User Experience in a Cloud Server Based Mobile Gaming Approach paper_content: With the evolution of mobile devices and networks, and the growing trend of mobile Internet access, rich, multiplayer gaming using mobile devices, similar to PC-based Internet games, has tremendous potential and interest. However, the current client-server architecture for PC-based Internet games, where most of the storage and computational burden of the game lies with the client device, does not work with mobile devices, constraining mobile gaming to either downloadable, single player games, or very light non-interactive versions of the rich multi-player Internet games. In this paper, we study a cloud server based approach, termed Cloud Mobile Gaming (CMG), where the burden of executing the gaming engines is put on cloud servers, and the mobile devices just communicate the users' gaming commands to the servers. We analyze the factors affecting the quality of user experience using the CMG approach, including the game genres, video encoding factors, and the conditions of the wireless network. Based on the above analysis, we develop a model for Mobile Gaming User Experience (MGUE), and develop a prototype for real-time measurement of MGUE that can be used in real networks. We validate the MGUE model using controlled subjective testing, and then use it to characterize user experience achievable using the CMG approach in wireless networks. --- paper_title: Communicating While Computing: Distributed mobile cloud computing over 5G heterogeneous networks paper_content: Current estimates of mobile data traffic in the years to come foresee a 1,000-fold increase of mobile data traffic in 2020 with respect to 2010, or, equivalently, a doubling of mobile data traffic every year. This unprecedented growth demands a significant increase of wireless network capacity.
Even if the current evolution of fourth-generation (4G) systems and, in particular, the advancements of the long-term evolution (LTE) standardization process foresees a significant capacity improvement with respect to third-generation (3G) systems, the European Telecommunications Standards Institute (ETSI) has established a roadmap toward the fifth-generation (5G) system, with the aim of deploying a commercial system by the year 2020 [1]. The European Project named "Mobile and Wireless Communications Enablers for the 2020 Information Society" (METIS), launched in 2012, represents one of the first international and large-scale research projects on fifth generation (5G) [2]. In parallel with this unparalleled growth of data traffic, our everyday life experience shows an increasing habit to run a plethora of applications specifically devised for mobile devices (smartphones, tablets, laptops) for entertainment, health care, business, social networking, traveling, news, etc. However, the spectacular growth in wireless traffic generated by this lifestyle is not matched with a parallel improvement on mobile handsets' batteries, whose lifetime is not improving at the same pace [3]. This determines a widening gap between the energy required to run sophisticated applications and the energy available on the mobile handset. A possible way to overcome this obstacle is to enable the mobile devices, whenever possible and convenient, to offload their most energy-consuming tasks to nearby fixed servers. This strategy has been studied for a long time and is reported in the literature under different names, such as cyberforaging [4] or computation offloading [5], [6]. In recent years, a strong impulse to computation offloading has come through cloud computing (CC), which enables the users to utilize resources on demand. The resources made available by a cloud service provider are: 1) infrastructures, such as network devices, storage, servers, etc., 2) platforms, such as operating systems, offering an integrated environment for developing and testing custom applications, and 3) software, in the form of application programs. These three kinds of services are labeled, respectively, as infrastructure as a service, platform as a service, and software as a service. In particular, one of the key features of CC is virtualization, which makes it possible to run multiple operating systems and multiple applications over the same machine (or set of machines), while guaranteeing isolation and protection of the programs and their data. Through virtualization, the number of virtual machines (VMs) can scale on demand, thus improving the overall system computational efficiency. Mobile CC (MCC) is a specific case of CC where the user accesses the cloud services through a mobile handset [5]. The major limitations of today's MCC are the energy consumption associated to the radio access and the latency experienced in reaching the cloud provider through a wide area network (WAN). Mobile users located at the edge of macrocellular networks are particularly disadvantaged in terms of power consumption and, furthermore, it is very difficult to control latency over a WAN. As pointed out in [7]-[9], humans are acutely sensitive to delay and jitter: as latency increases, interactive response suffers. Since the interaction times foreseen in 5G systems, in particular in the so-called tactile Internet [10], are quite small (in the order of milliseconds), a strict latency control must be somehow incorporated in near future MCC.
Meeting this constraint requires a deep rethinking of the overall service chain, from the physical layer up to virtualization. --- paper_title: Joint Subcarrier and CPU Time Allocation for Mobile Edge Computing paper_content: In mobile edge computing systems, mobile devices can offload compute-intensive tasks to a nearby cloudlet, so as to save energy and extend battery life. Unlike a fully-fledged cloud, a cloudlet is a small-scale datacenter deployed at a wireless access point, and thus is highly constrained by both radio and compute resources. We show in this paper that separately optimizing the allocation of either compute or radio resource, as most existing works did, is highly suboptimal: the congestion of compute resource leads to the waste of radio resource, and vice versa. To address this problem, we propose a joint scheduling algorithm that allocates both radio and compute resources coordinately. Specifically, we consider a cloudlet in an Orthogonal Frequency-Division Multiplexing Access (OFDMA) system with multiple mobile devices, where we study subcarrier allocation for task offloading and CPU time allocation for task execution in the cloudlet. Simulation results show that the proposed algorithm significantly outperforms per-resource optimization, accommodating more offloading requests while achieving salient energy saving. --- paper_title: Fog and IoT: An Overview of Research Opportunities paper_content: Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT. --- paper_title: Collaborative Task Execution in Mobile Cloud Computing Under a Stochastic Wireless Channel paper_content: This paper investigates collaborative task execution between a mobile device and a cloud clone for mobile applications under a stochastic wireless channel. A mobile application is modeled as a sequence of tasks that can be executed on the mobile device or on the cloud clone. We aim to minimize the energy consumption on the mobile device while meeting a time deadline, by strategically offloading tasks to the cloud. We formulate the collaborative task execution as a constrained shortest path problem. We derive a one-climb policy by characterizing the optimal solution and then propose an enumeration algorithm for the collaborative task execution in polynomial time. Further, we apply the LARAC algorithm to solving the optimization problem approximately, which has lower complexity than the enumeration algorithm. Simulation results show that the approximate solution of the LARAC algorithm is close to the optimal solution of the enumeration algorithm. In addition, we consider a probabilistic time deadline, which is transformed to hard deadline by Markov inequality. Moreover, compared to the local execution and the remote execution, the collaborative task execution can significantly save the energy consumption on the mobile device, prolonging its battery life.
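The constrained-shortest-path formulation in the collaborative task execution entry above can be approximated, for a deterministic channel, by a small dynamic program over a discretized time budget. The sketch below is a generic illustration under that simplification (it is not the paper's one-climb or LARAC method): it ignores channel stochasticity and inter-task data migration, and the task timing/energy figures in the example are made up.

```python
import math

def min_energy_schedule(tasks, deadline, dt=0.01):
    """Pick local vs. cloud execution for a chain of tasks so that total mobile
    energy is minimized while the whole chain finishes within `deadline`.

    tasks: list of dicts with keys
        'local' -> (time_s, energy_J)  executing the task on the handset
        'cloud' -> (time_s, energy_J)  offloading it (energy = radio energy only)
    The time axis is discretized with step `dt`, turning the constrained
    shortest-path formulation into a small dynamic program.
    """
    slots = int(round(deadline / dt))
    INF = math.inf
    best = [0.0] + [INF] * slots          # best[t]: min energy to finish processed tasks in t slots
    choice = []                           # per-task back-pointers
    for task in tasks:
        new = [INF] * (slots + 1)
        back = [None] * (slots + 1)
        for where in ("local", "cloud"):
            t_need = int(math.ceil(task[where][0] / dt))
            e_need = task[where][1]
            for t in range(slots + 1 - t_need):
                if best[t] + e_need < new[t + t_need]:
                    new[t + t_need] = best[t] + e_need
                    back[t + t_need] = (where, t)
        best, choice = new, choice + [back]
    t_best = min(range(slots + 1), key=lambda t: best[t])
    if best[t_best] == INF:
        return None, INF                  # no schedule meets the deadline
    plan, t = [], t_best
    for back in reversed(choice):         # walk back-pointers from the last task to the first
        where, t = back[t]
        plan.append(where)
    return list(reversed(plan)), best[t_best]

if __name__ == "__main__":
    tasks = [  # illustrative (time_s, energy_J) pairs, not taken from the paper
        {"local": (0.40, 1.2), "cloud": (0.25, 0.3)},
        {"local": (0.10, 0.2), "cloud": (0.30, 0.4)},
        {"local": (0.50, 1.5), "cloud": (0.35, 0.5)},
    ]
    plan, energy = min_energy_schedule(tasks, deadline=1.0)
    print("plan:", plan, " mobile energy: %.2f J" % energy)
```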
--- paper_title: Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading paper_content: Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or in latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of the radio and computational resource usage exploiting the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total or no offloading is optimal, determines which is the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints. --- paper_title: Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing paper_content: Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources—the transmit precoding matrices of the MUs—and the computational resources—the CPU cycles/second assigned by the cloud to each MU—in order to minimize the overall users’ energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms. --- paper_title: Large-Scale Convex Optimization for Dense Wireless Cooperative Networks paper_content: Convex optimization is a powerful tool for resource allocation and signal processing in wireless networks. As the network density is expected to drastically increase in order to accommodate the exponentially growing mobile data traffic, performance optimization problems are entering a new era characterized by a high dimension and/or a large number of constraints, which poses significant design and computational challenges. 
In this paper, we present a novel two-stage approach to solve large-scale convex optimization problems for dense wireless cooperative networks, which can effectively detect infeasibility and enjoy modeling flexibility. In the proposed approach, the original large-scale convex problem is transformed into a standard cone programming form in the first stage via matrix stuffing, which only needs to copy the problem parameters such as channel state information (CSI) and quality-of-service (QoS) requirements to the prestored structure of the standard form. The capability of yielding infeasibility certificates and enabling parallel computing is achieved by solving the homogeneous self-dual embedding of the primal-dual pair of the standard form. In the solving stage, the operator splitting method, namely, the alternating direction method of multipliers (ADMM), is adopted to solve the large-scale homogeneous self-dual embedding. Compared with second-order methods, ADMM can solve large-scale problems in parallel with modest accuracy within a reasonable amount of time. Simulation results will demonstrate the speedup, scalability, and reliability of the proposed framework compared with the state-of-the-art modeling frameworks and solvers. --- paper_title: Energy-Optimal Mobile Cloud Computing under Stochastic Wireless Channel paper_content: This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel. --- paper_title: Stochastic geometry and random graphs for the analysis and design of wireless networks paper_content: Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. 
In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue. --- paper_title: Vehicular Fog Computing: A Viewpoint of Vehicles as the Infrastructures paper_content: With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures. --- paper_title: An Approach to Ad hoc Cloud Computing paper_content: We consider how underused computing resources within an enterprise may be harnessed to improve utilization and create an elastic computing infrastructure. Most current cloud provision involves a data center model, in which clusters of machines are dedicated to running cloud infrastructure software. We propose an additional model, the ad hoc cloud, in which infrastructure software is distributed over resources harvested from machines already in existence within an enterprise. In contrast to the data center cloud model, resource levels are not established a priori, nor are resources dedicated exclusively to the cloud while in use. 
A participating machine is not dedicated to the cloud, but has some other primary purpose such as running interactive processes for a particular user. We outline the major implementation challenges and one approach to tackling them. --- paper_title: An enhanced community-based mobility model for distributed mobile social networks paper_content: Simulation is fundamental tool for the evaluation and validation of the applications and protocols in Mobile Social Networks. However, the limited number of real user traces available and the imposed restrictions of the specific scenarios, make generalization very hard. Therefore, the need has been created for synthetic mobility models. The widely used Random Way-Point Mobility Model has been proven unable to capture characteristics of human mobility such as the social attraction. Consequently, in recent years mobility models based on social network theory, able to capture the temporal and spatial dependencies of mobile social networks, are being designed. In this paper the Enhanced Community Mobility Model (ECMM) is introduced. It follows preceding community-based approaches, that map communities to a topological space. Its main contribution is the introduction of new features, such as pause periods and group mobility encouragement, lacking for previous community-based mobility models. Additionally, ECMM enables researchers to arbitrarily select a social model as the trace generation process input, while at the same time generates traces with high conformance to that social network. A comparison between synthetic traces, generated by ECMM, other community-based models and a number of real ones is provided for validation. --- paper_title: Delay-Optimal Computation Task Scheduling for Mobile-Edge Computing Systems paper_content: Mobile-edge computing (MEC) emerges as a promising paradigm to improve the quality of computation experience for mobile devices. Nevertheless, the design of computation task scheduling policies for MEC systems inevitably encounters a challenging two-timescale stochastic optimization problem. Specifically, in the larger timescale, whether to execute a task locally at the mobile device or to offload a task to the MEC server for cloud computing should be decided, while in the smaller timescale, the transmission policy for the task input data should adapt to the channel side information. In this paper, we adopt a Markov decision process approach to handle this problem, where the computation tasks are scheduled based on the queueing state of the task buffer, the execution state of the local processing unit, as well as the state of the transmission unit. By analyzing the average delay of each task and the average power consumption at the mobile device, we formulate a power-constrained delay minimization problem, and propose an efficient one-dimensional search algorithm to find the optimal task scheduling policy. Simulation results are provided to demonstrate the capability of the proposed optimal stochastic task scheduling policy in achieving a shorter average execution delay compared to the baseline policies. --- paper_title: A hierarchical edge cloud architecture for mobile computing paper_content: The performance of mobile computing would be significantly improved by leveraging cloud computing and migrating mobile workloads for remote execution at the cloud. 
In this paper, to efficiently handle the peak load and satisfy the requirements of remote program execution, we propose to deploy cloud servers at the network edge and design the edge cloud as a tree hierarchy of geo-distributed servers, so as to efficiently utilize the cloud resources to serve the peak loads from mobile users. The hierarchical architecture of edge cloud enables aggregation of the peak loads across different tiers of cloud servers to maximize the amount of mobile workloads being served. To ensure efficient utilization of cloud resources, we further propose a workload placement algorithm that decides which edge cloud servers mobile programs are placed on and how much computational capacity is provisioned to execute each program. The performance of our proposed hierarchical edge cloud architecture on serving mobile workloads is evaluated by formal analysis, small-scale system experimentation, and large-scale trace-based simulations. --- paper_title: Fog Computing: Focusing on Mobile Users at the Edge paper_content: With smart devices, particular smartphones, becoming our everyday companions, the ubiquitous mobile Internet and computing applications pervade people daily lives. With the surge demand on high-quality mobile services at anywhere, how to address the ubiquitous user demand and accommodate the explosive growth of mobile traffics is the key issue of the next generation mobile networks. The Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing virtualized resources and engaged location-based services to the edge of the mobile networks so as to better serve mobile traffics. Therefore, Fog computing is a lubricant of the combination of cloud computing and mobile applications. In this article, we outline the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss some of the future research issues from the networking perspective. --- paper_title: Mobile-Edge Computing Come Home Connecting things in future smart homes using LTE device-to-device communications paper_content: Future 5G cellular networks are expected to play a major role in supporting the Internet of Things (IoT) due to their ubiquitous coverage, plug-and-play configuration, and embedded security. Besides connectivity, however, the IoT will need computation and storage in proximity of sensors and actuators to support timecritical and opportunistic applications. Mobile-edge computing (MEC) is currently under standardization as a novel paradigm expected to enrich future broadband communication networks [1], [2]. With MEC, traditional networks will be empowered by placing cloud-computing-like capabilities within the radio access network, in an MEC server located in close proximity to end users. Such distributed computing and storage infrastructure will enable the deployment of applications and services at the edge of the network, allowing operators to offer a virtualized environment to enterprise customers and industries to implement applications and services close to end users. --- paper_title: Challenges on wireless heterogeneous networks for mobile cloud computing paper_content: Mobile cloud computing (MCC) is an appealing paradigm enabling users to enjoy the vast computation power and abundant network services ubiquitously with the support of remote cloud. 
However, the wireless networks and mobile devices have to face many challenges due to the limited radio resources, battery power and communications capabilities, which may significantly impede the improvement of service qualities. Heterogeneous Network (HetNet), which has multiple types of low power radio access nodes in addition to the traditional macrocell nodes in a wireless network, is widely accepted as a promising way to satisfy the unrelenting traffic demand. In this article, we first introduce the framework of HetNet for MCC, identifying the main functional blocks. Then, the current state of the art techniques for each functional block are briefly surveyed, and the challenges for supporting MCC applications in HetNet under our proposed framework are discussed. We also envision the future for MCC in HetNet before drawing the conclusion. --- paper_title: Femtocells: Past, Present, and Future paper_content: Femtocells, despite their name, pose a potentially large disruption to the carefully planned cellular networks that now connect a majority of the planet's citizens to the Internet and with each other. Femtocells - which by the end of 2010 already outnumbered traditional base stations and at the time of publication are being deployed at a rate of about five million a year - both enhance and interfere with this network in ways that are not yet well understood. Will femtocells be crucial for offloading data and video from the creaking traditional network? Or will femtocells prove more trouble than they are worth, undermining decades of careful base station deployment with unpredictable interference while delivering only limited gains? Or possibly neither: are femtocells just a "flash in the pan"; an exciting but short-lived stage of network evolution that will be rendered obsolete by improved WiFi offloading, new backhaul regulations and/or pricing, or other unforeseen technological developments? This tutorial article overviews the history of femtocells, demystifies their key aspects, and provides a preview of the next few years, which the authors believe will see a rapid acceleration towards small cell technology. In the course of the article, we also position and introduce the articles that headline this special issue. --- paper_title: Success Probability and Area Spectral Efficiency in Multiuser MIMO HetNets paper_content: We derive a general and closed-form result for the success probability in downlink multiple-antenna (MIMO) heterogeneous cellular networks (HetNets), utilizing a novel Toeplitz matrix representation. This main result, which is equivalently the signal-to-interference ratio (SIR) distribution, includes multiuser MIMO, single-user MIMO and per-tier biasing for $K$ different tiers of randomly placed base stations (BSs), assuming zero-forcing precoding and perfect channel state information. The large SIR limit of this result admits a simple closed form that is accurate at moderate SIRs, e.g., above 5 dB. These results reveal that the SIR-invariance property of SISO HetNets does not hold for MIMO HetNets; instead the success probability may decrease as the network density increases. We prove that the maximum success probability is achieved by activating only one tier of BSs, while the maximum area spectral efficiency (ASE) is achieved by activating all the BSs. This reveals a unique tradeoff between the ASE and link reliability in multiuser MIMO HetNets. 
To achieve the maximum ASE while guaranteeing a certain link reliability, we develop efficient algorithms to find the optimal BS densities. It is shown that as the link reliability requirement increases, more BSs and more tiers should be deactivated. --- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: Modeling and Characterizing User Experience in a Cloud Server Based Mobile Gaming Approach paper_content: With the evolution of mobile devices and networks, and the growing trend of mobile Internet access, rich, multiplayer gaming using mobile devices, similar to PC-based Internet games, has tremendous potential and interest. However, the current client-server architecture for PC-based Internet games, where most of the storage and computational burden of the game lies with the client device, does not work with mobile devices, constraining mobile gaming to either downloadable, single player games, or very light non-interactive versions of the rich multi-player Internet games. In this paper, we study a cloud server based approach, termed Cloud Mobile Gaming (CMG), where the burden of executing the gaming engines is put on cloud servers, and the mobile devices just communicate the users' gaming commands to the servers. We analyze the factors affecting the quality of user experience using the CMG approach, including the game genres, video encoding factors, and the conditions of the wireless network. Based on the above analysis, we develop a model for Mobile Gaming User Experience (MGUE), and develop a prototype for real-time measurement of MGUE that can be used in real networks. We validate MGUE model using controlled subjective testing, and then use it to characterize user experience achievable using the CMG approach in wireless networks. --- paper_title: EnaCloud: An Energy-Saving Application Live Placement Approach for Cloud Computing Environments paper_content: With the increasing prevalence of large scale cloud computing environments, how to place requested applications into available computing servers regarding to energy consumption has become an essential research problem, but existing application placement approaches are still not effective for live applications with dynamic characters. In this paper, we proposed a novel approach named EnaCloud, which enables application live placement dynamically with consideration of energy efficiency in a cloud platform. In EnaCloud, we use a Virtual Machine to encapsulate the application, which supports applications scheduling and live migration to minimize the number of running machines, so as to save energy. Specially, the application placement is abstracted as a bin packing problem, and an energy-aware heuristic algorithm is proposed to get an appropriate solution. 
In addition, an over-provision approach is presented to deal with the varying resource demands of applications. Our approach has been successfully implemented as useful components and fundamental services in the iVIC platform. Finally, we evaluate our approach by comprehensive experiments based on virtual machine monitor Xen and the results show that it is feasible. --- paper_title: A multi-objective ant colony system algorithm for virtual machine placement in cloud computing paper_content: Virtual machine placement is a process of mapping virtual machines to physical machines. The optimal placement is important for improving power efficiency and resource utilization in a cloud computing environment. In this paper, we propose a multi-objective ant colony system algorithm for the virtual machine placement problem. The goal is to efficiently obtain a set of non-dominated solutions (the Pareto set) that simultaneously minimize total resource wastage and power consumption. The proposed algorithm is tested with some instances from the literature. Its solution performance is compared to that of an existing multi-objective genetic algorithm and two single-objective algorithms, a well-known bin-packing algorithm and a max-min ant system (MMAS) algorithm. The results show that the proposed algorithm is more efficient and effective than the methods we compared it to. --- paper_title: Collaborative multi-bitrate video caching and processing in Mobile-Edge Computing networks paper_content: Recently, Mobile-Edge Computing (MEC) has arisen as an emerging paradigm that extends cloud-computing capabilities to the edge of the Radio Access Network (RAN) by deploying MEC servers right at the Base Stations (BSs). In this paper, we envision a collaborative joint caching and processing strategy for on-demand video streaming in MEC networks. Our design aims at enhancing the widely used Adaptive BitRate (ABR) streaming technology, where multiple bitrate versions of a video can be delivered so as to adapt to the heterogeneity of user capabilities and the varying of network connection bandwidth. The proposed strategy faces two main challenges: (i) not only the videos but their appropriate bitrate versions have to be effectively selected to store in the caches, and (ii) the transcoding relationships among different versions need to be taken into account to effectively utilize the processing capacity at the MEC servers. To this end, we formulate the collaborative joint caching and processing problem as an Integer Linear Program (ILP) that minimizes the backhaul network cost, subject to the cache storage and processing capacity constraints. Due to the NP-completeness of the problem and the impractical overheads of the existing offline approaches, we propose a novel online algorithm that makes cache placement and video scheduling decisions upon the arrival of each new request. Extensive simulations results demonstrate the significant performance improvement of the proposed strategy over traditional approaches in terms of cache hit ratio increase, backhaul traffic and initial access delay reduction. --- paper_title: Cache-enabled small cell networks: modeling and tradeoffs paper_content: We consider the problem of caching in next generation mobile cellular networks where small base stations (SBSs) are able to store their users' content and serve them accordingly. 
The SBSs are stochastically distributed over the plane and serve their users either from the local cache or internet via limited backhaul, depending on the availability of requested content. We model and characterize the outage probability and average content delivery rate as a function of the signal-to-interference-ratio (SINR), base station intensity, target file bitrate, storage size and file popularity. Our results provide key insights into the problem of cache-enabled small cell networks. --- paper_title: Analysis and Optimization of Caching and Multicasting in Large-Scale Cache-Enabled Information-Centric Networks paper_content: Caching and multicasting at base stations are two promising approaches to support massive content delivery over wireless networks. However, existing analysis and designs do not fully explore and exploit the potential advantages of the two approaches. In this paper, we jointly consider caching and multicasting to maximize the successful transmission probability in large-scale information-centric networks. We propose a random caching and multicasting scheme with a design parameter. Utilizing tools from stochastic geometry, we derive a tractable expression and a closed-form expression for the successful transmission probability in the general and high signal-to-noise ratio (SNR) regions, respectively. Then, using optimization techniques, we derive a simple asymptotically optimal design in the high SNR region, which provides important design insights. Finally, by numerical simulations, we show that the asymptotically optimal design also achieves a significant performance gain over some baseline schemes in the general SNR region, and hence is applicable and effective in practical cache-enabled information-centric networks. --- paper_title: Scheduling strategies for optimal service deployment across multiple clouds paper_content: The current cloud market, constituted by many different public cloud providers, is highly fragmented in terms of interfaces, pricing schemes, virtual machine offers and value-added features. In this context, a cloud broker can provide intermediation and aggregation capabilities to enable users to deploy their virtual infrastructures across multiple clouds. However, most current cloud brokers do not provide advanced service management capabilities to make automatic decisions, based on optimization algorithms, about how to select the optimal cloud to deploy a service, how to distribute optimally the different components of a service among different clouds, or even when to move a given service component from a cloud to another to satisfy some optimization criteria. In this paper we present a modular broker architecture that can work with different scheduling strategies for optimal deployment of virtual services across multiple clouds, based on different optimization criteria (e.g. cost optimization or performance optimization), different user constraints (e.g. budget, performance, instance types, placement, reallocation or load balancing constraints), and different environmental conditions (i.e., static vs. dynamic conditions, regarding instance prices, instance types, service workload, etc.). To probe the benefits of this broker, we analyse the deployment of different clustered services (an HPC cluster and a Web server cluster) on a multi-cloud environment under different conditions, constraints, and optimization criteria. 
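As a toy illustration of the cost-driven placement performed by the multi-cloud broker described in the scheduling-strategies entry above, the sketch below spreads identical VM instances across providers at minimum hourly cost under a simple per-provider share cap. The provider names, prices and the max_share constraint are hypothetical stand-ins for the broker's richer set of budget, performance and placement constraints, not the paper's actual scheduler.

```python
def cheapest_deployment(n_instances, clouds, max_share=0.6):
    """Spread `n_instances` identical VMs across clouds at minimum hourly cost,
    while no single provider hosts more than `max_share` of the cluster
    (a stand-in for the broker's placement/load-balancing constraints).

    clouds: dict mapping provider name -> price per instance-hour.
    Because cost is linear and the only constraint is a per-provider cap,
    filling providers in increasing price order is optimal.
    """
    cap = int(max_share * n_instances)
    placement, remaining = {}, n_instances
    for name, price in sorted(clouds.items(), key=lambda kv: kv[1]):
        take = min(cap, remaining)
        if take:
            placement[name] = take
            remaining -= take
    if remaining:
        raise ValueError("constraints leave %d instances unplaced" % remaining)
    cost = sum(clouds[name] * n for name, n in placement.items())
    return placement, cost

if __name__ == "__main__":
    prices = {"provider_a": 0.085, "provider_b": 0.095, "provider_c": 0.120}  # $/instance-hour, made up
    placement, cost = cheapest_deployment(10, prices)
    print(placement, "-> %.2f $/h" % cost)
```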
--- paper_title: Cache in the air: exploiting content caching and delivery techniques for 5G systems paper_content: The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated. In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the various advantages of the utilization of caching content in 5G mobile networks. Furthermore, we conclude the article by exploring new relevant opportunities and challenges. --- paper_title: Cost Aware Service Placement and Load Dispatching in Mobile Cloud Systems paper_content: With proliferation of smart phones and an increasing number of services provisioned by clouds, it is commonplace for users to request cloud services from their mobile devices. Accessing services directly from the Internet data centers inherently incurs high latency due to long RTTs and possible congestions in WAN. To lower the latency, some researchers propose to ‘cache’ the services at edge clouds or smart routers in the access network which are closer to end users than the Internet cloud. Although ‘caching’ is a promising technique, placing the services and dispatching users’ requests in a way that can minimize the users’ access delay and service providers’ cost has not been addressed so far. In this paper, we study the joint optimization of service placement and load dispatching in the mobile cloud systems. We show this problem is unique to both the traditional caching problem in mobile networks and the content distribution problem in content distribution networks. We develop a set of efficient algorithms for service providers to achieve various trade-offs among the average latency of mobile users’ requests, and the cost of service providers. Our solution utilizes user's mobility pattern and services access pattern to predict the distribution of user's future requests, and then adapt the service placement and load dispatching online based on the prediction. We conduct extensive trace driven simulations. Results show our solution not only achieves much lower latency than directly accessing service from remote clouds, but also outperforms other classical benchmark algorithms in term of the latency, cost and algorithm running time. --- paper_title: FemtoCaching: Wireless video content delivery through distributed caching helpers paper_content: We suggest a novel approach to handle the ongoing explosive increase in the demand for video content in wireless/mobile devices. 
We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capacity. These helpers form a wireless distributed caching network that assists the macro base station by handling requests of popular files that have been cached. Due to the short distances between helpers and requesting devices, the transmission of cached files can be done very efficiently. --- paper_title: User Mobility Model Based Computation Offloading Decision for Mobile Cloud paper_content: The last decade has seen a rapid growth in the use of mobile devices all over the world. With an increasing use of mobile devices, mobile applications are becoming more diverse and complex, demanding more computational resources. However, mobile devices are typically resource-limited (i.e., a slower-speed CPU, a smaller memory) due to a variety of reasons. Mobile users will be capable of running applications with heavy computation if they can offload some of their computations to other places, such as a desktop or server machines. However, mobile users are typically subject to dynamically changing network environments, particularly, due to user mobility. This makes it hard to choose good offloading decisions in mobile environments. In general, users’ mobility can provide some hints for upcoming changes to network environments. Motivated by this, we propose a mobility model of each individual user taking advantage of the regularity of his/her mobility pattern, and develop an offloading decision-making technique based on the mobility model. We evaluate our technique through trace-based simulation with real log data traces from 14 Android users. Our evaluation results show that the proposed technique can help boost the performance of mobile devices in terms of response time and energy consumption, when users are highly mobile. --- paper_title: Energy-Efficient Fault-Tolerant Data Storage and Processing in Mobile Cloud paper_content: Despite the advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and image storage and processing or map-reduce type) still remain off bounds since they require large computation and storage capabilities. Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices. For mobile devices deployed in dynamic networks (i.e., with frequent topology changes because of node failure/unavailability and mobility as in a mobile cloud), however, challenges of reliability and energy efficiency remain largely unaddressed. To the best of our knowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in mobile cloud, an approach we call k-out-of-n computing . In our solution, mobile devices successfully retrieve or process data, in the most energy-efficient way, as long as k out of n remote servers are accessible. Through a real system implementation we prove the feasibility of our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency performance of our framework in larger scale networks. --- paper_title: Mobility-Assisted Opportunistic Computation Offloading paper_content: With the explosion in personal mobile devices, computation offloading through opportunistic networks of nearby devices to support sophisticated mobile applications with limited resources of processing and energy is gaining increased attention. 
In this letter, we investigate the problem of mobility-assisted opportunistic computation offloading by exploiting the contact patterns regulated by these devices' mobility. We first formulate the optimal opportunistic offloading problem with the aid of the statistical properties of contact rates and then utilize the method of convex optimization to determine the amounts of computation to be offloaded to other devices. Extensive simulations under realistic human mobility demonstrate the efficiency of our scheme, in terms of achieving a higher computation success rate than the compared benchmarks under a wide variety of settings. --- paper_title: An overview of vertical handover decision strategies in heterogeneous wireless networks paper_content: In the next generation of wireless networks, mobile users can move between heterogeneous networks, using terminals with multiple access interfaces and non-real-time or real-time services. The most important issue in such an environment is the Always Best Connected (ABC) concept, allowing the best connectivity to applications anywhere at any time. To meet the ABC requirement, various vertical handover decision strategies have been proposed in the literature recently, using advanced tools and proven concepts. In this paper, we give an overview of the most interesting and recent strategies. We classify them into five categories for which we present their main characteristics. We also compare each one with the others in order to introduce our vertical handover decision approach. --- paper_title: Offloading in Mobile Cloudlet Systems with Intermittent Connectivity paper_content: The emergence of mobile cloud computing enables mobile users to offload applications to nearby mobile resource-rich devices (i.e., cloudlets) to reduce energy consumption and improve performance. However, due to mobility and cloudlet capacity, the connections between a mobile user and mobile cloudlets can be intermittent. As a result, offloading actions taken by the mobile user may fail (e.g., the user moves out of communication range of cloudlets). In this paper, we develop an optimal offloading algorithm for the mobile user in such an intermittently connected cloudlet system, considering the users’ local load and availability of cloudlets. We examine users’ mobility patterns and cloudlets’ admission control, and derive the probability of successful offloading actions analytically. We formulate and solve a Markov decision process (MDP) model to obtain an optimal policy for the mobile user with the objective to minimize the computation and offloading costs. Furthermore, we prove that the optimal policy of the MDP has a threshold structure. Subsequently, we introduce a fast algorithm for energy-constrained users to make offloading decisions. The numerical results show that the analytical form of the successful offloading probability is a good estimation in various mobility cases. Furthermore, the proposed MDP offloading algorithm for mobile users outperforms conventional baseline schemes. --- paper_title: Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading paper_content: Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles.
In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation. --- paper_title: MuSIC: Mobility-Aware Optimal Service Allocation in Mobile Cloud Computing paper_content: This paper exploits the observation that using tiered clouds, i.e., clouds at multiple levels (local and public), can increase the performance and scalability of mobile applications. User mobility introduces new complexities in enabling an optimal decomposition of tasks that can execute cooperatively on mobile clients and the tiered cloud architecture while considering multiple QoS goals such as application delay, device power consumption and user cost/price. In this paper, we propose a novel framework to model mobile applications as location-time workflows (LTWs) of tasks, where user mobility patterns are translated into mobile service usage patterns. We show that an optimal mapping of LTWs to tiered mobile cloud resources is an NP-hard problem. We propose an efficient heuristic algorithm called MuSIC that is able to perform well (78% of optimal, 30% better than simple strategies), and scale well to a large number of users while ensuring high application QoS. We evaluate MuSIC and the 2-tier mobile cloud approach via implementation (on real world clouds) and extensive simulations using rich mobile applications like intensive signal processing and video streaming applications. Our experimental and simulation results indicate that MuSIC supports scalable operation (100+ concurrent users executing complex workflows) while improving QoS. We observe about 25% lower delays and power (under fixed price constraints) and about 35% decrease in price (considering fixed delay) in comparison to only using the public cloud. Our studies also show that MuSIC performs quite well under different mobility patterns, e.g., random waypoint and Manhattan models, and is resilient to errors/uncertainty in prediction of mobile user location-time workflows.
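As a side illustration of the threshold-based offloading structure summarized in the mobile-edge computation offloading entry above, the sketch below ranks users by a priority metric and lets those above a threshold offload fully while the rest offload minimally. The specific metric used here (channel gain divided by local computing energy per bit) is a hypothetical stand-in, not the closed-form priority function derived in the cited work.

```python
# Illustrative threshold-based offloading policy (hypothetical priority metric).

from dataclasses import dataclass

@dataclass
class User:
    name: str
    channel_gain: float          # effective uplink channel gain
    local_energy_per_bit: float  # energy cost of computing one bit locally

def offloading_priority(user: User) -> float:
    """Higher priority means offloading is more attractive for this user."""
    return user.channel_gain / user.local_energy_per_bit  # hypothetical metric

def offloading_decisions(users: list[User], threshold: float) -> dict[str, str]:
    """Users above the threshold offload fully; the rest offload minimally."""
    return {
        u.name: ("full offloading" if offloading_priority(u) >= threshold
                 else "minimum offloading")
        for u in users
    }

if __name__ == "__main__":
    users = [User("A", 0.9, 1.0), User("B", 0.2, 0.5), User("C", 0.6, 0.1)]
    print(offloading_decisions(users, threshold=1.0))
```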
--- paper_title: Mobility management challenges in 3GPP heterogeneous networks paper_content: In this article we provide a comprehensive review of the handover process in heterogeneous networks (HetNets), and identify technical challenges in mobility management. In this line, we evaluate the mobility performance of HetNets with the 3rd Generation Partnership Project (3GPP) Release-10 range expansion and enhanced inter-cell interference coordination (eICIC) features such as almost blank subframes (ABSFs). Simulation assumptions and parameters of a related study item in 3GPP are used to investigate the impact of various handover parameters on mobility performance. In addition, we propose a mobility-based inter-cell interference coordination (MB-ICIC) scheme, in which picocells configure coordinated resources so that macrocells can schedule their high-mobility UEs in these resources without co-channel interference from picocells. MB-ICIC also benefits low-mobility UEs, since handover parameters can now be more flexibly optimized. Simulations using the 3GPP simulation assumptions are performed to evaluate the performance of MB-ICIC under several scenarios. --- paper_title: Efficient mobility and traffic management for delay tolerant cloud data in 5G networks paper_content: The explosive growth of the demand for higher data rates in mobile networks has been mainly driven by the increasing use of cloud-based applications on smartphones. This has led the industry to investigate new radio access technologies to be deployed as part of 5G networks, while providing mechanisms to manage user mobility and traffic in a more efficient manner. In this paper, we consider a mobility and traffic management mechanism that proposes a close interaction between the cloud data servers and the radio access network to enable efficient network operation. Such a management mechanism is enabled by utilizing the application-dependent delay tolerance properties of the cloud data, with the delay values conveyed to the radio access network and UE to manage the service requests for the cloud data. The mechanism was evaluated using an LTE-Advanced heterogeneous network scenario and a 5G dense-urban information society scenario from the EU FP7 METIS project, and relative gains in terms of packet delays and throughput values are presented. The results indicate significant gains using the proposed management mechanism as compared to the reference case where no such enhancements are used. --- paper_title: Exploring device-to-device communication for mobile cloud computing paper_content: With the popularity of smartphones and explosion of mobile applications, mobile devices have become the prevalent computing platform for convenient communication and rich entertainment. Mobile cloud computing (MCC) is proposed to overcome the limited resources of mobile systems. However, when users access MCC through wireless networks, the cellular network is likely to be overloaded and Wi-Fi connectivity is intermittent. Therefore, device-to-device (D2D) communication is exploited as an alternative for MCC. An important issue in exploring D2D communication for MCC is how users can detect and utilize the computing resources on other mobile devices. In this paper, we propose two mobile cloud access schemes: optimal and periodic access schemes, and study the corresponding performance of mobile cloud computing (i.e., mobile cloud size, node's serviceable time percentage, and task success rate).
We find that, with the optimal access scheme, a node's serviceable time percentage and task success rate approach 1. Using the more practical periodic access scheme, a node's serviceable time percentage and task success rate are determined by the ratio of contact time to inter-contact time between two nodes. --- paper_title: Reservation-based resource scheduling and code partition in mobile cloud computing paper_content: The mobile cloudlet has shown great potential to overcome the computational limitations of mobile devices. The overall performance of mobile applications could be improved when the computational gain from the cloudlet outweighs the transmission cost. Yet, resource scheduling on the cloudlet is a challenging problem when there are multiple mobile offloading requests at the same time. Since the offloading requests would compete with each other and each offloading decision would impact the others, finding the optimal execution layout is quite difficult. In this paper, we propose a joint resource scheduling and code partition scheme that can efficiently allocate cloudlet resources to multiple mobile users. Our goal is to maximize the cloudlet throughput as well as to reduce the mobile application's execution time. Unlike the current approach which considers an infinite resource limitation on the cloud, our algorithm dynamically allocates cloudlet resources based on the current usage. Both real world experiments and trace-driven simulations show that our solution can improve the cloudlet throughput by 20%-35% while providing speedup to mobile devices. --- paper_title: Optimization of Resource Provisioning Cost in Cloud Computing paper_content: In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of utilizing computing resources provisioned by the reservation plan is lower than that of the on-demand plan, since the cloud consumer has to pay the provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to achieve due to the uncertainty of the consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for use in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarterly plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered, including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, the cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments. --- paper_title: Mobility-Aware Caching for Content-Centric Wireless Networks: Modeling and Methodology paper_content: As mobile services are shifting from connection-centric communications to content-centric communications, content-centric wireless networking emerges as a promising paradigm to evolve the current network architecture.
Caching popular content at the wireless edge, including base stations and user terminals, provides an effective approach to alleviate the heavy burden on backhaul links, as well as lower delays and deployment costs. In contrast to wired networks, a unique characteristic of content-centric wireless networks (CCWNs) is the mobility of mobile users. While it has rarely been considered by existing works on caching design, user mobility contains various helpful side information that can be exploited to improve caching efficiency at both BSs and user terminals. In this article, we present a general framework for mobility-aware caching in CCWNs. Key properties of user mobility patterns that are useful for content caching are first identified, and then different design methodologies for mobility-aware caching are proposed. Moreover, two design examples are provided to illustrate the proposed framework in detail, and interesting future research directions are identified. --- paper_title: Temperature Aware Workload Management in Geo-distributed Datacenters. paper_content: Lately, for geo-distributed data centers, a workload management approach that routes user requests to locations with cheaper and cleaner electricity has been developed to reduce energy consumption and cost. We consider two key aspects that have not been explored in this approach. First, through empirical studies, we find that the energy efficiency of cooling systems depends critically on the ambient temperature, which exhibits significant geographical diversity. Temperature diversity can be used to reduce the cooling energy overhead. Second, energy consumption comes from not only interactive workloads driven by user requests, but also delay tolerant batch workloads that run at the back-end. The elastic nature of batch workloads can be exploited to further reduce the energy cost. In this paper, we propose to make workload management temperature aware . We formulate the problem as a joint optimization of request routing for interactive workloads and capacity allocation for batch workloads. We develop a distributed algorithm based on an $m$ -block alternating direction method of multipliers (ADMM) algorithm that extends the classical two-block algorithm. We prove the convergence and rate of convergence results under general assumptions. Through trace-driven simulations, we find that our approach consistently provides 15-20 percent cooling energy reduction, and 5-20 percent overall cost reduction over existing methods. --- paper_title: Energy Efficient Resource Management in Virtualized Cloud Data Centers paper_content: Rapid growth of the demand for computational power by scientific, business and web-applications has led to the creation of large-scale data centers consuming enormous amounts of electrical power. We propose an energy efficient resource management system for virtualized Cloud data centers that reduces operational costs and provides required Quality of Service (QoS). Energy savings are achieved by continuous consolidation of VMs according to current utilization of resources, virtual network topologies established between VMs and thermal state of computing nodes. We present first results of simulation-driven evaluation of heuristics for dynamic reallocation of VMs using live migration according to current requirements for CPU performance. The results show that the proposed technique brings substantial energy savings, while ensuring reliable QoS. 
This justifies further investigation and development of the proposed resource management system. --- paper_title: Optimal Power Allocation for Energy Harvesting and Power Grid Coexisting Wireless Communication Systems paper_content: This paper considers power allocation for a single-link wireless communication system with joint energy harvesting and grid power supply. We formulate the problem as minimizing the grid power consumption with random energy and data arrivals in a fading channel, and analyze the structure of the optimal power allocation policy in some special cases. For the case where all the packets have arrived before transmission, it is a dual problem of throughput maximization, and the optimal solution is found by the two-stage water filling (WF) policy, which allocates the harvested energy in the first stage, and then allocates the power grid energy in the second stage. For the random data arrival case, we first assume grid energy or harvested energy supply only, and then combine the results to obtain the optimal structure of the coexisting system. Specifically, the reverse multi-stage WF policy is proposed to achieve the optimal power allocation when the battery capacity is infinite. Finally, some heuristic online schemes are proposed, whose performance is evaluated by numerical simulations. --- paper_title: Green Cloudlet Network: A Distributed Green Mobile Cloud Network paper_content: This article introduces a green cloudlet network (GCN) architecture in the context of mobile cloud computing. The proposed architecture is aimed at providing seamless connectivity and low end-to-end delay between a UE and its Avatar (its software clone) in the cloudlets to facilitate the application workload offloading process. Furthermore, an SDN-based core network is introduced in the GCN architecture by replacing the traditional Evolved Packet Core in the LTE network in order to provide efficient communications connections between different end points. The Cloudlet Network File System (CNFS) is designed based on the proposed architecture in order to protect the Avatars' dataset against hardware failures and improve Avatars' performance in terms of data access latency. Moreover, a green energy supplement is proposed in the architecture in order to reduce the extra OPEX and CO2 footprint incurred by running the distributed cloudlets. Due to the temporal and spatial dynamics of both the green energy generation and energy demands of green cloudlet systems (GCSs), designing an optimal green energy management strategy based on the characteristics of green energy generation and the energy demands of eNBs and cloudlets to minimize the on-grid energy consumption is critical to the cloudlet provider. --- paper_title: The Case for Energy-Proportional Computing paper_content: Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems. --- paper_title: Online Learning for Offloading and Autoscaling in Renewable-Powered Mobile Edge Computing paper_content: Mobile edge computing (a.k.a. fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks.
Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in renewable-powered mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run-time performance when compared to standard reinforcement learning algorithms such as Q-learning. --- paper_title: Dynamic virtual machine management via approximate Markov decision process paper_content: Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms. --- paper_title: Energy Harvesting Small Cell Networks: Feasibility, Deployment and Operation paper_content: Small cell networks have attracted a great deal of attention in recent years due to their potential to meet the exponential growth of mobile data traffic, and the increasing demand for better quality of service and user experience in mobile applications. Nevertheless, wide deployment of small cell networks has not happened yet because of the complexity in the network planning and optimization, as well as the high expenditure involved in deployment and operation. In particular, it is difficult to provide grid power supply to all the small cell base stations in a cost-effective way. 
Moreover, a dense deployment of small cell base stations, which is needed to meet the capacity and coverage of next generation wireless networks, will increase operators' electricity bills and lead to significant carbon emission. Thus, it is crucial to exploit off-grid and green energy sources to power small cell networks, for which energy harvesting technology is a viable solution. In this article, we conduct a comprehensive study of energy harvesting small cell networks, and investigate important aspects, including a feasibility analysis, network deployment, and network operation issues. The advantages as well as unique challenges of energy harvesting small cell networks are highlighted, together with potential solutions and effective design methodologies. --- paper_title: Enabling Wireless Power Transfer in Cellular Networks: Architecture, Modeling and Deployment paper_content: Microwave power transfer (MPT) delivers energy wirelessly from stations called power beacons (PBs) to mobile devices by microwave radiation. This provides mobiles practically infinite battery lives and eliminates the need of power cords and chargers. To enable MPT for mobile charging, this paper proposes a new network architecture that overlays an uplink cellular network with randomly deployed PBs for powering mobiles, called a hybrid network. The deployment of the hybrid network under an outage constraint on data links is investigated based on a stochastic-geometry model where single-antenna base stations (BSs) and PBs form independent homogeneous Poisson point processes (PPPs) and single-antenna mobiles are uniformly distributed in Voronoi cells generated by BSs. In this model, mobiles and PBs fix their transmission power at p and q, respectively; a PB either radiates isotropically, called isotropic MPT, or directs energy towards target mobiles by beamforming, called directed MPT. The model is applied to derive the tradeoffs between the network parameters including p, q, and the BS/PB densities under the outage constraint. First, consider the deployment of the cellular network. It is proved that the outage constraint is satisfied so long as the product the BS density decreases with increasing p following a power law where the exponent is proportional to the path-loss exponent. Next, consider the deployment of the hybrid network assuming infinite energy storage at mobiles. It is shown that for isotropic MPT, the product between q, the PB density, and the BS density raised to a power proportional to the path-loss exponent has to be above a given threshold so that PBs are sufficiently dense; for directed MPT, a similar result is obtained with the aforementioned product increased by the array gain. Last, similar results are derived for the case of mobiles having small energy storage. --- paper_title: Energy Harvesting Sensor Nodes: Survey and Implications paper_content: Sensor networks with battery-powered nodes can seldom simultaneously meet the design goals of lifetime, cost, sensing reliability and sensing and transmission coverage. Energy-harvesting, converting ambient energy to electrical energy, has emerged as an alternative to power sensor nodes. By exploiting recharge opportunities and tuning performance parameters based on current and expected energy levels, energy harvesting sensor nodes have the potential to address the conflicting design goals of lifetime and performance. 
This paper surveys various aspects of energy harvesting sensor systems: architecture, energy sources and storage technologies, and examples of harvesting-based nodes and applications. The study also discusses the implications of recharge opportunities for sensor node operation and the design of sensor network solutions. --- paper_title: Energy Efficient Resource Allocation for Wireless Power Transfer Enabled Collaborative Mobile Clouds paper_content: In order to fully enjoy high rate broadband multimedia services, prolonging the battery lifetime of user equipment is critical for mobile users, especially for smartphone users. In this paper, the problem of distributing cellular data via a wireless power transfer enabled collaborative mobile cloud (WeCMC) in an energy efficient manner is investigated. WeCMC is formed by a group of users who have both information decoding and energy harvesting functionalities and are interested in cooperating in downloading content from the operators. Through device-to-device communications, the users inside WeCMC are able to cooperate during the downloading procedure and offload data from the base station to other WeCMC members. When considering multi-input multi-output wireless channels and wireless power transfer, an efficient algorithm is presented to optimally schedule the data offloading and radio resources in order to maximize energy efficiency as well as fairness among mobile users. Specifically, the proposed framework takes energy minimization and quality of service requirements into consideration. Performance evaluations demonstrate that a significant energy saving gain can be achieved by the proposed schemes. --- paper_title: Grid Energy Consumption and QoS Tradeoff in Hybrid Energy Supply Wireless Networks paper_content: Hybrid energy supply (HES) wireless networks have recently emerged as a new paradigm to enable green networks, which are powered by both the electric grid and harvested renewable energy. In this paper, we will investigate two critical but conflicting design objectives of HES networks, i.e., the grid energy consumption and quality of service (QoS). Minimizing grid energy consumption by utilizing the harvested energy will make the network environmentally friendly, but the achievable QoS may be degraded due to the intermittent nature of energy harvesting. To investigate the tradeoff between these two aspects, we introduce the total service cost as the performance metric, which is the weighted sum of the grid energy cost and the QoS degradation cost. Base station assignment and power control is adopted as the main strategy to minimize the total service cost, while both cases with non-causal and causal side information are considered. With non-causal side information, a Greedy Assignment algorithm with low complexity and near-optimal performance is proposed. With causal side information, the design problem is formulated as a discrete Markov decision problem. Interesting solution structures are derived, which shall help to develop an efficient monotone backward induction algorithm. To further reduce complexity, a Look-Ahead policy and a Threshold-based Heuristic policy are also proposed. Simulation results shall validate the effectiveness of the proposed algorithms and demonstrate the unique grid energy consumption and QoS tradeoff in HES networks.
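To illustrate the weighted-cost idea behind the grid-energy/QoS tradeoff summarized in the preceding entry, the toy sketch below defines the total service cost as a weighted sum of grid energy cost and QoS degradation cost and assigns users greedily to the base station with the smallest incremental cost. The cost model, station parameters, and weights are hypothetical; this is not the Greedy Assignment algorithm of the cited work.

```python
# Toy sketch: weighted grid-energy/QoS cost with a greedy base-station assignment.

def service_cost(grid_energy: float, qos_degradation: float, w: float) -> float:
    """Total service cost = w * grid energy cost + (1 - w) * QoS degradation cost."""
    return w * grid_energy + (1.0 - w) * qos_degradation

def greedy_assignment(users, stations, w=0.5):
    """users: list of user ids; stations: dict mapping station id ->
    (harvested_energy_budget, per_user_energy, per_user_qos_penalty)."""
    assignment = {}
    harvested_left = {s: cfg[0] for s, cfg in stations.items()}
    for u in users:
        best_station, best_cost = None, float("inf")
        for s, (_, energy, qos_penalty) in stations.items():
            # Harvested energy is free; only the shortfall is drawn from the grid.
            grid_energy = max(0.0, energy - harvested_left[s])
            cost = service_cost(grid_energy, qos_penalty, w)
            if cost < best_cost:
                best_station, best_cost = s, cost
        assignment[u] = best_station
        _, energy, _ = stations[best_station]
        harvested_left[best_station] = max(0.0, harvested_left[best_station] - energy)
    return assignment

if __name__ == "__main__":
    stations = {"BS1": (2.0, 1.0, 0.2), "BS2": (0.5, 0.8, 0.6)}
    print(greedy_assignment(users=["u1", "u2", "u3"], stations=stations, w=0.7))
```

Increasing the weight w pushes the assignment toward stations with more harvested energy left, at the expense of higher QoS degradation, which is exactly the tradeoff the cited paper formalizes.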
--- paper_title: Throughput Maximization in Wireless Powered Communication Networks paper_content: This paper studies the newly emerging wireless powered communication network in which one hybrid access point (H-AP) with constant power supply coordinates the wireless energy/information transmissions to/from a set of distributed users that do not have other energy sources. A "harvest-then-transmit" protocol is proposed where all users first harvest the wireless energy broadcast by the H-AP in the downlink (DL) and then send their independent information to the H-AP in the uplink (UL) by time-division-multiple-access (TDMA). First, we study the sum-throughput maximization of all users by jointly optimizing the time allocation for the DL wireless power transfer versus the users' UL information transmissions given a total time constraint based on the users' DL and UL channels as well as their average harvested energy values. By applying convex optimization techniques, we obtain the closed-form expressions for the optimal time allocations to maximize the sum-throughput. Our solution reveals an interesting "doubly near-far" phenomenon due to both the DL and UL distance-dependent signal attenuation, where a far user from the H-AP, which receives less wireless energy than a nearer user in the DL, has to transmit with more power in the UL for reliable information transmission. As a result, the maximum sum-throughput is shown to be achieved by allocating substantially more time to the near users than the far users, thus resulting in unfair rate allocation among different users. To overcome this problem, we furthermore propose a new performance metric so-called common-throughput with the additional constraint that all users should be allocated with an equal rate regardless of their distances to the H-AP. We present an efficient algorithm to solve the common-throughput maximization problem. Simulation results demonstrate the effectiveness of the common-throughput approach for solving the new doubly near-far problem in wireless powered communication networks. --- paper_title: Green Data Centers: A Survey, Perspectives, and Future Directions paper_content: At present, a major concern regarding data centers is their extremely high energy consumption and carbon dioxide emissions. However, because of the over-provisioning of resources, the utilization of existing data centers is, in fact, remarkably low, leading to considerable energy waste. Therefore, over the past few years, many research efforts have been devoted to increasing efficiency for the construction of green data centers. The goal of these efforts is to efficiently utilize available resources and to reduce energy consumption and thermal cooling costs. In this paper, we provide a survey of the state-of-the-art research on green data center techniques, including energy efficiency, resource management, thermal control and green metrics. Additionally, we present a detailed comparison of the reviewed proposals. We further discuss the key challenges for future research and highlight some future research issues for addressing the problem of building green data centers. --- paper_title: Robust Workload and Energy Management for Sustainable Data Centers paper_content: A large number of geo-distributed data centers begin to surge in the era of data deluge and information explosion. To meet the growing demand in massive data processing, the infrastructure of future data centers must be energy-efficient and sustainable. 
Facing this challenge, a systematic framework is put forth in this paper to integrate renewable energy sources (RES), distributed storage units, cooling facilities, as well as dynamic pricing into the workload and energy management tasks of a data center network. To cope with RES uncertainty, the resource allocation task is formulated as a robust optimization problem minimizing the worst-case net cost. Compared with existing stochastic optimization methods, the proposed approach entails a deterministic uncertainty set where generated RES reside, thus can be readily obtained in practice. It is further shown that the problem can be cast as a convex program, and then solved in a distributed fashion using the dual decomposition method. By exploiting the spatio-temporal diversity of local temperature, workload demand, energy prices, and renewable availability, the proposed approach outperforms existing alternatives, as corroborated by extensive numerical tests performed using real data. --- paper_title: Green-aware workload scheduling in geographically distributed data centers paper_content: Renewable (or green) energy, such as solar or wind, has at least partially powered data centers to reduce the environmental impact of traditional energy sources (brown energy with high carbon footprint). In this paper, we propose a holistic workload scheduling algorithm to minimize the brown energy consumption across multiple geographically distributed data centers with renewable energy sources. While green energy supply for a single data center is intermittent due to daily/seasonal effects, our workload scheduling algorithm is aware of different amounts of green energy supply and dynamically schedules the workload across data centers. The scheduling decision adapts to workload and data center cooling dynamics. Our experiments with real workload traces demonstrate that our scheduling algorithm greatly reduces brown energy consumption by up to 40% in comparison with other scheduling policies. --- paper_title: Energy Efficient Mobile Cloud Computing Powered by Wireless Energy Transfer paper_content: Achieving long battery lives or even self sustainability has been a long standing challenge for designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to or offloads computation from a mobile to the cloud; the mobile uses harvested energy to compute given data either locally or by offloading. A framework for energy efficient computing is proposed that comprises a set of policies for controlling CPU cycles for the mode of local computing, time division between MPT and offloading for the other mode of offloading, and mode selection. Given the CPU-cycle statistics information and channel state information (CSI), the policies aim at maximizing the probability of successfully computing given data, called computing probability , under the energy harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. 
Furthermore, given non-causal CSI, the said analytical framework is further developed to support computation load allocation over multiple channel realizations, which further increases the computing probability. Last, simulation demonstrates the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control. --- paper_title: Energy Harvesting Wireless Communications: A Review of Recent Advances paper_content: This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes. --- paper_title: Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices paper_content: Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost , which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm. --- paper_title: Transmit Power Minimization for Wireless Networks with Energy Harvesting Relays paper_content: Energy harvesting (EH) has recently emerged as a key technology for green communications as it can power wireless networks with renewable energy sources. However, directly replacing the conventional non-EH transmitters by EH nodes will be a challenge. 
In this paper, we propose to deploy extra EH nodes as relays over an existing non-EH network. Specifically, the considered non-EH network consists of multiple source-destination (S-D) pairs. The deployed EH relays will take turns to assist each S-D pair, and energy diversity can be achieved to combat the low-EH rate of each EH relay. To make the best of these EH relays, with the source transmit power minimization as the design objective, we formulate a joint power assignment and relay selection problem, which, however, is NP-hard. We thus propose a general framework to develop efficient suboptimal algorithms, which is mainly based on a sufficient condition for the feasibility of the optimization problem. This condition yields useful design insights and also reveals an energy hardening effect, which makes it possible to waive the requirement of noncausal EH information. Simulation results will show that the proposed co-operation strategy can achieve near-optimal performance and provide significant power savings. Compared to the greedy co-operation method that only optimizes the performance of the current transmission block, the proposed strategy can achieve the same performance with much fewer relays, and the performance gap increases with the number of S-D pairs. --- paper_title: Dynamic Right-Sizing for Power-Proportional Data Centers paper_content: Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of low load. This paper investigates how much can be saved by dynamically "right-sizing" the data center by turning off servers during such periods and how to achieve that saving via an online algorithm. We propose a very general model and prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new "lazy" online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data-center workloads and show that significant cost savings are possible. Additionally, we contrast this new algorithm with the more traditional approach of receding horizon control. --- paper_title: Online algorithms for geographical load balancing paper_content: It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is "receding horizon control" (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly. So, we introduce variants of RHC that are guaranteed to perform as well in the face of such heterogeneity.
These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective. --- paper_title: A Lightweight Message Authentication Scheme for Smart Grid Communications paper_content: Smart grid (SG) communication has recently received significant attention for facilitating intelligent and distributed electric power transmission systems. However, communication trust and security issues still present practical concerns to the deployment of the SG. In this paper, to cope with these challenging concerns, we propose a lightweight message authentication scheme that serves as a basic yet crucial component of a secure SG communication framework. Specifically, in the proposed scheme, the smart meters, which are distributed at different hierarchical networks of the SG, can first achieve mutual authentication and establish a shared session key with the Diffie-Hellman exchange protocol. Then, with the shared session key between smart meters and a hash-based authentication code technique, subsequent messages can be authenticated in a lightweight way. Detailed security analysis shows that the proposed scheme can satisfy the desirable security requirements of SG communications. In addition, extensive simulations have also been conducted to demonstrate the effectiveness of the proposed scheme in terms of low latency and few signaling message exchanges. --- paper_title: Opportunities and Challenges of Software-Defined Mobile Networks in Network Security paper_content: Software-defined mobile network (SDMN) architecture integrates software-defined networks, network functions virtualization, and cloud computing principles in mobile networking environments to transform rigid and disparate legacy mobile networks into scalable and dynamic ecosystems. However, because SDMN architecture separates the control and data planes, it will significantly change the way security is managed and applied for mobile networks. In this article, the authors discuss the security challenges, vulnerabilities, and opportunities that need to be investigated and addressed for future SDMNs. The article also highlights how common security threats in IP networks, such as the Internet, are now applicable to new open and IP-based SDMNs. --- paper_title: Secure Optimization Computation Outsourcing in Cloud Computing: A Case Study of Linear Programming paper_content: Cloud computing enables an economically promising paradigm of computation outsourcing. However, how to protect customers' confidential data processed and generated during the computation is becoming the major security concern. Focusing on engineering computing and optimization tasks, this paper investigates secure outsourcing of widely applicable linear programming (LP) computations. Our mechanism design explicitly decomposes LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. The resulting flexibility allows us to explore an appropriate security/efficiency tradeoff via a higher-level abstraction of LP computation than the general circuit representation. Specifically, by formulating the private LP problem as a set of matrices/vectors, we develop efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP into some random one while protecting sensitive input/output information.
To validate the computation result, we further explore the fundamental duality theorem of LP and derive the necessary and sufficient conditions that correct results must satisfy. Such a result verification mechanism is very efficient and incurs close-to-zero additional cost on both the cloud server and customers. Extensive security analysis and experimental results show the immediate practicability of our mechanism design. --- paper_title: Security and Privacy Issues of Fog Computing: A Survey paper_content: Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges besides those inherited from cloud computing. In this paper, we have surveyed these challenges and corresponding solutions in a brief manner. --- paper_title: Non-interactive verifiable computing: Outsourcing computation to untrusted workers paper_content: We introduce and formalize the notion of Verifiable Computation, which enables a computationally weak client to "outsource" the computation of a function F on various dynamically-chosen inputs x1, ..., xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The primary constraint is that the verification of the proof should require substantially less computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m·poly(λ)) time, where m is the bit-length of the output of F, and λ is a security parameter. The protocol requires a one-time pre-processing stage by the client which takes O(|C|·poly(λ)) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values. --- paper_title: An optimization of security and trust management in distributed systems paper_content: With the development of cloud computing, the security and trust management in distributed systems is changing. This paper proposes an architecture and solution for security and trust management in distributed systems that use cloud computing. Since the solution requires future technologies, an optimization of the security and trust management, including multi-path transmission, virtual private networks and encryption, is proposed. This optimization will enhance the security and trust in distributed systems. --- paper_title: Attribute-based authenticated key exchange paper_content: We introduce the concept of attribute-based authenticated key exchange (AB-AKE) within the framework of ciphertext-policy attribute-based systems. A notion of AKE-security for AB-AKE is presented based on the security models for group key exchange protocols and also taking into account the security requirements generally considered in the ciphertext-policy attribute-based setting. We also introduce a new primitive called encapsulation policy attribute-based key encapsulation mechanism (EP-AB-KEM) and then define a notion of chosen ciphertext security for EP-AB-KEMs. A generic one-round AB-AKE protocol that satisfies our AKE-security notion is then presented.
The protocol is generically constructed from any EP-AB-KEM that achieves chosen ciphertext security. Finally, we propose an EP-AB-KEM from an existing attribute-based encryption scheme and show that it achieves chosen ciphertext security in the generic group and random oracle models. Instantiating our AB-AKE protocol with this EP-AB-KEM will result in a concrete one-round AB-AKE protocol also secure in the generic group and random oracle models. --- paper_title: OCP: A protocol for secure communication in federated content networks paper_content: Content Distribution Networks (CDNs) are networks that make intense use of caching and multicast overlay techniques for transmitting video and other streaming media, a type of traffic that has been growing tremendously in recent years. To cope with this increasing demand, several telecommunications companies have joined forces to create federated CDNs (FCDNs), which involve many different providers and, hence, distinct domains. Although beneficial in terms of increased capillarity and scalability of service delivery, the interaction between FCDN elements from different providers brings many new challenges. Among them, one that has received little attention so far refers to security, an essential service for preventing the misuse of the FCDN resources. Aiming to tackle this issue, this paper presents the Overlay Communication Protocol (OCP), a security mechanism that allows secure signaling communication in FCDNs. OCP takes into account all elements involved in the content delivery process, addressing Route Forgery attacks and concealing the network structure from potential attackers. Together with the protocol description and security analysis, we also present experimental results on its implementation, showing that it introduces little impact on the overall network performance. --- paper_title: A survey on security in network functions virtualization paper_content: Network functions virtualization (NFV) is an emerging network technology. Instead of deploying hardware equipment for each network function, virtualized network functions in NFV are realized through virtual machines (VMs) running various software on top of industry standard high volume servers or cloud computing infrastructure. NFV decreases hardware equipment costs and energy consumption, improves operational efficiency and optimizes network configuration. However, potential security issues are a major concern of NFV. In this paper, we survey the challenges and opportunities in NFV security. We describe the NFV architecture design and some potential NFV security issues and challenges. We also present existing NFV security solutions and products. Finally, we survey NFV security use cases and explore promising research directions in this area. --- paper_title: Mobile Edge Computing, Fog et al.: A Survey and Analysis of Security Threats and Challenges paper_content: For various reasons, the cloud computing paradigm is unable to meet certain requirements (e.g. low latency and jitter, context awareness, mobility support) that are crucial for several applications (e.g. vehicular networks, augmented reality). To fulfill these requirements, various paradigms, such as fog computing, mobile edge computing, and mobile cloud computing, have emerged in recent years. While these edge paradigms share several features, most of the existing research is compartmentalized; no synergies have been explored.
This is especially true in the field of security, where most analyses focus only on one edge paradigm, while ignoring the others. The main goal of this study is to holistically analyze the security threats, challenges, and mechanisms inherent in all edge paradigms, while highlighting potential synergies and venues of collaboration. In our results, we will show that all edge paradigms should consider the advances in other paradigms. --- paper_title: Introducing Connected Vehicles [Connected Vehicles] paper_content: The term connected vehicles refers to applications, services, and technologies that connect a vehicle to its surroundings. Adopting a definition similar to that of AUTO Connected Car News, a connected vehicle is basically a vehicle equipped with devices that connect to other devices within the same vehicle and/or to devices, networks, applications, and services outside the vehicle. Applications include everything from traffic safety and efficiency, infotainment, parking assistance, roadside assistance, remote diagnostics, and telematics to autonomous self-driving vehicles and global positioning systems (GPS). Typically, vehicles that include interactive advanced driver-assistance systems (ADASs) and cooperative intelligent transport systems (C-ITS) can be regarded as connected. Connected-vehicle safety applications are designed to increase situation awareness and mitigate traffic accidents through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. ADAS technology can be based on vision/camera systems, sensor technology, vehicle data networks, V2V, or V2I systems. Features may include adaptive cruise control, automated braking, GPS and traffic warnings, smartphone connectivity, hazard alerts, and blind-spot monitoring. V2V communication technology could mitigate traffic collisions and improve traffic congestion by exchanging basic safety information such as location, speed, and direction between vehicles within range of each other. It can supplement active safety features, such as forward collision warning and blind-spot detection. --- paper_title: The Tactile Internet: Applications and Challenges paper_content: Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough, human tactile-to-visual feedback control, will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our lives. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies. --- paper_title: Video Stream Analysis in Clouds: An Object Detection and Classification Framework for High Performance Video Analytics paper_content: Object detection and classification are the basic tasks in video analytics and become the starting point for other complex applications.
Traditional video analytics approaches are manual, time-consuming, and subjective because of the human involvement they require. We present a cloud-based video analytics framework for scalable and robust analysis of video streams. The framework empowers an operator by automating the object detection and classification process from recorded video streams. An operator only specifies the analysis criteria and the duration of the video streams to analyse. The streams are then fetched from cloud storage, decoded, and analysed in the cloud. The framework offloads the compute-intensive parts of the analysis to GPU-powered servers in the cloud. Vehicle and face detection are presented as two case studies for evaluating the framework, with one month of data and a 15-node cloud. The framework reliably performed object detection and classification on the data, comprising 21,600 video streams totalling 175 GB, in 6.52 hours. The GPU-enabled deployment of the framework took 3 hours to perform analysis on the same number of video streams, thus making it at least twice as fast as the cloud deployment without GPUs. ---
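One of the entries above verifies outsourced linear-programming computations through the duality theorem of LP. As a rough illustration of that style of check, here is a toy sketch (not the paper's actual protocol; the function name, the tiny instance, and the tolerance are invented for the example): given a claimed primal solution and a dual certificate, the customer only needs a few matrix-vector products to confirm feasibility and a matching objective value, which by LP duality certifies optimality.

```python
import numpy as np

def verify_lp_result(A, b, c, x, y, tol=1e-8):
    """Check a claimed solution x of  min c^T x  s.t.  A x = b, x >= 0,
    against a dual certificate y for  max b^T y  s.t.  A^T y <= c.
    If all three checks pass, LP duality implies x (and y) are optimal."""
    primal_feasible = np.allclose(A @ x, b, atol=tol) and np.all(x >= -tol)
    dual_feasible = np.all(A.T @ y <= c + tol)
    zero_gap = abs(c @ x - b @ y) <= tol
    return primal_feasible and dual_feasible and zero_gap

# Tiny instance:  min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0   (optimum at x = (1, 0))
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(verify_lp_result(A, b, c, x=np.array([1.0, 0.0]), y=np.array([1.0])))  # True
```

This is the sense in which duality-based verification can stay close to zero-cost for the customer compared with re-solving the LP.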
Title: Mobile Edge Computing: Survey and Research Outlook Section 1: INTRODUCTION Description 1: This section introduces the concept of Mobile Edge Computing (MEC), its significance, and its emergence as an important technology for next-generation internet applications. Section 2: Mobile Computing for 5G: From Clouds to Edges Description 2: This section discusses the transition from cloud computing to edge computing in the context of 5G wireless systems, examining the limitations of mobile cloud computing and the advantages of MEC. Section 3: Paper Motivation and Outline Description 3: This section presents the motivation behind the paper and provides an outline of the key topics and structure of the paper. Section 4: MEC COMPUTATION AND COMMUNICATION MODELS Description 4: This section introduces the system models for key computation/communication components in a typical MEC system, including task models, communication models, and computation models for mobile devices and MEC servers. Section 5: RESOURCE MANAGEMENT IN MEC SYSTEMS Description 5: This section provides a comprehensive overview of resource management techniques in MEC systems, focusing on joint radio-and-computational resource allocation, MEC server scheduling, and cooperative edge computing systems. Section 6: Single-User MEC Systems Description 6: This section reviews research efforts focused on single-user MEC systems, discussing different task models and optimization solutions for energy and latency minimization. Section 7: Multiuser MEC Systems Description 7: This section considers resource management in multiuser MEC systems, addressing challenges like resource allocation, server scheduling, and cooperative computing among multiple mobile devices. Section 8: MEC Systems with Heterogeneous Servers Description 8: This section explores the role of heterogeneous MEC systems, discussing multi-level cloud interactions, server selection, cooperation, and computation migration. Section 9: Challenges Description 9: This section outlines critical research challenges that need to be addressed in future MEC studies, including two-timescale resource management, online task partitioning, and large-scale optimization. Section 10: AN OUTLOOK FOR MEC RESEARCH Description 10: This section discusses emerging research directions and identifies technical challenges and opportunities in MEC deployment, cache-enabled MEC, mobility management, green MEC, and security-and-privacy issues. Section 11: Deployment of MEC Systems Description 11: This section discusses the site selection for MEC servers, architecture design, and server density planning, emphasizing cost-effective and efficient deployment strategies. Section 12: Cache-Enabled MEC Description 12: This section explores the integration of caching with MEC, discussing service caching, data caching for MEC data analytics, and performance improvement strategies. Section 13: Mobility Management for MEC Description 13: This section addresses the challenges and solutions related to user mobility in MEC systems, including online prefetching, D2D communications, fault-tolerant MEC, and server scheduling. Section 14: Green MEC Description 14: This section investigates techniques for reducing the energy consumption of MEC systems, including dynamic right-sizing, geographical load balancing, and utilization of renewable energy. 
Section 15: Security and Privacy Issues in MEC Description 15: This section discusses the security and privacy challenges in MEC systems, focusing on trust, authentication, networking security, and secure computation mechanisms. Section 16: STANDARDIZATION EFFORTS AND USE SCENARIOS OF MEC Description 16: This section reviews the standardization efforts by ETSI for MEC, details the referenced MEC server framework, and elaborates on typical use scenarios for MEC applications. Section 17: CONCLUSION Description 17: This section provides concluding remarks, summarizing the key discussions and insights presented in the paper and underscoring the importance and potential directions for future MEC research.
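Section 4 of the outline above introduces computation and communication models for mobile devices and MEC servers, and Sections 6 and 7 describe energy and latency minimization in offloading decisions. The sketch below illustrates the kind of binary offloading comparison those models support; it is a textbook-style toy, and the energy model, parameter names, and numbers are assumptions made for illustration rather than values taken from the survey.

```python
# Toy binary-offloading decision for one task, using a commonly assumed
# energy/latency model (all parameter values are invented for illustration).
def offloading_decision(cycles, bits, f_local, f_edge, rate, p_tx,
                        kappa=1e-27, latency_budget=0.1):
    # Local execution on the mobile device.
    t_local = cycles / f_local
    e_local = kappa * (f_local ** 2) * cycles
    # Offloading: upload the task input over the wireless link, execute at the edge server.
    t_offload = bits / rate + cycles / f_edge
    e_offload = p_tx * (bits / rate)      # device-side energy = transmission energy only
    options = {"local": (t_local, e_local), "offload": (t_offload, e_offload)}
    feasible = {k: v for k, v in options.items() if v[0] <= latency_budget}
    if not feasible:
        return "drop", options            # neither option meets the latency budget
    return min(feasible, key=lambda k: feasible[k][1]), options

decision, details = offloading_decision(cycles=5e8, bits=2e6, f_local=1e9,
                                        f_edge=10e9, rate=50e6, p_tx=0.5)
print(decision)   # "offload" under these assumed numbers
```

With the assumed numbers, local execution misses the latency budget while offloading meets it at lower device-side energy, so the task is offloaded.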
Robust Wireless Network Coding – An Overview
9
--- paper_title: Embracing wireless interference: analog network coding paper_content: Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding. --- paper_title: Linear network coding paper_content: Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node. --- paper_title: Linear Network Error Correction Codes in Packet Networks paper_content: In this paper, we study basic properties of linear network error correction codes, their construction and error correction capability for various kinds of errors. Our discussion is restricted to the single-source multicast case. We define the minimum distance of a network error correction code. This plays the same role as it does in classical coding theory. We construct codes that can correct errors up to the full error correction capability specified by Singleton bound for network error correction codes recently established by Cai and Yeung. We propose a decoding principle for network error correction codes, based on which we introduce two decoding algorithms and analyze their performance. We formulate the global kernel error correction problem and characterize the error correction capability of codes for this kind of error. --- paper_title: A Random Linear Network Coding Approach to Multicast paper_content: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. 
Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks --- paper_title: Iterative Network and Channel Decoding for the Two-Way Relay Channel paper_content: We introduce an extension of the relay channel that we call two-way relay channel. The two-way relay channel consists of two users which want to communicate to each other with the help of one relay. We consider the time-division two-way relay channel without power control, where the broadcast channels are orthogonalized in time and where the two users and the relay use the same transmission power. We describe a joint network-channel coding method for this channel model, where channel codes are used at both users and a network code is used at the relay. The channel code of one user and the network code form a distributed turbo code which we call turbo network code and which can be iteratively decoded at the other user. Moreover, we conjecture closed expressions for lower bounds for the channel capacities of the time-division relay and two-way relay channel without power control and deliver simulation results of the proposed turbo network code. --- paper_title: The benefits of coding over routing in a randomized setting paper_content: A novel randomized network coding approach for robust, distributed transmission and compression of information in networks is presented, and its advantages over routing-based approaches is demonstrated. --- paper_title: An algebraic approach to network coding paper_content: We consider the problem of information flow in networks. In particular, we relate the question whether a set of desired connections can be accommodated in a network to the problem of finding a point on a variety defined over a suitable field. This approach lends itself to the derivation of a number of theorems concerning the feasibility of a communication scenario involving failures. --- paper_title: Coding for Errors and Erasures in Random Network Coding paper_content: The problem of error-control in a "noncoherent" random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing, sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and decoding algorithm given. --- paper_title: XORs in the air: practical wireless network coding paper_content: This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. 
We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol. --- paper_title: Network Information Flow paper_content: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems. --- paper_title: The benefits of coding over routing in a randomized setting paper_content: A novel randomized network coding approach for robust, distributed transmission and compression of information in networks is presented, and its advantages over routing-based approaches is demonstrated. --- paper_title: Coding for Errors and Erasures in Random Network Coding paper_content: The problem of error-control in a "noncoherent" random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing, sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and decoding algorithm given. --- paper_title: Linear Network Error Correction Codes in Packet Networks paper_content: In this paper, we study basic properties of linear network error correction codes, their construction and error correction capability for various kinds of errors. Our discussion is restricted to the single-source multicast case. We define the minimum distance of a network error correction code. This plays the same role as it does in classical coding theory. 
We construct codes that can correct errors up to the full error correction capability specified by Singleton bound for network error correction codes recently established by Cai and Yeung. We propose a decoding principle for network error correction codes, based on which we introduce two decoding algorithms and analyze their performance. We formulate the global kernel error correction problem and characterize the error correction capability of codes for this kind of error. --- paper_title: On Randomized Linear Network Codes and Their Error Correction Capabilities paper_content: Randomized linear network code for single source multicast was introduced and analyzed in Ho et al. (IEEE Transactions on Information Theory, October 2006) where the main results are upper bounds for the failure probability of the code. In this paper, these bounds are improved and tightness of the new bounds is studied by analyzing the limiting behavior of the failure probability as the field size goes to infinity. In the linear random coding setting for single source multicast, the minimum distance of the code defined in Zhang, (IEEE Transactions on Information Theory, January 2008) is a random variable taking nonnegative integer values that satisfy the inequality in the Singleton bound recently established in Yeung and Cai (Communications in Information and Systems, 2006) for network error correction codes. We derive a bound on the probability mass function of the minimum distance of the random linear network code based on our improved upper bounds for the failure probability. Codes having the highest possible minimum distance in the Singleton bound are called maximum distance separable (MDS). The bound on the field size required for the existence of MDS codes reported in Zhang, (IEEE Transactions on Information Theory, January 2008) and Matsumoto (arXiv:cs.IT/0610121, Oct. 2006) suggests that such codes exist only when field size is large. Define the degradation of a code as the difference between the highest possible minimum distance in the Singleton bound and the actual minimum distance of the code. The bound for the probability mass function of the minimum distance leads to a bound on the field size required for the existence of network error correction codes with a given maximum degradation. The result shows that allowing minor degradation reduces the field size required dramatically. --- paper_title: Error-Correcting Codes in Projective Spaces Via Rank-Metric Codes and Ferrers Diagrams paper_content: Coding in the projective space has received recently a lot of attention due to its application in network coding. Reduced row echelon form of the linear subspaces and Ferrers diagram can play a key role for solving coding problems in the projective space. In this paper, we propose a method to design error-correcting codes in the projective space. We use a multilevel approach to design our codes. First, we select a constant-weight code. Each codeword defines a skeleton of a basis for a subspace in reduced row echelon form. This skeleton contains a Ferrers diagram on which we design a rank-metric code. Each such rank-metric code is lifted to a constant-dimension code. The union of these codes is our final constant-dimension code. In particular, the codes constructed recently by Koetter and Kschischang are a subset of our codes. The rank-metric codes used for this construction form a new class of rank-metric codes. We present a decoding algorithm to the constructed codes in the projective space. 
The efficiency of the decoding depends on the efficiency of the decoding for the constant-weight codes and the rank-metric codes. Finally, we use puncturing on our final constant-dimension codes to obtain large codes in the projective space which are not constant-dimension. --- paper_title: Network coding and error correction paper_content: We introduce network error-correcting codes for error correction when a source message is transmitted to a set of receiving nodes on a network. The usual approach in existing networks, namely link-by-link error correction, is a special case of network error correction. The network generalizations of the Hamming bound and the Gilbert-Varshamov bound are derived. --- paper_title: NETWORK ERROR CORRECTION, PART II: Lower Bounds paper_content: In Part I of this paper, we introduced the paradigm of network error correction as a generalization of classical link-by-link error correction. We also obtained the network generalizations of the Hamming bound and the Singleton bound in classical algebraic coding theory. In Part II, we prove the network generalization of the Gilbert-Varshamov bound and its enhancement. With the latter, we show that the tightness of the Singleton bound is preserved in the network setting. We also discuss the implication of the results in this paper. Definition 2. A network code is t-error-correcting if it can correct all τ-errors for τ ≤ t, i.e., if the total number of errors in the network is at most t, then the source message can be recovered by all the sink nodes u ∈ U. A network code is Y-error-correcting if it can correct E-errors for all E ∈ Y. In Part I, we have proved the network generalizations of the Hamming bound and the Singleton bound. In this part, we will prove a network generalization of the Gilbert-Varshamov bound and its enhancement. With the latter, we will show that the tightness of the Singleton bound is preserved in the network setting. The rest of Part II is organized as follows. In Section 2, we prove the Gilbert bound and the Varshamov bound for network error-correcting codes. In Section 3, we sharpen the Varshamov bound obtained in Section 2 to the strengthened Varshamov bound. By means of the latter, we prove the tightness of the Singleton bound for --- paper_title: Coding for Errors and Erasures in Random Network Coding paper_content: The problem of error-control in a "noncoherent" random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V ∩ U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing, sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and a decoding algorithm is given. --- paper_title: Construction algorithm for network error-correcting codes attaining the Singleton bound paper_content: We give a centralized deterministic algorithm for constructing linear network error-correcting codes that attain the Singleton bound of network error-correcting codes. The proposed algorithm is based on the algorithm by Jaggi et al. We give estimates on the time complexity and the required symbol size of the proposed algorithm.
We also estimate the probability of a random choice of local encoding vectors by all intermediate nodes giving a network error-correcting codes attaining the Singleton bound. We also clarify the relationship between the robust network coding and the network error-correcting codes with known locations of errors. --- paper_title: The benefits of coding over routing in a randomized setting paper_content: A novel randomized network coding approach for robust, distributed transmission and compression of information in networks is presented, and its advantages over routing-based approaches is demonstrated. --- paper_title: Coding for Errors and Erasures in Random Network Coding paper_content: The problem of error-control in a "noncoherent" random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing, sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and decoding algorithm given. --- paper_title: Rank metric decoder architectures for noncoherent error control in random network coding paper_content: While random network coding has proved to be a powerful tool for disseminating information in networks, it is highly susceptible to errors caused by various sources. Recently, constant-dimension codes (CDCs), especially Kotter-Kschischang (KK) codes, have been proposed for error control in random network coding. It has been shown that KK codes can be constructed from Gabidulin codes, an important class of rank metric codes used in storage and cryptography. Although rank metric decoders have been proposed for both Gabidulin and KK codes, it is not clear whether such decoders are feasible and suitable for hardware implementations. In this paper, we propose novel decoder architectures for both codes. The synthesis results of our decoder architectures for Gabidulin and KK codes over small fields and with limited error-correcting capabilities not only are affordable, but also achieve high throughput. --- paper_title: Error-Correcting Codes in Projective Spaces Via Rank-Metric Codes and Ferrers Diagrams paper_content: Coding in the projective space has received recently a lot of attention due to its application in network coding. Reduced row echelon form of the linear subspaces and Ferrers diagram can play a key role for solving coding problems in the projective space. In this paper, we propose a method to design error-correcting codes in the projective space. We use a multilevel approach to design our codes. First, we select a constant-weight code. Each codeword defines a skeleton of a basis for a subspace in reduced row echelon form. This skeleton contains a Ferrers diagram on which we design a rank-metric code. Each such rank-metric code is lifted to a constant-dimension code. The union of these codes is our final constant-dimension code. In particular, the codes constructed recently by Koetter and Kschischang are a subset of our codes. The rank-metric codes used for this construction form a new class of rank-metric codes. 
We present a decoding algorithm to the constructed codes in the projective space. The efficiency of the decoding depends on the efficiency of the decoding for the constant-weight codes and the rank-metric codes. Finally, we use puncturing on our final constant-dimension codes to obtain large codes in the projective space which are not constant-dimension. --- paper_title: A Rank-Metric Approach to Error Control in Random Network Coding paper_content: The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Rotter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if mu erasures and delta deviations occur, then errors of rank t can always be corrected provided that 2t les d - 1 + mu + delta, where d is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application, where n packets of length M over F(q) are transmitted, the complexity of the decoding algorithm is given by O(dM) operations in an extension field F(qn). --- paper_title: Codes for network coding paper_content: In [4] a metric for error correction in network coding is introduced. Also constant-dimension codes were introduced and investigated. Nevertheless little is known on codes in this metric in general. In this paper, several classes of codes are defined and investigated. --- paper_title: Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance paper_content: In this paper we construct constant dimension codes with prescribed minimum distance. There is an increased interest in subspace codes in general since a paper [13] by Kotter and Kschischang where they gave an application in network coding. There is also a connection to the theory of designs over finite fields. We will modify a method of Braun, Kerber and Laue [7] which they used for the construction of designs over finite fields to construct constant dimension codes. Using this approach we found many new constant dimension codes with a larger number of codewords than previously known codes. We finally give a table of the best constant dimension codes we found. --- paper_title: On Metrics for Error Correction in Network Coding paper_content: The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. 
For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different than, the subspace metric of KOumltter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors then a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding. --- paper_title: Bounds on covering codes with the rank metric paper_content: In this paper, we investigate geometrical properties of the rank metric space and covering properties of rank metric codes. We first establish an analytical expression for the intersection of two balls with rank radii, and then derive an upper bound on the volume of the union of multiple balls with rank radii. Using these geometrical properties, we derive both upper and lower bounds on the minimum cardinality of a code with a given rank covering radius. The geometrical properties and bounds proposed in this paper are significant to the design, decoding, and performance analysis of rank metric codes. --- paper_title: Coding for Errors and Erasures in Random Network Coding paper_content: The problem of error-control in a "noncoherent" random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing, sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and decoding algorithm given. --- paper_title: Projective space codes for the injection metric paper_content: In the context of error control in random linear network coding, it is useful to construct codes that comprise well-separated collections of subspaces of a vector space over a finite field. In this paper, the metric used is the so-called "injection distance", introduced by Silva and Kschischang. A Gilbert-Varshamov bound for such codes is derived. Using the code-construction framework of Etzion and Silberstein, new non-constant-dimension codes are constructed; these codes contain more codewords than comparable codes designed for the subspace metric. --- paper_title: Joint Network-Channel Coding for the Multiple-Access Relay Channel paper_content: We propose to use joint network-channel coding based on turbo codes for the multiple-access relay channel. Such a system can be used for the cooperative uplink for two mobile stations to a base station with the help of a relay. 
We compare the proposed system with a distributed turbo code for the relay channel and with a system which uses separate network-channel coding for the multiple-access relay channel. Simulation results confirm that the systems with network coding for the multiple-access relay channel gain cooperative diversity compared to the system with the distributed turbo code for the relay channel. Moreover, the results show that joint network-channel coding outperforms separate network-channel coding. The reason for this is that the redundancy which is contained in the transmission of the relay can be exploited more efficiently with joint network-channel coding --- paper_title: Iterative Network and Channel Decoding for the Two-Way Relay Channel paper_content: We introduce an extension of the relay channel that we call two-way relay channel. The two-way relay channel consists of two users which want to communicate to each other with the help of one relay. We consider the time-division two-way relay channel without power control, where the broadcast channels are orthogonalized in time and where the two users and the relay use the same transmission power. We describe a joint network-channel coding method for this channel model, where channel codes are used at both users and a network code is used at the relay. The channel code of one user and the network code form a distributed turbo code which we call turbo network code and which can be iteratively decoded at the other user. Moreover, we conjecture closed expressions for lower bounds for the channel capacities of the time-division relay and two-way relay channel without power control and deliver simulation results of the proposed turbo network code. --- paper_title: MIXIT: The Network Meets the Wireless Channel paper_content: The traditional contract between the network and the lower layers states that the network does routing and the lower layers deliver correct packets. In a wireless network, however, different nodes may hear most bits in a transmission, yet none of them receives the whole packet uncorrupted. The current approach imposes fate sharing on the bits, dropping a whole packet because of a few incorrect bits. In contrast, this paper proposes MIXIT, a new architecture that performs opportunistic routing on groups of correctly received symbols. We show using simulations driven with Software Radios measurements that MIXIT provides 4x throughput improvement over state-of-the-art opportunistic routing. --- paper_title: A Random Linear Network Coding Approach to Multicast paper_content: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. 
We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks --- paper_title: A Practical Joint Network-Channel Coding Scheme for Reliable Communication in Wireless Networks paper_content: In this paper, we propose a practical scheme, Non-Binary Joint Network-Channel Coding (NB-JNCC), for reliable multi-path multi-hop communication in arbitrary large-scale wireless networks. NB-JNCC seamlessly couples channel coding and network coding to effectively combat the detrimental effect of fading of wireless channels. Specifically, NB-JNCC combines non-binary irregular low-density parity-check (LDPC) channel coding and random linear network coding through iterative joint decoding, which helps to fully exploit the spatial diversity and redundancy residing in both channel codes and network codes. In addition, since it operates over a high order Galois field, NB-JNCC can be directly combined with high order modulation without the need of any bit-to-symbol conversion nor its inverse. Through both analysis and simulation, we demonstrate the significant performance improvement of NB-JNCC over other schemes. --- paper_title: Joint Network-Channel Coding for the Multiple-Access Relay Channel paper_content: We propose to use joint network-channel coding based on turbo codes for the multiple-access relay channel. Such a system can be used for the cooperative uplink for two mobile stations to a base station with the help of a relay. We compare the proposed system with a distributed turbo code for the relay channel and with a system which uses separate network-channel coding for the multiple-access relay channel. Simulation results confirm that the systems with network coding for the multiple-access relay channel gain cooperative diversity compared to the system with the distributed turbo code for the relay channel. Moreover, the results show that joint network-channel coding outperforms separate network-channel coding. The reason for this is that the redundancy which is contained in the transmission of the relay can be exploited more efficiently with joint network-channel coding --- paper_title: Iterative Network and Channel Decoding for the Two-Way Relay Channel paper_content: We introduce an extension of the relay channel that we call two-way relay channel. The two-way relay channel consists of two users which want to communicate to each other with the help of one relay. We consider the time-division two-way relay channel without power control, where the broadcast channels are orthogonalized in time and where the two users and the relay use the same transmission power. We describe a joint network-channel coding method for this channel model, where channel codes are used at both users and a network code is used at the relay. The channel code of one user and the network code form a distributed turbo code which we call turbo network code and which can be iteratively decoded at the other user. Moreover, we conjecture closed expressions for lower bounds for the channel capacities of the time-division relay and two-way relay channel without power control and deliver simulation results of the proposed turbo network code. 
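The two-way relay entry directly above (like the COPE and analog network coding entries earlier in this list) builds on the observation that a node which already knows one of the mixed packets can strip it back out. A minimal sketch of the bit-level XOR version of that idea follows; it deliberately ignores the channel coding and iterative decoding that the referenced papers actually study.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# User A and user B each want the other's packet; the relay has heard both.
pkt_a, pkt_b = os.urandom(16), os.urandom(16)
broadcast = xor_bytes(pkt_a, pkt_b)           # the relay sends one coded packet, not two

assert xor_bytes(broadcast, pkt_a) == pkt_b   # user A removes its own packet, gets B's
assert xor_bytes(broadcast, pkt_b) == pkt_a   # user B removes its own packet, gets A's
```

The relay thus uses one broadcast slot instead of two forwarding slots, which is where the throughput gain of this class of schemes comes from.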
--- paper_title: Design of joint network-low density parity check codes based on the EXIT charts paper_content: For the multiple-access relay network where two sources communicate with one destination in the presence of one relay, a practical joint network-low density parity check code is designed to help the relay jointly re-encode the messages from both sources. A bilayer extrinsic information transfer chart is developed based on which a design methodology is proposed to iteratively improve the degree distribution of the proposed code. Simulations illustrate that the gap between the convergence threshold and the performance of the coding scheme is less than 0.6dB at BER=10-5. --- paper_title: A Unified Channel-Network Coding Treatment for User Cooperation in Wireless Ad-Hoc Networks paper_content: We propose a combined channel-network coding solution for efficient user cooperation in wireless ad-hoc networks that comprise a host of terminals communicating to a common destination. The proposed framework, termed generalized adaptive network coded cooperation or GANCC, addresses the challenge of inter-user outage, which widely persists in practical cooperation scenarios, by adaptively matching code graphs to instantaneous network graphs (topologies). Additionally, GANCC treats channel codes as an integral part of the network code, and in doing so not only extracts the most benefit from these codes but also provides a live example supporting the notion that network codes are generalization of channel codes (as well as source codes). --- paper_title: A network coding approach to cooperative diversity paper_content: This paper proposes a network coding approach to cooperative diversity featuring the algebraic superposition of channel codes over a finite field. The scenario under consideration is one in which two ldquopartnersrdquo - node A and node B - cooperate in transmitting information to a single destination; each partner transmits both locally generated information and relayed information that originated at the other partner. A key observation is that node B already knows node A's relayed information (because it originated at node B) and can exploit that knowledge when decoding node A's local information. This leads to an encoding scheme in which each partner transmits the algebraic superposition of its local and relayed information, and the superimposed codeword is interpreted differently at the two receivers i.e., at the other partner and at the destination node, based on their different a priori knowledge. Decoding at the destination is then carried out by iterating between the codewords from the two partners. It is shown via simulation that the proposed scheme provides substantial coding gain over other cooperative diversity techniques, including those based on time multiplexing and signal (Euclidean space) superposition. ---
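Several entries in the reference list above rely on random linear network coding, where intermediate nodes forward random linear combinations of the packets they hold and a sink decodes once it has collected enough linearly independent combinations. The toy sketch below works over GF(2) for readability; practical systems typically use a larger field such as GF(2^8), and nothing here corresponds to a specific paper's construction.

```python
import numpy as np
rng = np.random.default_rng()

K, L = 4, 8                                    # 4 source packets of 8 bits each
src = rng.integers(0, 2, size=(K, L))

def encode(n_coded):
    """Each coded packet is a random GF(2) combination of the K source packets."""
    coeffs = rng.integers(0, 2, size=(n_coded, K))
    return coeffs, coeffs @ src % 2

def decode(coeffs, payloads):
    """Gaussian elimination over GF(2) on the augmented matrix [coeffs | payloads]."""
    M = np.concatenate([coeffs, payloads], axis=1) % 2
    row = 0
    for col in range(K):
        pivot = next((r for r in range(row, len(M)) if M[r, col]), None)
        if pivot is None:
            return None                        # not enough independent combinations yet
        M[[row, pivot]] = M[[pivot, row]]
        for r in range(len(M)):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2
        row += 1
    return M[:K, K:]

received_c, received_p = [], []
decoded = None
while decoded is None:                         # collect coded packets until decodable
    c, p = encode(1)
    received_c.append(c[0]); received_p.append(p[0])
    if len(received_c) >= K:
        decoded = decode(np.array(received_c), np.array(received_p))
print(np.array_equal(decoded, src))            # True
```

Because the mixing coefficients are chosen at random and carried with each packet, any K linearly independent coded packets suffice, which is what makes the approach robust to link failures and topology changes.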
Title: Robust Wireless Network Coding – An Overview Section 1: Introduction Description 1: Provide an introduction to the concept of Network Coding (NC), its potential advantages over traditional routing, and its applications and challenges in lossy networks. Section 2: Fundamentals of Network Coding Description 2: Explain the basics of Network Coding, including linear NC, matrix representation of transmitted and received packets, and the Random Linear Network Coding Channel (RLNCC) model for lossy networks. Section 3: Error-Correcting Codes in Projective Spaces Description 3: Summarize the main contributions on the analysis and design of error-correcting codes, particularly the concept of code design in projective spaces, and describe recent developments in this area. Section 4: How It Works Description 4: Describe the theoretical motivation and working principles behind the design of error-correcting codes in projective spaces, using the RLNCC model for illustration. Section 5: Recent Developments Description 5: Discuss recent advancements and contributions to the field of error-correcting codes in projective spaces, including new constructions, bounds, and decoding algorithms. Section 6: Joint Network-Channel Decoding Description 6: Provide an overview of the joint network-channel decoding approach to improve the reliability of network-coded wireless architectures, emphasizing the cross-layer optimization of network and channel coding. Section 7: Understanding Joint Network-Channel Decoding Description 7: Offer a simple example to explain the rationale and benefits of joint network-channel decoding, illustrating its potential gains over separate decoding methods. Section 8: Recent Results Description 8: Highlight recent research findings and studies that demonstrate the effectiveness and advantages of joint network-channel code design and decoding in robust and reliable wireless networks. Section 9: Concluding Remarks Description 9: Conclude the paper by summarizing the key points discussed, emphasizing the current research interests, and identifying open issues and future directions in the field of robust network-coded wireless architectures.
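Section 3 of the outline above concerns error-correcting codes in projective spaces, where codewords are subspaces rather than vectors and, in the Koetter-Kschischang formulation cited in the reference list, distance is measured as d(U, V) = dim(U + V) - dim(U ∩ V). The short sketch below computes that subspace distance over GF(2) from spanning sets; it is purely illustrative and not tied to any particular construction from the references.

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as 0/1 rows."""
    M = np.array(rows, dtype=int) % 2
    n_rows, n_cols = M.shape
    rank, col = 0, 0
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if M[r, col]), None)
        if pivot is not None:
            M[[rank, pivot]] = M[[pivot, rank]]
            for r in range(n_rows):
                if r != rank and M[r, col]:
                    M[r] = (M[r] + M[rank]) % 2
            rank += 1
        col += 1
    return rank

def subspace_distance(U, V):
    """d(U, V) = dim(U+V) - dim(U∩V) = 2*dim(U+V) - dim(U) - dim(V)."""
    dim_sum = gf2_rank(np.vstack([U, V]))
    return 2 * dim_sum - gf2_rank(U) - gf2_rank(V)

U = [[1, 0, 0, 0], [0, 1, 0, 0]]   # span{e1, e2}
V = [[0, 1, 0, 0], [0, 0, 1, 0]]   # span{e2, e3}
print(subspace_distance(U, V))     # 2: the planes intersect only in the line span{e2}
```

In the toy call, the two planes share only a line, so their subspace distance is 2.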
Survey of Wireless Communication Technologies for Public Safety
12
--- paper_title: Mobile responder communication networks for public safety paper_content: This article proposes a paradigm shift from the prevailing public safety model of disparate, agency-owned and -operated Land Mobile Radio networks to Mobile Responder Communication Networks (MRCNs) that are created by unifying communications resources and are shared across cooperating public safety agencies to provide local, regional, or national service. MRCNs use a common IP-based core network employing service-intelligent session control to bridge networks based on LMR and commercial wireless access technologies, thus allowing the support of emerging IP-based multimedia services, high data rate access, and mission-critical tactical group voice and interoperable communications during emergency responses. --- paper_title: Information sharing and interoperability: the case of major incident management paper_content: Public sector inter-organisational information sharing and interoperability is an area of increasing concern and intense investment for practice and an area of increasing scholarship. This paper focuses on one particular set of public sector organisations (emergency services) and illuminates the key technological and organisational issues they face concerning information sharing and interoperability. The particular contexts in which these are studied are ones where decisions are non-trivial and made in high-velocity environments. In these conditions the problems and significance of inter-organisational information sharing and interoperability are accentuated. We analyse data gathered from two studies: the first focused on ‘first responders’ (police, fire and ambulance services) in the United Kingdom. The second, a follow on study, with emergency service managers and interoperability project managers in the United Kingdom and the European Union. Using activity theory as a conceptual framework we describe the informational problems critical emergency responders face in their initial response to, and management of, an incident. We argue that rather than focusing on interoperability as a primarily technological issue it should be managed as an organisational and informational issue. Second, we argue that rather than designing for anomalous situations we should design systems, which will function during both anomalous and routine situations. Third, we argue for focus on harmonisation of policies, procedures and working practices. --- paper_title: Emergency Communications: The Quest for Interoperability in the United States and Europe paper_content: When on September 11, 2001, the Pentagon stood ablaze, responding fire companies from Maryland could not communicate with those from Washington, D.C. and Northern Virginia. Runners had to be used instead, stalling rescue efforts: a powerful reminder that even in the age of digital networks and ubiquitous cell phones, communication interoperability, the ability of public safety personnel to communicate by radio with staff from other agencies, on demand and in real time, remains an elusive goal. As almost 60,000 federal, state and local public safety agencies plan to upgrade their communications systems in the wake of 9/11, this essay takes a hard look at communications interoperability and its implementation, here in the United States and in Europe. Three steps have been seen as requirements for interoperability: inventing the appropriate technology, setting common standards and frequencies, and providing adequate funding. 
This essay looks at each of these steps in the U.S. and European contexts and analyzes successes and failures, rendering a fuller picture both of the challenges for interoperability and of best practices to meet them. Over the last few years (and surprisingly given the complex political structures) the Europeans have pulled ahead of the U.S. in implementing interoperability, although with determination and the right set of strategies, U.S. policymakers can easily make up lost ground. Enhanced Federal Communications Commission (FCC) leadership in defining frequencies and standards and a clearly formulated and thoroughly executed comprehensive funding strategy, based either on public funds or innovative public-private partnerships, would go a long way toward enabling communications interoperability to take hold. Yet, this essay is not simply about how to overcome obstacles on the path to interoperability. The case of interoperability, its elusiveness in the United States and its successes elsewhere, reveals a deeper, more troubling story - a story not so much of technical hurdles, as of structural and political hurdles, as more of perceived than actual constraints, unduly limiting the nation's ability to cope with an important public policy need in these uncertain times. There are no abstract silver bullets to overcome the problem. Instead, policymakers have to look carefully at how well the policy strategy they select is aligned with their means and the policy context. In the United States, interoperability has suffered from strategic misalignment and haphazard implementation. European interoperability policies have fared better, not because of a general advantage in the strategies chosen, but because of a better fit between means and ends. Thus interoperability also provides an intriguing test case, highlighting the transcending importance of strategic alignment, agency innovation, and leadership. --- paper_title: Traffic Performance Evaluation of Data Links in TETRA and TETRAPOL paper_content: TETRA and TETRAPOL are competitors in the market of Professional Mobile Radio (PMR). In a previous study we compared the trunked mobile radio systems TETRA V+D and TETRAPOL by evaluating random access performance. Here we present a comparison of the TETRA and TETRAPOL error correction schemes involved to secure LLC data links. The traffic performance of both systems was compared for ETSI scenario 8. Realistic models for the effects of propagation circumstances and co-channel interference are taken into account. The results of the traffic performance measurements exhibit differences in connection set-up times and transmission delays for data links in TETRA and TETRAPOL systems in favour of the TETRA system and reveal a strong dependency on random access delays. --- paper_title: Easy-to-Deploy Emergency Communication System Based on a Transparent Telecommunication Satellite paper_content: This paper presents a satellite-based communication system dedicated to disaster recovery. The proposed solution relies on the underlay transmission of low-power emergency signals in the frequency band of a primary transparent satellite telecommunication system. Wideband spreading is used so as to guarantee that the primary system performance is not affected by the inter-system interference. The emergency system capacity is evaluated under realistic assumptions regarding the primary satellite mission.
It is shown that depending on the emergency terminal characteristics, various emergency communication services can be envisaged, from simple alert missions to voice communications. As it does not require any specific space segment development nor the full-time reservation of expensive radio resources, the described solution is attractive in terms of deployment cost. Provided an extension of the regulatory framework for exceptional security-oriented missions, it might satisfactorily match a governmental need for reliable, easy-to-deploy emergency communication means. --- paper_title: On Effects of Antenna Pointing Accuracy for On-the-Move Satellite Networks paper_content: In this paper, we study the adjacent geostationary satellite interference to/from on-the-move platforms with motion-induced antenna pointing errors. First, using satellite geometry, we derive tight upper and lower bounds for the average uplink and downlink interferences. Then, we derive the exact distribution of the adjacent satellite interference, and using a Gaussian approximation, we compute bounds for the outage probability for both links. Finally, the outage performance is investigated by simulations and by evaluation of the derived expressions. The performance results show the accuracy of the analytical expressions and quantify the link degradation due to random antenna pointing errors in on-the-move satellite communication systems. --- paper_title: SALICE - Satellite-Assisted LocalIzation and Communication systems for Emergency services paper_content: This paper describes the SALICE (Satellite-Assisted LocalIzation and Communication systems for Emergency services) project, an Italian National Research Project which has been recently funded by the Italian Ministry of Research; the SALICE project aims at identifying the solutions which can be adopted in an integrated reconfigurable NAV/COM device and studying its feasibility in realistic scenarios. The first goal of the SALICE project is the definition of the baseline scenarios and system architecture which will allow the design of new and effective solutions for what concerns integrated communications and localization techniques, Software Defined Radio (SDR) NAV/COM devices, satellite and HAPS integration in the rescue services, heterogeneous solutions in the area of intervention (IAN, Incident Area Network). Particular attention will be devoted to the optimization of the resources management strategies and to the cooperative localization of rescue entities (persons and means) that intervene in emergency situations. --- paper_title: A mobile ad-hoc satellite and wireless Mesh networking approach for Public Safety communications paper_content: It is widely recognized that emergency management and disaster recovery systems are an issue of paramount importance in communities through the world. The definition of a public safety communications infrastructure is a high-priority task due to the lack of interoperability between emergency response departments, the reduced mobility during coordinated operations on a broad scale and the need for access to critical data in real-time. This work proposes an ldquoad-hoc networkingrdquo approach for emergency mobile communications in a satellite and wireless mesh scenario, in which ad hoc and IPv6 mobility mechanisms are combined together. 
First we analyze mobility management aspects and IP layer protocols and then we focus on proxy mobile IPv6, a network-based mobility management protocol which represents the more suitable micro-mobility solution for the proposed scenario with heterogeneous networks and unmodified mobile terminals. --- paper_title: EHF for Satellite Communications: The New Broadband Frontier paper_content: The exploitation of extremely high-frequency (EHF) bands (30-300 GHz) for broadband transmission over satellite links is currently a hot research topic. In particular, the Q-V band (30-50 GHz) and W-band (75-110 GHz) seem to offer very promising perspectives. This paper aims at presenting an overview of the current status of research and technology in EHF satellite communications and taking a look at future perspectives in terms of applications and services. Challenges and open issues are adequately considered together with some viable solutions and future developments. The proposed analysis highlighted the need for a reliable propagation model based on experimental data acquired in orbit. Other critical aspects should be faced at the PHY-layer level in order to manage the tradeoff between power efficiency, spectral efficiency, and robustness against link distortions. As far as networking aspects are concerned, the large bandwidth availability should be converted into increased throughput by means of suitable radio resource management and transport protocols, able to support very high data rates in long-range aerospace scenarios. --- paper_title: Analyzing Options for Airborne Emergency Wireless Communications paper_content: In the event of large-scale natural or manmade catastrophic events, access to reliable and enduring commercial communication systems is critical. Hurricane Katrina provided a recent example of the need to ensure communications during a national emergency. To ensure that communication demands are met during these critical times, Idaho National Laboratory (INL) under the guidance of United States Strategic Command has studied infrastructure issues, concerns, and vulnerabilities associated with an airborne wireless communications capability. Such a capability could provide emergency wireless communications until public/commercial nodes can be systematically restored. This report focuses on the airborne cellular restoration concept; analyzing basic infrastructure requirements; identifying related infrastructure issues, concerns, and vulnerabilities and offers recommended solutions. --- paper_title: Public Safety Communication Using Commercial Cellular Technology paper_content: We propose a concept for public safety communication realized with IMS (IP multimedia subsystem), the cellular standards of 3GPP and packet switched transmission. Basing the solution on mainstream cellular technology leverages the economy of scale of today's commercial networks and enables migration of technical solutions and applications. Important requirements of the public safety sector are group communication, low latency, high capacity, security, reliability and interoperability for voice and broadband data services. Our analysis shows that the concept has the technology potential to meet these public safety requirements. --- paper_title: Self-organizing relay stations in relay based cellular networks paper_content: The increasing popularity of wireless communications and the higher data requirements of new types of service lead to higher demands on wireless networks. 
Relay based cellular networks have been seen as an effective way to meet users' increased bit rate requirements while still retaining the benefits of a cellular structure. The objective of this paper is to illustrate effective and efficient approaches for fast deployment of relay stations into existing cellular networks and to demonstrate improvements to system capacity and performance resulting from cooperation between network components. Cooperative control based on geographic load balancing is employed to provide flexibility for the wireless network and respond to changes in the environment. Flexibility in the antenna system is used to provide coverage at the right place at the right time. Experiments show that the proposed approach has significant improvements and robustness in heterogeneous traffic scenarios, even in disaster conditions where some base stations fail. The performance evaluations are based on Mobile WiMAX technology, but the concepts apply more generally. --- paper_title: Wireless Mesh Networking: A Key Solution for Emergency & Rural Applications paper_content: All communities whether they are rural or urban will have to respond to safety, disasters, and emergency situations. These situations place a special burden on communication systems for having a fully operational system. Given the shortcomings of current Public Safety and Disaster Recovery, reliable wireless mobile communications that enable real-time information sharing, constant availability, and interagency interoperability are imperative in emergency situation. Wireless Mesh Networks have been receiving a great deal of attention as a broadband access alternative for a wide range of markets, including those in the metro, emergency,public-safety, carrier-access, and residential sectors. This paper provides a background on technology requirements for emergency and public safety communications systems and addresses some of the technical influences of wireless mesh networks. The article describes the capabilities and architecture of the Man-portable, Interoperable, Tactical Operations Center communication system which was funded by the U.S. Department of Homeland Security. It is a modern mobile communications infrastructure well suited for public safety and disaster recovery applications. --- paper_title: Using WiMAX for effective business continuity during and after disaster paper_content: Disasters occur in different forms and at different times. It can be man-made or natural. In either case businesses can suffer immensely due to interruption to various Information Technology (IT) services. With global warming natural disasters are predicted more often than before. Further, with freely available tools on the Internet more and more hackers are exploiting system vulnerabilities to craft unexpected system failures. With today's heavy dependency on IT, business success exclusively depends on the availability of IT services. Hence sustainability of IT services has become a major concern to organizations. Therefore, establishing network connectivity to an emergency site and enabling system availability will be of immense significance to the success of businesses under disastrous conditions. In this context, the applicability of mobile Worldwide Interoperability for Microwave Access (WiMAX) technology to establish network connectivity during and after a disaster is to be investigated. 
Although, there are many business continuity solutions dealing with other IT services, the problem of network failure and the ability to quickly establish network connectivity locally and remotely are to be explored in this study. --- paper_title: Evolving public safety communication systems by integrating WLAN and TETRA networks paper_content: This article contributes to the evolution of public safety communication systems by specifying a novel solution for integrating WLAN and Terrestrial Trunked Radio (TETRA) networks. The specified solution allows TETRA terminals to interface to the TETRA Switching and Management Infrastructure (SwMI) over a broadband WLAN radio access network, instead of the conventional narrowband TETRA radio network. These terminals are fully interoperable with conventional TETRA terminals and can employ all TETRA services, including group calls, short data messaging, packet data, and so forth. In addition, however, such terminals can support a range of brand new capabilities enabled by the WLAN, such as broadband data services, true concurrent voice and data services, simultaneous reception of many group calls, reduced call setup and voice transmission delays, improved voice quality, and so forth. The specified solution is solely based on IP multicast and Voice-over-IP (VoIP) technologies and thus fits ideally to the all-IP architecture being introduced by the MESA project for the next generation of public safety and disaster relief communication systems. --- paper_title: Wireless commons perils in the common good paper_content: In the last few years, high-speed wireless access to the Internet has grown rapidly. Surprisingly, this growth has not come through cellular phone networks as many had expected, but through IEEE 802.11 standards-based wireless local area networks (WLANs). This rise of WLANs can be partly linked to the creation of a series of open standards, a precipitous fall in the costs of related hardware, and the explosive growth of home networking. WLANs have become commonplace in the education, transportation, and manufacturing sectors and are rapidly embraced in the retail, hospitality, and government sectors. --- paper_title: ZigBee Sensor Network for Patient Localization and Air Temperature Monitoring During Emergency Response to Crisis paper_content: The mass casualty emergency response involves logistic impediments like overflowing victims, paper triaging, extended victim wait time and transport. We propose a new system based on a location aware wireless sensor network (WSN) to overcome these impediments and assist the emergency responders (ER) in providing efficient emergency response during disasters like chemical explosions. In this paper we have only elaborated about the patient tracking and local air temperature monitoring functionalities of this system. We have developed an energy efficient ZigBee-ready temperature sensor node hardware and setup a ZigBee mesh network demonstrator. An RSSI-based localization solution is tested to analyze its suitability to track patients at the disaster site. A new algorithm to detect and display the temperature zones at the disaster site is developed and analyzed to find its computation efficiency. The patient tracking and temperature zone detection results show the increase of situation awareness, which can enable fast patient evacuation. 
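The ZigBee tracking entry above reports an RSSI-based localization test for patient tracking. As a rough illustration of that general idea, and not of the authors' actual implementation, the sketch below converts RSSI readings into ranges with a log-distance path-loss model and then computes a linearized least-squares trilateration fix; the reference power, path-loss exponent and anchor coordinates are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the cited paper):
P0_DBM = -40.0   # RSSI measured at the reference distance [dBm]
D0_M = 1.0       # reference distance [m]
PLE = 2.7        # assumed indoor path-loss exponent

def rssi_to_range(rssi_dbm):
    """Invert the log-distance path-loss model to get a range estimate."""
    return D0_M * 10.0 ** ((P0_DBM - rssi_dbm) / (10.0 * PLE))

def trilaterate(anchors, ranges):
    """Linearized least-squares 2-D position fix from three or more anchors."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        # Subtracting the first circle equation removes the quadratic terms.
        rows.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    estimate, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return estimate  # (x, y) in metres

# Example: three ZigBee anchors and the RSSI a tracked tag reports from each.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi_readings = [-62.0, -70.0, -68.0]
print(trilaterate(anchors, [rssi_to_range(r) for r in rssi_readings]))
```

In practice the path-loss parameters would be calibrated on site, and more anchors than unknowns are used because RSSI fading makes individual range estimates noisy.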
--- paper_title: WIDENS: advanced wireless ad-hoc networks for public safety paper_content: A structure made up of several components, particularly for use in classrooms is described which comprises at least two basic elements, each of said basic elements comprising an L-shaped panel and a T-shaped base parallel to said L-shaped panel and three vertical legs connecting said L-shaped panel to said T-shaped base, two of said legs being mounted at the extremity of the short sides of said T-shaped base and one leg being mounted at the extremity of the long side of the T-shaped base, said L-shaped panel having a plurality of blind orifices along the edge located at a modular distance, said T-shaped base being provided with a plurality of orifices, and a plurality of pins for insertion into the blind orifices of said L-shaped panel for connection of at least two basic elements. The connection between a plurality of basic elements is additionally achieved by means of rectangular boards having two orifices at the extremities and two smaller orifices in the center of the edge of the longer sides of said rectangular boards or rectangular boards having a plurality of orifices at modular distance. --- paper_title: Public Safety and Emergency Case Communications: Opportunities from the Aspect of Cognitive Radio paper_content: Recent developments in wireless communications systems and the introduction of new technologies such as cognitive radios are expected to bring beneficial methodologies and solutions to the problems of public safety and emergency case communications, especially related to interoperability issues. Physical layer adaptiveness and spectrum sensing methodology of cognitive radios are commonly emphasized related to spectral efficiency and frequency domain interoperability of public safety communications. However, wide range of opportunities introduced by the awareness, learning and intelligence features of cognitive radios necessitate the reassessment of opportunities to the public safety and emergency case communications from a more complementary aspect. One aspect can be the development of applications that will lead to communicate, locate and reach victims who are stuck in disaster areas, underground (e.g. underground mine explosions) or behind obstacles. It is aimed to discuss how communication with survivors can be accomplished and how the estimation of their locations utilizing received signals can be achieved by benefiting from the advantages of cognitive radio technology. Second objective is to define the scope of the research required for adaptive and secure physical layer reconfigurability to overcome communication congestion and to satisfy the interoperability and security requirements of public safety communications systems during emergency cases and extreme situations. --- paper_title: The EULER project: application of software defined radio in joint security operations paper_content: The task of improving the effectiveness of public safety communications has become a main priority for governments. This is partly motivated by the increased risk of natural disasters such as flooding, earthquakes, and fires, and partly due to the risks and consequent impact of terrorist attacks. 
This article focuses on the experience from the European Commission Seventh Framework Programme project known as EULER, which seeks to demonstrate the benefits of software defined radio technology to support the resolution of natural disasters of significant stature, which require the participation of different public safety and military organizations, potentially of different nations. In such scenarios, the presence of interoperability barriers in the disaster area is a major challenge because different organizations may use different wireless communication systems. In this context, the main aspect investigated in EULER is the definition of a common waveform that respects the software communications architecture constraints, and guarantees maximum portability across SDR platforms. This article discusses a range of issues that have been identified thus far within the EULER project; in particular, the perceived pan-European interoperability needs of public safety, and coordination with military devices and networks. Aspects of interoperability are also extended to the three dimensions of platform, waveform, and information assurance. --- paper_title: Cognitive radio: making software radios more personal paper_content: Software radios are emerging as platforms for multiband multimode personal communications systems. Radio etiquette is the set of RF bands, air interfaces, protocols, and spatial and temporal patterns that moderate the use of the radio spectrum. Cognitive radio extends the software radio with radio-domain model-based reasoning about such etiquettes. Cognitive radio enhances the flexibility of personal services through a radio knowledge representation language. This language represents knowledge of radio etiquette, devices, software modules, propagation, networks, user needs, and application scenarios in a way that supports automated reasoning about the needs of the user. This empowers software radios to conduct expressive negotiations among peers about the use of radio spectrum across fluents of space, time, and user context. With RKRL, cognitive radio agents may actively manipulate the protocol stack to adapt known etiquettes to better satisfy the user's needs. This transforms radio nodes from blind executors of predefined protocols to radio-domain-aware intelligent agents that search out ways to deliver the services the user wants even if that user does not know how to obtain them. Software radio provides an ideal platform for the realization of cognitive radio. --- paper_title: Feasibility study of a SDR-based reconfigurable terminal for emergency applications paper_content: Today, more attention is paid to emergency systems and rescuer organizations due to the increasing sensitivity to natural disasters and terrorist attacks. In this paper, we would consider the feasibility of telecommunication technologies that can enhance the interconnection between accident zones and rescue teams. A novel philosophy inspired by Software Defined Radios is proposed in order to design a reconfigurable receiver for rescue team assistance in emergency scenarios, which can encompass some basic functionalities, i) spectrum sensing and automatic mode identification, ii) dynamic multistandard reconfigurability, iii) navigation and communication integration. Results shown can prove the feasibility of the software implementation of the reconfigurable terminal over suitable processing architectures, characterized by affordable cost and open-source software development. 
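Several of the cognitive-radio and SDR entries above rely on spectrum sensing to decide whether a band is free before transmitting. The snippet below is a generic energy-detection sketch rather than the algorithm of any cited project; the noise power, the target false-alarm rate and the Gaussian-approximated threshold are stated assumptions.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_power, target_pfa=0.01):
    """Generic energy detector: declare the band occupied when the average
    sample energy exceeds a threshold chosen for the target false-alarm
    probability (Gaussian approximation of the test statistic)."""
    n = len(samples)
    test_stat = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * (1.0 + norm.ppf(1.0 - target_pfa) / np.sqrt(n))
    return test_stat > threshold

# Example on synthetic complex baseband samples containing only noise.
rng = np.random.default_rng(0)
noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
print(energy_detect(noise, noise_power=1.0))   # False in roughly 99% of runs
```

Energy detection needs no knowledge of the primary signal, which is why it is a common baseline for the emergency and opportunistic-access scenarios discussed above, but its performance degrades quickly when the noise power is uncertain.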
--- paper_title: Feasibility of long term evolution (LTE) as technology for public safety paper_content: Being that current Professional Mobile Radio systems, such as TETRA, APCO25 or DMR suffer from slow data transfer, LTE technology is taking its place as the choice for public safety broadband communication system. Declared performance in terms of capacity, reliability and security is enough to fulfill the strict requirements of the public safety users. Quality of Service concept in LTE, with Allocation and Retention Priority, Quality Class Identifier, Traffic Flow Templates with addition of access classes is the solid ground for deployment of LTE as the solution for public safety broadband data transfer. The expectation, sourcing from the author's experience in planning and deployment of public safety communication systems, is that, with additional efforts of 3GPP in definition of concepts to support the fallback schemes, group calls and enhanced security, and with careful particular system planning — which is the task for both network operators and special users, LTE may continue to be the choice for public safety data transfer, and the future solution for voice communication as well. --- paper_title: A Policy Proposal to Enable Cognitive Radio for Public Safety and Industry in the Land Mobile Radio Bands paper_content: The frequency bands that have been licensed to the land mobile radio (LMR) services for decades are a tremendously fertile field for the deployment of cognitive radio technology. This paper outlines several reasons why policy-based cognitive radios would be particularly useful for modern public safety, federal non-military and business/industrial applications, especially in the VHF and UHF bands, where 80% of the public safety, federal and business/industrial licenses are currently held. This paper argues that many interoperability deficiencies are directly related to the original approach to spectrum policy and radio frequency regulation developed in the early 1920's, which segmented uses of LMR spectrum into several use classes. It provides a historic perspective to explain why the current status of LMR infrastructure, operations and licensee behavior is a direct result of antiquated policies and technologies still applied and deployed in these bands. The paper discusses the reasons that cognitive radio could be a successful solution for the apparent congestion in the bands. It suggests that policy-based cognitive radio systems operated on a cooperative, shared basis could lower costs of use and aid coordination for emergency responders across both public and private sectors of the traditional LMR user community. We discuss policy reforms and innovations such as spectrum pooling and spectrum portability that could spur new shared infrastructure development and spectrum efficiencies. We suggest several key policy reforms for consideration, including immediate cessation of ongoing narrowbanding initiatives, decoupling of spectrum licenses from spectrum access, and national spectrum management by frequency coordinators. --- paper_title: Designing the Joint Tactical Radio System (JTRS) Handheld, Manpack, and Small form Fit (HMS) Radios for Interoperable Networking and Waveform Applications paper_content: The Joint Tactical Radio System (JTRS) Handheld, Manpack, Small Form Fit (HMS) radios are being developed for manned as well as unmanned and unattended platforms. 
Each platform that embeds a JTRS HMS set brings along its own concept of operation creating unique networking challenges in terms of achieving a seamless, interoperable, mobile, and adhoc communications architecture. To design and develop these radios within established schedule and cost objectives, the program must overcome a number of technical challenges. This paper will identify the driving requirements, outline the technical challenges, and propose optimal solutions that balance networking functionality and waveform interoperability while still meeting Size, Weight, Power, Cost, and Schedule objectives. --- paper_title: Software Defined Radio: Challenges and Opportunities paper_content: Software Defined Radio (SDR) may provide flexible, upgradeable and longer lifetime radio equipment for the military and for civilian wireless communications infrastructure. SDR may also provide more flexible and possibly cheaper multi-standard-terminals for end users. It is also important as a convenient base technology for the future context-sensitive, adaptive and learning radio units referred to as cognitive radios. SDR also poses many challenges, however, some of them causing SDR to evolve slower than otherwise anticipated. Transceiver development challenges include size, weight and power issues such as the required computing capacity, but also SW architectural challenges such as waveform application portability. SDR has demanding implications for regulators, security organizations and business developers. --- paper_title: Opportunistic radio access techniques for emergency communications: Preliminary analysis and results paper_content: In this work, we aim at considering innovative techniques of spectrum management and radio access for emergency communications. In particular, cognitive and opportunistic radio access and resource management have been tested in emergency scenarios. The breakdown of existing wireless network infrastructures can produce a lot of big spectrum holes, i.e.: large frequency spaces not currently used by any kind of primary users. In such a situation, the emergency communication system, instead of remaining “tied down” together with the partially or totally out-of-service terrestrial network infrastructures, may transmit information wherever and whenever it is possible. Single-Carrier Frequency Division Multiple Access (SC-FDMA) has been considered as basic multiple access protocol thanks to its features of agility in frequency and robustness against nonlinear distortions. A radio resource management protocol which maximizes fairness among users has been considered for the cognitive transmission system. Simulation results evidenced a substantial increase of transmission quality and coverage with respect to conventional radio access strategies that do not exploit cognitive features. --- paper_title: Pulsed pseudolite signal effects on non-participating GNSS receivers paper_content: Pseudolites are a technology with the potential of bridging the gap between outdoor and indoor navigation. Despite their potential, pseudolites can cause severe interference problems with non-participating receivers, i.e., devices not designed to exploit this technology. In this paper, the loss caused by pulsed pseudolite signals is determined as a function of the pulse duty cycle and the number of bits employed for signal quantization. Quantization, blanking and noise increase are identified as the main sources of signal degradation. 
Theoretical results are validated by simulations and experiments performed using commercial GPS receivers. The good agreement between theoretical and experimental results supports the validity of the proposed approach. --- paper_title: UltraWideBand indoor positioning systems and their use in emergencies paper_content: Reliable and accurate indoor positioning for moving users requires a local replacement for satellite navigation. An UltraWideBand (UWB) system is particularly suitable for such local systems, including temporary installations supporting emergency services inside large buildings. The requirements for emergencies will be very variable, but will generally include: good radio penetration through structures, the rapid set-up of a stand-alone system, tolerance of high levels of reflection, and high accuracy. The accuracy should be better than 1 m, as sometimes it matters which side of a door you are, and locations should be in 3 dimensions. Support for robots as well as people would call for still better accuracy. Rapid set-up implies very little surveying of the fixed terminals, and positioning relative to the mobile and fixed terminals. A radio system that measures ranges between fixed and mobile terminals matches these requirements, but the requirements for accuracy and for dealing with multipath need a bandwidth of more than 1 GHz. Thus UWB is the preferred solution, as it has the specific advantage of high accuracy, even in the presence of severe multipath. This paper presents the features and system design options for UWB positioning systems, and shows how they match the indoor location demands of emergency services. The main features that are covered are: the deployment of terminals (how many, and where), the minimum requirements for fixed terminal surveying, integration (hybridisation) with GNSS, and solving for position inside the network. The main system design options are: whether the mobile terminals are transceivers or solely receivers, the UWB signal design and frequency span, and the use of the same signal for communications. The paper includes results from a demonstration UWB indoor positioning system being built at TRT (UK). --- paper_title: Bandwidth effect on distance error modeling for indoor geolocation paper_content: In this paper we introduce a model for the distance error measured from the estimated time of arrival (TOA) of the direct path (DP) between the transmitter and the receiver in a typical multipath indoor environment. We use the results of a calibrated Ray tracing software in a sample office environment. First we divide the whole floor plan into LOS and Obstructed LOS (OLOS), and then we model the distance error in each environment considering the variation of bandwidth of the system. We show that the behavior of the distance error in LOS environment can be modeled as Gaussian, while behavior of the OLOS is a mixture of Gaussian and exponential distribution. We also related the statistics of the distributions to the bandwidth of the system. --- paper_title: Omnidirectional Pedestrian Navigation for First Responders paper_content: It might be assumed that dead reckoning approaches to pedestrian navigation could address the needs of first responders, including fire fighters. However, conventional PDR approaches with body-mounted motion sensors can fail when used with the irregular walking patterns typical of urban search and rescue missions. 
In this paper, a technique using shoe-mounted sensors and inertial mechanization equations to directly calculate the displacement of the feet between footfalls is described. Zero-velocity updates (ZUPTs) at foot standstills limit the drift that would otherwise occur with inexpensive IMUs. We show that the technique is fairly accurate in terms of distance travelled and can handle many arbitrary manoevers, such as tight turns, side/back stepping and stair climbing. --- paper_title: A Low-Power Scheme for Localization in Wireless Sensor Networks paper_content: One of the most challenging issues in the design of system for Wireless Sensor Networks (WSN) is to keep the energy consumption of the sensor nodes as low as possible. Many localization systems require that the nodes keep the transceiver active during a long time consuming energy. In this work we propose a scheme to reduce the energy consumption of the mobile nodes that need to know their positions. Our strategy consists of decreasing the idle listening and an optimized allocation of the localization tasks on the nodes. Thus, the nodes that are externally powered calculate the position for the resource-constrained nodes. The scheme is based on a low-power IEEE 802.15.4 nonbeacon enabled network. --- paper_title: Using of mobile device localization for several types of applications in intelligent crisis management paper_content: Main area of interest is in a system enhancement for locating and tracking users of our information system inside the buildings. The developed framework as it is described here joins the concepts of location and user tracking as an extension for a new type of mobile information systems. The experimental framework prototype uses a WiFi network infrastructure to let a mobile device determine its indoor position. User location can be used by several types of applications. In first case the user location is used for data pre-buffering and pushing information from server to user's PDA. All server data is saved as artifacts (together) with its position information in building. The accessing of prebuffered data on mobile device can highly improve response time needed to view large multimedia data. In second case the user location information is used for crisis management in large area buildings. Real-time position location can be used to track service personnel (e.g., police officers, rescue teams, fire brigades, etc.), lost children, suspected criminals, and stolen vehicles. ---
Title: Survey of Wireless Communication Technologies for Public Safety Section 1: INTRODUCTION Description 1: Introduce the role of Public Safety (PS) organizations in disaster preparedness and recovery, their reliance on ICT, and the challenges in wireless communication technology for public safety. Section 2: Public Safety organizations, functions and scenarios Description 2: Discuss the various public safety organizations, their functions, and the operational scenarios in which they operate. Section 3: Operational scenarios Description 3: Describe the typical operational domains and specific communication challenges faced by PS organizations in different scenarios. Section 4: Communication services and applications Description 4: Outline the services required by PS communication systems and their related features, including voice, data, and location services. Section 5: Requirements Description 5: Identify common operational and technical requirements for PS wireless communication systems. Section 6: Business considerations and market comparison with commercial and military domains Description 6: Compare PS communication systems with commercial and military communication technologies, highlighting differences in business models and technology usage. Section 7: Wireless Communication technologies Description 7: Provide an overview of the current wireless communication technologies used by PS organizations, including TETRA, APCO 25, TETRAPOL, satellite networks, and more. Section 8: Radio frequency Spectrum regulations Description 8: Explain the spectrum regulatory frameworks that govern the use of wireless communication technologies by PS organizations in Europe and the USA. Section 9: Future wireless communication technologies and services Description 9: Discuss potential future communication technologies for PS, such as LTE, Software Defined Radio (SDR), Cognitive Radio (CR), and indoor positioning. Section 10: Status of security research in Europe Description 10: Summarize the current research projects and initiatives in Europe aimed at enhancing PS communication capabilities. Section 11: Summary of the research challenges Description 11: Recap the identified research challenges for future wireless communication technologies in the public safety domain. Section 12: CONCLUSIONS Description 12: Conclude the paper, emphasizing the importance of wireless communication technologies in supporting PS operations and their potential evolution.
A Review of Multilevel Inverter Based Active Power Filter
3
--- paper_title: A New Neutral-Point-Clamped PWM Inverter paper_content: A new neutral-point-clamped pulsewidth modulation (PWM) inverter composed of main switching devices which operate as switches for PWM and auxiliary switching devices to clamp the output terminal potential to the neutral point potential has been developed. This inverter output contains less harmonic content as compared with that of a conventional type. Two inverters are compared analytically and experimentally. In addition, a new PWM technique suitable for an ac drive system is applied to this inverter. The neutral-point-clamped PWM inverter adopting the new PWM technique shows an excellent drive system efficiency, including motor efficiency, and is appropriate for a wide-range variable-speed drive system. --- paper_title: Trends in active power line conditioners paper_content: Active power line conditioners, which are classified into shunt and series ones, have been studied with the focus on their practical installation in industrial power systems. In 1986, a combined system of a shunt active conditioner of rating 900 kVA and a shunt passive filter of rating 6600 kVA was practically installed to suppress the harmonics produced by a large capacity cycloconverter for steel mill drives. More than one hundred shunt active conditioners have been operating properly in Japan. The largest one is 20 MVA, which was developed for flicker compensation for an arc furnace with the help of a shunt passive filter of 20 MVA. In this paper, the term of "active power line conditioners" is used instead of that of "active power filters" because active power line conditioners would cover a wider sense than active power filters. The primary intent of this paper is to present trends in active power line conditioners using PWM inverters, paying attention to practical applications. > --- paper_title: A general circuit topology of multilevel inverter paper_content: A generalized circuit topology of multilevel voltage source inverters which is based on a direct extension of the three-level inverter to higher level is proposed. The circuit topologies up to five-level are presented. The proposed multilevel inverter can realize any multilevel pulsewidth modulation (PWM) scheme which leads to harmonic reduction and provides full utilization of semiconductor devices like GTOs, especially in the high power range where high voltage can be applied. The capacitor voltage balancing problem is discussed and a circuit remedy for such a problem is given. 
--- paper_title: Imbricated Cells Multi-Level Voltage-Source Inverters for High Voltage Applications paper_content: In the field of High Voltage Power Conversion, various techniques have been developed to use series-connected switches. Plain series connection of switches is the first solution and its drawbacks are now well-known (static and dynamic voltage sharing difficulties that require selecting paired switches or using sophisticated control techniques, high dV/dts generated by the synchronous commutation of all the switches, output waveform that does not benefit from the increased number of switches...). The “neutral point clamped” technique introduced in the early 80s improves voltage sharing and dV/dts, and gives a three-level output waveform. More recently a new multilevel topology has been introduced; compared to former techniques, it really solves the problem of voltage sharing and dV/dts, and gives a three-level output waveform with cancellation of the harmonic at the switching frequency. In this paper, it is shown that this technique can be easily generalized to voltage-source inverter legs with any num... --- paper_title: Active Power Filter Based on Four-leg Hybrid-Clamped Technique paper_content: Parallel active power filter based on four-leg hybrid-clamped converter topology is proposed in this paper. The detailed description is given on the balancing principle of capacitors' voltages with four-leg hybrid-clamped technique. An independent leg operating as a clamping leg brings better performance and easy control. For levels larger than five, this topology is a better choice when the ability of voltage balance and simple control are considered. The simulation and experiment of active power filter based on this topology show good performance of voltage balance and compensation. --- paper_title: Extended operation of flying capacitor multilevel inverters paper_content: Recent research in flying capacitor multilevel inverters (FCMIs) has shown that the number of voltage levels can be extended by changing the ratio of the capacitor voltages. For the three-cell FCMI, four levels of operation are expected if the traditional ratio of the capacitor voltages is 1:2:3. However, by altering the ratio, the inverter can operate as a five-, six-, seven-, or eight-level inverter. According to previous research, the eight-level case is referred to as maximally distended (or full binary combination schema) since it utilizes all possible transistor switching states. However, this case does not have enough per-phase redundancy to ensure capacitor voltage balancing under all modes of operation. In this paper, redundancy involving all phases is used along with per-phase redundancy to improve capacitor voltage balancing. It is shown that the four- and five-level cases are suitable for motor drive operation and can maintain capacitor voltage balance under a wide range of power factors and modulation indices. The six-, seven-, and eight-level cases are suitable for reactive power transfer in applications such as static var compensation. Simulation and laboratory measurements verify the proposed joint-phase redundancy control. --- paper_title: A power line conditioner based on flying capacitor multilevel voltage source converter with phase shift SPWM paper_content: In this paper, a new power line conditioning system is proposed. This system is constructed by a flying capacitor multilevel VSC (voltage source converter) and two reactors. 
The phase-shift SPWM (sinusoidal pulse width modulation) switching scheme is applied to control the switching devices of this converter. Due to this multilevel VSC and the switching scheme applied to this converter, the system is applicable to distribution systems or industrial applications. The reactive power compensation, harmonic suppression and load balancing functions of the power line conditioner are analyzed. A novel and effective startup procedure is proposed to start up the system. System simulation is carried out to verify the theoretical analysis results. --- paper_title: An active power filter implemented with a three-level NPC voltage-source inverter paper_content: This paper presents an active power filter implemented using a three-level neutral point-clamped voltage-source inverter. The active power filter can compensate current harmonics and reactive power in medium voltage distribution systems. The paper presents the principles of operation and design criteria for both the power and control circuits. Finally, the viability of the proposed scheme is shown with computer simulation using Matlab. --- paper_title: Multilevel inverter, based on multi-stage connection of three-level converters scaled in power of three paper_content: A multi-stage inverter using three-state converters is being analyzed for multipurpose applications, such as active power filters, static VAr compensators and machine drives for sinusoidal and trapezoidal current applications. The great advantage of this kind of converter is the minimum harmonic distortion obtained. The drawbacks are the isolated power supplies required for each one of the stages of the multiconverter. In this paper this problem has been overcome by using isolated, bidirectional DC power supplies, which are fed from a common power source. This solution becomes practical because only one converter of the chain, called Master, takes more than 80% of the total active power required by the system. The rest of the converters, called "Slaves", need very low power, and then those DC supplies are small. Another configuration with common DC supply and output transformers is displayed, and simulation results for different applications are shown and compared with similar results obtained with conventional PWM converters. The control of this multi-converter is being implemented using DSP controllers, which give flexibility to the system. --- paper_title: Multilevel converters for large electric drives paper_content: Traditional two-level high-frequency pulse width modulation (PWM) inverters for motor drives have several problems associated with their high frequency switching which produces common-mode voltage and high voltage change (dV/dt) rates to the motor windings. Multilevel inverters solve these problems because their devices can switch at a much lower frequency. Two different multilevel topologies are identified for use as a power converter for electric drives: a cascade inverter with separate DC sources; and a back-to-back diode clamped converter. The cascade inverter is a natural fit for large automotive all-electric drives because of the high VA ratings possible and because it uses several levels of DC voltage sources which would be available from batteries or fuel cells. The back-to-back diode clamped converter is ideal where a source of AC voltage is available such as a hybrid electric vehicle. Simulation and experimental results show the superiority of these two power converters over PWM-based drives. 
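As a compact reminder of how the classic topologies cited above scale, the relations below give the per-phase component counts usually quoted for an m-level inverter and the level counts of equal-source and 1:3-scaled cascades; they are the commonly cited figures rather than results of any single paper listed here.

```latex
% Commonly quoted per-phase-leg counts for an m-level inverter and level
% counts for cascaded H-bridge chains (s equal cells, K cells scaled 1:3).
\begin{align}
  \text{switches per leg}                 &= 2(m-1), \\
  \text{clamping diodes (diode-clamped)}  &= (m-1)(m-2), \\
  \text{flying capacitors per leg}        &= \tfrac{1}{2}(m-1)(m-2), \\
  \text{H-bridges per phase (cascade)}    &= \tfrac{m-1}{2}, \qquad m = 2s+1, \\
  \text{levels of a 1:3-scaled cascade}   &\le 3^{K}.
\end{align}
```

For example, two cells with DC links scaled 1:3 give up to 3^2 = 9 levels, which is the nine-level hybrid cascade used by one of the shunt active filter entries in this list.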
--- paper_title: A hybrid multilevel inverter for shunt active filter using space-vector control paper_content: In this paper, a current active power filter using a hybrid asymmetric multilevel inverter is presented and analyzed. The control proposed is based on a vector-control technique, generating an optimized switching pattern. The hybrid multilevel inverter increases the voltage levels number, reducing the harmonics associated to the commutation frequency. The hybrid multilevel inverter is structured by two cascaded-inverters, and the DC link voltage between the stages has the relation 1:3, generating 9 voltage levels. The signal reference is generated using a variant of the single-phase d-q Theory, allowing a fast transient response and stability. Simulated waveforms prove the viability of the control scheme and the multilevel active current filter proposed. --- paper_title: A novel control algorithm for cascade shunt active power filter paper_content: In this paper a novel control scheme of cascade shunt active power filter (CSAPF) is presented. This work is motivated by the fact that the compensation ability of traditional active power filter is limited by the voltage capability of the power devices. While the converter with cascade topology has been put into practical use for years especially in high voltage and high power drives. The application of such converter to active power filter is hindered by the problem of voltage unbalance of separate DC capacitors that leads to system instability. This paper deals with the design and implementation of a new digital controller for CSAPF. Finally experimental results obtained from a laboratory system developed in this paper are presented to verify the viability and effectiveness of the proposed control algorithm. --- paper_title: Hybrid Control Scheme for a Single-Phase Shunt Active Power Filter Based on Multilevel Cascaded Inverter paper_content: This paper presents a hybrid control scheme for a single-phase shunt active power Alter (SAF) based on a multilevel cascaded inverter. The multilevel-cascaded inverter has better characteristics than other topologies. In this topology, extra capacitors and clamping diodes are not needed to reproduce the same number of levels that other multilevel topologies. In order to improve the time response and behavior of SAF, a Passivity- based controller (PBC) is proposed and designed in this work. The focus of this controller is to increase the operation region of the shunt active Alter with respect to linear controller designed around an operation point. The current controller is performed by PBC and the voltage regulations are carried out by PIs. The proposed control scheme is verified by simulation and experimental tests. --- paper_title: Dynamic performance and control of a static VAr generator using cascade multilevel inverters paper_content: A cascade multilevel inverter is proposed for static VAr compensation/generation applications. The new cascade M-level inverter consists of (M-1)/2 single-phase full bridges in which each bridge has its own separate DC source. This inverter can generate an almost sinusoidal waveform voltage with only one time switching per cycle. It can eliminate the need for transformers in multipulse inverters. A prototype static VAr generator (SVG) system using an 11-level cascade inverter (21-level line-to-line voltage waveform) has been built. The output voltage waveform is equivalent to that of a 60-pulse inverter. 
This paper focuses on the dynamic performance of the cascade inverter based SVG system. Control schemes are proposed to achieve a fast response which is impossible for a conventional static VAr compensator (SVC). Analytical, simulation and experimental results show the superiority of the proposed SVG system. --- paper_title: Cascade multilevel inverters for utility applications paper_content: Cascade multilevel inverters have been developed by the authors for utility applications. A cascade M-level inverter consists of (M-1)/2 H-bridges in which each bridge has its own separate DC source. The new inverter: (1) can generate almost sinusoidal waveform voltage while only switching one time per fundamental cycle, (2) can eliminate transformers of multipulse inverters used in conventional utility interfaces and static VAr compensators, and (3) makes possible direct parallel or series connection to medium- and high-voltage power systems without any transformers. In other words, the cascade inverter is much more efficient and suitable for utility applications than traditional multipulse and pulse width modulation (PWM) inverters. The authors have experimentally demonstrated the superiority of the new inverter for reactive power (VAr) and harmonic compensation. This paper summarizes features, feasibility, and control schemes of the cascade inverter for utility applications including utility interface of renewable energy, voltage regulation, VAr compensation, and harmonic filtering in power systems. Analytical, simulated, and experimental results demonstrate the superiority of the new inverters. --- paper_title: A power line conditioner using cascade multilevel inverters for distribution systems paper_content: A power line conditioner (PLC) using a cascade multilevel inverter is presented for voltage regulation, reactive power (VAr) compensation and harmonic filtering in this paper. The cascade M-level inverter consists of (M-1)/2 H-bridges in which each bridge has its own separate DC source. This new inverter: (1) can generate almost an sinusoidal waveform voltage with only one time switching per line cycle; (2) can eliminate transformers of multipulse inverters used in the conventional static VAr compensators; and (3) makes possible direct connection to the 13.8 kV power distribution system in parallel and series without any transformer. In other words, the power line conditioner is much more efficient and more suitable to VAr compensation and harmonic filtering of distribution systems than traditional multipulse and pulse width modulation (PWM) inverters. It has been shown that the new inverter is specially suited for VAr compensation. This paper focuses on feasibility and control schemes of the cascade inverter for voltage regulation and harmonic filtering in distribution systems. Analytical, simulated and experimental results show the superiority of the new power line conditioner. --- paper_title: Hybrid multilevel power conversion system: a competitive solution for high power applications paper_content: Use of multilevel inverters is becoming popular in recent years for high power applications. Various topologies and modulation strategies have been investigated for utility and drive applications in literature. Trends in power semiconductor technology indicate a trade-off in the selection of power devices in terms of switching frequency and voltage sustaining capability. 
New power converter topologies permit modular realization of multilevel inverters using a hybrid approach involving integrated gate commutated thyristors (IGCT) and insulated gate bipolar transistors (IGBT) operating in synergism. This paper is devoted to the investigation of a hybrid multilevel power conversion system typically suitable for high performance, high power applications. This system designed for 4.16 kV, /spl ges/100 hp load comprises of a hybrid seven-level inverter, a diode bridge rectifier and an IGBT rectifier per phase. The IGBT rectifier is used on the utility side as a real power flow regulator to the low voltage converter and as a harmonic compensator for the high voltage converter. The hybrid seven-level inverter on the load side consists of a high voltage, slow switching IGCT inverter and a low voltage, fast switching IGBT inverter. By employing different devices under different operating conditions, it is shown that one can optimize the power conversion capability of entire system. A detailed analysis of a novel hybrid modulation technique for the inverter, which incorporates stepped synthesis in conjunction with variable pulse width of the consecutive steps is included. In addition, performance of a multilevel current regulated delta modulator as applied to the single phase full bridge IGBT rectifier is discussed. Detailed computer simulations accompanied with experimental verification are presented in the paper. --- paper_title: A Generalized Design Principle of a Uniform Step Asymmetrical Multilevel Converter For High Power Conversion paper_content: This paper is focused on a general design principle of a uniform step multilevel converter, with K series-connected full bridges inverters per phase. A new design terminology is proposed and analytical relationships are established. The DC-voltage sources supplying partial inverters are supposed to be rationally unbalanced. The corresponding “Asymmetrical” topology provides more flexibility to the designer, and can generate a large number of levels (any odd number from 2K+1 to 3K) without increasing the number of H-bridges. Simulation results and experimental tests shown the reliability of the design approach suggested. --- paper_title: DC bus ripple minimization in cascaded H-bridge multilevel converters under staircase modulation paper_content: Cascade connected H bridge multilevel converters are becoming an attractive topology for very high power applications. Due the single phase loading of the individual bridges, the DC link capacitors required in these converters are faced with a heavy stress. The problem is particularly acute with stair-case type modulation strategies. This paper presents an approach for equalizing the power drawn from each H bridge within less than one cycle period with the aim of reducing capacitor sizing. The presented approach is based on rearranging the switching strategies focusing on the redundancies in synthesizing output voltage of H bridge multilevel converters. The proposed approach also holds nearly the same degree of freedom in selecting conduction angles and attains similar levels of harmonic performance as with conventional switching angle selection approaches. The analysis presented in this paper is also confirmed by simulation. --- paper_title: Minimizing network harmonic voltage distortion with an active power line conditioner paper_content: Active power line conditioner (APLC) is a type of active filter that compensates for power system waveform distortion. 
The objective is to develop and illustrate a procedure for calculating the APLC injection current needed to minimize voltage harmonic distortion throughout a power network. The procedure is intended for use with APLC frequency domain correction in networks that are experiencing periodic harmonic distortion. The injection currents are determined using nonlinear optimization theory. The chief contribution lies in developing a simple procedure for finding the Fourier series of the optimum APLC injection current waveform. The procedure is intended to apply in any of the following situations: (1) one single-phase APLC in a single-phase network; (2) one single-phase APLC in a three-phase network; or (3) one three-phase APLC in a balanced three-phase network. > --- paper_title: A study on the theory of instantaneous reactive power paper_content: A new definition of instantaneous reactive power is presented. This definition has a clear physical meaning that includes both the conventional instantaneous reactive power and the instantaneous power of a zero-phase component. A simple control algorithm for the active filter derived from the new definition is described. Simulations verified the control algorithm. > --- paper_title: Generalized instantaneous reactive power theory for three-phase power systems paper_content: A generalized theory of instantaneous reactive power for three-phase power systems is proposed in this paper. This theory gives a generalized definition of instantaneous reactive power, which is valid for sinusoidal or nonsinusoidal, balanced or unbalanced, three-phase power systems with or without zero-sequence currents and/or voltages. The properties and physical meanings of the newly defined instantaneous reactive power are discussed in detail. A three-phase harmonic distorted power system with zero-sequence components is then used as an example to show reactive power measurement and compensation using the proposed theory. --- paper_title: An active filter used for harmonic compensation and power factor correction: A control technique paper_content: The paper deals with the analysis of an active power filter with two legs for three-phase systems. Although the converter topology is not symmetric, a suitable control of the power switches allows the achievement of balanced currents. The control technique for the compensation of the harmonics and reactive power of the load is derived directly from the mathematical model of the converter. The reference signals of the currents are used for the space vector modulation (SVM) both for balanced and unbalanced voltages of the dc-link capacitors. Numerical simulations validate the proposed control showing the capabilities of the active filter in reducing the total harmonic distortion and reactive powers in supplying of the source currents. --- paper_title: A control structure for fast harmonics compensation in active filters paper_content: Shunt active filters are a means to improve power quality in distribution networks. Typically, they are connected in parallel to disturbing loads in order to reduce the injection of nonsinusoidal load currents into the utility grid. The high power active filter investigated in this paper is based on a PWM controlled voltage source inverter. Its inner current control is realized with a deadbeat controller that allows fast tracking of stochastically fluctuating load currents. 
For the mitigation of stationary load current harmonics, an outer control loop is required that compensates for the persistent phase error caused by the delay of the inner loop. The outer loop developed in this paper is based on integrating oscillators tuned to the major load current harmonics. Mathematically, they are equivalent to I-controllers in rotating reference frames. Some of these oscillators are located within a closed control loop. For frequencies where the feedback would excite grid resonances they are placed in a prefilter with phase shifting elements. Since all oscillators share a common feedback, full selectivity of the harmonic analysis is achieved. For every harmonic the degree of compensation can be adjusted individually. In addition to the oscillators, a direct path is provided that feeds forward the load current to the inner control loop. Thus, load current transients can be tracked with the full speed of the dead-beat controller. The direct path does not affect the harmonic analysis performed by the oscillators. --- paper_title: Instantaneous Reactive Power Compensators Comprising Switching Devices without Energy Storage Components paper_content: The conventional reactive power in single-phase or three-phase circuits has been defined on the basis of the average value concept for sinusoidal voltage and current waveforms in steady states. The instantaneous reactive power in three-phase circuits is defined on the basis of the instantaneous value concept for arbitrary voltage and current waveforms, including transient states. A new instantaneous reactive power compensator comprising switching devices is proposed which requires practically no energy storage components.
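The instantaneous reactive power entries above define shunt compensation in the αβ frame. The relations below restate the commonly used form for reference only; sign conventions for q and for the direction of the injected current differ between authors, so this is one consistent choice rather than the exact formulation of the cited papers.

```latex
% Clarke transform and instantaneous powers in the alpha-beta frame
% (one common sign convention).
\begin{align}
  \begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix}
    &= \sqrt{\tfrac{2}{3}}
       \begin{bmatrix} 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\[2pt]
                       0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix}
       \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}, \\[4pt]
  \begin{bmatrix} p \\ q \end{bmatrix}
    &= \begin{bmatrix} v_\alpha & v_\beta \\ -v_\beta & v_\alpha \end{bmatrix}
       \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix}, \qquad
  \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix}
     = \frac{1}{v_\alpha^2 + v_\beta^2}
       \begin{bmatrix} v_\alpha & -v_\beta \\ v_\beta & v_\alpha \end{bmatrix}
       \begin{bmatrix} p \\ q \end{bmatrix}.
\end{align}
```

A shunt compensator built on these relations typically substitutes for (p, q) only the components to be cancelled, for instance the oscillating part of p and all of q, and then transforms the resulting currents back to abc quantities as the filter reference.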
--- paper_title: Multilevel converters-a new breed of power converters paper_content: Multilevel voltage source converters are emerging as a new breed of power converter options for high-power applications. The multilevel voltage source converters typically synthesize the staircase voltage wave from several levels of DC capacitor voltages. One of the major limitations of the multilevel converters is the voltage unbalance between different levels. The techniques to balance the voltage between different levels normally involve voltage clamping or capacitor charge control. There are several ways of implementing voltage balance in multilevel converters. Without considering the traditional magnetic coupled converters, this paper presents three recently developed multilevel voltage source converters: (1) diode-clamp, (2) flying-capacitors, and (3) cascaded-inverters with separate DC sources. The operating principle, features, constraints, and potential applications of these converters are discussed. --- paper_title: Multilevel converters for large electric drives paper_content: Traditional two-level high-frequency pulse width modulation (PWM) inverters for motor drives have several problems associated with their high-frequency switching, which produces common-mode voltage and high rates of voltage change (dV/dt) at the motor windings. Multilevel inverters solve these problems because their devices can switch at a much lower frequency. Two different multilevel topologies are identified for use as a power converter for electric drives: a cascade inverter with separate DC sources, and a back-to-back diode-clamped converter. The cascade inverter is a natural fit for large automotive all-electric drives because of the high VA ratings possible and because it uses several levels of DC voltage sources which would be available from batteries or fuel cells. The back-to-back diode-clamped converter is ideal where a source of AC voltage is available, such as in a hybrid electric vehicle. Simulation and experimental results show the superiority of these two power converters over PWM-based drives. --- paper_title: Multicarrier PWM strategies for multilevel inverters paper_content: Analytical solutions of pulsewidth-modulation (PWM) strategies for multilevel inverters are used to identify that alternative phase opposition disposition PWM for diode-clamped inverters produces the same harmonic performance as phase-shifted carrier PWM for cascaded inverters, and hybrid PWM for hybrid inverters, when the carrier frequencies are set to achieve the same number of inverter switch transitions over each fundamental cycle. Using this understanding, a PWM method is then developed for cascaded and hybrid inverters to achieve the same harmonic gains as phase disposition PWM achieves for diode-clamped inverters. Theoretical and experimental results are presented in the paper. --- paper_title: Study on shunt active power filter based on cascade multilevel converters paper_content: A shunt active power filter (APF) based on cascade multilevel converters with carrier phase-shifted sinusoidal pulse width modulation (CPS-SPWM) is presented. The main sections of the shunt APF, such as the switching modulation strategy, harmonic and reactive current detection, AC-side control, and DC-side control, are discussed. Simulation results are given that verify this APF can compensate harmonic and reactive currents correctly and effectively.
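As a hedged illustration of the carrier phase-shifted SPWM idea referenced in the abstract above, the following minimal sketch generates the multilevel output of one cascaded H-bridge phase leg. The function name, the unipolar per-cell modulation, and the pi/N carrier shift are illustrative assumptions rather than details taken from the cited paper.

```python
import numpy as np

def chb_cps_spwm(t, m, f_ref, f_car, n_cells):
    """Carrier phase-shifted SPWM for one phase of a cascaded H-bridge.

    Returns the normalized multilevel output (in units of one cell's DC
    voltage), which takes integer values between -n_cells and +n_cells.
    Each cell uses unipolar modulation with its triangular carrier shifted
    by pi/n_cells from its neighbour, the arrangement commonly cited as
    pushing the dominant switching harmonics to roughly 2*n_cells*f_car.
    """
    ref = m * np.sin(2.0 * np.pi * f_ref * t)          # shared sinusoidal reference
    total = np.zeros_like(t)
    for k in range(n_cells):
        phase = k * np.pi / n_cells                    # per-cell carrier phase shift
        carrier = (2.0 / np.pi) * np.arcsin(np.sin(2.0 * np.pi * f_car * t + phase))
        leg_a = (ref >= carrier).astype(float)         # left leg of the H-bridge
        leg_b = (-ref >= carrier).astype(float)        # right leg (inverted reference)
        total += leg_a - leg_b                         # each cell contributes -1, 0 or +1
    return total

# Example: a 7-level waveform from 3 cells, 50 Hz reference, 1 kHz carriers.
t = np.linspace(0.0, 0.02, 20000)
v_pu = chb_cps_spwm(t, m=0.9, f_ref=50.0, f_car=1000.0, n_cells=3)
```

Summing the three cell outputs yields a seven-level waveform, which is the usual motivation for pairing the cascaded topology with phase-shifted carriers in shunt APF applications.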
--- paper_title: Optimized modulation techniques for the generalized N-level converter paper_content: A balancing control strategy that allows the voltage differences among the DC link capacitors of the generalized n-level power converter to be minimized is presented. The case n=3 is treated, but the technique can be generalized to larger n values. The balancing algorithm does not achieve correct voltage sharing of the capacitors under all operating conditions, but it provides a great improvement. This strategy appears to be very promising in single-phase applications, for which nonredundant switching configurations do not affect the capacitor voltage balance. --- paper_title: A new modulation strategy for improved DC bus utilization in hard and soft switched multilevel inverters paper_content: Multilevel inverters provide an attractive solution for high-power, high-voltage applications. While the multilevel topology permits higher voltages using devices of lower ratings, the DC link voltage balancing problem is a serious drawback which limits the applicability of multilevel topologies for motor drives. So far, redundant state selection has been the only method for regulating the capacitor voltages, which results in a limited voltage capability. A new control strategy that improves the DC bus utilization while generating low-THD voltages is introduced. The proposed scheme is independent of the type of multilevel inverter topology and can be used for both hard- and soft-switching inverters. Simulation results are presented to demonstrate the validity of the control scheme. --- paper_title: Seven-level shunt active power filter paper_content: In this work, the seven-level cascaded-type inverter is used as a shunt active power filter to exploit multilevel inverter advantages. The capacitor voltage control technique used as a harmonic current extraction method for the two-level inverter is extended to the seven-level shunt active power filter. A predictive current controller based on the supply current (not the active filter current) is applied. Phase-shifted space vector modulation for the multilevel inverter is used as the PWM technique. The proposed seven-level shunt active power filter is validated by simulation. --- paper_title: A new predictive control strategy for active power filters paper_content: A new control strategy for active power filters based on dynamic programming control is presented. This method replaces an active power filter with an ideal controllable current source whose waveform is optimally determined beforehand. This new control technique is very suitable for suppressing harmonics of nonlinear loads. In addition, other parameters such as power factor can also be controlled. --- paper_title: Space vector pulse width modulation of three-level inverter extending operation into overmodulation region paper_content: Multilevel voltage-fed inverters with space vector pulse width modulation have established their importance in high-power, high-performance industrial drive applications. The paper proposes an overmodulation strategy of space vector PWM of a three-level inverter with a linear transfer characteristic that extends easily from the undermodulation strategy previously developed by the authors for neural network implementation. The overmodulation strategy is very complex because of the large number of inverter switching states, and it is hybrid in nature, incorporating both undermodulation and overmodulation algorithms.
The paper systematically describes the algorithm development, system analysis, DSP-based implementation, and an extensive evaluation study to validate the modulator performance. The modulator takes the command voltage and angle information at the input and generates symmetrical PWM waves for the three phases of an IGBT inverter that operates at a 1.0 kHz switching frequency. The switching states are distributed such that the neutral point voltage always remains balanced. An open-loop volts/Hz-controlled induction motor drive has been evaluated extensively by smoothly varying the voltage and frequency over the whole speed range, covering both the undermodulation and overmodulation (nearest to square-wave) regions, and performance was found to be excellent. The PWM algorithm can be easily extended to a vector-controlled drive. The algorithm development is again fully compatible with implementation by a neural network. --- paper_title: A two-level inverter based SVPWM algorithm for a multilevel inverter paper_content: Implementation of space vector modulation for multilevel inverters is complex and computationally intensive due to the difficulty in determining the location of the reference vector, calculating the on-times, and determining the switching state vectors. This paper proposes a simple space vector PWM algorithm for a multilevel inverter based on the standard two-level inverter, and its implementation. Since the proposed multilevel space vector modulation method uses the basic two-level modulation to calculate the on-times, the computation process for an n-level inverter becomes simpler and easier. The main advantage of the proposed methodology is that it uses a simple mapping process to achieve the multilevel space vector modulation. The effectiveness of the algorithm has been verified by simulation and experiments on a three-level inverter. --- paper_title: Optimized space vector switching sequences for multilevel inverters paper_content: Previous work has shown that space vector modulation and carrier modulation for two-level inverters achieve the same phase-leg switching sequences when appropriate zero-sequence offsets are added to the reference waveforms for carrier modulation. This paper presents a similar equivalence between the phase disposition (PD) carrier and space vector modulation strategies applied to diode-clamped, cascaded N-level, or hybrid multilevel inverters. By analysis of the time-integral trajectory of the converter voltage, the paper shows that the optimal harmonic profile for a space vector modulator occurs when the two middle space vectors are centered in each switching cycle. The required zero-sequence offset to achieve this centering for an equivalent carrier-based modulator is then determined. The results can be applied to any multilevel converter topology without differentiation. Discontinuous behavior is also examined, with the space vector and carrier-based modulation methods shown to produce identical performance. Both simulation and experimental results are presented. ---
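The two-level-based SVPWM approach summarized in the references above reduces the multilevel problem to the familiar two-level dwell-time calculation. A minimal sketch of that two-level step is given below, assuming conventional sector numbering and the standard dwell-time formulas; the function name and example values are illustrative and not taken from the cited papers.

```python
import numpy as np

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Sector identification and dwell times for conventional two-level SVPWM.

    v_ref : magnitude of the reference voltage vector (V)
    theta : electrical angle of the reference vector (rad)
    v_dc  : DC-link voltage (V)
    t_s   : switching (sampling) period (s)

    Returns (sector, t1, t2, t0): the 1-based sector, the dwell times of the
    two adjacent active vectors, and the total zero-vector time.
    """
    theta = np.mod(theta, 2.0 * np.pi)
    sector = int(theta // (np.pi / 3.0)) + 1           # six 60-degree sectors
    alpha = theta - (sector - 1) * np.pi / 3.0         # angle inside the sector
    k = np.sqrt(3.0) * v_ref * t_s / v_dc
    t1 = k * np.sin(np.pi / 3.0 - alpha)               # leading active vector
    t2 = k * np.sin(alpha)                             # lagging active vector
    t0 = t_s - t1 - t2                                 # shared by the two zero vectors
    return sector, t1, t2, t0

# Example: 230 V reference at 30 degrees, 600 V DC link, 10 kHz switching.
print(svpwm_dwell_times(v_ref=230.0, theta=np.pi / 6.0, v_dc=600.0, t_s=1e-4))
```

In a two-level-based multilevel scheme, these dwell times would then be mapped onto the appropriate multilevel switching states; that mapping step is not shown here.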
Title: A Review of Multilevel Inverter Based Active Power Filter
Section 1: INTRODUCTION
Description 1: This section introduces the importance of maintaining power quality in electrical systems, highlights the limitations of passive filters, and explains the advantages and challenges of active power filters, particularly in high voltage applications.
Section 2: TOPOLOGIES OF MULTILEVEL INVERTERS
Description 2: This section discusses the different topologies of multilevel inverters, including Diode Clamped, Flying Capacitor, and Cascade H-bridge, along with their configurations and operational principles.
Section 3: CONTROL STRATEGIES
Description 3: This section describes various control strategies for active power filters, detailing the steps involved in signal conditioning, estimation of compensating signals, and the generation of firing signals for switching devices.
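The "estimation of compensating signals" step named in Description 3 is often carried out with the instantaneous reactive power (p-q) approach covered by the references above. The sketch below is one hedged way to implement that step; the function name, the amplitude-invariant Clarke convention, and the one-cycle moving average used to extract the average power are assumptions made for illustration, not details taken from the cited papers.

```python
import numpy as np

def pq_compensation_reference(v_abc, i_abc, samples_per_cycle):
    """Reference-current extraction for a shunt active power filter using the
    instantaneous reactive power (p-q) approach.

    v_abc, i_abc      : arrays of shape (3, N) with phase voltages and load currents
    samples_per_cycle : samples in one fundamental period, used by the moving
                        average that extracts the DC part of the real power

    Returns the abc compensating-current references the filter should inject so
    that the source supplies only the average real power at unity power factor.
    """
    # Amplitude-invariant Clarke transform (abc -> alpha/beta)
    clarke = (2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                     [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0]])
    v_ab = clarke @ v_abc
    i_ab = clarke @ i_abc

    # Instantaneous real power in the alpha-beta frame
    p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]

    # Average (active) component of p via a one-cycle moving average
    window = np.ones(samples_per_cycle) / samples_per_cycle
    p_avg = np.convolve(p, window, mode="same")

    # Source currents that would carry only p_avg with zero reactive power
    denom = v_ab[0] ** 2 + v_ab[1] ** 2 + 1e-12        # guard against division by zero
    i_src = v_ab * (p_avg / denom)

    # The filter injects the difference between the load and desired source current
    i_comp_ab = i_ab - i_src

    # Inverse Clarke (alpha/beta -> abc, zero-sequence-free)
    inv_clarke = np.array([[1.0, 0.0],
                           [-0.5, np.sqrt(3.0) / 2.0],
                           [-0.5, -np.sqrt(3.0) / 2.0]])
    return inv_clarke @ i_comp_ab
```

The returned abc references would then feed the current controller and modulation stage, corresponding to the "generation of firing signals" step in the outline.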